Microsoft Engineer Takes AI Image Generator Safety Concerns to FTC
Shane Jones, a Microsoft engineer of six years, has raised serious concerns about the safety of an AI-powered image generator, taking the matter to the Federal Trade Commission. Jones alleges that the company ignored his warnings about the dangers of Copilot Designer, an artificial intelligence tool that has been generating inappropriate and potentially harmful images.
Safety Concerns Raised by AI Image Generator
During safety evaluations, Jones discovered that Copilot Designer was producing unsettling imagery. According to his findings, the tool created visuals showing 'demons and monsters' in association with sensitive content including abortion rights and gun violence. The AI also generated depictions of women in violent contexts, drug use among minors, and other controversial scenes.
Copyright Infringement and Ethical Dilemmas
Further testing revealed that Copilot Designer was not limited to generic imagery; it also generated pictures of copyrighted characters in politically charged settings. For instance, it produced images of Elsa from Frozen depicted in contexts such as the Gaza Strip conflict. These images raise both copyright and ethical questions.
Continuous Warnings and Corporate Inaction
Jones has voiced his concerns about the AI tool since December, including in a post on LinkedIn. Microsoft's legal team objected, however, and the post was removed. Despite his repeated urging that Microsoft withdraw Copilot Designer until more robust safety measures are in place, the product remains publicly available.
Efforts to Improve Safety Standards
In light of the issues with Copilot Designer, Jones also wrote to US senators, especially after the tool produced explicit images of Taylor Swift. Microsoft CEO Satya Nadella acknowledged the seriousness of the problem and promised to bolster safety features. Google faced similar challenges when its AI image generator began creating controversial historical representations, prompting the company to temporarily halt the service.