Technology

Microsoft Enhances AI Design Tool Security After Deepfake Controversy

Published January 30, 2024

Artificial intelligence (AI) can unlock creative potential and transform content creation, but it also poses significant risks: misused, these tools can spread misinformation and damage reputations. In light of recent events involving unauthorized deepfake content, Microsoft is reinforcing the security features of its AI tools.

Patching vulnerabilities

Notably, AI-generated deepfake images depicting Taylor Swift in a sexualized manner circulated widely on social media after first being shared on platforms such as 4chan and Telegram. These deepfakes were produced using Microsoft Designer, whose AI-powered image generator, Image Creator, is based on OpenAI's DALL-E 3 model and can create highly realistic images.

Previously, Image Creator's safeguards blocked explicit references to nudity and to public figures. Users bypassed them, however, by intentionally misspelling names and by writing suggestive descriptions that avoided overtly sexual language.

Strengthened security measures

In response, Microsoft has updated these protections to close the exploited loopholes. Attempts to generate celebrity images, even with misspelled names, are now blocked immediately. Microsoft emphasizes its commitment to a safe and respectful user experience and says it has strengthened its safety systems to prevent the creation of inappropriate images.
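Microsoft has not disclosed how its updated filters work. As an illustration only, the hypothetical sketch below shows one common moderation technique, fuzzy string matching against a blocklist, which can catch deliberately misspelled names where an exact-match filter would fail. The blocklist, threshold, and function names here are invented for this example and do not reflect Microsoft's actual implementation.

```python
import re
from difflib import SequenceMatcher

# Hypothetical blocklist; a production system would use a much larger,
# curated list of protected names.
BLOCKED_NAMES = ["taylor swift"]

def normalize(prompt: str) -> str:
    """Lowercase and strip punctuation and digit substitutions so
    'T4ylor_Sw1ft'-style evasions collapse toward the canonical spelling."""
    prompt = prompt.lower()
    prompt = prompt.replace("4", "a").replace("1", "i").replace("0", "o")
    return re.sub(r"[^a-z ]", " ", prompt)

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means the strings are identical."""
    return SequenceMatcher(None, a, b).ratio()

def is_blocked(prompt: str, threshold: float = 0.8) -> bool:
    """Return True if any word window of the normalized prompt fuzzily
    matches a blocked name, catching intentional misspellings."""
    words = normalize(prompt).split()
    for name in BLOCKED_NAMES:
        n = len(name.split())
        # Slide a window with the same word count as the blocked name.
        for i in range(len(words) - n + 1):
            candidate = " ".join(words[i:i + n])
            if similarity(candidate, name) >= threshold:
                return True
    return False

if __name__ == "__main__":
    print(is_blocked("portrait of Taylor Swift"))        # True: exact match
    print(is_blocked("portrait of Taylr Swfit"))         # True: fuzzy match
    print(is_blocked("portrait of a golden retriever"))  # False
```

An exact-match filter rejects "Taylor Swift" but passes "Taylr Swfit"; the fuzzy comparison scores the misspelling above the threshold and blocks it anyway, which is the class of loophole the update is described as closing.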

Alongside these technical restrictions, Microsoft's Designer Code of Conduct strictly prohibits generating adult content or content that portrays non-consensual intimate acts. Violations can lead to a complete ban from the service.

Some individuals are already looking for ways to circumvent these new security measures, suggesting an ongoing battle between those who wish to exploit generative AI tools and the developers who strive to secure them.

Microsoft, AI, protections