Microsoft adds new protections for designers after Taylor Swift deepfake debacle


Microsoft Designer

María Díaz/ZDNET

AI image generators have the potential to stimulate creativity and revolutionize content creation for the better. However, when used incorrectly, they can cause real harm through the spread of misinformation and reputational damage. Microsoft hopes to prevent further misuse of its generative AI tools by implementing new protections.

Last week, AI-generated deepfakes sexualizing Taylor Swift went viral on Twitter. The images were reportedly shared via 4chan and a Telegram channel where users share AI-generated images of celebrities created with Microsoft Designer.

Plus: This new iPhone app merges artificial intelligence with web search, saving you time and energy

Microsoft Designer is Microsoft's graphic design application that includes Image Creator, the company's AI image generator, which leverages DALL-E 3 to produce realistic images. The generator had guardrails that blocked inappropriate prompts explicitly mentioning nudity or public figures.

However, according to the report, users found loopholes, such as misspelling celebrity names or describing images in terms that were not explicitly sexual but still generated the same result.

Microsoft has now addressed these loopholes, making it impossible to generate celebrity images. I tried entering the prompt "Selena Gomez playing golf" into Image Creator and received an alert saying my prompt was blocked. I also tried misspelling her name and got the same alert.

Also: Microsoft adds Copilot Pro support to iPhone and Android apps

“We are committed to providing a safe and respectful experience for everyone,” a Microsoft spokesperson told ZDNET. “We continue to investigate these images and have strengthened our existing security systems to further prevent our services from being misused to help generate images like these.”

Additionally, the Microsoft Designer Code of Conduct explicitly prohibits the creation of adult or non-consensual intimate content, and violating that policy may result in complete loss of access to the service.

Also: The ethics of generative AI: how we can harness this powerful technology

Some users on the Telegram channel have already expressed interest in finding ways around these new protections, according to the report. This will likely remain a game of cat and mouse for some time, with bad actors finding and exploiting loopholes in generative AI tools, and the companies behind those tools rushing to close them.
