X blocks searches for Taylor Swift after fake AI videos go viral


Elon Musk’s X has blocked searches for Taylor Swift after sexually explicit images of the pop star created with artificial intelligence were widely spread on the platform.

The incident is the latest example of how social media groups are scrambling to address so-called deepfakes: realistic images and audio, generated by artificial intelligence, that can be abused to portray prominent people in compromising or misleading situations without their consent.

Any search for terms like “Taylor Swift” or “Taylor AI” on X returned an error message for several hours over the weekend, after AI-generated pornographic images of the singer proliferated online in recent days. The change means that even legitimate content about one of the world’s most popular stars is harder to see on the site.

“This is a temporary action and is being done out of an abundance of caution as we prioritize safety on this issue,” said Joe Benarroch, head of business operations at X.

Swift has not commented publicly on the matter.

X was purchased for $44 billion in October 2022 by billionaire entrepreneur Musk, who cut resources dedicated to policing content and loosened the platform’s moderation policies, citing his ideals of free speech.

X’s use of this blunt moderation mechanism over the weekend comes as it and rivals Meta, TikTok and Google’s YouTube face increasing pressure to address abuse of increasingly realistic and easily accessible deepfake technology. A dynamic market of tools has emerged that allows anyone to use generative AI to create a video or image featuring the likeness of a celebrity or politician in a few clicks.

Although deepfake technology has been available for several years, recent advances in generative AI have made images easier to create and more realistic. Experts warn that fake pornographic images are one of the most common emerging abuses of deepfake technology, and also point to its increasing use in political disinformation campaigns during an election year around the world.

Responding to a question about the Swift images on Friday, White House press secretary Karine Jean-Pierre said the circulation of fake images was “alarming,” adding: “While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules.” She urged Congress to legislate on the issue.

On Wednesday, social media executives including X’s Linda Yaccarino, Meta’s Mark Zuckerberg and TikTok’s Shou Zi Chew will face questioning at a US Senate Judiciary Committee hearing on online child sexual exploitation, following growing concerns that their platforms do not do enough to protect children.

On Friday, X’s official security account said in a statement that posting “non-consensual nude (NCN) images” was “strictly prohibited” on the platform, which has a “zero tolerance policy towards such content.”

The account added: “Our teams are actively removing all identified images and taking appropriate action against the accounts responsible for posting them. We are closely monitoring the situation to ensure that any further violations are addressed immediately and the content is removed.”


A report by technology news site 404 Media found that the images appeared to originate from the anonymous message board 4chan and a group on the messaging app Telegram dedicated to sharing abusive AI-generated images of women, often made with a tool from Microsoft.

Microsoft said it was still investigating the images, but had “reinforced our existing security systems to prevent our services from being used to help generate images like these.”

Telegram did not immediately respond to requests for comment.
