X pauses searches for Taylor Swift due to explicit deepfake images

Elon Musk’s X social media platform has blocked some searches for Taylor Swift after pornographic deepfake images of the singer circulated online.

Attempts to search for her name without quotation marks on the site on Monday produced an error message and a prompt for users to retry their search, adding: “Don’t worry, it’s not your fault.”

However, putting quotation marks around her name allowed posts mentioning her to appear.

Fake sexually explicit and abusive images of Swift began circulating widely last week on X, making her the most famous victim of a scourge that tech platforms and anti-abuse groups have struggled to fix.

“This is a temporary action and is being done out of an abundance of caution as we prioritize safety on this issue,” Joe Benarroch, X’s head of business operations, said in a statement.

Unlike the more conventional manipulated images that have troubled celebrities in the past, Swift’s images appear to have been created using an artificial intelligence image generator that can instantly produce new images from a written prompt.

After the images began spreading online, the singer’s devoted fan base of “Swifties” quickly mobilized, launching a counteroffensive on X and using the #ProtectTaylorSwift hashtag to flood the platform with more positive images of the pop star. Some said they were reporting accounts that shared the deepfakes.

Deepfake detection group Reality Defender said it tracked a flood of non-consensual pornographic material depicting Swift, particularly on X, formerly known as Twitter. Some images also made their way to Meta-owned Facebook and other social media platforms.

The researchers found at least a couple of dozen unique AI-generated images. The most widely shared were football-related, featuring a painted or bloodied Swift in images that objectified her and, in some cases, depicted violent harm to her deepfake persona.

Swift’s images first emerged from an ongoing campaign that began last year on fringe platforms to produce AI-generated, sexually explicit images of famous women, said Ben Decker, founder of threat intelligence group Memetica. One of the images of Swift that went viral last week appeared online on Jan. 6, he said.

Most commercial AI image generators have safeguards to prevent abuse, but commenters on anonymous message boards discussed tactics for bypassing moderation, particularly in Microsoft Designer’s text-to-image tool, Decker said.

Microsoft said in a statement Monday that it “will continue to investigate these images and has strengthened our existing security systems to further prevent our services from being misused to help generate images like these.”

Decker said “it’s part of a long-standing adversarial relationship between trolls and platforms.”

“As long as platforms exist, trolls will try to disrupt them,” he said. “And as long as trolls exist, the platforms will be disrupted. So the question really is: how many more times is this going to happen before there is any serious change?”

X’s decision to reduce searches for Swift is likely a stopgap measure.

“When you’re not sure where everything is and you can’t guarantee that everything has been removed, the simplest thing you can do is limit people’s ability to search for it,” he said.

Researchers have said that the number of explicit deepfakes has increased in recent years, as the technology used to produce such images has become more accessible and easier to use.

In 2019, a report published by artificial intelligence firm DeepTrace Labs showed that these images were overwhelmingly used as a weapon against women. Most of the victims, the report said, were Hollywood actors and South Korean K-pop singers.

In the European Union, several new laws include provisions for deepfakes. The Digital Services Act, which came into effect last year, requires online platforms to take measures to curb the risk of spreading content that violates “fundamental rights” such as privacy, including “non-consensual” images or deepfake pornography. The 27-nation bloc’s Artificial Intelligence Act, which still awaits final approval, will require companies that create deepfakes with artificial intelligence systems to also inform users that the content is artificial or manipulated.