X blocks search for Taylor Swift after explicit AI images of her go viral


Social media platform X blocked searches for Taylor Swift after explicit AI-generated images of the singer began circulating on the site.

In a statement to the BBC, X’s head of commercial operations, Joe Benarroch, said this was a “temporary action” to prioritize safety.

Searching for Swift on the site brings up a message that says, “Something went wrong. Please try reloading.”

Graphic fake images of the singer appeared on the site earlier this week.

Some went viral and were viewed millions of times, causing alarm among US officials and the singer’s fans.

Her fans reported the posts and accounts sharing the fake images, and flooded the platform with real images and videos of her under the phrase "protect Taylor Swift."

The photos prompted X, formerly Twitter, to issue a statement on Friday, saying that posting non-consensual nudity on the platform is “strictly prohibited.”

“We have a zero-tolerance policy toward such content,” the statement said. “Our teams are actively removing all identified images and taking appropriate action against the accounts responsible for posting them.”

It’s unclear when X began blocking searches for Swift on the site, or if the site has blocked searches for other public figures or terms in the past.

In his email to the BBC, Benarroch said the action is being taken “with great caution as we prioritize safety on this issue.”

The matter caught the attention of the White House, which on Friday described the spread of the AI-generated photographs as “alarming.”

“We know that lax enforcement of laws disproportionately affects women and also girls, unfortunately, who are overwhelmingly targeted,” White House press secretary Karine Jean-Pierre said during a briefing.

She added that there should be legislation to address the misuse of AI technology on social media, and that platforms should also take their own measures to ban such content from their sites.

“We believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and intimate, non-consensual images of real people,” Jean-Pierre said.

American politicians have also called for new laws to criminalize the creation of deepfake images.

Deepfakes use artificial intelligence to fabricate video or images of someone by manipulating their face or body. A 2023 study found a 550% increase in the creation of doctored images since 2019, driven by the emergence of AI.

There are currently no US federal laws prohibiting the sharing or creation of deepfake images, although some states have taken steps to address the problem.

In the United Kingdom, sharing deepfake pornography became illegal under its Online Safety Act in 2023.