Taylor Swift 'Furious' Over Deepfake AI Porn Images, May Take Legal Action: Report



Taylor Swift Deepfake: Images of the American pop star have spread on social networks (File)

New Delhi:

Fake, sexually explicit images of American megastar Taylor Swift, likely generated by artificial intelligence, spread rapidly across social media platforms this week, upsetting her fans. The deepfakes have once again revived calls from lawmakers to protect women and to crack down on the platforms and technology that spread these types of images.

An image of the singer was viewed 47 million times on X, formerly known as Twitter, before being removed on Thursday. According to American media, the post remained live on the platform for approximately 17 hours, the AFP news agency reported.

Taylor Swift is said to be “furious” that AI-generated nude images of her are circulating online and is also weighing possible legal action against the site responsible for generating the photos, according to a report in The New York Post.

“Whether legal action will be taken or not is being decided, but one thing is clear: these fake AI-generated images are abusive, offensive, exploitative and are done without Taylor’s consent and/or knowledge,” a source close to the 34-year-old pop star said.

“We must close the door on this. Legislation needs to be passed to prevent this,” the source added.

How has X responded?

In a statement, X said that “posting non-consensual nudity (NCN) images is strictly prohibited” on the platform.

The Elon Musk-owned platform said it was “actively removing all identified images and taking appropriate action against the accounts responsible for posting them.”

It was also “closely monitoring the situation to ensure that any further violations are addressed immediately and the content is removed.”

However, the images continued to be available and shared on Telegram.

Currently, US law gives tech platforms very broad protection from liability for content posted on their sites, and content moderation is largely voluntary or imposed indirectly through pressure from advertisers and app stores.

Many highly publicized cases of deepfake audio and video have targeted politicians or celebrities, with women by far the main targets.

According to research cited by Wired magazine, in the first nine months of 2023, 113,000 deepfake videos were uploaded to the most popular pornographic websites.
