AI-Generated Deepfakes Flood Social Media: Pornographic Images of Taylor Swift, Fake Joe Biden Robocalls


Taylor Swift, Joe Biden and teenagers are the latest victims of deepfakes

Nearly 500 videos referencing Taylor Swift were hosted on the leading deepfake site (file photo)

Washington:

AI-generated deepfakes have proliferated on social media this month, claiming a number of high-profile victims and elevating the risks of manipulated media into the public conversation ahead of a looming US election cycle.

Pornographic images of singer Taylor Swift, robocalls in the voice of US President Joe Biden and videos of dead children and teenagers detailing their own deaths have gone viral, but none of them were real.

Deceptive audio and images created with artificial intelligence are not new, but recent advances in artificial intelligence technology have made them easier to create and more difficult to detect. The spate of highly publicized incidents just weeks into 2024 has raised concerns about the technology among lawmakers and ordinary citizens.

“We are alarmed by reports of the circulation of fake images,” White House press secretary Karine Jean-Pierre said Friday. “We’re going to do what we can to address this issue.”

At the same time, the spread of fake AI-generated content on social media has offered a stress test to the platforms’ ability to control them. On Wednesday, explicit AI-generated images of Swift racked up tens of millions of views on X, the website formerly known as Twitter and owned by Elon Musk.

Although sites like X have rules against sharing synthetic and manipulated content, the posts depicting Swift took hours to be removed. One remained active for about 17 hours and drew more than 45 million views, according to The Verge, a sign that these images can go viral long before action is taken to stop them.

Breaking

Companies and regulators have a responsibility to stop the “perverse customer journey” of obscene manipulated content, said Henry Ajder, a researcher and artificial intelligence expert who has advised governments on legislation against deepfake pornography. There is a need, he said, to “identify how different stakeholders, whether search engines, tool providers, or social media platforms, can do a better job of creating friction in the process from when someone forms the idea to when they actually create and share the content.”

The Swift episode sparked fury among her legions of fans and others on X, sending the phrase “protect Taylor Swift” trending on the platform. It is not the first time the singer’s image has been used in explicit AI manipulations, though none had drawn this level of public outrage.

The top 10 deepfake websites hosted about 1,000 videos referencing “Taylor Swift” as of the end of 2023, according to a Bloomberg review. Internet users graft victims’ faces onto the bodies of porn performers, or offer paying clients the chance to “undress” victims using artificial intelligence technology.

Many of these videos are available through a quick Google search, which has been the main driver of traffic to deepfake websites, according to a 2023 Bloomberg report. While Google offers a form that allows victims to request the removal of deepfake content, many complain that the process resembles a game of whack-a-mole. At the time of Bloomberg’s report last year, a Google spokesperson said the Alphabet Inc. company designs its search ranking systems to avoid surprising people with unexpected, harmful or explicit content they don’t want to see.

Nearly 500 videos referencing Swift were hosted on the leading deepfakes site, Mrdeepfakes.com. In December, the site received 12.3 million visits, according to data from Similarweb.

Targeting women

“This case is horrific and certainly extremely distressing for Swift, but sadly it is not as groundbreaking as some might think,” Ajder said. “The ease of creating this content now is disturbing and affects women and girls, regardless of where they are in the world or their social status.”

As of Friday afternoon, there were still explicit AI-generated images of Swift on X. A spokesperson for the platform directed Bloomberg to the company’s existing statement, which said that non-consensual nudity is against its policy and that the platform is actively trying to remove such images.

Users of the popular AI image creator Midjourney are already leveraging at least one of the fake Swift images to create written prompts that can be used to generate more explicit AI images, according to requests in a Midjourney Discord channel reviewed by Bloomberg. Midjourney has a feature where people can upload an existing image to its Discord chat channel, where prompts are entered to tell the technology what to create, and it will generate text that can be used to produce another similar image through Midjourney or a similar service.

The result of that feature is on a public channel for any of the more than 18 million members of Midjourney’s Discord server to view, giving them the equivalent of tips and tricks for adjusting AI-generated pornographic images. As of Friday afternoon there were almost 2 million people active on the server.

Midjourney and Discord did not respond to requests for comment.

Increasing numbers

Amid the rise of artificial intelligence, the number of new deepfake porn videos has already increased ninefold since 2020, according to research by independent analyst Genevieve Oh. At the end of last year, the top 10 sites offering this content hosted 114,000 videos, among which Swift was already a common target.

“Whether it’s AI or real, it still harms people,” said Heather Mahalik Barnhart, a digital forensics expert who develops curriculum for the SANS Institute, a cyber education organization. With Swift’s images, “even if it’s fake, imagine the minds of her parents who had to see that; you know, when you see something, you can’t make it go away.”

Just days before the Swift images created a firestorm, a fake audio message from Biden had spread ahead of the New Hampshire presidential primary. Global disinformation experts said the robocall, which sounded like Biden telling voters to skip the primary, was the most alarming deepfake audio they had heard yet.

There are already concerns that deepfake audio or video could play a role in the upcoming election, fueled by how quickly content spreads on social media. The fake Biden message was dialed directly into people’s phones, leaving fewer opportunities for anyone to examine the call before it reached its audience.

“The New Hampshire primary gives us a first taste of the situation we have to face,” said Siwei Lyu, a professor at the University at Buffalo who specializes in deepfakes and digital media forensics.

Difficult to detect

Even on social media, there are currently no reliable detection capabilities, leaving a frustratingly roundabout process that relies on someone spotting a piece of content and being skeptical enough to trace it back to the source for confirmation. That is a more likely scenario for a prominent public figure like Swift or Biden than for a local official or private citizen. Even when companies identify and remove these videos, they spread so quickly that the damage is often already done.

A viral deepfaked video of Shani Louk, a victim of the October 7 terrorist attack on Israel, has racked up more than 7.5 million views on ByteDance Ltd.’s TikTok app since it was posted more than three months ago, even after Bloomberg flagged it to the company in a December story about the platform’s struggle to control AI-generated videos of dead victims, including children.

The video-sharing app has banned AI-generated content depicting private citizens or children, and says “appalling” or “disturbing” videos are also not allowed. As recently as this week, deepfake videos of dead children recounting details of their abuse and deaths were still appearing in users’ feeds and racking up thousands of views. TikTok removed the videos Bloomberg submitted when seeking comment, but as of Friday, dozens of videos and accounts that exclusively post this type of false and disturbing content remained active.

TikTok has said it is investing in detection technologies and working to educate users about the dangers of AI-generated content. Other social networks have expressed similar sentiments.

“You can’t respond to something, you can’t react to something – let alone regulate something – if you can’t detect it first,” Nick Clegg, president of global affairs at Meta Platforms Inc., which owns Facebook and Instagram, said at the World Economic Forum in Davos, Switzerland, earlier this month.

Few laws

There is currently no US federal law prohibiting deepfakes, including those of a pornographic nature. Some states have implemented deepfake porn laws, but their enforcement is inconsistent across the country, making it difficult for victims to hold creators accountable.

White House press secretary Jean-Pierre said Friday that the administration is working with artificial intelligence companies on voluntary efforts to watermark generated images so they are easier to identify as fake. Biden also named a task force to address online harassment and abuse, while the US Department of Justice created a hotline for victims of image-based sexual abuse.

Congress has begun discussing legislative measures to protect the voices of celebrities and artists from certain uses of AI. Those conversations include no protections for private citizens.

Swift has not made any public comments on the issue, including whether she will take legal action. If she chooses to do so, she might be in a position to take on that kind of challenge, said Sam Gregory, executive director of Witness, a nonprofit that uses ethical technology to highlight human rights abuses.

“In the absence of federal legislation, having a plaintiff like Swift who has the ability and willingness to pursue this using every means available to make a point – even if the likelihood of success is low or long-term – is the next step,” Gregory said.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated channel.)
