Deepfake images of Taylor Swift have emerged on social media: NPR

A photo illustration created last July shows an advertisement to create girls with artificial intelligence, reflected in a public service announcement issued by the FBI about malicious actors manipulating photos and videos to create explicit content and sextortion schemes. The rise of deepfake porn is outpacing efforts in the United States and Europe to regulate the technology.

Stefani Reynolds/AFP via Getty Images



A new crop of deepfake videos and images is causing a stir, a periodic phenomenon that appears to be happening with increasing frequency as several deepfake-focused bills remain in Congress.

The issue made headlines this week, as fake pornographic images purporting to show pop superstar Taylor Swift proliferated on X (formerly known as Twitter), Telegram and elsewhere. Many of the posts were deleted, but not before some of them racked up millions of views.

The assault on Swift’s famous image serves as a reminder of how much easier deepfakes have become to make in recent years. Several apps can map a person’s face onto other media with high fidelity, and the latest versions promise to use artificial intelligence to generate even more convincing images and videos.

Deepfakes usually target young women

Many deepfake apps are marketed as a way for everyday people to make funny videos and memes. But many of the end results don’t match that tone. As Caroline Quirk wrote in the Princeton Law Review last year, “since this technology has become more widely available, 90%-95% of deepfake videos are now nonconsensual pornographic videos and, of those videos, 90% target women — mostly minors.”

Deepfake porn was recently used against high school students in New Jersey and in Washington state.

At their core, these deepfakes are an attack on privacy, according to law professor Danielle Citron.

“It’s about transforming women’s faces into pornography, stealing their identities, coercing them into sexual expression and giving them an identity they didn’t choose,” Citron said last month on a podcast from the University of Virginia, where she teaches and writes about privacy, free expression and civil rights at the university’s law school.

Citron points out that deepfake images and videos are simply new forms of lies — something humanity has been dealing with for millennia. The problem, she says, is that these lies are presented in video form, which tends to affect people on a visceral level. And in the best deepfakes, the lies are concealed by sophisticated technology that is extremely difficult to detect.

We have seen moments like this coming. In recent years, deepfake videos depicting “Tom Cruise” in a variety of implausible settings have racked up hundreds of millions of views on TikTok and elsewhere. That project, created by videographer and visual effects artist Chris Umé and Cruise impersonator Miles Fisher, is fairly benign compared with many other deepfake campaigns, and the videos carry a watermark reading “#deeptomcruise,” signaling their unofficial status.

Deepfakes pose a growing challenge, with little regulation

The risk of harm caused by deepfakes is far-reaching, from the appropriation of women’s faces to make explicit sexual videos to the use of celebrities in unapproved promotions and the use of manipulated images in political disinformation campaigns.

The risks were highlighted years ago, notably in 2017, when researchers used what they called “a visual form of lip-syncing” to generate several strikingly realistic videos of former President Barack Obama speaking.

In that experiment, researchers combined authentic audio of Obama speaking with computer-manipulated video. But the effect was disconcerting, demonstrating the potential power of video to put words in the mouth of one of the most powerful people on the planet.

Here’s how a Reddit commenter on a deepfake video last year described the situation: “I think everyone is about to be scammed: the older people who think everything they see is real, and the young people who have seen so many deepfakes that they won’t believe anything they see is real.”

As UVA law professor Citron said last month: “I think it’s necessary to reintroduce law into the calculus, because right now ‘the Internet,’ and I’m using quotes, is often seen as the Wild West.”

So far, the strictest U.S. restrictions on the use of deepfakes are found not at the federal level but in states such as California, Virginia and Hawaii, which prohibit nonconsensual deepfake pornography.

But as the Brennan Center for Justice reports, those and other state laws set differing standards and target different types of content. At the federal level, the center said last month, at least eight bills seek to regulate deepfakes and similar “synthetic media.”

In addition to revenge porn and other crimes, many of the laws and proposals aim to impose special limits and requirements on videos related to political campaigns and elections. Some companies are acting on their own: last year, Google, and then Meta, announced that political ads would have to carry a label if they were made with AI.

And then there are the scams

Last month, visitors to YouTube, Facebook and other platforms saw video ads purporting to show Jennifer Aniston offering a too-good-to-be-true deal on Apple laptops.

“If you’re watching this video, you’re part of a lucky group of 10,000 people who have the chance to get the MacBook Pro for just $2,” the ersatz Aniston says in the ad. “I’m Jennifer Aniston,” she falsely declares in the video, urging viewers to click a link to claim their new computer.

A common goal of these types of scams is to trick people into signing up for expensive online subscriptions, as the website Malware Tips reported during a similar recent ploy.

Last October, actor Tom Hanks warned people that an AI version of his image was being used, apparently to sell dental insurance online.

“I have nothing to do with it,” Hanks said in an Instagram post.

Shortly afterward, CBS Mornings co-host Gayle King raised the alarm about a video purporting to show her promoting weight-loss gummies.

“Please don’t be fooled by these AI videos,” she said.
