The threat to artists is real. What is the solution?

Just a few days ago, disturbing Taylor Swift deepfakes flooded the social feeds of her fans. “X” marked the spot of ground zero. Surprising, I know, because Elon Musk cares so much about “safety,” even as apologist-in-chief Linda Yaccarino tries to appeal to advertisers.

A few days earlier, ahead of the New Hampshire primary, fake robocalls impersonating President Joe Biden flooded voters’ phones, urging them not to vote. And now a deepfake George Carlin “hosts” a new comedy special on YouTube, though no one from his estate gave any consent.

The transformative power, promises, and perils of generative AI are now sinking in across a wide swath of the entertainment community. These examples are just the canary in the coal mine for the threats and damage (commercial and reputational) we will see in the months and years to come.

So what can and should we all do about it now?

First of all, it is essential to be aware of all this and to monitor these events so that we understand them and can take action. Whatever role we play in the creative economy, we should also experiment with generative AI to understand what it can do. To be clear, generative AI will certainly enable us to do great things, several examples of which I gave in a previous column.

One notable area of opportunity is licensed AI-driven film and television dubbing, which enables widespread localized international distribution regardless of the original language. Los Angeles-based Flawless AI is a leader here. Its technology allows actors to speak multiple languages seamlessly, with their mouths perfectly matching their multilingual words. That means no subtitles are needed, which in turn maximizes distribution, audience receptivity, and monetization.

But on the other hand, we must also take steps to mitigate the danger of generative AI run amok. The examples above reflect blatant theft and abuse of the names, images, likenesses and voices of famous people, resulting in significant commercial and reputational damage. In the case of President Biden, the damage goes even further: it challenges democracy itself.

WIRED reported that the identity hacker behind those Biden fakes likely used artificial intelligence tools developed by Silicon Valley-based startup ElevenLabs, which recently raised $80 million in new financing at a $1.1 billion valuation to add Hollywood AI dubbing to its activities. After this report was made public, the company suspended the relevant user’s account. It also says all the right things about such abuse in its FAQ and terms and conditions. That’s where it notifies users that consent is required for “voice cloning” and indicates that it will remove content that crosses that line once it is notified about it.

George Carlin (HBO)

The Silicon Valley Playbook

That all sounds great, of course. But you could say this entire episode follows the typical Silicon Valley playbook. Top venture-backed tech startups create new enabling technologies that need engaging content from their users to become valuable. Their lawyers then write responsible-sounding policies that instruct those users to follow basic copyright and privacy rules. But those policies are frequently buried in boilerplate and trotted out as a defense when users violate them, violations the companies know are happening on their platforms.

And why not? These technology companies have an interest in not adding friction to the growth of their user base; growth makes them more valuable. Better to ask forgiveness than permission, right? In the case of ElevenLabs, I found plenty of discussion of the ease and quality of its “voice cloning” technology on the company’s home page, but essentially nothing about required consent. I contacted the company twice to correct any misunderstanding I may have had and to give it a chance to respond, but received no reply.

We saw an analogous episode almost two decades ago, when YouTube first launched and users happily uploaded millions of copyrighted videos. SNL’s “Lazy Sunday” rap parody video became the poster child for this new kind of abuse, and Viacom (now Paramount) filed suit. Google eventually swooped in to save the day for YouTube, buying the company and settling the litigation. Google’s valuation now sits at $1.75 trillion, while SNL owner Comcast NBCUniversal’s sits at about one-tenth of that, $187 billion, and that figure includes Comcast’s lucrative broadband business.

Perhaps part of ElevenLabs’ new $80 million haul should be invested in so-called “trust and safety” initiatives to minimize the risk of user abuse. It’s all a matter of will, of course. I’m sure brilliant tech engineers can find ways to prevent abuse if resources are allocated to that goal. As my favorite media technology pundit, Scott Galloway, would say, “It’s not about the realm of what’s possible. It’s about the realm of what’s profitable.” Money talks, and money reflects priorities. Growth for growth’s sake may be great for venture capitalists, but it’s certainly not always great for the artists and creators on whose backs big multi-billion-dollar tech companies are significantly built.

AI “forensics” technology already exists to combat deepfake abuse. One tantalizing recent example is Nightshade, which reportedly uses AI to fight AI. Another company that says it is focused on combating this type of abuse is Wolfsbane AI. Meanwhile, blockchain technology is apparently capable of “creating an immutable audit chain to see if consent was given.” That quote comes directly from prominent venture capitalist Chris Dixon of Andreessen Horowitz, who happens to be a major investor in ElevenLabs.
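The core idea behind Dixon’s “immutable audit chain” can be illustrated without any particular blockchain at all. Below is a minimal Python sketch of an append-only consent log in which each record includes the hash of the previous record, so any after-the-fact tampering breaks the chain. This is purely illustrative; the class, fields, and method names are hypothetical, and a production system would add signatures, distributed storage, and much more.

```python
import hashlib
import json
import time


class ConsentLedger:
    """Illustrative append-only log. Each record stores the previous
    record's hash, so altering any entry invalidates the whole chain."""

    GENESIS = "0" * 64  # placeholder "previous hash" for the first record

    def __init__(self):
        self.records = []

    def _hash(self, body: dict) -> str:
        # Canonical serialization (sorted keys) so the hash is stable.
        payload = json.dumps(body, sort_keys=True).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()

    def grant_consent(self, artist: str, licensee: str, use: str) -> dict:
        prev_hash = self.records[-1]["hash"] if self.records else self.GENESIS
        body = {
            "artist": artist,
            "licensee": licensee,
            "use": use,  # e.g. "voice cloning for dubbing"
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        record = dict(body, hash=self._hash(body))
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash and check each chain link; returns False
        if any record was modified after it was appended."""
        prev = self.GENESIS
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if rec["prev_hash"] != prev or self._hash(body) != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

The design point is simply that verification is cheap for anyone holding the log, while silently rewriting history is detectable: changing one field of one old record changes its hash, which no longer matches either the stored hash or the next record’s `prev_hash`.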

Given these latest (but certainly not isolated) disturbing fakes, does anyone really believe we have enough guardrails in place to protect people and their livelihoods, including artists in the creative community?

What we need here, in addition to self-policing by AI technology developers and AI tools used to fight AI abuse, is direct dialogue with Hollywood and the creative community. Only then can mutually beneficial ground rules and proper economics be established. Flawless understands this precisely because its top executives come from Hollywood and the creative community itself. Silicon Valley entrepreneurs and venture capitalists, take note.

We also need tougher criminal penalties, and greater visibility and advocacy, around the non-consensual abuse of names, images, likenesses and voices. While California has statutes addressing such NIL issues, most states do not. Congress is considering national legislation right now to fix this patchwork, while SAG-AFTRA and the Human Artistry Campaign are training their focus on the issue.

All of us – the creative and tech communities together – must solve these problems one way or another right now before generative AI generates rewards only for itself and at the expense, literally, of the creative community.

Contact Peter at peter@creativemedia.biz. For those of you interested in learning more, sign up for his “Fearless Media” newsletter, visit his firm Creative Media at creativemedia.biz, and follow him on Threads @pcsathy.

The post “Taylor Swift and AI ‘Deep Fakes’: The Threat to Artists Is Real. What Is the Solution?” appeared first on TheWrap.
