George Carlin’s Estate Settles Lawsuit Over AI Special


A deal has been reached between George Carlin’s estate and the creators of a podcast that used generative artificial intelligence to impersonate the late comedian’s voice and style for an unauthorized special.

Will Sasso and Chad Kultgen, hosts of the Dudesy podcast, and George Carlin’s estate notified the court Tuesday of an agreement to resolve the case. Under the agreement, a court order will be issued prohibiting further use of the now-deleted video, which was made in violation of the comic’s rights, says Josh Schiller, an attorney for the estate. No other terms of the deal were disclosed, and Schiller declined to comment on whether monetary damages were paid.

The settlement marks what is believed to be the first resolution to a lawsuit alleging misappropriation of a celebrity’s voice or likeness using artificial intelligence tools. It comes as Hollywood is sounding the alarm about technology being used to exploit the personal brands of actors, musicians and comedians, among others, without consent or compensation.

“This sends the message that you have to be very careful with the use of AI technology,” says Schiller, “and be respectful of people’s hard work and goodwill.” He adds that the agreement “will serve as a model for resolving similar disputes in the future where AI technology infringes on the rights of an artist or public figure.”

Author and producer Kelly Carlin, daughter of George Carlin, said in a statement that “this case serves as a warning about the dangers posed by artificial intelligence technologies and the need for adequate safeguards not only for artists and creatives, but for all human beings on earth.”

The legal battle arose from a one-hour special, titled George Carlin: I’m Glad I’m Dead, which launched in January on the podcast’s YouTube channel. In the episode, an AI-generated George Carlin, imitating the comedian’s signature style and cadence, narrates commentary over AI-created images and addresses modern topics such as the prevalence of reality TV, streaming services, and AI itself.

The podcast describes itself as the “first media experiment of its kind.” Its premise revolves around an artificial intelligence program called “Dudesy AI,” which supposedly has access to the hosts’ personal records, including text messages, social media accounts and browsing histories, and uses them to write episodes in the style of Sasso and Kultgen.

The podcasters approached George Carlin’s estate with an offer to remove the video and agree not to repost it on any platform in the future, Schiller says. He adds: “We wanted to put this behind us quickly and honor [Carlin’s] legacy and restore it by getting rid of this.”

The lawsuit alleged copyright infringement for the unauthorized use of the comedian’s copyrighted works.

At the beginning of the video, it is explained that the artificial intelligence program that created the special ingested five decades of George Carlin’s original routines, which are owned by the comedian’s estate, as training material.

The complaint also alleged violations of right of publicity laws for the use of George Carlin’s name and likeness. It pointed to the promotion of the special as an AI-generated George Carlin installment, in which the late comedian was “resurrected” with the use of AI tools.

The special wasn’t the first time Dudesy used AI to impersonate a celebrity. Last year, Sasso and Kultgen released an episode in which AI-generated Tom Brady performed a stand-up routine. It was removed after the duo received a cease and desist letter.

In the absence of federal laws covering the use of AI to imitate a person’s image or voice, a patchwork of state laws has filled the void. Still, there is little recourse for those in states that have not passed such protections, which has prompted lobbying from Hollywood.

That prompted a bipartisan coalition of House lawmakers to introduce a long-awaited bill in January to ban the publication and distribution of unauthorized digital replicas, including deepfakes and voice clones. The legislation aims to give individuals the exclusive right to approve the use of their image, voice and visual appearance by granting intellectual property rights at the federal level. Under the bill, unauthorized uses would be subject to harsh penalties and any person or group whose exclusive rights were affected could file lawsuits.

In March, Tennessee became the first state to pass legislation specifically aimed at protecting musicians from the unauthorized use of AI to imitate their voices without permission. The Ensuring Likeness of Voice and Image Safety Act, or ELVIS Act, builds on the state’s long-standing right of publicity law by adding an individual’s “voice” to the scope it protects. California has yet to update its statute.

Tuesday’s deal comes on the heels of OpenAI preparing to launch a new tool that can recreate a person’s voice from a 15-second recording. Provided with a recording and a piece of text, the tool can read that text back in the voice of the recording. The Sam Altman-led company said it will not release the technology broadly for now so it can better understand potential harms, such as the spread of misinformation and the impersonation of people to facilitate scams.

Amid the rise of AI voice imitation tools, there is debate over whether platforms that host infringing content should be subject to liability. Under the Digital Millennium Copyright Act, platforms such as YouTube may benefit from certain safe harbor provisions as long as they take certain steps to remove such potentially infringing content. Artist advocacy groups have called for revisions to the law.

“This is not a problem that will go away on its own,” Schiller said in a statement. “It must face swift and forceful action in court, and the AI software companies whose technology is being weaponized must also take some degree of responsibility.”