Bryan Cranston has expressed gratitude to OpenAI for tightening safeguards on its generative AI video platform, Sora 2, after users were able to replicate his voice and likeness without permission.
The Breaking Bad star raised concerns through the actors’ union Sag-Aftra after discovering that Sora 2 users had generated videos featuring his likeness during the app’s recent launch. On October 11, The Los Angeles Times described a video featuring “a synthetic Michael Jackson taking a selfie video with an image of Breaking Bad star Bryan Cranston.”
OpenAI maintains that living individuals must consent or opt in to appear on Sora 2, claiming it has “measures to block depictions of public figures” and “guardrails intended to ensure that your audio and image likeness are used with your consent.”
However, reports from The Wall Street Journal, The Hollywood Reporter, and The Los Angeles Times said several Hollywood insiders were angered after OpenAI allegedly told talent agencies and studios that they would need to opt out rather than opt in to prevent likeness replication. OpenAI disputed these reports, clarifying that its policy was always designed to give public figures control over their likeness.
On Monday, Cranston issued a statement through Sag-Aftra, thanking OpenAI for “improving its guardrails” and preventing future misuse.
“I was deeply concerned not just for myself, but for all performers whose work and identity can be misused in this way,” Cranston said. “I am grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work respect our personal and professional right to manage replication of our voice and likeness.”
Two of Hollywood’s largest talent agencies, Creative Artists Agency (CAA) and United Talent Agency (UTA), which represents Cranston, have repeatedly voiced concerns over the risks posed by Sora 2 and other AI tools.
In a joint statement released Monday, OpenAI, UTA, CAA, Sag-Aftra, and the Association of Talent Agents described Cranston’s case as an error, pledging to collaborate on stronger protections to uphold actors’ “right to determine how and whether they can be simulated.”
“While from the start it was OpenAI’s policy to require opt-in for the use of voice and likeness, OpenAI expressed regret for these unintentional generations,” the statement read. “OpenAI has strengthened guardrails around replication of voice and likeness when individuals do not opt in.”
Sag-Aftra president Sean Astin praised Cranston for taking swift action.
“Bryan did the right thing by communicating with his union and his professional representatives to have the matter addressed,” Astin said. “This particular case has a positive resolution. I’m glad that OpenAI has committed to using an opt-in protocol, where all artists have the ability to choose whether they wish to participate in the exploitation of their voice and likeness using AI.”
Astin also highlighted the importance of legislative support, adding, “Simply put, opt-in protocols are the only way to do business, and the NO FAKES Act will make us safer,” referring to pending U.S. legislation that would ban unauthorized AI replicas of individuals.
OpenAI has publicly endorsed the NO FAKES Act, with CEO Sam Altman stating the company is “deeply committed to protecting performers from the misappropriation of their voice and likeness.”
While Sora 2 allows users to generate images of “historical figures,” broadly defined as famous deceased individuals, OpenAI has agreed to let representatives of recently deceased figures request removal. Earlier this month, the company said it had “worked together” with the estate of Martin Luther King Jr. to pause depictions of King as it “strengthens guardrails for historical figures.”
Concerns over AI recreations of deceased celebrities continue to grow. Zelda Williams, daughter of the late actor Robin Williams, recently urged the public to “please stop” sharing AI-generated videos of her father, while Kelly Carlin, daughter of comedian George Carlin, described AI recreations of her father as “overwhelming, and depressing.”
Legal experts warn that the inclusion of historical figures on generative AI platforms could be a way for companies to test the limits of what current laws permit.
Source: The Guardian