OpenAI blocks AI videos of Martin Luther King Jr.: NPR

The families of some deceased celebrities and public figures, including Martin Luther King Jr., have criticized OpenAI for allowing depictions of vulgar, unflattering or incriminating behavior on its Sora app.
Sora/Open AI/Annotation by NPR
OpenAI blocked users from making videos of Martin Luther King Jr. on its Sora app after the civil rights leader’s estate complained it was showing “disrespectful depictions.”

Since the company launched Sora three weeks ago, hyper-realistic videos of King making crude, offensive or racist remarks have exploded on social media, including fake videos of King stealing from a grocery store, speeding away from police and perpetuating racial stereotypes.
On Thursday evening, OpenAI and King’s estate released a joint statement saying that AI videos depicting King are blocked because the company is “strengthening safeguards for historical figures.”
OpenAI said it believed there were “strong free speech interests” in allowing users to create AI deepfakes of historical figures, but that estates should have ultimate control over how those likenesses are used.

The Sora app, which remains invite-only, has taken a shoot-first, aim-later approach to security guardrails, raising alarms among intellectual property lawyers, public figures and disinformation researchers.
When a person joins the app, they are asked to record a video of themselves from multiple angles and record themselves speaking. Users can control whether others can create deepfake videos of them, what Sora calls a “cameo.”
But the app allowed users to make videos of many celebrities and historical figures without explicit consent, including fake videos of Princess Diana, John F. Kennedy, Kurt Cobain, Malcolm X and many others.
Kristelia García, a professor of intellectual property law at Georgetown Law, said OpenAI’s action only after the King estate’s complaint is consistent with the company’s approach of “asking for forgiveness, not permission.”
“The AI industry seems to be evolving very quickly, and first to market seems to be the currency of the day (certainly relative to a contemplative, ethically minded approach),” García told NPR in an email.
She noted how right of publicity and defamation laws vary by state and may not always apply to deepfakes, meaning there could be “little legal downside to just letting things play out unless and until someone complains.”
The ability to control a person's likeness depends on where that person's estate is located, but some states have strong protections. In California, for example, a public figure's heirs or estate hold the rights to the likeness for 70 years after the celebrity's death.
In the days following the release of the Sora app, OpenAI CEO Sam Altman announced changes giving rights holders the option to opt in to having their image represented by AI, rather than allowing such representations by default.
Yet the families of some deceased celebrities and public figures have criticized OpenAI for allowing depictions of vulgar, unflattering or incriminating behavior.
After videos of Robin Williams flooded social media, Zelda Williams, the late actor’s daughter, asked the public to stop making videos of her father. “Please stop sending me AI videos of my dad,” she wrote in an Instagram post, adding that “this is NOT what he would want.”
Bernice King, the civil rights leader’s daughter, agreed, writing on X: “Please stop.”

Hollywood studios and talent agencies have also expressed concern that OpenAI unveiled the Sora app without obtaining consent from copyright holders.
It is a similar approach to how the company developed ChatGPT, which ingested masses of copyrighted content without approval or payment before eventually striking licensing deals with certain publishers. That strategy sparked a wave of copyright lawsuits.



