OpenAI has paused all AI video generations depicting Martin Luther King Jr. on its Sora app.
The move follows uproar from King’s estate and daughter over “disrespectful” deepfake content, and signals a shift in how OpenAI handles likeness rights for deceased public figures.
Key Takeaways
- OpenAI suspends MLK portrayals on Sora amid backlash.
- Estates of historical figures can now request an opt-out.
- No U.S. federal law yet protects postmortem likeness; state laws vary.
- OpenAI has historically used an opt-out model for copyrighted material; that approach now extends to AI likenesses.
- The decision opens debate over legacy control vs. free expression.
OpenAI has paused Sora deepfake videos of Martin Luther King Jr. after his estate and daughter objected to offensive AI depictions. Going forward, representatives or estates of historical figures may request that their likenesses be excluded from Sora-generated content.
Why OpenAI paused MLK depictions
Late Thursday, OpenAI announced it had “paused generations depicting Dr. King” in its Sora video-generation feature, responding to complaints from his estate and daughter Bernice King about “disrespectful” AI videos.
Reports say some AI creations showed King in crude or mocking scenarios.
The company said it will now allow authorized representatives or estates of historical figures to request that their likenesses be excluded from Sora.
Sora’s evolution and prior controversies
OpenAI’s text-to-video model Sora launched in late 2024, followed by Sora 2 in September 2025.
Previously, OpenAI faced backlash for allowing copyrighted content by default unless rights holders opted out.
This latest move mirrors that approach, only now applied to likeness rights. OpenAI frames it as a balance: free speech supports depictions of historical figures, but families and estates should have a say in how their relatives are portrayed.
Legal and ethical fault lines
Unlike copyright, there is no unified federal law in the U.S. for controlling how a deceased person’s likeness is used.
Some states, such as California, have postmortem rights of publicity or privacy that could apply to AI replicas.
Legal experts warn that OpenAI’s opt-out policy is reactive and may not scale. Debates over Section 230, AI liability, and legacy control loom ahead.
Fallout and reactions
Bernice King had publicly urged people to stop circulating AI videos of her father.
The controversy joins a string of family objections to AI reproductions: Zelda Williams, daughter of Robin Williams, has decried unauthorized AI depictions of her father.
In contrast, some public figures have embraced Sora use. Mark Cuban, for example, allowed his likeness to be used, provided it promoted his company.
The bigger picture: legacy, control, and AI
The MLK case highlights a central tension: who controls historical memory in an AI era? As AI video-generation tools become more widely available, the line between respectful tribute and mockery grows thinner.
OpenAI’s opt-out policy for estates is a step forward, but critics argue it leaves too much room for misuse before content is removed.
What’s next
OpenAI says it is strengthening guardrails and reviewing internal policies.
It’s unclear how many estates will request exclusions or how OpenAI will enforce them at scale.
Congress and courts may soon be forced into action on personality rights in AI.
Conclusion
OpenAI’s halt to MLK deepfakes on Sora signals a turning point: control over the likenesses of the dead is becoming a frontline issue in AI ethics. How tech firms, legislators, and estates shape those rules will define digital memory for generations.