OpenAI’s Sora 2 Launch Spirals Into a Copyright and Deepfake Mess

Ad World News Desk
Published October 5, 2025

Key Points

  • OpenAI's Sora 2 launch faces backlash due to copyright infringement and deepfake concerns, prompting a policy reversal.

  • The app's "cameos" feature led to unauthorized use of popular characters, causing Hollywood agencies to opt out.

  • OpenAI shifts to an "opt-in" model for rights holders, considering future revenue-sharing to encourage participation.

  • Viral deepfake videos highlight Sora 2's potential for misinformation, raising questions about responsible deployment.

OpenAI’s launch of its new text-to-video app, Sora 2, quickly devolved into a chaotic free-for-all of copyright infringement and deepfake creation, forcing the company to reverse a key policy just three days after its release. The social media-style app went live with an "opt-out" copyright model, as first reported by The Wall Street Journal, placing the burden on creators to police the platform.

  • An IP free-for-all: The app’s standout "cameos" feature allows users to insert their own likeness into videos, which OpenAI pitched as a tool for "interactive fan fiction." Users immediately pushed the app’s boundaries, flooding the feed with videos featuring characters from franchises like SpongeBob SquarePants, Pokémon, and Breaking Bad, raising red flags across the entertainment industry.

  • Asking for permission, not forgiveness: Hollywood’s backlash was swift. According to The Hollywood Reporter, the powerful talent agency WME notified OpenAI it was opting out all of its clients. In response to the mounting pressure, OpenAI CEO Sam Altman announced the company was moving to an "opt-in" model that gives rights holders "more granular control," and even suggested a future revenue-sharing system to incentivize participation.

  • Seeing is deceiving: Beyond copyright concerns, the app’s potential as a misinformation engine was instantly on display. As first reported by Futurism, an OpenAI developer created a viral clip of Altman shoplifting GPUs, while NPR's own testing produced a fake video of Richard Nixon claiming the moon landing was a hoax, showing just how easily the tool can generate convincing deepfakes.

OpenAI's "move fast and break things" approach has collided with the messy reality of generative video, proving that simulating physics is far easier than simulating responsible deployment.

The race to generate AI "slop" is heating up, with Meta's recent launch of "Vibes" and its licensing of Midjourney's tech showing a different, more cautious approach. Beyond entertainment, experts suggest these video models are a crucial step toward general intelligence, as the synthetic content is used to train future AIs to better understand the physical world. The lines between model types are also blurring, as Sora 2 has shown an unexpected ability to answer science questions in its generated videos.