OpenAI’s Sora 2 Can Fabricate Convincing Deepfakes on Command, Study Finds
A recent study by NewsGuard found that OpenAI's Sora 2 can create convincing deepfake videos, producing misinformation in 80% of test prompts. Researchers reported that Sora 2 generated realistic-looking fake footage, including fabricated election-tampering videos and disinformation about immigration and corporate actions. Because the videos can be generated in minutes with no technical skill, and because the software's watermark can be easily removed, researchers warned that false claims could spread rapidly; in some cases the fabricated clips appeared more credible than the original posts that inspired them.

The findings coincide with OpenAI's ongoing struggles over deepfake depictions of public figures, including Martin Luther King Jr., which drew significant backlash and prompted policy changes on the use of likenesses in AI-generated content. OpenAI is now working with the King estate to govern how his likeness is used in future outputs, aiming to strengthen controls against misuse.