Welcome to The Synthesis, a monthly column exploring the intersection of Artificial Intelligence and documentary practice. Co-authors shirin anlen and Kat Cizek will lay out ten (or so) key takeaways that synthesize the latest intelligence on synthetic media and AI tools—alongside their implications for nonfiction mediamaking. Balancing ethical, labor, and creative concerns, they will engage Documentary readers with interviews, analysis, and case studies. The Synthesis is part of an ongoing collaboration between the Co-Creation Studio at MIT’s Open Doc Lab and WITNESS.
Satire is a hallmark of a healthy democratic media ecosystem. It acts as the pressure valve of democracy, a way to expose the absurdity of power by exaggerating it. Its very existence signals freedom of expression and speech, providing a crucial tool for holding power accountable and shaping public discourse. Satire relies on a shared baseline of reality: we knew what was real, so we could laugh at what wasn't. But in the age of generative video, that baseline is dissolving.
In our Just Joking report, we warned about the need to protect the grey areas of deepfakes, including satire, parody, and other creative uses, from automated content moderation systems that can remove or flag synthetic content without understanding context or intent. Those grey areas are vital to creativity and civic space. OpenAI's Sora 2, released in September 2025, has changed the game. The system produces 25-second scenes of cinematic quality, featuring consistent characters and plausible physics. What once appeared to be an animation experiment now resembles live footage. For documentarians, this isn't just a technical upgrade. It's an epistemological crisis.
Generative video now enables context collapse at scale. Anyone can produce hyper-realistic footage of events that never happened. And when the internet is already overflowing with slop—AI-generated mush optimized for attention, not intention—synthetic content and social media don't merely simulate truth; they bury it.
Consider March 2025, when a video of a person in a Pikachu costume sprinting past riot police in Turkey went viral. AI-generated remixes followed instantly: Pikachu defying police lines, waving a protest flag, flanked by Batman and the Joker. The fakes were quickly identified, but the episode revealed something deeper: how effortlessly AI can blur the line between reality and fantasy, how complex our media landscape has become, and how easy it is to drown the internet in cheap AI-generated content. From politicians sharing AI-generated self-portraits to fabricated events quietly infiltrating the news cycle, synthetic content now circulates alongside legitimate reporting, challenging verification, editorial standards, and the public's ability to distinguish fact from fiction.
Outlets, including The Guardian and The Atlantic, have documented how this content spreads and shapes attention. Truth and parody now share the same stage. Of course, dismissing reality is nothing new. Politicians have been calling real evidence “fake” for years, weaponizing doubt long before AI entered the picture. But now, there’s a machine for it, one that can fabricate visuals, rewrite proof, and industrialize disbelief.
For documentarians, the stakes run deeper than content confusion. Generative video threatens the basic conditions under which documentary can operate: trust in camera-captured reality, the stability of the historical record, and the dignity and control of those whose likenesses are depicted. Sora 2 mainstreams the collapse of the boundary between irony and information. People cheer its photorealistic "AI video" as a creative revolution, perhaps without realizing they are applauding the automation of disbelief itself. Add to that the ability to generate lifelike digital selves, available for anyone to use and manipulate, and the effect is profound: when everything can look like a joke, truth becomes the punchline. And when anything can be fake, what's left to record? How can we trust what is real?
Sora 2 doesn't just generate content. It generates confusion. It's not merely an app; it's a new reality engine, collapsing the distance between what is filmed and what is fabricated. Earlier AI videos felt like animation, with uncanny faces, rubbery physics, and scenes that broke their own logic. Sora 2 replaces that with coherent camera language, expressive motion, and characters that hold together across shots, all without any resilient safeguards in place. The leap is profound. It flattens meaning, erases authorship, and floods the world with cinematic sameness. What used to be the cultural fringe (remix, meme, mockery) is now a production pipeline. The realism that makes Sora 2 breathtaking could empower documentarians to visualize the impossible, reconstruct the unrecorded, and tell stories without risk; that same realism makes it epistemically toxic.
When we lose trust in what we see, we lose the institutions built on seeing: journalism, memory, evidence. And with them, democracy itself. Meaning becomes optional. Accountability dissolves. And if we lose custody over likeness, authorship, and provenance, we lose the very fabric that holds collective reality together.
What we’re witnessing isn’t the playful future of creativity. It is the erosion of the ground documentary stands on. We urgently need new norms and infrastructure: strong disclosure and verification standards, consent and likeness protections that reflect nuance rather than binary toggles, provenance systems that persist across platforms, creative and ethical frameworks that keep context intact, and public education that helps audiences interpret what they watch.
AI video is moving faster than our safeguards. Documentary must evolve just as quickly. Otherwise, we risk a world where words, images, and videos no longer carry shared meaning between any two people. We risk becoming indistinguishable from the very confusion we have always worked to clarify. For documentary, the threat is fundamental.