
The Synthesis: All Bets Are Off for AI and Documentary—It’s Time for a Reset

By shirin anlen and Katerina Cizek


Illustration of the see-no-evil, hear-no-evil, and speak-no-evil monkeys against a yellow background, by Helios Design Labs. Courtesy of Co-Creation Studio


The Synthesis is a monthly column exploring the intersection of Artificial Intelligence and documentary practice. Co-authors Kat Cizek and shirin anlen will synthesize the latest intelligence on synthetic media and AI tools—alongside their implications for nonfiction mediamaking. Balancing ethical, labor, and creative concerns, they will engage Documentary readers with interviews, analysis, and case studies. The Synthesis is part of an ongoing collaboration between the Co-Creation Studio at MIT’s Open Doc Lab and WITNESS.


When it comes to AI and documentary, all bets are off in 2025. So, we scrapped our column line-up for The Synthesis and hit reset. 

To recap, it’s been a dizzying year so far: in Europe, February’s AI Action Summit in Paris failed to usher in much meaningful regulation, and in the U.S., under the new Presidential administration, a March directive from the National Institute of Standards and Technology eliminated mentions of “AI safety” and “AI fairness.” This, coupled with the obliteration of fact-checking on Meta (and other platforms), is transmuting the media landscape with unprecedented dysregulation. Already, the internet is dealing with a “brute force attack” of AI-generated video slop, in both volume and consumption; individual pieces rack up hundreds of millions of views. At the same time, growing concerns about AI’s impact on media integrity and intellectual property have prompted a wave of responses to the upcoming U.S. “AI Action Plan,” with Hollywood stakeholders pushing for stronger protections against AI-driven copyright exploitation. These developments are shaping the debate ahead of the plan’s release later this year, alongside the EU’s implementation of the AI Act.

To reboot in this context, we checked in with a few documentarians, artists, and human rights advocates. We asked them one question: In this unregulated and dysregulated landscape, what immediate and emerging concerns about AI are shaping the future of documentary filmmaking in 2025?

Their answers fall into four broad directions:

1. Any use of AI makes us complicit in the current AI regimes.

Jazmin Jones, director of Seeking Mavis Beacon (2024), says: “I don’t fuck with AI at all. This goes far beyond film studios and streaming platforms. Any interaction with AI supports the system by further training its algorithm. We need to form rules and regulations around emerging technology that was trained on our stolen data while devastating the natural environment. I suppose I could borrow from the ‘think global, act local’ framework—how can we, as an industry, role model regulation in a way that will positively influence other systems? Where do we draw the line between time-saving hacks when filmmaking on shoestring budgets and inadvertently beta-testing war technology?”

2. AI poses an existential threat to the genre of documentary. 

According to Dylan Reibling, director of The End of the Internet, “When AI generates images by learning from vast datasets of existing media, it does not ‘create’ in the traditional sense. Instead, it remixes and recontextualizes past visual data. […] Can AI-generated imagery still be considered documentary, or does it risk undermining the very genre we are interested in serving?”

Violeta Ayala, a documentarian and artist who critically engages with AI, warns: “Documentary filmmaking is at a crossroads, either it evolves or it fades away. Technology is not just a tool; it is part of the narrative. We need to be digitally proficient, not merely using AI, but understanding it from the ground up, from its system instructions. […] I believe the impact on documentary filmmaking will be enormous—it will either morph, evolve, or, if we’re not careful, disappear entirely. Social media has already transformed who gets to tell the story, upending traditional power structures.”

3. It’s not just what AI does to documentary, but what it does to audiences. 

“The momentum of AI development is towards the relentless production of AI audiovisual slop sloshing across our shared timelines, drowning reality,” Sam Gregory, the Executive Director of WITNESS, points out. He adds, “Then as we start to move to more personalized content we will not even be able to commiserate over the AI slop we have watched, since content will be made for us and us alone, as we go from algorithmic content selection to algorithmic content creation. Both of these threaten the practice of documentary filmmaking—which is about singular stories, and about building publics, not catering to single individuals.” 

AI is also reshaping audience engagement and truth in storytelling by affecting the economic structures that sustain documentary. “Documentarians, journalists, and storytellers of all sorts will need to take into account the way AI drives profiles around viewership for ad and attention,” says Amelia Winger-Bearskin, an artist and the Banks Preeminence Chair and Associate Professor of AI and the Arts at the University of Florida’s Digital Worlds Institute. She continues, “These revenue-generating models for entertainment can often create a type of bias against longer-form storytelling or non-topical content like documentaries which take a long time to create. Many users of these platforms have also leveraged AI to generate misinformation, and to do so at a rate which is hard to combat via journalism and documentary. This is another possible effect of AI in media: to (further) destabilize the notion of truth in storytelling.”

4. Evaluating the effects of AI grows more urgent as “under the hood” AI adoption spreads.

The picture gets more nuanced as AI tools spread across production pipelines well beyond video, affecting notions of accuracy in documentary and journalism alike. Forensic visual investigations now frequently employ 3D scene reconstruction, using techniques like photogrammetry, 3D scanning, and LiDAR. These methods provide precise spatial maps, but AI is changing the game.

“As these models continue to evolve—becoming both more accurate and easier to use within existing production software—the concern shifts into something longer term. Does this added layer of subjectivity matter?” asks Evan Grothjan of Situ Studio. His technical explanation centers on the push to speed these processes up while requiring less input, yielding AI-generated 3D models built from 2D photographs that can look more photorealistic than 3D scans. “Among the most prominent AI-driven approaches to 3D reconstruction are Gaussian Splatting and NeRFs (Neural Radiance Fields). Unlike photogrammetry or 3D scanning, which rely on direct data interpretation to construct models, these AI-driven techniques generate information from the input data using machine learning. The resulting digital representations may appear more photorealistic than their photogrammetric or scanned counterparts, yet they remain fundamentally shaped by the data and patterns on which they were trained. This process occurs largely under the hood, allowing AI to introduce its own layer of algorithmic subjectivity.”
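For technically curious readers, here is a minimal, illustrative sketch of the idea behind a NeRF, written in Python with PyTorch (our own choice of library, not anything Grothjan or Situ Studio endorses): a small neural network maps a 3D position and viewing direction to a color and a density, and a pixel is rendered by integrating those predictions along a camera ray. The point is that the output geometry is inferred by the network rather than measured directly.

```python
# A toy NeRF sketch: illustrative only, nothing like a production pipeline.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Maps (3D point, view direction) -> (color, volume density)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 color channels + 1 density
        )

    def forward(self, points, dirs):
        out = self.net(torch.cat([points, dirs], dim=-1))
        rgb = torch.sigmoid(out[..., :3])  # colors squashed into [0, 1]
        sigma = torch.relu(out[..., 3])    # density must be non-negative
        return rgb, sigma

def render_ray(model, origin, direction, n_samples=32, near=0.1, far=4.0):
    """Volume rendering: sample the ray, query the network, alpha-composite."""
    t = torch.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction   # (n_samples, 3) sample points
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(points, dirs)
    delta = t[1] - t[0]                        # spacing between samples
    alpha = 1.0 - torch.exp(-sigma * delta)    # opacity of each sample
    # Transmittance: how much light survives to reach each sample.
    survival = torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])
    trans = torch.cumprod(survival, dim=0)[:-1]
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)  # composited pixel color

model = TinyNeRF()  # untrained; a real NeRF fits these weights to photographs
pixel = render_ray(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
print(pixel)  # every channel here passes through learned weights, not measurement
```

Contrast this with photogrammetry, where each point in the model is triangulated from measured correspondences across photographs. In the sketch above, every rendered pixel is filtered through the network’s learned weights, which is precisely the under-the-hood layer of algorithmic subjectivity Grothjan describes.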

Making subjectivity more transparent has been a documentarian concern since the earliest days of moving images. That need has never been more urgent than now, and at minimum, it means clearly labeling AI-generated images, indicating the model, dataset, and prompt used to create them. But these are still consumer-end solutions. Ayala points out, “If our goal is truly to understand, we should ask all AI companies to disclose their system instructions. It’s a straightforward request and one that could shed light on how these systems operate but I haven’t heard of it being seriously considered yet. Without this transparency, how can we effectively regulate or even fully grasp the impact AI will have on fields like documentary filmmaking?”
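What might such a label look like in practice? Below is a minimal sketch, in Python with the Pillow imaging library, of embedding disclosure metadata directly into an image file. The field names are our own illustrative invention, not an established standard; real provenance efforts such as C2PA’s Content Credentials define richer, cryptographically signed manifests.

```python
# A minimal sketch of consumer-end labeling: attach AI-disclosure metadata
# to an image as PNG text chunks. Field names here are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_image(in_path: str, out_path: str,
                   model: str, dataset: str, prompt: str) -> None:
    """Copy an image and embed model, dataset, and prompt disclosures."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("ai_model", model)          # e.g., generator name and version
    meta.add_text("ai_training_data", dataset)
    meta.add_text("ai_prompt", prompt)
    with Image.open(in_path) as img:
        img.save(out_path, pnginfo=meta)      # out_path should end in .png

# Hypothetical usage:
# label_ai_image("frame.png", "frame_labeled.png",
#                model="example-model-v1", dataset="undisclosed", prompt="...")
```

Such metadata is trivially strippable, which is why it remains a consumer-end measure rather than a substitute for the upstream transparency, about system instructions and training data, that Ayala calls for.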

For now, Spain is among the first EU countries to put some muscle into labeling regulation. It has just approved a bill imposing large fines on companies that use AI-generated content without properly labeling it as such, in an attempt to curb the spread of so-called deepfakes. It’s a start, but in this new policy landscape, it is unclear who, if anyone, will follow.


Next time in The Synthesis, we’ll share what happened when we played around with Sora, a text-to-video AI generator, in a documentary context.

Katerina Cizek is a Peabody- and Emmy-winning documentarian, author, producer, and researcher working with collective processes and emergent technologies. She is a research scientist and co-founder of the Co-Creation Studio at MIT Open Documentary Lab. She is lead author (with Uricchio et al.) of Collective Wisdom: Co-Creating Media for Equity and Justice, published by MIT Press in 2022.

shirin anlen is an award-winning creative technologist, artist, and researcher. She is a media technologist for WITNESS, which helps people use video and technology to defend human rights. WITNESS’s “Prepare, Don’t Panic” Initiative has focused on global, equitable preparation for deepfakes.