Welcome to The Synthesis, a new monthly column exploring the intersection of Artificial Intelligence and documentary practice. Over the next year, co-authors shirin anlen and Kat Cizek will lay out ten (or so) key takeaways that synthesize the latest intelligence on synthetic media and AI tools—alongside their implications for nonfiction mediamaking. Balancing ethical, labor, and creative concerns, they will engage Documentary readers with interviews, analysis, and case studies. The Synthesis is part of an ongoing collaboration between the Co-Creation Studio at MIT’s Open Doc Lab and WITNESS.
Along with journalists, archivists, human rights activists, and everyone else, doc-makers find themselves navigating worlds littered with the hope and despair of Artificial Intelligence. Practically no documentary subject is immune to this pervasive technology. AI is running on smartphones and cameras. Soon, AI will turn off-the-shelf earbuds into hearing aids. It's already used to surveil employees in warehouses, run financial markets autonomously, solve complex medical challenges like protein binding, wage war remotely, and shape daily interactions via voice assistant devices and apps. In a time when AI is hard to escape, documentarians face profound questions, not just about what we document but about how we do it.
AI tools have already transformed the adjacent fields of Hollywood fiction film and video games, especially in the 3D avatar simulation of humans. Life rights and generative AI were key issues in last year's Hollywood actors' and writers' strikes. These production concerns may still seem far off for documentarians, but they are closer than they appear. Even historical documentaries are affected by this climate. The newly formed Archival Producers Alliance's "Best Practices Guidelines" spotlights the importance of safeguarding primary sources and the need for transparency as AI-generated images populate the media landscape, with the aim of protecting the integrity of archives, the public record, and collective memory.
AI is not just a backdrop to the documentary gaze; it is deeply entangled with how doc-makers create and distribute cinema today. AI is quietly being integrated into every stage of documentary making, from research and production to post and distribution. Most documentarians are already tapping into the lure of AI in their daily practice, often without realizing the broader ramifications. A doc researcher might use ChatGPT to provisionally summarize a burgeoning area of interest. Editors might use AI-enabled transcription and translation tools to guide quick edits of lengthy interview footage. And directors and studios may bypass human editors altogether, turning to AI-enabled editing tools to cut quick sizzle reels for funding submissions or marketing. Some reality TV shows are dropping assistant editors completely in favor of AI-enabled assembly edits shaped by text prompts (think #jealous #hate #fight #cry). The makers of the Anthony Bourdain documentary Roadrunner (2021) came under fire for using AI, without disclosure, to generate a deepfake of Bourdain's voice narrating short lines of text from his private emails. The makers of The Andy Warhol Diaries (2022), on the other hand, were celebrated for narrating the four-hour series entirely with a deepfaked voice of Warhol "reading" from his written diaries, and for boasting about it. The choice was hailed as a bold artistic one, aligning with Warhol's own expressed desire to "be a robot." What some may see as a creative tool, others may view as an ethical breach.
Documentarians are engaging critically with these tools, reshaping cinematic form. In Eno (2024), director Gary Hustwit employs custom-crafted generative AI tools to create a different version of the film at every screening. Welcome to Chechnya (2020) and Another Body (2023) are two trailblazing examples of the ethical use of deepfake technology. In both films, the directors used AI technologies to protect the identities of vulnerable subjects by changing (rather than blurring) their faces and voices. For audiences, this humanized the subjects while safeguarding their well-being. Meanwhile, artists are experimenting with AI as their primary creative medium, fostering communities and conversations about what's possible creatively in the hands of independent (rather than corporate and state) agents.
AI challenges the perception of reality and how truth is constructed, told, and believed. An AI-generated reproduction of Werner Herzog will undoubtedly pontificate on the nature of truth and machine creativity in this year's IDFA opening film, About a Hero, which premieres next week (our next column will feature an interview with the director, Piotr Winiewicz). These are not new questions in cinema, art, or history. Fake news is as old as news itself. But AI technologies can now output media at exponential speed and scale, with an unprecedented lack of human oversight, as Yuval Noah Harari lays out in his new bestselling book Nexus. This, he argues, is creating a new kind of information crisis.
So how might documentarians deal with the weight of all this? That's where this monthly column, The Synthesis, fits in. We aim to connect three conversations that too often happen apart. Ethics is the focus of lawyers, policy-makers, researchers, and activists. Studios push Efficiency, while unions frame it as Labor. Artists, technologists, and platforms get excited about Creative potential. We don't promise to be comprehensive, but we do hope to synthesize the key issues facing documentarians through these three lenses. In The Synthesis, you can expect interviews, case studies, and analyses of the fast-moving waters of AI and documentary.
Up next: “Surprisingly Ordinary Dreams” Generated by AI: Piotr Winiewicz Discusses His IDFA Opening Film About a Hero
Katerina Cizek is a Peabody- and Emmy-winning documentarian, author, producer, and researcher working with collective processes and emergent technologies. She is a research scientist and co-founder of the Co-Creation Studio at MIT Open Documentary Lab. She is the lead author (with Uricchio et al.) of Collective Wisdom: Co-Creating Media for Equity and Justice, published by MIT Press in 2022.
shirin anlen is an award-winning creative technologist, artist, and researcher. She is a media technologist for WITNESS, which helps people use video and technology to defend human rights. WITNESS's "Prepare, Don't Panic" Initiative has focused on global, equitable preparation for deepfakes.