Column
Slowness has been an established mode of cinema for many decades, but television's mass-market nature and heavy ties to advertising interests meant it took longer for that medium to adopt such a form. PBS's Ambient Film arrives amid a glut of such works. But vectors like Instagram, YouTube, and TikTok long ago outpaced what traditional networks and streaming platforms can offer, in terms of both quantity and length.
It seems like the world wakes up to new information about artificial intelligence technology every day. From the lawsuits over the use of unlicensed works to train AI models, to the controversies that arise from late actors' resurrections in Hollywood's biggest franchises, it's easy to see how the use of artificial intelligence can be daunting to filmmakers. To help filmmakers navigate the tricky waters of AI, our firm has compiled a few items of practical advice that we believe can be helpful to documentary filmmakers who want to use AI in a cautious yet effective manner. And although these tips are based on U.S. law, we're hopeful that their practical nature will benefit filmmakers in international jurisdictions as well.
Cinematographer Iris Ng seeks meaning in her experiences on set, both on and off camera.
When making Deepfaking Sam Altman (2025), documentary director Adam Bhala Lough (Telemarketers) found himself in deep doo doo. Despite months of trying, he still hadn't secured an interview with Sam Altman (CEO of OpenAI) for a film he had promised about AI. So he took a page from Altman's own playbook. The film follows Lough setting out on his journey, working with deepfakers in India, meeting with lawyers, and ultimately spending a lot of time chatting and bonding with the resulting AI chatbot, called SamBot. For this edition of The Synthesis, we spoke with Lough about the film, his use of AI, and its implications for documentary.
In a recent joint submission to a call for contributions on AI and Creativity at the United Nations Human Rights Council Advisory Committee, WITNESS, the Co-Creation Studio at MIT, and the Archival Producers Alliance (APA) outlined these pressing dangers. Drawing from years of frontline research, workshops, and advocacy with creative communities and human rights defenders around the world, we identified seven core threats AI poses to human creativity.
Sora, a new generative AI video tool from OpenAI, is named after the Japanese word for sky. Is the sky the limit? Last year, the company gave early access to 300 artists, some of whom later denounced the company's product release as artwashing. OpenAI responded with a series of exclusive promotional screenings of artist-made films for industry executives in New York, Los Angeles, and Tokyo. What might this all mean for the documentary field? We decided to run our own experiment. To test the limits of Sora, we prompted it with the taglines from the six most recent Oscar-winning documentaries. We showed the resulting 15-second silent clips to a panel of seven documentary luminaries over Zoom.
When it comes to AI and documentary, all bets are off in 2025. So, we scrapped our column line-up for The Synthesis and hit reset. To recap, it's been a dizzying year so far: in Europe, the February AI Action Summit in Paris failed to usher in much meaningful regulation, and in the U.S., under the new presidential administration, a March directive from the National Institute of Standards and Technology eliminated mentions of "AI safety" and "AI fairness." To reboot in this context, we checked in with a few documentarians, artists, and human rights advocates. We asked them this question: In this unregulated and dysregulated landscape, what are the immediate and new concerns of AI shaping the future of documentary filmmaking in 2025?
