The Synthesis: Piotr Winiewicz on the “Surprisingly Ordinary Dreams” Generated by AI in His IDFA Opening Film

By shirin anlen and Katerina Cizek


About a Hero. Courtesy of TAMBO FILM


The Synthesis is a monthly column exploring the intersection of Artificial Intelligence and documentary practice. Co-authors Kat Cizek and shirin anlen will synthesize the latest intelligence on synthetic media and AI tools—alongside their implications for nonfiction mediamaking. Balancing ethical, labor, and creative concerns, they will engage Documentary readers with interviews, analysis, and case studies. The Synthesis is part of an ongoing collaboration between the Co-Creation Studio at MIT’s Open Doc Lab and WITNESS.


About a Hero is set to have its world premiere as the opening film at IDFA later this week. It’s based on a script generated by an AI trained on Werner Herzog’s interviews, voiceovers, and writing. The resulting film, full of ironic self-reflection, explores themes of originality, authenticity, common sense, and the human soul in an era shaped by machine-human relationships. The filmmakers employ a variety of AI tools—from scripting to voice synthesis to image experimentation. The film is also intercut with documentary interviews with various artists about AI. 

We spoke with Piotr Winiewicz, the film’s director, in advance of the film’s premiere over Zoom and email. The following interview has been edited for length and clarity.

 

DOCUMENTARY: You begin your film with a Herzog quote: “A computer will not make a film as good as mine in 4,500 years.” So what do you think, Piotr—is this movie as good as a Herzog film?

PIOTR WINIEWICZ: My idea was never to challenge Herzog. I saw this quote as a reflection of our inherent feeling of superiority. And then, ultimately, the question of where we get our deeply rooted technophobia from. People usually wonder whether the film is a documentary or fiction. But I see it more as an essay. It is not an attempt to make a Werner Herzog-like film—he’s just an involuntary actor in this film, with his permission, of course. It’s not about Herzog; he’s not the subject of the film. He’s an object. It made so much sense to use Herzog because of his distinct voice, vocabulary, extensive filmography, and especially his romantic cinematic approach. 

D: You started developing the project in 2018. Can you walk us through this experimentation process?

PW: When we started in 2018, there weren’t many tools available—at least not like the machine learning tools we see today. I collaborated with Dawid Górny, a software engineer; my producer, Mads Damsbo, who had experience with experimental technology; and Esbern Kaspersen, who was responsible for all machine learning models in the film. Since many of the tools we needed didn’t exist, we had to build them ourselves. For the first two years, we focused heavily on test shoots while developing and experimenting with custom tools for the film and other forms of interactive installations. Our exploration touched on multiple dimensions—text, image, sound, and voice. 

We were particularly interested in writing, as it allowed us to trace Herzog’s vocabulary and style, almost like a Turing test. We developed our own custom writing tool, Kaspar. One early breakthrough came when the model generated the line: “This is a movie about a hero dreaming up surprisingly ordinary dreams.” It felt surreal yet meaningful, revealing the model’s ability to capture Herzog’s patterns. At that point, our goal wasn’t to generate a full script. We knew the model wouldn’t produce a ready-made screenplay. Instead, we focused on feeding it Herzog’s voice-over transcripts, books, and interviews—essentially anything that captured Herzog’s personal voice, memories, and stories.

D: Your writing credit includes Kaspar and two humans. Can you unpack that for us? How did you write together? 

PW: We named the AI “Kaspar” to give it a personality and make it easier for people to understand what we were doing, especially back in 2018, when many doubted this work. 

The script was created through a waterfall process: we generated an initial text, then used its final sentence as a prompt for the next iteration, gradually building out characters and places within an evolving narrative, almost like an investigation. At first, the AI’s output felt like a disjointed exploration, but over time, we shaped the raw text through extensive editing into a coherent script. In the early stages, we couldn’t expect the technology to format the text as a script on its own, so we developed it as an iterative process.

D: That leads to a central question: is AI a tool or an author?

PW: A tool. Not sure if you felt it in the film, but I’m not an advocate for AI.

D: In the opening scene, you take a particularly creative approach to disclosure, which is a crucial challenge for AI artists and the public—how to make clear what the audience is seeing. Can you talk about the role of disclosure in your film? 

PW: It was really important for me that the film is transparent, that no one feels cheated or stupid. I wanted the audience to doubt what is real and what’s not, but at the same time, everything is stated upfront. I was looking for a cinema-style disclosure to provide more dimension. We also spent a lot of time talking to lawyers, which is also included in the opening scene. And through those aspects of disclosure, we wanted not necessarily to provide all the answers but to spark a conversation.

D: Today, over 25,000 artists and cultural workers have signed a new petition with a clear message—the unlicensed use of creative works for training generative AI poses a serious and unjust threat to the livelihoods of creators and must not be allowed. This isn’t a new debate, but I’d love to hear your thoughts. What’s your perspective on the use of AI trained on unlicensed work?

PW: The film explores exactly this issue—the provocation of hijacking someone’s personality and work, though in our case, with consent. It’s about the confusion and potential nightmares such practices can create. Herzog himself called these “alarmist films” because they highlight the horror quality of what might be coming. I will sign any petition now. But on the other hand, and this might sound naive, what if this is a democratization of the process?

D: Has Mr. H. himself seen the film yet? What did he say?

PW: Yes. I’m not sure if I can tell you what he said. He saw it in May, and we had dinner, then screened the film and had a long discussion about it. I think the most important part is that he’s not going to take me to court. It was important for me to verify that the film didn’t cross any boundaries, and we also wanted his approval. He saw in it more the work of James Joyce than his own work because of the quality… which was not a compliment.

D: What are your plans to provide more disclosure or transparency on your process later on?

PW: I think the opening is the level of transparency that we offer. For me, the biggest inspiration or reference was the film F for Fake by Orson Welles. I think I have seen this film 50 times and read so much about it, and I still don’t know what was true and what was not. And I enjoy it this way!


Katerina Cizek is a Peabody- and Emmy-winning documentarian, author, producer, and researcher working with collective processes and emergent technologies. She is a research scientist, and co-founder of the Co-Creation Studio at MIT Open Documentary Lab. She is lead author (with Uricchio et al.) of Collective Wisdom: Co-Creating Media for Equity and Justice, published by MIT Press in 2022.

shirin anlen is an award-winning creative technologist, artist, and researcher. She is a media technologist for WITNESS, which helps people use video and technology to defend human rights. WITNESS’s Prepare, Don’t Panic Initiative has focused on global, equitable preparation for deepfakes.