Doc Star of the Month: Joy Buolamwini, 'Coded Bias'

By Lauren Wissot


Joy Buolamwini, featured in Shalini Kantayya's 'Coded Bias.' Courtesy of 7th Empire Media, Inc.

Safe to say that Joy Buolamwini, civil rights star of both the tech world and Shalini Kantayya’s Coded Bias, a globetrotting investigation into how the building blocks of AI (built, of course, almost exclusively by straight white men) have basically charted a course for systemically embedding inequality into our everyday lives, never set out to be either. The MIT Media Lab researcher just wanted to create a feel-good “aspire mirror” (which she eventually did) but was having trouble getting the facial-recognition software to see her face. This sent Buolamwini, a Rhodes Scholar and Fulbright Fellow (and onetime competitive pole vaulter!), on an innocent search for answers—which ended in the darkest of realizations. Literally. Turns out that the supposedly “neutral” algorithms behind the software rendered her Blackness (and femaleness) invisible.

Which is why Documentary is grateful that the busy Buolamwini, who was born in Canada and grew up in Mississippi, took time away from her online freedom-fighting to be our November Doc Star of the Month. 

DOCUMENTARY: How did you first meet Shalini and her team and get involved in the doc? Had you worked with her on any projects before?

JOY BUOLAMWINI: Shalini reached out quite persistently through a number of different channels. I think she went to poetofcode.com. She reached out on Facebook. She also found my MIT email and reached out that way. So after all that, I decided to get back in contact with her. I hadn’t worked with her before, but I’m glad she was so persistent.

D: Were you at all involved in developing the film’s storyline—or perhaps consulted on making tech more relatable to lay audiences?

JB: I was very excited to be part of shaping the storyline. We worked together for, I guess, close to two years: introducing Shalini to various folks, getting in touch with people at Big Brother Watch UK, connecting with funders to help support the Sundance release of the film, and also connecting with the Brooklyn tenants—both in person and then later over video chat during Thanksgiving break. I definitely have to give a lot of credit to the MIT Media Lab and to Jimmy Day for oftentimes letting us use equipment last-minute and that kind of thing.

I know towards the end of putting the film together, I wasn’t sure how it might end. Shalini reached out, and at that point the Brooklyn tenants had been tentatively successful in stopping their landlord from installing face-recognition technology. So the day after Thanksgiving we were able to shoot quite a few bits to tie into the film. Then there were also just the various opportunities I had to do talks, go to hearings and so forth.

I really was quite glad that Shalini was there when I was testifying in front of Congress, as were various members of her film crew. I felt that overall it was quite collaborative, that I certainly had an opportunity to help shape the direction of the film and to introduce other voices, very powerful voices of women of color. I was also really excited to introduce Shalini to Deborah Raji and Timnit Gebru, who collaborated with me on the Gender Shades project, which was based on my MIT thesis. So many of the faces that I see in the film are people who’ve been part of this journey towards algorithmic justice. It was certainly my pleasure to make those introductions, to sometimes convince people to return an email or show up and make space in their calendar to support the film. I think it was certainly better for it.

As far as the editing, however, I definitely have to take my hat off to Shalini and the Coded Bias team. To try and make a coherent storyline out of all of the many in-depth interviews from so many extremely knowledgeable people, as well as from the real lived experience, was no mean task. I think the results show, and it was brilliantly done. I’m glad I was able to contribute in the ways that I was, and also really happy to see what the team did with all of the interviews and the footage and our time spent together. Again, I just think it was a good overall collaboration.  

D: So, any tips on making tech more accessible to the general public?

JB: With the Algorithmic Justice League, we very much focus on the importance of storytelling and sharing personal experiences. A documentary of this kind is such a wonderful medium because you can make tech relatable. You bring the impacts of AI harms closer by showing those who have experienced them, and by showing the people who are passionate about and dedicated to making sure the rest of the world knows what’s going on. Shalini connected with so many of the leading voices and thinkers in the space, as well as with the real-world people being impacted.

The general advice I would give to filmmakers is that, certainly when it comes to explaining tech and so forth, use many different angles and approaches. You see that within the film.

D: There’s a scene in which you admit that as a Black woman working in tech, you “expect to be discredited,” that you “expect to be dismissed.” You note that you’re constantly being underestimated. How do you respond to this condescension?

JB: I want to say, first and foremost, that the overall reception to the work that I’ve done has largely been positive. Even when I reached out to IBM and Microsoft, they were actually receptive to the results, to varying degrees. Amazon chose to go an antagonistic route. I was actually fortunate that the first study, Gender Shades, did not receive that kind of resistance. Once I received resistance on the second study, it seemed a little ridiculous. By then you’d had a year of industry leaders and well-regarded academics acknowledging the importance of the work.

Let me preface the following by saying it’s easy to make it seem as though there’s only one narrative when it comes to how somebody is received or treated. That being said, as a Black woman, as a young Black woman in a heavily white and a heavily male-dominated industry, I’ve certainly experienced being dismissed, diminished, sidelined. It took quite a while for people to even understand on a broad basis why the work I was doing was important, why it mattered if the faces of Black women were being misgendered.

I see this as an overall reflection of society, one that’s not unique to the tech industry, but the tech industry is uniquely positioned to shape society with AI. I think it’s particularly pertinent to the Algorithmic Justice League’s mission. Who is deciding what we focus our systems on? Who gets to shape it? Who gets to make design choices? But I think it’s also really important that we don’t make the mistake of thinking that just making the people who make tech more diverse is enough.

It’s absolutely needed. But I like to point out to people that the project I was working on, where I put on a white mask to have my dark skin detected by this mirror project—I built that project as a Black woman in tech. I was also building it the way most software engineers build a project, which is to use reusable parts. All of this is to say that if you change the faces of who makes tech—but you don’t change the status quo, you don’t change the processes, you don’t do the radical transformation—you’re still going to have algorithmic injustice. So it’s not just changing who’s doing it. It’s changing how we approach it all in the first place. It’s changing how we teach computer science.

I think one way to respond to criticism or dismissal is to not let it discourage you. Sometimes it’s so easy, if the people who hold the power say what you’re doing isn’t important, to doubt yourself. I would say that the most important thing is to keep moving, keep pushing, find the audiences where what you’re doing resonates. I knew the work we were doing in terms of pointing out racial bias and gender bias in AI was important. So even if the white men in charge didn’t necessarily think it was important, it certainly has an impact on so much of the rest of our society.

And in fact, it actually also has an impact on them. Privilege does not bestow immunity to algorithmic harms. That being said, the way to respond is to continue to speak your truth, to continue to do work that you know is meaningful and impactful for the communities that you represent. And be unapologetic with your brilliance and your insight!

D: The doc follows you to DC to address Congress about the biases built into AI. Though leaders on both sides of the aisle expressed concern, I’m wondering how much difference you think your testimony really made—especially since lawmakers were willing to tackle facial recognition but not the core of the problem, the algorithms themselves. Do you see government regulation as the path to change? Pressuring the tech companies directly? Cultural outreach—docs like Coded Bias and your own code poetry—to the public at large?

JB: Going into the Congressional hearing, I wasn’t sure what the impact would be. But after our first hearing there were actually two more hearings held on facial-recognition technologies. In fact, a robust act has now been introduced to put a moratorium on facial-recognition technology and remote biometric technology. I was actually quite heartened and impressed—you had Jim Jordan and AOC actually in agreement! When it comes to privacy, when it comes to surveillance, when it comes to civil rights, you actually have elements of the threats of facial-recognition technology that touch both sides of the aisle.

At the end of the day, it’s really this question of what kind of society we want to be living in. Beyond the federal level, we’ve been really fortunate to work with ACLU Massachusetts on a Press Pause campaign. This summer I was so happy to testify for the City of Boston, where there was a unanimous vote to ban face surveillance in the city. It’s joined about a dozen cities across the country that have put bans or moratoria on different uses of facial-recognition technology.

One thing that I have seen is that lawmakers are listening and laws are changing. I absolutely see value in making time to educate lawmakers, so that they have the necessary information to put the public interest first. Again, I was not sure what to expect. Even the number of follow-up questions that I got, again from both sides of the aisle—this is certainly an issue that impacts everybody.  

I’ve seen so much legislative success in this space, more than you might expect, and also interest. I think you see that also with the three hearings that have happened thus far. And you’re seeing other bills being proposed that go beyond facial recognition and look at other uses of algorithms. I would say lawmakers are paying attention.  

In fact, Shalini was there in 2018 when I went to brief then-Senator Kamala Harris, now Vice President-elect Harris, on facial-recognition technologies. And so I shared my research with her, I shared clips from “AI, Ain’t I A Woman.” She and six other senators wrote letters to the FBI, to the EEOC, and to the Federal Trade Commission citing some of my research and others’, and also asking what the government is doing on the new frontiers of algorithmic justice. Then in 2019 I had the opportunity to testify at two Congressional hearings, and also at several state and city hearings. 

Now, government regulation is only one piece of the puzzle. We’ve seen that direct pressure on companies can make a difference. We saw this with IBM, Microsoft and Amazon stepping back from face-recognition technology in terms of selling it to law enforcement. I absolutely think you have to play multiple angles when it comes to this fight.

You need people who are on the inside, you need people who are on the outside. You need poets, you need computer scientists, you need activists, you need politicians, you need engineers, you need researchers. I view this as having to fire on all cylinders to make it work. Yes, we can push for legislation. We ought to. I do think there is importance in making sure research that shows the limitations and the capabilities of these systems is out there. And that we continue to make sure we also educate our lawmakers and policymakers.

D: Two realizations that I ultimately took away from your facial-recognition discovery especially unnerved me. One, that AI—a nonhuman—is literally deciding who is categorized as human. And two, that AI, which renders everyone who’s not a white guy pretty much invisible, likewise literally “doesn’t see race.” This sort of algorithmic annihilation of both women and BIPOC strikes me as nothing less than existential. Is there anything that gives you hope?

JB: What gives me hope is I see that change is possible. We saw that in the case of Microsoft, IBM and Amazon backing away from facial-recognition technology in different capacities. It’s not just that this technology exists and there is no way to resist harmful uses or improve beneficial ones. That’s not the case—so that’s what gives me hope. There is evidence of this every single day.

Coded Bias is screening as part of the IDA Documentary Screening Series in January 2021, with filmmaker Q&A on Monday, January 25 at 6pm PT.


Lauren Wissot is a film critic and journalist, filmmaker and programmer, and a contributing editor at both Filmmaker magazine and Documentary magazine. She's served as the director of programming at the Hot Springs Documentary Film Festival and the Santa Fe Independent Film Festival, and has written for Salon, Bitch, The Rumpus and Hammer to Nail.