ISBN 978-0593241844, Random House Trade Paperbacks, 2024, 336 pages, $11.08 (hardcover)
Reviewed by: Nalonie Tyrrell, National Defense University—College of Information and Cyberspace, Washington, D.C., USA
In Unmasking AI, Joy Buolamwini accomplishes something few authors in the increasingly saturated field of artificial intelligence manage: she bridges the gap between the technical and the human, the empirical and the poetic. She does so not with the detached posture of a systems engineer, but with the voice of a person for whom biased algorithms are not abstract harms, but deeply personal affronts. This book is part memoir, part manifesto, and wholly necessary.
The narrative opens in an unlikely place: at the intersection of loneliness and doctoral research, as Buolamwini recounts her efforts to design a mirror that would alter the viewer's reflection and track their movements. She called it the “Aspire Mirror.” During development, she stumbled upon an alarming discovery: the facial recognition software she used failed to detect her own face unless she donned a white mask. It was a societal metaphor made literal: a Black woman forced to wear a white mask to be seen by a machine. The irony is equal parts devastating and revealing. Thus began the journey that would culminate in the now-famous Gender Shades project and the founding of the Algorithmic Justice League.
Yet Unmasking AI is not merely a dramatized retelling of academic research. It is a broader exploration of how the technologies we build can replicate, amplify, and even ossify the very human prejudices we claim to escape through innovation. The “coded gaze”—a term Buolamwini coins to describe the embedded bias in machine vision—joins a growing lexicon of resistance to algorithmic injustice. She discusses terms like “algorithmic oppression” and interrogates familiar ones, such as “neutrality,” prompting readers to reevaluate them.
Unmasking AI is not the cold, dry technical tome so common in this literary space. Buolamwini’s prose is richly accentuated with her Ghanaian-American heritage, her early exposure to poetry, her artistic sensibilities, and her commitment to justice. Readers expecting equations and model architectures will find, instead, vignettes of failure and perseverance, framed with an eye for the personal and the political. The author’s trajectory—from a child of scholarly immigrants (her father, Dr. John Buolamwini, a scientist; her mother, Frema the Akan, a multifaceted artist) to a Fulbright scholar in Zambia to an MIT Media Lab researcher—offers a rare view into the human development behind technological critique.
The stakes of Buolamwini’s argument are high. She explores how facial recognition technology has been deployed in flawed ways, with real-world implications for policing, surveillance, immigration, and even warfare. When an algorithm misidentifies a person, or fails to recognize them altogether, it is not just a technical hiccup. It is a silent assault on personhood. This is particularly alarming in contexts where AI is given a mask of objectivity: in courtrooms, security systems, drone targeting, and hiring algorithms. One need only recall the early failures of image generation models, such as Gemini’s inability to produce respectable images of non-white religious figures, to grasp the extent to which the training data behind AI reflects historical and cultural erasures.
Buolamwini’s concern is not just with visual recognition but also with language. She discusses how large language models (LLMs), trained on vast digital corpora, can associate words like “jihadist” or “offender” with Muslim or Black identities in ways that compound decades of racialized discourse. The implications for national security professionals are dire. If AI is to be a foundational tool for intelligence, policymaking, or defense strategy, then it must be trained, deployed, and audited with a far more robust ethical apparatus.
This is where Unmasking AI shines brightest: it does not simply diagnose a problem; it issues a call to action. Buolamwini insists we resist the tyranny of efficiency—the idea that faster computation is synonymous with better governance. In fields like national security, where split-second decisions can cost lives, the allure of automation must be tempered by a clear-eyed understanding of its limitations. Blind dependence on flawed systems will not lead to safer societies. It will lead to subtler, less perceptible forms of injustice masked in code.
Yet Unmasking AI is not all digital storm clouds. Buolamwini also offers hope. She champions what she calls “algorithmic accountability”—a vision of the future in which diverse teams build inclusive technologies, and in which public policy plays a role in ensuring fairness. Her optimism is grounded in the belief that we can, and must, do better.
In short, Unmasking AI is a crucial text for anyone working at the intersection of technology and society. For national security professionals in particular, it is a cautionary tale about the perils of uncritical adoption. But more than that, it is a beautifully written argument for why fairness, transparency, and justice must be coded into the systems we create.