The AI Mirror book review

The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking by Shannon Vallor
My rating: 5 of 5 stars

My fellow technologists, policymakers, educators, and education leaders wrestling with the impacts of generative AI should read Shannon Vallor’s excellent book The AI Mirror as soon as possible. In this highly readable and useful work of philosophy, the virtue ethicist Vallor calls for reclaiming our humanity in an age of machine thinking through moral wisdom and prudence.

The book starts with two organizing concepts. First, the metaphor of AI as mirror is carefully constructed to help explain how the current generation of AI technologies operates. They reflect back what is fed into them. They have no sense of the world; they inhabit no moral space. Of course, humans can’t help but anthropomorphize technologies that have human-like behaviors—projecting onto them reasoning abilities and intentions. There is a long history of this, and it’s used as a design pattern in technology to enhance usability and trustworthiness. But this is a trap. Machine thinking should not be mistaken for a machine caring about you or making moral decisions that weigh the true complexity of the world or a given, specific situation. Generative AI predicts the kinds of responses that fit the pattern of content it has been trained on.

Vallor’s other conceptual starting point comes by way of existentialist philosopher José Ortega y Gasset, who suggested that “the most basic impulse of the human personality is to be an engineer, carrying out a lifelong task of autofabrication: literally, the task of making ourselves” (p. 12). Vallor worries about how our future will be shaped if we rely on a tiny subset of humanity to design and build our AI tools—tools based on a sliver of the human experience—and we then rely on those reflections of biased data, filtered by the values of their creators, to guide society via AI-based problem-solving and decision-making.

Vallor’s use of another metaphor, “being in the space of reasons,” helps explain why this is such a big problem; it describes “being in a mental position to hear, identify, offer, and evaluate reasons, typically with others” (p. 107). She uses it to contrast AI’s possession of knowledge with the psychological and social work necessary to make meaning through reasoning. This is not how machines think. “One of the most powerful yet dangerous aspects of complex machine learning models is that they can derive solutions to knowledge tasks in a manner that entirely bypasses this space,” writes Vallor (p. 107).

Furthermore, the “space of moral reasons” represents not only the private reflective space for working through morally challenging dilemmas to arrive at individual actions, but also the public spaces for shared moral dialogue. This is politics. As Vallor notes, “the space of moral reasons is [already] routinely threatened by social forces that make it harder for humans to be ‘at home’ together with moral thinking” (p. 109). AI threatens our moral capacity by seeming to “offer to take the hard work of thinking off our shaky human hands” in ways that appear “deceptively helpful, neutral, and apolitical” (p. 109). We are on this slippery slope toward eroding our capacity for self-government. Technology can trick us into believing we are solving our biases and injustices via machine thinking, when in fact we are reinscribing those biases and injustices with AI mirrors.

As with any mirror, we will inevitably use AI to tell us who we are, despite its distortions. Social media algorithms do this every day. “For you” pages on TikTok reflect a mix of our choices, our unconscious behavior, and the opaque economies and input manipulation tuning the algorithm. But is this who we are? Is this who we want to be? At our fingertips, with no human deliberation required, we might casually assume the reflection we see is a fair rendering of ourselves and the world. Vallor distills this threat by writing, “when we can no longer know ourselves, we can no longer govern ourselves. In that moment, we will have surrendered our own agency, our collective human capacity for self-determination. Not because we won’t have it—but because we will not see it in the mirror” (p. 139).

One of the reasons I like Shannon Vallor and her writing is that she is not simply a critic of technology. She loves technology. She wants it to work for us. And she spends time in this book describing the ways generative AI can be useful. Large language models perform pattern recognition on data so vast it would take a human millennia to encounter, let alone comprehend, which allows us to learn how systems work and to find information and connections beyond the reach of human expertise. We are already unlocking scientific discoveries with AI that serve humanity.

Vallor encourages us to reclaim “technology as a human-wielded instrument of care, responsibility, and service” (p. 217). Too much of our rhetoric around AI is about transcending or liberating us “from our frail humanity” (p. 219). Replacing ourselves or our roles in self-governance and as moral arbiters will lead to magnifying injustice, making the same mistakes again and again (e.g., racist legal proceedings, sexist health diagnoses) with greater efficiency. We could be using these technologies to interrogate our broken systems and help us fix them, rather than supercharging them. The chief threat of AI is that we will come to rely on it to make morally challenging decisions for us, and the more we do this, the more we erode our individual and collective ability to exercise our moral agency, leaving AI to govern us with a set of backward and inhumane values.

My favorite part of the book is “Chapter 6: AI and the Bootstrapping Problem.” Here, Vallor returns to her arguments in her brilliant 2016 book Technology and the Virtues (my review) and renews her call for the cultivation of technomoral virtue to help us reclaim our humanity amid the din of AI boosterism. In The AI Mirror, she directs her call to my students—the engineers and technologists who will be tasked with building and using AI technologies. I have been writing for years about the need for a renewed professional identity for engineers and technologists that fully embraces their civic responsibilities. This is what drew me to Vallor’s work originally, and it is exciting to hear our calls echo one another.

She takes issue with Silicon Valley’s emphasis on perseverance as a virtue and the technological value of efficiency. If we allow our technology creators and their products to promulgate such values, we risk dooming ourselves to a less caring, less sustainable, less just future. There are some things we should stop doing. There are some applications of AI that we should refuse. And we need virtues of humility, care, courage, and civility to guide us toward moral and political wisdom. We should no longer allow “the dominant image of technical excellence and the dominant image of moral excellence to drift apart”—”neither alone is adequate for our times” (p. 179).

Using Civic Professionalism to Frame Ethical and Social Responsibility in Engineering

Citation

Graeff, E. 2025. Using Civic Professionalism to Frame Ethical and Social Responsibility in Engineering. In: Didier, C., Béranger, A., Bouzin, A., Paris, H., Supiot, J., eds. Engineering and Value Change. Philosophy of Engineering and Technology, vol 48. Springer, Cham. https://doi.org/10.1007/978-3-031-83549-0_3

Link

https://link.springer.com/chapter/10.1007/978-3-031-83549-0_3

Abstract

Most common approaches to ethical and social responsibility in engineering are insufficient to address the growing need to ensure engineers and technologists serve the common good. In particular, professional codes of ethics, grand challenges and social entrepreneurship, and corporate adoption of self-policed ethical principles are often toothless in shaping individual and corporate behavior and tend to reinscribe irresponsible technocratic ideologies at the heart of engineering culture. Erin Cech argues there is a “culture of disengagement” in engineering that depoliticizes engineering, separates and differentially values technical and social aspects of engineering work, and embraces the problematic values and worldview of meritocracy. Looking beyond STEM (science, technology, engineering, and mathematics) and STEM education to civic education and democratic theory, I argue civic professionalism, based on the work of Harry Boyte and Albert Dzur, offers engineers a framing of professional identity and practice that articulates a positive ethics of virtue and resists technocratic forms of professionalism. It proactively engages in the broader sociopolitical questions connected to engineering work and embraces a democratic epistemology and way of working. Educating engineers to become civic professionals will require cultivating reflexivity and civic skills and virtues, and the creation of experiential learning opportunities that engage authentically with sociopolitical complexity.

A Call for Civic-minded Technologists

Citation

Graeff, E. 2025. “A Call for Civic-minded Technologists.” Presented at the SNF Agora Institute, Johns Hopkins University, Baltimore, MD, Mar 25.

Presentation

Abstract

Engineering’s “culture of disengagement” (Cech 2014) casts a long shadow on society. The anemic civic philosophy, preached by lauded tech heroes, pretends politics and power don’t apply to technology, that we can reduce most problems to technical challenges, and that meritocracy is justice. There are bright spots—individual, civic-minded technologists; the Tech Workers Coalition; the Public Interest Technology University Network; and the Tech Stewardship Program. But they are insufficient. To address the challenges of our contemporary society, democracy, and sociotechnical systems, we need to understand technology’s civic landscape, reframe the technical expert’s role in democracy, and cultivate engineers to be civic professionals.

Civic Virtue among Engineers

Citation

Graeff, E. 2025. “Civic Virtue among Engineers.” Virtues & Vocations, Spring 2025. https://socialconcerns.nd.edu/virtues/magazine-home-spring-2025/civic-virtue-among-engineers/.

Link

https://socialconcerns.nd.edu/virtues/magazine-home-spring-2025/civic-virtue-among-engineers/

Introduction

My undergraduates at Olin College of Engineering want to make a positive impact. They see engineering as a career path to building a better world. Their initial theories of change are often naive. But I want them to hold onto the hope of positive impact through four years of equations, prototypes, and internships, and feel like they can live their values wherever their careers take them.

A Culture of Disengagement

The fields of engineering and computing have been experiencing a rightful reckoning with the negative impacts of emerging technologies. Their traditional models of personal, professional, and corporate ethics have long been lacking. Now citizens and their governments are realizing their inadequacy.

New research, curriculum, and ethics codes have emerged in response to the global focus on technology ethics. I’ve participated in countless conferences and meetings with scholars, educators, and practitioners trying to figure out how higher education can cultivate the necessary critical mindsets and ethical skills of technologists. I’ve introduced many of the novel ideas, frameworks, and approaches into the design, computer science, and social science courses I teach.

I’m reaching some students, but not all, and not always in the ways I hope to. Student reactions seem to fall into a few rough categories: (1) Woah! Engineers have done some really bad things. I don’t want to be an engineer anymore. (2) Ethics and responsibility seem important, but they don’t seem relevant to the kind of engineering I want to do. (3) You can’t anticipate how people will misuse technology. This is just the cost of innovation and progress. (4) Building technology in an ethical way sounds like exactly what I want to do. But I’m not seeing job postings for “Ethical Engineer.” Can I get a job doing this?

Sadly, most reactions are not in the minor success that is Category 4. Most are in the spectrum of failure represented by Categories 1–3. In these failure modes, critical examination of how technology is created and its impacts on the world erodes responsibility and the hope of positive impact and elicits defensiveness.

Four years isn’t much time, and the mentorship my colleagues and I offer is only a sliver of the learning experiences students will have during their undergraduate education. I want to make the most of it. I want to increase the likelihood that I cultivate their fragile hope and equip them with sophisticated theories of change.

read more…

How Infrastructure Works book review

How Infrastructure Works: Inside the Systems That Shape Our World by Deb Chachra
My rating: 5 of 5 stars

This is a beautiful crossover science and technology studies (STS) book. It speaks to scholars by offering several novel descriptions and frameworks of infrastructure as sociotechnical systems and introduces the concept of “infrastructural citizenship” at the end. For public interest readers, those academic ideas are made accessible through engaging stories covering the history and context of specific examples of “charismatic megastructures” and the ways they touch us personally. The author Deb Chachra makes infrastructure helpfully human-scale by presenting her own connections to infrastructure in autobiographical vignettes. Her visit to the Dinorwig Power Station in Wales is a standout example. If you are a fan of the podcast 99% Invisible, as I am, you can imagine such stories as excellent episodes.

There are two ideas in the book that changed how I see the world. The first is that “energy is the currency of the material world” (p. 40): renewable energy sources give us an opportunity to think not in terms of scarce energy but abundant energy, and to reckon with the fact that the material we have to work with is finite (Earth is essentially a closed system of matter). People need energy to survive: food, heating/cooling, mobility, etc. Infrastructure systems generate and deliver energy and also mediate the ways energy and power can be accessed and used. Humanity’s energy consumption has been growing tremendously in recent decades. Our production of energy has relied on finite materials, primarily fossil fuels, and we have produced pollution that corrupts other material resources. We must be stewards of those finite resources, and we have that opportunity if we harness the full potential of renewable energy, which, when you account for all the sunlight hitting the Earth’s surface, dramatically exceeds any conceivable consumption.

My other favorite idea comes from the metaphor of the “Black Start” (pp. 200–201), which is an emergency system for generating power and getting things online when there is a blackout of all the conventional power generation systems. Chachra argues that fossil fuels, like coal and crude oil, were humanity’s “black start”—”a transition phase.” They enabled our rapid advances in technology and society, “[lifting] us out of darkness, literally and metaphorically, giving us the time and resources to create our global, connected, highly cooperative, technological civilization.” And now that we are at this point, able to invent renewable energy technologies, it’s our duty to transcend “the pollution and inefficiency of our black start.”

The book’s title is punchy, but it would be more accurate if it read How Infrastructure Should Work. Chachra is making a normative argument through her chapters. Rather than dwelling on how infrastructure works technically, she describes the social, political, historical, and economic contexts that produce certain infrastructure and mediate how it works and what it can do. The book is also future-oriented, taking stock of our current infrastructure, whether it serves us well or not, and imagining the future we should be building together. In Chapter 10, “Rethinking the Ultrastructure,” Chachra enumerates six actionable principles for new infrastructure that distill several of the key insights of the book:

1. Plan for Abundant Energy and Finite Materials
2. Design for Resilience
3. Build for Flexibility
4. Move Toward an Ethics of Care
5. Recognize, Prioritize, and Defend Nonmonetary Benefits
6. Make It Public

The concluding chapter on “Infrastructural Citizenship” is particularly exciting to me as a scholar of citizenship and civic engagement. Chachra defines it as the “idea of being in an ongoing relationship with others simply by virtue of having bodies that exist in the world and which share common needs […] it carries with it the responsibility to sustainably steward common-pool resources, including the environment itself, so that future communities can support themselves and each other so they all can thrive” (p. 276). This form of citizenship is well aligned with my own writing on civic professionalism in engineering, but it dramatically expands the scope of accounting for the public interest: thinking locally and globally simultaneously and imagining our distant descendants and the Earth we want to leave for others.

Confession: I’m biased. Deb Chachra is a colleague of mine on the faculty of Olin College of Engineering and a longtime friend. I watched her develop some of these ideas in earlier writing and in a course on Infrastructure Studies she taught for first-year students. And I believe the book is truly great. I am impressed by how accessible she makes her ideas and, as a scholar, I see the value of the work.

I unabashedly recommend this book!
