Tech Agnostic book review

Tech Agnostic: How Technology Became the World’s Most Powerful Religion, and Why It Desperately Needs a Reformation by Greg Epstein
My rating: 3 of 5 stars

Before reading this book, I knew of the author Greg Epstein primarily through his work as a humanist chaplain at Harvard and MIT during the years I was affiliated with those institutions. I was unaware of his work as a tech journalist. This book smashes together those two facets of the author in the hope of offering a profound contribution to the “techlash” literature. I think it succeeds at offering a novel way to interpret the ways Big Tech culture has infiltrated all culture and warped the beliefs and values of several recent generations. For those like me, who have been consistently and critically following technology, internet culture, and their tendrils into philosophy, policy, etc., the book is mostly a rehashing of many things you already knew and already felt ambivalent about, except now it’s being analyzed like a religion.

I did enjoy the lessons on what religion is, how you might spot one and compare it to others you already know, and the key differences and advantages of agnosticism versus atheism. If this gets a bunch of technology navel-gazers to think deeply about the history of religion and why aspects of religion and faith are important to study even if you aren’t religious, then that will be a win. I agree with Epstein about the need for this.

A religious lens turns out to be a useful tool for analyzing the rhetoric around tech. Talking about technology as sociotechnical systems or culture, as many social science and humanities scholars—like me—do, still often misses the importance of belief and faith. When folks have irrational desires or views of the world, it’s not just that they are being deceived by hucksters. There are complex value systems that live and evolve beyond their progenitors or any isolated trend.

I was also convinced by Epstein’s argument in the conclusion for reclaiming “agnostic” as a noble posture. In the company of fellow readers, I would identify as a tech agnostic. The ambivalence of my feelings about tech is definitely a choice rather than a cop-out. It is hard-earned by riding the roller coasters of optimism and pessimism across several waves of tech.

I think the book’s main argument—that tech (the whole social, political, economic project, not just the creation of widgets or apps) is a religion, that we should be skeptical of its claims, and that we should approach it as a religious scholar would—could have landed in an essay rather than a book. But I’ll admit it was fun to hear Epstein’s and his interviewees’ version of events from the past couple of decades in tech. I was close to some of the examples and friends with specific interviewees, which added to the value for me. I just didn’t learn anything new about tech’s ethical pitfalls by the end.

If you are already studying technology as culture or just curious, add Tech Agnostic and religious analysis to your quiver.

View all my reviews

The AI Mirror book review

The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking by Shannon Vallor
My rating: 5 of 5 stars

My fellow technologists, policymakers, educators, and education leaders wrestling with the impacts of generative AI should read Shannon Vallor’s excellent book The AI Mirror as soon as possible. In this highly readable and useful work of philosophy, the virtue ethicist Vallor calls for reclaiming our humanity in an age of machine thinking through moral wisdom and prudence.

The book starts with two organizing concepts. First, the metaphor of AI as mirror is carefully constructed to help explain how the current generation of AI technologies operates. They reflect back what is fed into them. They have no sense of the world; they inhabit no moral space. Of course, humans can’t help but anthropomorphize technologies that have human-like behaviors—projecting onto them reasoning abilities and intentions. There is a long history of this, and it’s used as a design pattern in technology to enhance usability and trustworthiness. But this is a trap. Machine thinking should not be mistaken for a machine caring about you or making moral decisions that weigh the true complexity of the world or a given, specific situation. Generative AI predicts the kinds of responses that fit the pattern of content it has been trained on.

Vallor’s other conceptual starting point comes by way of existentialist philosopher José Ortega y Gasset, who suggested that “the most basic impulse of the human personality is to be an engineer, carrying out a lifelong task of autofabrication: literally, the task of making ourselves” (p. 12). Vallor worries about how our future will be shaped if we rely on a tiny subset of humanity to design and build our AI tools—tools based on a sliver of the human experience—and we then rely on those reflections of biased data, filtered by the values of their creators, to guide society via AI-based problem-solving and decision-making.

Vallor’s use of another metaphor helps explain why this is such a big problem: “being in the space of reasons,” which describes “being in a mental position to hear, identify, offer, and evaluate reasons, typically with others” (p. 107). She uses this to contrast AI’s possession of knowledge with the psychological and social work necessary to make meaning through reasoning. This is not how machines think. “One of the most powerful yet dangerous aspects of complex machine learning models is that they can derive solutions to knowledge tasks in a manner that entirely bypasses this space,” writes Vallor (p. 107).

Furthermore, the “space of moral reasons” represents not only the private reflective space for working through morally challenging dilemmas to arrive at individual actions, but also the public spaces for shared moral dialogue. This is politics. As Vallor notes, “the space of moral reasons is [already] routinely threatened by social forces that make it harder for humans to be ‘at home’ together with moral thinking” (p. 109). AI threatens our moral capacity by seeming to “offer to take the hard work of thinking off our shaky human hands” in ways that appear “deceptively helpful, neutral, and apolitical” (p. 109). We are on this slippery slope toward eroding our capacity for self-government. Technology can trick us into believing we are solving our biases and injustices via machine thinking, when in fact we are reinscribing those biases and injustices with AI mirrors.

As with any mirror, we will inevitably use AI to tell us who we are, despite its distortions. Social media algorithms do this every day. “For you” pages on TikTok reflect a mix of our choices, our unconscious behavior, and the opaque economies and input manipulation tuning the algorithm. But is this who we are? Is this who we want to be? At our fingertips, with no human deliberation required, we might casually assume the reflection we see is a fair rendering of ourselves and the world. Vallor distills this threat by writing, “when we can no longer know ourselves, we can no longer govern ourselves. In that moment, we will have surrendered our own agency, our collective human capacity for self-determination. Not because we won’t have it—but because we will not see it in the mirror” (p. 139).

One of the reasons I like Shannon Vallor and her writing is that she is not simply a critic of technology. She loves technology. She wants it to work for us. And she spends time in this book describing the ways generative AI can be useful. Large language models perform pattern recognition on data so vast it would take a human millennia to encounter, let alone comprehend, allowing us to learn things about how systems work and to find information and connections beyond the reach of mere human expertise. We are already unlocking scientific discoveries with AI that serve humanity.

Vallor encourages us to reclaim “technology as a human-wielded instrument of care, responsibility, and service” (p. 217). Too much of our rhetoric around AI is about transcending or liberating us “from our frail humanity” (p. 219). Replacing ourselves or our roles in self-governance and as moral arbiters will magnify injustice, repeating the same mistakes (e.g., racist legal proceedings, sexist health diagnoses) with greater efficiency. We could be using these technologies to interrogate our broken systems and help us fix them, rather than supercharging them. The chief threat of AI is that we will come to rely on it to make morally challenging decisions for us, and the more we do this, the more we erode our individual and collective ability to exercise our moral agency, leaving AI to govern us with a set of backward and inhumane values.

My favorite part of the book is “Chapter 6: AI and the Bootstrapping Problem.” Here, Vallor returns to her arguments in her brilliant 2016 book Technology and the Virtues (my review) and renews her call for the cultivation of technomoral virtue to help us reclaim our humanity amid the din of AI boosterism. In The AI Mirror, she directs her call to my students—the engineers and technologists who will be tasked with building and using AI technologies. I have been writing for years about the need for a renewed professional identity for engineers and technologists that fully embraces their civic responsibilities. This is what drew me to Vallor’s work originally, and it is exciting to hear our calls echo one another.

She takes issue with Silicon Valley’s emphasis on perseverance as a virtue and the technological value of efficiency. If we allow our technology creators and their products to promulgate such values, we risk dooming ourselves to a less caring, less sustainable, less just future. There are some things we should stop doing. There are some applications of AI that we should refuse. And we need virtues of humility, care, courage, and civility to guide us toward moral and political wisdom. We should no longer allow “the dominant image of technical excellence and the dominant image of moral excellence to drift apart,” because “neither alone is adequate for our times” (p. 179).

View all my reviews

Using Civic Professionalism to Frame Ethical and Social Responsibility in Engineering

Citation

Graeff, E. 2025. Using Civic Professionalism to Frame Ethical and Social Responsibility in Engineering. In: Didier, C., Béranger, A., Bouzin, A., Paris, H., Supiot, J., eds. Engineering and Value Change. Philosophy of Engineering and Technology, vol 48. Springer, Cham. https://doi.org/10.1007/978-3-031-83549-0_3

Link

https://link.springer.com/chapter/10.1007/978-3-031-83549-0_3

Abstract

Most common approaches to ethical and social responsibility in engineering are insufficient to address the growing need to ensure engineers and technologists serve the common good. In particular, professional codes of ethics, grand challenges and social entrepreneurship, and corporate adoption of self-policed ethical principles are often toothless in shaping individual and corporate behavior and tend to reinscribe irresponsible technocratic ideologies at the heart of engineering culture. Erin Cech argues there is a “culture of disengagement” in engineering that depoliticizes engineering, separates and differentially values the technical and social aspects of engineering work, and embraces the problematic values and worldview of meritocracy. Looking beyond STEM (science, technology, engineering, and mathematics) and STEM education to civic education and democratic theory, I argue that civic professionalism, based on the work of Harry Boyte and Albert Dzur, offers engineers a framing of professional identity and practice that articulates a positive ethics of virtue and resists technocratic forms of professionalism. It proactively engages the broader sociopolitical questions connected to engineering work and embraces a democratic epistemology and way of working. Educating engineers to become civic professionals will require cultivating reflexivity and civic skills and virtues, and creating experiential learning opportunities that engage authentically with sociopolitical complexity.

A Call for Civic-minded Technologists

Citation

Graeff, E. 2025. “A Call for Civic-minded Technologists.” Presented at the SNF Agora Institute, Johns Hopkins University, Baltimore, MD, Mar 25.

Abstract

Engineering’s “culture of disengagement” (Cech 2014) casts a long shadow on society. The anemic civic philosophy preached by lauded tech heroes pretends that politics and power don’t apply to technology, that we can reduce most problems to technical challenges, and that meritocracy is justice. There are bright spots: individual, civic-minded technologists; the Tech Workers Coalition; the Public Interest Technology University Network; and the Tech Stewardship Program. But they are insufficient. To address the challenges of our contemporary society, democracy, and sociotechnical systems, we need to understand technology’s civic landscape, reframe the technical expert’s role in democracy, and cultivate engineers to be civic professionals.