Running for re-election as a Trustee of the Needham Free Public Library in 2026

I’ve been a trustee of the Needham Free Public Library since July 2022 and Town Meeting Member (Precinct D) since April 2024. I’m an Olin College of Engineering professor with social science, design, and computing expertise, which I’ve already put to use in tackling my public responsibilities. I also have two kids in Needham public schools.

Needham’s library has a 2023–2027 strategic plan and a space utilization plan with four phases of construction, both of which I helped develop. The strategic plan emphasizes enhancing the library’s services to support our current patrons and growing our outreach to the whole community. I served as the user representative to the Permanent Public Building Committee during the efforts to develop our space utilization plan and the Phase 1 design for the new teen space the library will open in spring 2026. Currently, I serve as the liaison to the Library Foundation of Needham to strengthen the partnership between this important fundraising institution and the policy and advocacy work done by the Trustees.

Executing these efforts will require the expertise and network I cultivated as a PPBC user representative and as Chair of the Trustees from April 2024 to April 2025, and it will be crucial for the library’s ongoing success. I want to see this through as a re-elected Library Trustee.

Cultivating Civic Virtue in Engineering Education

Citation

Graeff, E. 2025. “Cultivating Civic Virtue in Engineering Education.” Presented at the 15th Symposium on Engineering and Liberal Education, Union College, Schenectady, NY, Sep 13.

Presentation Recording

Slides

Abstract

Many undergraduate engineers begin their education with a desire to make a positive impact on the world. Yet their moral ambition and belief in the relationship between public welfare and professional responsibility often diminish over time—a phenomenon sociologist Erin Cech attributes to a “culture of disengagement” in engineering education. While recent attention to technology ethics has spurred new research, curricula, and professional codes, there remains a pressing need to more holistically support the ethical commitments and civic engagement that our complex world demands of engineers.

This presentation argues for emphasizing civic virtue as a framework for reorienting engineering education toward civic-mindedness and public welfare. In her book Technology and the Virtues, philosopher Shannon Vallor proposes a framework of “technomoral” virtues to help individuals navigate the ethical challenges of an increasingly interconnected and uncertain world. For engineers, these virtues offer a richer and more integrated ethical foundation than traditional models of professional conduct or risk mitigation—and they align with the long-standing goals of liberal education. I focus on four technomoral virtues in particular—humility, care, courage, and civility—which I argue are essential to preparing engineers for responsible civic participation and ethical practice.

Crucially, this work should not require a wholesale reinvention of engineering education. Many pedagogical practices already used at the intersection of liberal and engineering education are well-suited to cultivating civic virtue. Critical reflection, democratic pedagogy, community engagement, and experiential learning provide meaningful opportunities for students to wrestle with ethical complexity, practice empathy, and connect their technical work to broader social and political contexts. What’s needed is more intentional and sustained use of such practices in and across courses to support students in developing durable ethical dispositions.

I will share insights from my own teaching and advising, including examples from capstone design courses and community-engaged design projects that have prompted students to critically examine the real-world consequences of their work and rethink their roles as engineers. I will also propose specific strategies for embedding technomoral virtues into existing curricula, drawing on best practices in virtue and character education.

At a time when engineering faces urgent questions about its public purpose and societal impact, we must embrace the full ethical and civic potential of undergraduate engineering education. Cultivating civic virtue can help students sustain their hope of doing good through engineering—and equip them to do so more responsibly, thoughtfully, and justly.

Civic Virtue among Engineers

Citation

Graeff, E. 2025. “Civic Virtue among Engineers.” Virtues & Vocations, Spring 2025. https://socialconcerns.nd.edu/virtues/magazine-home-spring-2025/civic-virtue-among-engineers/.

Link

https://socialconcerns.nd.edu/virtues/magazine-home-spring-2025/civic-virtue-among-engineers/

Introduction

My undergraduates at Olin College of Engineering want to make a positive impact. They see engineering as a career path to building a better world. Their initial theories of change are often naive. But I want them to hold onto the hope of positive impact through four years of equations, prototypes, and internships, and feel like they can live their values wherever their careers take them.

A Culture of Disengagement

The fields of engineering and computing have been experiencing a rightful reckoning with the negative impacts of emerging technologies. Their traditional models of personal, professional, and corporate ethics have long been lacking. Now citizens and their governments are realizing their inadequacy.

New research, curricula, and ethics codes have emerged in response to the global focus on technology ethics. I’ve participated in countless conferences and meetings with scholars, educators, and practitioners trying to figure out how higher education can cultivate in technologists the necessary critical mindsets and ethical skills. I’ve introduced many of these novel ideas, frameworks, and approaches into the design, computer science, and social science courses I teach.

I’m reaching some students, but not all, and not always in the ways I hope to. Student reactions seem to fall into a few rough categories: (1) Woah! Engineers have done some really bad things. I don’t want to be an engineer anymore. (2) Ethics and responsibility seem important, but they don’t seem relevant to the kind of engineering I want to do. (3) You can’t anticipate how people will misuse technology. This is just the cost of innovation and progress. (4) Building technology in an ethical way sounds like exactly what I want to do. But I’m not seeing job postings for “Ethical Engineer.” Can I get a job doing this?

Sadly, most reactions are not in the minor success that is Category 4. Most are in the spectrum of failure represented by Categories 1–3. In these failure modes, critical examination of how technology is created and its impacts on the world erodes responsibility and the hope of positive impact and elicits defensiveness.

Four years isn’t much time, and the mentorship my colleagues and I offer is only a sliver of the learning experiences students will have during their undergraduate education. I want to make the most of it. I want to increase the likelihood that I cultivate their fragile hope and equip them with sophisticated theories of change.

The AI Bargain

Citation

Graeff, E. 2026. “The AI Bargain.” In Janna Anderson and Lee Rainie, eds., Building a Human Resilience Infrastructure for the Age of AI, Chapter 9: “Epistemic Vigilance: Discerning Truth, Illusion and Misinformation.” Imagining the Digital Future Center. https://imaginingthedigitalfuture.org/reports-and-publications/human-resilience-in-the-age-of-ai/.

Link

Essay: The AI Bargain

The AI bargain: AI will be ‘just good enough that we won’t give it up.’ Human resilience requires epistemic humility, cultivating practical reason and investing in humans’ special moral capacities

Artificial intelligence will play a far more significant role in shaping our decisions, work and daily lives over the next decade, not because most people will demand such a transformation, but because AI will be subtly integrated into nearly every digital system we rely on. Even if many of us feel uneasy, resistance will struggle to compete with the promise of efficiency, personalization and productivity. Powerful forces of capital and the lure of perceived convenience may end up deciding for us.

At the moment, there is little appetite for the kind of regulation that might slow this integration. Generative chat assistants are celebrated as helpful companions for writing, coding and learning. Evidence is emerging, contested but concerning, that these tools can undermine attention, learning and even mental health, but the positive press is loud enough to muddy any call for restraint. Protecting children and human resilience more broadly would require moral courage from educators, technologists and policymakers.

We may see pockets of refusal. Elite families already limit screens and social media for their children, while the rest of society is nudged toward greater dependence. But opting out will not be realistic for most people. Technology companies, eager to justify their massive investments in AI infrastructure, are embedding it into learning management systems, workplace software, financial services and everyday tools like email and word processors. Software has long been engineered to be feature-rich rather than fail-safe; AI will amplify that tendency. There will be lawsuits over errors and harms, but large firms will shield themselves behind terms of service and the sheer complexity of their systems. The technology will be just good enough that we won’t give it up.

The AI bargain is no bargain

This AI bargain comes at a potentially staggering price. In her book ‘The AI Mirror,’ philosopher Shannon Vallor cautions that we are trading something essential when we rely on AI: the ‘space of moral reasons.’

Democracy depends on our ability to explain and contest decisions, to ask why a loan was denied, a student was flagged or a medical treatment recommended. Yet the deep-learning models powering today’s AI are intrinsically opaque. Vallor, echoing Frank Pasquale’s vision of a ‘black box society,’ reminds us that when reasons disappear behind algorithms, accountability follows.

The danger to human resilience is not only technical or procedural; it is fundamentally moral. If we cannot meaningfully discuss automated decisions, we will more often than not accept them and grow reliant on them. Vallor warns us about ‘moral deskilling.’ Just as GPS has eroded our ability to navigate with a map, AI may erode our capacity to deliberate, to imagine alternatives and to take responsibility for collective choices.

If we aren’t cultivating our moral skills in schools, workplaces and civic life, we will erode the practical wisdom that undergirds our human adaptability and resilience. Overreliance on machines risks shrinking our moral imagination precisely when we need it most.

How, then, should we respond?

First, we must cultivate epistemic humility. AI systems speak with unwarranted confidence and humans are tempted to mirror it. Resilience requires the opposite habit: awareness of what we do not know, curiosity about others’ experiences and respect for forms of knowledge that cannot be reduced to data. Schools and workplaces should reward slow reasoning, explanation and disagreement, not just correct answers produced fastest.

Second, we need to maintain social practices that keep the space of moral reasons alive. We should be designing AI systems that show their work. We must create and advocate for more face-to-face human forums in addition to today’s classrooms, juries and community meetings. Automated recommendations should be treated as starting points rather than verdicts. And AI can also be designed and used to reinforce human deliberation. Recent experiments in participatory city visioning in Bowling Green, Kentucky, as well as the large-scale online deliberations run by Audrey Tang in Taiwan using Pol.is, show that AI can widen participation rather than replace it when the design goal is collective reasoning instead of automation.

Third, we should invest in capacities that machines cannot replace: empathy, moral imagination, collective problem-solving and the patience to sit with uncertainty. These are not soft add-ons to technical skill; they are the infrastructure of democratic resilience. If we teach students to use AI and to code AI, we must also teach them when not to automate.

I hope my worries prove overstated. I also fear it may take a cataclysmic failure of an AI-based technology to shake us out of our complacency. Absent such a unifying event, our adaptability as a species will do what it always does.

Technology, when embraced, always transforms human decision-making, work and daily life in some way. We risk degrading the moral skills and practical wisdom required for decision-making, creativity, self-care and social life until these capacities begin to feel impossible without AI assistance. The AI bargain is not settled. Let us defend the fragile, human space where reasons matter and design technologies that serve that space rather than replace it.

Tech Agnostic book review

Tech Agnostic: How Technology Became the World’s Most Powerful Religion, and Why It Desperately Needs a Reformation by Greg Epstein
My rating: 3 of 5 stars

Before reading this book, I knew of the author Greg Epstein primarily through his work as a humanist chaplain at Harvard and MIT during the times I was affiliated with those institutions. I was unaware of his work as a tech journalist. This book smashes together those two facets of the author in the hope of offering a profound contribution to the “techlash” literature. I think it succeeds at offering a novel way to interpret how Big Tech culture has infiltrated all culture and warped the beliefs and values of several recent generations. For those like me, who have been consistently and critically following technology, internet culture, and their tendrils into philosophy, policy, etc., the book is mostly a rehashing of things you already knew and already felt ambivalent about, except now it’s being analyzed like a religion.

I did enjoy the lessons on what religion is, how you might spot one and compare it to others you already know, and the key differences and advantages of agnosticism versus atheism. If this gets a bunch of technology navel-gazers to think deeply about the history of religion and why aspects of religion and faith are important to study even if you aren’t religious, then that will be a win. I agree with Epstein about the need for this.

A religious lens turns out to be a useful tool for analyzing the rhetoric around tech. Talking about technology as sociotechnical systems or culture, as many social science and humanities scholars—like me—do, still often misses the importance of belief and faith. When folks have irrational desires or views of the world, it’s not just that they are being deceived by hucksters. There are complex value systems that live and evolve beyond their progenitors or any isolated trend.

I was also convinced by Epstein’s argument in the conclusion for reclaiming “agnostic” as a noble posture. In the company of fellow readers, I would identify as a tech agnostic. The ambivalence of my feelings about tech is definitely a choice rather than a cop-out. It is hard earned by riding the roller coasters of optimism and pessimism across several waves of tech.

I think the book’s main argument—tech (the whole social, political, economic project, not just the creation of widgets or apps) is a religion, we should be skeptical of its claims, and approach it like a religious scholar would—could have landed in an essay rather than a book. But I’ll admit it was fun to hear Epstein’s and his interviewees’ version of events from the past couple decades in tech. I was close to some of the examples and friends with specific interviewees, which added to the value for me. I just didn’t learn anything new about tech’s ethical pitfalls by the end.

If you are already studying technology as culture or just curious, add Tech Agnostic and religious analysis to your quiver.


The AI Mirror book review

The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking by Shannon Vallor
My rating: 5 of 5 stars

My fellow technologists, policymakers, educators, and education leaders wrestling with the impacts of generative AI should read Shannon Vallor’s excellent book The AI Mirror as soon as possible. In this highly readable and useful work of philosophy, the virtue ethicist Vallor calls for reclaiming our humanity in an age of machine thinking through moral wisdom and prudence.

The book starts with two organizing concepts. First, the metaphor of AI as mirror is carefully constructed to help explain how the current generation of AI technologies operates. They reflect back what is fed into them. They have no sense of the world; they inhabit no moral space. Of course, humans can’t help but anthropomorphize technologies that have human-like behaviors—projecting onto them reasoning abilities and intentions. There is a long history of this, and it’s used as a design pattern in technology to enhance usability and trustworthiness. But this is a trap. Machine thinking should not be mistaken for a machine caring about you or making moral decisions that weigh the true complexity of the world or a given, specific situation. Generative AI predicts the kinds of responses that fit the pattern of content it has been trained on.

Vallor’s other conceptual starting point comes by way of existentialist philosopher José Ortega y Gasset, who suggested that “the most basic impulse of the human personality is to be an engineer, carrying out a lifelong task of autofabrication: literally, the task of making ourselves” (p. 12). Vallor worries about how our future will be shaped if we rely on a tiny subset of humanity to design and build our AI tools—tools based on a sliver of the human experience—and we then rely on those reflections of biased data, filtered by the values of their creators, to guide society via AI-based problem-solving and decision-making.

Explaining why this is such a big problem is helped by Vallor’s use of another metaphor, “being in the space of reasons”, which describes “being in a mental position to hear, identify, offer, and evaluate reasons, typically with others” (p. 107). She uses this to contrast AI possessing knowledge with the psychological and social work necessary to make meaning through reasoning. This is not how machines think. “One of the most powerful yet dangerous aspects of complex machine learning models is that they can derive solutions to knowledge tasks in a manner that entirely bypasses this space,” writes Vallor (p. 107).

Furthermore, the “space of moral reasons” represents not only the private reflective space for working through morally challenging dilemmas to arrive at individual actions, but also the public spaces for shared moral dialogue. This is politics. As Vallor notes, “the space of moral reasons is [already] routinely threatened by social forces that make it harder for humans to be ‘at home’ together with moral thinking” (p. 109). AI threatens our moral capacity by seeming to “offer to take the hard work of thinking off our shaky human hands” in ways that appear “deceptively helpful, neutral, and apolitical” (p. 109). We are on this slippery slope toward eroding our capacity for self-government. Technology can trick us into believing we are solving our biases and injustices via machine thinking, when in fact we are reinscribing those biases and injustices with AI mirrors.

As with any mirror, humans will inevitably use AI to tell us who we are, despite its distortions. Social media algorithms do this every day. “For you” pages on TikTok reflect a mix of our choices, our unconscious behavior, and the opaque economies and input manipulation tuning the algorithm. But is this who we are? Is this who we want to be? At our fingertips, with no human deliberation required, we might casually assume the reflection we see is a fair rendering of ourselves and the world. Vallor distills this threat by writing, “when we can no longer know ourselves, we can no longer govern ourselves. In that moment, we will have surrendered our own agency, our collective human capacity for self-determination. Not because we won’t have it—but because we will not see it in the mirror” (p. 139).

One of the reasons I like Shannon Vallor and her writing is that she is not simply a critic of technology. She loves technology. She wants it to work for us. And she spends time in this book describing the ways generative AI can be useful. Large language models perform pattern recognition on data so vast it would take millennia for a human to encounter, let alone comprehend, which allows us to learn things about how systems work and find information and connections beyond the reach of mere human expertise. We are already unlocking scientific discoveries with AI that serve humanity.

Vallor encourages us to reclaim “technology as a human-wielded instrument of care, responsibility, and service” (p. 217). Too much of our rhetoric around AI is about transcending or liberating us “from our frail humanity” (p. 219). Replacing ourselves or our roles in self-governance and as moral arbiters will lead to magnifying injustice, making the same mistakes again and again (e.g., racist legal proceedings, sexist health diagnoses) with greater efficiency. We could be using these technologies to interrogate our broken systems and help us fix them, rather than supercharging them. The chief threat of AI is that we will come to rely on it to make morally challenging decisions for us, and the more we do this, the more we erode our individual and collective ability to exercise our moral agency, leaving AI to govern us with a set of backward and inhumane values.

My favorite part of the book is “Chapter 6: AI and the Bootstrapping Problem.” Here, Vallor returns to her arguments in her brilliant 2016 book Technology and the Virtues (my review) and renews her call for the cultivation of technomoral virtue to help us reclaim our humanity amid the din of AI boosterism. In The AI Mirror, she directs her call to my students—the engineers and technologists who will be tasked with building and using AI technologies. I have been writing for years about the need for a renewed professional identity for engineers and technologists that fully embraces their civic responsibilities. This is what drew me to Vallor’s work originally, and it is exciting to hear our calls echo one another.

She takes issue with Silicon Valley’s emphasis on perseverance as a virtue and the technological value of efficiency. If we allow our technology creators and their products to promulgate such values, we risk dooming ourselves to a less caring, less sustainable, less just future. There are some things we should stop doing. There are some applications of AI that we should refuse. And we need virtues of humility, care, courage, and civility to guide us toward moral and political wisdom. We should no longer allow “the dominant image of technical excellence and the dominant image of moral excellence to drift apart”—”neither alone is adequate for our times” (p. 179).
