Cultivating Civic Virtue in Engineering Education

Citation

Graeff, E. 2025. “Cultivating Civic Virtue in Engineering Education.” Presented at the 15th Symposium on Engineering and Liberal Education, Union College, Schenectady, NY, Sep 13.

Presentation Recording

Slides

Abstract

Many undergraduate engineers begin their education with a desire to make a positive impact on the world. Yet their moral ambition and belief in the relationship between public welfare and professional responsibility often diminish over time—a phenomenon sociologist Erin Cech attributes to a “culture of disengagement” in engineering education. While recent attention to technology ethics has spurred new research, curricula, and professional codes, there remains a pressing need to more holistically support the ethical commitments and civic engagement that our complex world demands of engineers.

This presentation argues for emphasizing civic virtue as a framework for reorienting engineering education toward civic-mindedness and public welfare. In her book Technology and the Virtues, philosopher Shannon Vallor proposes a framework of “technomoral” virtues to help individuals navigate the ethical challenges of an increasingly interconnected and uncertain world. For engineers, these virtues offer a richer and more integrated ethical foundation than traditional models of professional conduct or risk mitigation—and they align with the long-standing goals of liberal education. I focus on four technomoral virtues in particular—humility, care, courage, and civility—which I argue are essential to preparing engineers for responsible civic participation and ethical practice.

Crucially, this work should not require a wholesale reinvention of engineering education. Many pedagogical practices already used at the intersection of liberal and engineering education are well-suited to cultivating civic virtue. Critical reflection, democratic pedagogy, community engagement, and experiential learning provide meaningful opportunities for students to wrestle with ethical complexity, practice empathy, and connect their technical work to broader social and political contexts. What’s needed is more intentional and sustained use of such practices in and across courses to support students in developing durable ethical dispositions.

I will share insights from my own teaching and advising, including examples from capstone design courses and community-engaged design projects that have prompted students to critically examine the real-world consequences of their work and rethink their roles as engineers. I will also propose specific strategies for embedding technomoral virtues into existing curricula, drawing on best practices in virtue and character education.

At a time when engineering faces urgent questions about its public purpose and societal impact, we must embrace the full ethical and civic potential of undergraduate engineering education. Cultivating civic virtue can help students sustain their hope of doing good through engineering—and equip them to do so more responsibly, thoughtfully, and justly.

The AI Mirror book review

The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking by Shannon Vallor
My rating: 5 of 5 stars

My fellow technologists, policymakers, educators, and education leaders wrestling with the impacts of generative AI should read Shannon Vallor’s excellent book The AI Mirror as soon as possible. In this highly readable and useful work of philosophy, the virtue ethicist Vallor calls for reclaiming our humanity in an age of machine thinking through moral wisdom and prudence.

The book starts with two organizing concepts. First, the metaphor of AI as mirror is carefully constructed to help explain how the current generation of AI technologies operates. They reflect back what is fed into them. They have no sense of the world; they inhabit no moral space. Of course, humans can’t help but anthropomorphize technologies that have human-like behaviors—projecting onto them reasoning abilities and intentions. There is a long history of this, and it’s used as a design pattern in technology to enhance usability and trustworthiness. But this is a trap. Machine thinking should not be mistaken for a machine caring about you or making moral decisions that weigh the true complexity of the world or a given, specific situation. Generative AI predicts the kinds of responses that fit the pattern of content it has been trained on.

Vallor’s other conceptual starting point comes by way of existentialist philosopher José Ortega y Gasset, who suggested that “the most basic impulse of the human personality is to be an engineer, carrying out a lifelong task of autofabrication: literally, the task of making ourselves” (p. 12). Vallor worries about how our future will be shaped if we rely on a tiny subset of humanity to design and build our AI tools—tools based on a sliver of the human experience—and we then rely on those reflections of biased data, filtered by the values of their creators, to guide society via AI-based problem-solving and decision-making.

Vallor explains why this is such a big problem with another metaphor, “being in the space of reasons,” which describes “being in a mental position to hear, identify, offer, and evaluate reasons, typically with others” (p. 107). She uses it to contrast AI’s possession of knowledge with the psychological and social work necessary to make meaning through reasoning. This is not how machines think. “One of the most powerful yet dangerous aspects of complex machine learning models is that they can derive solutions to knowledge tasks in a manner that entirely bypasses this space,” writes Vallor (p. 107).

Furthermore, the “space of moral reasons” represents not only the private reflective space for working through morally challenging dilemmas to arrive at individual actions, but also the public spaces for shared moral dialogue. This is politics. As Vallor notes, “the space of moral reasons is [already] routinely threatened by social forces that make it harder for humans to be ‘at home’ together with moral thinking” (p. 109). AI threatens our moral capacity by seeming to “offer to take the hard work of thinking off our shaky human hands” in ways that appear “deceptively helpful, neutral, and apolitical” (p. 109). We are on this slippery slope toward eroding our capacity for self-government. Technology can trick us into believing we are solving our biases and injustices via machine thinking, when in fact we are reinscribing those biases and injustices with AI mirrors.

As with any mirror, we will inevitably use AI to tell us who we are, despite its distortions. Social media algorithms do this every day. “For you” pages on TikTok reflect a mix of our choices, our unconscious behavior, and the opaque economies and input manipulation tuning the algorithm. But is this who we are? Is this who we want to be? At our fingertips, with no human deliberation required, we might casually assume the reflection we see is a fair rendering of ourselves and the world. Vallor distills this threat by writing, “when we can no longer know ourselves, we can no longer govern ourselves. In that moment, we will have surrendered our own agency, our collective human capacity for self-determination. Not because we won’t have it—but because we will not see it in the mirror” (p. 139).

One of the reasons I like Shannon Vallor and her writing is that she is not simply a critic of technology. She loves technology. She wants it to work for us. And she spends time in this book describing the ways generative AI can be useful. Large language models perform pattern recognition on data so vast it would take a human millennia to encounter, let alone comprehend, which allows us to learn how systems work and to find information and connections beyond the reach of mere human expertise. We are already unlocking scientific discoveries with AI that serve humanity.

Vallor encourages us to reclaim “technology as a human-wielded instrument of care, responsibility, and service” (p. 217). Too much of our rhetoric around AI is about transcending or liberating us “from our frail humanity” (p. 219). Replacing ourselves, or our roles in self-governance and as moral arbiters, will magnify injustice, repeating the same mistakes (e.g., racist legal proceedings, sexist health diagnoses) with greater efficiency. We could be using these technologies to interrogate our broken systems and help us fix them, rather than to supercharge them. The chief threat of AI is that we will come to rely on it to make morally challenging decisions for us, and the more we do this, the more we erode our individual and collective ability to exercise our moral agency, leaving AI to govern us with a set of backward and inhumane values.

My favorite part of the book is “Chapter 6: AI and the Bootstrapping Problem.” Here, Vallor returns to her arguments in her brilliant 2016 book Technology and the Virtues (my review) and renews her call for the cultivation of technomoral virtue to help us reclaim our humanity amid the din of AI boosterism. In The AI Mirror, she directs her call to my students—the engineers and technologists who will be tasked with building and using AI technologies. I have been writing for years about the need for a renewed professional identity for engineers and technologists that fully embraces their civic responsibilities. This is what drew me to Vallor’s work originally, and it is exciting to hear our calls echo one another.

She takes issue with Silicon Valley’s emphasis on perseverance as a virtue and the technological value of efficiency. If we allow our technology creators and their products to promulgate such values, we risk dooming ourselves to a less caring, less sustainable, less just future. There are some things we should stop doing. There are some applications of AI that we should refuse. And we need virtues of humility, care, courage, and civility to guide us toward moral and political wisdom. We should no longer allow “the dominant image of technical excellence and the dominant image of moral excellence to drift apart”—“neither alone is adequate for our times” (p. 179).

Using Civic Professionalism to Frame Ethical and Social Responsibility in Engineering

Citation

Graeff, E. 2025. “Using Civic Professionalism to Frame Ethical and Social Responsibility in Engineering.” In: Didier, C., Béranger, A., Bouzin, A., Paris, H., Supiot, J., eds. Engineering and Value Change. Philosophy of Engineering and Technology, vol 48. Springer, Cham. https://doi.org/10.1007/978-3-031-83549-0_3

Link

https://link.springer.com/chapter/10.1007/978-3-031-83549-0_3

Abstract

Most common approaches to ethical and social responsibility in engineering are insufficient to address the growing need to ensure engineers and technologists serve the common good. In particular, professional codes of ethics, grand challenges and social entrepreneurship, and corporate adoption of self-policed ethical principles are often toothless in shaping individual and corporate behavior and tend to reinscribe irresponsible technocratic ideologies at the heart of engineering culture. Erin Cech argues there is a “culture of disengagement” in engineering that depoliticizes engineering, separates and differentially values technical and social aspects of engineering work, and embraces the problematic values and worldview of meritocracy. Looking beyond STEM (science, technology, engineering, and mathematics) and STEM education to civic education and democratic theory, I argue civic professionalism, based on the work of Harry Boyte and Albert Dzur, offers engineers a framing of professional identity and practice that articulates a positive ethics of virtue and resists technocratic forms of professionalism. It proactively engages the broader sociopolitical questions connected to engineering work and embraces a democratic epistemology and way of working. Educating engineers to become civic professionals will require cultivating reflexivity and civic skills and virtues, and creating experiential learning opportunities that engage authentically with sociopolitical complexity.

A Call for Civic-minded Technologists

Citation

Graeff, E. 2025. “A Call for Civic-minded Technologists.” Presented at the SNF Agora Institute, Johns Hopkins University, Baltimore, MD, Mar 25.

Presentation

Abstract

Engineering’s “culture of disengagement” (Cech 2014) casts a long shadow on society. The anemic civic philosophy preached by lauded tech heroes pretends that politics and power don’t apply to technology, that we can reduce most problems to technical challenges, and that meritocracy is justice. There are bright spots—individual civic-minded technologists, the Tech Workers Coalition, the Public Interest Technology University Network, and the Tech Stewardship Program. But they are insufficient. To address the challenges of our contemporary society, democracy, and sociotechnical systems, we need to understand technology’s civic landscape, reframe the technical expert’s role in democracy, and cultivate engineers to be civic professionals.

Civic Virtue among Engineers

Citation

Graeff, E. 2025. “Civic Virtue among Engineers.” Virtues & Vocations, Spring 2025. https://socialconcerns.nd.edu/virtues/magazine-home-spring-2025/civic-virtue-among-engineers/.

Link

https://socialconcerns.nd.edu/virtues/magazine-home-spring-2025/civic-virtue-among-engineers/

Introduction

My undergraduates at Olin College of Engineering want to make a positive impact. They see engineering as a career path to building a better world. Their initial theories of change are often naive. But I want them to hold onto the hope of positive impact through four years of equations, prototypes, and internships, and feel like they can live their values wherever their careers take them.

A Culture of Disengagement

The fields of engineering and computing have been experiencing a rightful reckoning with the negative impacts of emerging technologies. Their traditional models of personal, professional, and corporate ethics have long been lacking. Now citizens and their governments are realizing the inadequacy of those models.

New research, curricula, and ethics codes have emerged in response to the global focus on technology ethics. I’ve participated in countless conferences and meetings with scholars, educators, and practitioners trying to figure out how higher education can cultivate the critical mindsets and ethical skills technologists need. I’ve introduced many of these novel ideas, frameworks, and approaches into the design, computer science, and social science courses I teach.

I’m reaching some students, but not all, and not always in the ways I hope to. Student reactions seem to fall into a few rough categories: (1) Whoa! Engineers have done some really bad things. I don’t want to be an engineer anymore. (2) Ethics and responsibility seem important, but they don’t seem relevant to the kind of engineering I want to do. (3) You can’t anticipate how people will misuse technology. This is just the cost of innovation and progress. (4) Building technology in an ethical way sounds like exactly what I want to do. But I’m not seeing job postings for “Ethical Engineer.” Can I get a job doing this?

Sadly, most reactions do not fall into Category 4, the minor success. Most fall along the spectrum of failure represented by Categories 1–3. In these failure modes, critically examining how technology is created and how it impacts the world erodes students’ sense of responsibility, undermines their hope of positive impact, and elicits defensiveness.

Four years isn’t much time, and the mentorship my colleagues and I offer is only a sliver of the learning experiences students will have during their undergraduate education. I want to make the most of it. I want to increase the likelihood that I cultivate their fragile hope and equip them with sophisticated theories of change.

read more…