Citation
Graeff, E. 2026. “The AI Bargain.” In Janna Anderson and Lee Rainie, eds., Building a Human Resilience Infrastructure for the Age of AI, Chapter 9: “Epistemic Vigilance: Discerning Truth, Illusion and Misinformation.” Imagining the Digital Future Center. https://imaginingthedigitalfuture.org/reports-and-publications/human-resilience-in-the-age-of-ai/.
Essay: The AI Bargain
The AI bargain: AI will be ‘just good enough that we won’t give it up.’ Human resilience requires epistemic humility, cultivating practical reason and investing in humans’ special moral capacities
Artificial intelligence will play a far more significant role in shaping our decisions, work and daily lives over the next decade, not because most people will demand such a transformation, but because AI will be subtly integrated into nearly every digital system we rely on. Even if many of us feel uneasy, resistance will struggle to compete with the promise of efficiency, personalization and productivity. Powerful forces of capital and the lure of perceived convenience may end up deciding for us.
At the moment, there is little appetite for the kind of regulation that might slow this integration. Generative chat assistants are celebrated as helpful companions for writing, coding and learning. Evidence is emerging, contested but concerning, that these tools can undermine attention, learning and even mental health, but the positive press is loud enough to muddy any call for restraint. Protecting children and human resilience more broadly would require moral courage from educators, technologists and policymakers.
We may see pockets of refusal. Elite families already limit screens and social media for their children, while the rest of society is nudged toward greater dependence. But opting out will not be realistic for most people. Technology companies, eager to justify their massive investments in AI infrastructure, are embedding it into learning management systems, workplace software, financial services and everyday tools like email and word processors. Software has long been engineered to be feature-rich rather than fail-safe; AI will amplify that tendency. There will be lawsuits over errors and harms, but large firms will shield themselves behind terms of service and the sheer complexity of their systems. The technology will be just good enough that we won’t give it up.
The AI bargain is no bargain
This AI bargain comes at a potentially staggering price. In her book ‘The AI Mirror,’ philosopher Shannon Vallor cautions that we are trading something essential when we rely on AI: the ‘space of moral reasons.’
Democracy depends on our ability to explain and contest decisions, to ask why a loan was denied, a student was flagged or a medical treatment recommended. Yet the deep-learning models powering today’s AI are intrinsically opaque. Vallor, echoing Frank Pasquale’s vision of a ‘black box society,’ reminds us that when reasons disappear behind algorithms, accountability follows.
The danger to human resilience is not only technical or procedural; it is fundamentally moral. If we cannot meaningfully discuss automated decisions, we will more often than not accept them and grow reliant on them. Vallor warns us about ‘moral deskilling.’ Just as GPS has eroded our ability to navigate with a map, AI may erode our capacity to deliberate, to imagine alternatives and to take responsibility for collective choices.
If we aren’t cultivating our moral skills in schools, workplaces and civic life, we will erode the practical wisdom that undergirds our human adaptability and resilience. Overreliance on machines risks shrinking our moral imagination precisely when we need it most.
How, then, should we respond?
First, we must cultivate epistemic humility. AI systems speak with unwarranted confidence and humans are tempted to mirror it. Resilience requires the opposite habit: awareness of what we do not know, curiosity about others’ experiences and respect for forms of knowledge that cannot be reduced to data. Schools and workplaces should reward slow reasoning, explanation and disagreement, not just correct answers produced fastest.
Second, we need to maintain social practices that keep the space of moral reasons alive. We should be designing AI systems that show their work. We must create and advocate for more face-to-face human forums in addition to today’s classrooms, juries and community meetings. Automated recommendations should be treated as starting points rather than verdicts. AI can also be designed and used to reinforce human deliberation. Recent experiments in participatory city visioning in Bowling Green, Kentucky, as well as the large-scale online deliberations Audrey Tang ran in Taiwan using pol.is, show that AI can widen participation rather than replace it when the design goal is collective reasoning instead of automation.
Third, we should invest in capacities that machines cannot replace: empathy, moral imagination, collective problem-solving and the patience to sit with uncertainty. These are not soft add-ons to technical skill; they are the infrastructure of democratic resilience. If we teach students to use AI and to code AI, we must also teach them when not to automate.
I hope my worries prove overstated. I also fear that it may take a cataclysmic failure of an AI-based technology to shake us out of our complacency. Absent such a unifying event, our adaptability as a species will do what it always does.
Technology, when embraced, always transforms human decision-making, work and daily life in some way. We risk degrading the moral skills and practical wisdom required for decision-making, creativity, self-care and social life until these capacities begin to feel impossible without AI assistance. The AI bargain is not settled. Let us defend the fragile, human space where reasons matter and design technologies that serve that space rather than replace it.

