AI, Aging, and the Real Problem We Keep Missing
Artificial intelligence is rapidly entering homes, hospitals, and caregiving systems, often framed as a solution to the “problem” of aging populations. But as Caroline Emmer De Albuquerque Green, Head of the Accelerator Fellowship Programme at the University of Oxford Institute for Ethics in AI, makes clear, older people are not the problem. The real issue lies in the systems and institutions deploying technology without rethinking how power, ageism, and human rights are embedded into design and decision-making. AI, when implemented without critical reflection, risks reinforcing the very inequities it claims to solve.
Older Adults’ Everyday Experiences With AI at Home
For many older adults, AI is not an abstract concept—it shows up in their daily lives through smart home devices, health monitoring systems, automated reminders, and digital care platforms. These technologies are often introduced as tools for safety, efficiency, and independence. However, lived experience tells a more complex story. Systems may misinterpret behavior, override personal preferences, or prioritize institutional convenience over autonomy. When AI is designed without meaningful input from older adults themselves, it can feel intrusive rather than supportive, turning homes into sites of surveillance rather than care.
How Ageism Gets Embedded in Technology Design
Ageism in AI is rarely explicit, but it is deeply structural. Many technologies are built on assumptions that older adults are less capable, less adaptable, or inherently vulnerable. These assumptions shape everything from user interfaces to automated decision rules. When systems are designed for older people rather than with them, they often erase diversity in aging experiences related to culture, disability, income, and community. As Dr. Green emphasizes, the problem is not aging itself, but the inflexibility of systems that fail to recognize older adults as full agents in their own lives.
AI in Health and Caregiving Systems: Efficiency at What Cost?
In healthcare and caregiving, AI is frequently justified as a response to workforce shortages and rising costs. Predictive analytics, automated triage, and remote monitoring promise efficiency, but they also shift how care is defined and delivered. Decisions once made through human judgment and relationship are increasingly mediated by algorithms. Without strong ethical frameworks, this shift risks reducing care to risk scores and alerts, sidelining dignity, consent, and relational understanding. For aging populations, this can mean less voice in care decisions, not more.
Why Institutions Must Change, Not Older People
A central insight from Dr. Green’s work is that resilience should not be demanded from individuals but built into institutions. Universities, governments, and healthcare systems play a critical role in shaping how AI is researched, funded, and deployed. When institutions prioritize speed and scalability over equity and inclusion, they reproduce harm at scale. Building intergenerational, flexible systems requires changing governance structures, accountability mechanisms, and whose knowledge is valued in AI development. Older adults must be seen not as end-users to be managed, but as co-creators of technological futures.
Human Rights and an Equitable AI Future for Aging Populations
Approaching AI and aging through a human rights lens reframes the conversation entirely. It centers autonomy, participation, privacy, and dignity as non-negotiable principles rather than optional features. This means ensuring older adults have the right to understand, challenge, and refuse AI-driven interventions. It also means confronting how ageism intersects with other forms of inequality, including disability, race, and socioeconomic status. An equitable AI future is not one where technology simply fills gaps, but one where systems are redesigned to work for everyone across the life course.
Learning From Critical Conversations on AI and Bias
These issues were powerfully explored during discussions connected to the Coded Bias screening hosted by the Algorithmic Justice League, where Dr. Green highlighted how bias is not a technical glitch but a reflection of societal values. Conversations like these remind us that ethical AI is not achieved through better code alone, but through institutional courage, interdisciplinary collaboration, and sustained engagement with those most affected by technology.
Rethinking AI, Aging, and Responsibility
As AI continues to shape health and caregiving, social workers, policymakers, educators, and technologists must resist narratives that frame aging as a problem to be optimized away. The challenge before us is not to make older people adapt to rigid systems, but to make systems more resilient, flexible, and intergenerational. AI can support aging populations, but only if it is grounded in human rights, designed with, not for, older adults, and governed by institutions willing to change themselves.
To understand these dynamics more deeply, Dr. Green’s full interview offers critical insight into how we can move from ageist automation toward truly ethical, inclusive AI.
https://www.youtube.com/watch?v=R3BnJ4dyrco
You can also check out Dr. Marina Badillo-Diaz's free online webinar and obtain 2 CEUs at no cost: https://preview.mailerlite.io/preview/1164796/emails/184918509603521803
The content in this blog was created with the assistance of Artificial Intelligence (AI) and was reviewed and edited by Dr. Marina Badillo-Diaz to ensure accuracy, relevance, and integrity. Dr. Badillo-Diaz's expertise and oversight ensure the content in this blog meets the standards of professional social work practice.