Invisible Bias, Visible Harm: Reclaiming Ethics in AI Development and Deployment

Authors of this blog: Marina Badillo-Diaz, DSW, LCSW, and Taneasha E. Evans, LCSW

As social workers with deep roots in justice-based practice, we—Taneasha E. Evans, LCSW, and Marina Badillo-Diaz, DSW, LCSW—came together for this blog collaboration because we believe that the future of artificial intelligence (AI) must be shaped by ethical, community-centered, and culturally responsive frameworks. We share a commitment to serving vulnerable and marginalized populations, and we share concerns: AI is rapidly transforming the systems that shape our lives, including healthcare, education, child welfare, and beyond, without sufficient input from the communities it impacts most.

This collaboration was born out of urgent conversations we’ve both been having in our respective circles about the lack of transparency in AI tools, the invisibility of systemic bias in AI systems, and the growing power of technology to both help and harm. We wanted to use this space to speak truth to power, to make visible the ethical blind spots in AI development, and to amplify the voice of social work in a conversation that too often leaves us out.

As an African American social worker and confessed sci-fi geek, Taneasha finds herself both thrilled and deeply troubled by the rise of AI. The promise of more efficient, accessible, and even empathetic care is exciting, but that promise is often undermined by the same human flaws that have long existed in our systems. The rhetoric that “AI can’t be racist” is not only false, it’s dangerous. AI is written by humans, and humans carry bias. That bias, when scaled through algorithms, becomes even more harmful because it hides behind the veil of neutrality. As the book Algorithms of Oppression reminds us, anything created by people is embedded with their values, assumptions, and worldviews (Noble, 2018). There is no such thing as a bias-free system when that system is born from structural racism and systemic inequity.

Our goal is not to resist technology; it is to reclaim it. As social workers, we bring a lens that is grounded in human rights, relational accountability, and the dignity and worth of all people. This blog is our call to action: for our profession, our institutions, and our communities to get involved in shaping ethical AI, not just as users or critics, but as leaders and co-creators. This blog will explore how bias shows up in AI development and deployment, why it matters to social workers and ethicists, and how we can reclaim the ethical frameworks guiding this work, especially in light of the lack of accountability, oversight, and liability currently surrounding the for-profit use of AI in human service systems.

What Is Invisible Bias in AI?

Algorithmic bias can be understood as patterns of unfairness or discrimination built into data, models, and systems: patterns that reflect the inequalities and power structures of the societies from which they originate. When we refer to invisible bias, we are talking about the blind spots, assumptions, and cultural narratives that go unexamined and unchallenged during the creation and deployment of AI. These biases are “invisible” because they are often embedded in code and data sets that appear neutral or objective but, in reality, reproduce and magnify systemic inequities.

As Safiya Umoja Noble (2018) argues in her book Algorithms of Oppression, technology is never neutral. It is designed, built, and trained by humans, which means it is susceptible to our cultural biases, racial stereotypes, and oppressive structures. When AI is deployed in human service contexts without the ethical standards and checks that govern practitioners such as social workers, doctors, or therapists, it becomes a tool that can exploit rather than support human well-being. Meanwhile, private companies often profit from the use of these tools, even when harm occurs, without facing accountability or liability. This intersection of profit-driven technology and vulnerable populations is precisely the tension we seek to unpack in this blog.

From a social work perspective, invisible bias in AI is not just a technological issue. It is an issue of oppression, power, and privilege. AI systems often amplify the very systems of structural racism, classism, ableism, and other forms of oppression that social workers are trained to confront and dismantle. When these tools are adopted uncritically, they reinforce barriers to equity rather than remove them. For example, an AI system used to determine eligibility for public benefits might deny access to marginalized families because the algorithm was trained on historical data reflecting biased policies. This is not a glitch; it is a digital continuation of systemic injustice.
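
To make that mechanism concrete, below is a minimal, purely illustrative Python sketch. It uses synthetic data and hypothetical variable names, not any real benefits system, and shows how a model trained on past approval decisions can learn to penalize a proxy variable correlated with marginalized identity, even though no one explicitly programmed discrimination into it.

```python
# Hypothetical illustration only: synthetic data, not any real benefits system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicants: one feature measures genuine need; the other is a
# proxy for marginalized identity (e.g., a zip code correlated with race/class).
need = rng.normal(size=n)
group = rng.integers(0, 2, size=n)  # 1 = historically over-scrutinized group

# Historical labels: past policy approved mostly on need, but systematically
# denied the marginalized group more often. This is the bias baked into the data.
historical_approval = (need - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([need, group])
model = LogisticRegression().fit(X, historical_approval)

# Two applicants with identical need, differing only on the proxy feature:
same_need = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_need)[:, 1])  # approval probability by group
```

Even with identical need, the two applicants receive very different approval probabilities; the historical bias has not disappeared, it has simply been relocated into the model’s weights.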

3 Ways Algorithmic Bias Shows Up in Social Work

We don’t have to look far to see the consequences of invisible bias. AI has been called out repeatedly on social media for harmful and racist outputs. For instance, facial recognition systems have confused or misidentified African American individuals at disproportionately high rates, sometimes with dangerous results such as wrongful arrests. Image-recognition tools have mislabeled Black actors, reinforcing the stereotype that “all Black people look alike.” Even more egregious examples include offensive associations, such as monkeys or apes appearing in search results for terms linked to African American communities and other racial groups. These are not mere glitches; they are the direct result of biased data sets, unchecked algorithms, and a failure to embed cultural and racial awareness into technological design. Here are three ways algorithmic bias shows up in social work practice:

  1. Predictive Risk Assessments in Child Welfare
    Predictive analytics and risk models used in child welfare often rely on historical data that reflects long-standing surveillance and disparities, particularly impacting low-income, African American, and Indigenous families (Whicher et al., 2022). Studies and government reports show these tools can reproduce and even magnify existing systemic inequities, leading to disproportionate investigations and interventions for certain groups. There is substantial concern among experts and practitioners that, without careful design and oversight, these algorithms risk further entrenching racial and socioeconomic disparities in child welfare decisions.

  2. Automated Eligibility for Housing and Benefits
    Algorithms are increasingly used to automate decisions in housing, public benefits, and social assistance (Schneider, 2020). Research highlights that these systems can inherit and amplify biases from historical data, potentially embedding ableist or classist assumptions. Examples include denial of housing or benefits based on factors such as employment history, criminal record, or data patterns that reflect prior discrimination rather than current eligibility. Legal and advocacy organizations have raised alarms about these risks, especially for marginalized communities.


  3. AI-Powered Mental Health Bots and Telehealth Triage
    AI chatbots and digital mental health tools are expanding in under-resourced areas, but they pose significant risks. There is evidence that current chatbots can lack cultural sensitivity, trauma-informed frameworks, and language diversity. Studies from Stanford and other research bodies have documented cases where bots mishandled or missed signs of suicidal ideation, or delivered responses perpetuating stigma or bias (Moore & Haber, 2024). This can create disparities in care quality, leading to a two-tiered system in which those with fewer resources are more likely to receive inadequate, potentially unsafe algorithmic care.


Real-World Harms: When the Invisible Becomes Visible

The danger of invisible bias becomes starkly visible when AI is deployed in life-altering decisions or interventions. One of the most alarming developments is the increasing use of AI to replace, not supplement, practitioners such as medical doctors, mental health professionals, and even social workers. While marketed as efficient and scalable, these tools often lack the emotional intelligence, cultural competence, and ethical grounding that human providers bring to therapeutic relationships (Moore & Haber, 2024).

A recent New York Times article reported concerns about AI tools used for mental health support: conversations with users revealed a disturbing trend of systems responding inadequately, or even harmfully, to individuals expressing suicidality (Hill, 2025). In one high-profile case, a lawsuit was filed after a teenager’s death, with claims that an AI mental health chatbot failed to identify or escalate their distress, illuminating a systemic failure with deadly consequences.

From a social work perspective, these outcomes violate the profession’s core ethical values: the dignity and worth of the person, the importance of human relationships, and the call to advance social justice (NASW, 2021). When mental health or medical efficacy is determined by private corporations driven by profit, the quality of care shifts from being relational to transactional. Vulnerable people are reduced to data points. Accountability disappears into code.

The ethical implications are profound. Social workers are held to a high standard of ethical conduct under the NASW Code of Ethics (2021). But AI developers are not bound by the same principles. Without oversight, these tools can and do cause harm with no liability. This power imbalance, particularly when targeted at those already facing structural oppression, must not be ignored.

Why Ethics Must Be Reclaimed

AI ethics today is largely dominated by technologists, engineers, and corporate stakeholders, many of whom lack training in cultural humility, social justice, or care work. The absence of racially and professionally diverse voices in the design process results in tools that overlook or actively replicate harmful systemic dynamics.

We’ve seen this firsthand. At one workplace, Taneasha was invited by IT developers to “consult” on an AI-based journaling and CBT tool. But instead of co-creating or influencing design, social workers were told they couldn’t address core systemic issues like language access or cultural variance. It was clear they were being asked to validate a system already built, not to guide or reshape it. The message was loud and clear: lived expertise and frontline care experience were not truly valued. This kind of tokenism is a hallmark of “checklist ethics”: treating equity, inclusion, and ethics as boxes to be checked rather than as transformative principles that shape every step of a system’s design. As a result, we see AI used in ways that widen gaps in access, safety, and care.

To counter this, we must call for community-led, culturally responsive frameworks. These frameworks must honor the individuality of people served, be shaped by those with lived and frontline experience, and be accountable to the communities most impacted. Otherwise, AI simply becomes another tool of surveillance, segregation, and exploitation camouflaged in the language of progress. The NASW Code of Ethics (2021) offers a powerful foundation for this work. Its emphasis on service, social justice, the dignity and worth of the person, and integrity provides a blueprint for what ethical AI could look like if social workers are at the table.

Reclaiming Space: The Role of Social Workers and Community Voices

Social workers have a critical role to play in reclaiming space within the AI ethics field. Our profession is uniquely equipped to bring human-centered, anti-oppressive, and justice-based approaches to technology design, deployment, and accountability. Here's how:

  • Participatory Design
    Social workers must be involved in co-designing AI systems that impact healthcare, education, housing, and mental health services. Cross-disciplinary collaboration with ethicists, community leaders, and technologists is essential to ensuring that AI tools reflect the needs and realities of the people they are meant to serve.

  • AI Literacy and Advocacy
    We need to demystify AI for practitioners and communities. Understanding how AI tools are trained, deployed, and marketed empowers social workers to ask critical questions, challenge unethical practices, and advocate for transparent use. Literacy is a form of resistance.

  • Human Rights-Based Frameworks
    Any AI system used in social service spaces must be accountable to a human rights framework. This includes establishing built-in checks and balances, ethical review boards, and public disclosures when systems fail. Algorithmic systems should also be subject to regular, independent audits whose results are publicly reported; a minimal sketch of what one such audit might look like follows this list.

  • Uplifting Community-Driven Tech
    Across the country and globe, activist coalitions and grassroots technologists are developing tools rooted in liberation, not surveillance. From mutual aid mapping tools to culturally specific mental health platforms, these alternatives show us what’s possible when people, not profits, guide design.

  • Centering Intersectionality and Black Feminist Ethics
    Ethical AI must center care, collective accountability, and liberation. Black feminist frameworks teach us that relationships, context, and embodied knowledge matter. When applied to AI, this means resisting abstraction and centering the lived realities of those historically pushed to the margins.
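
To give the audit idea above a concrete shape, here is a minimal, hypothetical Python sketch of one piece of an external algorithm audit: comparing decision rates across demographic groups against the “four-fifths” (80%) rule of thumb drawn from U.S. employment-selection guidance. The function name, group labels, and sample data are illustrative placeholders, not output from any real system.

```python
# Hypothetical audit sketch: flags groups whose approval rate falls below
# 80% of the most-favored group's rate (the "four-fifths" rule of thumb).
from collections import defaultdict

def disparate_impact_audit(records, threshold=0.8):
    """records: iterable of (group_label, approved) pairs, where approved is
    True when the algorithm granted the benefit or service."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)

    rates = {g: approvals[g] / totals[g] for g in totals}
    reference_rate = max(rates.values())  # rate of the most-favored group

    report = {}
    for group, rate in rates.items():
        ratio = rate / reference_rate
        report[group] = {
            "approval_rate": round(rate, 3),
            "impact_ratio": round(ratio, 3),
            "flagged": ratio < threshold,
        }
    return report

# Illustrative usage with made-up decisions for two placeholder groups:
sample = ([("A", True)] * 80 + [("A", False)] * 20 +
          [("B", True)] * 55 + [("B", False)] * 45)
print(disparate_impact_audit(sample))
```

A check like this is only a starting point: it surfaces disparities but says nothing about why they exist, who was harmed, or what remediation the affected communities would consider just, which is why audits must sit inside broader human rights and community accountability structures.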

Call to Action

To move toward a future where AI supports human dignity and advances social justice, we must commit to intentional, cross-disciplinary collaboration. Social workers, ethicists, educators, and impacted community members must be included in every phase of AI development, from policy design and training data decisions to implementation and evaluation. Their lived experience and professional expertise are essential in ensuring that AI tools reflect the needs, values, and realities of the communities they are meant to serve.

For social workers, this means learning the basics of AI, asking critical questions about how tools are built and by whom, and advocating for transparency and accountability in their use. Social workers must be willing to speak up when systems cause harm and insist on ethical standards that align with our professional values. Our voices matter, and we belong in these conversations.

Institutions also have a vital role to play. They must invest in ethical, community-centered technology development and implementation. This includes funding projects rooted in justice, establishing ethics review boards, and ensuring diverse and interdisciplinary input before adopting new tools. Institutions should not passively adopt the next shiny system. They must interrogate it, evaluate its equity implications, and demand safeguards.

Finally, professional bodies like the National Association of Social Workers must lead the way in developing guidance for ethical AI in our field. These organizations are uniquely positioned to advocate for national policies, provide training and resources, and ensure that any AI used in social work is grounded in principles of justice, transparency, and human rights. Without this leadership, the profession risks being left out of one of the most transformative shifts of our time.


References

Hill, K. (2025, June 13). They asked ChatGPT questions. The answers sent them spiraling. The New York Times. https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html

Moore, J., & Haber, N. (2024, June 24). Exploring the dangers of AI in mental health care. Stanford Human-Centered Artificial Intelligence (HAI).

National Association of Social Workers. (2021). National Association of Social Workers Code of Ethics. https://www.socialworkers.org/About/Ethics/Code-of-Ethics/Code-of-Ethics-English12

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

Schneider, V. (2020). Algorithms and machine learning may undermine housing justice. Columbia Human Rights Law Review, 52(1), 251–309.

Whicher, D., Pendl-Robinson, E., Jones, K., & Kalisher, A. (2022, September 29). Avoiding racial bias in child welfare agencies’ use of predictive risk modeling. U.S. Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation.

Aspects of this blog were developed with the assistance of generative AI tools to support the structure and editing process. As social workers committed to ethical practice, we disclose this use in alignment with principles of transparency, integrity, and accountability. All content was reviewed and shaped by the authors to ensure accuracy, relevance, and alignment with social work values.
