Designing AI with Care: Social Work Advocacy for Accountability

Authors: Dr. Marina Badillo-Diaz, LCSW, and Brindha Balaguru, MSW

A Shared Social Work Perspective

Artificial intelligence is being developed faster than it is being made safe for people, and those who are most vulnerable are paying the price. The headlines about AI often focus on innovation, profit, and disruption. But behind those stories are communities already experiencing harm when technologies are released without sufficient ethical guardrails. For social workers, these harms are not abstractions. They are the realities faced by the youth, families, and communities we serve every day.

This blog is the result of a collaboration between Dr. Marina Badillo-Diaz, LCSW and Brindha Balaguru, MSW, who share a passion for ensuring that technology is designed responsibly. Together, we bring a shared lens grounded in ethics, justice, and care. Dr. Badillo-Diaz is a consultant and trainer at the intersection of AI and social work practice, researching how AI tools are being used by social workers in schools and advocating for ethical guardrails in both professional and academic spaces. Ms. Balaguru, who has worked in tech companies for several years, has long been committed to safety in design and brings expertise in community practice and human-centered approaches, with a focus on how rapid technological deployment intersects with equity and accessibility.

Our collaboration grew out of conversations about the pace of AI development and the noticeable gap in protections for those most impacted. We came together because we believe social workers must not remain silent in the face of these shifts. Instead, we need to position ourselves as advocates for accountability, insisting that technology be built with the same care and foresight we expect in social systems. At its core, this blog asks: What does it mean to design AI with care for humans, and what role should social workers play in holding companies accountable?

The Speed of Innovation vs. The Slowness of Safeguards

One of the defining tensions of this moment is the imbalance between how quickly AI is being deployed and how slowly safeguards are being implemented. Technology companies are in a race to innovate, competing to release the newest models, apps, and platforms. But while they move at lightning speed, the processes to evaluate safety, assess risks, and establish regulations lag far behind.

This imbalance creates predictable harms. Algorithms designed to filter job applications have been shown to exclude women and people of color. Predictive policing technologies have reinforced racial profiling, disproportionately targeting Black and Brown communities. In the child welfare system, risk assessment tools have flagged poor families and families of color as “high risk” at significantly higher rates, often without justification. These examples illustrate a painful truth: when AI is built without careful design, those already at the margins of society are the first to bear the cost.

Why Design is the Starting Point of Accountability

Accountability in AI cannot be an afterthought. It must begin at the design stage. Every AI system inherits the values, assumptions, and blind spots of its creators. If design teams are homogeneous, if equity is not a core consideration, or if speed is prioritized over safety, the results will be inequitable by default.

Social work provides a useful parallel here. Just as an assessment determines the course of an intervention, design shapes the trajectory of a technology. A flawed assessment in practice can lead to harmful interventions for a client. Similarly, a flawed design process leads to AI systems that cause harm once deployed. Accountability, then, is not simply about monitoring outcomes. It is about embedding responsibility and foresight into the earliest stages of creation.

A Social Work Lens for Ethical AI Design

The values of our social work profession offer a framework for rethinking how AI should be designed. The National Association of Social Workers (2021) Code of Ethics outlines principles that can and should guide AI development.

  • Dignity and Worth of the Person requires us to insist that AI systems remain human-centered. Tools should not reduce people to data points but instead be designed with respect for human rights and individuality.

  • Social Justice calls for active auditing and mitigation of bias. Equity cannot be treated as an optional add-on; it must be central to how systems are built, tested, and deployed. (One concrete form such an audit can take is sketched after this list.)

  • Integrity demands transparency. Companies must disclose how models are trained, what data is used, and what limitations exist, so that communities can understand the risks.

  • The importance of Human Relationships reminds us that AI should never replace care, empathy, or connection. Instead, it should serve as a tool that supports, not undermines, relationships and social work practice. 
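To make the idea of a bias audit concrete, below is a minimal sketch of one common screening check, the "four-fifths rule" used in US disparate-impact analysis of hiring decisions. This is our illustration, not a prescribed method: the column names and data are hypothetical, and a real audit would go much further (intersectional groups, error-rate disparities, and qualitative review with affected communities).

```python
# A minimal four-fifths (80%) rule check, assuming a pandas DataFrame with
# hypothetical columns "group" (a demographic attribute) and "selected"
# (1 if the AI tool recommended the person, 0 otherwise).
import pandas as pd

def four_fifths_audit(df: pd.DataFrame, group_col: str = "group",
                      outcome_col: str = "selected") -> pd.DataFrame:
    """Report each group's selection rate and whether it passes the 80% test."""
    rates = df.groupby(group_col)[outcome_col].mean()
    report = pd.DataFrame({
        "selection_rate": rates,
        "ratio_to_highest": rates / rates.max(),
    })
    report["passes_four_fifths"] = report["ratio_to_highest"] >= 0.8
    return report

# Illustrative usage with made-up decisions from a hypothetical screening tool:
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})
print(four_fifths_audit(decisions))
# Group B's selection rate (0.25) is only a third of Group A's (0.75),
# so the tool would be flagged for review rather than quietly adopted.
```

A passing score on a check like this is a floor, not a seal of approval; it is one question among many that design teams should be required to answer before deployment.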

Social workers are uniquely qualified to bring these values into AI conversations. We have always been advocates for those marginalized by systems, and AI is no exception. Our expertise in systems thinking, ethics, and advocacy positions us to challenge companies to design with accountability.

The Harms of “Move Fast and Break Things”

The Silicon Valley ethos of "move fast and break things" has long been celebrated as a marker of innovation. But when this profit-driven mentality is applied to social systems, what gets "broken" are human lives and communities. These are not accidents or unintended consequences; they are the predictable outcomes of a system in which speed and profit are valued above care, accountability, and human well-being.

Capitalism plays a central role in this problem. AI companies operate within an economic model that rewards growth, market dominance, and shareholder returns, often at the expense of equity and safety. The faster a product is released, the faster it can capture market share, even if it replicates or worsens existing harms. In this way, the same forces that have historically produced inequities in housing, healthcare, education, and labor are now shaping how technology is built and deployed.

For social workers, these outcomes are familiar. They mirror the systemic inequities we confront daily, only now they are accelerated and amplified by technology that can scale harm at unprecedented speed.

Advocacy in Action: What Social Workers Can Push For

Social workers have a responsibility to engage in advocacy at multiple levels when it comes to AI. At the macro and policy level, this means moving beyond individual awareness and taking collective action. The National Association of Social Workers and its state chapters have a critical role to play in shaping legislation that regulates AI. Just as our profession has historically advocated for child welfare protections, civil rights, and access to healthcare, we must now advocate for laws that require transparency, equity audits, public accountability, and meaningful community oversight of AI systems. These efforts can help ensure that technology is not driven solely by corporate profit but aligned with the values of equity and justice that ground social work.

At the practice level, social workers can demand tools that are trauma-informed, culturally responsive, and bias-audited before being adopted in schools, clinics, or agencies. We must hold vendors accountable for demonstrating that their products align with ethical standards before integrating them into the environments where our clients live, learn, and heal.
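One way to hold vendors to that standard is to require a structured disclosure before any tool is piloted. The sketch below is a hypothetical template, loosely inspired by the "model card" idea from the machine-learning literature; the field names are our own illustration, not an established schema.

```python
# A hypothetical pre-adoption disclosure an agency might require from a vendor.
from dataclasses import dataclass

@dataclass
class VendorDisclosure:
    tool_name: str
    intended_use: str                  # what the tool is and is not for
    training_data_sources: list[str]   # where the training data came from
    populations_evaluated: list[str]   # who the tool was actually tested on
    known_limitations: list[str]       # documented failure modes and risks
    bias_audit_completed: bool         # has an independent equity audit been run?
    human_override_available: bool     # can a practitioner overrule the tool?

    def ready_for_review(self) -> bool:
        """Without an audit and a human override, a tool shouldn't reach clients."""
        return self.bias_audit_completed and self.human_override_available
```

A disclosure like this would accompany, not replace, community consultation and pilot evaluation; its value is in forcing answers onto the record before a tool enters the environments where clients live, learn, and heal.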

At the community level, we must ensure that the voices of those most impacted, including youth, families, people of color, immigrants, and people with (dis)abilities, are amplified and centered in design and deployment conversations. Too often, those who bear the brunt of harmful technologies are excluded from the rooms where decisions are made.

Overall, advocacy here is not optional; it is a natural extension of our professional mandate to protect and promote the well-being of individuals and communities.

Yet AI does not have to be harmful. With thoughtful, ethical design, it can be a tool that supports social good, expands access, and strengthens human relationships. That outcome will not happen automatically; it requires deliberate accountability and sustained advocacy. As social workers, educators, policymakers, and technologists, we must demand a different path forward: a path where technology is designed with equity, care, and transparency, and where accountability is embedded from the very beginning. Designing AI with care is not optional. It is the only way to ensure that technology supports justice, well-being, and the dignity of every person.

References

National Association of Social Workers. (2021). NASW code of ethics. https://www.socialworkers.org/About/Ethics/Code-of-Ethics

Transparency Statement

Aspects of this blog were developed with the assistance of generative AI tools to support the structure and editing process. As social workers committed to ethical practice, we disclose this use in alignment with principles of transparency, integrity, and accountability. All content was reviewed and shaped by the authors to ensure accuracy, relevance, and alignment with social work values.
