One Year Later: The BASW Has Guidance on AI — Why Doesn’t the NASW?

In March 2025, the British Association of Social Workers (BASW) released Generative AI and Social Work: Initial Guidance for Practice and Ethics, offering social workers timely, profession-specific direction on the ethical use of generative artificial intelligence. The guidance was explicit in its intent: AI is already present in social work practice, and ethical reflection can no longer be optional.

One year later, in March 2026, social workers in the United States are still waiting for comparable guidance from the National Association of Social Workers (NASW).

This absence matters.

AI tools are already being used in documentation, education, supervision, and organizational decision-making across U.S. social service systems. Without clear, profession-led guidance, social workers are left to navigate ethical risks individually, inconsistently, and often in silence.

What BASW Got Right: Ethics Before Adoption

The BASW guidance begins with a clear acknowledgment: generative AI is already being used in social work practice, and its development is moving faster than policy, regulation, or professional norms. Rather than waiting for perfect certainty, BASW framed the document as initial guidance, explicitly inviting reflection, feedback, and revision.

This framing is important. It signals that ethical leadership does not require omniscience; it requires responsibility.

BASW anchors its guidance directly in its Code of Ethics, making clear that AI use must align with core professional commitments such as respect for human dignity, social justice, accountability, and professional judgment. AI is not treated as neutral or inevitable, but as a practice context that demands ethical scrutiny.

Clear Boundaries Around Risk, Bias, and Judgment

One of the strongest aspects of the BASW guidance is its clarity around risk. The document explicitly names known limitations of generative AI, including bias in training data, the risk of hallucinated or misleading outputs, and AI’s inability to understand context in the way human professionals do.

Rather than presenting AI as a solution, BASW emphasizes the necessity of human oversight. Social workers are reminded that they remain fully accountable for decisions informed by AI outputs. The guidance repeatedly reinforces that professional judgment cannot be delegated to a system, regardless of how sophisticated it appears.

This is not fear-based guidance. It is responsibility-based guidance.

Consent, Data Protection, and Accountability

BASW also addresses data protection and consent directly, warning against the use of generic AI tools for sensitive personal information. The guidance highlights the risks of entering client data into systems that may reuse or aggregate information without informed consent, explicitly linking this concern to data protection laws and ethical responsibilities.

Importantly, BASW does not place the burden solely on individual social workers. Employers are called on to provide training, clear guidance, and support. Organizations are encouraged to conduct Equalities Impact Assessments and Data Protection Impact Assessments before deploying AI tools.

This shared-responsibility model reflects an understanding of how power operates within systems.

Leadership, Regulation, and the Role of Professional Bodies

The BASW guidance goes beyond individual practice to address systemic responsibility. Employers, regulators, and governments are each assigned a role in ensuring ethical AI use in public services. BASW explicitly calls for regulatory guidance and legislative frameworks, acknowledging that frontline practitioners should not be left to manage technological risk alone.

This is where the contrast with the U.S. context becomes most visible.

In the absence of NASW-issued guidance, U.S. social workers are navigating AI adoption without a unified ethical framework from their national professional association. This creates inconsistency, increases liability, and undermines ethical confidence.

The Cost of Silence in the U.S. Context

Without NASW guidance, social workers in the U.S. are left to interpret AI ethics through fragmented sources: vendor policies, employer directives, institutional risk management, or individual judgment. This places disproportionate pressure on practitioners while offering little protection when things go wrong.

Silence does not stop AI adoption. It simply shifts responsibility downward.

For a profession grounded in ethics, justice, and accountability, this gap is increasingly difficult to justify.

A Call to Action: NASW Must Lead

The release of BASW’s guidance demonstrates that professional associations can act, even amid uncertainty. The document is not perfect or final, but it provides something essential: ethical grounding, shared language, and professional clarity.

NASW has an opportunity—and a responsibility—to do the same.

Developing AI guidance for U.S. social workers would:

  • Provide ethical clarity in a rapidly changing practice landscape

  • Reduce individual risk and uncertainty

  • Support educators, supervisors, and organizations

  • Affirm that social work values remain central in the age of AI

Guidance does not need to be exhaustive to be meaningful. It needs to be principled, transparent, and grounded in the profession’s ethical commitments.

Moving Forward With Professional Integrity

AI is not waiting for social work to catch up. Tools are already embedded in systems of care, education, and administration. The question is not whether social workers will encounter AI, but whether they will be supported in navigating it ethically.

BASW has shown what leadership can look like in this moment.

It is time for NASW to do the same.
