Students Are Using AI, Professors Are Responding: Here’s the Conversation That Matters
Authors: Dr. Badillo-Diaz and Nelson Santos, MSW Candidate
Artificial intelligence is no longer a theoretical issue in higher education. It is a lived reality in today’s classrooms. Students are encountering AI not as an abstract technology, but as a tool woven into their academic routines, learning processes, and efforts to navigate the very real pressures of school, work, caregiving, and life. Faculty, particularly in professional programs like social work, are responding in real time, often without shared language, clear institutional guidance, or established pedagogical norms.
As a social work educator, administrator, and practitioner working at the intersection of ethics, pedagogy, and technology, my role in this moment is both instructional and reflective. I am responsible for preparing future social workers to think critically, practice ethically, and engage emerging tools without losing the human-centered values that define our profession. That responsibility requires not only setting boundaries, but also listening carefully to how students are experiencing these tools in their academic and professional formation.
This month’s blog collaboration emerged through connecting with MSW Candidate Nelson Santos, who brings a strong interest in humanitarian work and the ethical integration of AI in social impact contexts. Our conversations bridged faculty and student perspectives, highlighting the importance of intergenerational dialogue in understanding how AI is shaping learning, identity, and professional values in real time. While the analysis and framing here reflect my professional lens as an educator and practitioner, the collaborative process itself reinforced a central insight: meaningful AI integration in social work cannot be top-down. It must be relational, reflective, and grounded in shared inquiry.
This blog is an invitation into the conversation that matters, especially between faculty and students: a conversation that centers learning, integrity, and the preparation of future social workers to engage technology in ways that strengthen, rather than compromise, ethical practice.
The Reality We’re All Seeing
Students are using AI, not occasionally, but regularly. They are using it to interpret assignment prompts, organize their thoughts, manage overwhelming workloads, and sometimes to cope with the very real pressures of school, work, caregiving, and life. Increasingly, students are also using AI to deepen their learning: to test their understanding of complex theories, ask clarifying questions when concepts feel abstract or inaccessible, and work through ideas before bringing them into their own academic voice. Some students use AI to identify gaps in their reasoning, to refine arguments, or to strengthen the clarity and structure of their writing, not to replace their thinking, but to support it.
Faculty are not naïve to this reality. Professors across disciplines, including social work, are responding in real time, often without institutional guidance, shared language, or clear models for best practice. The question is no longer whether AI is present in higher education; it is already embedded in the learning environment. The question is how it is being used, why students turn to it, and what this means for learning, professional identity, and ethical social work practice.
Social work education has always been responsive to social change. We teach students to assess systems, interrogate power, and adapt practice to evolving contexts. Artificial intelligence is now one of those contexts. Treating it solely as a threat to be eliminated or a shortcut to be banned misses the pedagogical responsibility of this moment. It also overlooks the ways students are actively trying to engage these tools as learning supports, tools they will inevitably encounter in professional, clinical, and humanitarian settings.
Student Truths
We are on the cusp of a technological revolution that is democratizing access to knowledge and fundamentally changing educational institutions. Unfortunately, this excitement is overshadowed by some professors who are skeptical of this technology and its implications. Understandably, some professors perceive that students are offloading cognitive work to AI models, thereby inhibiting their critical thinking skills. It is noteworthy that there are professors, like Dr. Badillo-Diaz and others, who do see the value of AI for students and recognize that, when used ethically, it can be beneficial. Here lies the tension between professors who embrace AI and those who are concerned about its implications for the curriculum and for students' ability to think critically without AI's assistance. Although AI skeptics raise valid concerns, those concerns should not preclude the use of AI in the classroom. Restricting AI in the classroom because of potential negative consequences may leave future generations of students unprepared for careers in which these tools are ubiquitous. Thus, we should pursue synergistic solutions that still emphasize critical thinking while integrating AI.
Moving from concerns to the benefits of ethical AI use: as a student, I use several programs daily that enhance my learning experience. Applications such as OpenAI's ChatGPT, Google's NotebookLM, and Gamma have assisted me in numerous ways, including serving as a reading companion for works of Western philosophy, enhancing PowerPoint presentations, acting as a writing assistant, generating study guides, and creating interactive podcasts that make dense material easier to comprehend. Moreover, I have continued to expand my AI skills and have begun building applications that prepare me for real-world interactions in the social work field. One example is an application I developed that simulates conversations with digital interactive avatars portraying someone with a substance use disorder. Even though I have seen how this work has personally impacted my life, I am hesitant to share it with certain professors because AI boundaries are inconsistent from class to class.
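To make that example concrete, here is a minimal sketch, in Python, of how an LLM-backed role-play simulation of this kind might be structured. It assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable; the model name and persona prompt are illustrative placeholders, and this is not the application Nelson built, which uses visual avatars rather than plain text.

```python
# Illustrative sketch of a text-based client role-play simulation for
# social work training. Assumes the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY environment variable; the model name and persona
# are placeholders, not details of the application described above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system prompt keeps the model in character as a simulated client so
# the student can rehearse engagement and assessment skills safely.
PERSONA = (
    "You are role-playing a client in a social work training simulation. "
    "Stay in character as an adult who is ambivalent about seeking help "
    "for a substance use disorder. Respond realistically, and remember "
    "this is a practice scenario for a student, not a real client."
)

def run_simulation() -> None:
    """Run a turn-by-turn text conversation with the simulated client."""
    history = [{"role": "system", "content": PERSONA}]
    print("Simulated client session. Type 'quit' to end.\n")
    while True:
        student_turn = input("Student: ").strip()
        if student_turn.lower() == "quit":
            break
        history.append({"role": "user", "content": student_turn})
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat-capable model works
            messages=history,
        )
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        print(f"Client: {reply}\n")

if __name__ == "__main__":
    run_simulation()
```

The design choice that matters here is the persona prompt: by fixing the role and the boundaries of the scenario up front, the simulation remains a rehearsal space for practice skills rather than drifting into advice-giving.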
This inconsistency in AI policy across classes is extremely frustrating. From a student's perspective, I understand professors' hesitancy about artificial intelligence, and rightfully so: some students will use these tools unethically. However, we should not impede technological advancement in the classroom because of isolated cases of misuse that would occur regardless of AI. We must not forget that throughout history, technological revolutions have repeatedly forced individuals to adapt to new standards of learning. Consider the advent of the Internet: although professors initially had reservations about the reliability and credibility of online sources, universities and the professors who teach within them remain an invaluable part of the academic journey many years later. Moreover, this moment feels particularly pivotal; no one truly understands the ramifications of this technological revolution, so it is important that professors take the lead. They should embrace these tools and demonstrate to their students how to use them effectively and ethically, ensuring students are ready for the future.
Professor Truths
From the faculty perspective, artificial intelligence is not an abstract pedagogical debate. It shows up in grading, in classroom discussions with students, and in the ways professors themselves are beginning to use AI to support aspects of their work. AI-influenced student submissions can have a recognizable feel, though not always one of polish. Faculty encounter work that includes fabricated or unverifiable citations, references that do not correspond to real sources, inconsistent formatting, or stylistic markers such as unexplained bolding, fragmented phrasing, or incomplete sentences. The central concern is the lack of evidence that students have reviewed, evaluated, and taken responsibility for the accuracy and integrity of the work presented as their own.
This creates a real tension for educators. On one hand, faculty are charged with protecting learning outcomes, professional competencies, and the integrity of degrees that carry ethical responsibility beyond the classroom. On the other hand, over-policing AI use risks creating fear-based environments where students prioritize compliance over curiosity, secrecy over transparency, and performance over learning. Many faculty feel caught between these poles, navigating rapidly changing technology without consistent institutional guidance, shared language, or pedagogical models that feel aligned with professional values.
Social work educators, in particular, are deeply invested in preparing students to think critically, practice ethically, and develop a professional identity grounded in judgment, accountability, and relational awareness. The concern is not whether students use AI, but whether AI is doing the thinking for them, and whether students are learning how to recognize when that line has been crossed.
Faculty also experience their own form of inconsistency. Across institutions, programs, and even within departments, expectations around AI use vary widely. Some faculty feel pressure to ban tools they do not fully understand. Others worry that permissive approaches may unintentionally dilute skill development in areas that cannot be automated: ethical reasoning, clinical judgment, reflexivity, and use of self. This lack of shared norms leaves many educators feeling responsible for drawing boundaries alone, often in high-stakes contexts where professional formation is on the line.
The Middle: What Both Sides Actually Want
What is often missing from the public discourse is the reality that students and professors are not adversaries in this conversation. In practice, they want remarkably similar things from the learning environment. Both seek learning that is meaningful rather than performative, work that reflects genuine engagement instead of compliance. Both want clarity rather than ambiguity about what is acceptable, ethical, and expected when it comes to AI use. And both benefit from classrooms grounded in trust rather than suspicion, where transparency and dialogue replace fear and enforcement. At the core of this shared interest is a common goal: using tools in ways that support thinking, deepen understanding, and strengthen learning, rather than tools that perform the intellectual labor on a student’s behalf.
Between rigid prohibition and unchecked use lies a critical middle space. This is the space where ethical education lives. Teaching within this space requires intentional design rather than reactive policy. It means naming AI explicitly in syllabi and assignments so expectations are not left to interpretation. It means clearly articulating what constitutes acceptable academic support versus inappropriate substitution of thinking. It also requires designing assessments that prioritize process, reflection, application, and ethical reasoning, elements of learning that cannot be automated and that align closely with the values and competencies central to social work practice. Most importantly, teaching in this middle space requires a shift away from a surveillance mindset toward a teaching mindset, one that assumes students are capable of ethical decision-making when given clear guidance, shared norms, and opportunities to reflect on their choices.
The Conversation That Matters
The conversation that matters is not whether artificial intelligence is “good” or “bad.” That binary framing is both unhelpful and outdated. What matters far more are the questions we ask as educators, programs, and institutions responsible for professional formation. The presence of AI in higher education is no longer speculative; it is embedded in how students learn, write, and engage with academic material. Our task, therefore, is to clarify how AI can be used in ways that support learning while protecting academic and professional integrity.
One essential question is when AI use is appropriate. AI may support learning through brainstorming, outlining ideas, clarifying complex theories, or editing for clarity. However, clear boundaries are necessary to ensure that these uses do not cross into academic substitution, where the tool performs the core intellectual labor meant to be done by the student. Defining these boundaries explicitly helps preserve both learning outcomes and professional standards.
Equally important is how AI use should be disclosed. Requiring students to include a brief statement explaining how AI supported their process, or a short reflection on how the tool was used, promotes transparency and ethical awareness. A disclosure might be as simple as: "I used ChatGPT to brainstorm an outline and check grammar; the analysis and final writing are my own." Disclosure shifts the conversation from secrecy to responsibility, reinforcing trust between students and faculty while modeling professional accountability.
Another critical consideration is identifying which skills must never be outsourced. Critical analysis, ethical reasoning, cultural humility, clinical judgment, empathy, and reflexivity are foundational to social work practice. These competencies are central to professional identity and cannot be delegated to a model without undermining the very purpose of social work education.
Finally, educators must consider what it would look like to co-create norms around AI use with students. Rather than relying solely on imposed rules, inviting students into the development of AI use agreements fosters shared ownership, accountability, and ethical decision-making. When students understand the rationale behind expectations, they are more likely to engage responsibly.
These are not disciplinary questions. They are pedagogical ones. They reflect a commitment to teaching students how to think, reason, and practice ethically, not simply how to produce assignments.
A Joint Statement for Future Social Workers
Artificial intelligence is not replacing social work education; it is reshaping the context in which education occurs. Future social workers will practice in systems increasingly influenced by algorithms, automation, and data-driven decision-making. If students are not prepared to engage these tools critically and ethically, they enter the profession underprepared and vulnerable to misuse, bias, and unintended harm.
AI literacy in social work education must therefore extend beyond technical competence. It must include ethical discernment, an understanding of algorithmic bias, awareness of power and inequity, and a sustained commitment to human-centered practice. Students need to learn not only how to use AI, but when not to use it and why those decisions matter.
At its core, social work remains relational. No model can replace the nuance of human connection, ethical judgment, and cultural responsiveness required in practice. The responsibility of social work educators is to ensure that technology enhances, rather than erodes, these foundational values.
This moment presents a clear choice. Educators can respond with fear, control, and silence, or they can respond with clarity, intention, and thoughtful pedagogy. The future of social work education depends on choosing the latter. That is the conversation that matters.
The content in this blog was created with the assistance of Artificial Intelligence (AI) and reviewed and edited by Dr. Marina Badillo-Diaz, LCSW, and Nelson Santos to ensure accuracy, relevance, and integrity. Dr. Badillo-Diaz's expertise and insightful oversight have been incorporated to ensure the content in this blog meets the standards of professional social work practice.