From Research to Real-World Innovation: How One PhD Scholar Is Powering a New Generation of Behavioral Health AI Tools

Co-Authors: Dr. Marina Badillo-Diaz, DSW, LCSW (The AI Social Worker) & Milo LeGendre, LMHC (Chartara Labs)

Behavioral health is undergoing a transformation driven not by Silicon Valley futurists, but by the clinicians and researchers who understand the realities of practice. Every day, mental health professionals generate enormous amounts of clinical insight, documentation, outcome data, and organizational knowledge. Yet far too often, this information sits unused, trapped inside electronic health records (EHRs), scattered across platforms, or buried under administrative burden. The gap between what the research recommends and what clinicians can realistically implement has never been wider.

At the same time, AI is accelerating at a pace few in behavioral health were prepared for. Tools are emerging faster than practitioners can evaluate them, and many are built without meaningful input from clinicians, supervisors, administrators, or researchers. The result is technology that rarely reflects the nuance, ethics, and lived realities of behavioral health work. What the field urgently needs now is practitioner-built innovation, shaped by the people who provide care, supervise teams, and design behavioral health systems.

This month’s blog is a collaboration between Dr. Marina Badillo-Diaz, LCSW, the AI Social Worker, and Milo LeGendre, LMHC, a New York City Licensed Mental Health Counselor and PhD candidate in Psychology specializing in Social Policy and Behavioral Health Administration at National University. LeGendre’s work represents a unique convergence of applied scholarship, front-line clinical experience, and technical innovation. Through his company, Chartara Labs, he demonstrates what it looks like when rigorous behavioral health research does not remain siloed in academia but becomes the engine for real-world product development and behavioral health leadership. 

In this blog, we explore how LeGendre’s research, combined with modern AI capabilities, is reshaping what is possible in behavioral health. We examine how data visualization reduces clinician cognitive load, how machine learning can be made understandable and ethically usable for providers, and how AI-enabled tools can reduce burnout and strengthen care. His work offers a blueprint for the future of behavioral health technology, one where practitioners lead the way, and AI supports, rather than replaces, the clinical relationship.

The Story of Chartara Labs

LeGendre’s doctoral research focuses on research synthesis, integrating findings across diverse behavioral health domains to inform both policy and practice. His scholarship centers on three interconnected areas:

  1. Routine Outcome Monitoring (ROM) and how clinicians and organizations can use measurement-based care to support clinical decision-making.

  2. Organizational cultures of evidence-based practice, including how supervisors, administrators, and behavioral health agencies adopt, or struggle to adopt, data-informed workflows.

  3. Behavioral health systems and policy, including how organizations design and deliver services, develop and manage their workforce, navigate policy and financing structures, and evaluate programs to support quality and sustainability.

Across these areas, a core theme emerged: behavioral health professionals routinely generate vast amounts of clinical and organizational data, yet lack the tools to fully leverage that information for decision-making, treatment planning, and system-level insight. As LeGendre immersed himself in the peer-reviewed literature during his studies, a clear pattern became visible: the gap between research and practice is often a tooling gap, not a knowledge gap. Clinicians and organizations know the value of outcomes monitoring and evidence-based practice, but the absence of intuitive, automated, data-driven infrastructure makes implementation burdensome and inconsistent.

It was through this realization that LeGendre founded Chartara Labs, a behavioral health technology company that leverages Artificial Intelligence (AI) to produce actionable insights and workflow automations for the industry. By merging AI, research, data science, and his own clinical expertise, LeGendre builds systems that deliver real-time visual analytics, decision support, and documentation automation, bridging the long-standing divide between what behavioral health research recommends and what practitioners can feasibly implement.

Chartara Labs embodies the philosophy of research to practice and shows how empirical evidence, when paired with modern AI capabilities, can reshape behavioral health delivery, strengthen outcomes, and reduce administrative burden. LeGendre stands as a new kind of behavioral health researcher: one who not only synthesizes the literature but also engineers solutions that allow the field to live up to it. Furthermore, LeGendre is pioneering the integration of AI and visual analytics in the behavioral health industry.

With this foundation in place, the next question becomes how research-informed tools can meaningfully support clinicians in their daily work. One of the clearest examples is the role of data visualization in reducing cognitive load.

Reducing Cognitive Load Through Data Visualization

Research on clinician workload shows that behavioral health documentation and outcome tracking create substantial cognitive strain, especially when information is fragmented across systems. One of the biggest challenges in clinicians' operational workflow is cognitive load: the mental effort required to process information, prioritize, and make decisions. Traditional EHRs often scatter data across tabs and fields, forcing clinicians to synthesize disparate information. Research shows that well-designed visualizations within EHRs can significantly reduce cognitive workload.

A cross-sectional study with 29 physicians found that visual prototypes highlighting clinically meaningful patterns (e.g., changes in condition) led to lower cognitive burden, as measured by the NASA Task Load Index (NASA-TLX), compared to traditional designs (Pollack & Pratt, 2020). This finding underscores a core principle behind Chartara's data visualization and machine learning dashboards: surface the most relevant data in a way that humans can intuitively understand.

A similar pattern emerges across diverse healthcare domains, from frontline patient-facing EHR workflows to population-level public health analytics. Parikh et al. (2021), in their evaluation of the RED Alert re-emerging disease surveillance platform, found that visual analytics enabled public health analysts to make faster, more accurate judgments about complex, multi-dimensional data sets. The RED Alert tool integrated machine learning predictions with interactive data visualizations to detect disease re-emergence trends across time, geography, and contributing factors. By offloading complex pattern recognition and reducing the need to manually compare indicators across years and countries, visual displays “reduced the load on working memory, offered cognitive support, and leveraged the power of human perception” for analytic reasoning. Analysts were able to identify re-emergence events and evaluate contributing epidemiological factors precisely because visual analytics compressed high-volume, heterogeneous data into perceptually efficient formats.

Taken together, Pollack and Pratt’s (2020) findings and Parikh et al.’s (2021) evaluation converge on the same conclusion: visual analytics consistently reduce cognitive workload by transforming multi-step mental calculations into pattern-recognition tasks, which human perception is optimized to perform. While the NASA-TLX study shows this benefit in a clinical, practitioner-facing context, RED Alert demonstrates the same effect in operational, population-level public health analytics. This cross-domain consistency strengthens the argument that data visualization is not merely aesthetically beneficial; it is a workload mitigation strategy supported by human factors evidence.

Chartara Labs applies this insight in both its clinical and operational products. For clinicians, visualized symptom trajectories and progress indicators reduce the mental effort required to interpret routine outcome monitoring data. For organizations, Chartara’s operational dashboards apply the same cognitive principles to staffing patterns, caseload trends, no-show risk, and documentation compliance. In both cases, visual analytics serve the same core function documented in the NASA-TLX and RED Alert research: they minimize cognitive load, reduce the need for manual synthesis, and support faster, more accurate decision-making across behavioral health workflows.

These findings show why visualization is a cornerstone of effective behavioral health technology. Yet visualization alone is not enough. Clinicians also need support in understanding how machine learning works and how to interpret AI-driven insights safely.

Machine Learning Literacy Matters for Therapists

Behavioral health providers increasingly encounter AI-enabled tools, yet most have never received structured training in machine learning (ML). Research shows that clinicians’ trust and willingness to use ML systems depend heavily on receiving clear explanations of how the model works and how to interpret outputs safely (Rosenbäcke et al., 2024; Nasarian et al., 2024).

For therapists and social workers, ML can be distilled into three essential ideas:

• What it does: ML finds patterns in large amounts of historical data and identifies trends humans may miss.
• How it helps: ML converts these patterns into visuals, summaries, and risk indicators that support clinical reasoning.
• What it does not do: It does not replace clinical judgment; it augments it.

Clinicians respond best to explanations that show which features contributed to a model's prediction and how much uncertainty surrounds that prediction. Structured interpretability across the preprocessing, modeling, and post-processing stages strengthens clinician understanding and reduces overreliance on "black-box" outputs.
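
To make this concrete, the sketch below shows what a feature-level explanation with explicit uncertainty framing could look like, using a toy logistic regression in Python. The feature names, data, and risk label are hypothetical; this illustrates the principle, not Chartara's actual model.

```python
# Minimal sketch: explaining one prediction from a simple risk model.
# Feature names, data, and the risk label are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["phq9_score", "missed_sessions", "weeks_in_treatment"]

# Toy historical data: rows are past clients, columns match `features`.
X = np.array([[18, 3, 4], [5, 0, 20], [22, 4, 2], [9, 1, 12],
              [15, 2, 6], [4, 0, 26], [20, 5, 3], [7, 1, 18]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = elevated risk (toy label)

model = LogisticRegression().fit(X, y)

# Explain a single new case: which features push the prediction, and how far.
case = np.array([[16, 2, 5]])
prob = model.predict_proba(case)[0, 1]
contributions = model.coef_[0] * case[0]

print(f"Estimated risk: {prob:.0%} (a probability, not a certainty)")
for name, contrib in sorted(zip(features, contributions), key=lambda p: -abs(p[1])):
    direction = "raises" if contrib > 0 else "lowers"
    print(f"  {name}: {direction} risk (contribution {contrib:+.2f})")
```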

Research also shows that clinicians benefit from tools that support thinking rather than promoting blind trust. Cognitive forcing prompts, for example, reduce overreliance on AI and improve decision quality (Buçinca et al., 2021). A cognitive forcing prompt in behavioral health machine learning is an intervention designed to disrupt a user’s quick, intuitive decision-making processes and prompt them to engage in more analytical, deliberate thinking. 

In AI systems for behavioral health, cognitive forcing prompts can take several forms (a brief code sketch of one follows this list):

  • Checklists: Presenting a checklist of alternative possibilities or necessary steps that the user must explicitly consider before finalizing a decision.

  • "Time-outs": Introducing a deliberate delay (e.g., a "AI is processing" message) to encourage the user to form their own initial hypothesis before seeing the AI's suggestion.

  • Mandatory Formulation: Requiring the user to formulate their own response or decision first, before being shown the AI's recommendation.

  • Highlighting/Redirection: Using specific formatting or prompts (e.g., "Note that the AI may not always be accurate, so verify against the reference document") to draw attention to potential areas of error or conflict with source material (hallucinations).

  • Prompting for Justification: Asking the user to explicitly rule out an alternative diagnosis or justify their decision if it conflicts with the AI's suggestion. 
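
Putting one of these patterns into code makes the idea tangible. The sketch below wires a "mandatory formulation" prompt, a verification reminder, and a justification step into a simple review flow. The function names and wording are hypothetical; this is not Chartara's implementation.

```python
# Minimal sketch of a "mandatory formulation" cognitive forcing prompt:
# the clinician must record their own impression before the AI suggestion
# is revealed. Function names and messages are hypothetical.

def get_ai_suggestion() -> str:
    # Placeholder for a call to a real model or service.
    return "Screening pattern consistent with moderate depressive symptoms."

def cognitive_forcing_review() -> dict:
    # Step 1: force the clinician to commit to their own hypothesis first.
    clinician_view = input("Before seeing the AI suggestion, enter your own clinical impression: ").strip()
    while not clinician_view:
        clinician_view = input("An initial impression is required: ").strip()

    # Step 2: only then reveal the AI output, with an accuracy reminder.
    suggestion = get_ai_suggestion()
    print(f"\nAI suggestion: {suggestion}")
    print("Note: the AI may not always be accurate. Verify against the client record.")

    # Step 3: if the two conflict, require an explicit justification.
    agrees = input("Does this match your impression? (y/n): ").strip().lower() == "y"
    justification = ""
    if not agrees:
        justification = input("Briefly justify your final decision: ")

    return {"clinician_view": clinician_view, "ai_suggestion": suggestion,
            "agrees": agrees, "justification": justification}
```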

Clinicians are not resistant to ML. They are simply not trained in it. Current healthcare curricula rarely include ML concepts, and when they do, the content is often too technical or disconnected from daily workflows (Charow et al., 2021). Programs such as the AiM-PC curriculum emphasize practical, case-based learning, which aligns with what behavioral health providers prefer.

Nasarian et al. (2024) extend this insight by demonstrating that interpretability frameworks across preprocessing, modeling, and post-processing stages strengthen trust and reduce errors from misinterpreting model outputs. This means that at each stage (checking and cleaning the data, showing which patient features drive predictions, and presenting results with clear visuals and context), clinicians can see how the model works, spot potential errors, and use AI outputs safely to support their decisions. Conversely, overly complex, contradictory, or absent explanations can undermine trust. It is imperative that ML tools be paired with carefully designed interpretive guidance that supports clinicians' expertise rather than substitutes for it.
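
As an illustration of that three-stage framing, the sketch below attaches a plain-language report to each stage of a hypothetical pipeline. The thresholds, field names, and wording are assumptions made for this example, not part of any cited framework or product.

```python
# Minimal sketch: interpretability hooks at the preprocessing, modeling,
# and post-processing stages. Thresholds and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class StageReport:
    stage: str
    notes: list[str]

def preprocessing_report(records: list[dict]) -> StageReport:
    # Surface data-quality issues so clinicians can spot unreliable inputs.
    missing = sum(1 for r in records if r.get("phq9_score") is None)
    return StageReport("preprocessing",
                       [f"{missing} of {len(records)} records missing PHQ-9"])

def modeling_report(top_features: dict[str, float]) -> StageReport:
    # Show which features drove the prediction, in plain language.
    return StageReport("modeling",
                       [f"{name} contributed {w:+.2f}" for name, w in top_features.items()])

def postprocessing_report(prob: float) -> StageReport:
    # Present the result with context rather than a bare number.
    band = "elevated" if prob >= 0.5 else "low"
    return StageReport("post-processing",
                       [f"Estimated risk {prob:.0%} ({band}); review alongside clinical judgment"])

for report in (preprocessing_report([{"phq9_score": None}, {"phq9_score": 12}]),
               modeling_report({"phq9_score": 0.8, "missed_sessions": 0.3}),
               postprocessing_report(0.62)):
    print(report.stage, "->", report.notes)
```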

Understanding ML literacy brings us to a key question: how do these findings show up in Chartara’s design?

Research-Informed Practices in Chartara Labs' AI Platform Design

The architecture of Chartara Labs is informed by clinical best practices for decision support. Studies consistently show that clinicians prefer tools that combine clear visualizations with plain-language explanations, since these features make machine learning outputs more interpretable and actionable in daily practice. Chartara’s dashboards follow this principle by translating complex data into intuitive visuals that mirror how clinicians naturally process information. 

Human-in-the-loop design is another area where Chartara aligns closely with the research. Evidence shows that AI should support, not replace, clinical judgment. Chartara’s tools are built to enhance reasoning rather than automate it. The system surfaces patterns, highlights risks, and visualizes trends while leaving interpretation and decision-making in the hands of providers and supervisors.

Chartara also integrates micro-education directly into the platform. Short, embedded explainers help clinicians understand how specific insights were generated and how to interpret them safely. This approach responds directly to the well-documented machine learning literacy gap among behavioral health providers. Clinicians do not need to become data scientists to use the tool. Instead, the system supports them with ongoing, context-based learning.

Chartara Labs Combines Generative AI Documentation with Visual Analytics & ML

Chartara’s combination of an AI scribe with ML-powered visual analytics aligns with a growing body of literature on clinician burnout, cognitive load, and workflow strain in behavioral health systems. Research on the public mental health workforce demonstrates that high documentation demands, volume-driven reimbursement models, and administrative case management responsibilities contribute to emotional exhaustion, reduced use of evidence-based interventions, and high clinician turnover (Baptiste & Talley, 2022). These pressures divert clinicians’ time and attention away from therapeutic work and undermine the consistent delivery of evidence-based care. By automating routine clinical documentation and transforming session data into interpretable visual summaries, Chartara’s platform is designed to reduce administrative burden while preserving clinician judgment. In doing so, it supports more sustainable clinical workflows, allowing providers to allocate cognitive resources toward treatment planning, therapeutic alliance, and evidence-based practice rather than paperwork alone.

Clinicians can view symptom trajectories, such as PHQ-9 trends, through clear visual analytics, while the system surfaces risk indicators and treatment patterns without requiring manual synthesis across multiple electronic health record tabs. In the same interface, clinicians can use voice dictation to speak into their device's microphone and retain their own clinical voice, while Chartara Labs' generative AI capabilities produce accurate, high-quality documentation. The AI progress note writer adapts to any note framework needed, within seconds. The feature is customizable, so clinicians can enter their unique format for session progress notes, intakes, termination summaries, memos, and more. Rather than bulk recording client therapy sessions, individual therapist voice dictation provides an authentic way for therapists to recall clinical interactions efficiently and retain their own knowledge, voice, and expertise.
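
For readers who want a feel for what a symptom-trajectory view involves, here is a minimal Python sketch that plots toy PHQ-9 scores against the standard severity bands. The data are invented and the chart is far simpler than a production dashboard; it only illustrates why a shaded trend line is easier to read than a table of scores.

```python
# Minimal sketch: plotting a PHQ-9 trajectory over shaded severity bands
# so the trend reads at a glance. All data are hypothetical.
import matplotlib.pyplot as plt

sessions = [1, 2, 3, 4, 5, 6, 7, 8]
phq9 = [18, 17, 15, 14, 12, 12, 9, 8]  # toy routine outcome monitoring scores

fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(sessions, phq9, marker="o", color="black")

# Shade standard PHQ-9 severity bands so no mental arithmetic is needed.
ax.axhspan(15, 27, alpha=0.15, color="red", label="moderately severe to severe (15-27)")
ax.axhspan(10, 15, alpha=0.15, color="orange", label="moderate (10-14)")
ax.axhspan(0, 10, alpha=0.15, color="green", label="minimal to mild (0-9)")

ax.set_xlabel("Session")
ax.set_ylabel("PHQ-9 score")
ax.set_title("Symptom trajectory (hypothetical client)")
ax.legend(loc="upper right", fontsize=8)
plt.tight_layout()
plt.show()
```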

By providing generative AI documentation and a machine learning system in one web app that works seamlessly with other electronic health records, Chartara Labs represents a major leap forward for the behavioral health industry. Importantly, LeGendre developed the platform with rigorous research on HIPAA compliance and FDA regulations in mind; there is a legal and regulatory basis for every feature Chartara Labs provides. This makes it a competitive and compelling option for healthcare providers nationwide.

Chartara Labs Supports Business Intelligence and Billing Consistency for Providers

Showcasing versatility, Chartara Labs also improves operational and financial outcomes of behavioral health organizations. The platform applies machine learning–enabled data visualization to administrative workflows that directly affect billing, compliance, and supervisory oversight. For example, Chartara’s Treatment Plan Review (TPR) tracking tools allow clinicians and managers to monitor documentation deadlines across large caseloads in real time, replacing time-intensive manual tracking processes. Instead of repeatedly entering and reconciling data across spreadsheets and electronic health record tabs, updated information is visualized automatically, generating clear indicators of compliance status and upcoming deadlines. These visual summaries can be exported instantly as time-stamped PDF reports, supporting supervision, audits, and payer requirements while reducing the risk of missed or delayed billing. By embedding lightweight analytics into everyday operational tasks, Chartara enables organizations to maintain documentation accuracy, improve billing consistency, and scale oversight efforts without increasing administrative burden. 
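
A minimal sketch of the underlying logic may help: given each client's last review date and an assumed review interval, deadlines can be computed and flagged automatically rather than reconciled by hand. The 90-day interval, 14-day warning window, and field names below are assumptions for illustration, not Chartara's actual configuration.

```python
# Minimal sketch: flagging treatment plan review (TPR) deadlines across a
# caseload. The interval, warning window, and field names are hypothetical.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # assumed payer/agency requirement
WARNING_WINDOW = timedelta(days=14)   # flag reviews coming due soon

caseload = [
    {"client": "A", "last_tpr": date(2025, 1, 5)},
    {"client": "B", "last_tpr": date(2025, 1, 20)},
    {"client": "C", "last_tpr": date(2025, 3, 28)},
]

def tpr_status(last_tpr: date, today: date) -> tuple[date, str]:
    due = last_tpr + REVIEW_INTERVAL
    if today > due:
        return due, "OVERDUE"
    if today >= due - WARNING_WINDOW:
        return due, "DUE SOON"
    return due, "on track"

today = date(2025, 4, 15)
for row in sorted(caseload, key=lambda r: r["last_tpr"]):
    due, status = tpr_status(row["last_tpr"], today)
    print(f"Client {row['client']}: next TPR due {due} -> {status}")
```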

The importance of data visualization in helping therapists complete timely TPRs, and thereby improve continuity of care and billing consistency, cannot be overstated. This feature reflects the core thesis that led LeGendre to found Chartara Labs: data visualization is the foundational benefit the company offers clinicians and behavioral health organizations. Paired with generative AI documentation capabilities, visual analytics substantially enhance an organization's ability to improve its outcomes.

Targeted Supervision, Training, and Embedded PDF Reporting Automation

Automated monitoring of caseload trends and diagnosis prevalence is another example of how Chartara translates research into practice. Clinicians often have an impression of their caseloads, but data visualization reveals trends that are not always obvious. In LeGendre's own work, visualization made it clear that major depressive disorder dominated his caseload. This insight allowed him to adjust treatment strategies, deepen specific competencies, and better prepare for supervision conversations. From a supervisory standpoint, monitoring clinicians' caseload analytics through data visualization and machine learning can support targeted training initiatives within organizations, such as the Certified Community Behavioral Health Clinics (CCBHCs) where LeGendre works as a full-time psychotherapist.
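
The mechanics behind this kind of caseload insight are simple to sketch. The toy example below tallies diagnosis prevalence across a hypothetical caseload; in a dashboard, the same counts would feed a chart rather than printed lines.

```python
# Minimal sketch: surfacing diagnosis prevalence across a caseload so
# dominant patterns are visible at a glance. Data are hypothetical.
from collections import Counter

caseload_diagnoses = [
    "Major depressive disorder", "Generalized anxiety disorder",
    "Major depressive disorder", "PTSD", "Major depressive disorder",
    "Adjustment disorder", "Major depressive disorder",
    "Generalized anxiety disorder",
]

counts = Counter(caseload_diagnoses)
total = sum(counts.values())
for diagnosis, n in counts.most_common():
    print(f"{diagnosis}: {n} clients ({n / total:.0%})")
```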

These analytics play an important role in supervision, targeted professional development, and workforce training. Supervisors can identify where clinicians may need support, which diagnostic areas require additional training, and where caseload composition may be contributing to burnout. This is valuable across a range of settings, including social work field education, CCBHC programs, private practice groups, and college counseling centers.

Targeted training based on real caseload data aligns with ethical, data-informed supervision. It supports transparency, improves the quality of care, and ensures that professional development is responsive to actual practice patterns rather than assumptions. By grounding supervision in accurate, real-time analytics, Chartara Labs helps supervisors and clinicians work together in a more focused, equitable, and sustainable way.

Why Practitioner-Built AI Tools Matter

Many AI tools in behavioral health are created without the insight of people who have provided direct care, supervised clinical teams, or managed the pressures of documentation and productivity. As a result, technology may be technically advanced but disconnected from the realities, ethics, and constraints of behavioral health practice. Practitioner-built tools matter because they begin with lived experience and are grounded in what providers and organizations actually need to deliver effective care.

Tools shaped by real clinical workflows tend to be more ethical and more practical. Practitioners understand where cognitive load is highest, where documentation creates bottlenecks, and where clients are most at risk for inequitable care. This grounding allows them to design tools that reflect the nuances of behavioral health rather than an idealized version of it. 

This differs from many technology solutions developed in traditional tech settings. When behavioral health challenges are viewed only through the lenses of efficiency or prediction, essential human elements can be missed. Rapport building, safety planning, cultural context, and supervisory support are central to behavioral health but are often overlooked when clinicians are not part of the design process. In practice, this can lead to tools that unintentionally increase workload, pressure clinicians with unrealistic metrics, or replicate existing inequities.

Practitioner-built AI offers a more aligned approach. These tools reflect the realities of complex caseloads, inconsistent attendance, and the emotional and administrative demands of the work. They also reflect the ethical commitments of behavioral health professions, including dignity, relational practice, and client-centered care. With this foundation, practitioner-designed tools are more likely to reduce administrative burden, support trauma-informed care, and promote equity.

Most importantly, practitioner-driven innovation helps reduce burnout by ensuring that technology lightens the workload rather than adds to it. When clinicians, supervisors, and administrators help lead the design process, AI can simplify documentation, support outcome monitoring, surface trends for supervision, and enhance clinical reasoning. These improvements strengthen the workforce and improve the quality of care.

When developing Chartara Labs, Milo LeGendre, LMHC, envisioned one unified, therapist-friendly interface where he could improve the quality of his work, reduce therapist burnout, enhance client outcomes, and engage in Routine Outcome Monitoring more effectively. He then sought to bring this platform to life with the understanding that it could improve workflow efficiency at three levels of the behavioral health industry. It was his on-the-job clinical and administrative experience, combined with rigorous scholarly research synthesis, that allowed him to develop a platform that tangibly helps clinicians, supervisors, and administrators. Perhaps Chartara Labs' greatest strength lies in its flexibility: users can harness and adapt it for the range of workflows the industry requires. In this way, Chartara Labs is designed not only as a tool, but as a platform that empowers clinical and organizational leaders to actively shape, govern, and adapt technology in service of their own teams, systems, and values.

Conclusion

The future of behavioral health AI relies on collaboration among researchers, clinicians, supervisors, administrators, and technologists. Each group holds essential knowledge for building tools that support ethical practice and meaningful outcomes. When these perspectives work together, AI becomes more than a technical solution. It becomes a mechanism for strengthening care and improving the experiences of both clients and providers. Chartara Labs demonstrates what is possible when clinical expertise and research are translated into practical innovation. By grounding AI development in real behavioral health workflows, Chartara helps close the gap between the recommendations in the literature and what practitioners can realistically implement.

As the field evolves, there is a growing need for more practitioner-researchers to engage in AI co-design. Behavioral health professionals understand where systems fall short and where clients and clinicians need support. When their insight guides the development process, AI becomes more ethical, more usable, and more aligned with the values of behavioral health care. The next generation of behavioral health AI will be shaped by those who understand the work from the inside. This is the moment for practitioners to lead, collaborate, and innovate to ensure that technology supports people and strengthens care.

For more information on Chartara Labs and how to get started using the platform, please visit chartaralabs.io. You can also connect with Milo LeGendre, LMHC on LinkedIn and on Instagram @chartaralabs. 

The content in this blog was created with the assistance of Artificial Intelligence (AI) and reviewed and edited by Dr. Marina Badillo-Diaz, LCSW and Milo LeGendre, LMHC to ensure accuracy, relevance, and integrity. Dr. Badillo-Diaz's and LeGendre’s expertise and insightful oversight have been incorporated to ensure the content in this blog meets the standards of professional social work practice.

References

Artificial Intelligence and Machine Learning for Primary Care Curriculum (AiM-PC). (2024). Society of Teachers of Family Medicine. https://stfm.org/teachingresources/curriculum/aim-pc/aiml_curriculum/

Baptiste, M., & Talley, R. (2022). The rising toll of public mental health work. Psychiatric Services (Washington, D.C.), 73(7), 721. https://doi.org/10.1176/appi.ps.22073007

Charow, R., et al. (2021). Artificial intelligence education programs for health care professionals: Scoping review. JMIR Medical Education, 7(4), e31043. https://mededu.jmir.org/2021/4/e31043

Parikh, N., Daughton, A., Rosenberger, W., Aberle, D., Chitanvis, M., Altherr, F., Velappan, N., Fairchild, G., & Deshpande, A. (2021). Improving detection of disease re-emergence using a web-based tool (RED Alert): Design and case analysis study. JMIR Public Health and Surveillance, 7(1), e24132. https://doi.org/10.2196/24132

Pollack, A. H., & Pratt, W. (2020). Association of health record visualizations with physicians' cognitive load when prioritizing hospitalized patients. JAMA Network Open, 3(1), e1919301. https://doi.org/10.1001/jamanetworkopen.2019.19301

Nasarian, E., Alizadehsani, R., Acharya, U. R., & Tsui, K.-L. (2024). Designing interpretable ML system to enhance trust in healthcare: A systematic review to proposed responsible clinician-AI-collaboration framework. Information Fusion, 102412. https://doi.org/10.1016/j.inffus.2024.102412

Rosenbäcke, R., et al. (2024). How explainable artificial intelligence can increase or decrease clinicians' trust in AI applications in health care: Systematic review. JMIR AI, 1(1), e53207. https://ai.jmir.org/2024/1/e53207

Zana Buçinca, Maja Barbara Malaya, and Krzysztof Z. Gajos. 2021. To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making.
