Towards safe, robust, interpretable dialogue agents for democratized medical care

PI: Julia Hirschberg
Co-PIs: Sarah Ita Levitan, CUNY; Tatiana Emmanouil, CUNY

Abstract

With the growing capabilities of Large Language Models (LLMs) in enhancing communication, AI-powered dialogue agents are increasingly used in clinical psychology, promising to augment human therapists. However, deploying these technologies in medical settings carries significant risks, including the lack of safety regulation and robust evaluation methods, raising concerns about efficacy and reliability. Moreover, interpretable and effective communication is essential for practitioners to trust AI advice and to guard against its misuse. Rising rates of mental health issues highlight the urgent need for more accessible therapy. High costs and societal stigma often prevent those with limited resources from receiving traditional therapy. Our goal, aligned with ARNI’s themes of language, continual learning, and reasoning, is to develop reliable and interpretable LLMs that provide immediate mental health support by establishing comprehensive safety guidelines and creating effective medical AI/NLP systems tailored for psychological counseling. We will integrate principles from social science and human cognition into AI, enabling the creation of dialogue agents that are both technically proficient and psychologically intuitive. Our challenges are directly inspired by real-world applications in mental health care, where there is a pressing need for accessible, safe, and effective therapeutic tools. Ultimately, this work strives to bridge the gap between advanced technology and the nuanced needs of mental health care, democratizing patient care through empathetic, technology-driven tools.

Publications

In progress

Resources

In progress