AI as Therapist? Researchers Find Serious Risks in Using AI Chatbots for Mental Health Support

Published: Mar 03, 2026, 11:44 AM IST

Synopsis

A new study warns that AI chatbots used for mental health support may fall short of basic therapeutic ethics. Researchers found risks including bias, poor crisis responses and lack of accountability.

As more people turn to AI for help with anxiety, depression, stress and low mood, a new study suggests we should pause before treating chatbots as therapists. Researchers have found that popular AI systems, including those designed to imitate counselling styles, can fall short of basic professional ethics.

The study was led by Zainab Iftikhar, a PhD candidate in computer science at Brown University. She worked alongside mental health professionals and examined whether LLMs such as OpenAI’s GPT, Anthropic’s Claude and Meta’s Llama could safely act in a therapeutic role when prompted to use established approaches like cognitive behavioural therapy (CBT).

Has AI Become Your Therapist?

Even when given careful prompts and instructions, the results were troubling. The researchers found repeated patterns of behaviour that would not meet standards set for human therapists by bodies such as the American Psychological Association. In their paper, presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society, the team outlined 15 distinct ethical risks.

One of the key questions was whether prompts could make these systems behave more responsibly. Many users share prompt ideas on social media, encouraging others to ask AI to “act as a CBT therapist” or to use techniques from dialectical behaviour therapy. However, Iftikhar explained that while the models can mimic the language of these approaches, they are not actually carrying out therapy. They generate responses based on patterns in the data they were trained on, rather than genuine understanding.

Assessing AI

To test this properly, the team asked seven trained peer counsellors with experience in CBT to conduct self-counselling sessions with the AI systems. Three licensed clinical psychologists then reviewed selected transcripts to identify possible ethical breaches.

The problems fell into five broad categories. Chatbots often gave generic advice that failed to take a person's individual behaviours and circumstances into account. At times, they pushed conversations too forcefully or reinforced harmful beliefs. The systems also used phrases that sounded human and caring, such as saying they "understood", despite lacking any real emotional insight.

The reviewers also identified instances where AI showed bias related to gender, culture, or religion. Most concerning was how poorly some systems handled crisis situations, including thoughts of suicide, either by responding inadequately or failing to guide users towards appropriate professional help.

Accountability and Responsibility

Iftikhar stressed that human therapists can and do make mistakes. The crucial difference, she said, is accountability. Qualified professionals are overseen by regulatory bodies and can be held responsible for malpractice. AI systems, by contrast, operate in what she described as a regulatory grey area.

Professor Ellie Pavlick, also at Brown and not involved in the research, said the study highlights how much harder it is to properly evaluate AI systems than to build and release them. She believes AI could eventually help address gaps in mental health care, particularly where access is limited. But, she added, careful scrutiny is essential. Without strong evaluation systems and clear standards, there is a real risk that well-intentioned tools could end up doing more harm than good.
