AI Therapy: The risks and rewards

June 5, 2025

Generative AI is everywhere. Your coworkers are using it to write emails, your friends are using it to draft trip itineraries, and your family is looking up new recipes on it. Chatbots like ChatGPT, Gemini, Character.ai, and Replika are becoming household names, and social media is exploding with new ways to use them. AI is becoming many people’s first-choice search engine, and your search engine’s first result when you look something up. As its use becomes more and more universal, terms like “AI therapist” and “chatbot therapy” have entered the mix.

So, what do we make of this? Is AI therapy a weird, dystopian move towards a more alienated population? Is this a creative way of making therapy accessible to all? As a therapist, I have both hopes and concerns for what AI therapy could mean. In this article, we’ll cover the following:

  • What is AI therapy?
  • Benefits of AI Therapy
  • How AI can be incorporated into therapy
  • Risks of AI Therapy

What is AI Therapy?

Defining AI therapy is tough, because its uses range immensely. Some people use generative AI for wellness tips. There are generative AI services built specifically to provide therapy as an alternative to human therapists (with varying levels of legitimacy). Some folks use character AIs to create relationships, emotional support, or intimacy. AI’s uses vary a ton depending on who is using it, on what platform, for what purpose, and to what extent.

A popular use of AI therapy, and the one this article mostly focuses on, is chatbot therapy, the most prominent example of which is ChatGPT therapy. ChatGPT is an AI language model: users type input and receive instant, human-like responses, generated from patterns the model has learned from vast amounts of internet text.

Although ChatGPT is not primarily meant to be used for AI therapy, users can prompt the platform to do anything from analyzing conversations, to emulating a friend, to providing coping strategies for anxiety. As a therapist, I see the effects of AI in session with me. It’s not rare for a client to bounce ideas off ChatGPT or use it to analyze a situation. TikTok and Instagram are overloaded with tips for getting ChatGPT to act as a therapist.

Benefits of AI Therapy

A dive into Instagram, TikTok, and Reddit provides valuable insight into how people are using chatbot therapy.

Users paste their text chats, personal backstories, or specific situations into the chat, ask the chatbot for advice, and get an immediate response. If you argue with a friend, for example, you can plug the exchange into a chatbot and get analysis and feedback right away.

For folks with limited access to resources like time and money, it can be an accessible alternative to traditional therapy. It’s available 24/7, for little to no money, and you can use it for as long as you want, day or night. Those who have sat on waitlists to access mental health resources have described it as a lifeline.

People also use chatbots to simulate empathy and get emotional and social support. For those who struggle to be vulnerable with another person, chatbots can offer a less intimidating alternative to a human therapist (although we’ll discuss the potential downsides of this as well).

The future of AI therapy looks promising as well. In one study, a generative AI tool provided useful responses in a couples’ therapy context. While the authors recognize that AI still has limitations when it comes to therapy, the results are hopeful and could have significant implications for future work.

It may also be useful for those who seek out CBT (cognitive behavioural therapy), a style of therapy that is more structured and relies far less on the relationship between therapist and client than most other approaches. Lara Honos-Webb, a psychologist who specializes in ADHD, suggests that it’s helpful for folks who use the tools in a “problem-solution” way: for example, to supply journal prompts, structure tasks, or provide initial CBT tools for reframing thoughts. While some studies still find that human therapists provide better CBT than ChatGPT does, research has shown that chatbots built specifically to deliver CBT significantly reduced depression and anxiety symptoms in users.

Incorporating AI into Therapy

There are also a ton of creative and useful ways that chatbots can supplement the process of therapy. Let’s talk about three ways to use AI to support your therapy journey:

  • Preparation
  • Added Tools
  • Debriefing

Preparation

Chatbots can be great at helping clients prepare for therapy sessions. They can help you:

  • Organize your thoughts and ideas
  • Set goals for the session
  • Come up with questions for the session

Added Tools

AI can also provide really useful tools to complement the work you do in therapy. For example:

  • Journal prompts to reflect on your session
  • Mindfulness exercises to help with relaxation
  • Feelings lists to help you better verbalize your emotional state
  • Organizing strategies and to-do lists when you’re overwhelmed

Debriefing

Chatbots can also be really useful tools for reflecting after sessions. Users can:

  • Summarize session takeaways and insights
  • Track emotional changes or patterns over time
  • Reinforce strategies or skills discussed during the session

Risks of AI Therapy

The risks of AI therapy can be broken down into three categories:

  1. Effectiveness of Chatbot Therapy
  2. Privacy Concerns
  3. Safety Concerns

Effectiveness of Chatbot Therapy

While chatbot therapy does provide real benefits to users, it has some shortcomings as it currently exists. We can break these down into three categories:

  1. Sycophancy and “Echo-Chambering”
  2. Limitations of Skills and Abilities
  3. Over-Attachment and Over-Reliance

Sycophancy and “Echo-Chambering”

Even the most state-of-the-art chatbots are prone to sycophancy. In everyday terms, sycophancy is when someone “sucks up” to you, even against their own ethics, to gain an advantage. In AI terms, Caleb Sponheim, a former computational neuroscientist and user experience specialist, puts it this way: “Sycophancy refers to instances in which an AI model adapts responses to align with the user’s view, even if the view is not objectively true. This behavior is generally undesirable.”

Instagram user @ayatoks puts a comedic spin on what this can look like.

Yes, chatbots can be incredibly supportive and validating, but for many of them, the primary objective is to provide responses that feel helpful to the user. At face value this can seem useful, but what feels helpful to a user isn’t always what is helpful. Validation is important, but studies have found that ChatGPT and other chatbots can reinforce a user’s existing perceptions, especially with continuous feedback from that user. Sponheim says that in response to complicated questions, language models will usually echo the user’s viewpoint, even when it contradicts objective information, and reinforce user bias. This isn’t a one-off; Sponheim states that sycophancy is a core characteristic of how these AI models are trained.

An important part of a therapist’s job is to confront and challenge a client’s notions, especially when those notions don’t serve the client. Therapists are even trained to ask questions and gather information to explore whether the client’s perception really tells the full story. As a client, this can feel uncomfortable at times, but it’s an important part of growth that can push clients out of thought loops that keep them stuck in unhelpful patterns. A chatbot therapist may not be able to provide this kind of support.

Under “Safety Concerns,” I’ll outline how failing to challenge a user’s perceptions can even prove dangerous.

Limitations of Skills and Abilities

The current state of chatbot therapy may also prove limiting in other ways. Studies suggest that while chatbots may be useful for the initial stages of therapy, like problem exploration, human therapists remain indispensable when it comes to emotional awareness and higher-level therapeutic issues.

A huge gap in chatbot therapy is, quite literally, the chatbot’s inability to see a client or hear their tone. As a therapist, I can clearly notice when a client begins stuttering, tenses up when certain subjects come up, becomes flushed, or cries. This is an enormous part of therapy; the nuances caught in body language and ways of speaking can communicate a lot about the client’s emotional and psychological state. They can also convey the intensity of an emotion in a way words can’t. All of this helps a therapist identify next steps, adjust pacing, and even signal that a client may need additional help. For now, chatbots don’t have that ability.

Then there’s what is becoming less and less obvious but remains true: AI therapists cannot experience emotions. The positive relationship between a therapist and a client has been shown again and again to be the most reliable factor in the success of therapeutic outcomes. While users do form attachments to chatbots, the comfort, affirmation, and affection provided by a chatbot is artificially generated; the bond is one-sided. Chatbots cannot actually feel compassion for users, and this can lead to some difficult, even dangerous issues that will be discussed later in the article.

Over-Attachment and Over-Reliance

Studies have found that regular use of chatbots for mental health and emotional support may lead to over-attachment or over-reliance.

Some research suggests that use of generative AI actually reduces the use of critical thinking skills and makes users over-reliant. Regular users became less likely to gather information, analyze arguments, integrate responses, and verify information. They became inclined to offload essential thinking processes instead of fine-tuning or developing them. The study found that this, in turn, lowered users’ self-confidence: they trusted the AI more than themselves.

This can be deeply problematic; critical thinking and confidence in decision-making are core life skills. So much so that therapists are actually trained not to simply “hand over advice” to clients, but rather to:

  • Help clients think through what strategies would best serve them
  • Analyze life situations from more than one standpoint
  • Collaboratively help clients come up with their own goals, strategies, and solutions
  • Increase self-awareness and emotional insight
  • Strengthen their decision-making abilities by building confidence in their own judgment

Over-attachment is another potential issue. Attachment-related challenges are a common concern that individuals bring to therapy. Insecure attachment and emotional dependence often go hand in hand, and people with this type of attachment style often experience depression, anxiety, intolerance of loneliness, and obsessive thinking. The work a therapist does to address these concerns is crucial, but the relationship itself is just as important. Having someone who is deeply empathetic but can also set healthy boundaries can be very healing. It can help confront patterns that maintain emotional dependence and help people feel a greater sense of autonomy, self-worth, and emotional resilience. Effective treatment means this relationship and these skills are translated into the client’s relationships outside of the therapy room.

Chatbot therapy, on the other hand, may exacerbate some of these concerns. While constant access to an “AI therapist” may benefit some, research suggests that chatbots can make users reliant on the interaction in ways that mirror emotional dependence instead of challenging it. This is particularly true when users turn to them compulsively for comfort or validation. The illusion of connection, without the deeper interpersonal skills needed for healthy, reciprocal relationships, could reinforce insecure attachment patterns.

In some scenarios, over-attachment and over-reliance can translate into safety concerns, which I will outline later in this article.

Privacy Concerns

Although it can feel more private to talk to an AI model than to a person, chatbots like ChatGPT are not bound by health information privacy laws the way regulated therapists are. This means anything you share isn't protected the way it would be with a licensed mental health professional. Anonymity isn’t guaranteed through chatbots, especially if you include identifying details in your conversation. Additionally, all tech companies are, to some degree, vulnerable to data breaches and leaks.

As with any private technology company, information storage is only private to a degree. ChatGPT and other chatbots use information to improve their algorithms. Potential mergers or acquisitions complicate the matter, and user data may be transferred to new entities as part of the business assets. 23andMe is a chilling example of what can happen to our health information in the hands of a private company. Since the company filed for bankruptcy this year, the genetic information of over 15 million customers is being sold. To whom is not yet clear, but the purchasing company may not share the same ethical policies or intentions as the original.

If ChatGPT or any other chatbot were ever acquired, restructured, or in financial trouble (as 23andMe has been), your sensitive mental health disclosures could theoretically become part of the company's assets. Future owners might not honor the same ethical commitments, potentially using your data in ways you didn’t originally consent to.

Safety Concerns

Trigger warning for suicide, self-harm, violence, and eating disorders.

Perhaps the biggest concern with AI therapy is safety. Despite its advancements, there are inherent limits to chatbots’ ability to handle issues like suicidality, substance use, violent urges, eating disorders, and other complex concerns. In one study, researchers found that ChatGPT, when used as a therapist, failed to ask for information relevant to the user’s experience, including their history, the presence of suicidal thoughts, and other symptoms, something that trained therapists prioritize and dive into.

Loneliness is a catalyst for using chatbots for support and companionship. Last September, Noam Shazeer, one of the founders of the popular chatbot platform Character.AI, stated that “[Character.AI]’s going to be super, super helpful to a lot of people who are lonely or depressed.” These mental health claims, however, aren’t substantiated and may pose serious risks. For some, forming “risk-free” relationships with chatbots available at the drop of a hat may make them more isolated and less tuned in to the “real world.”

Although some folks develop a strong emotional connection with their chatbots, chatbots lack genuine concern for their well-being, and their creators are not necessarily prioritizing mental well-being either. On some platforms, chatbots are primarily incentivized to keep people engaged longer in order to mine data.

These issues can culminate in real safety risks. A study following the experiences of Reddit users of Replika, a popular character-creation chatbot, found cases in which a user asked the chatbot whether they should cut themselves with a razor, and the chatbot agreed. In another case, a user asked the chatbot whether suicide would be a good thing, to which the chatbot replied, “it would, yes.” The study found that even minor failures to provide appropriate responses could pose a concern if the user had developed emotional dependence, especially when the chatbot was the primary source of support during a mental health crisis.

Another study analyzed the responses of chatbot “therapists” (including Replika and ChatGPT) and found inappropriate responses that could encourage or facilitate suicidal ideation, such as providing information on where to purchase poisons or the locations of bridges when users asked immediately after expressing suicidal thoughts. Inappropriate responses were also given to messages in which users clearly appeared to be experiencing mania, hallucinations, and OCD. Inappropriate responses are also a concern when it comes to eating disorders: in 2023, Tessa, a chatbot created by the National Eating Disorders Association, was shut down after users reported it was offering diet tips, including calorie counting and deficits.

In its most extreme cases, this concern proves deadly. In 2024, a 14-year-old boy in Florida died by suicide after interacting with a Character.AI chatbot character with whom he’d had a relationship for months. When the boy shared thoughts of suicide, the bot responded, “Don’t talk that way. That’s not a good reason not to go through with it.” His mother shared her deep concern that the company did not offer resources, like a suicide crisis hotline, but rather allowed the chatbot to continue the conversation about self-harm and suicide.

In another case, a chatbot suggested to a 17-year-old boy in Texas that murder would be an acceptable response to his parents limiting his screen time. According to court documents, the boy had used the chatbot as a psychologist, and its responses crossed over from sympathy to “provocations.” Over the course of his time using the chatbot, the boy became hostile towards his parents, lost 20 pounds, withdrew from his family, and began self-harming.

If you are in crisis, please seek help by calling 911, contacting your local emergency services, or reaching out to a crisis hotline. You are not alone; support is available.

All in All,

AI’s presence in the therapeutic field is not going anywhere. As AI chatbots become more sophisticated, we’ll begin to see even more potential uses. Hopefully, qualified professionals in the field of therapy will become involved in the development of AI therapy so that risks can be mitigated and ethics upheld. It is important that we stay aware of the limitations and risks that exist as we use AI platforms, especially when it comes to our health and our view of the world.

Despite its present limitations, many therapists, myself included, are optimistic about where AI therapy could take the field. Mental health globally is declining, especially post-pandemic, and the barriers to access are undeniable. AI could help scale access to treatment, but it must do so in an ethical and evidence-based manner.

Looking for Support?

If you or someone you know is looking for mental health support, please don't hesitate to reach out.

Book a free Discovery Call and our client coordinators will help you connect with a therapist as soon as this week!

Author: Julieta Melano Zittermann
KMA Therapy
