The Rise of AI-Powered Mental Health Chatbots: Risks vs. Reality

The marriage of mental health care and artificial intelligence (AI) has produced one of the fastest-growing trends in digital health: AI-based mental health chatbots. 

Promising instant support, 24/7 access, and judgment-free conversation, these chatbots are increasingly being adopted by users looking for low-barrier mental health care. 

While the potential is vast, it comes with a set of subtle issues, particularly regarding efficacy, ethics, and data privacy.

What Are AI Mental Health Chatbots?

AI-powered mental health chatbots use natural language processing (NLP), sentiment analysis, and machine learning to simulate supportive conversation. 
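Under the hood, the basic interaction loop is simpler than it sounds: estimate the mood of an incoming message, then map it to a pre-written, therapy-inspired prompt. The sketch below is a deliberately minimal, hypothetical illustration of that pattern in Python; the keyword lists and canned replies are invented placeholders, and real products use trained NLP and sentiment models rather than a word-count lexicon.

```python
# Toy sketch of the sentiment-then-respond loop behind mental health chatbots.
# The word lists and responses are hypothetical; production systems rely on
# trained NLP models, not keyword matching.

NEGATIVE = {"sad", "anxious", "hopeless", "worried", "tired"}
POSITIVE = {"calm", "happy", "better", "relieved", "hopeful"}

RESPONSES = {
    "negative": "That sounds hard. Which thought is weighing on you most right now?",
    "positive": "I'm glad to hear that. What helped you feel this way today?",
    "neutral":  "Thanks for sharing. How would you describe your mood right now?",
}

def classify_sentiment(message: str) -> str:
    """Crude stand-in for a sentiment model: count mood words in the message."""
    words = set(message.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "negative" if score < 0 else "positive" if score > 0 else "neutral"

def chatbot_reply(message: str) -> str:
    """Pick a scripted, CBT-flavored prompt based on the detected sentiment."""
    return RESPONSES[classify_sentiment(message)]

print(chatbot_reply("I feel anxious and tired all the time"))
# -> "That sounds hard. Which thought is weighing on you most right now?"
```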

Woebot, one prominent example, has been downloaded by roughly 1.5 million people across 135 countries; its competitor Wysa had been downloaded at least 4.5 million times as of 2022 and offers mood tracking, CBT exercises, and immediate support. 

These apps are often marketed as costing a fraction of what therapy does: where a therapy session can run $100-$250 or more, most chatbots are free to use or offer paid subscriptions averaging $5-$15 per month. 

These bots are particularly enticing to people who cannot access traditional therapy, whether because of limited finances, stigma, or a shortage of mental health professionals in their area. 

Herein lies the dilemma: can these tools offer truly meaningful support, and at what cost?

The Clear Benefits: Accessibility and Engagement

According to the World Health Organization, 1 in 8 people worldwide (about 970 million) were living with a mental disorder in 2019, yet there is a massive shortage of mental health professionals. In low- and middle-income countries, over 75% of people with mental disorders receive no treatment. 

AI chatbots give users an always-available outlet to vent, reflect, and perhaps even receive some structured help. 

In a 2023 meta-analysis published in the Journal of Medical Internet Research, researchers aggregated 17 randomized controlled trials and found moderate reductions in anxiety and small reductions in depression symptoms for users of AI chatbots.

A survey from the American Psychological Association (APA) found that 38% of Gen Z respondents had used a mental health app or chatbot, and cited convenience and privacy as the main reasons for use.

These bots can help lower the obstacles to therapy for those new to it, hesitant to open up in more traditional settings, or unable to access it due to finances, location or stigma.

The Risks: Lack of Depth and Oversimplification

Even with all their potential and benefits, bot-driven mental health services cannot substitute for real-life professional therapists or psychiatrists. 

Most of these bots are, at bottom, programmed algorithms; they cannot perceive or respond to the depth of human emotion the way a trained professional can. 

This is particularly dangerous for individuals in acute emotional crisis, experiencing suicidal ideation, or dealing with trauma. 

The language bots use is often simplified and glosses over complex psychological issues. To an individual reporting existential dread or a history of abuse, for instance, boilerplate CBT suggestions can feel tone-deaf or invalidating. 

Bots also carry the risk of false reassurance – a user may believe the bot is genuinely helping when, in fact, it is not.

Worse, AI can inadvertently exacerbate delusions by uncritically validating a user’s statements with excessive positivity and agreement, creating an echo chamber of false reassurance that normalizes distorted thinking.

A peer-reviewed 2022 article in Nature Digital Medicine questioned the quality and reliability of the advice these bots provide, particularly in cases of emotional distress. 

Bots can serve as an adjunct, but nothing compares to the contextual, supportive relationship between a trained professional and a client.

Your Privacy is the Hidden Cost of Convenience

Perhaps one of the least discussed aspects of AI mental health chatbots is data privacy. Traditional therapy sessions are typically protected by law under HIPAA-style statutes; the same is generally not true of digital chatbots.

Think about it – you’re likely talking to AI mental health chatbots during your most vulnerable moments, sharing highly sensitive, intimate information that you’d naturally prefer to keep private. 

Private information shared with chatbots can include:

  • Emotional and traumatic revelations,
  • Substance abuse and relationship histories,
  • Medical and psychiatric diagnoses,
  • And yes, even geolocation, device ID, and activity patterns.

The main issue is that, unfortunately, this type of data can be:

  • Stored on third-party servers,
  • Used to train AI algorithms,
  • Shared with third-party marketers or researchers,
  • Re-identifiable, even if anonymized.

In 2022, a study by the Mozilla Foundation revealed that 29 out of 32 mental health apps failed basic data protection tests. BetterHelp, for example, was penalized by the FTC for forwarding health-related user data to advertisers.

All of this is happening against a backdrop of sharply rising AI incidents since the start of the decade – a trend we can only expect to worsen as more people adopt AI mental health chatbots, making data privacy all the more important.

Keeping Your Information Private When Using Chatbots

Given the deeply personal nature of the information shared, users should always keep their privacy in mind. Even with encrypted chatbot connections, data transmitted to and from these services passes through internet service providers and potentially other intermediaries.

Using privacy-focused tools like a VPN for Chrome browser or on your phone can help encrypt this initial transmission and mask your IP address, adding an extra layer of protection for sensitive data before it reaches the chatbot platform.

More troubling is the possibility of data being shared with third parties for research, marketing, or product development – often buried deep within terms of service. 

Some apps anonymize data, but re-identification is always a risk, especially if datasets are combined with external information such as geolocation, device IDs, or behavioral patterns. 
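To make the re-identification risk concrete, here is a small, entirely hypothetical Python sketch (the datasets, field names, and values are invented): a chat log stripped of names can still be linked back to a person when it is joined against another dataset that shares quasi-identifiers such as a device ID and coarse location.

```python
# Hypothetical illustration of re-identification: an "anonymized" chat export
# still carries quasi-identifiers (device ID, city) that can be joined against
# an external dataset, such as a data broker's records, to recover identities.

anonymized_chat_logs = [
    {"device_id": "a1b2", "city": "Austin", "topic": "panic attacks"},
    {"device_id": "c3d4", "city": "Denver", "topic": "insomnia"},
]

external_records = [  # e.g. ad-tech data mapping the same device IDs to names
    {"device_id": "a1b2", "city": "Austin", "name": "Jane Doe"},
]

def reidentify(logs, records):
    """Join the two datasets on their shared quasi-identifiers."""
    lookup = {(r["device_id"], r["city"]): r["name"] for r in records}
    return [
        {**log, "name": lookup[(log["device_id"], log["city"])]}
        for log in logs
        if (log["device_id"], log["city"]) in lookup
    ]

print(reidentify(anonymized_chat_logs, external_records))
# -> [{'device_id': 'a1b2', 'city': 'Austin', 'topic': 'panic attacks', 'name': 'Jane Doe'}]
```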

Make sure you read up on your chosen chatbot’s privacy policy before revealing your deepest, darkest secrets.

Regulation and Transparency: The Legal Grey Area

Among the biggest challenges is the regulatory limbo into which these chatbots fit. Most are not medical devices and do not go through review by agencies like the FDA. Framed as “wellness” tools, they avoid regulations that apply to clinical interventions.

The lack of transparency regarding how these AI systems are trained – what data they use, how they handle bias – is equally concerning. 

For example, a 2023 Gallup-Telescope survey found that 79% of Americans do not trust companies to use AI responsibly, reflecting widespread skepticism about data privacy and ethical AI use, especially in sensitive areas like mental health.

Balancing Hope with Caution

AI mental health chatbots are not “good” or “evil” in and of themselves – they are just technology. For some people, they provide a useful starting point for reflection or stress management. But users should stay aware of their limits and risks.

Choose platforms with:

  • Clear privacy policies,
  • End-to-end encryption (see the sketch below),
  • Ethical AI development practices.

And whenever possible, bolster your chatbot use with qualified human support – a therapist, counselor, or psychiatrist.
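As a rough illustration of the end-to-end encryption point in the list above, the sketch below shows the essential property: the message is encrypted on the user’s device, and anything stored or relayed in between is unreadable ciphertext to anyone without the key. It uses the third-party Python cryptography package’s Fernet (a symmetric scheme) purely for illustration; real end-to-end systems negotiate keys between the two endpoints, typically with asymmetric cryptography.

```python
# Minimal sketch of client-side encryption, assuming the `cryptography` package
# is installed (pip install cryptography). Fernet stands in for a real
# end-to-end key-exchange protocol.

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in true E2EE, only the two endpoints ever hold this
cipher = Fernet(key)

message = "I've been struggling with panic attacks at work."
ciphertext = cipher.encrypt(message.encode())

print(ciphertext)                            # all an intermediary server would see
print(cipher.decrypt(ciphertext).decode())   # what the intended endpoint recovers
```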

Looking Ahead: A Need for Ethical Innovation

The future of mental health care will likely be AI technology integrated with human professionals. But for that future to function and be ethical, there are several things that must happen:

  • Tighter regulation and standards for chatbot effectiveness, privacy, and transparency,
  • Independent audits of AI algorithms for bias, accuracy, and safety,
  • User education on the importance of data privacy, e.g., tools like VPNs and encrypted browsers,
  • Integration pathways to help users transition from bots to expert attention when needed.

Mental health is not a tech issue alone – it’s incredibly human. And, as AI improves, it needs to place human dignity, privacy, and wellness above all.

For more insightful articles related to technology, please visit Bloghart.
