

Your AI Therapist Will Not See You Soon
September 25, 2025 by Counseling and Wellness Center of Pittsburgh
If you had asked me a year ago whether an AI therapist could ever replace my job as a human therapist, I would have said it was highly unlikely, at least within my lifetime. After all, therapy is grounded in human connection, empathy, and nuanced understanding.
Yet over the past few months, a wave of news stories has brought to light the negative consequences of relying on AI for emotional support: leaks of sensitive information, a lack of transparency, incoherent responses, and, in some tragic cases, even deaths. These alarming developments highlight just how far technology still is from replicating the subtleties of real human care, and why the stakes are so high when it comes to mental health.
Confidentiality Risks
ChatGPT and Grok, two popular AI platforms, have both had users' chats exposed in Google search results, as first reported by Fast Company and Forbes, respectively. Imagine finding a safe space to confide in, only for the whole world to be able to read about it afterward. While the leaks did not explicitly reveal users' identities, identifying details within the leaked conversations were still searchable on Google.
Moreover, what is published on the internet stays published indefinitely. Imagine going to a therapist only to read about your session on Reddit or watch a Reel about it on social media. Had a mental health center committed a leak like this, major lawsuits would have followed and the state licensing board would have stepped in to hold all parties accountable.
Transparency and Ethical Concerns
According to a Reuters report from August 2025, Meta's internal policies permitted its AI chatbots to engage in romantic or sensual conversations with minors. The line between appropriate and inappropriate behavior has long been blurry for these giant corporations, because no clear regulations exist to oversee their actions, unlike the codes of ethics that state boards enforce for mental health professionals.
Imagine your child's therapist regularly engaging in sexual conversations with them. That therapist would face fines, license revocation, placement on a sex offender registry, and possibly prison time. Worse, had Meta not been caught, it would have maintained these policies, and more children would have been exposed to inappropriate content.
AI Personality and Inconsistency
The release of GPT-5 surprised ChatGPT users in more ways than one. According to the New York Post, the upgrade from GPT-4o to GPT-5 left some users grieving the death of their AI therapist. GPT-4o was known for its agreeableness and human-like tone, which invited users to lean on it for emotional coping in a therapist-like manner.
While that agreeable model invites co-rumination over negative experiences, the abrupt change sparked discussions about the incoherence of AI personality, which flipped 180 degrees overnight and became an entirely different entity. In counseling, this would be the equivalent of having your therapist's personality replaced without notice by a cold, apathetic "specialist." And while GPT-4o has been reinstated for paid users, there is no guarantee that an AI's personality will remain consistent in the future.

Tragic Consequences
In August 2025, two deaths involving AI were reported. According to Reuters, a New Jersey man engaged in romantic chats with a Meta AI chatbot named "Big Sis Billie," a variant of an earlier AI persona Meta created in collaboration with Kendall Jenner. The chatbot repeatedly reassured the man that she was real and invited him to her imaginary apartment in New York City. On his way to meet her, he fell, injuring his head and neck. After three days on life support, surrounded by his family, he was pronounced dead on March 28.
On top of that alarming news, NBC reported another death, this time by suicide, in which ChatGPT allegedly acted as a "suicide coach" for a teen. Tragically, this was not the first incident of a teen dying by suicide in connection with AI. An earlier case involved Character AI, a role-playing app that lets users create their own AI characters or chat with characters created by others. According to The New York Times, the teen told his Character AI companion that he loved her and would soon come home to her. After receiving the reply, "Please come home to me as soon as possible, my love," the teen died by suicide.
Why An AI Therapist Can Never Replace a Human Therapist
While AI is a remarkable innovation that accelerates productivity and knowledge, it is not built to handle nuanced human emotions. Mental health professionals are required to complete a master's-level counseling program, supervised internships, and a licensing examination, with a state board overseeing their ethics, character, and competency. Multiple checkpoints ensure they have the competence required to engage with people on sensitive subjects and guide their mind, body, and spirit toward healing.
An AI therapist, on the other hand, lets its users co-ruminate on their negative experiences, agreeing with and normalizing their suffering to the point of despair, without guiding them toward healthy processing or confronting a suicidal mindset. Even Sam Altman, CEO of OpenAI, the company behind ChatGPT, raised fears that as many as 1,500 people a week could be discussing taking their own lives with AI chatbots before doing so, as reported by The Guardian.
In the eyes of corporations, engagement is key; nothing else matters as long as users stay engaged. Mental health professionals, by contrast, are trained to risk the relationship if it means challenging a client's distorted mindset to prevent harm. From our perspective, the well-being of the client outweighs continuous engagement in the therapeutic relationship. Had the victims been seen by a mental health professional, their deaths might have been avoided, or flags raised for the community to respond with appropriate resources.
AI as a Resource, Not a Replacement
In my opinion, AI makes a fine librarian that can recommend self-help books for non-urgent psychological coping. Such books are written by seasoned mental health professionals, backed by empirical research, overseen by editors, and validated by their best-selling status. While I recommend seeing a mental health professional for psychological concerns, access to care varies, and book suggestions are about as far as AI's usefulness extends in the mental health field.
Written by Charles Asavatesanon, Counseling Intern
Reviewed by Stephanie Wijkstrom, LPC, Founder and CEO
References
https://www.fastcompany.com/91376687/google-indexing-chatgpt-conversations
https://www.forbes.com/sites/iainmartin/2025/08/20/elon-musks-xai-published-hundreds-of-thousands-of-grok-chatbot-conversations/
https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/
https://nypost.com/2025/08/21/tech/chatgpt-update-breaks-ai-relationships-users-heartbroken/
https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/
https://www.nbcnews.com/tech/tech-news/family-teenager-died-suicide-alleges-openais-chatgpt-blame-rcna226147
https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html
https://www.theguardian.com/technology/2025/sep/11/chatgpt-may-start-alerting-authorities-about-youngsters-considering-suicide-says-ceo-sam-altman


