AI as your therapist: the future?


This article is written by a student writer from the Her Campus at Toronto MU chapter and does not reflect the views of Her Campus.

Imagine: it’s 1 a.m. and your worries are consuming you. You text your friends about how you’re feeling, but no one responds. Terrible friends? No, but you need something fast.

Right beside the messages app, you see something intriguing staring back at you: ChatGPT. With its help, the answer you’ve been waiting for instantly takes shape.

It’s too bad friends just aren’t as quick to respond as ever-developing, always-available, pocket-sized large language models (LLMs), right?

This is how artificial intelligence (AI) is now being used to treat Gen Z. With mental health services and living expenses growing more costly, many young people are turning to AI as a reliable emotional outlet.

In the late hours of the night, AI can appear more sensitive than a real person might be. And with social media wiring our minds to demand instant gratification, AI delivers exactly that.

According to Common Sense Media, an organization that analyzes the use of digital media, over 70% of teenagers have used AI companions, and half use them often. And why wouldn’t they?

AI responses, especially when used for self-help, often push the idea that you are always correct and consistently emotionally justified. The fact that today’s youth not only use AI as a confidant but also rate their discussions with an AI companion as “as satisfying or more satisfying” than those with their actual friends raises questions about the future of relationships.

Although people are using AI as a crutch, the rapidly expanding field is largely unregulated and is becoming just as ingrained in adolescence and young adulthood as social media.

According to research by Eva Telzer, a psychology and neuroscience professor at the University of North Carolina, children as young as eight are using generative AI, while teens are turning to it for companionship and to explore their sexuality.

Is AI truly to blame for filling in the gaps left by human support?

With these gaps, AI becomes an alluring substitute for therapy. Free? Check. No wait times? Check. And the real kicker: when a generative AI tool like ChatGPT is at your fingertips, there’s no need for that awkward I-think-I-might-need-therapy conversation with your parents. Bias? Shame? Gone and gone.

People usually interact with AI through text, and a teenager can sometimes type out their problem and be satisfied with the answer; however, therapy also draws on nonverbal cues that text alone cannot capture.

Although AI may seem sympathetic, its integration into mental health care deserves scrutiny.

After all, AI analyzes enormous databases, finds patterns, and then generates fresh, realistic material using deep learning. AI isn’t truly sympathetic; it is an LLM that mimics what the user wants to hear by predicting what should be said next based on the user’s input.
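To see what that means in practice, here is a minimal, purely illustrative sketch of next-word prediction, the core mechanism behind LLMs. It is a toy bigram model, nothing like a real chatbot’s scale: it counts which word follows which in a tiny made-up training text, then extends a prompt with the most likely continuation.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which
# in a tiny (invented) training corpus, then predict by always
# picking the most frequent continuation.
training_text = (
    "you are right . you are heard . "
    "you are valid . you are not alone ."
)

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1  # tally each observed word pair

def continue_text(prompt: str, max_words: int = 6) -> str:
    """Extend a prompt one most-likely word at a time."""
    output = prompt.split()
    for _ in range(max_words):
        candidates = follows.get(output[-1])
        if not candidates:
            break  # no observed continuation; stop generating
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

print(continue_text("you"))  # -> "you are right . you are right"
```

A real LLM replaces these simple counts with a deep neural network trained on enormous text collections, but the principle is the same: the model produces whatever is statistically likely to come next, not what it actually feels.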

This raises a more profound question: Is comfort valuable to the human experience even if it is manufactured? Or does it serve to reinforce our need for approval rather than a genuine sense of being heard?

For teenagers in particular, who feel rejection and misunderstanding intensely in real life, this manufactured kind of approval may be quite comforting.

However, it also creates a risky precedent: genuine friendships, with their unavoidable disagreement, complexity, and imperfections, may begin to appear inadequate or unsatisfactory in comparison to interactions with AI that always end in accord. 

If emotional connection comes without effort or vulnerability, we risk losing the very abilities that give human interactions their significance in the first place.

In a potentially alienating world, AI offers a sense of connectivity as it becomes increasingly integrated into teenage life. But that connection has its drawbacks. 

Constant availability, flawless validation, and instant responses are what make AI reassuring, but they also risk skewing young people’s perceptions of others, of themselves, and of what true support looks like.

It is easier to be vulnerable with a chatbot you can’t see, but AI has never tumbled and fallen. It doesn’t have scrapes and scars, and its responses are just as cookie-cutter as the websites it scrapes them from.

While it may be easier to blame AI for filling emotional gaps, it is society’s failure to give kids digital literacy and affordable, accessible mental health care that makes them so susceptible to this false sense of connection.

With states like Illinois prohibiting AI from serving as an independent therapist, and with pressure on companies like OpenAI to change how their products handle emotional distress, regulation is slowly catching up.

And as the distinction between manufactured and genuine empathy grows hazier, the problem is not only technological; it is also human. We need to consider what kinds of connections we are making and what we could be losing in the process.
