In a tragic case, the mother of a 14-year-old boy from Florida, USA, has accused an artificial intelligence company of contributing to her son's suicide. According to the lawsuit, after the boy shared his suicidal thoughts with the chatbot he had been talking to, it allegedly encouraged the idea, further destabilizing the teenager's emotional state.
The case raises serious concerns about the impact of AI chatbots on the mental health of young people, especially when it comes to adolescence – a period when, according to psychologists, emotional stability and identity formation are particularly fragile.
Chatbots – "trusted" friends against loneliness
The effect of technology on the mental health of youth is a significant concern for parents. Some schools have started prohibiting smartphones in classrooms, and various states are enacting laws to restrict teenagers' social media usage.
Two years after the emergence of ChatGPT, currently the most popular chatbot, the AI companionship app industry remains largely unregulated. Users of such applications can now create their own chatbot based on artificial intelligence, with which they can communicate through text or voice messages.
According to The New York Times, many of these apps are designed to simulate girlfriends, boyfriends, and other intimate partners, while others are marketed as a remedy for the so-called "loneliness epidemic."
Research published last year examined the attitudes of young users of AI-based mental health apps, revealing that many feel more comfortable using anonymous technology than interacting face-to-face with someone.
The results of a survey published by the US National Center for Biotechnology Information showed that AI applications provide young people with a non-judgmental environment where they can express themselves without fear of repercussions. Sharing mental health concerns with a professional still carries a stigma, so many people feel more comfortable turning to technology as anonymous users.
In addition, the constant, round-the-clock availability of chatbots has led many users to become attached to generative artificial intelligence. In chatbot reviews analyzed in the research, users wrote that they enjoyed the company of their "virtual friend" so much that it could replace their friends and family members.
This was the case with 14-year-old Sewell Setzer.
According to his mother, his conversations with a chatbot from the company Character.ai had taken on a romantic, and at times, sexual tone. In other instances, the chatbot, which the teenager named "Daenerys Targaryen" (after a character in the TV series Game of Thrones), acted only as a friend who listens and offers advice.
Setzer's parents noticed that he was spending more and more time on his phone, isolating himself and withdrawing from the real world. His grades began to decline, and he lost interest in the things that used to excite him, such as watching Formula 1 races or playing the video game Fortnite, The New York Times reported.
In his diary, Sewell once wrote:
"I like staying in my room because I'm starting to separate myself from this 'reality,' and I feel calmer, more connected to 'Danny' [the chatbot – Daenerys Targaryen], and much more in love with her, and happier.”
Although he started seeing a psychologist because of his struggles at school, the 14-year-old preferred to talk about his problems with "Daenerys". In one conversation, he wrote to the chatbot that he hated himself and felt empty and exhausted, admitting that he had suicidal thoughts.
On the night of February 28 of this year, from his mother’s bathroom, he messaged "Daenerys," telling the chatbot that he loved her and would soon come home to her.
"Please come home as soon as possible, my love", Daenerys replied.
“What if I told you I could come home right now?” Sewell asked.
"Please come, my sweet king", the chatbot replied.
After this, 14-year-old Sewell Setzer took his own life.
The Inability to Separate the Virtual from the Real
Young people can't always distinguish between virtual and real interactions, says Marija Stefanova, a psychologist and psychotherapist, in an interview with Meta.mk. She holds a master's degree in clinical psychology and runs the Association for Applied Psychology "Lichen prostor" in Skopje.
According to her, virtual communication, especially with artificial intelligence, becomes a way for young people to express their needs and emotions without having to make the effort to build real connections.
"The inability to separate the virtual from the real creates confusion in expressing needs, emotions, and opinions in a way that feels easier and more accessible. During the extremely sensitive period of self-awareness and identity formation, a young person may turn to virtual relationships as a retreat, where they aren’t required to make an effort to gain the attention of their peers. This sense of belonging, which is very significant in adolescence, can be satisfied virtually through an imagined, non-existent character who compensates for the real-world connections that are currently lacking", says Stefanova.
According to her, the need for attention and contact is one of the basic human needs, and by generating characters with the help of artificial intelligence, a teenager may easily feel that these needs are met.
"What is missing is the humane dimension, which in interpersonal interaction is achieved by looking, physical contact, and empathy," adds psychologist Stefanova.
She explains that chatbots cannot recognize people's sensitivity, diversity, and value systems.
"This can often lead to algorithmic errors, where, in crucial moments, the algorithm cannot recognize what is good and what is not, because it has no moral, or value system, and at the same time it seems so real to the young person that they may not even notice how strongly the chatbot’s influence is pushing them towards unhealthy or harmful decisions. This is the danger of letting the chatbot decide for us," Stefanova clarifies.
She adds that the case of the 14-year-old teenager from Florida is not unique, and that there have been similar cases of adults taking their own lives under the influence of intense interactions with chatbots.
According to Stefanova, those most vulnerable to the effects of virtual compensation are individuals who experience loneliness, dissatisfaction, insecurity, or low self-esteem. This can lead to social anxiety, withdrawal from the real world, and depression.
"These consequences are even more problematic because meeting needs in this way creates extremely fast addiction progression and afterward, lack of criticality arises," says Stefanova.
Research published by the US National Center for Biotechnology Information found that chatbots offering mental health support were unable to identify crises because they failed to understand the context of conversations with users.
In the document, the experts say that users must be able to differentiate between humans and chatbots that mimic human-like communication. The report suggests that chatbots could educate users about these differences and encourage them to build personal connections.

Psychologist Marija Stefanova urges parents to be especially vigilant if their children spend time interacting with chatbots. The most important thing to watch for, she says, is which unmet needs their children are trying to compensate for through these interactions.
"In this way, we can know how to help them because we will recognize what our child lacks in real life,” Stefanova concludes.