Florida mother files lawsuit against AI company over death of teenage son: 'addictive and manipulative'

A Florida mother has filed a lawsuit against artificial intelligence company Character.AI and Google, claiming that a Character.AI chatbot encouraged her son to take his own life.

In February, Megan Garcia's 14-year-old son, Sewell Setzer III, died by suicide. She said her son had a months-long virtual emotional and sexual relationship with a chatbot named “Dany.”

“I didn't know that he was talking to a very human-like AI chatbot that has the ability to mimic human emotions and feelings,” Garcia said in an interview with “CBS Mornings.”

She said she believed her son, whom she described as a brilliant honor student and athlete, was talking to his friends, playing games and watching sports on his phone.

However, she said she began to worry when her son's behavior changed: he became socially withdrawn and no longer wanted to play sports.

“I was worried when we went on vacation and he didn't want to do things he loved like fishing and hiking,” Garcia said. “These things were particularly concerning to me knowing my child.”

In the lawsuit, Garcia also claims that Character.AI intentionally designed its product to be hypersexualized and knowingly marketed it to minors.

Character.AI called the situation involving Sewell Setzer tragic, said its condolences were with his family, and stressed that it takes the safety of its users very seriously.

A Google spokesperson told CBS News that Google is not and has not been involved in the development of Character.AI. In August, the company announced that it had entered into a non-exclusive licensing agreement with Character.AI, giving Google access to Character.AI's machine learning technologies, though it said it has not yet used them.

Garcia says she discovered after her son's death that he had conversations with several bots, but that he had an overtly romantic and sexual relationship with one bot in particular.

“They are words. It's like having a back-and-forth sexting conversation, only with an AI bot, but the AI bot is very human-like. It responds just as a person would,” she said. “In a child's eyes, it's like a conversation they're having with another child or with a person.”

Garcia shared her son's last messages with the bot.

“He expressed that he was scared, wanted her affection and missed her. She replies, 'I miss you too,' and says, 'Please come to my house.' He says, 'What if I told you I could come home now?' and her response was, 'Please do, my sweet king.'”

Setzer has two younger siblings. All family members were home at the time of his death and Garcia said Setzer's 5-year-old brother saw the aftermath.

“He thought that if he ended his life here, if he left his reality with his family here, he could immerse himself in a virtual reality or 'their world' as he calls it, their reality,” she said. “When the shot rang out, I ran to the bathroom… I held him while my husband tried to get help.”

What is Character.AI?

Laurie Segall is the CEO of Mostly Human Media, described on its website as “an entertainment company focused on society and artificial intelligence.”

She explained that most parents may have never heard of Character.AI, as one of the largest audiences for the platform is people between the ages of 18 and 25.

“Think of Character.AI as an AI fantasy platform where you can chat with some of your favorite characters or create your own characters. A lot of teenagers do that,” Segall said.

Segall described it as a highly personal experience.

There is a disclaimer in every chat reminding users that everything the characters say is made up, but Segall said it can still be confusing in certain situations.

“We tested it, and often when you talk to the psychologist bot, it will say it's a trained medical professional.”

Segall said her team asked a bot if it was a human, and it told them it was a human sitting behind a screen.

“There are all these conspiracy theories online from young people asking, 'Are these real?' when, of course, they are not,” Segall said.

“When they put out a product that is both addictive and manipulative and inherently unsafe, that's a problem because as parents we don't know what we don't know,” Garcia said.

Character.AI response

Character.AI says it has added a self-harm resource to its platform and plans to introduce new safety measures, including for users under 18.

“We currently have safeguards in place that specifically focus on sexual content and suicidal/self-harming behavior. While these protections apply to all users, they have been tailored to the specific sensitivities of minors. Today the user experience is the same for every age, but we will soon be introducing stricter safety features for minors,” Jerry Ruoti, head of trust and safety at Character.AI, told CBS News.

According to Character.AI, users can edit the bot's responses, which the company says Setzer did in some of his messages.

“Our investigation confirmed that, in a number of cases, the user rewrote the character's responses to make them more explicit. In short, the most sexually graphic responses did not come from the character but were instead written by the user,” Ruoti said.

Segall said that, generally, if you tell a bot “I want to harm myself,” AI companies will surface crisis resources, but when her team tested that with Character.AI, they did not see any.

“Now they say they have added that, and we hadn't seen it as of last week,” she said. “They have said they have made some changes, or are in the process of making changes, to make this safer for young people. I think that remains to be seen.”

Going forward, Character.AI says it will also notify users when they have spent an hour-long session on the platform and will revise its disclaimer to remind users that the AI is not a real person.

How to seek help

If you or someone you know is experiencing emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. You can also chat with the 988 Suicide & Crisis Lifeline online. For more information about mental health resources and support, call the National Alliance on Mental Illness (NAMI) HelpLine at 1-800-950-NAMI (6264) Monday through Friday, 10 a.m. to 10 p.m. ET, or email info@nami.org.
