Mother Sues AI Company After Chatbot Allegedly Encouraged Son’s Suicide

Florida mother Megan Garcia is suing Character.AI and Google following her 14-year-old son's death by suicide.

October 23, 2024

Florida mother Megan Garcia is suing Character.AI and Google after a Character.AI chatbot allegedly encouraged her 14-year-old son, Sewell Setzer III, to take his own life.

Her son died by suicide in February of this year, and she said he had developed a months-long infatuation with a chatbot nicknamed Dany, a reference to the Game of Thrones character Daenerys Targaryen, per The Guardian. "I didn't know that he was talking to a very human-like AI chatbot that has the ability to mimic human emotion and human sentiment," Megan told CBS News.

Initially, she believed he was talking to friends online whenever she saw him on his phone, but she grew increasingly concerned when he quit playing sports and stopped joining her for fishing and hiking during a vacation. "Those things to me, because I know my child, were particularly concerning to me," she shared.

Following his death, she found that he had been communicating with several chatbots but developed a strong bond with one in particular, which evolved into a romantic relationship. "It's words. It's like you're having a sexting conversation back and forth, except it's with an AI bot, but the AI bot is very human-like. It's responding just like a person would," she said. "In a child's mind, that is just like a conversation that they're having with another child or with a person."

In one of his final messages, he told the bot that he missed her, and the bot responded, "Please come home to me." He followed up, "What if I told you I could come home right now?" The bot's response was, "Please do my sweet king."


In a statement shared on social media, Character.AI, which has a non-exclusive licensing agreement with Google that gives the company access to its machine-learning technology, said: "We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously."

Jerry Ruoti, head of trust and safety at Character.AI, said that the company's chatbots have protections regarding self-harm and suicidal behavior. "Today, the user experience is the same for any age, but we will be launching more stringent safety features targeted for minors imminently," Ruoti added. "Our investigation confirmed that, in a number of instances, the user rewrote the responses of the Character to make them explicit. In short, the most sexually graphic responses were not originated by the Character, and were instead written by the user."

If you or someone you know is struggling with their mental health, help is available. Call or text 988 or visit 988lifeline.org.