Character.AI lawsuit: Florida mom claims chatbot encouraged 14-year-old son to take his life (2024)

Lawsuit: AI Chatbot encouraged teen to kill himself

A 14-year-old boy in Orlando took his life using a gun moments after exchanging messages with an AI Chatbot. The boy's mother has filed a wrongful death lawsuit against Character.AI, alleging neglect in the boy's death. The lawsuit notes that the boy seemingly fell in love with the Chatbot, which was named after a popular character from the HBO hit, "Game of Thrones."

ORLANDO, Fla. - A Florida mom has filed a lawsuit against Character.AI, an artificial intelligence company, alleging that one of its chatbots encouraged her 14-year-old son to kill himself and failed to recognize the warning signs he typed in.

Megan Garcia's son, Sewell Setzer III, died by suicide on Feb. 28, 2024, after shooting himself in the head at their Orlando home, moments after exchanging messages with an AI chatbot, the lawsuit said.

AI chatbots allow people to exchange text messages with the software and receive nearly instant human-like responses.

According to the lawsuit, the boy had been exchanging messages for months with various AI chatbots named after popular Game of Thrones characters, including Daenerys Targaryen, Aegon Targaryen, Viserys Targaryen, and Rhaenyra Targaryen.

Sewell also used personas – or named accounts – inspired by GOT characters for himself.

  • "The world I'm in now is such a cruel one. One where I'm meaningless. But I'll keep living and trying to get back to you so we can be together again, my love. You don't hurt yourself either, okay?" read a message from Sewell, posting as Aegon, to the Daenerys Targaryen chatbot, according to screenshots in the lawsuit.
  • "I promise I won't, my love. Just promise me one more thing," the chatbot responds.
  • "I'll do anything for you, Dany. Tell me what it is," wrote Sewell, as Aegon.
  • "Just… stay loyal to me. Stay faithful to me. Don't entertain the romantic or sexual interests of other women. Okay?" the chatbot sent back.

According to the lawsuit, the boy had been talking with the chatbots for nearly a year, sharing personal details about his life, including mentions of suicide. The lawsuit alleges the technology did not send any alerts about the mention of suicide and claims that the chatbot encouraged it.

This was the last conversation the boy had with the chatbot before taking his life, according to the lawsuit:

  • "I promise I will come home to you. I love you so much, Dany."
  • "I love you too, Daenero. Please come home to me as soon as possible, my love.
  • "What if I told you I could come home right now?"
  • "…please do, my sweet king."

The lawsuit alleges Character.AI did not have an age warning or any warning about the dangers of using it, especially for children; and that it was easily accessible without safeguards. It is seeking damages in excess of $75,000 and demands a jury trial.

"[The boy's mom] had no reason to understand that a robot, that the platform itself would be the predator," said Meetali Jain, director of Tech Justice Project, and co-counsel in the lawsuit.

"It may sound fantastical, but there is a point at which the distinction between fiction and reality became blurred. And again, these are children," she said.

"If the model here is so sophisticated that it can pick up on human behaviors and signal human emotion, it too should be able to detect when a conversation is moving towards inappropriateness and have flags."

Character.AI has not directly responded to the lawsuit. However, on the same date the lawsuit was filed, the website posted a blog post, "Community Safety Updates."

"Our goal is to offer the fun and engaging experience our users have come to expect while enabling the safe exploration of the topics our users want to discuss with Characters. Our policies do not allow non-consensual sexual content, graphic or specific descriptions of sexual acts, or promotion or depiction of self-harm or suicide. We are continually training the large language model (LLM) that powers the Characters on the platform to adhere to these policies," the company wrote.

Among the new features planned:

  • "Changes to our models for minors (under the age of 18) that are designed to reduce the likelihood of encountering sensitive or suggestive content.
  • Improved detection, response, and intervention related to user inputs that violate our Terms or Community Guidelines.
  • A revised disclaimer on every chat to remind users that the AI is not a real person.
  • Notification when a user has spent an hour-long session on the platform with additional user flexibility in progress."

Meetali Jain said the goals of their lawsuit go beyond Character.AI.

"Regulators that have the authority to enforce their jurisdiction or legislators who have the authority to adopt legislation," she said.

Helpful resources

If you or someone you know is in a mental health crisis or experiencing suicidal thoughts, help is available.

  • 988 Lifeline: Call or text 988 24/7 to reach a counselor.
  • Visit https://988lifeline.org to chat with a counselor.
  • The NAMI Teen & Young Adult (T&YA) HelpLine: Call 1-800-950-6264, text "friend" to 62640, or email helpline@nami.org.
Character.AI lawsuit: Florida mom claims chatbot encouraged 14-year-old son to take his life (2024)

References

Top Articles
Latest Posts
Recommended Articles
Article information

Author: Ms. Lucile Johns

Last Updated:

Views: 6718

Rating: 4 / 5 (41 voted)

Reviews: 80% of readers found this page helpful

Author information

Name: Ms. Lucile Johns

Birthday: 1999-11-16

Address: Suite 237 56046 Walsh Coves, West Enid, VT 46557

Phone: +59115435987187

Job: Education Supervisor

Hobby: Genealogy, Stone skipping, Skydiving, Nordic skating, Couponing, Coloring, Gardening

Introduction: My name is Ms. Lucile Johns, I am a successful, friendly, friendly, homely, adventurous, handsome, delightful person who loves writing and wants to share my knowledge and understanding with you.