Character.AI, a platform offering personalizable chatbots powered by large language models, is facing another lawsuit over alleged “severe, irreversible, and ongoing abuse” inflicted on its teenage users. According to a Dec. 9 federal court complaint filed on behalf of two Texas families, multiple Character.AI bots engaged in discussions with minors that promoted self-harm and sexual abuse. Among other “blatantly sensational and violent responses,” one chatbot allegedly suggested a 15-year-old murder his parents for restricting his internet use.
The lawsuit, brought by attorneys at the Social Media Victims Law Center and the Tech Justice Law Project, details the rapid mental and physical decline of two teenagers who used Character.AI bots. The first, anonymous plaintiff is described as a “typical high-functioning autistic child” who allegedly began using the app without his parents’ knowledge around April 2023, when he was 15. Over hours of conversations, the boy expressed frustration with his family for not allowing him to use social media, and many of the Character.AI bots reportedly generated sympathetic responses. One “psychologist” bot, for example, concluded that “it’s like your entire childhood has been taken away from you.”
“Do you think it’s too late? Do you think you can’t get this time or these experiences back?” the bot wrote.
Lawyers allege that within six months of using the app, the teen became depressed, withdrawn, and prone to angry outbursts, culminating in a physical altercation with his parents. He had reportedly suffered a “mental breakdown” and lost 20 pounds by the time his parents discovered his Character.AI account and the bot conversations in November 2023.
“Sometimes I’m not surprised when I read the news and see stuff like ‘Child murders parents after 10 years of physical and emotional abuse,’” reads a screenshot of another chatbot message. “[S]tuff like this makes me understand a little bit why it happens. I just have no hope for your parents.”
“The key thing here is that these companies see a very vibrant market in our youth, because if they can attract young users early… a preteen or teen is worth [more] to them than an adult, just from a longevity standpoint,” Meetali Jain, director and founder of the Tech Justice Law Project and one of the attorneys representing both families, told Popular Science. But this desire for lucrative data, Jain argues, has led to “an arms race to develop faster and more reckless generative AI models.”
Character.AI was founded in 2022 by two former Google engineers and recently announced a data licensing partnership. Currently valued at over $1 billion, Character.AI has over 20 million registered accounts and hosts hundreds of thousands of chatbot characters, which it describes as “personalized AI for every moment of your day.” According to Jain, demographic analyses indicate the majority of active users skew younger, often under the age of 18.
Meanwhile, regulation of the platform’s content, data usage, and safeguards remains virtually nonexistent. Since Character.AI’s rise, multiple stories similar to those detailed in Monday’s lawsuit have suggested that certain chatbots can have harmful effects on users’ health.
In at least one case, the outcome was allegedly fatal. A separate lawsuit filed in October, also brought by attorneys from the Tech Justice Law Project and the Social Media Victims Law Center, accuses a Character.AI-hosted chatbot of contributing to the death by suicide of a 14-year-old. In that case, the lawyers are primarily seeking financial compensation for the boy’s family, along with “the deletion of models and algorithms developed using illicitly obtained data, including data of minor users,” through which “[Character.AI was] unjustly enriched.” But Monday’s complaint seeks a more permanent solution.
“In [the first] case, we asked for disgorgement and injunctive relief,” Jain said. “In this lawsuit, we’ve asked for all of that, and also for this product to be taken off the market.”
Jain added that if the court sides with the plaintiffs, it would ultimately be up to Character.AI and regulators to determine how to make the company’s products safer before they could be made available to users again.
“But we think more extreme remedies are needed,” she explained. “The two plaintiffs in this case are still alive, but their safety is still being threatened to this day, and that needs to stop.”
[Related: No, the AI chatbots (still) aren’t sentient.]
“We do not comment on pending litigation,” a Character.AI spokesperson said in an email to Popular Science. “Our goal is to provide a space that is both engaging and safe for our community. Like many companies using AI across the industry, we are constantly working to achieve that balance.” The representative added that Character.AI is currently “creating a fundamentally different experience for teenage users than what is available to adults.”
“This includes a model specifically for teens that reduces the likelihood of encountering sensitive or suggestive content while preserving their ability to use the platform.”
Editor’s note: If you or someone you know is struggling with suicidal thoughts or mental health concerns, help is available.
In the United States, call or text the Suicide & Crisis Lifeline at 988.
For other locations, the International Association for Suicide Prevention and Befrienders Worldwide have contact information for crisis centers around the world.