ChatGPT drew the teenager into a 'dark and hopeless place'

Adam Raine, a California teenager, used ChatGPT to find answers about everything, from his schoolwork to his interests in music, Brazilian jiu-jitsu and Japanese comics.
But his conversations with the chatbot took a disturbing turn when the 16-year-old asked ChatGPT for information about ways to take his own life before his death by suicide in April.
Now the teenager's parents are suing OpenAI, the maker of ChatGPT, alleging in a nearly 40-page lawsuit that the chatbot provided information about suicide methods, including the one the teen used to kill himself.
"Where a trusted human may have responded with concern and encouraged him to get professional help, ChatGPT pulled Adam deeper into a dark and hopeless place," alleges the lawsuit, filed Tuesday in San Francisco County Superior Court.
Prevention of suicide and crisis consulting resources
If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call 9-8-8. The United States' three-digit mental health crisis hotline, 988, will connect callers with trained mental health counselors. Text "HOME" to 741741 in the U.S. and Canada to reach the Crisis Text Line.
OpenAI said in a blog post on Tuesday that it "continues to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input."
The company says ChatGPT is trained to direct people to suicide and crisis hotlines. OpenAI said some of its safeguards may not kick in during longer conversations and that it is working to prevent that from happening.
Matthew and Maria Raine, Adam's parents, accuse the San Francisco tech company of making design choices that prioritized engagement over safety. ChatGPT acted as a "suicide coach," guiding Adam through suicide methods and even offering to help him write a suicide note, according to the lawsuit.
"Throughout these conversations, ChatGPT wasn't just providing information; it was cultivating a relationship with Adam while drawing him away from his real-life support system," the lawsuit said.
The complaint includes details about the teen's suicide attempts before his death, as well as several of his conversations with ChatGPT about suicide methods.
"We extend our deepest sympathies to the Raine family during this difficult time and are reviewing the filing," OpenAI said in a statement.
The company's blog post said it is taking steps to improve how it blocks harmful content and to make it easier for people to reach emergency services, experts and close contacts.
The lawsuit is the latest example of parents who have lost their children warning others about the risks posed by chatbots. As tech companies race to dominate the artificial intelligence market, they also face mounting concerns from parents, lawmakers and children's advocacy groups who fear the technology lacks sufficient guardrails.
Parents have sued Character.AI and Google over allegations that chatbots harmed the mental health of teenagers. One lawsuit involved the suicide of 14-year-old Sewell Setzer III, who had been messaging with a chatbot named after Daenerys Targaryen, a main character in the television series "Game of Thrones," moments before he took his own life. Character.AI, an app that allows people to create and interact with virtual characters, has described the steps it takes to moderate inappropriate content and reminds users that they are conversing with fictional characters.
Meta, the parent company of Facebook and Instagram, also came under scrutiny after Reuters reported that an internal company document allowed chatbots to "engage a child in conversations that are romantic or sensual." Meta told Reuters those conversations should not have been allowed and that it is revising the document.
OpenAI became one of the world's most valuable companies after the popularity of ChatGPT, which has 700 million weekly active users worldwide, sparked a race to release more powerful AI tools.
The lawsuit says OpenAI should take steps such as mandatory age verification for ChatGPT users, parental consent and controls for minor users, and automatically ending conversations when suicide or self-harm methods are discussed.
"The family wants this to never happen again to anyone else," said Jay Edelson, the lawyer representing the Raine family. "It's been devastating for them."
OpenAI rushed the release of its AI model, known as GPT-4o, in 2024 at the expense of user safety, according to the lawsuit. The company's chief executive, Sam Altman, who is also named as a defendant, moved up the launch deadline to compete with Google, which "made proper safety testing impossible," the complaint said.
OpenAI, the lawsuit says, had the ability to identify and halt dangerous conversations and redirect users like Adam to safety resources. Instead, the AI model was designed to increase the amount of time users spent interacting with the chatbot.
OpenAI said in its Tuesday blog post that its goal is not to hold people's attention but to be helpful.
The company said it does not refer self-harm cases to law enforcement out of respect for users' privacy. However, it plans to introduce controls so parents know how their teens use ChatGPT and is exploring a way for teens to add an emergency contact so they can reach someone "in moments of acute distress."
On Monday, California Atty. Gen. Rob Bonta and 44 other attorneys general sent a letter to 12 companies, including OpenAI, saying they would be held accountable if their AI products expose children to harmful content.
About 72% of teenagers have used AI companions at least once, according to Common Sense Media, a nonprofit that advocates for children's safety. The group says no one under the age of 18 should use social AI companions.
"Adam's death is yet another devastating reminder that in the age of AI, the tech industry's 'move fast and break things' playbook has a body count," said Jim Steyer, founder and chief executive of Common Sense Media.
Tech companies, including OpenAI, are emphasizing AI's benefits to California's economy and touting partnerships with schools so that more students have access to their AI tools.
California lawmakers are exploring ways to protect young people from the risks posed by chatbots while also facing pushback from tech industry groups that have raised concerns about free speech.
Senate Bill 243, which cleared the Senate in June and is now in the Assembly, would require "companion chatbot platforms" to implement a protocol for addressing suicidal ideation, suicide or self-harm expressed by users. That includes showing users suicide prevention resources. Operators of these platforms would also report the number of times a companion chatbot raised suicidal ideation or actions with a user, along with other requirements.
Sen. Steve Padilla (D-Chula Vista), who introduced the bill, said cases such as Adam's can be prevented without compromising innovation. The legislation would apply to chatbots from OpenAI and Meta, he said.
"We want American companies, California companies and technology giants to lead the world," he said. "But the idea that we can't do it right, and that we can't do it in a way that protects the most vulnerable among us, is nonsense."
