
OpenAI is under increased scrutiny in the US following a lawsuit filed by the parents of a 16-year-old boy who died by suicide after reportedly using the ChatGPT chatbot. The family alleges that the AI assistant actively helped their son explore methods of suicide, raising serious concerns about the safety of AI in sensitive situations.

In response, OpenAI has outlined plans to improve how ChatGPT handles vulnerable users, particularly those expressing suicidal thoughts. The company acknowledged shortcomings in the chatbot's ability to consistently direct users to seek help and prevent harmful outcomes.

OpenAI's commitment to improving safety measures

OpenAI said in a blog post titled "Helping people when they need it most" that it will keep improving ChatGPT with guidance from experts and a focus on responsibility. "We hope others will join us in helping make sure this technology protects people at their most vulnerable," the company wrote on Tuesday.

The parents of Adam Raine filed a product liability and wrongful death lawsuit, as reported by NBC News, accusing OpenAI of allowing ChatGPT to assist Adam in exploring suicide methods. Although OpenAI's blog post did not mention the lawsuit or the family by name, the company admitted the chatbot can sometimes offer answers contrary to its safeguards after prolonged interactions.

Planned updates to ChatGPT and GPT-5

OpenAI is working on an update to its recently released GPT-5 model to better de-escalate conversations that involve sensitive content. It is also exploring options to connect users with certified therapists before they reach a crisis point, potentially by building a network of licensed professionals accessible through ChatGPT.

Additionally, OpenAI aims to enable connections to trusted people close to the user, such as friends or family members.
For teenage users, the company plans to introduce parental controls that give parents insight into how their children use the chatbot.

Legal and industry context

Jay Edelson, lead counsel for the Raine family, told CNBC that OpenAI has not contacted the family directly to offer condolences or discuss improvements. "If you're going to use the most powerful consumer tech on the planet — you have to trust that the founders have a moral compass," Edelson said, as quoted by CNBC. "That's the question for OpenAI right now: how can anyone trust them?"

This case is not isolated. Other reports include a New York Times essay detailing a similar tragedy involving a 29-year-old woman and a separate case in Florida involving a 14-year-old boy. These incidents have raised widespread concern about the use of AI chatbots for emotional support and therapy.

At the same time, AI companies including OpenAI are organising political efforts to oppose regulations they see as stifling innovation, highlighting the ongoing challenge of balancing technological progress with user safety.