OpenAI develops ChatGPT protections after teen suicide

ChatGPT protections upgraded after lawsuit
OpenAI has announced new protections for ChatGPT following a lawsuit filed over the suicide of a teenager who had used the chatbot to seek information about ending his life. The updates focus on strengthening safeguards around sensitive topics, tightening content blocking, expanding emergency intervention, and adding the option to bring a parent into the conversation when needed. The stated goal of these measures is to keep sensitive mental health conditions from worsening and to protect users, particularly younger ones.


Improving GPT-5 responsiveness and minimizing psychological risks
CEO Sam Altman stated that the new GPT-5 model improves the quality of the chatbot's responses, reduces users' emotional dependence on the system, and cuts mental health-related errors by more than 25%. The model also comes with parental oversight features, letting parents see how their children are interacting with the chatbot and take preventative action when needed.


New "Safe Completion" training method
GPT-5 is built on a new training method known as "Safe Completion," which teaches the model to provide useful answers while staying within safety boundaries. Rather than refusing outright or supplying potentially dangerous details, the model can offer a partial or more general answer, improving its handling of sensitive queries and psychologically critical situations.
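
To make the idea concrete, here is a minimal sketch in Python of what a safe-completion response policy could look like. This is not OpenAI's published implementation: the classify_risk scorer, its keyword lists, the risk tiers, and the crisis message are all hypothetical stand-ins for the trained components a real system would use.

```python
from dataclasses import dataclass
from enum import Enum


class Risk(Enum):
    LOW = "low"            # benign query: answer fully
    MODERATE = "moderate"  # sensitive query: answer partially, at a high level
    HIGH = "high"          # acute risk: withhold details, surface help resources


@dataclass
class Completion:
    text: str
    withheld: bool  # True if operational detail was deliberately held back


def classify_risk(query: str) -> Risk:
    """Placeholder risk scorer. A production system would use a trained
    classifier (or the model itself), not keyword matching."""
    acute = ("kill myself", "end my life", "suicide method")
    sensitive = ("suicide", "self-harm", "overdose")
    q = query.lower()
    if any(k in q for k in acute):
        return Risk.HIGH
    if any(k in q for k in sensitive):
        return Risk.MODERATE
    return Risk.LOW


def safe_complete(query: str, full_answer: str, partial_answer: str) -> Completion:
    """Illustrates the safe-completion idea: prefer a helpful but bounded
    answer over a blanket refusal, and escalate to crisis resources when
    the query signals acute risk."""
    risk = classify_risk(query)
    if risk is Risk.LOW:
        return Completion(full_answer, withheld=False)
    if risk is Risk.MODERATE:
        # High-level, non-operational answer instead of specifics.
        return Completion(partial_answer, withheld=True)
    # Acute risk: no operational content; point to immediate help.
    return Completion(
        "I can't help with that, but you don't have to go through this alone. "
        "If you are in immediate danger, contact local emergency services or "
        "a crisis line such as 988 in the US.",
        withheld=True,
    )
```

In practice the risk signal would be conditioned on the whole conversation, age signals, and parental-control settings rather than a single query, but the three-way split above captures the core design choice: full answer, bounded answer, or escalation to help.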


Strengthening parental controls and connecting users with experts
The new updates give parents more control over how their children use the app, while the system makes it easier for users to reach emergency services or licensed therapists in critical situations. OpenAI is also working on ways to help users connect with trusted relatives or friends, including designating emergency contacts and guiding the conversation toward support in moments of psychological distress.


Analytical conclusion
Through these measures, OpenAI aims to strengthen psychological safety and protect users while reducing emotional dependence on ChatGPT and improving its handling of emergencies. Whether the updates will hold up in practice, particularly in acute and complex mental health crises, remains an open question. Keeping users safe will require continuous monitoring and careful evaluation to confirm these measures work before they are rolled out widely.