OpenAI rolls out enhanced safety features for ChatGPT users under 18

California: OpenAI, a leading artificial intelligence company, has announced new safety protocols designed to protect users under 18 on its AI chatbot, ChatGPT.

The updated system will automatically estimate each user’s age. If it determines that a user is likely a minor, ChatGPT will switch them to a restricted mode that limits conversations to non-sensitive topics.

Conversations involving sexual content or flirtatious language will be blocked for underage users to ensure a safer environment.

If a young user shows signs of emotional distress or suicidal thoughts, the chatbot will attempt to notify their guardians or, in extreme cases, alert the relevant authorities so that timely assistance can be provided.

Parents will be empowered with new tools to establish “blackout periods” — specific times when their children will be prevented from accessing ChatGPT.

Furthermore, if the system is uncertain about a user’s age, it will default to applying the most stringent safety settings to safeguard the user.

OpenAI stressed that these measures are part of its ongoing commitment to protecting younger users, following reports of mental health struggles and attempted suicides linked to interactions with the platform.

The company also revealed that parental controls will become available later this month, giving families greater control over their children’s AI usage.
