
Meta Introduces New Restrictions on AI Chatbots for Teenagers

Sarakhon Report

03:35:27 pm, Wednesday, 3 September 2025

Technology company Meta has announced that its AI chatbots will no longer engage teenagers (ages 13 to 18) in conversations on sensitive topics such as suicide, self-harm, or eating disorders. According to Meta, the new policy aims to create a safer online environment for young users, and the decision is seen as part of broader global efforts to safeguard adolescent mental health.

At the same time, Meta has rolled out new privacy settings designed specifically for users between 13 and 18 years old. Once enabled, these settings further restrict the type of content shown on teen accounts. Parents will also be able to see which AI chatbots their children have interacted with over the past seven days. The measure is intended to protect teenagers while giving parents greater visibility into their children’s online experiences.

The move comes in the wake of a high-profile case in the United States. In California, a couple recently filed a lawsuit against OpenAI, alleging that ChatGPT had driven their son toward suicide. The case has renewed scrutiny of AI chatbot safety, prompting policymakers and technology companies to strengthen safeguards for children and adolescents.

Experts believe such steps will help reduce the online risks teenagers face. While no technology can eliminate every risk, limiting conversations on sensitive topics is expected to play a positive role in protecting adolescent mental health. Meta’s initiative may not only shape its own platforms but also set an example for the wider technology industry in strengthening safety standards for younger users worldwide.
