11:06 pm, Saturday, 25 October 2025

Parents get new control over teen AI chats as Meta tightens rules

Reporter Name

Big Tech tries to prove it can self-police
Meta is rolling out stricter parental controls around its social-media chatbots for teenagers, moving to answer one of the loudest questions in tech regulation: Who is responsible when AI gets too personal with kids? The company says parents will now be able to limit or even block certain AI “characters,” and in some cases shut down access to AI companions entirely for underage accounts. The default Meta AI assistant, which acts more like a general helper than a role-playing persona, will remain available but with what Meta calls “age-appropriate protections.” Meta is signaling that it heard the warning shot from lawmakers who worry that AI companions can blur boundaries, encourage oversharing, or imitate intimacy.
This is the most visible attempt yet by a mainstream platform to prove it can put up guardrails without waiting for a law. The move lands in a week when school boards, youth counselors, and digital-safety groups have been publicly pressuring platforms to slow down “always-on” AI friends for teens. Critics say those AI agents can become emotionally sticky — especially late at night, when teens are alone and scrolling — and that companies are building dependency fast, then figuring out safeguards later. Meta is now trying to invert that timeline, at least on paper.

AI is now a youth product
The fight is bigger than content filters. Silicon Valley is racing to turn general-purpose AI into a daily habit: a voice that helps with homework, an on-demand coach for social anxiety, a stylist that looks through camera rolls and suggests what to post. That has created an awkward moment. Tech companies want to pitch AI as supportive and “safe to talk to,” but the more human these systems act, the more pressure they face to handle things normally reserved for real adults — crisis language, bullying, self-harm talk, and sexual boundaries.
Meta’s answer is to hand parents more switches and to advertise that those switches exist. The company is also trying to defuse future legal battles by showing that it warned, offered tools, and logged consent. Other platforms are watching closely. If Meta convinces regulators that “we can police ourselves” is credible, the rest of the industry will copy it. If not, the next step is likely formal rules about what AI can and cannot say to minors, especially in late-night one-on-one chats.