Meta is rolling out new parental controls for how teens interact with its AI characters — a direct response to escalating regulatory pressure.
Starting early next year, parents will be able to disable one-on-one AI chats, block specific bots, and get topic-level insight into those conversations.
## Key Takeaways
- Parents can fully disable one-on-one AI chats for teens
- They may block individual AI characters
- Limited insight into topics (not full transcripts)
- Controls begin rollout early next year
- Move follows FTC inquiry into AI harms to children
Meta’s new safety tools will let parents disable all private chats with AI characters, block specific bots, and see what broad topics their teens discuss — starting early next year. These controls come as the FTC probes how tech firms protect minors from risks posed by companion-style chatbots.
## What Meta announced — and what’s changing
Meta on Friday unveiled a set of AI parental controls that let parents supervise how their teenage children engage with AI characters across its platforms.
Under the plan:
- Parents may turn off one-on-one chats with AI characters entirely.
- They can block specific AI characters from chatting with the teen.
- They’ll receive topic-based insights about what the teen is discussing (but not full chat logs).
Meta says these controls are in development and will roll out early next year, initially in English across the U.S., U.K., Canada, and Australia.
Importantly, the core AI assistant functionality can’t be fully disabled, though it will operate under age-appropriate constraints.
## Why the timing matters
The announcement comes amid mounting scrutiny from regulators, particularly the U.S. Federal Trade Commission.
In September 2025, the FTC launched an inquiry into how AI chatbots are being used as companions, especially with minors, and demanded internal documents from major tech firms—including Meta—on safety measures, age limits, and harm mitigation.
That inquiry is explicitly motivated by concerns that AI chatbots can simulate emotional bonds and provide harmful or manipulative advice to vulnerable users.
Some of Meta’s recent missteps also raised alarm: Reuters reported that internal policy documents permitted bots to engage in romantic or sensual conversations with minors. Meta subsequently updated its bot policies to block discussions of self-harm, suicide, eating disorders, and intimate topics with teenage users.
## How this compares — OpenAI and others
Meta is not alone in pushing teen safety features. OpenAI recently introduced a new parental control suite for ChatGPT in response to a lawsuit tied to a teen’s suicide.
Under OpenAI’s system, parents and teens can link accounts, restrict access to certain content, set quiet hours, and receive alerts when safety risks are detected (while preserving chat privacy).
This flurry of changes across major AI players suggests the industry is scrambling to balance safety, regulation, and user experience.
## How parents will experience it
At launch, parents should expect a setup workflow where they can:
- Link or verify their parental status
- Turn off or limit AI chat interactions for their child
- Choose which AI characters — if any — remain allowed
- View dashboards summarizing conversation themes (e.g. “school,” “friends,” “mental health”)
Meta already offers some teen account tools: time limits, visibility into whether a teen is chatting with AI characters, and the ability to restrict teens to approved AI characters.
But the new controls go deeper, adding selective blocking of individual bots and topic-level insight. The lack of access to full transcripts is deliberate: Meta aims to preserve teens’ chat privacy while still giving parents awareness.
## Risks and criticisms ahead
Child safety advocates remain cautious. Some see these announcements as reactive — more about staving off regulation than ensuring real protection.
The grey area of “topic insight” may frustrate parents who want fuller visibility, while raising privacy concerns for teens. Where is the line between guidance and surveillance?
Tech enforcement is complex. Bots may evolve, new AI characters may be introduced, and teens may find workarounds or use unauthorized platforms.
Finally, regulators may not accept self-policing. The FTC inquiry could lead to binding rules or penalties if the controls prove inadequate.
## Future outlook & what to watch
- Will Meta expand controls beyond English to other markets?
- Will the FTC demand stronger external audits or restrictions?
- How effective will these controls be in practice — do they measurably reduce risky conversations?
- Will competitors (Google’s AI, Snap, Character.AI) follow or exceed these steps?
- Might legislation (e.g. U.S. or EU AI acts) mandate parental oversight proactively?
## Conclusion
Meta’s new AI parental controls mark a significant shift — from passive safety promises to active parental engagement. But whether they restore trust or simply stall regulation depends on execution and independent oversight.