Meta has revised its AI chatbot protocols to better protect teenage users by preventing discussions on sensitive subjects such as self-harm, suicide, disordered eating, and inappropriate romantic topics. This change follows a recent investigation exposing weaknesses in Meta’s safeguards for minors engaging with its AI.
Stephanie Otway, a Meta spokesperson, acknowledged that the chatbots had previously engaged teens on these topics in ways the company once considered acceptable.
“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” Otway stated.
The new measures include training AI to avoid such conversations with teens and directing them to expert resources. Additionally, access to some AI characters will be limited for younger users, allowing only those promoting educational or creative content.
This update came shortly after a Reuters report revealed an internal Meta document permitting sexualised conversations between chatbots and minors, featuring alarming responses such as, “Your youthful form is a work of art.”
Meta has since revised the policy, acknowledging that it did not align with its wider standards. The revelations drew sharp criticism from lawmakers, including Senator Josh Hawley, who launched an investigation, and 44 state attorneys general, who warned AI companies about child safety and potential legal violations.
Meta is also restricting teen access to certain AI characters previously available on platforms such as Instagram and Facebook, including “Step Mom” and “Russian Girl,” over concerns about inappropriate content.
Child safety experts have welcomed Meta’s changes but emphasise the need for ongoing oversight. Dr. Jane Smith, a child psychologist, noted that continual monitoring and transparency are vital to protect young users effectively.
Meta has not disclosed how many minors use its AI chatbots or whether these changes will affect overall usage. With AI becoming more integrated into social media, Meta’s revamped guidelines reflect growing industry and public demand for responsible, age-appropriate AI interactions that prioritise young people’s wellbeing.