Meta is rolling out interim safety changes for its AI chatbots to better protect teenage users. The company is training its AI models not to engage with teens on sensitive topics such as self-harm, eating disorders, and inappropriate romantic conversations. Previously, the AI was permitted to discuss such topics when deemed "appropriate."
Meta will also limit teen accounts to a select group of AI characters designed to "promote education and creativity." These interim measures serve as a stopgap ahead of a more comprehensive safety overhaul for the platform's AI features, and they arrive as AI companies face heightened scrutiny over their safety protocols for minors.