Meta's Oversight Board has called for a significant overhaul of the company's policies for handling AI-generated content, saying its current approach is inadequate for managing misinformation during conflicts. The recommendation was prompted by a case involving a fabricated video from the 2025 Israel-Iran war, which showed that Meta's systems cannot keep pace with the speed and scale of AI-driven falsehoods in crisis situations.

The board urged Meta to create a new, comprehensive Community Standard dedicated to AI-generated content. Key recommendations include developing stronger tools for real-time detection, applying labels more consistently to inform users, and fully implementing content provenance standards such as C2PA (from the Coalition for Content Provenance and Authenticity) to document the origin of media. The board's policy recommendations are not binding, but Meta, which typically responds within 60 days, faces public pressure to act on the findings.