Meta, the parent company of popular platforms like Facebook and Instagram, is changing how it handles AI-generated media in response to feedback from the Oversight Board. Starting in May 2024, Meta will begin labeling video, audio, and images that it identifies as AI-generated or AI-manipulated, based on industry-standard indicators or on users' own disclosures.
Initially, Meta's approach focused mainly on AI-altered videos, but the Oversight Board urged the company to broaden it in light of advances in AI technology. The Board pointed to the emergence of realistic AI-generated content in other formats, including audio and photos, and warned that removing manipulated media that does not otherwise violate the Community Standards could harm freedom of expression.
In response to these concerns, Meta worked with industry partners to establish standardized technical criteria for identifying AI-generated content. That collaboration led to the "Made with AI" label, which is intended to give users the information they need to distinguish AI-altered content from authentic material.
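The "industry-standard indicators" at the heart of this system typically take the form of metadata embedded in a file, such as an IPTC digital source type value or C2PA Content Credentials. As a rough illustration of the concept only, and not a description of Meta's actual detection pipeline, the sketch below scans a file's raw bytes for two such markers; the marker strings and the detect_ai_indicators helper are assumptions made for this example, and a real system would parse and cryptographically verify the metadata rather than string-match it.

```python
import sys

# Assumed indicator values for illustration:
# 1) The IPTC "Digital Source Type" URI commonly used to mark AI-generated media.
# 2) Byte signatures associated with C2PA Content Credentials stored in JUMBF boxes.
IPTC_AI_SOURCE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
C2PA_MARKERS = (b"c2pa", b"jumb")


def detect_ai_indicators(path: str) -> list[str]:
    """Return the names of any AI-provenance indicators found in the file's raw bytes.

    This is a naive byte scan for demonstration purposes; production systems
    parse the metadata structures and validate C2PA manifests cryptographically.
    """
    with open(path, "rb") as f:
        data = f.read()

    found = []
    if IPTC_AI_SOURCE in data:
        found.append("iptc:trainedAlgorithmicMedia")
    if any(marker in data for marker in C2PA_MARKERS):
        found.append("c2pa:content-credentials")
    return found


if __name__ == "__main__":
    # Usage: python detect_indicators.py image1.jpg clip1.mp4 ...
    for media_path in sys.argv[1:]:
        indicators = detect_ai_indicators(media_path)
        label = "Made with AI (candidate)" if indicators else "no indicator found"
        print(f"{media_path}: {label} {indicators}")
```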
Meta remains dedicated to promoting transparency and collaboration within the industry. The company will continue to engage with industry peers through initiatives like the Partnership on AI and maintain open communication with governments and civil society organizations.
By implementing these changes and labels, Meta is striving to ensure a safer and more informed online environment for all its users.