Meta is overhauling its rules on deepfakes as part of a comprehensive strategy to combat the spread of misleading content. Starting in May, the company will apply “Made with AI” labels to AI-generated videos, images, and audio posted on its platforms.
Meta, the parent company of Facebook, has unveiled significant changes to its policies on digitally created and altered media, with a particular focus on deepfakes. The move aims to address the risks posed by AI-generated and otherwise deceptive media ahead of the upcoming US elections.
Prominent Labeling for High-Risk Altered Media
Meta will also apply separate, more prominent labels to digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance.” These labels will be applied regardless of whether the content was created with AI or other tools.
The new approach represents a shift from removing a limited set of posts to keeping the content online while informing viewers about how it was made. This transparency is intended to help users make informed decisions about the content they consume.
Addressing Incoherent Policies and Preparing for Elections
The updated policies respond to criticism from Meta’s Oversight Board, which called the company’s previous rules on manipulated media “incoherent” and pointed out that non-AI-generated content can be equally misleading and must be addressed as well. With the US presidential election approaching in November, AI-generated deception has become a pressing concern.
The new labels will appear across Meta’s platforms, including Facebook, Instagram, and Threads, while services such as WhatsApp and the Quest virtual reality headsets will follow different rules. The company aims to balance content moderation with user transparency as it prepares for potential AI-driven disinformation campaigns during the elections.