March 27, 2024
Meta to Label AI-Generated Photos on Facebook and Instagram

Meta, the parent company of Facebook, plans to implement measures to label AI-generated photos uploaded to its platforms, including Facebook, Instagram, and Threads. As election seasons heat up globally, Meta aims to address the growing challenge of discerning AI-generated media from real content.

The company also intends to take action against users who fail to disclose whether a realistic video or piece of audio has been created using AI. Nick Clegg, Meta’s president of global affairs, stated in an interview that these measures are designed to “galvanize” the tech industry in response to the increasing difficulty of distinguishing AI-generated media from authentic content.

The White House has advocated for companies to label AI-generated content. Simultaneously, Meta is developing tools to identify synthetic media, even if the metadata has been altered to obscure the involvement of AI in its creation. Currently, Meta applies an “Imagined with AI” watermark to images generated using its own Imagine AI generator. The company plans to extend this practice to AI-generated photos produced with tools from various providers, including Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.

While Meta has made progress in identifying AI-generated images, standards for recognizing AI-generated video and audio lag behind. Meta’s president emphasized the need for vigilance, particularly regarding the potential misuse of AI-generated video and audio content to deceive the public during significant political events. Acknowledging the challenges, he stated:

“Do I think that there is a possibility that something may happen where, however quickly it’s detected or quickly labeled, nonetheless we’re somehow accused of having dropped the ball? Yeah, I think that is possible, if not likely.”

In collaboration with organizations such as Partnership on AI, Meta is actively contributing to existing content authenticity initiatives. Adobe recently introduced the Content Credentials system, which embeds content provenance information into image metadata. Additionally, Google expanded its SynthID watermark to audio files after initially releasing it in beta for images.

Looking ahead, Meta intends to require users to disclose when they post realistic AI-generated video or audio. Users who fail to comply will face penalties ranging from warnings to removal of the offending post. As the tech industry grapples with the challenges posed by AI-generated content, Meta’s initiatives aim to foster transparency, combat misinformation, and ensure responsible use of AI-generated media on its platforms.

