March 27, 2024

Meta to Take Action Against AI Misuse ahead of EU Parliament Elections

In an effort to combat the misuse of generative artificial intelligence (AI) and safeguard the electoral process, Meta, the parent company of Facebook and Instagram, has unveiled its comprehensive strategy ahead of the 2024 European Parliament elections scheduled for June.

In a blog post on February 25, Meta’s Head of EU Affairs, Marco Pancini, emphasized the company’s commitment to applying the principles behind its “Community Standards” and “Ad Standards” to AI-generated content. Pancini stated, “AI-generated content is also eligible to be reviewed and rated by our independent fact-checking partners.” Notably, one of the ratings will indicate whether the content is “altered,” encompassing “faked, manipulated, or transformed audio, video, or photos.”

Meta’s existing policies already mandate the labeling of photorealistic images created using its AI tools. The recent announcement extends this labeling requirement to AI-generated content produced by external tools, including those from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, when posted on Meta’s platforms.

To enhance transparency, Meta plans to introduce features allowing users to disclose when they share AI-generated video or audio content, with potential penalties for non-disclosure. Advertisers running political, social, or election-related ads that were altered or created using AI must also disclose that AI was used. Meta reported removing 430,000 ads across the European Union between July and December 2023 for failing to carry a disclaimer.

The move comes in anticipation of major global elections in 2024; both Meta and Google have previously introduced rules governing AI-generated political advertising on their platforms. On December 19, 2023, Google announced limitations on how its AI chatbot Gemini and its generative search feature would respond to election-related queries in the run-up to the 2024 U.S. presidential election.

OpenAI, the developer of the AI chatbot ChatGPT, has taken steps to address fears of AI interference in global elections by implementing internal standards. On February 17, a coalition of 20 companies, including Microsoft, Google, Anthropic, Meta, OpenAI, Stability AI, and X, pledged to curb AI election interference, recognizing the potential dangers if left unchecked.

Governments worldwide are also actively combating AI misuse ahead of local elections. The European Commission initiated a public consultation on proposed election security guidelines to counter threats posed by generative AI and deepfakes. In the U.S., the use of AI-generated voices in automated phone calls was declared illegal after a deepfake of President Joe Biden's voice circulated in scam robocalls that misled the public.


