March 27, 2024
AI Firms Restrict, Not Ban Election ‘Deepfakes’

Several prominent artificial intelligence companies are set to commit to an “accord” pledging to develop technology that can identify, label, and control AI-generated content designed to deceive voters ahead of crucial elections in multiple countries this year. The accord, initiated by Google, Microsoft, Meta, OpenAI, Adobe, and TikTok, does not outright prohibit deceptive political AI content, according to a copy obtained by The Washington Post. X (formerly Twitter) is not a signatory to the agreement.

The document serves as a manifesto acknowledging the risks posed by AI-generated content, much of which is created and posted on these companies’ platforms. The accord outlines measures to mitigate these risks, such as labeling suspected AI content and educating the public about the potential dangers of AI in elections. The agreement emphasizes that intentionally generating and distributing deceptive AI election content can undermine the integrity of electoral processes.

AI-generated images and videos, often referred to as “deepfakes,” have existed for several years. But advancements over the past year have dramatically improved their quality, making it difficult to distinguish fakes from authentic videos, images, and audio recordings. Tools for creating deepfakes have also become more widely accessible, simplifying their production.

AI-generated content has already surfaced in election campaigns worldwide. Last year, an advertisement supporting Republican presidential candidate Ron DeSantis used AI to mimic the voice of former President Donald Trump. In Pakistan, former prime minister Imran Khan used AI to deliver speeches while in jail. And in January, a robocall impersonating President Biden used an AI-generated version of his voice to discourage people from voting in the New Hampshire primary.

Tech companies have faced pressure from regulators, AI researchers, and political activists to curb the proliferation of fake election content. The new accord resembles a voluntary pledge the same companies, along with others, made last July to identify and label fake AI content on their platforms. In the latest agreement, the companies also commit to educating users about deceptive AI content and to being transparent about their efforts to identify deepfakes.

The companies already have their own policies on political AI-generated content: TikTok restricts fake AI content of public figures used for political or commercial endorsements, Meta requires political advertisers to disclose their use of AI, and YouTube mandates that creators label realistic-looking AI-generated content. Even so, a comprehensive system for identifying and labeling AI content across social media platforms has yet to materialize.

Google has showcased “watermarking” technology but does not require its customers to use it. Adobe, the owner of Photoshop, has positioned itself as a leader in curbing AI content, yet its stock photo website recently featured fake images depicting the war in Gaza. The tech industry continues to grapple with the challenges posed by AI-generated content as it tries to balance technological advancement against the risks to electoral processes.

Image by Kerfin7 on Freepik
