March 27, 2024
Spotting AI Deepfakes Before 2024 Elections

As the United States braces for a significant election cycle in 2024, the proliferation of publicly accessible artificial intelligence (AI) tools has given rise to a concerning trend: political deepfakes. These sophisticated manipulations of audio and video content require voters to develop new skills to distinguish reality from deception.

On February 27, Mark Warner, the Chair of the Senate Intelligence Committee, warned that the U.S. is “less prepared” for election fraud in 2024 than it was for the 2020 election. This heightened unease is primarily attributed to the surge in AI-generated deepfakes in the U.S. over the past year. Data from SumSub, an identity verification service, shows a 1,740% increase in deepfakes detected in North America and a tenfold rise in global detections in 2023.

An alarming incident in New Hampshire illustrates the severity of the issue. Residents reported receiving robocalls featuring an AI-generated imitation of U.S. President Joe Biden’s voice, urging them not to vote in the state primary on January 23. In response, U.S. regulators swiftly banned AI-generated voices in automated phone calls, deeming them illegal under telemarketing laws.

Despite regulatory efforts, concerns persist as the U.S. approaches Super Tuesday on March 5, a crucial day for primary elections and caucuses in several states. The worry is particularly focused on the potential dissemination of false information and deepfake identity fraud.

Pavel Goldman Kalaydin, Head of AI and Machine Learning at SumSub, sheds light on how voters can prepare for and identify deepfakes. He underscores two types of deepfakes—those created by “tech-savvy teams” employing advanced technology and hardware, and “lower-level fraudsters” using readily available tools on consumer computers. Kalaydin emphasizes the importance of vigilance in scrutinizing content, stating, “It’s important that voters are vigilant in scrutinizing the content in their feed and remain cautious of video or audio content.”

“Individuals should prioritize verifying the source of information, distinguishing between trusted, reliable media and content from unknown users.”

According to Kalaydin, several distinctive indicators can give a deepfake away.

“If any of the following features are detected: unnatural hand or lips movement, artificial background, uneven movement, changes in lighting, differences in skin tones, unusual blinking patterns, poor synchronization of lip movements with speech or digital artifacts, the content is likely generated.”
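The visual tells Kalaydin lists are the same signals that automated detectors quantify. As a purely illustrative sketch (not SumSub's technology, and far simpler than any production detector), a crude check for one of those artifacts, namely abrupt lighting changes between consecutive video frames, could look like this:

```python
import numpy as np

def lighting_consistency_score(frames):
    """Crude heuristic: average absolute change in mean brightness
    between consecutive frames. Repeated abrupt lighting jumps are
    one (weak) hint of generated or spliced video."""
    brightness = [float(np.mean(f)) for f in frames]
    return float(np.mean(np.abs(np.diff(brightness))))

# Synthetic example: a clip with stable lighting vs. an erratic one.
rng = np.random.default_rng(0)
stable = [np.full((64, 64), 120, dtype=np.uint8) for _ in range(30)]
erratic = [np.full((64, 64), 120 + int(rng.integers(-60, 60)), dtype=np.uint8)
           for _ in range(30)]

print(lighting_consistency_score(stable))   # 0.0 for constant lighting
print(lighting_consistency_score(erratic))  # noticeably larger
```

Real detection systems combine many such cues (lip-sync timing, blink frequency, skin-tone consistency, compression artifacts) using trained models rather than a single hand-written rule; this sketch only shows the general idea of scoring frame-to-frame consistency.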

Despite a tenfold increase in worldwide deepfakes, Kalaydin anticipates further growth, especially during election periods. He warns that advancements in technology will soon make it “impossible for the human eye to detect deepfakes without dedicated detection technologies.”

“The democratization of AI technology has granted widespread access to face swap applications and the ability to manipulate content to construct false narratives.”

The crux of the problem, according to Kalaydin, lies in the generation and distribution of deepfakes. Increased accessibility to AI has paved the way for more individuals to create fake content, contributing to the dissemination of misinformation. Kalaydin suggests potential solutions, including mandatory checks for AI or deepfaked content on social media platforms and user verification mechanisms that hold verified users accountable for content authenticity.

“Platforms need to leverage deepfake and visual detection technologies to guarantee content authenticity, protecting users from misinformation and deepfakes.”

Recognizing the global impact of deepfake-related challenges, governments worldwide are weighing countermeasures. Ahead of its 2024 elections, India released an advisory requiring government approval before the release of new, unreliable AI tools. In Europe, the European Commission issued AI misinformation guidelines for platforms operating in the region, prompting Meta to unveil its strategy to combat generative AI misuse on its platforms within the European Union.


