April 19, 2024
Texas Firm Under Investigation for Misleading US Voters with Joe Biden AI Voice
In a shocking revelation, a Texas-based firm, Life Corporation, and an individual named Walter Monk are under criminal investigation for orchestrating misleading robocalls using an AI-generated voice resembling President Joe Biden. The New Hampshire Department of Justice’s Election Law Unit identified the perpetrators behind the deceptive messages aimed at voters in the Jan. 23 primary.

Attorney General John Formella announced the findings, emphasizing the misuse of AI deepfake technology to interfere in the 2024 presidential election. The state attorney general’s office condemned the robocalls as misinformation and urged New Hampshire voters to disregard the messages.

AI deepfake tools, powered by advanced algorithms, produce convincingly realistic digital content, including audio recordings, videos, and images. These tools have raised significant concerns about their potential to deceive and manipulate public opinion.

The Election Law Unit swiftly responded to the voter suppression calls, issuing a cease-and-desist order to Life Corporation for violating New Hampshire’s statutes on bribery, intimidation, and suppression. The order mandates immediate compliance, with the unit reserving the right to pursue further enforcement actions.

Investigators traced the origin of the calls to a Texas-based telecommunications provider, Lingo Telecom. The Federal Communications Commission (FCC) joined the investigation, issuing a cease-and-desist letter to Lingo Telecom for its alleged role in transmitting illegal robocall traffic featuring AI-generated voice cloning.

FCC Chairwoman Jessica Rosenworcel proposed classifying calls featuring AI-generated voices as illegal under the Telephone Consumer Protection Act, signaling a potential crackdown on such deceptive practices.

The proliferation of deepfake technology has heightened concerns globally, with organizations like the World Economic Forum and the Canadian Security Intelligence Service warning about the detrimental impact of AI-generated disinformation campaigns.

As authorities intensify efforts to combat the spread of misleading content, this incident underscores the urgent need for robust regulations and enforcement mechanisms to safeguard the integrity of democratic processes and protect voters from manipulation.

