March 27, 2024
AI Hallucinations in the Courtroom: A New Defense for Pras Michel

Every aspect of life, including the American judicial system, is being influenced by artificial intelligence. However, as the technology spreads, the problem of AI-generated falsehoods or nonsense—also known as “hallucinations”—remains.

Prakazrel “Pras” Michel, a former member of the Fugees, alleges that an AI model developed by EyeLevel botched his defence in his multi-million-dollar fraud case, a claim that EyeLevel co-founder and COO Neil Katz rejects. These alleged AI hallucinations are at the heart of Michel’s bid for a new trial.

In April, Michel was found guilty on 10 charges in his conspiracy trial, including witness tampering, falsifying documents, and acting as an unregistered foreign agent. Convicted of acting as an agent of China, Michel faces up to 20 years in prison; prosecutors said he funneled foreign money into efforts to sway American lawmakers.

“We were brought in by Pras Michel’s attorneys to do something unique—something that hadn’t been done before,” Katz stated in an interview.

According to a report by the Associated Press, during closing arguments Michel’s then-defence attorney, David Kenner, incorrectly quoted a lyric from the Sean “Diddy” Combs song “I’ll Be Missing You” and wrongly attributed the song to the Fugees.

According to Katz, EyeLevel was tasked with developing an AI trained on court transcripts that would let attorneys ask complex natural-language questions about what took place during the trial. It did not, he claimed, pull in additional information from the internet.

Court proceedings frequently generate enormous amounts of documentation. The still-ongoing criminal prosecution of FTX founder Sam Bankman-Fried already involves hundreds of documents; separately, the bankruptcy of the defunct cryptocurrency exchange has produced more than 3,300 filings, some of them several pages long.

“This is an absolute game changer for complex litigation,” Kenner stated in an EyeLevel blog post. “The system turned hours or days of legal work into seconds. This is a look into the future of how cases will be conducted.”

On Monday, Michel’s new defence counsel, Peter Zeidenberg, filed a motion for a new trial in the U.S. District Court for the District of Columbia, which Reuters republished online.

“Kenner used an experimental AI program to write his closing argument, which made frivolous arguments, conflated the schemes, and failed to highlight key weaknesses in the government’s case,” Zeidenberg wrote. Michel, he said, is asking for a new trial “because numerous errors—many of them precipitated by his ineffective trial counsel—undermine confidence in the verdict.”

Katz disputed the assertions.

“It did not occur as they say; this team does not know artificial intelligence whatsoever nor of our particular product,” Katz explained. “Their claim is riddled with misinformation. I wish they had used our AI software; they might have been able to write the truth.”

Katz also denied reports that Kenner holds stock in EyeLevel, saying the company was simply hired to support Michel’s legal team.

“The accusation in their filing that David Kenner and his associates have some kind of secret financial interest in our companies is categorically untrue,” Katz explained. “Kenner wrote a very positive review of the performance of our software because he felt that was the case. He wasn’t paid for that; he wasn’t given stock.”

EyeLevel, based in Berkeley, California, was founded in 2019 and builds generative AI models for both consumers and legal professionals. Katz said EyeLevel was one of the first developers to work with OpenAI, the company behind ChatGPT, and that it aims to offer “truthful AI”—robust, hallucination-free tools—to individuals and legal professionals who lack the financial resources to hire a large staff.

Generative AI models are typically trained on massive datasets scraped from the internet and other sources. EyeLevel is different, Katz said, in that this model was trained exclusively on court documents.

“The [AI] was trained exclusively on the transcripts, exclusively on the facts as presented in court, by both sides and also what was said by the judge,” Katz stated. “And so when you ask questions of this AI, it provides only factual, hallucination-free responses based on what has transpired.”

Regardless of how an AI model is trained, experts caution that such programs remain prone to hallucinating, or confidently fabricating information. In April, ChatGPT falsely accused Jonathan Turley, an American criminal defence lawyer, of sexual assault. To support its assertion, the chatbot even supplied a phoney link to a Washington Post article.

The fight against AI hallucinations is a top priority for OpenAI, which has even hired outside red teams to evaluate its arsenal of AI technologies.

“When users sign up to use the tool, we strive to be as transparent as possible that ChatGPT may not always be accurate,” OpenAI states on its website. “However, we recognize that there is much more work to do to further reduce the likelihood of hallucinations and to educate the public on the current limitations of these AI tools.”
