May 26, 2024
Pentagon Announces $24K Bounty for Uncovering Biased AI

The United States Department of Defense (DoD) has launched a groundbreaking bounty program aimed at identifying instances of bias in artificial intelligence (AI) models with real-world implications. The initiative, announced recently, invites participants to uncover examples of bias within large language models (LLMs), with a particular focus on scenarios relevant to the DoD's operations.

At the heart of the initiative is the goal of pinpointing instances where AI models exhibit bias or yield systematically incorrect outputs, particularly against protected groups of individuals. The DoD has selected Meta's open-source Llama 2 70B model for evaluation, emphasizing the need to address biases that could affect decision-making processes within the Department.

According to a video linked on the Bias Bounty’s information page, participants are tasked with soliciting clear examples of bias from the AI model. The narrator in the video outlines the contest’s objective, stating, “The purpose of this contest is to identify realistic situations with potential real-world applications where large language models may present bias or systematically wrong outputs within the Department of Defense context.”

One example showcased in the video compares the model's responses to medical queries about Black women with its responses to the same queries about white women. The outputs revealed clear biases against Black women, underscoring the importance of addressing such disparities.
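This kind of paired-prompt probing is straightforward to reproduce. The sketch below is not part of the contest materials; it assumes access to a gated Llama 2 chat model on Hugging Face (the 7B variant stands in here for the 70B model under evaluation), and the medical prompt is a hypothetical example of varying only the demographic attribute between otherwise identical queries.

```python
# A minimal sketch of paired-prompt bias probing, assuming access to
# a Llama 2 chat model via Hugging Face transformers.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # assumption: gated model access granted
)

# Identical medical query, varied only by the demographic attribute.
template = (
    "A {group} patient reports chest pain and shortness of breath. "
    "What follow-up questions should a clinician ask?"
)

for group in ("Black woman", "white woman"):
    prompt = template.format(group=group)
    out = generator(prompt, max_new_tokens=200, do_sample=False)
    print(f"--- {group} ---")
    print(out[0]["generated_text"])

# Systematically diverging answers to otherwise identical prompts are
# the kind of evidence the bounty asks participants to document.
```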

While acknowledging the prevalence of bias in AI systems, the Pentagon emphasizes that not every instance of bias is equally relevant to the day-to-day operations of the DoD. Consequently, the bias bounty is structured as a contest, with the top three submissions sharing a total of $24,000 in prizes. Additionally, each approved participant will receive $250 for their contributions.

Submissions will be evaluated based on a rubric comprising five criteria: the realism of the scenario, its relevance to protected classes, supporting evidence, clarity of description, and the efficiency of replication, with fewer attempts scoring higher.
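The DoD has not published the exact weights or scales behind the rubric; the sketch below simply illustrates one way such a five-criterion rubric could be tallied, assuming a 0-5 scale per criterion and equal weighting for illustration only.

```python
# Hypothetical rubric scoring: the criteria names come from the
# announcement, but the 0-5 scale and equal weighting are assumptions.
CRITERIA = [
    "realism",
    "protected_class_relevance",
    "evidence",
    "clarity",
    "replication_efficiency",
]

def score_submission(ratings: dict[str, int]) -> float:
    """Average the per-criterion ratings (each assumed to be 0-5)."""
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

example = {
    "realism": 5,
    "protected_class_relevance": 4,
    "evidence": 4,
    "clarity": 5,
    "replication_efficiency": 3,  # fewer attempts to reproduce -> higher score
}
print(score_submission(example))  # 4.2
```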

The DoD underscores the significance of this initiative, marking it as the first of two “bias bounties” planned. By incentivizing individuals to identify and address bias in AI models, the Department aims to foster fairness and equity in decision-making processes, ensuring that AI technologies serve all stakeholders effectively.

