March 27, 2024
Biden's AI Vision: NIST Calls for Input on Responsible Development

The United States National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, has issued a request for information (RFI) seeking public input to support its responsibilities under the recent presidential executive order on the safe, secure, and trustworthy development and use of artificial intelligence (AI).

The request, open for public comment until February 2, 2024, is part of NIST’s mandate to create guidelines, evaluation processes, and testing environments for assessing AI systems. The initiative, prompted by President Joe Biden’s October executive order, aims to establish consensus-based standards and support the safe, reliable, and responsible development of AI in the United States.

U.S. Secretary of Commerce Gina Raimondo emphasized the importance of public participation in shaping the guidelines, stating, “This framework aims to support the AI community in safely, reliably and responsibly developing AI.” The executive order specifically instructed NIST to incorporate evaluation and red-teaming, reflecting a comprehensive approach to AI system assessment.

The NIST request for information seeks input from both AI companies and the general public, focusing on generative AI risk management and strategies for mitigating AI-generated misinformation. Generative AI, known for its ability to create text, images, and videos from open-ended prompts, has sparked both enthusiasm and concern. Worries about job displacement, electoral disruption, and the technology eventually surpassing human capabilities are among the issues the request addresses.

One significant aspect of the inquiry involves identifying the areas where “red-teaming” is most effective for AI risk assessment and establishing best practices for it. Red-teaming, a concept that originated in Cold War simulations, involves staging adversarial scenarios or attacks to expose a system’s vulnerabilities, and the practice is widely used in cybersecurity to uncover previously unknown risks.

Notably, the NIST announcement follows the first U.S. public red-teaming evaluation event, held in August. The event, coordinated by AI Village, SeedAI, and Humane Intelligence at a cybersecurity conference, marked a pivotal step toward identifying AI vulnerabilities and strengthening security measures.

Additionally, in November NIST announced the formation of a new AI consortium, publishing an official notice inviting applications from organizations with relevant expertise. The consortium’s primary objective is to develop and implement specific policies and measurements to ensure a human-centered approach to AI safety and governance.

As the deadline for public input approaches, stakeholders, including AI experts, companies, and the general public, have the opportunity to contribute valuable insights to shape the future guidelines and standards governing AI development and deployment in the United States.

Image: Wallpapers.com

