July 21, 2024
Biden's AI Vision: NIST Calls for Input on Responsible Development
AI

The United States National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, has issued a request for information seeking public input to support its responsibilities under the latest presidential executive order on the secure and responsible development and use of artificial intelligence (AI).

The request, open for public input until February 2, 2024, is part of NIST’s commitment to creating guidelines, evaluation processes, and testing environments for AI system assessment. The initiative, inspired by President Joe Biden’s October executive order, aims to establish consensus-based standards and facilitate the safe, reliable, and responsible development of AI within the United States.

U.S. Secretary of Commerce Gina Raimondo emphasized the importance of public participation in shaping the guidelines, stating, “This framework aims to support the AI community in safely, reliably and responsibly developing AI.” The executive order specifically instructed NIST to incorporate evaluation and red-teaming, reflecting a comprehensive approach to AI system assessment.

The NIST request for information seeks input from both AI companies and the general public, focusing on generative AI risk management and strategies to mitigate AI-generated misinformation. Generative AI, known for its ability to create text, photos, and videos from open-ended prompts, has sparked both enthusiasm and concern. The technology’s potential to displace jobs, disrupt elections, and eventually surpass human capabilities is among the issues addressed in the request.

One significant aspect of the inquiry involves determining effective areas for “red-teaming” in AI risk assessment and establishing best practices. The concept of red-teaming, originating from Cold War simulations, involves simulating potential adversarial scenarios or attacks to identify vulnerabilities in a system. This practice has been widely used in cybersecurity to uncover new risks.

In a noteworthy development, the NIST announcement follows the first U.S. public AI red-teaming evaluation event, held in August. The event, coordinated by AI Village, SeedAI, and Humane Intelligence at a cybersecurity conference, marked a pivotal step in identifying AI vulnerabilities and strengthening security measures.

Additionally, in November, NIST announced the formation of a new AI consortium, along with an official notice inviting applicants with relevant credentials. The consortium’s primary objective is to establish and implement specific policies and measurements ensuring a human-centered approach to AI safety and governance.

As the deadline for public input approaches, stakeholders, including AI experts, companies, and the general public, have the opportunity to contribute valuable insights to shape the future guidelines and standards governing AI development and deployment in the United States.

Disclosure Statement: Miami Crypto does not take any external funding, or support to bring crypto news to the readers. We do not have any conflicts of interest while writing news stories on Miami Crypto.
