April 19, 2024
Biden Administration Takes Steps to Bolster AI Governance Through Consortium
Policy & Regulation

In a significant move toward addressing the challenges associated with artificial intelligence (AI) development and deployment, the United States National Institute of Standards and Technology (NIST) and the Department of Commerce have announced the establishment of the AI Safety Institute Consortium. The initiative aims to promote a human-centered approach to AI safety and governance in the United States.

Published in the Federal Register on November 2, a document from NIST states:

“This notice is the initial step for NIST in collaborating with non-profit organizations, universities, other government agencies, and technology companies to address challenges associated with the development and deployment of AI.”

The collaborative effort calls on its members to contribute to several critical functions, including the development of measurement and benchmarking tools, policy recommendations, red-teaming exercises, psychoanalysis, and environmental analysis. These activities are intended to promote the responsible development and deployment of AI technology.

This announcement follows a recent executive order issued by U.S. President Joe Biden, which set forth six new standards for AI safety and security. It is essential to note, however, that these standards have not yet been formally legislated. While countries in Europe and Asia have been actively formulating policies to govern AI systems, particularly concerning user privacy, security, and the potential for unintended consequences, the United States has lagged behind in this domain.

President Biden’s executive order marks a notable step toward establishing specific policies to govern AI in the United States, and the formation of the Safety Institute Consortium further underscores the nation’s commitment to addressing AI governance challenges and strengthening safety measures.

Nonetheless, there is still no clear timeline for enacting laws that specifically govern AI development and deployment in the United States, beyond the existing regulations that apply to businesses and technology generally. Many experts in the field argue that these current regulations are insufficient for the rapidly evolving AI sector.

The establishment of the AI Safety Institute Consortium signals a proactive approach to fostering responsible and secure AI development in the United States, aligning the nation with international efforts to ensure the ethical and safe use of artificial intelligence. The collaboration’s success will likely play a crucial role in shaping the future of AI governance in the country.

Image: Wallpapers.com

