April 19, 2024
AI Firms to Report Safety Tests to US Government

The Biden administration will begin enforcing a new requirement that developers of major AI systems disclose their safety test results to the government. The White House AI Council will meet to assess progress on President Joe Biden's executive order aimed at overseeing the rapidly advancing technology.

The order, signed three months ago, requires AI companies to share crucial information, including safety test results, with the Commerce Department under the Defense Production Act. The government wants assurance that AI systems are safe before they are released to the public, emphasizing that companies must meet safety standards.

While companies have committed to certain categories of safety tests, no common standard has yet been established; the National Institute of Standards and Technology is developing a uniform framework for assessment. AI's growing importance to the economy and national security is prompting broader federal action, including potential legislation and collaboration with other countries.

The Commerce Department is working on a draft rule for U.S. cloud companies that provide servers to foreign AI developers. Nine federal agencies have completed risk assessments of AI use in critical national infrastructure, and efforts are underway to hire AI experts and data scientists at federal agencies. The government aims to regulate and manage this transformative technology effectively while acknowledging its far-reaching potential impact.
