April 19, 2024

18 Nations Unveil Comprehensive AI Security Protocols: Push for Inherent Model Safety

The United States, the United Kingdom, Australia, and 15 other nations have jointly published global recommendations to safeguard artificial intelligence (AI) models against manipulation, emphasizing the need for companies to make their models inherently secure. On November 26, the 18 countries released a 20-page document delineating the cybersecurity protocols that AI companies should adopt when creating and deploying AI models.

The document's authors noted that security is often a secondary consideration in a rapidly evolving industry. The guidelines consist primarily of broad recommendations: maintaining strict control over the infrastructure supporting AI models, continuously monitoring for tampering both before and after deployment, and training staff on cybersecurity risks.

Notably absent from these recommendations were contentious AI topics such as the regulation of image-generating models, deep fakes, and data collection practices for training models, all of which have spurred copyright infringement lawsuits against multiple AI firms.

In a statement, U.S. Secretary of Homeland Security Alejandro Mayorkas underscored the pivotal moment in AI advancement, stating, “We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy.”

These guidelines align with other governmental initiatives in the AI domain, such as the recent AI Safety Summit in London, where governments and AI entities convened to establish a consensus on AI development strategies.

Simultaneously, the European Union is finalizing the specifics of its AI Act to govern the domain, and U.S. President Joe Biden issued an executive order in October to set benchmarks for AI safety and security. However, these initiatives faced resistance from the AI industry, which contends that they might hinder innovation.

Among the endorsers of the new ‘secure by design’ guidelines are Canada, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, South Korea, and Singapore. Several AI companies, including OpenAI, Microsoft, Google, Anthropic, and Scale AI, contributed to shaping these guidelines.


