April 19, 2024
Global Focus on AI Regulations as EU Nears Deal

Growing concerns over the potential misuse of AI have prompted governments worldwide, including the U.S., the U.K., China, and the G7 nations, to accelerate regulatory efforts. Europe has taken a pioneering role with the European Union's (EU) landmark AI Act.

This comprehensive regulatory framework, recognized for its innovation, aims to govern various AI tools, including generative AI like OpenAI’s ChatGPT and Google’s Bard.

Despite initial delays, reports indicate that negotiators reached a consensus on Dec. 7 to establish controls specifically for generative AI tools under the EU AI Act.

In June, the Australian government opened an eight-week consultation seeking feedback on a potential prohibition of “high-risk” AI tools. The consultation, extended until July 26, explored how to ensure the “safe and responsible use of AI,” weighing options such as ethical frameworks, dedicated regulations, or a combination of both.

China introduced temporary regulations, effective Aug. 15, to oversee its generative AI industry. The rules require service providers to pass security assessments before bringing AI products to the mass market. Notably, four Chinese technology companies, including Baidu and SenseTime, launched AI chatbots to the public on Aug. 31 after obtaining government approval.

France’s privacy watchdog, CNIL, opened an investigation in March into complaints about ChatGPT’s potential privacy breaches, following the chatbot’s temporary ban in Italy. Separately, the Italian Data Protection Authority launched a “fact-finding” investigation on Nov. 22 into the data-gathering processes used to train AI algorithms.

On a global scale, the U.S., the U.K., Australia, and 15 other countries jointly released guidelines to safeguard AI models from tampering. Emphasizing that models should be “secure by design,” the guidelines aim to strengthen the protection of AI technology worldwide.
