
18 countries conclude pact against misuse of AI

Global Coalition Against AI Misuse Sets Precedent

Eighteen nations, including heavyweights like the US and Germany, have joined forces to curb AI misuse. They have unveiled what appears to be the first international agreement aimed at protecting users from the wrongful use of AI, all laid out in a 20-page document that is as necessary as it is significant.

The 18 nations released the agreement on a Sunday, declaring that companies that develop and deploy AI should prioritize protecting their customers and the public from its misuse.

Jen Easterly, Director of the US Cybersecurity and Infrastructure Security Agency, spoke to Reuters, stating, "For the first time, we're seeing an acknowledgement that it shouldn't just be about cool features and how fast we can get them to market..." Instead, the focus should be on security during the design phase.

Non-binding but Impactful

But hold your horses! While this agreement isn't legally binding, it's a significant step forward. It contains general recommendations such as monitoring AI systems for misuse, protecting data from manipulation, and vetting software providers. According to Easterly, the agreement's importance lies in the fact that so many nations support prioritizing AI system safety.

This latest move is part of a broader push to shape the international development of AI. Europe, for instance, is leading the way on AI regulation: the governments of France, Germany, and Italy recently agreed on an approach to regulating artificial intelligence that includes mandatory self-regulation through codes of conduct for foundation models.

The 18 nations that have signed this agreement include the UK, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore.

Protecting Customers above All

The 18-nation agreement underscores the importance of companies utilizing AI to prioritize customer and public protection from misuse. This aligns with non-binding guidelines promoted at international conferences and agreements on AI development.

Moving forward, these multinational conferences and agreements will continue to shape the global AI landscape by fostering cooperation and setting guidelines for responsible AI deployment.

Enrichment Data:

A related non-binding international instrument is the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law. Here are its key details and guidelines:

  1. Scope: This Convention aims to ensure that the use of AI complies with existing international legal standards on human rights, democracy, and the rule of law. It has a global focus, with 57 countries from almost all regions participating, including all G7 members.
  2. Signatories: The Convention was signed by the EU (for all 27 Member States), Andorra, Georgia, Iceland, Israel, Moldova, Norway, San Marino, the UK, and the US, among others.
  3. General Obligations: States are required to guarantee the protection of human rights, the integrity of democratic processes, and respect for the rule of law throughout the entire lifecycle of AI systems.
  4. Principles and Measures: The Convention establishes a set of principles that states must follow when dealing with AI. It requires legal remedies and procedural safeguards, as well as mechanisms to assess the risks and adverse impacts of AI.
  5. Application to Public and Private Actors: The provisions of the Convention apply directly to AI-related activities of public authorities. Private actors are required to address risks and impacts in a manner conforming with the object and purpose of the Convention.
  6. Implementation Flexibility: This Convention does not prescribe specific bans on AI applications but instead obliges states to consider the need for moratoria or bans. The measures to be taken should follow a graduated and differentiated approach, depending on the severity and probability of the occurrence of adverse impacts of specific AI systems.

This framework provides a comprehensive approach to regulating AI, emphasizing the need for responsible AI development and use to protect human rights and democratic values.
