
A first step on the long road to global AI regulation

It didn’t make headlines but the US and EU came together last week to sign an important treaty – leaving partners such as Australia with a choice.

The text represents a clear signal of the existence of a Western alignment regarding the minimum requirements for the future regulation of AI (Getty Images)
Published 10 Sep 2024 

The first ever binding international treaty on artificial intelligence was agreed last week, bringing together the European Union, the United States, Israel, the United Kingdom and six other European (but non-EU) countries. Despite the name – the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law (CETS No. 225) – the treaty is open to all countries, and while not a signatory (yet), Australia participated in its negotiation.

The text is remarkable for being one of the first times that the United States and European Union have formally aligned their views on the regulation of AI. Sceptics may be right to question the capacity of the Framework Convention to deliver concrete results. It has no proper enforcement regime. It is unclear whether countries in the global South will have more opportunities to shape global technology governance. And the result of the forthcoming US election could see the United States head in a radically different direction.

Then there are bigger questions. Will the Convention’s attention to human dignity, equality and non-discrimination generate the level of accountability among key tech industry players needed to shift the current concentration of AI power? Can the treaty overcome the lack of incentives for big tech players to prioritise human rights over innovation?

Nonetheless, the Convention may be the best example of a global initiative aimed at ensuring that the use of AI systems is fully consistent with human rights.


Known as the Vilnius Convention for the Lithuanian capital where the formal signature took place, the text represents a clear signal of the existence of a Western alignment regarding the minimum requirements for the future regulation of AI. In this respect, it is important that other non-European States such as Argentina, Peru and Uruguay were involved in the negotiation.

Asia, however, is notably absent. The Convention will lay bare a divide between Western democracies and other jurisdictions such as China, Saudi Arabia, Pakistan or Venezuela, where some existing deployments of AI appear to run counter to basic ideas about human dignity and the responsibility of states to protect the rights of individuals.

Signatories to the Framework Convention must introduce domestic legislation that requires the public sector to assess the risk of AI deployment to ensure minimum standards are met. These standards are admittedly low, and even they took more than two years to negotiate. However, they still offer a basic floor of compatibility with fundamental human rights.

Building towards agreement (Ben Wicks/Unsplash)

The Vilnius Convention also envisages the possibility of a ban or moratorium on certain applications of AI. It therefore offers broad parameters for the global community on how to regulate AI.

The text does well in recognising those most at risk. “Digital literacy”, and in turn the greater vulnerability of those who lack it, is noted early in the Convention. AI systems must respect equality, with explicit reference to gender equality. And the Convention’s acknowledgement that AI can undermine democracy and the rule of law reflects, among other things, fears about misinformation and the potential for AI to be used to exploit the vulnerabilities of parts of the population.

Yet the Framework Convention’s greatest strength, that it tracks the common denominator of AI regulation on both sides of the Atlantic, is perhaps its greatest weakness. It provides a mandate for signatory countries to regulate the use of AI by the public sector at the national level (understood as federal-level regulation, not necessarily binding on sub-national states or provinces) based on an assessment of the potential risks to human rights, the rule of law or democracy that the deployment of a given AI system may involve.

By contrast, for private sector entities using AI, the Framework Convention envisages a significantly lower threshold, simply demanding that state parties:

“shall address risks and impacts arising from activities within the lifecycle of artificial intelligence systems by private actors […] in a manner conforming with the object and purpose of this convention.”

This means that most states will comply with the Vilnius Convention so long as they have a solid legal regime at the federal level capable of ensuring that the public sector will assess the risks of an AI system before it is deployed. This is precisely what the United States has achieved with its Executive Order of 30 October 2023 and the European Union through its AI Act, which also extends to some high-risk systems deployed in the private sector. Also noteworthy is that systems related to the protection of countries’ national security interests and defence fall outside the scope of the treaty. In other words, the private sector and key public sector uses remain relatively underregulated under this approach.

Where does this leave Australia? Interestingly, the treaty signing took place the same week that the Australian government finally published its Proposals Paper on “introducing mandatory guardrails” for the domestic regulation of AI. The paper outlines three options (a domain-specific approach, framework legislation, or a whole-of-economy AI Act) and marks Australia’s first attempt to reach that minimum standard. If Australia signs, we will be required to do more. Importantly, signing the Vilnius Convention could be the trigger for Australia to accelerate these processes and push towards stronger limits on public sector use. Early signature could also signal commitment and even position Australia, an evident outsider among the key players in the AI game, as a more relevant actor in global technology governance.

Either way, a choice must be made in a relatively narrow time frame. The Vilnius Convention is likely to enter into force at some point in 2025. One could make a case for not signing the Convention yet and keeping Australia’s options open, rather than firmly aligning with the United States and the European Union. Nonetheless, assuming Australia sees itself as seeking to promote AI use consistent with human rights and the rule of law, as the Proposals Paper suggests, it is hard to see any serious argument for delay.



