
Byte-sized diplomacy: The search for safe AI

How can Australia improve public trust in AI, and what can it contribute to global AI safety efforts?

Views on AI safety are by no means homogenous (Getty Images Plus)

Got a big question on technology and security for “Byte-sized Diplomacy”? Send it through here.

Artificial intelligence is already here, diffused through the economy in different ways. So Australia needs to accept a few conflicting realities.

We have a deeply worried population, with what are – let’s face it – some pretty legitimate concerns about AI and big tech companies.

We also have an economy that will be left behind if we don’t enable Australian companies and entrepreneurs to create and adopt AI. And we have people who need to be equipped with AI (and other) skills for future jobs and contributions to society.

Any government investment needs to address these realities. The local AI safety conversation is also heating up. It hasn’t yet attracted the same attention as the government flagging limits on social media access, but AI safety will be part of a big global debate.

Last week saw the release of voluntary AI safety standards and a proposal paper on mandatory guardrails for AI in high-risk settings. Yet despite calls for the safe use of AI technology, Australia is seen as lagging globally on the issue while other jurisdictions move rapidly.

The European Union, for example, adopted its AI Act in 2024, a risk-based regulatory framework governing the development and application of AI systems. At the state level, California legislators recently passed a suite of AI bills, including a controversial AI safety bill known as SB 1047. It would require developers of advanced AI models to adopt and follow safety procedures – including shutdown protocols – to reduce the risk that their models are deployed in ways that cause “critical harm”. The bill has faced vocal opposition, and California’s governor must decide by the end of the month whether to sign it into law or veto it.

An aerial view of Silicon Valley, California, with Apple Park, headquarters of Apple Inc in centre (Amit Lahav/Unsplash)

Establishing AI safety institutes is also seen as a crucial step in managing the complexity of advanced AI through technically informed, globally coordinated action. Australia doesn’t presently have one. But the United States and United Kingdom do, and last month the two announced a partnership. AI safety institutes have also been established in Japan, Canada and Singapore. Some tech companies are on board too, with an agreement on AI safety research, testing and evaluation.

Australia is not alone, however. France, Germany, Italy and South Korea have likewise renewed their commitment to safe and responsible AI and expressed support for an international network of AI safety institutes, but are yet to set up institutes of their own – each country diverges in approach, with competing incentives and varying structures. The network of AI safety institutes is growing, albeit slowly. Some within India have expressed interest, too.

Public concern about AI risks isn’t abating. Australians are more worried about the future of AI than people in most other nations, with 64 per cent saying AI makes them nervous. Eighty per cent think managing AI risk should be a global priority. The concern is clearest in relation to misinformation and disinformation, but it appears across the board: Australians are more uncomfortable than most with AI-produced news and with businesses’ use of their private information.

Views on the topic are by no means homogenous. Numerous voices worry that Australia is not investing enough in AI. It’s a big topic, and Canberra has seen a stream of global AI experts offering views. Kent Walker, Google’s president of global affairs, recently spoke with Australian parliamentarians about AI. Alondra Nelson, former director of the White House Office of Science and Technology Policy when the 2022 Blueprint for an AI Bill of Rights was released, also visited Canberra and Sydney.

I interviewed Signal CEO Meredith Whittaker about her concerns around AI power concentration, consumer rights and extractive business models. I also spoke with Connor Leahy, CEO of the AI safety company Conjecture, about the power of technology and AI safety. Leahy said Australia should look to take advantage of the burgeoning network of AI safety institutes:

“Australia has a lot to offer the AI global safety discussion, with a long history of standing up to tech companies, a historical role in global diplomacy from nuclear to pandemics as well as strong public institutions and legislatures.”

Professor Anton van den Hengel wrote a few days ago that the AI economy is global and we can’t opt out. As he put it, it’s hard to imagine a future without AI, but easy to imagine one without Australian AI. Perhaps an Australian AI Safety Institute could identify crosscutting issues that existing regulatory agencies can’t effectively address, collaborate globally, work diplomatically, build research networks, and help Australians build trust – and communicate their key concerns to policymakers.

Time is not on our side. Australia needs to engage in international efforts to help shape the technology future that we want to live in. We should continue to use our diplomatic experience and technical expertise to support these efforts and inspire those in our region and around the globe to unite for an AI and technology ecosystem that makes the world safer, more secure and more equitable.



