Alan Turing’s pioneering work with algorithms in 1941 led to the cracking of the Nazi Enigma code, shortening the Second World War by years and saving millions of lives. He could scarcely have imagined then that his genius would generate so much angst today.
Yet in a BBC radio interview only a decade later, it was clear Turing knew what he had unleashed:

“I believe the attempt to make a thinking machine will help us greatly in finding out how we think ourselves.”
Today, “thinking machines” are ubiquitous, and algorithms influence much of what we buy, see and learn. They connect people with communities, content and experiences. However, the undeniable advantages of these algorithms, particularly in driving social media engagement, come with an urgent need to examine how they are designed and how the data and preferences we contribute lead us down certain online paths.
Experience so far suggests that systems intended to maximise user engagement may also contribute to a range of risks and harms to individuals and societies, even to democracy and the rules-based international order itself.
This is why governments and regulators around the world have algorithms in their crosshairs. Europe’s Digital Services Act, the UK’s Online Safety Bill, and the US “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” are all part of this movement. They share an ambition to harness the genius of algorithms for good while minimising their risk of harm in a modern world that cannot function without them.
Australia enjoys a clear first-mover status in the broader field of regulating digital services for safety, having already bedded down its second phase of reform through the Online Safety Act 2021. Regulatory powers enshrined in this legislation are administered by the eSafety Commissioner, established seven years ago as the first government body in the world dedicated to protecting citizens from online harms, such as targeted harassment and abhorrent content.
As eSafety Commissioner, my role is to lead this work both domestically and offshore, representing Australia in international forums that work to lift safety standards worldwide, such as the Child Dignity Alliance Technical Working Group, which I chair, the WePROTECT Global Alliance and the World Economic Forum Global Coalition for Digital Safety.
As for the technology industry, it has long known that “outrage sells” and that human attention is a valuable commodity. But drawing people in by promoting conflict and extreme content can normalise and entrench prejudice, hate and polarisation, and tear at the fabric of democratic society.
What we don’t know is how much of the discord we see online today can be attributed to algorithms promoting negative content that keeps platforms “sticky”, and how much is a true reflection of individual preferences – or society’s fault lines. Greater transparency about the data an algorithm ingests, the objectives it has been designed to achieve, and the outcomes it produces for users is critical to improving our understanding.
At eSafety, we examined these questions in our recent position statement on recommender systems and algorithms. It acknowledges there is no simple solution to the complex array of risks inherent in recommender systems. Nor do we need to break open the “black box” to see exactly what is happening inside every algorithm; what is required is transparency around design choices and objectives. Drawing on Safety by Design – an eSafety initiative that puts user safety and human rights at the centre of how online products and services are designed and developed – and its risk assessment tools, the statement offers some useful insights and recommendations.
For instance:
- Enhancing transparency reporting and auditing practices. Putting more information in the public domain, accessible to researchers, experts and regulators, fosters accountability, improves future interventions and builds trust with users.
- Adjusting recommender algorithms to weight quality metrics, such as the authoritativeness or diversity of content, rather than engagement alone. This can reduce the likelihood of users falling down rabbit holes of increasingly extreme content (a simple sketch of this idea follows the list).
- Providing greater choice, control and feedback loops, so that people can explicitly shape their online experience – for example, through unwanted-content flags and alternative curation models for news feeds.
- Introducing additional friction through design features such as nudges or prompts to read an article before sharing, or limits on the volume of sharing. This can slow the spread of harmful or misleading content (see the second sketch below).
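To make the re-ranking idea concrete, here is a minimal sketch in Python of how a feed might blend engagement with quality and diversity signals. Every name, score and weight is an illustrative assumption; it describes no real platform’s system.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    engagement: float         # predicted engagement (clicks, likes), 0..1
    authoritativeness: float  # hypothetical source-quality signal, 0..1
    topic: str

def rerank(items, w_engage=0.4, w_quality=0.4, w_diverse=0.2):
    """Greedily order a feed by a weighted blend of signals.

    An item's score is boosted when its topic has not yet appeared
    among the items already selected, rewarding diversity rather
    than engagement alone. Weights here are invented for illustration.
    """
    ranked, seen_topics, pool = [], set(), list(items)
    while pool:
        best = max(
            pool,
            key=lambda it: (w_engage * it.engagement
                            + w_quality * it.authoritativeness
                            + w_diverse * (it.topic not in seen_topics)),
        )
        pool.remove(best)
        seen_topics.add(best.topic)
        ranked.append(best)
    return ranked
```

The design choice at stake is visible in the signature: shifting weight from w_engage towards w_quality and w_diverse makes the trade-off explicit, auditable and reportable.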
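Similarly, the friction measures in the final point can be as simple as a gate in the share path. Again, this is only a sketch under assumed thresholds, not any service’s actual behaviour.

```python
import time

# Illustrative thresholds only; a real service would tune these empirically.
MIN_READ_SECONDS = 10      # nudge if the article was opened only moments ago
MAX_SHARES_PER_HOUR = 5    # rate-limit rapid resharing

def attempt_share(opened_at: float, shares_last_hour: int) -> str:
    """Return the outcome of a share attempt: shared, nudged or limited."""
    if shares_last_hour >= MAX_SHARES_PER_HOUR:
        return "limited: sharing cap reached, try again later"
    if time.time() - opened_at < MIN_READ_SECONDS:
        return "nudge: read the article before sharing?"
    return "shared"
```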
Encouraging services with global reach to adopt these measures will require international coordination, and eSafety has been working for some time with a range of partners, including policymakers, academics, think tanks and fellow regulators, to achieve this. This digital diplomacy culminated recently in the launch of a new international coordination body, the Global Online Safety Regulators Network. The initiative is a collaboration between regulators with clear responsibility for online safety regulation – Australia’s eSafety, Fiji’s Online Safety Commission and Ofcom in the United Kingdom – with support from the Broadcasting Authority of Ireland. It will pave the way for a coherent international approach, both by helping new online safety regulators shape the trajectory of effective online harms regulation and by enabling the agencies to share information, experience and best practice. This complements domestic coordination efforts such as the Digital Platform Regulators Forum, where eSafety is collaborating with partner agencies: the Australian Competition and Consumer Commission, the Australian Communications and Media Authority and the Office of the Australian Information Commissioner.
Ultimately, online platforms should aim for participation and engagement through meaningful and positive experiences, not through spiralling conflict. It is within our power – and it is our responsibility – to make sure the digital world evolves in ways that engender greater trust and fairness.
No doubt Alan Turing would agree.