The Australian parliament’s Joint Committee on Intelligence and Security is currently holding an inquiry into extremist movements and radicalism in Australia. It is only the second issues-based inquiry that this particular committee has conducted; the first was into the politically charged question of foreign interference. The hearings indicate the importance that parliament has placed on addressing concerns around violent extremism, an issue that is challenging many democracies around the world.
The threat of terrorism and the nature of violent extremism have shifted substantially in the two decades since the 11 September 2001 attacks, which led to the establishment of most of the present crop of programs, departments and paradigms for counterterrorism and countering violent extremism. While the threat from international and homegrown jihadist actors remains, increasing polarisation and disinformation have contributed to the growth of a diverse array of extremist movements across the ideological spectrum, particularly on the extreme right. The inquiry’s terms of reference will allow the committee to examine whether the government’s current policy settings and legislation are adequate to address a diverse, complex and decentralised violent extremist landscape.
Such a focus has generated substantial interest, with government, technology companies, academics and civil society groups offering submissions for consideration, not all yet published. The committee even received attention from the subjects of the inquiry, with at least one extremist group putting forward its own submission to argue it should not be considered as an extremist organisation. The committee sensibly declined this submission on the grounds of “parliamentary procedure and standing orders”.
The committee is seeking to understand the ways in which extremist groups can recruit, mobilise, incite violence, and put forward extremist and hateful narratives via internet-enabled communications. (Full disclosure: I appeared as a witness during the committee’s two-day public hearings, putting forward my own submission focusing on the role that technology, particularly social media, plays in extremism.)
However, there are broader considerations around extremism and technology that go beyond how extremists are using the internet. Such questions centre on the internet platforms themselves – whether there is something about their design, logic and permissive environment that contributes to and facilitates extremism.
The concern is that the very structure of internet platforms increases individuals’ exposure to extremist content, driving polarisation and contributing to other social harms that undermine democracy, including violent extremism.
The Australian Security Intelligence Organisation has said that parts of these internet platforms act as “echo chambers of hate”. The inquiry will seek to determine whether this is indeed the case, and, if so, how to address the problem. For its part, the Australian technology industry’s representatives stated during the hearings that the industry has adequately monitored and moderated extremist content on internet platforms, while acknowledging that this remains ongoing work.
Crucial yet unanswered questions revolve around how the recommendation algorithms used by various internet platforms such as Google, Facebook and Twitter have potentially led the average user to more extremist content, and how easy access to that content plays into a person’s radicalisation process.
Such questions can’t be fully answered, because there is a lack of transparency around how recommendation algorithms are designed. So should government regulate algorithmic transparency? I think it should. Algorithmic transparency matters not just for extremism and technology, but for understanding how computers shape more and more of our daily lives and decisions. Algorithms must be knowable and explainable if they are to be governed.
Algorithms – even, and especially, those built by commercial companies – must be audited independently. The public must learn more about how humans and algorithms intersect. People need to be able to understand how a choice to view, for example, an anti-vax video online can lead to being recommended an anti–Covid lockdown account, then perhaps a video of Proud Boys fighting protesters, until a person’s video feed is populated with white supremacist content. The public cannot be satisfied to simply take YouTube’s word for it that the company has refined its algorithms so that this no longer happens.
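To make the concern concrete, here is a deliberately simplified sketch, in Python, of an engagement-maximising recommender loop. The catalogue, the “extremity” and engagement numbers, and the similarity weighting are all invented for illustration – they do not represent YouTube’s or any other platform’s actual system – but they show how a recommender that simply serves the most engaging similar item can, when provocative content reliably engages more, walk a viewer step by step from a fringe video towards the most extreme material available.

```python
# A deliberately simplified, hypothetical recommender loop.
# The catalogue, scores and weighting below are invented for illustration;
# they do not represent any real platform's data or algorithm.

CATALOGUE = [
    {"title": "Mainstream news clip",        "extremity": 0.1, "engagement": 0.20},
    {"title": "Anti-vax video",              "extremity": 0.4, "engagement": 0.35},
    {"title": "Anti-lockdown account promo", "extremity": 0.6, "engagement": 0.45},
    {"title": "Street-brawl footage",        "extremity": 0.8, "engagement": 0.55},
    {"title": "White-supremacist channel",   "extremity": 1.0, "engagement": 0.60},
]

def recommend(last_watched, watched_titles, catalogue):
    """Return the unwatched item with the highest predicted engagement,
    where the prediction is the item's base engagement rate weighted by
    its similarity (in 'extremity') to the last item watched."""
    def predicted_engagement(item):
        similarity = 1.0 - abs(item["extremity"] - last_watched["extremity"])
        return item["engagement"] * similarity
    candidates = [i for i in catalogue if i["title"] not in watched_titles]
    return max(candidates, key=predicted_engagement)

# Start from one fringe-but-legal video and follow the recommendations.
current = CATALOGUE[1]  # the anti-vax video
watched = [current["title"]]
for _ in range(3):
    current = recommend(current, watched, CATALOGUE)
    watched.append(current["title"])

print(" -> ".join(watched))
# Anti-vax video -> Anti-lockdown account promo -> Street-brawl footage
#   -> White-supremacist channel
# Because, in this toy data, more provocative items engage more, each
# "most engaging similar item" step nudges the viewer one notch further.
```

The toy example is not evidence of how any real platform behaves; it simply illustrates why independent audits matter – only access to the real objectives, weightings and data would show whether such drift actually occurs.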
The Australian representatives of Facebook, Google and Twitter who fronted the public hearing did not receive the same grilling as their company CEOs did when questioned by a US congressional antitrust panel in 2020, which also delved into whether these platforms were drivers of polarisation and extremism. The questioning from the Australian parliamentarians was less combative, even conciliatory. However, committee member Julian Leeser signalled frustration with the internet companies, saying, “These companies have had 15 years to demonstrate they can be corporate citizens and put self-regulation in place”.
At the same time, Australia and governments around the world were naive to think that these corporate tech titans would simply regulate themselves. In fact, the technology companies’ representatives at many points in their testimony welcomed government guidance and regulation to help them tackle extremist exploitation of their platforms.
If the parliamentary committee is to put forward comprehensive recommendations to government, then it must consider how to enact regulations not just around content and criminal use, but in a more substantive manner similar to how governments have comprehensively regulated other industries. Traditional media, for instance, has been regulated on issues of privacy and market competition. Algorithmic transparency is another critical step to tackle violent extremism comprehensively, along with other social harms and threats to democracy.
It will come down to a question of government regulation. Will governments adequately regulate or legislate, not only on the legality of certain harmful content and illegal uses of the internet, but also on the platforms that publish and recommend that content?
This article is part of a year-long series examining extremism and technology also available at the Global Network on Extremism and Technology, of which the Lowy Institute is a core partner.