Managing risk: Pandemics and plagues in the age of AI

An Australian-initiated international forum could hold the key to controlling unconventional weapons in the age of generative AI.

A smallpox laboratory breach in Birmingham in 1978 claimed the life of Janet Parker, a British medical photographer (Birmingham Post and Mail Archive/Mirrorpix/Getty Images)

Once a recurring scourge that blinded, scarred and killed millions, smallpox was eradicated by a painstaking public health effort that saw the last “natural” infection occur in 1977. In what some consider an instructive moment in biosecurity (rather than a mere footnote), Janet Parker, a British medical photographer, died of smallpox the following year after being exposed to Variola virus – the causative agent of smallpox – while working one floor above a laboratory at the University of Birmingham. The incident in which she lost her life was referred to as an “unnatural” infection – one occurring outside the usual context of infectious disease.

The orthopox genus – of which Variola virus is a member – holds a central role in the history of infectious disease and biodefence and has had a lasting impact on human society. Mousepox, cowpox, the clumsily named “monkeypox” and other pox viruses all belong to the same genus.


There are only two known places in which Variola virus remains: a high-containment laboratory in Russian Siberia, and a secure Centers for Disease Control and Prevention (CDC) facility in Atlanta in the United States. Neither the Russian Federation nor the United States has yet destroyed its smallpox stockpile, for reasons that relate more to the strictures of geopolitics than the needs of ongoing research. At the end of the first Cold War, as a US-led Coalition was poised to launch Operation Desert Storm, fear of both biological and chemical warfare returned. Saddam Hussein had deployed mustard gas and other chemical agents against Kurdish civilians at Halabja, killing as many as 5,000 people. In the years preceding that atrocity, thousands of Iranian soldiers bore the brunt of blistering agents, nerve agents and other chemical weapons in Iraq’s protracted war against Iran.

Biological weapons were the next presumed step on Saddam’s ladder of escalation should he feel threatened by the US-led Coalition that gathered in the Saudi desert after his invasion of Kuwait. The weapons program Iraqi scientists had overseen since the 1980s had brought aflatoxins and botulinum toxin to the point of weaponisation, if not deployment. Bacillus anthracis, the bacterium that causes anthrax, was a proximate concern for Coalition troops as a potential battlefield weapon. But the biggest question was whether Saddam had access to Variola virus. Smallpox, a disease with pandemic potential, was a strategic weapon with international reach, one that might even be deployed behind Coalition lines by a small team.

A small team seeking to propagate a pox virus with pandemic potential need not physically get hold of it in full form nor need access to a government-run lab (CDC/Unsplash)

Fear of such a scenario returned with the onset of the global War on Terror in the early 2000s, and so governments from Europe to Australia began stockpiling smallpox vaccines for use in the event of a future attack. After the Islamic State of Iraq and the Levant (ISIL) suddenly seized swathes of territory in Iraq and Syria in mid-2014, the group repeatedly deployed chemical weapons against civilians, and reportedly attempted to acquire biological weapons as well. In 2016, as ISIL’s caliphate reached its brief zenith, a Canadian scientist on the other side of the world was working to create a safer vaccine against smallpox. The researcher was engaged by a US biotech company that wanted a smallpox shot without the risk of reversion – a situation in which inoculation itself causes active infection, and potentially death, something happily not possible with most vaccines.

As part of this effort, the researcher needed a related orthopox virus to use as a viral vector. To this end, their team embarked on de novo synthesis of horsepox, a less pathogenic orthopox virus presumed extinct in nature. This step – the reconstruction of a vanished pox virus from scratch – became known as a “Rubicon in the field of biosecurity”. For the first time, an orthopox virus was created using information and material derived from purely commercial sources – and it cost only around $100,000.

A new domain in counterproliferation

Horsepox was, of course, not the first virus to be rebuilt or enhanced in a laboratory setting. In 2005, a team reconstructed the H1N1 virus responsible for the “Spanish” influenza pandemic that killed between 20 and 50 million people in 1918–19, using reverse genetic techniques that were cutting-edge at the time. In 2002, a research group at the State University of New York created the first entirely artificial virus, a chemically synthesised strain of poliovirus. A year earlier, in 2001, an Australian team investigating immunocontraceptives for rodent pest control accidentally enhanced the virulence of ectromelia virus, which causes mousepox, to the point that it could overcome available pox vaccines.

What made the horsepox development such a watershed moment was the ease with which the necessary materials and genetic information were acquired. The team worked from the genetic sequence of a horsepox virus collected during an outbreak in Mongolia decades earlier, in 1976, and engaged a commercial DNA synthesis company, GeneArt, to manufacture the required DNA fragments. Hence, a small team seeking to obtain and propagate a similar pox virus with pandemic potential – say, smallpox – need not physically get hold of it in full form. Nor need it have access to a government-run lab, or clearance through tightly restricted procurement channels. Instead, the virus could be recreated using means and material easily available to any private citizen, for minimal cost.

Such techniques, now well established, undeniably have many beneficial uses. At the onset of the Covid pandemic, when authorities in China were less than forthcoming with information, the genetic sequence of SARS-CoV-2 was published on the internet – but only after some skittish manoeuvring by Western researchers and their colleagues in China, who were under government pressure not to share it. Belated though this development was, it allowed scientists across the world to begin designing medical countermeasures. Similar processes are used to track viral evolution during other epidemics, to monitor the emergence of new variants of concern, and to detect changes in a pathogen that could cause more severe disease.

It has been nearly four years since SARS-CoV-2 went from causing a regional epidemic in the Chinese city of Wuhan to a worldwide pandemic (Fusion Medical Animation/Unsplash)

Synthetic biology in 2023

Much has transpired in the fields of chemistry and synthetic biology since 2017, and even more has happened in the field of artificial intelligence. When chemistry, biology and AI are combined, what was achieved with horsepox by a small team of highly trained specialists could soon be done by an individual with scientific training below doctoral level. Instead of horsepox or even smallpox, such a person could soon synthesise something far deadlier, such as Nipah virus. It might equally be done with a strain of avian influenza, which public health officials have long worried may one day gain the ability to spread efficiently between humans. And instead of costing $100,000, such a feat will soon require little more than $20,000, a desktop whole-genome synthesiser and access to a well-informed large language model (LLM) – if some of the leading personalities in generative AI are to be believed.

There has been alarming conversation in recent months over the potential for new artificial intelligence platforms to present existential risks. Much of this anxiety has revolved around future iterations of AI that might lead to a “takeoff” in artificial “superintelligence” that could surpass, oppress or even extinguish humanity. But a more proximate threat is contained within the current generation of AI platforms. Some of the key figures in AI design, including Mustafa Suleyman, co-founder of Google DeepMind, admit that the large language models accessible to the public since late 2022 have sufficient potential to aid in the construction of chemical or biological weapons.


Media descriptions of such risks have so far been mostly vague. But the manner in which LLMs could aid malicious actors in this domain is simple: by lowering the “informational barriers” to constructing pathogens. In much the same way AI platforms can act as a “wingman” for fighter pilots navigating the extremes of aerial manoeuvre in combat, an LLM with access to the right literature in synthetic biology could help an individual with minimal training overcome the difficulties of creating a viable pathogen with pandemic potential. While some may scoff at this idea, it is a scenario that AI designers have been actively testing with specialists in biodefence. Their conclusion was that little more than postgraduate training in biology would be enough.

This does not mean that (another) pandemic will result from the creation of a synthetic pathogen in the coming years. Avenues for managing such risks can be found in institutions that have already proven central to the control of biological and chemical weapons. One such forum – the Australia Group – could be the perfect place to kickstart a new era of counter-proliferation in the age of AI.

Founded in 1985, at the height of the Iran–Iraq war, the Australia Group (AG) initially focused on controlling the precursor chemicals used in the unconventional weapons that killed thousands on the Iran–Iraq frontline. The AG has since evolved to harmonise the regulation of many dual-use “chem-bio” components via comprehensive common control lists. But the dawn of a new age in artificial intelligence, coming as it has after 20 years of frenetic progress in synthetic biology, presents new challenges. As an established forum, the Australia Group could provide an opportunity for the international community to get ahead of this new threat landscape before it is too late.

After the plague

It has been nearly four years since SARS-CoV-2, the virus that causes Covid-19, went from causing a regional epidemic in the Chinese city of Wuhan to a worldwide pandemic. At the time of writing, the question of how the virus first entered the human population remains unresolved. Several ingredients make both a “natural” zoonotic event and an “unnatural”, research-related infection plausible scenarios. The first relate to the changing ecologies in which viruses circulate, the increasingly intense interface between humans and animals amid growing urbanisation, and the international wildlife trade. Regarding the latter possibility – that the virus may have emerged in the course of research gone awry – it is well documented that closely related coronaviruses were subject to both in-field collection and laboratory-based experimentation in the years preceding the pandemic. (Whether a progenitor to SARS-CoV-2 was held in any nearby facility remains in dispute.)

Whatever the case, the next pandemic may not come as a result of a research-related accident or an innocent interaction between human and animal – it may instead be a feature of future conflict. Many of the same ingredients that were present in 2017 remain in place across the world today, with generative AI as an unwelcome new accelerant. Added to this is a new era of great power competition, an ongoing terrorist threat, and the rise of new sources of political extremism. The Australia Group has the chance to act now, before chemical or biological weapons are used at any of these inflection points, all of which are unfolding amid a new age of artificial intelligence.
