
So, you're against drones? Then you must really hate this adorable puppy

OK, so that headline is a mildly offensive way to enter the discussion started so respectfully by Jennifer Hunt and James Brown on The Interpreter. I'm sure they both love that puppy, even if they have their doubts about drones. But now that I have your attention, let me try to make a few serious points, including one about puppies.

I'll start not with drones but with 'lethal autonomous robots'. Combat drones such as those the US is using so frequently in its war on terror have a human decision-maker 'in the loop': someone (frequently more than one person) who makes the final decision to fire on a target. Killer robots would do away with such human intervention, and the video by sci-fi novelist Daniel Suarez that James Brown recommends paints a dark picture of where this technology leads, not only for warfare but for democracy.

But I found that video rather overwrought. I'm not convinced that such technology can 'reverse a five-century trend toward democracy'.

Suarez is absolutely right that military technology shapes our political institutions (see Philip Bobbitt's The Shield of Achilles on this topic), but don't forget that lethal autonomy is, in a sense, not new. Suarez himself points out that such weapons are already a feature of the Korean Peninsula standoff, and he doesn't even mention the humble landmine, sea mine or booby trap. And for several decades, navies have fielded fully automated anti-aircraft defence systems because the aerial threats they face just move too fast to allow for human intervention.

Moreover, it's important to recognise that automation will not allow robots to make life-and-death decisions, because robots can't really make decisions at all. They are merely programmed, by humans, and if we program them to fire a missile at a target at some future time, that simply means we have moved the human decision point earlier in time. The current generation of drones moves the human decision-maker away from the battlefield geographically; the next generation will also take them away from the battlefield chronologically. But either way, it is still a human decision, and if war crimes are committed, those who operate and even those who program these killer robots ought to be liable, because they are the ultimate decision-makers.

Speaking of war and decision-making, Jennifer Hunt writes in her piece that 'An important consideration here is the moral hazard some observers believe armed drones introduce to decision-making...decision-makers can deploy the technology with no risk to pilots' lives or ground troops. The reduced cost in blood and treasure is thought to lower the threshold for the use of force.'

I think that's true, but it is also completely commonplace and unavoidable. In fact, for as long as humans have engaged in conflict, we have sought a battlefield advantage through technology, making the enemy more vulnerable to our weapons and ourselves less vulnerable to theirs. The result is an offence-defence cycle, with new technologies constantly being developed to overturn or undermine the advantages wrought by the previous generation of weapons. To put it somewhat crudely, the sword came along to give one side the advantage in hand-to-hand combat, so the spear was invented to overcome the advantages of the sword. And so on.

A drone, therefore, is just a tool to reduce the risk of aerial combat and gain an advantage over an enemy. Asking nations and military commanders to forgo that potential battlefield advantage would be like asking them not to buy tanks or frigates. True, the world has managed to largely or wholly ban entire classes of weapons, such as chemical and biological arms and landmines, but these are rare exceptions. The practical barriers to halting drone proliferation are therefore extremely high.

Moreover, for drone operators such as the US, drones don't change the risk calculation very much. The last war in which significant numbers of US combat aircraft were shot down was Vietnam. Since then, the US has conducted every one of its many combat operations around the world with overwhelming air superiority. Very few aircrew have been lost to enemy action. So the switch to drones is not a moral leap that suddenly makes aerial warfare low-risk for the US, because that has been the case for some decades.

By this point in the article, you're probably wondering about those puppies. Let me explain. The 'moral hazard' case against drones does not just fall down on the practical grounds sketched above, but on moral grounds too. By arguing that low-risk warfare makes war more likely, you are effectively saying that, in order to reduce the likelihood of war, war ought to be much riskier. But if that's your argument, why stop at drones? Don't soldiers' helmets also make the battlefield less risky for them? Doesn't the availability of advanced field hospitals make it more likely that commanders will risk the lives of their troops, knowing that they have a higher chance of survival?

The 'moral hazard' argument effectively says that nations ought to make themselves as vulnerable as possible because this encourages them to tread so carefully on the world stage that they will not provoke wars. It's the equivalent of asking drivers to strap puppies to their bumper-bars in order to discourage reckless driving.

There. I did it. I found a way to work puppies into an Interpreter debate. May Jessie, my dearly departed old Ridgeback-Red Heeler-cross, forgive me.

Photo by Flickr user philhearing.



