Ian Wallace is a visiting fellow in the Center for 21st Century Security and Intelligence at the Brookings Institution and previously a senior official at the British Ministry of Defence, where he helped develop UK cyber strategy.
I'm grateful once again to have the chance to respond to some comments on my 27 September cyber piece (Is There Such a Thing as Cyberwar?). Although I enjoyed reading Tony Healy’s comments, I disagree with his suggestion that ‘Whether cyberwar is real war is not important’.
This is more than a semantic discussion. Unless you understand the nature of the threat, it is difficult to respond adequately. And while Tony is right to say that governments are beginning to respond to the challenge, few have yet done so effectively. That's why use of the term ‘cyberwar’ is potentially so consequential. A major part of the difficulty is that, intentionally or not, ‘cyberwar’ tends to imply a military response to threats. I do not think that is sensible.
Before explaining why, however, I should recognise that a large part of the reason for the persistence of the ‘war’ analogy is the absence of good alternatives.
Put simply, cyber threats do not fit easily into the security framework that has evolved in the past couple of centuries in modern democratic states like Australia and the US. Boiled down to essentials, law enforcement has evolved to protect us from threats within our society, while the military has evolved to protect us from external threats. Cyber threats often come from overseas, which makes it difficult for law enforcement to deter or punish them, yet they rarely rise to the level that would warrant a military response. There is a gap in the middle, and bureaucracies are struggling to find the right organisational response.
Some commentators feel that empowering the private sector (which owns and operates much of the internet) to take on more responsibility for its own security is the way forward, but there are problems, not least that you don't want a private company inadvertently starting a conflict with another country.
In practice, therefore, apart from leaving private sector victims to their own devices, there are three options for governments that see a role for themselves in mitigating the national security risk caused by insecure information systems. The first is to encourage law enforcement to expand its role in cyber to disrupting as well as prosecuting intruders (e.g. by working to take down botnets). The second is to help private sector actors improve their defences, for example by sharing secret intelligence or facilitating sharing by others. Done in the right way, both of these approaches are sensible, and both are being employed by governments around the world.
The third option, putting the military in charge or at least giving it a significant role, is less sensible. And my concern is that by misusing the term ‘cyberwar’ we unconsciously make that approach more acceptable and therefore more likely.
This is not to say that there is no role for the military in cyber defence. While militaries are not necessarily best placed to defend commercial networks against economic espionage, they cannot be indifferent to potential adversaries stealing the secret blueprints of their equipment if that is going to compromise their effectiveness on the battlefield. Similarly, it would be perverse to argue, for example, that a country should pay for its military to fight discretionary wars overseas while standing by as a cyber attack by a foreign adversary causes catastrophic damage to the homeland.
It is not necessarily a bad thing, therefore, that US CYBERCOM has, for example, recently stood up thirteen teams prepared to ‘defend the nation’, as long as the limited circumstances in which they would be used are widely understood. The problems come if the sense of ‘being at cyberwar’ encourages overuse of such capabilities.
Experience suggests that in most instances where the term ‘cyberwar’ is applied, a better description would be ‘commercial espionage’, ‘sabotage’, ‘subversion’ or simply ‘cyber crime’. Using military resources to tackle these kinds of issues is problematic for at least four reasons.
First, wars are generally fought by uniformed professionals, and civilians are encouraged to get well out of the way. By militarising relatively low-level cyber threats, governments risk creating a type of ‘moral hazard’ which makes ordinary people and companies less likely to take responsibility for protecting themselves. That is exactly the opposite of the sort of behaviour a responsible government should want to encourage.
Second, you risk engendering cynicism and complacency. While only a small proportion of the population of modern democracies have actually been to war, most have a strong sense that it involves destruction and death. If they start to believe we are engaged in a constant ‘cyberwar’ but see little evidence of damage and feel that it has little impact on them personally, they will be less inclined to push for the measures required to guard against much more serious uses of cyber capabilities.
Tony Healy was right to highlight Stuxnet as an example of the potential sophistication of an attack on an industrial control system (ICS). But unfortunately, he was wrong to assert that it has been a wake-up call. Many ICS experts would argue that despite the example of Stuxnet, many such systems remain woefully under-protected.
Third, by using the military, even as a short-term expedient, you disincentivise other longer-term and more sustainable efforts to address the new challenges that cyber brings to security systems. The cyber threat, and therefore the need for cybersecurity, is not going away. In fact, as information systems proliferate, controlling everything from cars to medical devices to financial markets, both our ‘attack surfaces’ (i.e. the ways in which we can be attacked) and the consequences of those attacks will increase. Militarising the security required to protect us makes no sense, in either resource or civil liberties terms.
It is, however, the fourth reason that I believe makes it most important to be cautious about creating an environment that encourages military involvement in cyber in all but true warfighting scenarios. As US academic Barry Posen pointed out in his influential 1984 book The Sources of Military Doctrine, military organisations, especially when they are not subject to close civilian oversight, will often adopt an offensive doctrine as a way of reducing uncertainty in complex environments.
More work needs to be done to apply this insight to cyber, but I believe there is enough anecdotal evidence to take this risk seriously until someone develops a stronger case to the contrary. It is easy to see why ‘reducing uncertainty’ is attractive in an emerging area like cyber, but given the inherent instability of the cyber environment (caused by factors like the difficulty of attribution) and the constant stream of lower-level cyber threats, there is clearly potential for serious trouble.
As a result, we should be wary of any rhetoric, like ‘cyberwar’, that desensitises us to the mis-employment of the military to address a threat that, in fact, needs to be dealt with in less risky and more sustainable ways.