Managing the Cybersecurity Dilemma

Much has been written about the digitalisation of diplomacy, and how it will render traditional diplomacy irrelevant. A favoured argument, taken up with relish by Finance Ministries, is that new information and communication technologies undermine the need for maintaining expensive physical networks of diplomats abroad. Insofar as embassies remain, they focus on commercial and national promotion. Political networking and analysis are left to wither on the vine. And yet, as I argue in my new book, cyberspace is creating problems which may make traditional forms of diplomacy more important than ever. The cybersecurity dilemma offers an example.

The traditional security dilemma derives, like much of international relations theory, from Thucydides' History of the Peloponnesian War. According to the dilemma, State A is concerned about the security threat from State B. It starts building up its military as a defensive move. State B interprets State A's actions as an offensive move which could threaten its security. It responds by building up its own military. At the end of the cycle the military gap between States A and B is the same, but A is worse off because relations between the two states have deteriorated, with State B now seeing State A as a potential aggressor. The classic example of the security dilemma is the Anglo-German naval arms race prior to World War I. Not only did Britain win the arms race, but suspicion of German intentions bound Britain even more tightly to its French and Russian allies.

Ben Buchanan describes the security dilemma in cyberspace, what he calls the “Cybersecurity Dilemma”. State A is concerned about the cyber capacities of State B. It penetrates State B's internet systems to try to assess those capabilities and what State B intends to do with them. State B interprets State A's actions as a hostile cyber operation, preparing for future cyber conflict. It accordingly strengthens its capabilities and increases its penetration of State A's systems. At the end of the cycle, State A is worse off than at the start. The “capabilities gap” remains. State B has increased both its capabilities and its penetration of State A's systems, and it now regards State A as a potential aggressor.

The key to managing the security dilemma in both physical space and cyberspace lies in the ability of one state to accurately and reliably interpret the intentions of other states. In physical space there are mitigating factors that can help. Weapons systems can be defensive or offensive. Traditionally, for example, fighter aircraft were defensive and bombers were offensive (a distinction that has rather dissolved of late). The positioning of weapons systems can also indicate whether they are intended offensively or defensively. But these mitigating factors do not exist in cyberspace. Weapons systems cannot be “positioned” in such a way as to indicate offensive or defensive intentions. In a sense, cyber weapons do not exist until they are used. Nor is it easy to distinguish between cyber operations to gather intelligence about capabilities and operations preparatory to future conflict. In cyberspace the ability of one state to identify the intentions of another state is even more important than in physical space.

Marcus Holmes has done some interesting work on the social neuroscience of identifying the intentions of others, combined with examining historical case studies ranging from Munich to the Reagan/Gorbachev summits. A preliminary conclusion is that under certain circumstances, humans are better at identifying the intentions of other humans than some philosophers would have us believe. Neuroscience experiments suggest that a mirror system in the brain allows us to simulate the thinking processes of others, predicting their intentions. As Holmes acknowledges, there is much more work to do on this. But an interesting consequence of this work, which he explores in his diplomatic case studies, is that the ability to accurately identify intentions is significantly reinforced by regular face-to-face contact (another conclusion is that narcissists are poor at intention identification – not exactly promising for Trump's meeting with Kim Jong-un).

The logic is simple, as I explained to a conference of diplomats last year. The ability to identify intentions is key to managing the cybersecurity dilemma. Regular face-to-face contact significantly enhances the ability to identify intentions. Who enjoys regular face-to-face contact with senior officials and politicians in foreign countries? Diplomats. Interpreting the intentions of foreign governments has always been a key role of diplomats. It is related to empathy: the ability to see issues through the eyes of the other, whether ally or rival. Politicians and journalists frequently mistake it for sympathy. Hence the accusations that diplomats have “gone native” or started representing the interests of foreign governments rather than their own. But no chess player will succeed without the ability to see the board through the eyes of his opponent. Diplomats not only have to divine the intentions of the other, but also how the other interprets our intentions.

In recent years we seem to have lost the diplomatic capacity for seeing problems through the eyes of the other. This at least is the conclusion I would draw from the Brexit experience, or the chaos of the EU's relations with Russia and China. Buchanan's discussion of the cybersecurity dilemma suggests we need to recover it if we want to manage the risk of escalation in cyber conflict. Holmes' discussion of the social neuroscience suggests the importance of regular face-to-face contact at senior level. Ironically, digital technologies, and the challenges they throw up, may have increased rather than undermined the importance of diplomacy, and of a rather traditional kind.

Language matters, even in cyberspace

Linguistic rigour matters. And not just because I studied philosophy and my tutors insisted on it to the point of obsession. Linguistic imprecision reflects muddled thinking. This is as true of cyberspace as of physical space, as I discovered when writing my book on cyberdiplomacy. Unfortunately, linguistic precision in cyberspace is rare. The media in particular use terms like cyberwar and cyberattack so indiscriminately as to cause public confusion about cyberspace and its dangers. But these terms have policy implications which should encourage more caution in their use.

To begin: we are not in a cyberwar. Wars involve physical damage, to both humans and things. In other words, people get killed and things get blown up. Careless, and inaccurate, talk of cyberwar trivialises warfare and its enormous human cost. In cybersecurity this physical damage is referred to as kinetic effects. Kinetic effects as an intended consequence of cyber operations have so far been extremely rare. There is only one clear-cut case: the attack, attributed to the US and Israel, on the Iranian nuclear enrichment plant at Natanz (the US may also have used cyber operations to cause kinetic damage to North Korean medium-range ballistic missiles, but this remains speculation). This is not to say that cyber operations will not escalate to kinetic attacks on critical civilian infrastructure, for example power generation or air traffic control systems. The Russians have carried out disruption operations against the power grid in Ukraine, although the effects were only temporary and possibly intended to signal capabilities to the US. If and when such attacks do happen, the human casualties could be terrible. But we should await the occurrence of real kinetic attacks before talking about cyberwar.

The indiscriminate use of the word “cyberattack” by politicians, journalists and even academics (who should know better) conceals a multitude of sins. “Attack” is a theory-laden and emotionally charged word. It implies an unacceptable aggression, to which there should be some kind of response or counterattack. It is curious that we regularly use the word attack in cyberspace where we would not use it in physical space: for example, the use of “cyberattack” to refer to espionage operations. We would never talk about espionage attacks in the physical world, but rather espionage operations. The distinction is important. The use of the word “operations” for espionage in the physical world recognises that such operations, while not welcome, are a constant part of international relations, with their own rules of the game. These rules include that espionage itself is no casus belli, and that intelligence officers, as opposed to the agents they recruit, are generally immune. Is it because we talk about cyberespionage attacks that the US government is indicting Russian and Chinese intelligence officers for their activities in cyberspace in a way that it has never done for their activities in physical space?

It would be better if we referred to cyber operations. This would allow us to distinguish between the different kinds of operations and the motivations behind them, and so better enable us to devise strategies to deal with each kind. Such a classification of operations would include degradation operations, designed to cause permanent damage either in cyberspace or physical space (kinetic damage); disruption operations, designed to temporarily disrupt systems; espionage operations, designed to steal data; criminal operations, designed to steal money; and information operations, designed to destabilise societies by spreading a mix of information and disinformation through online platforms. Distinguishing motivations also matters. Cyberespionage operations may be aimed at identifying the true intentions of a foreign government, stealing intellectual property to close a technology gap, or stealing personal data as preparation for further espionage or criminal operations. Some of these activities may be acceptable (seeking to identify government intentions), others unacceptable (stealing intellectual property). These distinctions matter. During the Cold War, espionage may have helped avoid nuclear war in both 1962 (the Cuban Missile Crisis) and 1983 (the Able Archer war scare).

Cybersecurity is trendy. It helps sell newspapers (albeit online) and (I hope) books. Stories like the dangers of Huawei's involvement in setting 5G industrial standards bring home the need to get diplomats more involved in internet governance. But so far kinetic damage from cyber operations is limited. There is no evidence of people being killed by cyber operations. It may come to pass. Indiscriminate use of terms like “cyberwar” and “cyberattack” will only make that more likely, while hiding the more interesting story of what is really going on.

Treating Facebook as a Geopolitical Actor

A parliamentary committee in Britain has called for formal regulation of social media platforms like Facebook, including a mandatory code of ethics and an independent regulator. In the process it accused Facebook of behaving like “digital gangsters”. The parliamentary report is the culmination of an investigation begun following the Cambridge Analytica scandal. As I have argued elsewhere, and as the parliamentary committee seems to have realised, Cambridge Analytica was only the tip of the Facebook iceberg. More significant is its role in facilitating Russian disinformation operations.

In my new book “Cyberdiplomacy: Managing Governance and Security Online” I argue that social media companies like Facebook, as well as search engines like Google, should be treated not as ordinary companies, even less as neutral platforms, but as geopolitical actors in their own right. The algorithms that underlie their operations, and guarantee their advertising revenues, are consciously used by Russian and other disinformation campaigns to place their fake news in the echo chambers most likely to believe it. Far from being neutral platforms for building social networks or searching for information, or even the mechanisms for monetising their users' data which the parliamentary committee identified, they are active collaborators in Russian attempts to destabilise Western societies and fragment Western institutions. In fact it is worse than that. The same social media algorithms that facilitate disinformation operations undermine Western public diplomacy, insofar as it depends on social media, by limiting its reach to those who already agree with it.

Although Facebook may be reluctantly accepting its reality as a mechanism for monetising its users' data, it still cannot, or will not, accept its role as a geopolitical actor. It still insists that its platform is internationally neutral and is merely taken advantage of by the bad guys. In other words, that it is an innocent victim of forces beyond its control. This will not wash, and shows only Facebook's, or Mark Zuckerberg's, ignorance of international law and relations. Neutrality in international law carries responsibilities as well as privileges. One of those responsibilities is not to allow foreign forces to cross your territory to attack a third country. The point can be illustrated by the dilemma of Belgium in August 1914. Germany requested passage for its armies to cross Belgian territory to attack France. If Belgium agreed, it would lose its neutrality and become a de facto ally of Germany against France. If it refused, and resisted the German incursion, it would become a de facto ally of France against Germany. It chose the latter and paid a terrible price.

Facebook's position is analogous. Russia is using it (and other social media platforms), and in particular its underlying algorithms, in operations to destabilise Western societies. If Facebook acquiesces in this it becomes a de facto ally of Russia against the West. If it wants to avoid this, its only alternative is to become an ally of the West against Russian disinformation operations. Simply taking down pages when they are found to be false, or employing fact checkers to identify fake news, will not cut it. Not least because skilful disinformation operations combine true, ambiguous and fake news in ways not always easy to disentangle. If Facebook and other social media platforms and search engines are serious about not being Russian allies, they must share the algorithms underlying their platforms with Western governments so that these can better understand how to counter Russian operations. And this means that social media and search engine companies must recognise and accept their own role as geopolitical actors.

The British parliamentarians err by treating Facebook as just a company that needs regulating. Western governments need to engage with these companies as geopolitical actors, bringing home the realities of their position in cyberspace, and the responsibilities they have taken on. If these companies want to collaborate with the West they can share details of their algorithms with Western governments (confidentially of course). If not, they should be seen as de facto allies of Russia and other hostile powers carrying out disinformation operations on their platforms, and treated accordingly. Denmark has taken a bold step in appointing an Ambassador to the Tech Sector, the Tech Ambassador. This implicitly recognises the tech companies as international actors, although his remit so far does not include the geopolitical agenda. The ultimate sanction, of course, for Facebook and other social media companies is if the West decides to launch its own disinformation operations against Russia and other rivals on the same platforms, taking advantage of the same algorithms. How would Mark Zuckerberg's advertisers respond to Facebook being reduced to a wasteland of information warfare?