
Setting International Norms of Cyber Conflict is Hard, But that Doesn't Mean that We Should Stop Trying

Posted by Arindrajit Basu and Karan Saini on September 30, 2019
Last month, cyber-defense analyst and geostrategist Pukhraj Singh penned a stinging epitaph, published by MWI, for global norms-formulation processes that are attempting to foster cyber stability and regulate cyber conflict—specifically, the Tallinn Manual.

The article by Arindrajit Basu and Karan Saini was published by Modern War Institute on September 30, 2019.


His words are important, and should be taken seriously by the legal and technical communities that are attempting to feed into the present global governance ecosystem. However, many of his arguments seem to suffer from an unjustified and dismissive skepticism of any form of global regulation in this space.

He believes that the unique features of cyberspace render governance through the application of international law close to impossible. Given the range of developments that are in the pipeline in the global cyber norms proliferation process, this is an excessively defeatist attitude toward modern international relations. It also unwittingly encourages the continued weaponization of cyberspace by fomenting a “no holds barred” battlespace, to the detriment of the trust that individuals can place in the security and stability of the ecosystem.

“The Fundamentals of Computer Science”

Singh argues that the “fundamentals of computer science” render the rules of international humanitarian law (IHL)—which serve as the governing framework during armed conflict in other domains—inapplicable, and that lawyers and policymakers have gotten cyber horribly wrong. Singh theorizes that if the United States had pre-positioned espionage malware in Russian military networks, that malware could have been “repurposed or even reinterpreted as an act of aggression.”

The possibility of a fabricated act of espionage being used as justification for an escalated response exists in the realm of analog espionage, too. A reconnaissance operation that has been compromised can likewise be repurposed midway into a full-blown armed attack, or be reinterpreted as justification for an escalatory response. However, international law holds that self-defense can only be exercised when the “necessity of self-defense is instant, overwhelming, leaving no choice of means, and no moment for deliberation.” In order to legitimize any action taken under the guise of self-defense, the threat would have to be imminent and the response both necessary and proportionate. There is nothing inherently unique in the nature of cyber conflict that would render the traditional law of self-defense moot.

Further, the presumption that cyber operations are ambiguous and often uncontrollable, as Singh suggests, is flawed. An exploit that is considered “deployment-ready” is the result of an attacker’s efforts at fine-tuning variables until it is determined that the particular vulnerability can be exploited in a reasonably reliable manner. An exploit may have to be refined for quite some time before it behaves exactly as the attacker intends. While there may still be unidentified factors that can alter the behavior of a well-developed exploit, a skilled operator or malware author would nonetheless have reasonable certainty that the exploit’s execution will result only in a certain set of predefined outcomes.

It is true that a number of remote exploits that target systems and networks may make use of unreliable vulnerabilities, where outcomes may not be fully apparent prior to execution—and sometimes even afterward. However, for most deployment-ready exploits, this would simply not be the case. In fact, the example of the infamous Stuxnet malware, which Singh uses in his article, helps buttress our point.

Singh questions whether India should have interpreted the widespread infection of systems within the region—which also happened to affect certain critical infrastructure—as an armed attack. This question can be cursorily dismissed, since we now know that Stuxnet did not cause any deliberate damage to Indian computing infrastructure. A 2013 report by journalist Joseph Menn correctly states that “the only place deliberately affected [by Stuxnet] was an Iranian nuclear facility.” Therefore, for India to claim that the mere infection of systems located within its territory constituted an armed attack, it would have to concretely demonstrate that the operators of Stuxnet caused “grave harm”—as described in IHL—purely by way of having infected those machines, through execution of malicious instructions programmed into the malware’s payload.

At the same time, it should not be dismissed that the act of the Stuxnet malware infecting a machine could very well be interpreted by a state as constituting an armed attack. However, given the current state of advancement in malware decompilation and reverse engineering, the process of deducing the instructions that a particular malicious program seeks to execute can in most cases be performed reasonably reliably. Thus, for a state to make such a claim, it would have to prove that the malware did indeed cause grave harm meeting the “scale and effects” threshold laid down in Nicaragua v. United States—whether that harm was caused by operator interaction or by preprogrammed instructions—along with sufficient reasoning and evidence for attributing it to a state.

An analysis of the Stuxnet code made it apparent that its operators were seeking out machines that had the Siemens STEP 7 or SIMATIC WinCC software installed. The authors of the malware quite clearly had prior knowledge that the nuclear centrifuges they intended to target made use of a particular type of programmable logic controller, with which the STEP 7 and WinCC software interacted. On the basis of this prior knowledge, the authors of Stuxnet made design choices by which, upon infection, target machines would report to the Stuxnet command-and-control server both identifiers—such as operating system version, IP address, workstation name, and domain name—and whether or not the infected system had the STEP 7 or WinCC software installed. This allowed the operators of Stuxnet to easily identify and distinguish the machines they would ultimately attack in fulfilling their objectives. In effect, this gave them some amount of control over the scale of damage they would deliberately cause.
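The targeting logic described above—report identifiers for every infection, but reserve the damaging payload for machines running the targeted industrial software—can be sketched abstractly. The following is an illustrative Python sketch only, not actual Stuxnet code; all names and data structures here are hypothetical:

```python
# Hypothetical sketch of selective-targeting logic: every infected machine is
# reported, but only machines with the targeted software are attacked, giving
# operators control over the scale of deliberate damage.

TARGETED_SOFTWARE = {"Siemens STEP 7", "SIMATIC WinCC"}

def build_beacon(host: dict) -> dict:
    """Assemble the report sent for every infection: identifying details,
    plus a flag indicating whether the targeted software is present."""
    return {
        "os_version": host["os_version"],
        "ip_address": host["ip_address"],
        "workstation": host["workstation"],
        "domain": host["domain"],
        "target_present": any(s in TARGETED_SOFTWARE
                              for s in host["installed_software"]),
    }

def should_attack(beacon: dict) -> bool:
    """Operators deploy the damaging component only where the targeted
    software was found; other infected machines are left alone."""
    return beacon["target_present"]

# An engineering workstation running STEP 7 is a target...
engineering_station = {
    "os_version": "Windows XP SP3", "ip_address": "10.0.0.5",
    "workstation": "ENG-01", "domain": "PLANT",
    "installed_software": ["Siemens STEP 7"],
}
# ...while an ordinary office machine, though infected, is not.
office_pc = {
    "os_version": "Windows 7", "ip_address": "192.168.1.20",
    "workstation": "OFFICE-42", "domain": "CORP",
    "installed_software": ["Microsoft Office"],
}

print(should_attack(build_beacon(engineering_station)))  # True
print(should_attack(build_beacon(office_pc)))            # False
```

The design choice the sketch illustrates is the one the paragraph above attributes to Stuxnet’s authors: discrimination between targets is built into the malware itself, which is precisely what makes the operation’s effects controllable rather than indiscriminate.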

It has been theorized that the malware reached the nuclear facility in Iran through a flash drive. It may be true that the widespread and unnecessary propagation of the worm—which could be described as it “going out of control”—was not something the operators had intended, as it would attract unwanted attention and raise alarm bells across the board. It has nonetheless been several years since Stuxnet was in action, and there have been no documented cases of Stuxnet having caused grave harm to Indian (or other) computers. For all purposes, it could be said that the risk of collateral damage was minimized because the operators were able to direct the execution of the malware’s damaging components, to a degree that could be interpreted as having complied with IHL—thereby making it a calculated cyberattack with controllable effects.

However, if the adverse effects of the operation were to be indiscriminate (i.e., machines were tangibly damaged immediately upon being infected), and could not be controlled by the operator within reasonable bounds, then the rules of IHL would render the operation illegal—a red line that, among other declarations, the recent French statement on the application of international law to cyberspace recognizes.

“Bizarre and Regressive”: The Westphalian Precept of Territoriality

Singh’s next grievance is with the precept of territoriality and sovereignty in cyberspace. However, the reasoning he provides decrying this concept is unclear at best. The International Group of Experts authoring the Tallinn Manual argued that “cyber activities occur on territory and involve objects, or are conducted by persons or entities, over which States may exercise their sovereign prerogatives.” They continued to note that even though cyber operations can transcend territorial domains, they are conducted by “individuals and entities subject to the jurisdiction of one or more state.”

Contrary to Singh’s assertions, our reasoning is entirely in line with the “defend forward” and “persistent engagement” strategies adopted by US defense experts. In fact, Gen. Paul Nakasone, commander of US Cyber Command—whose interview Singh cites to explain these strategies—explicitly states in that interview that “we must ‘defend forward’ in cyberspace as we do in the physical domains. . . . [Naval and air forces] patrol the seas and skies to ensure that they are positioned to defend our country before our borders are crossed. The same logic applies in cyberspace.” This is a recognition of the Westphalian precept of territoriality in cyberspace—which includes the right to take pre-emptive measures against adversaries before the people and objects within a nation’s sovereign borders are negatively impacted.

Below-the-Threshold Operations

Singh also argues that most cyber operations would not reach the threshold of an armed attack needed to invoke IHL. He concludes, therefore, that applying the rules of IHL “bestows another garb of impunity upon rogue cyber attacks.” However, as discussed above, the application of IHL does not require a certain threshold of intensity, but merely the application of armed force that is attributable to a state.

Therefore, laying down “red lines” by, for example, applying the principle of distinction, which seeks to minimize damage to civilian life and property, actually works toward setting legal rules that seek to prevent the negative civilian fallout of cyber conflict. There appears to be no reason why any cyberattack by a state should harm civilians without the state using all means possible to avoid this harm. If there is an ongoing armed conflict, this entails compliance with the IHL principles of necessity and proportionality, ensuring that any collateral damage ensuing as a result of an operation is proportionate to the military advantage being sought.

Moreover, we agree that certain information operations may not cause any damage in terms of injury to human life or property. But IHL is not the only framework for governing cyber conflict. Ongoing cyber norms proliferation efforts are attempting to move beyond the rigid application of international law to account for the unique challenges of cyberspace. Despite the flaws in the process thus far, individuals from a variety of backgrounds and disciplines must engage meaningfully and shape effective regulation in this space. Singh’s “garb of impunity” exists when there is a lack of restrictions on the collateral damage caused by cyber operations, to the detriment of civilian life and property alike.

Obstacles in Developing Customary International Law

His third argument is on the fetters limiting the development of customary international law in the cyber domain. This is a valid concern. Until recently, most states involved in cyber operations have adopted a stance of silence and ambiguity with regard to their legal position on the applicability of international law in cyberspace or their position on the Tallinn Manual.

This is due to multiple reasons: First, states are not certain if the rules of the Tallinn Manual protect their long-term interests of gaining covert operational advantages in the cyber domain, which acts as a disincentive for strongly endorsing the rules laid out therein. Second, even those states keen on applying and adhering to the manual may not be able to do so in the absence of technical and effective processes that censure other states that do not comply. Given this ambiguity, states have demonstrated a preference to engage in cyber operations and counteroperations that are below the threshold—in other words, those that do not bring IHL into play. However, as others have convincingly argued, it is incorrect to assume that the current trend of silence and ambiguity will continue.

Recent developments indicate that the variety of normative processes and actors alike may render the Tallinn Manual more relevant as a focal point in the discussions. The UK, France, Germany, Estonia, Cuba (backed by China and Russia), and the United States have all engaged in public posturing in advocacy of their respective positions regarding the applicability of international law in cyberspace, in varying degrees of detail—which is essentially customary international law in the making. The statements made by a number of delegations at the recently concluded first substantive session of the United Nations’ Open-Ended Working Group covered a broad range of issues, from capacity building to the application of international law, which is a first step toward fostering consensus among the variety of global actors.

Positive Conflict and the Future of Cyber Norms

The final argument—a theme that runs from the beginning of Singh’s article—is a stark criticism of Western-centric cyber policy processes. Despite attempts to foster inclusivity, efforts like those that produced the Tallinn Manual are still driven largely by the United States in an attempt to, as Singh describes it, keep “cyber offense fully potentiated.” This is an unfortunate reality, but one that is not limited solely to the cyber domain. For example, in an excellent paper written in 2001, retired US Air Force Maj. Gen. Charles Dunlap explained “that ‘lawfare,’ that is, the use of law as a weapon of war, is the newest feature of 21st century combat.”

We are presented, therefore, with two options: either sit back and witness the hegemonization of policy discourse by a limited number of powerful states, or actively seek to contest these assumptions by undertaking adversarial work across standards-setting bodies, multilateral and multi-stakeholder norms-setting forums, as well as academic and strategic settings. In a recent paper, international law scholar Monica Hakimi argues that international law can serve as a fulcrum for facilitating positive conflict in the short run between a variety of actors across industry, civil society, and military and civilian government entities, which can lead to the projection of shared governance endeavors in the long run. Despite its several flaws, the Tallinn Manual can serve as exactly this type of fulcrum.

In writing a premature eulogy for efforts to bring to realization a set of norms in cyberspace, Singh dismisses the fact that, historically, global governance regimes have taken considerable time and effort to come into being, emerging only after an arduous process of continuous prodding and probing. This process necessitates that existing assumptions—and the bases on which they are constructed—be challenged regularly, so that we can enumerate and ultimately arrive at an agreeable definition of what works and what does not. Rejecting these processes in their entirety foments a global theater of uncertainty, with no benchmarks for cooperation that stakeholders in this domain can reasonably rely on.
