An Ethical Argument for High-Security IT

From an ethical point of view, cyberwarfare is a fascinating new subject that brings together many different issues in security ethics and media ethics in a unique way. In the big picture, it is true that cyberwarfare is still war, or at least conflict, whose fundamental form is not affected by the arrival of the new agents, hackers. The main motives and features of war are largely preserved, conventions such as the law of war do not require any fundamentally new interpretation, and of course there can also be just war in cyberwar, so that there is no justification for simplistic narratives of a categorical shift, nor for calls for a blanket ban.

However, from the ethical perspective, the agency of hackers, in conjunction with the particular substrate on which they act and the equally particular modes of action and resulting tactical conditions and strategic options, is something new. Manipulative observation and action in complete silence and invisibility, or under a false flag; the tactical exploitation of information, of knowledge and opinion, or of detailed technical processes buried deep within social systems; and the orchestration of these actions into geostrategic effects all provoke conceptual and operational shifts in many traditional conceptions of “offensive” and “defensive”, and hence new weightings or new hierarchies of values, which in turn require ethical consideration.

Incidentally, not all of this is necessarily negative. Cyberwarfare has an appealing set of characteristics in that it can be conducted in a way which is low-cost, extremely precise and entirely “bloodless”. Always militarily desirable, the goal of victory without fighting, even against a superior enemy, has become more possible than ever before through the advent of cyberwarfare. If it is possible simply to put an army out of action during an intervention, so that any further hostile activities are technically impossible, this ability alone may have a significant peace-keeping and stabilizing effect.

However, the goal which is preferred as a matter of military necessity is not necessarily the ethically preferable one. If the unjust invader can disable the just defender, and not the other way around, then ultimately cyberwarfare appears after all to be just a method and a means, rather than a separate type of warfare, and as such it is subject to the duality of any technology: it can be neither categorically condemned nor categorically preferred without a context. Thus what will be required in future is above all a detailed, technologically and contextually informed description of specific cases, such as the controversial “information operations” variants, on the basis of which it can be decided more specifically under what initial conditions and in what circumstances value judgments can be made and weighed. Yet from today’s perspective, even taking into account a certain degree of progress in the international law debate, this is still a long way off.

Nevertheless, at the present time there are a number of clearer ethical problems, particularly relating to the constant erosion of security and the fact that this erosion is needless. To highlight the erosion and the associated problems more distinctly, it is necessary to briefly outline the status quo of IT security.

So what is the current risk situation? The cybersecurity problem remains pressing and is still far from being solved. The likelihood of attacks has hardly decreased. Quite the opposite: there are significantly more attackers, since the NSA has done a good job of advertising in this field over recent years. First there was Stuxnet, an impressive demonstration of sabotage capabilities and of enormous reach and strike efficiency. Then, like an avalanche of advertising brochures for cyberoffensive troops, came the Snowden documents, which demonstrated just how much has already happened in this field and, ex negativo, given that the NSA’s operations had gone undetected prior to the publication of these documents, how extremely effective camouflage, deception and invisibility are in this area, and how easy it is to attack, intercept, manipulate and carry out sabotage in this field.

Consequently, many actors are interested in building up an offensive force. Organized criminal cartels and every intelligence service in the world will now be pushing to acquire such capabilities. In this respect, the risk is increasing.

So is the risk falling in respect of vulnerabilities and damage, as a result of better IT security? Unfortunately not.

At the present time, the foundations of our information technology systems are not becoming more secure, but rather less secure. The fundamental problem of tens of thousands of critical vulnerabilities in our IT substrate has in no way been fixed, nor even adequately addressed by an innovation strategy. While some companies have made investments, they have hardly done so with strategic direction or sufficient resources. Other big industry players are actually cutting back. Microsoft, for example, recently dissolved its security department, making some staff redundant and moving others into the more lucrative cloud business. From this quarter, then, one of the juggernauts among the de facto IT monopolists, no increase in security can be expected. Owing to rapid expansion into many fields, with new flaws and vulnerabilities, a large increase in insecurity is more likely.

The IT security industry, despite a lot of attention, has not done much either. This field is populated by small and medium-sized enterprises that lack the resources to finance major innovations in anticipation of possibly distant future returns, whose perspective on the problem is still structurally oriented to small-scale cybercrime, and which pursue outmoded development paradigms of the nineties and noughties. These paradigms are evident in detail in the three lines of attack: “defend”, “degrade” and “deter”.

“Defend”, the first line of attack, involves three paradigms: “ad hoc”, “ex post facto” and the “perimeter” concept. It is concerned primarily with setting up one or more boundaries with observation and intervention options in a sociotechnical system, and with managing incidents once they are detected. Yet detection in this field, and especially in cyberwar, which makes the most efficient use of cybersecurity flaws, is ineffective. The NSA operations, for example, came to light almost entirely via the Snowden documents. Of the more than 230 operations now known to have existed in 2011, only one (Flame) had been detected. This speaks volumes about the effectiveness of the entire approach. Furthermore, the concepts for incident management are immature and lack strategic focus. They rest on the already weak hypothesis that the defender, while enjoying few advantages, at least knows and can better control its own territory. Thus, while accepting that an attack cannot be prevented, the aim is at least to prevent the attacker from exfiltrating information. However, since attackers have numerous options for exfiltration at their disposal, this concept too is still awaiting proof of its effectiveness. Avoiding incidents in the first place, i.e. increasing basic passive security, takes place only in rudimentary and helpless form, for instance through employee training that warns against opening strange attachments (and conveniently shifts responsibility onto the user). Particularly in a cyberwar, owing to the many possible attack vectors, this approach too is practically irrelevant and serves only to guarantee a basic level of hygiene. That which is clearly preferable, namely the establishment of higher basic resistance, i.e. the ex ante unassailability of a system, lies outside the conceptual reach of current approaches to IT security.
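To make the detection problem concrete: a perimeter sensor in the “ad hoc”/“ex post facto” paradigm is, at its core, a matcher against previously observed attack artifacts. The following minimal Rust sketch is purely illustrative, with hypothetical names not taken from any real product; it shows why such a sensor structurally misses anything novel, since only artifacts already in its signature set are flagged.

```rust
use std::collections::HashSet;

/// A toy perimeter sensor in the "ex post facto" paradigm: it can only flag
/// artifacts matching signatures derived from attacks that were seen before.
struct PerimeterSensor {
    known_signatures: HashSet<String>,
}

impl PerimeterSensor {
    fn new(signatures: &[&str]) -> Self {
        Self {
            known_signatures: signatures.iter().map(|s| s.to_string()).collect(),
        }
    }

    /// Returns true only for artifacts that are already known. A novel or
    /// slightly varied attack returns false and passes the boundary unnoticed,
    /// which is the structural weakness discussed above.
    fn is_malicious(&self, observed_artifact: &str) -> bool {
        self.known_signatures.contains(observed_artifact)
    }
}

fn main() {
    let sensor = PerimeterSensor::new(&["dropper_v1_hash", "exfil_tool_a_hash"]);
    println!("{}", sensor.is_malicious("dropper_v1_hash")); // true: known attack
    println!("{}", sensor.is_malicious("dropper_v2_hash")); // false: trivial variant slips through
}
```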

“Degrade”, the second line of attack, is cited as a complement to “defend” and can be dealt with similarly quickly. Here it is assumed that, given good enough detection of attacks, an information-sharing system can be built through which detected attacks are promptly notified to all potential victims, who consequently arm their own detection mechanisms and are no longer attackable. This in turn is supposed to have the long-term result that attacks occur on a significantly smaller scale and become less economically appealing to attackers. Yet this arrangement fails to consider several structural features: the poor detection rate already mentioned; the high modularizability and easy variability of attacks; the attackers’ precise economic models and the possibilities for impairing them through “degrade” approaches; the requirements for completeness and operational efficiency of information sharing; the tactical flexibility of attackers in switching to business models which scale in different ways; and, again particularly in the case of cyberwar, the equally tactical alternative of scaling not through mass distribution across many different systems, but through targeted yet persistent, laterally spreading attacks. All these factors raise considerable doubts about the “degrade” approach, which, however, can neither be proved nor disproved, since the necessary empirical data are shrouded in obscurity. Industry experience with years of information sharing, particularly against the more dangerous espionage campaigns, nevertheless provides evidence of the failure of this approach, at least in practice.
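The same limitation carries over to the sharing mechanism itself. As another purely illustrative sketch, again in Rust and with hypothetical names, the following shows the “degrade” idea of pooling indicators of detected attacks and distributing them to all participants; an attacker who repacks or recompiles an implant produces a new indicator that the pooled blocklist does not cover, which is precisely the modularizability and variability problem noted above.

```rust
use std::collections::HashSet;

/// A toy "degrade"-style clearinghouse: victims report indicators of attacks
/// they have detected, and all participants merge the pool into their own
/// local blocklists.
struct SharingHub {
    shared_indicators: HashSet<String>,
}

impl SharingHub {
    fn new() -> Self {
        Self { shared_indicators: HashSet::new() }
    }

    /// A victim reports an indicator (e.g. a file hash or a command-and-control domain).
    fn report(&mut self, indicator: &str) {
        self.shared_indicators.insert(indicator.to_string());
    }

    /// Each participant pulls the pooled indicators into its own blocklist.
    fn sync_to(&self, local_blocklist: &mut HashSet<String>) {
        local_blocklist.extend(self.shared_indicators.iter().cloned());
    }
}

fn main() {
    let mut hub = SharingHub::new();
    let mut defender_b_blocklist: HashSet<String> = HashSet::new();

    // Defender A detects an attack and reports its indicator.
    hub.report("hash_of_implant_v1");
    hub.sync_to(&mut defender_b_blocklist);

    // Defender B is now protected against that exact artifact...
    assert!(defender_b_blocklist.contains("hash_of_implant_v1"));
    // ...but a repacked implant carries a new hash and is not covered.
    assert!(!defender_b_blocklist.contains("hash_of_implant_v1_repacked"));
}
```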

“Deter”, the last line of attack, is likewise conceived of as a complement to the other two approaches. Here the traditional active deterrent idea of “deterrence by punishment” comes into play: attackers are either threatened with drastic measures in the event of successful attribution, or countermeasures are imposed on them directly as a punishment intended to affect the cost/benefit rationale of future attacks. But this approach, too, has had only limited effectiveness to date. Attribution, owing to inevitable and necessary structural features of the digital domain, is an unsolvable problem of cybersecurity. Current success stories of attribution, such as the exposure of Chinese espionage campaigns, are merely superficial successes: they must have received assistance from human intelligence, to a large extent they could only have come about because of major flaws in the adversary’s operational security, and they are furthermore to a certain degree politically supported and desired. Current attempts at attribution should therefore be regarded as only temporary successes, and they have the further disadvantage of forcing attackers into an evolutionary development of better camouflage and operational security. Given the extensive scope that exists in this respect, such attempts will hardly screen out or deter attackers, but will merely render the problem significantly less visible.

Thus none of these approaches brings particularly clear or sustainably effective security gains. Instead, it can be assumed that insecurities are merely shifted around in various ways that have been neither tactically nor strategically anticipated, and which could therefore even produce a series of unpleasant surprises.

The net result of the widespread buildup of offensive capabilities, the expansion of vulnerabilities and paradigmatically inefficient IT security technologies is an accelerating, spreading and increasingly heterogeneous lack of security, manifested as a greater possibility of attack. It takes an asymmetrical form, since it is felt much more strongly in highly technologized states and structures.

Now, based on this initial situation, a number of particularly problematic points can be identified with regard to security ethics and the ethics of technology. They are described briefly here.

The negligence of tolerating a lack of security

First of all, it may be stated that the lack of security in IT is widely known, in many cases has been known about for a considerable length of time, and is tolerated to an absurd degree. In many places, over many years and up to the present day, people have worked in certain knowledge of high vulnerability along this vector, especially within many militaries, without the problem being escalated politically enough to initiate lasting change. In part, this tolerance is due to complicity: in the past, many of today’s security actors considered flawed security approaches sufficient and implemented them, and they cannot now change their position without raising doubts about their basic competence. Other, newer security actors are unable to master the complexity of the topic and tend to delegate or diffuse their responsibility, often to security or IT companies. Tolerance also arises from epistemic uncertainty, ranging from assumptions about the reality of the risk, to the relationship between the actual and potential costs of security flaws versus the cost of eliminating them, to a lack of knowledge about systemic weaknesses in existing security approaches. Both problems give rise to their own ethical perspectives and questions. Tolerance through complicity raises general questions of professional ethics and, in cyberwarfare, the inseparably related special responsibility of the military in its professional role as defender. There is a need to discuss how the protection of one’s own career should be weighed against responsible security conduct, and what alternatives can be developed to facilitate morally less problematic behavior. Epistemic uncertainty raises other questions, concerning the ethically preferable behavior in situations of high risk and high uncertainty. In view of the high risk of war and geostrategic erosion present in cyberwarfare, if there is uncertainty regarding the appropriate perspectives on the problem and the levels of protection to be implemented, it might be advisable to adopt a “maximum” approach, i.e. to assume the worst and, provided no significant conflict of values arises, to implement maximum security requirements. For a more precise evaluation, the distinction between acceptance and acceptability, emphasized in the ethics of technology by Christoph Hubig, could be considered here. What is accepted by businesses or militaries on the basis of semi-informed scenario assessments produced at short notice, and of a cybersecurity return on investment that is difficult to estimate, is not necessarily acceptable. Rather, what is acceptable should be formulated first, so that deficits in the practice of acceptance and the associated conflicts can then be addressed.

Increase in conflict potentials

Another difficulty associated with the initial situation described above is that the large number of security flaws incentivizes many other military and criminal actors to develop offensive capabilities. In purely theoretical terms, this may of course have a neutral overall effect or even lead to a positive change in stability, but it is more likely to multiply and heterogenize the problem and to create problematic offensive path dependencies among the actors: once capabilities have been acquired, their offensive use suggests itself more readily than before. This too is not necessarily a bad thing, for instance if the offensive use occurs in the context of a just war. However, the preponderance of unjust war, and the numerous possibilities for subversive or tentative warfare that arise from the incentives of high invisibility and the falsifiability of identities, suggest that multiplication, heterogenization and increasing path dependencies will result in a growing number of smaller conflicts in the special case of cyberwarfare. These in turn could escalate more easily than in other, more firmly established varieties of war, since the novelty of cyberwar means that the interpretation of even minor incidents is still uncertain and, amplified by media hype, could turn out to be more aggressive.

Escalatory compensation mechanisms

Another problem that arises and needs to be addressed ethically concerns the compensation mechanisms for poor basic security that become apparent in the “deter” approach. Despite glaring shortcomings in passive protection and in the attribution of attacks, these mechanisms still attempt to develop a deterrent effect by drastically increasing the size of the penalty, which is the only lever still remaining in the realm of deterrence. In other words, if it is not possible to stop an attacker and only rarely possible to identify one, then the attacker should at least receive a draconian punishment if, for once, he or she is successfully caught, so as still to achieve any deterrent effect at all. While this line of reasoning is militarily functional and understandable (and is already practiced experimentally, e.g. in the Tallinn Manual, at least in the form of harsh threats), it significantly increases the risk of escalation, since under the particular condition of falsifiable identities it invites false flag operations designed to exploit precisely this posture. At the same time, it gives “honestly caught” attackers the impression of highly disproportionate action, which the accused attacker might then compensate for with reactions of their own, producing a spiral of escalation. Finally, in the context of compensation mechanisms, there is also the problem of significantly increased global Internet surveillance, with its very own collateral damage to freedom, since the functioning of “deter” approaches requires maximum efforts to acquire intelligence about the enemy, which can be achieved above all via surveillance technologies.

These three problems are currently three of the more difficult structural problems of cyberwarfare. At the same time, they have clearly identifiable ethical dimensions.

However, in addition to simply weighing up values and determining the methods to be used for this weighing up, any ethical discussion will also require alternative courses of action if it is to have theoretical substance and practical relevance. Here the question arises first of all whether we even possess any alternatives. For if there are no other options, we are simply faced with practical constraints, which may not seem very ethically desirable, and which we may complain about, but about which ultimately there is little to discuss, since there are no alternatives. Particularly in the field of IT security, we do indeed frequently encounter this attitude of surrender to a lack of alternatives. Many of the existing actors are too used to the status quo, and new actors in any case are unaware of any options, with the result that it has almost become an article of faith that we just have to live with this lack of security, like we do with climate change.

But this is wrong.

In many niche areas, computer science has developed various approaches to high-security IT, which is less vulnerable as a basic technology and which by technological means simply does away with a large portion of the cybersecurity problems. In particular, the high number of vulnerabilities resulting from widespread programming errors, and the poor transparency and control resulting from excessive complexity, are serious and fundamental problems that have actually been technically solvable for some time. High-security IT may then be the decisive game-changer that also effectively addresses the three problems discussed above. Firstly, the security gains resulting from high-security IT are so clear-cut, so dramatic and so conclusively demonstrable that they leave no more room for negligent tolerance of security flaws in critical structures. The initial costs are affordable and no performance losses are expected, which makes the case even better and clearer, especially from the point of view of acceptability. Secondly, the prompt inclusion of high-security systems in critical structures would significantly inhibit the development of the attacker field. Almost all of the smaller actors would no longer be able to muster the resources and expertise necessary to attack such structures, while for bigger actors the cost-benefit calculations would be thrown back to the level of the 1980s. The golden age of signals intelligence would revert to a bronze age, and the global potential for conflict and escalation resulting from widespread, highly developed offensive capacity would be significantly reduced. Thirdly, there would no longer be any basis for escalatory compensation mechanisms, since there would no longer be a fundamental lack of security needing to be compensated for, or rather since the compensation mechanisms would be a significantly worse option. This would eliminate both destabilization due to possible escalations and losses of freedom due to mass surveillance.
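As one concrete illustration of what “technically solvable” means here, offered as a sketch under the assumption that memory-safe languages count among such means (the essay itself does not name a specific technology): whole classes of programming errors, such as out-of-bounds memory access, can be excluded by construction rather than patched after the fact. In the Rust fragment below, an attacker-controlled index cannot silently read or corrupt adjacent memory; the out-of-range case becomes an explicit, handled value.

```rust
/// Reads a byte from a fixed-size buffer. In a memory-unsafe language an
/// out-of-range index could silently read or overwrite adjacent memory, the
/// root of many of the vulnerability classes discussed above. Here the error
/// case is made explicit and cannot be exploited for memory corruption.
fn read_byte(buffer: &[u8], index: usize) -> Option<u8> {
    buffer.get(index).copied() // returns None instead of reading out of bounds
}

fn main() {
    let packet = [0x17u8, 0x03, 0x01, 0x00];

    match read_byte(&packet, 2) {
        Some(byte) => println!("in bounds: {byte:#04x}"),
        None => println!("index rejected"),
    }

    // A hostile, out-of-range index is simply rejected.
    assert_eq!(read_byte(&packet, 4096), None);
}
```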

High-security IT would therefore be an ethically preferable solution to the cybersecurity problem. The only enemy of this approach, but a big and powerful one, is the giant that it would kill: the old IT. Above all, it is the manufacturers and monopolists of existing chips and operating systems, of enterprise resource software and other products, who are preventing the emergence of this alternative approach. And so much in this field ultimately revolves around the question, to be evaluated ethically, of whether we should keep supporting a structurally deficient IT substrate at the expense of global security.


Sandro Gaycken

Dr. Sandro Gaycken’s research focuses on privacy, internet freedom, cybersecurity and its impact on modern warfare, intelligence and foreign affairs. He aims to solve the strategic cyberdefense problem through strong high-security IT concepts from computer science, coupled with strong industrial policies to overcome market failures. He is an appointed director in NATO’s SPS program on national cyberstrategies and director of the cybersecurity working group. He contributed to the design of German foreign and security policy on IT matters as lead author of the “internet freedom” and “cybersecurity/cyberdefence” parts of that policy. He has testified as an expert in numerous hearings of the Bundestag and has provided strategic advice to the UN, NATO, the G8, the EU and the IAEA. He has also served as an expert witness in international court cases concerning military cyber espionage and cyber sabotage.

sandro.gaycken@esmt.org

