Disclosing Vulnerabilities

Does Wcry show the NSA should disclose 0-days?

The recent (highly damaging) Wcry ransomware worm is derived from NSA code that was recently disclosed by hackers. This has led Microsoft (and others) to call on the government to disclose security vulnerabilities so they can be fixed rather than stockpiling them for use in offensive hacking operations. However, I think the lesson we should learn from this incident is exactly the opposite.

This debate about how to balance the NSA's two responsibilities (protecting US computer systems from infiltration and gathering intelligence from foreign systems) is hardly new, and Bruce Schneier's take on it is worth reading. The US government is very much aware of this tension and has a special process, the vulnerabilities equities process (VEP), for deciding whether or not to disclose a particular vulnerability. Microsoft is arguing that recent events illustrate just how much harm is caused by stockpiled vulnerabilities and, analogizing this incident to the use of stolen conventional weaponry, suggesting the government needs to take responsibility by always choosing to report vulnerabilities to vendors so they can be patched.

However, if anything, this incident illustrates the limitations of reporting vulnerabilities to vendors. Rather than being 0-days, the vulnerabilities used by the Wcry worm were already patched a month before the publication of the NSA exploits, and the circumstances of the patch suggest that the NSA, aware that it had been compromised, reported these vulnerabilities to Microsoft. Thus, rather than illustrating the dangers of stockpiling vulnerabilities, this incident reveals the limitations of reporting them. Even once vulnerabilities are disclosed, the difficulty of convincing users to update and the lack of support for older operating systems leave a vast number of users at risk. In contrast, once a patch is released (or even upon disclosure to a vendor) the vulnerability can no longer be used to collect intelligence from security-aware targets, e.g., classified systems belonging to foreign governments.

It is difficult not to interpret Microsoft's comments on this issue as an attempt to divert blame. After all, it is their code which is vulnerable and it was their choice to cease support for Windows XP. However, to be fair, this is not the first time they have taken such a position publicly. Back in February, Microsoft called for a "Digital Geneva Convention" under which governments would forswear "cyber-attacks that target the private sector or critical infrastructure or the use of hacking to steal intellectual property" and commit to reporting vulnerabilities rather than stockpiling them.

While there may be an important role for international agreements to play in this field, Microsoft's proposal here seems hopelessly naive. There are good reasons why there has never been an effective international agreement barring spying, and they all apply to this case as well. There is every incentive for signatories to such a treaty to loudly affirm it and then secretly continue to stockpile vulnerabilities and engage in offensive hacking. While at first glance one might think that we could at least leave the private sector out of this, that ignores the fact that many technologies are dual purpose1, that frequently the best way to access government secrets will be to compromise email accounts hosted by private companies, and that big data held by private firms can be put to many uses by government actors. Indeed, the second a government thought such a treaty was being followed it would move all its top secret correspondence to (an in-country version of) something like Gmail.

Successful international agreements forswearing certain weapons or behaviors need to be verifiable and not (too) contrary to the interests of the great powers. The continued push to ban land mines is unlikely to be successful as long as they are seen as important to the military strategies of many powerful countries (including a majority of the permanent Security Council members)2, and it is hard to believe that genuinely giving up stockpiling vulnerabilities and offensive hacking would be in the interests of Russia or China. Moreover, if a treaty isn't verifiable, there is no reason for countries not to defect and secretly fail to comply. While Microsoft proposes some kind of international cooperative effort to assign responsibility for attacks, it is hard to see how this wouldn't merely encourage false-flag operations designed to trigger condemnation and sanctions against rivals. It is telling that the one aspect of such a treaty that would be verifiable, the provision banning theft of IP (at least for use by private companies rather than for national security purposes), is the only aspect Microsoft points to as having already been the subject of an agreement (a 2015 US-China agreement).

While it isn't uncommon for idealistic individuals and non-profit NGOs to act as if treaties can magic away the realities of state interests and real-world incentives, I have trouble believing Microsoft is this naive. I could very well be wrong on this point, but it's hard for me not to think their position on this issue is more about shifting blame for computer security problems than a thoughtful consideration of the costs and benefits.

Of course, none of this is to say that there isn't room for improvement in how the government handles computer security vulnerabilities. For instance, I'm inclined to agree with most of the reforms mentioned here. As for the broader question of whether we should tip the scales further toward reporting vulnerabilities instead of stockpiling them, I think that depends heavily on how frequently the vulnerabilities we find are the same as those found by our rivals and how quickly our intelligence services are able to discover which vulnerabilities are known to our rivals. As such information is undoubtedly classified (and for good reasons), it seems the best we can do is make sure Congress exercises substantial oversight and use the political process to encourage presidents to install leadership at the NSA who understand these issues.


  1. Facial recognition technology can be used to identify spies, the code advertisers use to surreptitiously identify and track customers is ideal for covert surveillance, and the software the NSA uses to monitor its huge data streams was built by private sector companies using much of the same technology used in various kinds of search engines.
  2. A less idealistic treaty that recognized the role of land mines in major military operations probably could have done more to safeguard civilians from harm by, instead, banning persistent mines. As such a ban would actually favor the interests of the great powers (persistent mines are easier for low-tech actors to make), they would have helped enforce it rather than providing cover for irresponsible use of land mines.

Rejecting Rationality

I thought I would use this first post to explain this blog's title. It is not, despite appearances to the contrary, meant to suggest any animosity toward the rationality community, nor sympathy with the idea that when evaluating claims we should ever favor emotions and intuition over argumentation and evidence. Rather, it is intended as a critique of the ambiguous overuse of the term 'rationality' by the rationality community in general (and Yudkowsky specifically).

I want to suggest that there are two different concepts we use the word rationality to describe and that the rationality community overuses the term in a way that invites confusion. Both conceptions of rationality are judgements of epistemic virtue, but the nature of that virtue differs.

Rationality As Ideal Evaluation Of Evidence

The first conception of rationality reflects the classic idea that rationality is a matter of a priori theoretical insight. This makes intuitive sense since rationality, in telling us how we should respond to evidence, shouldn't depend on the particular way the evidence turns out. On this conception rationality constrains how one reaches judgements from arbitrary data, and something is rational just if we expect it to maximize true beliefs in the face of a completely unknown/unspecified fact pattern1. In other words, this is the kind of rationality you want if you are suddenly flung into another universe where the natural laws, the number of dimensions or even the correspondence between mental and physical states might differ radically from our own.

On this conception having logically coherent beliefs and obeying the axioms of probability can be said to be rationally required (as doing so never forces you to believe fewer truths), but it's hard to make a case for much else. Carnap (among others) suggested at one point that there might be something like a rationally (in this sense) preferable way of assigning priors, but the long history of failed attempts and conceptual arguments suggests this isn't possible.

Note that on this conception of rationality it is perfectly appropriate to criticize a belief forming method for how it might perform if faced with some other set of circumstances. For instance, we could appropriately criticize the rule 'never believe in ghosts/psychics' on the grounds that it would have led us to the wrong conclusions in a world where these things were real.

Rationality As Heuristic

The second conception of rationality is simpler. Rationality is what will lead human beings like us to true beliefs in this world. Thus, this notion of rationality can take into account things that merely happen to be true. For instance, consider the rule that when a question on a math test (written by humans in the usual circumstances) calls for a numerical answer, you should judge that 0 is the most probable answer. This rule is almost certainly truth-conducive, but only because it happens to be true that human psychology tends to favor asking questions whose answer is 0.

Now a heuristic like this might, at first, seem pretty distant from the kind of thing we usually mean by rationality, but think about some of the rules that are frequently said to be rationally required/favored. For instance: steelman2 your opponent's arguments, consider the issue in the most dispassionate way you can manage, and break up complex/important events.

For instance, suppose that humans were psychologically disposed to be overly deferential, so that it was far more common to underestimate the strength of your own argument than to underestimate your opponent's argument. In this case steelmanning would make us even more likely to reach the wrong conclusions, not less. Similarly, our emotions could have reflected useful information available to our subconscious minds but not our conscious minds, in such a way that they provided a good guide to truth. In such a world trying to reach probability judgements via dispassionate consideration wouldn't be truth-conducive.

Thus, on this conception of rationality whether or not a belief forming method is rational depends only on how well it does in the actual world.

The Problematic Ambiguity

Unfortunately, when people in the rationality community talk about rationality they tend to blur these two concepts together. That is, they advocate belief forming mechanisms that could only be said to be rational in the heuristic sense but assume that they can determine matters of rationality purely by contemplation, without empirical evidence.

For instance, consider these remarks by Yudkowsky or this LessWrong post. Whether or not they come out and assert it, they convey the impression that there is some higher discipline or lifestyle of rationality which goes far beyond simply not engaging in logical contradiction or violating the probability axioms. Yet they seem to assume that we can determine what is and isn't rational by pure conceptual analysis rather than empirical validation.

This issue is even clearer when we criticize others for the ways they form beliefs. For instance, we are inclined to say that people who adopt the rule 'believe what my community tells me is true' or 'believe god exists/doesn't exist regardless of evidence' are being irrational, since such rules would yield incorrect results had they been born into a community with crazy beliefs or into a universe with/without deities. Yet, as I observed above, the very rules we take to be core rational virtues have the very same property.

The upshot of this isn’t that we should give up on finding good heuristics for truth. Not at all. Rather, I’m merely suggesting we take more care, especially in criticizing other people’s belief forming methods, to ensure we are applying coherent standards.

A Third Way

One might hope that there was yet another concept of rationality that somehow splits the difference between the two I have described here. A notion that allows us to take into account things like our psychological makeup or seemingly basic (if contingent) properties our universe has, e.g., that we experience it as predictable rather than as an orderless succession of experiential states, but doesn't let us build facts like 'yetis don't exist' into supposedly rational belief forming mechanisms. Frankly, I'm skeptical that any such coherent notion can be articulated, but I don't currently have a compelling argument for that claim.

Finally, I'd like to end by pointing out another issue we should be aware of regarding the term rationality (though it is hardly unique to it). Rationality is ultimately a property of belief forming rules, while in the actual world what we get are instances of belief formation and some vague intentions about how we will form beliefs in the future. Thus there is the constant temptation to simply find some belief forming rule that qualifies as sufficiently rational and use it to justify this particular instance of belief. However, it's not generally valid to infer that you are forming beliefs appropriately just because each belief you form agrees with some sufficiently rational (in the heuristic sense) belief forming mechanism.

For instance, suppose there are 100 different decent heuristics for forming a certain kind of belief. We know that each one is imperfect and gets different cases wrong, but any attempt to come up with a better rule doesn't yield anything humans (with our limited brains) can usefully apply. It is entirely plausible that almost any particular belief of this kind matches up with at least one of these 100 heuristics, allowing you to always cite a justification for your belief even though you underperform every single one of them.
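To make this concrete, here is a minimal simulation sketch. The setup is entirely hypothetical (100 binary-question heuristics that are each right 80% of the time, and an agent whose preferred answer is just a coin flip); the only point it illustrates is that always being able to cite an agreeing heuristic is compatible with doing worse than every one of them.

```python
import random

random.seed(0)

NUM_HEURISTICS = 100
NUM_QUESTIONS = 10_000
HEURISTIC_ACCURACY = 0.8  # hypothetical figure, chosen only for illustration

heuristic_correct = [0] * NUM_HEURISTICS
cherry_picker_correct = 0

for _ in range(NUM_QUESTIONS):
    truth = random.choice([True, False])

    # Each heuristic independently gets the right answer with probability 0.8.
    answers = [truth if random.random() < HEURISTIC_ACCURACY else (not truth)
               for _ in range(NUM_HEURISTICS)]
    for i, ans in enumerate(answers):
        heuristic_correct[i] += (ans == truth)

    # The motivated agent starts from the belief they happen to prefer (a coin
    # flip here, i.e. uncorrelated with the truth) and keeps it so long as at
    # least one heuristic agrees; with 100 heuristics that is nearly always.
    preferred = random.choice([True, False])
    belief = preferred if preferred in answers else answers[0]
    cherry_picker_correct += (belief == truth)

print("best single heuristic: ", max(heuristic_correct) / NUM_QUESTIONS)
print("worst single heuristic:", min(heuristic_correct) / NUM_QUESTIONS)
print("cherry-picking agent:  ", cherry_picker_correct / NUM_QUESTIONS)
```

Each individual heuristic lands near 80% while the cherry-picking agent hovers near 50%; the particular numbers don't matter, only that agreeing with some decent rule on every occasion provides no guarantee of doing even as well as the worst of those rules.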



  1. I'm glossing over the question of whether there is a distinction between an arbitrary possible world and a 'random' possible world. For instance, suppose that some belief forming rule yields true beliefs in all but finitely many possible worlds (out of some hugely uncountable set of possible worlds). That rule is not authorized in an arbitrary possible world (choose a counterexample world and it leads to falsehood), but intuitively it seems justified, and any non-trivial probability measure (i.e. one that doesn't concentrate on any finite set of worlds) on the space of possible worlds would assign probability 1 to the validity of the belief forming procedure. However, this won't be an issue in this discussion (a rough formal gloss of the point appears after these notes).
  2. The opposite of strawmanning. Rendering your opponent's argument in the strongest fashion possible.
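As promised in note 1, here is a rough formal gloss of that point. The notation is mine and the setup is deliberately simplified; in particular, I read "doesn't concentrate on any finite set" as assigning measure zero to every finite set of worlds.

```latex
Let $W$ be the space of possible worlds and let $\mu$ be a probability measure on $W$
that assigns measure zero to every finite set of worlds. Suppose a belief forming rule
$R$ yields true beliefs in every world outside some finite exception set $F \subset W$.
Then
\[
  \mu\bigl(\{\, w \in W : R \text{ yields true beliefs in } w \,\}\bigr)
  \;=\; \mu(W \setminus F) \;=\; 1 - \mu(F) \;=\; 1 ,
\]
so $R$ is almost surely truth-conducive, even though it is not truth-conducive in an
\emph{arbitrary} world: picking any $w \in F$ gives a counterexample.
```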