Decision Theory Anti-realism

There Is No Fact Of The Matter About Correct Decision Theory

With the recent flurry of posts in the rationalist community about which decision theory (e.g., CDT, EDT, UDT, etc.) is correct, it’s time to revisit the theme of this blog: rejecting rationality realism. In this case that means pointing out that there isn’t actually a well-defined fact of the matter about which decision theory is better. Of course, nothing stops us from arguing with each other about the best decision theory, but those disagreements are more like debates about the best programming language than disagreements about the chemical structure of benzene.

Any attempt to compare decision theories must first address the question: what does it mean for one decision theory to be better than another? Unlike many pseudo-problems1 there is a seemingly meaningful answer to this question: one decision theory is better than another to the extent that the choices it recommends lead to better outcomes for the agent. Aside from some ambiguity about which theory is better when neither dominates the other, this seems to give a straightforward criterion for superiority: we just look at actual outcomes and see which decision theory offers the best results for an agent. However, this only appears to give a well-defined criterion, because in everyday life the subtle differences between the various ways to understand a choice, and to conceptualize making a choice, don’t matter.

In particular, the kinds of scenarios which distinguish between the various decision theories yield different answers depending on whether you want to know what kind of agent you should be (i.e., what total source code) to do best, how you should program an agent if you want it to do best, which decision rule you should adopt to do best, or which choice gives you the best outcome. Furthermore, these scenarios call into question how the supposed ‘choices’ made by the decision theory relate to our intuitive notion in a way that makes them relevant to some notion of good decision making, or whether they are simply demands that the laws of physics/logic give way to offer a better outcome in a way that has nothing to do with actual decisions.

Intuitions and Motivation

I’m sure some readers are shaking their heads at this point and saying something like

I don’t need to worry about technical issues about how to understand a choice. I can easily walk through Newcomb-style problems and the rules straightforwardly tell me who gets what, which is enough to satisfy my intuitive notion that theory X is better. Demanding one specify all these details is nitpicking.

To convince you that’s not enough, let me provide an extreme dramatization of how purported payouts can be misleading and how the answer turns on a precise specification of the question. Consider the following Newtonian, rather than Newcombian, problem. You fall off the top of the Empire State Building; what do you do as you fall past the fifth floor? What would one say about the virtues of Floating Decision Theory, which tells us that in such a situation we should make the choice to float gently to the ground? Now obviously one would prefer to float rather than fall, but posing the problem as a decision between these two options doesn’t render it a real choice. There is clearly something dubious about evaluating your decision theory based on its performance on the float/fall question. At least on one conception, a decision theory is no worse for failing to direct the agent to do something impossible for them, so we can’t blindly assume that any time we are handed a set of ‘choices’ and told what their payoffs are we can take those at face value.

Yet this is precisely the situation we encounter in the original Newcomb problem, as the very assumption of predictability which allows the demon2 to favor the one-boxers ensures the physical impossibility of choosing any number of boxes other than the number you did choose. Of course, the same is (up to quantum mechanical randomness) true of any actual ‘choice’ by a real person, but under certain circumstances we find it useful to idealize it as a free choice. What’s different about the Newcomb problem is that, understood naively, it simultaneously asks us to idealize selecting one or two boxes as a free choice while assuming it isn’t actually free. Thus, it’s reasonable to worry that our intuitions about choices can’t just be applied uncritically in Newcomb-type problems, and I now hope to motivate the concern that there might be multiple ways to understand the question being asked.

Let’s now modify this situation by imagining that we actually live in the Marvel Universe, so there are a number of people (floaters) who respond to large falls by, moments before impact, suddenly decelerating and floating gently to the ground. Now suppose we pose the question of whether, as you fall past the fifth floor, you should choose to have been born a floater or not. Obviously, this question suffers from the same infirmities as the above example in that, intuitively, there is no ‘choice’ involved in being a floater or not. However, we can mask this flaw by phrasing the choice not as one between being a floater and not being one, but as one between yelling, “Holy shit, I’m a floater!” and concentrating totally on desperately trying to orient yourself so your feet strike first. Now, presuming there is a strong (even exceptionless) psychological regularity that only floaters take the first option, it follows that EDT recommends making such a yell while CDT doesn’t.

However, taking a look at the situation, it seems clear that the two theories are in some sense answering different questions. If I want to know whether it is preferable to be the kind of person who yells “Holy shit, I’m a floater!” then I should consult EDT for an answer. If, instead, I’m interested in what I should do in that situation, that answer doesn’t seem particularly relevant. I believe this should move us to consider the possibility that we haven’t asked a clear question when we ask what the right decision theory is, and in the next section I will consider a variety of ways the problem we’re trying to solve can be precisified and note that they give rise to different decision theories.
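To see the divergence concretely, here is a minimal sketch (my own illustration, not part of the scenario above) with made-up utilities and an assumed base rate of floaters. It simply scores the two options the way EDT (conditioning on the action) and CDT (holding the causal facts fixed) would.

```python
# A minimal sketch with made-up numbers: utility 1 for surviving the fall,
# 0 otherwise, and a tiny assumed base rate of floaters in the population.
BASE_RATE_FLOATER = 0.001

UTILITY = {
    ("floater", "yell"): 1.0,       # floaters survive whatever they do
    ("floater", "brace"): 1.0,
    ("non-floater", "yell"): 0.0,   # non-floaters don't, whatever they do
    ("non-floater", "brace"): 0.0,
}

def edt_value(action):
    # EDT conditions on the action: by the stipulated regularity only floaters
    # yell, so P(floater | yell) = 1, while almost everyone who braces is a
    # non-floater, so P(floater | brace) is roughly the tiny base rate.
    p_floater = 1.0 if action == "yell" else BASE_RATE_FLOATER
    return (p_floater * UTILITY[("floater", action)]
            + (1 - p_floater) * UTILITY[("non-floater", action)])

def cdt_value(action):
    # CDT holds the causal background fixed: yelling doesn't change whether
    # you were born a floater, so both actions use the unconditional base rate.
    p_floater = BASE_RATE_FLOATER
    return (p_floater * UTILITY[("floater", action)]
            + (1 - p_floater) * UTILITY[("non-floater", action)])

for action in ("yell", "brace"):
    print(f"{action}: EDT={edt_value(action):.3f}  CDT={cdt_value(action):.3f}")
# yell:  EDT=1.000  CDT=0.001
# brace: EDT=0.001  CDT=0.001
```

The point is not the particular numbers but that the two calculations answer different questions: one about what kind of person it is good news to be, the other about what the act itself brings about.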

Possible Precisifications

Ultimately, there is something a bit weird about asking what decision a real physical agent should take in a given situation. After all, the agent will act just as its software dictates and/or the laws of physics require. Thus, as Yudkowsky recognizes, any comparison of decision theories is asking some kind of counterfactual. However, which counterfactual we ask makes a huge difference in which decision theory is preferable. For instance, all of the following are potential ways to precisify what it means for XDT to be a better decision theory than YDT.

  1. If there were a miracle that overrode the agent’s programming/physical laws at the moment of a choice, then choosing in the manner prescribed by XDT would yield better outcomes than choosing in the manner prescribed by YDT.
  2. Those actual agents who in fact more often choose the option favored by XDT do better than those who choose the option favored by YDT.
  3. Those actual agents which adopt/apply XDT do better than those who adopt/apply YDT.
  4. Suppose there is a miracle that overrides physical laws at the moment the agent’s programming/internal makeup is specified; then the agent does better if the miracle results in a makeup whose choices are more consistent with XDT than with YDT.
  5. As above, except with the agent actually applying XDT/YDT rather than merely favoring the outcomes which tend to agree with it.
  6. Moving one level up, we could ask which performs better: agents whose programming inclines them to adopt XDT when they consider it, or agents whose programming inclines them to adopt YDT.
  7. Finally, if what we are interested in is actually coding agents, i.e., writing AI software, we might ask whether programmers who code their agents to reason in a manner that prefers XDT’s choices produce agents that do better than programmers who code agents to reason in a manner that prefers YDT’s choices.
  8. Pushing that one level up, we could ask whether programmers who are inclined to adopt/apply XDT as true produce agents which do better than programmers inclined to adopt/apply YDT.

One could continue and list far more possibilities, but these eight are enough to illustrate the point.

For instance, note that if we are asking question 1, CDT outperforms EDT. For the purposes of question 1, the right answer to the Newcomb problem is to two-box. After all, if we idealize the choice as a miracle that allows deviation from physical law, then the demon’s prediction of whether we would be a two-boxer or a one-boxer no longer must be accurate, so two-boxing always outperforms one-boxing. It doesn’t matter that your software says you will choose only one box if we are asking about outcomes where a miracle occurs and overrides that software.

On the other hand, it’s clearly true that EDT does better than CDT with respect to question 2. That’s essentially the definition of EDT.
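To make the two readings concrete, here is a toy calculation (my own, using the standard $1,000/$1,000,000 box values) of Newcomb payoffs under question 1, where a miracle leaves the demon’s already-made prediction fixed, and under question 2, where we compare actual agents whose choices the demon predicts correctly.

```python
# Toy Newcomb payoffs: box B holds 1,000,000 iff the demon predicted
# one-boxing; box A always holds 1,000.
BOX_A, BOX_B = 1_000, 1_000_000

def payoff(boxes_taken, predicted_one_boxing):
    b = BOX_B if predicted_one_boxing else 0
    return b if boxes_taken == 1 else b + BOX_A

# Question 1: a miracle overrides your programming at the moment of choice,
# so the demon's (already-made) prediction stays fixed either way you choose.
for predicted in (True, False):
    assert payoff(2, predicted) > payoff(1, predicted)
# Two-boxing dominates: the CDT-style answer.

# Question 2: compare actual agents, whose choice the demon predicts correctly.
one_boxer_payoff = payoff(1, predicted_one_boxing=True)    # 1,000,000
two_boxer_payoff = payoff(2, predicted_one_boxing=False)   # 1,000
assert one_boxer_payoff > two_boxer_payoff
# Actual one-boxers walk away richer: the EDT-style answer.
```

Same payoff table, two different counterfactuals, two different verdicts.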

To distinguish the remaining options we need to consider a range of further scenarios, such as Newcombian demons who punish agents which actually apply/adopt XDT or YDT in reaching their conclusions, or demons who punish agents whose programmers adopted one of those theories.

Ultimately, which criterion we should use to compare decision theories depends on what we want to achieve. Different idealizations/criteria will be appropriate depending on whether we are asking which rule we ourselves should adopt, how we should program agents to act, how we should program agents who program agents, and so on. Moreover, I’d suggest that once we’ve fully precisified the kind of question we want to ask, the whole debate about which decision theory is best becomes irrelevant. Given a fully specified question we can just sit down and compute (or do empirical analysis), and when we can’t, it indicates that we’ve failed to fully specify what we are asking.

The Use of Decision Theory By Agents

As a postscript, I’d note that it’s also misguided to assume that the right way to program some kind of AI agent is to have that agent adopt some kind of decision-theory-like framework. Many discussions of decision theories seem to presume this by phrasing questions in terms of which decision theory an AI should apply/adopt. However, there is no reason to suppose that the way to produce the behavior favored by XDT is for the agent to actually believe/apply XDT. For instance, if a demon punishes agents who have adopted XDT, then the outcomes XDT prefers might be best achieved by agents which explicitly eschew XDT. More pragmatically, it’s not at all clear that the most effective way for agents to reach XDT-compatible outcomes is to perform the considerations demanded by XDT. That’s a good way to implement some algorithms but not all.
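As a toy illustration of this last point (my own construction, with names and numbers chosen only to exhibit the structure), imagine a demon that inspects an agent’s internals and docks a penalty from any agent that explicitly applies XDT, while the XDT-favored behavior itself is still rewarded.

```python
# A toy demon that punishes the internal procedure, not the behavior:
# an agent that reaches the XDT-favored action without running XDT does better
# than one that reaches the same action by explicit XDT deliberation.
XDT_PENALTY = 500

class DeliberatingAgent:
    """Reaches the XDT-recommended action by explicitly applying XDT."""
    applies_xdt = True
    def act(self):
        return "xdt_recommended_action"

class HardcodedAgent:
    """Emits the same action as a reflex, with no XDT reasoning inside."""
    applies_xdt = False
    def act(self):
        return "xdt_recommended_action"

def demon_payoff(agent, base_reward=1_000):
    reward = base_reward if agent.act() == "xdt_recommended_action" else 0
    if agent.applies_xdt:      # the demon inspects internals and docks
        reward -= XDT_PENALTY  # any agent that actually runs XDT
    return reward

print(demon_payoff(DeliberatingAgent()))  # 500
print(demon_payoff(HardcodedAgent()))     # 1000
```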

The reason that decision theory is useful in normal situations (i.e., those lacking Omega/Newcombian demons) is that it’s a decent heuristic to assume that the way we internally consider outcomes/make choices doesn’t affect the payout we receive. Under this assumption pretty much all ways of precisifying the question give the same answer, and the framework offers some good advice for programming agents. However, its usefulness once we abandon this assumption isn’t clear and can’t simply be assumed.

Thus, not only would I argue that the debate over which decision theory is best is misguided, but that we need to be more careful about the assumptions we make about applicability as well.


  1. For instance, any attempt to answer what makes one programming language better than another reveals substantial disagreement about which tradeoffs are desirable and no agreed-upon framework for resolving them. Indeed, we in some sense all recognize that which programming language tradeoffs are desirable is context dependent. 
  2. Or in Yudkowsky’s formulation, Omega.