## Does Predictive Processing Explain Too Much?

### A Request For Clarification On What Predictive Processing Rules Out

So Scott Alexander has an interesting book review up of *Surfing Uncertainty*, which I encourage everyone to read for themselves. However, most of the post is really an exploration of the “predictive processing” model of brain function. I’ll leave a more in-depth explanation of what this model is to Scott and just offer the following excerpt for those readers too lazy to click through.

> Predictive processing begins by asking: how does this happen? By what process do our incomprehensible sense-data get turned into a meaningful picture of the world?
>
> The key insight: the brain is a multi-layer prediction machine. All neural processing consists of two streams: a bottom-up stream of sense data, and a top-down stream of predictions. These streams interface at each level of processing, comparing themselves to each other and adjusting themselves as necessary.
>
> […]
>
> As these two streams move through the brain side-by-side, they continually interface with each other. Each level receives the predictions from the level above it and the sense data from the level below it. Then each level uses Bayes’ Theorem to integrate these two sources of probabilistic evidence as best it can. This can end up a couple of different ways.

The upshot of these different ways is that when everything happens as predicted, the higher levels are never notified of any change, but when there is a mismatch the error draws attention from those higher layers. In some circumstances, however, a strong prediction from a higher layer can cause lower layers to “rewrite the sense data to make it look as predicted.”
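To make the mechanism concrete, here is a minimal toy sketch of one layer doing the Bayesian integration Scott describes. This is my own illustration, not from the book or the review: the top-down prediction plays the role of the prior, the bottom-up sense data supplies the likelihoods, and the layer only notifies the layer above when the prediction error exceeds an (arbitrarily chosen) threshold.

```python
# Toy sketch of a single layer in a predictive-processing hierarchy.
# All numbers and the error threshold are illustrative assumptions.

def posterior(prior: float, lik_true: float, lik_false: float) -> float:
    """Bayes' theorem for a binary hypothesis: P(hypothesis | data)."""
    num = prior * lik_true
    return num / (num + (1 - prior) * lik_false)

def layer_update(prior, lik_true, lik_false, error_threshold=0.3):
    """Integrate top-down prediction (prior) with bottom-up data (likelihoods).

    Returns the posterior belief and whether the prediction error is large
    enough to propagate up to the next layer.
    """
    post = posterior(prior, lik_true, lik_false)
    surprise = abs(post - prior)          # crude stand-in for prediction error
    notify_above = surprise > error_threshold
    return post, notify_above

# Sense data matches a confident prediction: error stays local.
post, notify = layer_update(prior=0.9, lik_true=0.8, lik_false=0.2)

# Sense data contradicts the prediction: the mismatch propagates upward.
post2, notify2 = layer_update(prior=0.9, lik_true=0.1, lik_false=0.9)
```

On this sketch, "rewriting the sense data" would correspond to a prior so strong that the posterior barely moves no matter what the likelihoods say.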

I admit that I’m intrigued by the idea of predictive processing, especially the suggestion that our muscle control is actually effectuated merely by ‘predicting’ our arm will be in a certain state and acting to minimize prediction error. However, my first reaction is to wonder how much content there is in this model.

Describing some kind of processing or control task in terms of predictions has a certain feel of universality to it. This is only a vague sense based on a book review, but I worry that invoking the predictive processing model to describe how our brains work is much like invoking the lambda calculus to describe how a particular computer functions. Namely, I worry that predictive processing is such a powerful model that virtually anything remotely plausible as a mechanism for processing sense data and effectuating control over our limbs could be fit into it, meaning it offers no real insight.

I mean, it was already apparent before this model came onto the scene that how we see even low-level visual data is affected by high-level classifications. The various figure-ground illusions make this point quite clearly. It was also already apparent that attention to one task (counting passes) could limit our ability to notice some other kind of oddity (a guy in a gorilla suit). However, it’s far from clear that the predictive processing model really adds anything to our understanding here.

Indeed, to even make sense of these examples we have to understand the relevant predictions as happening at a very abstract, highly context-dependent level, so that when we focus on the number of basketball passes in a game a man in a gorilla suit walking past no longer counts as a sufficiently unpredicted event (or so that some other story explains why paying one sort of attention suppresses this kind of noticing). That’s fine, but allowing this level of abstraction/freedom in describing the thing to be predicted makes me wonder what couldn’t be suitably described in terms of this model.

The attempt to describe our imagination (e.g., our ability to picture a generic police officer in our minds) as utilizing the mental machinery that would generate a sense-data stream as a prediction to match against reality raises more questions. Obviously, the notion of matching must be a very high-level one, quite removed from the actual pictorial representation, if the mental image we conjure when we think of policemen is to be seen as matching the sense-data stream we experience when we encounter a policeman. Yet if the level at which we evaluate a predictive match is so abstract, why do we imagine a particular image when we think of a policeman and not merely whatever vague high-level abstracta we will judge to match when we actually view one? I’m sure there is a plausible story to tell here about invoking the same lower-level machinery we use to process sense-data when we imagine, and leveraging that same feedback, but again I’m left wondering what work predictive processing is really doing here.

More generally, I wonder to what extent all these predictions wouldn’t follow from just assuming, as we know to be true, that the brain processes information in ‘layers’, that there can be feedback between these layers, and that the goal of our mental tasks is frequently to predict events or control actions. It’s not even obvious to me that the claimed predictions of the theory, like the placebo effect, couldn’t have been spun equally well the other way had the effect been different: e.g., when your high-level processes predict that you won’t feel pain, it will be particularly salient when you nevertheless do feel pain, so placebo pain meds should result in more people reporting pain.

But I haven’t read the book myself yet, so maybe predictive processing has been suitably precisified there so as to rule out many plausible ways the brain might have behaved and to clearly predict outcomes like the placebo effect. However, I wrote this post merely to raise the possibility that a paradigm like this can fail precisely because it is too good at describing phenomena. Hopefully my worries are misplaced and someone can explain to me in the comments just what kinds of plausible models of brain function this paradigm rules out.

## Rejecting Rationality

I thought I would use this first post to explain this blog’s title. It is not, despite appearances to the contrary, meant to suggest any animosity toward the rationality community, nor sympathy with the idea that when evaluating claims we should ever favor emotions and intuition over argumentation and evidence. Rather, it is intended as a critique of the ambiguous overuse of the term ‘rationality’ by the rationality community in general (and Yudkowsky specifically).

I want to suggest that there are two different concepts we use the word ‘rationality’ to describe, and that the rationality community overuses the term in a way that invites confusion. Both conceptions of rationality are judgements of epistemic virtue, but the nature of that virtue differs.

### Rationality As Ideal Evaluation Of Evidence

The first conception of rationality reflects the classic idea that rationality is a matter of a priori theoretical insight. This makes intuitive sense: rationality, in telling us how we should respond to evidence, shouldn’t depend on the particular way the evidence turns out. On this conception rationality constrains how one reaches judgements from arbitrary data, and something is rational just if we expect it to maximize true beliefs in the face of a completely unknown/unspecified fact pattern¹. In other words, this is the kind of rationality you want if you are suddenly flung into another universe where the natural laws, number of dimensions or even the correspondence between mental and physical states might differ radically from our own.

On this conception, having logically coherent beliefs and obeying the axioms of probability can be said to be rationally required (as doing so never forces you to believe fewer truths), but it’s hard to make a case for much else. Carnap (among others) suggested at one point that there might be something like a rationally (in this sense) preferable way of assigning priors, but the long history of failed attempts and conceptual arguments suggests this isn’t possible.

Note that on this conception of rationality it is perfectly appropriate to criticize a belief forming method for how it might perform if faced with some other set of circumstances. For instance, we could appropriately criticize the rule ‘never believe in ghosts/psychics’ on the grounds that it would have led us to the wrong conclusions in a world where these things were real.

### Rationality As Heuristic

The second conception of rationality is simpler. Rationality is what will lead human beings like us to true beliefs in this world. Thus, this notion of rationality can take into account things that happen to be true. For instance, consider the rule that, when asked a question on a math test (written by humans in the usual circumstances) that calls for a numerical answer, you should judge that 0 is the most probable answer. This rule is almost certainly truth-conducive, but only because it happens to be true that human psychology tends to favor asking questions whose answer is 0.

Now a heuristic like this might at first seem pretty distant from the kind of thing we usually mean by rationality, but think about some of the rules that are frequently said to be rationally required/favored: for instance, that one should steelman² one’s opponent’s arguments, try to consider the issue in the most dispassionate way one can manage, and break up complex/important events.

For instance, suppose that humans were psychologically disposed to be overly deferential, so that it was far more common to underestimate the strength of your own argument than to underestimate your opponent’s. In this case steelmanning would make us even more likely to reach the wrong conclusions, not less. Similarly, our emotions could have reflected useful information available to our subconscious minds but not our conscious minds, in such a way that they provided a good guide to truth. In such a world, trying to reach probability judgements via dispassionate consideration wouldn’t be truth-conducive.

Thus, on this conception of rationality whether or not a belief forming method is rational depends only on how well it does in the actual world.

### The Problematic Ambiguity

Unfortunately, when people in the rationality community talk about rationality they tend to blur these two concepts together. That is, they advocate belief forming mechanisms that could only be said to be rational in the heuristic sense, but assume that they can determine matters of rationality purely by contemplation, without empirical evidence.

For instance, consider these remarks by Yudkowsky or this lesswrong post. Whether or not they come out and assert it, they convey the impression that there is some higher discipline or lifestyle of rationality which goes far beyond simply not engaging in logical contradiction or violating the probability axioms. Yet they seem to assume that we can determine what is/isn’t rational by pure conceptual analysis rather than empirical validation.

This issue is even clearer when we criticize others for the ways they form beliefs. For instance, we are inclined to say that people who adopt the rule ‘believe what my community tells me is true’ or ‘believe god exists/doesn’t exist regardless of evidence’ are being irrational, since such rules would yield incorrect results had they been born in a community with crazy beliefs or in a universe with/without deities. Yet, as I observed above, the very rules we take to be core rational virtues have the very same property.

The upshot of this isn’t that we should give up on finding good heuristics for truth. Not at all. Rather, I’m merely suggesting we take more care, especially in criticizing other people’s belief forming methods, to ensure we are applying coherent standards.

### A Third Way

One might hope that there is yet another concept of rationality, one that somehow splits the difference between the two I provided here: a notion that allows us to take into account things like our psychological makeup or seemingly basic (if contingent) properties our universe has, e.g., that we experience it as predictable rather than as an orderless succession of experiential states, but that doesn’t let us build facts like ‘Yetis don’t exist’ into supposedly rational belief forming mechanisms. Frankly, I’m skeptical that any such coherent notion can be articulated, but I don’t currently have a compelling argument for that claim.

Finally, I’d like to end by pointing out another issue we should be aware of regarding the term rationality (though hardly unique to it). Rationality is ultimately a property of belief forming rules, while in the actual world what we get is instances of belief formation and some vague intentions about how we will form beliefs in the future. Thus there is the constant temptation to simply find some belief forming rule that qualifies as sufficiently rational and use it to justify this particular instance of belief. However, it’s not generally valid to infer that you are forming beliefs appropriately just because each belief you form agrees with some sufficiently rational (in the heuristic sense) belief forming mechanism.

For instance, suppose there are 100 different decent heuristics for forming a certain kind of belief. We know that each one is imperfect and gets different cases wrong, but any attempt to come up with a better rule doesn’t yield anything humans (with our limited brains) can usefully apply. It is entirely plausible that almost any particular belief of this kind matches up with one of these 100 heuristics, allowing you to always cite a justification for your belief even though you underperform every single one of them.
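This point can be checked with a small simulation of my own construction (the numbers, 100 heuristics each correct 70% of the time on independent errors, are arbitrary assumptions). A believer who adopts his preferred answer whenever *some* heuristic endorses it can cite a justification on essentially every trial, yet his accuracy hovers around chance while every individual heuristic scores about 70%.

```python
import random

random.seed(0)
N_TRIALS = 10_000
N_HEURISTICS = 100
HEURISTIC_ACC = 0.7   # each heuristic alone is right 70% of the time (assumed)

heuristic_correct = [0] * N_HEURISTICS
cherry_picker_correct = 0
always_cited = 0

for _ in range(N_TRIALS):
    truth = random.random() < 0.5            # the fact of the matter
    # Each heuristic independently reports the truth with prob HEURISTIC_ACC.
    verdicts = [truth if random.random() < HEURISTIC_ACC else (not truth)
                for _ in range(N_HEURISTICS)]
    for i, v in enumerate(verdicts):
        heuristic_correct[i] += (v == truth)

    # The motivated believer wants to believe True, and does so whenever
    # at least one heuristic endorses True; with 100 heuristics he can
    # almost always cite one.
    desired = True
    if desired in verdicts:
        belief = desired
        always_cited += 1
    else:
        belief = not desired
    cherry_picker_correct += (belief == truth)

print(f"cited a heuristic on {always_cited / N_TRIALS:.1%} of trials")
print(f"cherry-picker accuracy: {cherry_picker_correct / N_TRIALS:.1%}")
print(f"typical heuristic accuracy: {heuristic_correct[0] / N_TRIALS:.1%}")
```

Every belief the cherry-picker holds agrees with some decent heuristic, yet he underperforms all of them, which is exactly the invalid inference described above.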

1. I’m glossing over the question of whether there is a distinction between an arbitrary possible world and a ‘random’ possible world. For instance, suppose that some belief forming rule yields true beliefs in all but finitely many possible worlds (out of some hugely uncountable set of possible worlds). That rule is not authorized in an arbitrary possible world (choose a counterexample world and it leads to falsehood), but intuitively it seems justified, and any non-trivial probability measure (i.e. one that doesn’t concentrate positive mass on any finite set of worlds) on the space of possible worlds would assign probability 1 to the validity of the belief forming procedure. However, this won’t be an issue in this discussion.
2. The opposite of strawmanning. Rendering your opponent’s argument in the strongest fashion possible.
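The measure-theoretic claim in footnote 1 can be stated compactly (my formalization, not the author’s):

```latex
\text{Let } W \text{ be the space of possible worlds and }
F \subset W \text{ the finite set of worlds where the rule fails.} \\
\text{If } \mu(F') = 0 \text{ for every finite } F' \subset W, \text{ then } \\
\mu(\{\, w \in W : \text{the rule succeeds in } w \,\}) \;=\; \mu(W \setminus F) \;=\; 1 - \mu(F) \;=\; 1.
```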