Thoughts on rationalism and the rationalist community from a skeptical perspective. The author rejects rationality in the sense that he believes it isn't a logically coherent concept, holds that the larger rationalism community is insufficiently critical of its beliefs, and maintains that ELIEZER YUDKOWSKY IS NOT THE TRUE CALIPH.
Why do we put up with this kind of shit (preventing college athletes from even monetizing their own youtube videos)? I mean, no one really believes that preserving some weird value of “amateurness” in sports is deeply important, do they? Even if you thought it was, why not let the athletes themselves vote on whether being amateur is worth the loss of revenue?
If you think they are too young and immature, go ask former college athletes to make the call. Of course the colleges won’t, because this is all about making money off the athletes.
Sigh, it just pisses me off that we can fight so much about issues of political controversy on campuses, yet in a case where it’s clear as day that colleges are doing the wrong thing, something both the right and left should support fixing, the issue somehow gets a pass. Yes, this is just a contentless rant, but sometimes it happens.
When it is revealed that a public figure said something with racist/sexist overtones, criticism piles on fast. Even if it is clear that the figure doesn’t really hold racist/sexist attitudes, the common refrain is that the remark is still extremely harmful because it normalizes racism/sexism/etc. Presumably the theory is that if people believe high-status people commonly behave this way, they will think it’s ok for them to do so as well.
Is this just a lie (or self-deception) for partisan purposes? Consider the implications if you really believed the following back when Bush was president (no one will plausibly believe Trump isn’t saying sexist things, whatever you do):
G. W. Bush isn’t really a racist/sexist (replace with Clinton if you prefer), but he sometimes uses racist/sexist language without thinking in the privacy of the White House.
If people realized the president was saying these racist things, they too would think racism was ok, and that would have bad consequences.
First, you should be much more angry at whatever staffer leaked the fact that the president used racist language than at the president himself. The staffer had time to contemplate the leak and still chose to make the country think the president uses racist slurs, while the president merely has a slip of the tongue from time to time. Indeed, you should be most angry if the staffer is a minority themselves who claims to be leaking the information out of concern for racial justice. Even if you give the leaker some kind of pass for ignorance1, at the very least you should be trying your damnedest to (quietly) discourage any future such leaks.
Second, you should be archenemies with the liberal activists and members of the social justice community who spin stories about how racist/sexist the president is, even (perhaps especially) when it is true, excepting only the very rare case where you believe doing so will affect the balance of power enough to outweigh the harm to race relations. Even with Trump it should be inexcusable to make allegations of dog-whistle racism without absolutely rock-solid evidence, such as staffer testimony about intent and recognition in the community.
Third, you should be wary of maximally racist/sexist interpretations of a public figure’s comments. Even if it is plausible they meant them in the worst possible way, you should favor the least racist/sexist interpretation that is plausible, just so you don’t further normalize racism/sexism.
Yet, while I see people make the ‘this normalizes X’ argument all the damn time, I’ve yet to see them get angry, upset, or even remonstrate with people who push marginally plausible theories of racist/sexist intent or dubious claims of racist/sexist language. I’ve certainly never seen anyone making such an argument even suggest that it was bad/wrong for someone to leak that information. To the contrary, they usually suggest it was in the national interest.
So how should one understand such claims? They can’t really believe the harm from normalization is that big a deal, or they wouldn’t be on board with accusations that offer only minor political benefit at the cost of normalizing such behavior. My best guess is that what they really mean is: how dare you break this social norm which I feel is very important. Even though your action only had a really tiny harmful effect, the norm is really important, because without it people would come to believe it was normal and acceptable to engage in racism/sexism.
That’s a fair statement, but notice the implication: since any particular incident only does minor harm to this norm and barely nudges people’s sense of what is normal, only a minor penalty is appropriate. After all, the benefit the offender gets from the slur is presumably virtually nothing, and it’s sufficient if everyone takes relatively weak action to ensure they don’t utter any slurs, so a small deterrent should suffice. In other words, we still can’t interpret the complainant as making a cogent complaint. Their intent in raising the specter of normalization was to show why this kind of behavior was so serious we couldn’t just let it go with a slap on the wrist, yet their own willingness to prioritize a modicum of political advantage over avoiding further instances of normalization shows they can’t coherently believe that the possibility of normalization establishes the seriousness of the offense.
Shouldn’t the speaker get the very same pass if he hasn’t worked harder to control his occasional slips of the tongue because he isn’t aware they have any negative effect on others? Often racist phrases are picked up simply from hearing them said, so the speaker isn’t in any way more morally responsible than the leaker. Indeed, he is arguably in a better position: the leaker has to sit down and deliberate over whether to leak, while the speaker may never have done that regarding his slips of the tongue. ↩
I thought I would use this first post to explain this blog’s title. It is not, despite appearances to the contrary, meant to suggest any animosity toward the rationality community, nor sympathy with the idea that when evaluating claims we should ever favor emotions and intuition over argumentation and evidence. Rather, it is intended as a critique of the ambiguous overuse of the term ‘rationality’ by the rationality community in general (and Yudkowsky specifically).
I want to suggest that there are two different concepts we use the word ‘rationality’ to describe, and that the rationality community overuses the term in a way that invites confusion between them. Both conceptions of rationality are judgements of epistemic virtue, but the nature of that virtue differs.
Rationality As Ideal Evaluation Of Evidence
The first conception of rationality reflects the classic idea that rationality is a matter of a priori theoretical insight. This makes intuitive sense: rationality, in telling us how we should respond to evidence, shouldn’t depend on the particular way the evidence turns out. On this conception rationality constrains how one reaches judgements from arbitrary data, and something is rational just if we expect it to maximize true beliefs in the face of a completely unknown/unspecified fact pattern1. In other words, this is the kind of rationality you want if you are suddenly flung into another universe where the natural laws, the number of dimensions, or even the correspondence between mental and physical states might differ radically from our own.
On this conception, having logically coherent beliefs and obeying the axioms of probability can be said to be rationally required (since doing so never forces you to believe fewer truths), but it’s hard to make a case for much else. Carnap (among others) suggested at one point that there might be a rationally (in this sense) preferable way of assigning priors, but the long history of failed attempts and conceptual arguments suggests this isn’t possible.
Note that on this conception of rationality it is perfectly appropriate to criticize a belief forming method for how it might perform if faced with some other set of circumstances. For instance, we could appropriately criticize the rule ‘never believe in ghosts/psychics’ on the grounds that it would have led us to the wrong conclusions in a world where those things were real.
Rationality As Heuristic
The second conception of rationality is simpler: rationality is whatever leads human beings like us to true beliefs in this world. This notion of rationality can therefore take into account things that merely happen to be true. For instance, consider the rule that when a question on a math test (written by humans in the usual circumstances) calls for a numerical answer, you should judge 0 to be the most probable answer. This rule is almost certainly truth-conducive, but only because it happens to be true that human psychology tends to favor asking questions whose answer is 0.
Now, a heuristic like this might at first seem pretty distant from the kinds of things we usually mean by rationality, but consider some of the rules that are frequently said to be rationally required/favored: one should steelman2 one’s opponents’ arguments, consider the issue in the most dispassionate way one can manage, and break up complex/important events.
For instance, suppose that humans were psychologically disposed to be overly deferential, so that it was far more common to underestimate the strength of your own argument than to underestimate your opponent’s. In that case steelmanning would make us more likely to reach wrong conclusions, not less. Similarly, our emotions could have reflected useful information available to our subconscious minds but not our conscious minds, in such a way that they provided a good guide to truth. In such a world, trying to reach probability judgements via dispassionate consideration wouldn’t be truth-conducive.
Thus, on this conception of rationality whether or not a belief forming method is rational depends only on how well it does in the actual world.
The Problematic Ambiguity
Unfortunately, when people in the rationality community talk about rationality they tend to blur these two concepts together. That is, they advocate belief forming mechanisms that could only be said to be rational in the heuristic sense, yet assume that they can determine matters of rationality purely by contemplation, without empirical evidence.
For instance, consider these remarks by Yudkowsky or this lesswrong post. Whether or not they come out and assert it, they convey the impression that there is some higher discipline or lifestyle of rationality which goes far beyond simply avoiding logical contradiction and violations of the probability axioms. Yet they seem to assume that we can determine what is/isn’t rational by pure conceptual analysis rather than empirical validation.
This issue is even clearer when we criticize others for the ways they form beliefs. For instance, we are inclined to say that people who adopt the rule ‘believe what my community tells me is true’ or ‘believe god exists/doesn’t exist regardless of evidence’ are being irrational, since such rules would have yielded incorrect results had they been born in a community with crazy beliefs or in a universe with/without deities. Yet, as I observed above, the very rules we take to be core rational virtues have the very same property.
The upshot of this isn’t that we should give up on finding good heuristics for truth. Not at all. Rather, I’m merely suggesting we take more care, especially when criticizing other people’s belief forming methods, to ensure we are applying coherent standards.
A Third Way
One might hope that there is yet another concept of rationality that somehow splits the difference between the two I provided here: a notion that allows us to take into account things like our psychological makeup or seemingly basic (if contingent) properties of our universe, e.g., that we experience it as predictable rather than as an orderless succession of experiential states, but doesn’t let us build facts like ‘Yetis don’t exist’ into supposedly rational belief forming mechanisms. Frankly, I’m skeptical that any such coherent notion can be articulated, but I don’t currently have a compelling argument for that claim.
Finally, I’d like to end by pointing out another issue we should be aware of regarding the term rationality (though it is hardly unique to it). Rationality is ultimately a property of belief forming rules, while in the actual world what we get is instances of belief formation plus some vague intentions about how we will form beliefs in the future. Thus there is the constant temptation to simply find some belief forming rule that qualifies as sufficiently rational and use it to justify this particular instance of our belief. However, it’s not generally valid to infer that you are forming beliefs appropriately just because each belief you form agrees with some sufficiently rational (in the heuristic sense) belief forming mechanism.
For instance, suppose there are 100 different decent heuristics for forming a certain kind of belief. We know that each one is imperfect and gets different cases wrong, but any attempt to come up with a better rule doesn’t yield anything humans (with our limited brains) can usefully apply. It is entirely plausible that almost any particular belief of this kind matches up with one of these 100 heuristics, allowing you to always cite a justification for your belief even though you underperform every single one of those heuristics.
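To see how easily this can happen, here is a minimal toy simulation of the scenario above (my own sketch, not anything from the post; the 80% accuracy figure and the motivated-believer model are assumptions made up for illustration). A believer adopts whatever congenial answer at least one of the 100 heuristics endorses. With that many heuristics, essentially every answer finds an endorser, so every belief can cite a “justification,” yet the believer does no better than a coin flip while each individual heuristic is 80% accurate:

```python
import random

random.seed(0)
N_QUESTIONS = 10_000
N_HEURISTICS = 100
ACCURACY = 0.8  # assumed accuracy of each individual heuristic

# Random binary truths for each question.
truths = [random.random() < 0.5 for _ in range(N_QUESTIONS)]

# Each heuristic independently answers each question correctly with prob ACCURACY.
verdicts = [[t if random.random() < ACCURACY else not t for t in truths]
            for _ in range(N_HEURISTICS)]

# A motivated believer: picks a congenial answer unrelated to the truth,
# then keeps it iff *some* heuristic happens to endorse it.
kept = 0
right = 0
for q, truth in enumerate(truths):
    belief = random.random() < 0.5  # congenial answer, uncorrelated with truth
    if any(verdicts[h][q] == belief for h in range(N_HEURISTICS)):
        kept += 1
        right += belief == truth

print(f"beliefs 'justified' by some heuristic: {kept / N_QUESTIONS:.3f}")
print(f"accuracy of those justified beliefs:   {right / kept:.3f}")
print(f"accuracy of each individual heuristic: {ACCURACY:.3f}")
```

Nearly 100% of the believer’s answers get endorsed by at least one heuristic, while their accuracy hovers around 50%, well below any single heuristic.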
I’m glossing over the question of whether there is a distinction between an arbitrary possible world and a ‘random’ possible world. For instance, suppose that some belief forming rule yields truth in all but finitely many possible worlds (out of some hugely uncountable set of possible worlds). That rule is not authorized in an arbitrary possible world (choose a counterexample world and it leads to falsehood), but intuitively it seems justified, and any non-trivial probability measure (i.e. one that doesn’t concentrate on any finite set of worlds) on the space of possible worlds would assign probability 1 to the validity of the belief forming procedure. However, this won’t be an issue in this discussion. ↩
The opposite of strawmanning: rendering your opponent’s argument in the strongest fashion possible. ↩