Thoughts on rationalism and the rationalist community from a skeptical perspective. The author rejects rationality in the sense that he believes it isn't a logically coherent concept, that the larger rationalism community is insufficiently critical of its beliefs, and that ELIEZER YUDKOWSKY IS NOT THE TRUE CALIPH.

Algorithmic Gaydar

Machine Learning, Sensitive Information and Prenatal Hormones

So there’s been some media attention recently to this study, which found it could predict sexual orientation from facial photographs with 91% accuracy for men and 83% for women. Sadly, everyone is focusing on the misleading idea that we can somehow use this algorithm to decloak who is gay and who isn’t, rather than on the really interesting fact that this is suggestive of some kind of hormonal or developmental cause of homosexuality.

Rather, given 5 pictures of a gay man and 5 pictures of a straight man, 91% of the time it is able to correctly tell which is which. Those of us who remember basic statistics, with all those questions about false positive rates, should realize that, given the low rate of homosexuality in the population, this algorithm doesn’t actually give very strong evidence of homosexuality at all. Indeed, one would expect that, if turned loose on a social network, the vast majority of individuals judged to be gay would be false positives. However, in combination with learning based on other signals, like your friends on social media, one could potentially do a much better job. But at the moment there isn’t much real danger this tech could be used by anti-gay governments to identify and persecute individuals.
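The base-rate point can be made concrete with a toy Bayes calculation. Note the 91% figure in the study is a pairwise accuracy, not a screening accuracy, so the numbers below are purely illustrative assumptions: a hypothetical classifier with 91% sensitivity and 91% specificity, and an assumed 5% base rate.

```python
def posterior_prob(base_rate, sensitivity, specificity):
    """P(gay | flagged as gay) via Bayes' theorem."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Hypothetical numbers, for illustration only.
p = posterior_prob(base_rate=0.05, sensitivity=0.91, specificity=0.91)
print(f"P(gay | flagged) = {p:.2f}")  # about 0.35
```

Even with an impressively accurate classifier, roughly two out of three people flagged would be false positives, which is the point made above.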

Also, I wish the media would be more careful with their terms. This kind of algorithm doesn’t reveal private information; it reveals sensitive information inadvertently exposed publicly.

However, what I found particularly interesting was the claim in the paper that they were able to achieve a similar level of accuracy for photographs taken in a neutral setting. This, along with other aspects of the algorithm, strongly suggests the algorithm isn’t picking up on some kind of gay/straight difference in what kinds of poses people find appealing. The researchers also generated a heat map of which parts of the image the algorithm focuses on, and while some of these do suggest grooming-based information about hair, eyebrows or beard plays some role, the strong role played by the nose, cheeks and corners of the mouth suggests that relatively immutable characteristics are pretty helpful in predicting orientation.

The authors acknowledge that personality has been found to affect facial features in the long run, so this is far from conclusive. I’d also add my own qualification that the selection procedure might play some role, e.g., if homosexuals are less willing to use a facial closeup on dating sites or Facebook when they are ugly, the algorithm could be picking up on that. However, it is at least interestingly suggestive evidence for the prenatal hormone theory (or some other developmental theory) of homosexuality.

Can Hurricanes Return Shot Bullets?

So there have been some suggestions online that shooting a bullet into a hurricane could send it back at you by wrapping it around the eye.

So, based on the images on Wikipedia, the area presented by a 7.62 mm rifle bullet to a wind striking it from the side is less than 12 mm × 30 mm but more than half that, so at least 0.00018 m². The minimum hurricane wind speed is 33 m/s, so plugging these numbers into a wind load calculator gives about 0.12 newtons. This compares to the force of gravity of about 0.01 kg × 9.8 m/s² ≈ 0.1 newtons. Yet fired straight up a bullet will only travel on the order of a kilometer, while the eye of a hurricane is typically 30+ km across.
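The arithmetic above can be sketched in a few lines. Assumptions: a side-on area of 0.00018 m², a bullet mass of about 10 g, and the common flat-plate wind-load formula F = 0.613 · v² · A (air density folded into the 0.613 coefficient, drag coefficient taken as 1), which is roughly what online wind-load calculators use.

```python
def wind_force(v_ms, area_m2):
    """Approximate wind load: F = 0.613 * v^2 * A (flat plate, Cd = 1)."""
    return 0.613 * v_ms**2 * area_m2

wind = wind_force(33.0, 0.00018)  # minimum hurricane wind speed, side-on bullet area
gravity = 0.010 * 9.8             # ~10 g bullet

print(f"side wind force ~ {wind:.2f} N")  # about 0.12 N
print(f"gravity ~ {gravity:.2f} N")       # about 0.10 N
```

The side wind force is comparable to gravity, but since a bullet fired upward only travels about a kilometer while the eye is tens of kilometers wide, there is no room for a full circuit.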

So no, a bullet won’t wrap around the hurricane and come back to hit you. The wind simply whips it off course too quickly for it to complete a full circuit, though one could hit something unintended by firing in a strong crosswind.

I strongly suspect that simple range limitations would be an easier way to figure this out but I found the calculation kinda fun.

Does Predictive Processing Explain Too Much?

A Request For Clarification On What Predictive Processing Rules Out

So Scott Alexander has an interesting book review up of Surfing Uncertainty, which I encourage everyone to read for themselves. However, most of the post is really an exploration of the “predictive processing” model of brain function. I’ll leave a more in-depth explanation of what this model is to Scott and just offer the following excerpt for those readers too lazy to click through.

Predictive processing begins by asking: how does this happen? By what process do our incomprehensible sense-data get turned into a meaningful picture of the world?

The key insight: the brain is a multi-layer prediction machine. All neural processing consists of two streams: a bottom-up stream of sense data, and a top-down stream of predictions. These streams interface at each level of processing, comparing themselves to each other and adjusting themselves as necessary.
As these two streams move through the brain side-by-side, they continually interface with each other. Each level receives the predictions from the level above it and the sense data from the level below it. Then each level uses Bayes’ Theorem to integrate these two sources of probabilistic evidence as best it can. This can end up a couple of different ways.

The upshot of these different ways is that when everything happens as predicted the higher levels aren’t notified of any change, but when there is a mismatch the mismatch draws attention from those higher layers. However, in some circumstances a strong prediction from a higher layer can cause lower layers to “rewrite the sense data to make it look as predicted.”
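The Bayesian integration step the excerpt describes can be sketched as a precision-weighted average of two Gaussian estimates: a top-down prediction and bottom-up sense data, each weighted by its precision (inverse variance). This is my own toy illustration, not code from the book, and all the numbers are made up.

```python
def integrate(pred_mean, pred_var, sense_mean, sense_var):
    """Precision-weighted fusion of two Gaussian estimates (Bayes' rule)."""
    w_pred = 1.0 / pred_var
    w_sense = 1.0 / sense_var
    mean = (w_pred * pred_mean + w_sense * sense_mean) / (w_pred + w_sense)
    var = 1.0 / (w_pred + w_sense)
    return mean, var

# A confident prediction (low variance) dominates noisy sense data --
# the "rewrite the sense data" case described above.
mean, var = integrate(pred_mean=0.0, pred_var=0.1, sense_mean=5.0, sense_var=10.0)
print(f"integrated estimate = {mean:.3f}")  # close to the prediction, ~0.05
```

Flip the variances and the sense data dominates instead, which is the case where a mismatch propagates upward.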

I admit that I’m intrigued by the idea of predictive processing, especially the suggestion that our muscle control is actually effectuated merely by ‘predicting’ our arm will be in a certain state and acting to minimize prediction error. However, my first reaction is to wonder how much content there is in this model.

Describing some kind of processing or control task in terms of predictions has a certain feel of universality to it. This is only a vague sense based on a book review, but I worry that invoking the predictive processing model to describe how our brains work is much like invoking the lambda calculus to describe how a particular computer functions. Namely, I worry that predictive processing is such a powerful model that virtually anything remotely plausible as a mechanism for processing sense data and effectuating control over our limbs could be fit into it, meaning it offers no real insight.

I mean, it was already apparent before this model came onto the scene that how we see even low-level visual data is affected by high-level classifications. The various figure-ground illusions make this point quite clearly. It was also already apparent that attention to one task (counting passes) could limit our ability to notice some other kind of oddity (a guy in a gorilla suit). However, it’s far from clear that the predictive processing model really adds anything to our understanding here.

Indeed, to even make sense of these examples we have to understand the relevant predictions as happening at a very abstract, highly context-dependent level, so that when we focus on the number of basketball passes in a game, a man in a gorilla suit walking past no longer counts as a sufficiently unpredicted event (or there is some other story about why paying one sort of attention suppresses this kind of notice). That’s fine, but allowing this level of abstraction and freedom in describing the thing to be predicted makes me wonder what couldn’t be suitably described in terms of this model.

The attempt to describe our imagination, e.g., our ability to picture a generic police officer in our minds, as utilizing the mental machinery that would generate a sense-data stream as a prediction to match against reality raises more questions. Obviously, the notion of matching must be a very high-level one, quite removed from the actual pictorial representation, if the mental image we conjure when we think of policemen is to count as matching the sense-data stream we experience when we encounter a policeman. Yet if the level at which we evaluate a predictive match is so abstract, why do we imagine a particular image when we think of a policeman and not merely whatever vague high-level abstracta we will judge to match when we actually view one? I’m sure there is a plausible story to tell here about invoking the same lower-level machinery we use to process sense-data when we imagine, and leveraging that same feedback, but, again, I’m left wondering what work predictive processing is really doing here.

More generally, I wonder to what extent all these predictions wouldn’t follow from just assuming, as we know to be true, that the brain processes information in ‘layers’, that there can be feedback between these layers, and that frequently the goal of our mental tasks is to predict events or control actions. It’s not even obvious to me that the theory’s claimed predictions, like the placebo effect, couldn’t equally well have been spun the other way had the effect been different, e.g., when your high-level processes predict that you won’t feel pain it will be particularly salient when you nevertheless do feel pain, so placebo pain meds should result in more people reporting pain.

But I haven’t read the book myself yet so maybe predictive processing has been suitably preciscified in the book so as to rule out many plausible ways the brain might have behaved and to clearly predict outcomes like the placebo effect. However, I wrote this post merely to raise the possibility that a paradigm like this can fail precisely because it is too good at describing phenomena. Hopefully, my worries are misplaced and someone can explain to me in the comments just what kind of plausible models of brain function this paradigm rules out.