Skepticism About Study Linking Trump Support To Racism

Or Dear God More Suspicious Statistical Analysis

So I see people posting this Vox article suggesting that Trump supporters, but not Clinton supporters, are racist, and I want to advise caution and urge people to actually read the original study.

Vox’s takeaway is,

All it takes to reduce support for housing assistance among Donald Trump supporters is exposure to an image of a black man.

Which they back up with the following description:

In a randomized survey experiment, the trio of researchers exposed respondents to images of either a white or black man. They found that when exposed to the image of a black man, white Trump supporters were less likely to back a federal mortgage aid program. Favorability toward Trump was a key measure for how strong this effect was.

If you look at the actual study it's chock-full of warning signs. The researchers explicitly did not find any statistically significant difference between Trump voters shown black aid recipients and those shown white aid recipients in their degree of support for the program, the degree of anger they felt, or the blame they assigned to those recipients. Given that this is the natural reading of Vox's initial description, it's already disappointing (Vox does elaborate to some extent, but not in a meaningfully informative way).

What the authors of the study did is ask for a degree of Trump support (along with many other questions such as liberal/conservative identification, vote preference, and racial resentment, giving the researchers a worryingly large range of potential analyses they could have conducted). Then they regressed the conditional effect of the black/white prompt on the level of blame, support, and anger against degree of Trump support, controlling for a whole bunch of other crap (though they do claim 'similar' results without controls), and they use some dubious claims about this regression to justify their conclusions. This should already raise red flags about researcher degrees of freedom, especially given the pretty unimpressive R^2 values.
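To see why having so many analysis choices matters, here is a back-of-the-envelope calculation (my own illustration, not anything from the study): if a researcher has k roughly independent ways to slice the data, the chance that at least one slice crosses p < .05 purely by luck grows quickly.

```python
# Illustration of researcher degrees of freedom (not from the study):
# probability of at least one false positive at alpha = .05 given k
# independent analysis choices.
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} analyses -> P(at least one p < .05) = {p_any:.2f}")
```

With 20 ways to slice the data, a spurious "significant" result somewhere is more likely than not.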

But what should really cause one to be skeptical is that the regression of Hillary support against the conditional effect of the black/white prompt shows a similar upward slope (visually the slope appears only slightly less steep for Hillary support than for Trump support), though at the extreme high end of Hillary support the 95% confidence interval just barely includes 0 while for Trump it just barely excludes it. Remember, as Andrew Gelman would remind us, the difference between significant and non-significant results isn't itself significant, and indeed the study didn't find a significant difference between how Hillary and Trump support interacted with the prompt in terms of degree of support for the program. In other words, if we take the study at face value, it suggests at only a slightly lower confidence level that increasing support for Hillary makes one more racist.
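Gelman's point is easy to demonstrate with made-up numbers (these estimates are purely illustrative, not taken from the study): two effect estimates can straddle the .05 line while the difference between them is nowhere near significant.

```python
from math import erfc, sqrt

# Purely illustrative numbers (not from the study): two slope estimates
# with the same standard error, one just "significant", one just not.
b_trump, se_trump = 0.20, 0.10
b_hillary, se_hillary = 0.18, 0.10

def two_sided_p(b, se):
    """Two-sided p-value for a normal test statistic b/se."""
    return erfc(abs(b / se) / sqrt(2))

p_trump = two_sided_p(b_trump, se_trump)        # ~0.046, "significant"
p_hillary = two_sided_p(b_hillary, se_hillary)  # ~0.072, "not significant"

# The difference between the two estimates is tiny relative to its
# standard error -- nowhere near significant.
p_diff = two_sided_p(b_trump - b_hillary, sqrt(se_trump**2 + se_hillary**2))
print(p_trump, p_hillary, p_diff)  # p_diff ~ 0.89
```

One estimate clears the threshold and the other doesn't, yet the data give essentially no evidence that the two effects differ at all.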

So what should we make of this strange-seeming result? Is it really the case that Hillary support also makes one more racist, but this just couldn't be captured by the survey? No, I think there is a more plausible explanation: the primary effect this study is really capturing is how willing one is to pick larger numbers to describe one's feelings. Yes, there may be a real effect of showing a black person rather than a white person on support for the program (though it shows up as non-significant on its own in this study), but if you are more willing to pick large numbers on the survey this effect looks larger for you, and thus correlates with degree of support for both Hillary and Trump.

To put this another way, imagine there are two kinds of people who answer the survey: emoters and non-emoters. Non-emoters keep all their answers away from the extremes, so the effect of the black/white prompt on them is numerically pretty small and they avoid expressing strong support for either candidate (support is only measured as a positive variable), while emoters will show both a large effect of the black/white prompt (because changes in their opinion produce larger numerical differences) and a greater likelihood of being a strong Trump or Hillary supporter.
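A quick simulation makes the worry concrete (all the numbers here are invented for illustration; nothing is estimated from the study's data): give every respondent the exact same underlying prompt effect, vary only their willingness to use extreme numbers, and the measured prompt effect still comes out larger among strong supporters of either candidate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Invented illustration, not the study's data. Every respondent has the
# SAME latent prompt effect; they differ only in "scale use" -- how
# willing they are to pick big numbers on a survey.
scale = rng.uniform(0.2, 1.0, n)        # response amplitude per person
shown_black = rng.integers(0, 2, n)     # randomized prompt assignment

latent_support = -0.5 * shown_black + rng.normal(0, 1, n)  # identical effect
observed_support = scale * latent_support  # emoters amplify their answers

# Candidate support (works the same for either candidate) also tracks
# willingness to pick big numbers.
candidate_support = scale + rng.normal(0, 0.1, n)

# Compare the measured prompt effect among strong vs. weak supporters.
hi = candidate_support > np.median(candidate_support)

def prompt_effect(mask):
    return (observed_support[mask & (shown_black == 1)].mean()
            - observed_support[mask & (shown_black == 0)].mean())

effect_strong, effect_weak = prompt_effect(hi), prompt_effect(~hi)
print(effect_strong, effect_weak)
```

The strong-supporter group shows a substantially bigger (more negative) measured prompt effect even though every simulated respondent has identical underlying attitudes, which is exactly the confound I'm worried about.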

This seems to me a far more plausible explanation than thinking that increasing Hillary support correlates with increasing racism, and I'm sure there are any number of other plausible alternative interpretations like this. Yes, the study did seem to suggest some difference between Trump and Hillary voters in the slopes of the blame and anger regressions (but not support for the program), but this may reflect nothing more pernicious than the unsurprising fact that conservative voters are more willing to express high levels of blame and anger toward recipients of government aid.

However, even if you don’t accept my alternative interpretation, the whole thing is sketchy as hell. Not only do the researchers have far too many degrees of freedom for my comfort (both in the choice of regressions to run and in the criteria for including subjects in the study), but the data itself was gathered via a super lossy survey process, creating the opportunity for all kinds of bias to enter. Moreover, the fact that all the results are about regressions is already pretty worrisome, as it is often far too easy to make strong-seeming statistical claims about regressions, a worry amplified by the fact that they don’t actually plot the data. I suspect there is far more wrong with this analysis than I’m covering here, so I’m hoping someone with more serious statistical chops than I have, such as Andrew Gelman, will analyze these claims.

But even if we take the study’s claims at face value, the most you could infer (and technically not even this) is that there are somewhat more racists among strong Trump supporters than among those with low support for Trump, which is a claim so unimpressive it certainly doesn’t deserve a Vox article, much less support the description given. Indeed, I think it borders on journalistically unethical to show the graphs displaying the correlation between increasing Trump support and the prompt effect but not the ones showing similar effects for Hillary support. However, I’m willing to believe this is the result of the generally low standards for science literacy in journalism and the unfortunate impression that statistical significance is some magical threshold.

Study: Trump fans are much angrier about housing assistance when they see an image of a black man

All it takes to reduce support for housing assistance among Trump supporters is exposure to an image of a black man. That’s the takeaway from a new study by researchers Matthew Luttig, Christopher Federico, and Howard Lavine, set to be published in Research & Politics.

Does Predictive Processing Explain Too Much?

A Request For Clarification On What Predictive Processing Rules Out

So Scott Alexander has an interesting book review up about Surfing Uncertainty, which I encourage everyone to read for themselves. However, most of the post is really an exploration of the “predictive processing” model of brain function. I’ll leave a more in-depth explanation of what this model is to Scott and just offer the following excerpt for those readers too lazy to click through.

Predictive processing begins by asking: how does this happen? By what process do our incomprehensible sense-data get turned into a meaningful picture of the world?

The key insight: the brain is a multi-layer prediction machine. All neural processing consists of two streams: a bottom-up stream of sense data, and a top-down stream of predictions. These streams interface at each level of processing, comparing themselves to each other and adjusting themselves as necessary.
As these two streams move through the brain side-by-side, they continually interface with each other. Each level receives the predictions from the level above it and the sense data from the level below it. Then each level uses Bayes’ Theorem to integrate these two sources of probabilistic evidence as best it can. This can end up a couple of different ways.

The upshot of these different ways is that when everything happens as predicted the higher levels remain unnotified of any change but that when there is a mismatch it draws attention from these higher layers. However, in some circumstances a strong prediction from a higher layer can cause lower layers to “rewrite the sense data to make it look as predicted.”

I admit that I’m intrigued by the idea of predictive processing, especially the suggestion that our muscle control is actually effectuated merely by ‘predicting’ our arm will be in a certain state and acting to minimize prediction error. However, my first reaction is to wonder how much content there is in this model.

Describing some kind of processing or control task in terms of predictions has a certain feel of universality to it. This is only a vague sense based on a book review, but I worry that invoking the predictive processing model to describe how our brains work is much like invoking the lambda calculus to describe how a particular computer functions. Namely, I worry that predictive processing is such a powerful model that virtually anything remotely plausible as a mechanism for processing sense data and effectuating control over our limbs could be fit into it, meaning it offers no real insight.

I mean, it was already apparent before this model came on the scene that how we see even low-level visual data is affected by high-level classifications. The various figure-ground illusions make this point quite clearly. It was also already apparent that attention to one task (counting passes) can limit our ability to notice some other kind of oddity (a guy in a gorilla suit). However, it’s far from clear that the predictive processing model really adds anything to our understanding here.

Indeed, to even make sense of these examples we have to understand the relevant predictions as happening at a very abstract, highly context-dependent level, so that when we are focused on counting basketball passes a man in a gorilla suit walking past no longer counts as a sufficiently unpredicted event (or we need some other story about why paying one sort of attention suppresses this kind of noticing). That’s fine, but allowing this level of abstraction/freedom in describing the thing to be predicted makes me wonder what couldn’t be suitably described in terms of this model.

The attempt to describe our imagination, e.g., our ability to picture a generic police officer in our minds, as utilizing the mental machinery that would generate a sense-data stream as a prediction to match against reality raises more questions. Obviously, the notion of matching must be a very high-level one, quite removed from the actual pictorial representation, if the mental image we conjure when we think of policemen is to be seen as matching the sense-data stream experienced when we encounter a policeman. Yet if the level at which we evaluate a predictive match is so abstract, why do we imagine a particular image when we think of a policeman, and not merely whatever vague high-level abstracta we will judge to match when we actually view one? I’m sure there is a plausible story to tell here about invoking the same lower-level machinery we use to process sense-data when we imagine, and leveraging that same feedback, but, again, I’m left wondering what work predictive processing is really doing.

More generally, I wonder to what extent all these predictions wouldn’t follow from just assuming, as we know to be true, that the brain processes information in ‘layers’, that there can be feedback between these layers, and that frequently the goal of our mental tasks is to predict events or control actions. It’s not even obvious to me that the claimed predictions of the theory, like the placebo effect, couldn’t have been spun the other way had the effect been different: e.g., when your high-level processes predict that you won’t feel pain, it will be particularly salient when you nevertheless do feel pain, so placebo pain meds should result in more people reporting pain.

But I haven’t read the book myself yet, so maybe predictive processing has been suitably precisified in the book so as to rule out many plausible ways the brain might have behaved and to clearly predict outcomes like the placebo effect. However, I wrote this post merely to raise the possibility that a paradigm like this can fail precisely because it is too good at describing phenomena. Hopefully my worries are misplaced and someone can explain to me in the comments just what kinds of plausible models of brain function this paradigm rules out.

Cool Cassini Pics

Cassini Just Sent Back Closest Ever Images of Saturn, And They’re Incredible

NASA’s Cassini probe is plunging to its death. The nuclear-powered spacecraft has orbited Saturn for 13 years, and sent back hundreds of thousands of images. The photos include close-ups of the gaseous giant, its famous rings, and its enigmatic moons – including Titan, which has its own atmosphere, and icy Enceladus, which has a subsurface ocean that could conceivably harbour microbial life.

Don’t Change The p-value Threshold

Personally, I think the proposal to ‘change’ the p-value threshold for significant results from .05 to .005 is a mistake. The only sense in which this proposal has any real bite is if journals and hiring committees respond by treating research that doesn’t meet p < .005 as less important, but all that does is strengthen the incentives for exactly the kind of behavior causing all the problems.

I’d much rather have a well-designed (ideally pre-registered) trial at p < .05 than a p < .005 result cherry-picked through after-the-fact choice of analysis. Rather than making the distinction between well-designed, appropriate methodology and dangerous, potentially misleading methodology more apparent, this proposal further obscures it, and it tells any scientist who was standing on principle to stop hoping their better methodology will be appreciated and do something to compete on p-value with papers published using problematic data analysis.

In particular, I think this kind of proposal doesn’t take sufficient account of the economics and incentives of research. Yes, p < .005 studies would be more convincing, but they also cost more (both in dollars and time), so by telling fledgling researchers they need p < .005 you force them to put all their eggs in one basket, making dubious data-analysis choices that much more tempting when a study fails to meet the threshold.
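To put rough numbers on that cost, here is a standard normal-approximation power calculation (my own illustration, not drawn from any particular study): for a two-sample comparison with a modest standardized effect size, the tighter threshold demands substantially more subjects for the same 80% power.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(alpha, power=0.80, effect_size=0.4):
    """Approximate sample size per group for a two-sample test with the
    given two-sided alpha, power, and standardized effect size d."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

n_05 = n_per_group(0.05)    # ~99 per group
n_005 = n_per_group(0.005)  # ~167 per group
print(n_05, n_005, n_005 / n_05)  # roughly 70% more subjects needed
```

Needing roughly 70% more subjects for the same power is exactly the kind of extra cost that pushes a researcher toward salvaging a failed study with questionable analysis rather than running a bigger one.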

What we need are more results-blind publication processes, in which journals accept papers based merely on a description of the experimental methodology, without knowledge of what the results found. That would both help combat many of these biases and truly evaluate researchers on their ability rather than their luck. Ideally such studies would be pre-accepted before the results were actually analyzed. Of course, there still needs to be a place for merely suggestive work that invites further research, but it should be regarded as such, without any particular importance assigned to p-values.

However, as these are only my brief immediate thoughts I’m quite open to potential counterarguments.