If you want to understand the Facebook hearings it’s useful to think not about privacy or technology but about what politicians want. In the Peltzman model of regulation, politicians use regulation to trade off profits (wanted by firms) against lower prices (wanted by constituents) in order to maximize what politicians themselves want: reelection.
Tyler Cowen provides a great analysis of one of the generic calls for regulating big data (and Facebook in particular). Put that together with his previous post pointing out that it would cost us each ~$80/year to use facebook on a paid basis1, and the two make a compelling case that there is no appetite in the US for serious laws protecting data privacy and that whatever laws we do get will probably do more harm than good.
To expand on Cowen’s point a little, let’s seriously consider for a moment what a world where the law granted individuals broad rights to control how their information was kept and used would actually look like. That would be a world where it would suddenly be very hard to conduct a little poll on your blog. Scott Alexander came up with some interesting hypotheses regarding brain functioning and transgender individuals by asking his readers to fill out a survey. But doing that survey meant collecting personal and medical information about his readers (their gender identification, age, other mental health diagnoses) and storing it for analysis. He certainly wouldn’t have bothered to do any such thing if he had been required to document regulatory compliance, include a mechanism for individuals to request that their data be removed, or navigate complex consent and disclosure rules (now you’ve got to store emails and passwords, making things worse, and you risk liability if you become unable to delete info). And what about the concerned parent afraid that children in her town are getting sick too frequently? Will it now be so difficult for her to post a survey that we won’t discover the presence of environmental carcinogens?
One is tempted to respond that these cases are obviously different: these aren’t people using big data to track individuals but people choosing to share non-personally identifiable data on a survey. But how can we put that into a law and make it so obvious that bloggers don’t feel any need to consult attorneys before running a survey?
One might try to hang one’s hat on the fact that the surveys I described don’t record your email address or name2. However, if you don’t want repeated voting to be totally trivial, that means recording an IP address. And with enough questions you’ll end up deanonymizing everyone anyway; there is always a risk (oops, turns out there is only one 45 year old Broglida). On the other hand, if the rule is that collection is ok as long as you don’t deliberately request real-world identifying information, the regulation is toothless against google: google doesn’t really care what your name is, they just want your age, politics, click history etc.
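The deanonymization worry can be made concrete with a toy sketch (all values below are made up for illustration): even a survey that records only a few innocuous attributes can end up with respondents whose combination of answers is unique, so anyone who already knows those facts about a person can find their row.

```python
from collections import Counter

# Toy "anonymous" survey responses: (age, region, profession).
# No name or email is collected, yet some attribute combinations
# occur only once, which is all deanonymization needs.
responses = [
    (34, "CA", "teacher"),
    (34, "CA", "teacher"),
    (45, "FL", "lobbyist"),
    (29, "NY", "teacher"),
]

counts = Counter(responses)
unique_rows = [r for r in responses if counts[r] == 1]
print(f"{len(unique_rows)} of {len(responses)} respondents are uniquely identifiable")
# → 2 of 4 respondents are uniquely identifiable
```

With realistic surveys the attribute space is far larger, so the fraction of unique rows climbs quickly as questions are added.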
Well, maybe it should only be about passively collected data. That’s damn hard to define already (why is a click on an ajax link in a form different than a click on a link to a story?) and risks making normal http server logs illegal. Besides, it’s a huge benefit to consumers that startups are able to see which design or UI visitors prefer. Checking whether users find a new theme or video controls preferable (say by serving it to 50% of them and seeing if they spend more time on the site) shouldn’t require looping in corporate counsel, or we make innovation and improvement hugely expensive. Moreover, users with special needs and other niche interests are likely to suffer particularly if there is no low cost, hassle free way of trying out alternate page versions and evaluating user response.
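The kind of lightweight experiment described above can be sketched in a few lines (a hypothetical illustration, not any real site’s code): hash each visitor id to get a stable 50/50 split, then compare aggregate time-on-site per variant, with no names or emails involved anywhere.

```python
import hashlib

def bucket(visitor_id: str) -> str:
    """Stable 50/50 assignment: the same visitor always sees the same variant."""
    h = int(hashlib.sha256(visitor_id.encode()).hexdigest(), 16)
    return "new_theme" if h % 2 == 0 else "old_theme"

# Aggregate time-on-site (seconds) per variant from anonymous visit logs.
times = {"new_theme": [], "old_theme": []}
for visitor_id, seconds in [("v1", 120), ("v2", 95), ("v3", 200), ("v4", 80)]:
    times[bucket(visitor_id)].append(seconds)

for variant, ts in times.items():
    if ts:
        print(variant, sum(ts) / len(ts))
```

The point of hashing rather than storing an assignment table is that the site keeps no per-visitor record at all; the variant is recomputable from the id alone.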
Ultimately, we don’t really want the world that we could get by regulating data ownership. It’s not the world in which facebook doesn’t have scary power. It’s the world where companies like facebook have more scary power, because they have the resources to hire legal counsel and lobby for regulatory changes to ensure their practices stay technically legal, while startups and potential competitors don’t have those advantages. Not only do we not want the world we would get by passing data ownership regulations, I don’t think most people even have a clear idea why it would be a good thing. People just have a vague feeling of discomfort with companies like facebook, not a clear conception of a particular harm to avoid, and that’s a disastrous situation for regulation.
Having said this, I do fear the power of companies like facebook (and even governmental entities) to blackmail individuals based on the information they are able to uncover with big data. However, I believe the best response to this is more openness and, ideally, an open, standards based social network that doesn’t leave everything in the hands of one company. Ultimately, that will mean less privacy and less protection for our data, but that’s why specifying the harm you fear really matters. If the problem is, as I fear, the unique leverage that being the sole possessor of this kind of data gives facebook and/or governments, then the answer is to make sure they aren’t the sole possessor of anything.
Here is her NYT piece. I’ll go through her four main solutions, breaking up, paragraph by paragraph, what is one unified discussion: “What would a genuine legislative remedy look like? First, personalized data collection would be allowed only through opt-in mechanisms that were clear, concise and transparent.”
Now, while a subscription funded facebook would surely be much, much cheaper, I think Cowen is completely correct when he points out that any fee based system would hugely reduce the user base and therefore the value of using facebook. Indeed, almost all of the benefit facebook provides over any random blogging platform is simply that everyone is on it. Personally, I favor an open social graph, but this is even less protective of personal information. ↩
Even that is pretty limiting. For instance, it prevents running any survey that wants to be able to do a follow-up or simply email people their individual results. ↩
EDIT: I was almost entirely wrong about this. See retraction.
So at the moment there is a trend for women on social media to post ‘me too’ to indicate they have been a victim of sexual harassment or assault. The originator described the idea, saying:
If all the women who have been sexually harassed or assaulted wrote ‘Me too’ as a status, we might give people a sense of the magnitude of the problem
While I understand the attraction of trying to fix things by posting on social media, this craze is about as useful as trying to fix racism by posting facebook updates saying ‘racism is bad’, making it at best silly. At worst, it further discourages women from entering male dominated areas (STEM, CS, congressional politics) by increasing the level of fear and anxiety felt about harassment, with potentially other negative rebound effects.
Presumably, the idea is that, by illustrating the number of women affected, people will realize just how big a problem this is, and the extra resources or attention will help rectify the situation. However, one would hardly expect this to convince either those who resist the idea that this is a serious problem or those who accept it but don’t realize their own actions are part of the problem.
Of course, one might respond that the true point is to convince those who have been victims of sexual harassment or assault that their experience isn’t an isolated case and that it’s a problem shared by many other women. Unfortunately, the mere fact that a large number of other women post ‘me too’ just isn’t a good measure of the magnitude of the problem. Knowing that many people have at some point experienced something they are willing to construe as sexual harassment/assault, when doing so lets them feel they are making a difference and gaining social approval, isn’t very informative. Heck, if I were female and believed this would help, I would lie and say ‘me too’ even if I hadn’t experienced any such thing, just to help make a difference.
So even other victims of sexual harassment/assault shouldn’t raise their estimate of the frequency of such behavior based on this information, provided they at least realize that many other women out there believe sexual assault/harassment is a problem that deserves more attention. And they surely must realize that just to process and understand this new evidence. After all, provided many other women believe that sexual harassment/assault deserves more attention, they would be inclined to post ‘me too’ even if they had only experienced a single moment of harassment once in their life (the people posting believe they are helping and want to be part of the solution). I don’t believe that is what is happening, but the point is that seeing other people post ‘me too’ should leave your prior about how frequent and serious the problem is roughly where it was.
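The point about priors can be put in simple Bayesian terms. The numbers below are purely illustrative assumptions, not data: if a wave of ‘me too’ posts is almost as likely when harassment is comparatively rare but widely seen as deserving attention as when it is very frequent, then observing the wave barely moves the posterior.

```python
# P(harassment is very frequent) before seeing the posts.
prior_frequent = 0.5
# Illustrative likelihoods of observing a large wave of 'me too' posts.
p_wave_if_frequent = 0.9
p_wave_if_rare = 0.8  # people still post to signal that the issue matters

# Bayes' rule: P(frequent | wave).
p_wave = prior_frequent * p_wave_if_frequent + (1 - prior_frequent) * p_wave_if_rare
posterior = prior_frequent * p_wave_if_frequent / p_wave
print(round(posterior, 3))  # → 0.529, barely above the 0.5 prior
```

The update only becomes large if the wave would have been much less likely under the “rare” hypothesis, which is exactly the premise the argument above disputes.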
Ultimately, then, this leaves the whole trend down in the messy world of emotional effects, where I fear there are more potentially harmful emotional effects (discouraging or scaring women) than potential beneficial ones.
To be clear, I do think it could be helpful if women posted descriptions of their individual experiences with harassment or assault and described how those experiences affected them. Seeing people describe the frequency, severity and emotional harm is at least plausibly the sort of thing that could convince skeptics, but this is something that women are going to be, understandably, reluctant to do. What I’m objecting to here is the idea that just by saying ‘me too’ and nothing else one is likely to make things better.