Politicians’ Incentives Regarding Facebook

God I hope not, but it sounds plausible.

The Peltzman Model of Regulation and the Facebook Hearings – Marginal REVOLUTION

If you want to understand the Facebook hearings it’s useful to think not about privacy or technology but about what politicians want. In the Peltzman model of regulation, politicians use regulation to trade off profits (wanted by firms) and lower prices (wanted by constituents) to maximize what politicians want: reelection.

Privacy Regulation Is Likely Unworkably Hard

Don't Count On The Government Regulating Facebook

Tyler Cowen provides a great analysis of one of the generic calls for regulating big data (and Facebook in particular). Put this together with his previous post pointing out that it would cost each of us roughly $80/year to use Facebook on a paid basis.¹ Taken together, they make a compelling case that there is no appetite in the US for serious laws protecting data privacy and that whatever laws we do get will probably do more harm than good.

To expand on Cowen’s point a little, let’s seriously consider what a world where the law granted individuals broad rights to control how their information was kept and used would actually look like. That would be a world where it would suddenly be very hard to conduct a little poll on your blog. Scott Alexander came up with some interesting hypotheses regarding brain functioning and transgender individuals by asking his readers to fill out a survey. But doing that survey meant collecting personal and medical information about his readers (their gender identification, age, other mental health diagnoses) and storing it for analysis. He certainly wouldn’t have bothered to do any such thing if he had been required to document regulatory compliance, include a mechanism for individuals to request their data be removed, or navigate complex consent and disclosure rules (now you’ve got to store emails and passwords, making things worse, and you risk liability if you become unable to delete info). And what about the concerned parent afraid that children in her town are getting sick too frequently? Will it now be so difficult for her to post a survey that we never discover the presence of environmental carcinogens?
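To make the compliance burden concrete, here is a minimal sketch of what a “right to deletion” mechanism forces even a tiny hobby survey to do. The table and field names are hypothetical, not drawn from any real regulation or from Alexander’s actual survey; the point is just that supporting deletion requires keeping a stable identifier alongside the otherwise-anonymous answers, which is strictly more personal data than the survey needed.

```python
import sqlite3

# Hypothetical schema: an email is kept ONLY so deletion requests can be honored,
# which means the survey now stores more personal data than it otherwise would.
conn = sqlite3.connect("survey.db")
conn.execute("""CREATE TABLE IF NOT EXISTS responses (
                    email TEXT PRIMARY KEY,   -- kept solely to support deletion
                    gender_identity TEXT,
                    age INTEGER,
                    diagnoses TEXT)""")

def submit(email, gender_identity, age, diagnoses):
    """Store one survey response, keyed by the respondent's email."""
    conn.execute("INSERT OR REPLACE INTO responses VALUES (?, ?, ?, ?)",
                 (email, gender_identity, age, diagnoses))
    conn.commit()

def delete_my_data(email):
    """Handle a removal request; backups and exported analyses are untouched."""
    conn.execute("DELETE FROM responses WHERE email = ?", (email,))
    conn.commit()
```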

One is tempted to respond that these cases are obviously different. These aren’t people using big data to track individuals but people choosing to share non-personally-identifiable data on a survey. But how can we put that into a law and make it so obvious that bloggers don’t feel any need to consult attorneys before running a survey?

One might try to hang one’s hat on the fact that the surveys I described don’t record your email address or name.² However, if you don’t want repeated voting to be totally trivial, that means recording an IP address. Ask enough questions and you’ll end up deanonymizing everyone, and there is always a risk (oops, turns out there is only one 45-year-old Broglida). On the other hand, if it’s ok as long as you don’t deliberately request real-world identifying information, the regulation is toothless: Google doesn’t really care what your name is; they just want your age, politics, click history, etc.
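To see how quickly “harmless” answers deanonymize people, here is a small sketch (hypothetical data and field names, not any real survey) that counts how many respondents become uniquely identifiable as more fields are combined:

```python
from collections import Counter

# Hypothetical survey rows: no names or emails, just innocuous-looking fields.
responses = [
    {"age": 45, "state": "FL", "gender": "M", "diagnosis": "ADHD"},
    {"age": 45, "state": "FL", "gender": "F", "diagnosis": "none"},
    {"age": 32, "state": "CA", "gender": "M", "diagnosis": "none"},
    # ... thousands more rows in a real survey
]

def uniqueness(rows, fields):
    """Fraction of respondents whose combination of `fields` is unique."""
    combos = Counter(tuple(r[f] for f in fields) for r in rows)
    unique = sum(1 for r in rows if combos[tuple(r[f] for f in fields)] == 1)
    return unique / len(rows)

# Each added field makes a larger share of respondents uniquely identifiable.
for fields in (["state"], ["state", "age"], ["state", "age", "gender"]):
    print(fields, uniqueness(responses, fields))
```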

Well, maybe it should only be about passively collected data. That’s damn hard to define already (why is a click on an AJAX link in a form different from a click on a link to a story?) and risks making normal HTTP server logs illegal. Besides, it’s a huge benefit to consumers that startups are able to see which design or UI visitors prefer. So checking whether users find a new theme or new video controls preferable (say by serving it to 50% of them and seeing if they spend more time on the site) shouldn’t require looping in corporate counsel, or we make innovation and improvement hugely expensive. Moreover, users with special needs and other niche interests are likely to particularly suffer if there is no low-cost, hassle-free way of trying out alternate page versions and evaluating user response.
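For concreteness, the kind of experiment at stake here is about as simple as web code gets. Here is a minimal sketch (all names hypothetical) of hash-based 50/50 bucketing and a crude time-on-site comparison of the sort a tiny startup might run:

```python
import hashlib

def variant_for(visitor_id: str, experiment: str = "new-theme") -> str:
    """Deterministically assign a visitor to 'control' or 'treatment'."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

def mean_time_on_site(events):
    """events: iterable of (visitor_id, seconds_on_site) pairs."""
    totals = {"control": [0.0, 0], "treatment": [0.0, 0]}
    for visitor_id, seconds in events:
        bucket = totals[variant_for(visitor_id)]
        bucket[0] += seconds  # accumulate time
        bucket[1] += 1        # count visitors
    return {v: (s / n if n else 0.0) for v, (s, n) in totals.items()}

# Example: mean_time_on_site([("anon-123", 41.0), ("anon-456", 97.5)])
```

Note that even this trivial experiment requires a per-visitor identifier (a cookie, login, or IP-derived ID), which is exactly the sort of thing a broad data-ownership law would sweep in.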

Ultimately, we don’t really want the world that we could get by regulating data ownership. It’s not the world in which Facebook doesn’t have scary power. It’s the world where companies like Facebook have more scary power, because they have the resources to hire legal counsel and lobby for regulatory changes to ensure their practices stay technically legal, while startups and potential competitors don’t have those advantages. Not only do we not want the world we would get by passing data ownership regulations, I don’t think most people even have a clear idea why it would be a good thing. People just have a vague feeling of discomfort with companies like Facebook, not a clear conception of a particular harm to avoid, and that’s a disastrous situation for regulation.

Having said this, I do fear the power of companies like Facebook (and even governmental entities) to blackmail individuals based on the information they are able to uncover with big data. However, I believe the best response to this is more openness and, ideally, an open-standards-based social network that doesn’t leave everything in the hands of one company. Ultimately, that will mean less privacy and less protection for our data, but that’s why specifying the harm you fear really matters. If the problem is, as I fear, the unique leverage that being the sole possessor of this kind of data provides Facebook and/or governments, then the answer is to make sure they aren’t the sole possessor of anything.

Zeynep Tufekci’s Facebook solution – can it work? – Marginal REVOLUTION

Here is her NYT piece, I’ll go through her four main solutions, breaking up, paragraph by paragraph, what is one unified discussion: What would a genuine legislative remedy look like? First, personalized data collection would be allowed only through opt-in mechanisms that were clear, concise and transparent.


  1. Now, while a subscription-funded Facebook would surely be much, much cheaper, I think Cowen is completely correct when he points out that any fee-based system would hugely reduce the user base and therefore the value of using Facebook. Indeed, almost all of the benefit Facebook provides over any random blogging platform is simply that everyone is on it. Personally, I favor an open social graph, but this is even less protective of personal information. 
  2. Even that is pretty limiting. For instance, it prevents running any survey that wants to be able to do a follow-up or simply email people their individual results. 

Ambiguity, Silence and Complicity

How Good People Make It Impossible To Discuss Race, Gender and Religion

Listening to the Klein-Harris discussion about the Charles Murray controversy affected me pretty intensely. I was struck by how charitable, compassionate and reasonable Klein was in his interaction with Harris. Klein honestly didn’t think Harris was a bad guy or anything, just someone who was incorrect on a factual issue and, because of the same kind of everyday biases we all have, insufficiently responsive to the broader context. Indeed, it seemed that Klein even saw Murray himself as merely misguided and perhaps inappropriately fixated, not fundamentally evil. How, then, to square this with the fact that Klein’s articles (both the ones he wrote and the ones he edited) unquestionably played a huge role in many people concluding that Harris was beyond the pale and the kind of racist scum that right-thinking people shouldn’t even listen to?

Unlike Harris, I don’t think Klein was being two-faced or deliberately malicious in what he wrote about Harris. Indeed, what Klein did is unfortunately all too common among well-intentioned individuals on the left, and academics in particular (and something I myself have been guilty of). Klein spoke up to voice his view about a position he felt was wrong or mistaken about race, but then simply chose to keep silent rather than explicitly standing up to disclaim the views of those who would moralize the discussion. This can seem harmless, because in other contexts one can simply demur from voicing an opinion about controversial points which might get one in trouble, but key ambiguities in how we understand notions like racist/sexist/etc. and accusations of bias or insufficient awareness of (or concern for) the plight of underprivileged groups have the effect of turning silence into complicity.

The danger is that someone in Klein’s position faces strong pressure from certain factions on the left not to defend Murray’s views, and those of his supporters, as being within the realm of appropriate discussion and debate. Indeed, since Klein thinks that Murray is not only wrong but wrong in a dangerous and potentially harmful way, it’s understandable that he would see no reason to throw himself in front of the extremists who don’t merely want to say Harris is mistaken but believe he should be subject to the same ostracism that we apply to members of the KKK. So Klein simply presents his criticisms of Harris and Murray and calls attention to the ways in which he thinks their views are not only wrong but actively harmful in a way that resonates with past racial injustices, but doesn’t feel the need to step forward and affirmatively state his belief that Harris is probably just making a mistake for understandable human reasons, not engaging in some kind of thought crime.

In other contexts one could probably just stand aside and not engage the issue, but when it comes to race and racism there is a strong underlying ambiguity as to whether one is saying a claim is racist in the sense of being harmful to racial minorities or in the sense that believing it deserves moral condemnation. Similarly, there is a strong ambiguity between claiming that someone is biased in the sense of having the universal human failing of being more sympathetic to situations they can relate to and claiming they are biased in the sense of disliking minorities. These tend to run together, since once everyone agrees something is racist, e.g., our punitive drug laws, then only those who don’t mind being labeled racists tend to support it, even though there are plenty of well-intentioned reasons to have those beliefs, e.g., many black pastors were initially supportive of the harsh drug laws.

Unfortunately, the resulting effect is that by failing to stand up and actively deny that one is calling for moral condemnation of those with the wrong views on questions of race (or gender or…), one ends up implicitly encouraging such condemnation.

Harris and Klein

Double Charity Failure

I’m generally a defender of Harris, and I believe Vox (under Klein) was uncharitable to Murray and Harris. Even in this interview I think Klein (probably unintentionally) suggests that we should take Murray’s arguments less seriously because of his political aims and implied motivations.

However, Klein is dead on when he accuses Harris of not being willing to extend the same charity to others that he wants extended to him. Disagreements are hard and understanding other people is very difficult, and Harris (like all of us) does have trouble extending charity when an issue feels close to a personal attack on him, or understanding how other people’s errors may be motivated by a similar emotional response to prior unfairness.

My sense is that Klein’s real position is a reasonable one: that Murray is very wrong on the science in a way that is harmful and that Harris gets it wrong because of the issue above. However, I think Harris is absolutely right in criticizing Klein for speaking in ways he should know are likely to lead to extreme moral condemnation.

Klein should know that his articles (and the articles in Vox while he was editor) will be interpreted by the public as going far beyond the mild criticism that Harris makes the same kind of unremarkable mistake we all make when talking about tough political issues. I don’t think Klein is being malicious here, and Harris is uncharitable in assuming he is, but I think Klein should be faulted for not being much clearer to his readers that he isn’t suggesting Harris is beyond the realm of reasonable disagreement, merely that he thinks Harris is well-intentioned but wrong, in a way that happens to be harmful.

In short, Harris and Klein both fall short of the ideal of charity, and both could do a great deal more to communicate that well-intentioned, good people can disagree intensely, and even think another person’s views are harmful, without having to think the other is a bad person.

Waking Up Podcast #123 – Identity & Honesty | Sam Harris

In this episode of the Waking Up podcast, Sam Harris speaks with Ezra Klein, Editor-at-Large for Vox Media, about racism, identity politics, intellectual honesty, and the controversy over his podcast with Charles Murray (Waking Up #73).

More Confusion About Gender Equality

It's Never Been About Numerical Equality

So apparently the Swedish government is going to pay women to edit Wikipedia out of concern that Wikipedia contributions are heavily biased in favor of men. This misunderstands what’s desirable about gender equality in a serious way. While this may be nothing more than harmless idiocy, it is an important warning about the need to take a hard look at programs designed to increase gender equity.

There is no intrinsic good to having the same number of women editing Wikipedia (or engaged in any particular career or activity) as men. Rather, there is a harm when people are denied the ability to pursue their passion or interest on account of bias or stereotypes about their gender.

Now, if one believes that some activity discriminates against interested women, one might think that artificially inducing women to participate (affirmative action, or even payment) is an effective long-term strategy to change attitudes, e.g., working with women will change the attitudes of men in the field and place women in positions of power so future women won’t face the same discrimination. However, Wikipedia actively encourages using unidentifiable user names, doesn’t require gender identification, and there is no evidence of a toxic bro-culture among frequent editors. Thus, there is no reason to think injecting more female editors into Wikipedia will reduce the amount of discrimination faced by women in the future. Indeed, even if you believe that women are underrepresented on Wikipedia because of discrimination or stereotyping (e.g., the stereotypes that women aren’t techie or aren’t experts), paying women to edit Wikipedia wastes money that could have been used to combat that actual harm.

Moreover, there is no particular evidence that the edits made by frequent Wikipedia editors are likely to be somehow slanted against women or to otherwise convey a bias that this kind of program would be expected to rectify. Indeed, paying members of particular groups to edit Wikipedia is an assault on Wikipedia’s reliability. While I’m not particularly concerned about Swedish women, the underlying principle matters: no one should be able to pay to make Wikipedia more reflective of the views of a certain identity group. I mean, what happens to information about the Armenian genocide if Turkey decides that it should pay Turks to increase their representation on Wikipedia?

But why care about this at all? I mean so what if the Swedes blow some money stupidly? It’s not like men are suffering and need to be protected from the injustice of it all.

The reason we should care is that it shows in a clear and incontrovertible fashion how easily well-intentioned concern about gender equity can go off the rails. Given the potential blowback and the murkiness of the issues, there is a tendency to just take for granted that programs which claim to be about improving gender equity are at least plausibly targeted at that end. However, this proves that even in the most public circumstances it’s dangerously easy for people to conflate ensuring numerical equality with increasing gender equality. Given that in many circumstances the risk isn’t merely wasting money but, as with affirmative action and quota programs, actively making things worse (e.g., by making people suspect female colleagues didn’t really earn their positions), we need to be far more careful that such programs are doing something worth those costs.

Not Enough Women at Wikipedia? | EconLog | Library of Economics and Liberty

by Pierre Lemieux …women need state encouragement to do some of the one million edits that are made on Wikipedia every day. Presumably, this will promote the liberation of women. The Swedish government, or at least its foreign minister, wants…

A Norm Against Partisan Smearing?

Reading Reich’s book (Who We Are and How We Got Here) really drives home to me just how tempting it is to collapse into tribal cheering (e.g., cheering on your genes/genetic history/etc. as the best) and how important our norms against racism are in limiting this.

It makes me wonder if we couldn’t develop similarly strong norms against cheering on your political/social tribe in the same manner. It’s a more delicate situation, since we need to preserve the ability to disagree and offer useful criticism. However, it still seems to me that we might be able to cultivate a norm which strongly disapproved of trying to make the other side look bad or of implying they are improperly motivated or biased.

I mean, of course, we won’t actually get rid of hypocrisy or self-serving beliefs, but if alleging bad faith on the part of other ideologies required the same kind of extreme caution that we demand for claims about racial differences, it might make a big difference.

Failing Business 101

The Idiotic Idea Of Apple Competing With Intel

There is a rumor going around that Apple may try to replace the Intel chips in its computers with its own in-house chips. Now, it’s certainly conceivable that Apple will offer a cheap low-end laptop based on the chips it uses for the iPhone and iPad. Indeed, that’s probably a great opportunity. However, the idea that Apple might switch completely to its own in-house silicon is such a bad business idea that I have to assume it won’t try.

I mean, suppose for a moment that Apple thought it could outdo Intel and AMD in designing high-end processors. What should Apple do? Well, it could design processors in-house just for its own computers, limiting its potential profits and assuming substantial risk if it turns out to be wrong. Alternatively, it could spin off a new processor design company (perhaps with some kind of cooperation agreement) which could sell its processors to all interested parties while limiting Apple’s risk exposure. I think the latter option is clearly preferable, and since it seems pretty implausible that Intel and AMD are so badly run as to make such a venture attractive, it would be even less attractive to try to compete with Intel in-house.

Now why doesn’t this same argument apply to Apple’s choice to design its own ARM chips for the iPhone? First, Apple was able to buy state-of-the-art IP to start from, which wouldn’t be available if it were designing a high-performance desktop/laptop CPU. Second, because of the high degree of integration in mobile devices, there were real synergies Apple could realize by designing the chip and the phone in combination, e.g., implementing custom hardware to support various iPhone functions. For desktops and high-end laptops there are no such pressures. There is plenty of space to put any dedicated hardware in another chip, and no special Apple-specific features that would be particularly valuable to implement in the CPU.

On the other hand, a cheap(er) laptop that could run iPad apps could be a great deal. Just don’t expect Apple to replace Intel chips in its high-end systems.

Apple is actively working on Macs that replace Intel CPUs

A new Bloomberg report claims Apple is working on its own CPUs for the Mac, with the intent to ultimately replace the Intel chips in its computers with those it designs in-house. According to Bloomberg’s sources, the project (which is internally called Kalamata) is in the very early planning stages, but it has been approved by executives at the company.