Politician’s Incentives Regarding Facebook

God I hope not but sounds plausible.

The Peltzman Model of Regulation and the Facebook Hearings – Marginal REVOLUTION

If you want to understand the Facebook hearings it’s useful to think not about privacy or technology but about what politicians want. In the Peltzman model of regulation, politicians use regulation to trade off profits (wanted by firms) and lower prices (wanted by constituents) to maximize what politicians want, reelection.

Privacy Regulation Is Likely Unworkably Hard

Don't Count On The Government Regulating Facebook

Tyler Cowen provides a great analysis of one of the generic calls for regulating big data (and Facebook in particular). Put it together with his previous post pointing out that it would cost us each ~$80/year to use Facebook on a paid basis1 and the two make a compelling case that there is no appetite in the US for serious laws protecting data privacy and that whatever laws we do get will probably do more harm than good.

To expand on Cowen’s point a little, let’s seriously consider for a moment what a world would look like where the law granted individuals broad rights to control how their information was kept and used. That would be a world where it would suddenly be very hard to conduct a little poll on your blog. Scott Alexander came up with some interesting hypotheses regarding brain functioning and transgender individuals by asking his readers to fill out a survey. But doing that survey meant collecting personal and medical information about his readers (their gender identification, age, other mental health diagnoses) and storing it for analysis. He certainly wouldn’t have bothered to do any such thing if he had been required to document regulatory compliance, include a mechanism for individuals to request their data be removed, or navigate complex consent and disclosure rules (now you’ve got to store emails and passwords, making things worse, and you risk liability if you become unable to delete info). And what about the concerned parent afraid that children in her town are getting sick too frequently? Will it now be so difficult for her to post a survey that we won’t discover the presence of environmental carcinogens?

One is tempted to respond that these cases are obviously different. These aren’t people using big data to track individuals but people choosing to share non-personally-identifiable data on a survey. But how can we put that into a law and make it so obvious that bloggers don’t feel any need to consult attorneys before running a survey?

One might try to hang one’s hat on the fact that the surveys I described don’t record your email address or name2. However, if you don’t want repeated voting to be totally trivial, that means recording an IP address. Ask enough questions and you’ll end up deanonymizing everyone, and there is always a risk (oops, turns out there is only one 45 year old Broglida). On the other hand, if it’s ok as long as you don’t deliberately request real-world identifying information, the regulation is toothless — Google doesn’t really care what your name is; they just want your age, politics, click history, etc.
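To make the re-identification worry concrete, here is a minimal sketch (hypothetical data and field names, purely illustrative) of how a handful of innocuous survey fields plus an IP-derived region can shrink the anonymity set down to a single person:

```python
from collections import Counter

# Hypothetical survey rows: no names or emails were collected, just an
# IP-derived region and a few demographic questions.
responses = [
    {"region": "Tampa, FL", "age": 45, "gender": "M", "diagnosis": "ADHD"},
    {"region": "Tampa, FL", "age": 29, "gender": "F", "diagnosis": "none"},
    {"region": "Austin, TX", "age": 45, "gender": "M", "diagnosis": "ADHD"},
    # ... imagine thousands more rows ...
]

# Group respondents by the combination of "harmless" fields.
def quasi_id(row):
    return (row["region"], row["age"], row["gender"])

counts = Counter(quasi_id(r) for r in responses)

# Any combination that occurs exactly once is effectively de-anonymized:
# if you know the only 45-year-old male reader in Tampa, you now also
# know his diagnosis.
unique = [combo for combo, n in counts.items() if n == 1]
print(f"{len(unique)} of {len(counts)} attribute combinations pick out exactly one respondent")
```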

Well, maybe it should only be about passively collected data. That’s damn hard to define already (why is a click on an ajax link in a form different than a click on a link to a story?) and risks making normal HTTP server logs illegal. Besides, it’s a huge benefit to consumers that startups are able to see which design or UI visitors prefer. Checking whether users find a new theme or new video controls preferable (say by serving it to 50% of them and seeing if they spend more time on the site, as sketched below) shouldn’t require looping in corporate counsel, or we make innovation and improvement hugely expensive. Moreover, users with special needs and other niche interests are likely to particularly suffer if there is no low-cost, hassle-free way of trying out alternate page versions and evaluating user response.
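For a sense of scale, the experiment at issue is tiny; a sketch along these lines (hypothetical names and numbers throughout) is all a startup needs to check whether a new theme keeps visitors around longer, which is why routing every such test through counsel would be so costly:

```python
import random
from statistics import mean

def assign_variant(visitor_id: int) -> str:
    """Deterministically serve the new theme to roughly half of visitors."""
    return "new_theme" if visitor_id % 2 == 0 else "old_theme"

# Hypothetical log of (visitor_id, seconds spent on site) pairs.
random.seed(1)
visits = [(v, max(0.0, random.gauss(180, 60))) for v in range(10_000)]

time_on_site = {"new_theme": [], "old_theme": []}
for visitor_id, seconds in visits:
    time_on_site[assign_variant(visitor_id)].append(seconds)

for variant, samples in time_on_site.items():
    print(f"{variant}: {mean(samples):.1f}s average across {len(samples)} visitors")
```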

Ultimately, we don’t really want the world that we could get by regulating data ownership. It’s not the world in which Facebook doesn’t have scary power. It’s the world where companies like Facebook have more scary power, because they have the resources to hire legal counsel and lobby for regulatory changes to ensure their practices stay technically legal while startups and potential competitors don’t have those advantages. Not only do we not want the world we would get by passing data ownership regulations, I don’t think most people even have a clear idea why it would be a good thing. People just have a vague feeling of discomfort with companies like Facebook, not a clear conception of a particular harm to avoid, and that’s a disastrous situation for regulation.

Having said this, I do fear the power of companies like Facebook (and even governmental entities) to blackmail individuals based on the information they are able to uncover with big data. However, I believe the best response to this is more openness and, ideally, an open-standards-based social network that doesn’t leave everything in the hands of one company. Ultimately, that will mean less privacy and less protection for our data, but that’s why specifying the harm you fear really matters. If the problem is, as I fear, the unique leverage that being the sole possessor of this kind of data provides to Facebook and/or governments, then the answer is to make sure they aren’t the sole possessor of anything.

Zeynep Tufekci’s Facebook solution – can it work? – Marginal REVOLUTION

Here is her NYT piece, I’ll go through her four main solutions, breaking up, paragraph by paragraph, what is one unified discussion: What would a genuine legislative remedy look like? First, personalized data collection would be allowed only through opt-in mechanisms that were clear, concise and transparent.


  1. Now, while a subscription-funded Facebook would surely be much, much cheaper, I think Cowen is completely correct when he points out that any fee-based system would hugely reduce the user base and therefore the value of using Facebook. Indeed, almost all of the benefit Facebook provides over any random blogging platform is simply that everyone is on it. Personally, I favor an open social graph, but that is even less protective of personal information. 
  2. Even that is pretty limiting. For instance, it prevents running any survey that wants to be able to do a follow-up or simply email people their individual results. 

Ambiguity, Silence and Complicity

How Good People Make It Impossible To Discuss Race, Gender and Religion

Listening to the Klein-Harris discussion about the Charles Murray controversy affected me pretty intensely. I was struck by how charitable, compassionate and reasonable Klein was in his interaction with Harris. Klein honestly didn’t think Harris was a bad guy or anything, just someone who was incorrect on a factual issue and, because of the same kind of everyday biases we all have, insufficiently responsive to the broader context. Indeed, it seemed that Klein even saw Murray himself as merely misguided and perhaps inappropriately fixated, not fundamentally evil. How then to square this with the fact that Klein’s articles (both the ones he wrote and the ones he served as editor for) unquestionably played a huge role in many people concluding that Harris was beyond the pale and the kind of racist scum that right-thinking people shouldn’t even listen to?

Unlike Harris, I don’t think Klein was being two-faced or deliberately malicious in what he wrote about Harris. Indeed, what Klein did is unfortunately all too common among well-intentioned individuals on the left, and academics in particular (and something I myself have been guilty of). Klein spoke up to voice his view about a position he felt was wrong or mistaken about race but then simply chose to keep silent rather than explicitly standing up to disclaim the views of those who would moralize the discussion. This can seem harmless, because in other contexts one can simply demur from voicing an opinion about controversial points which might get one in trouble. But key ambiguities in how we understand notions like racist/sexist/etc. and accusations of bias or insufficient awareness of/concern for the plight of underprivileged groups have the effect of turning silence into complicity.

The danger is that someone in Klein’s position faces strong pressure from certain factions on the left not to defend Murray’s views, and those of his supporters, as being within the realm of appropriate discussion and debate. Indeed, since Klein thinks that not only is Murray wrong but wrong in a dangerous and potentially harmful way, it’s understandable that he would see no reason to throw himself in front of the extremists who don’t merely want to say Harris is mistaken but believe he should be subject to the same ostracism that we apply to members of the KKK. So Klein simply presents his criticisms of Harris and Murray and calls attention to the ways in which he thinks their views are not only wrong but actively harmful in a way that resonates with past racial injustices, but doesn’t feel the need to step forward and affirmatively state his belief that Harris is probably just making a mistake for understandable human reasons, not engaging in some kind of thought crime.

In other contexts one could probably just stand aside and not engage with this issue, but when it comes to race and racism there is a strong underlying ambiguity as to whether one is saying a claim is racist in the sense of being harmful to racial minorities or in the sense that believing it deserves moral condemnation. Similarly, there is a strong ambiguity between claiming that someone is biased in the sense of having the universal human failing of being more sympathetic to situations they can relate to, and claiming they are biased in the sense of disliking minorities. These tend to run together, since once everyone agrees something is racist, e.g., our punitive drug laws, only those who don’t mind being labeled racists tend to support it, even though there are plenty of well-intentioned reasons to hold those beliefs, e.g., many black pastors were initially supportive of the harsh drug laws.

Unfortunately, the resulting effect is that by failing to stand up and actively deny that one is calling for moral condemnation of those with the wrong views on questions of race (or gender or…), one ends up implicitly encouraging such condemnation.

Harris and Klein

Double Charity Failure

I’m generally a defender of Harris, and I believe Vox (under Klein) was uncharitable to Murray and Harris. Even in this interview I think Klein (probably unintentionally) suggests that we should take Murray’s arguments less seriously because of his political aims and implied motivations.

However, Klein is dead on when he accuses Harris of not being willing to extend to others the same charity he wants extended to him. Disagreements are hard and understanding other people is very difficult, and Harris (like all of us) has trouble extending charity when an issue feels close to a personal attack on him, or understanding how other people’s errors may be motivated by a similar emotional response to prior unfairness.

My sense is that Klein’s real position is a reasonable one: Murray is very wrong on the science in a way that is harmful, and Harris gets it wrong because of the issue above. However, I think Harris is absolutely right in criticizing Klein for speaking in ways he should know are likely to lead to extreme moral condemnation.

Klein should know that his articles (and the articles in Vox while he was editor) will be interpreted by the public as going far beyond a mild criticism that Harris makes the same kind of unremarkable mistake we all do when talking about tough political issues. I don’t think Klein is being malicious here, and Harris is uncharitable in assuming this, but Klein should be faulted for not being much more clear to his readers that he isn’t suggesting Harris is beyond the realm of reasonable disagreement…merely that he thinks Harris is well-intentioned, but wrong, in a way that happens to be harmful.

In short, Harris and Klein both fall short of the ideal of charity, and both could do a great deal more to communicate that well-intentioned good people can disagree intensely, and even think another person’s views are harmful, without having to think they are a bad person.

Waking Up Podcast #123 – Identity & Honesty | Sam Harris

In this episode of the Waking Up podcast, Sam Harris speaks with Ezra Klein, Editor-at-Large for Vox Media, about racism, identity politics, intellectual honesty, and the controversy over his podcast with Charles Murray (Waking Up #73).

More Confusion About Gender Equality

It's Never Been About Numerical Equality

So apparently the Swedish government is going to pay women to edit Wikipedia out of concern that Wikipedia contribution is heavily biased in favor of men. This misunderstands what’s desirable about gender equality in a serious way. While this may be nothing more than harmless idiocy, it provides an important warning about the importance of taking a hard look at programs designed to increase gender equity.

There is no intrinsic good to having the same number of women editing Wikipedia (or engaged in any particular career or activity) as men. Rather, there is a harm when people are denied the ability to pursue their passion or interest on account of bias or stereotypes about their gender.

Now, if one believes that some activity discriminates against interested women, one might think that artificially inducing women to participate (affirmative action, or even payment) is an effective long-term strategy to change attitudes, e.g., working with women will change the attitudes of men in the field and place women in positions of power so future women won’t face the same discrimination. However, Wikipedia actively encourages using unidentifiable user names, doesn’t require gender identification, and there is no evidence of a toxic bro-culture among frequent editors. Thus, there is no reason to think injecting more female editors into Wikipedia will reduce the amount of discrimination faced by women in the future. Indeed, even if you believe that women are underrepresented on Wikipedia because of discrimination or stereotyping, e.g., women aren’t techie or women aren’t experts, then paying women to edit Wikipedia is wasting money that could have been used to combat this actual harm.

Moreover, there is no particular evidence that the edits made by frequent editors to Wikipedia are likely to be somehow slanted against women or otherwise convey a bias that this kind of program would be expected to rectify. Indeed, paying members of particular groups to edit Wikipedia is an assault on Wikipedia’s reliability. While I’m not particularly concerned about Swedish women, the underlying principle that no one should be able to pay to make Wikipedia more reflective of the views of a certain identity group is important. I mean, what happens to information about the Armenian genocide if Turkey decides that it should pay Turks to increase their representation on Wikipedia?

But why care about this at all? I mean so what if the Swedes blow some money stupidly? It’s not like men are suffering and need to be protected from the injustice of it all.

The reason we should care is that it shows in a clear and incontrovertible fashion how easily well-intentioned concern about gender equity can go off the rails. Given the potential blowback and murkiness of the issues, there is a tendency to just take for granted that programs which claim to be about improving gender equity are at least plausibly targeted at that end. However, this case proves that even in the most public circumstances it’s dangerously easy for people to conflate ensuring numerical equality with increasing gender equality. Given that in many circumstances the risk isn’t merely wasting money but, as in affirmative action and quota programs, actively making things worse (e.g. by making people suspect female colleagues didn’t really earn their positions), we need to be far more careful that such programs are doing something worth those costs.

Not Enough Women at Wikipedia? | EconLog | Library of Economics and Liberty

by Pierre Lemieux …women need state encouragement to do some of the one million edits that are made on Wikipedia every day. Presumably, this will promote the liberation of women. The Swedish government, or at least its foreign minister, wants…

A Norm Against Partisan Smearing?

Reading Reich’s book (Who We Are and How We Got Here) really drives home to me just how tempting it is to collapse into tribe-based cheering (e.g. cheering on your genes/genetic history/etc. as the best) and how important our norms against racism are in limiting this.

It makes me wonder if we couldn’t develop similarly strong norms about not cheering on your political/social tribe in the same manner. It’s a more delicate situation, since we need to preserve the ability to disagree and offer useful criticism. However, it still seems to me that we might be able to cultivate a norm which strongly disapproves of trying to make the other side look bad or implying they are improperly motivated/biased.

I mean, of course, we won’t actually get rid of hypocrisy or self-serving beliefs, but if alleging bad faith by other ideologies required the same kind of extreme caution we require for making claims about racial differences, it might make a big difference.

Failing Business 101

The Idiotic Idea Of Apple Competing With Intel

There is a rumor going around that Apple may try to replace the Intel chips in its computers with its own in-house chips. Now, it’s certainly conceivable that Apple will offer a cheap low-end laptop based on the chips it uses for the iPhone and iPad. Indeed, that’s probably a great opportunity. However, the idea that Apple might switch completely to its own in-house silicon is such a bad business idea that I have to assume they won’t try.

I mean, suppose for a moment that Apple thought they could outdo Intel and AMD in designing high-end processors. What should Apple do? Well, they could design processors in-house just for their own computers, limiting their potential profits and assuming substantial risk if they turn out to be wrong. Alternatively, they could spin off a new processor design company (perhaps with some kind of cooperation agreement) which could sell its processors to all interested parties while limiting Apple’s risk exposure. I think the latter option is clearly preferable, and since it seems pretty implausible that Intel and AMD are so badly run as to make such a venture attractive, it would be even less attractive to try to compete with Intel in-house.

Now why doesn’t this same argument apply to Apple’s choice to design its own ARM chips for the iPhone? First, Apple was able to buy state-of-the-art IP to start from, which wouldn’t be available if they were designing a high-performance desktop/laptop CPU. Second, because of the high degree of integration in mobile devices there were real synergies Apple could realize by designing the chip and phone in combination, e.g., implementing custom hardware to support various iPhone functions. For desktops and high-end laptops there are no such pressures. There is plenty of space to put any dedicated hardware in another chip, and no special Apple-specific features that would be particularly valuable to implement in the CPU.

On the other hand, a cheap(er) laptop that could run iPad apps could be a great deal. Just don’t expect Apple to replace Intel chips in its high-end systems.

Apple is actively working on Macs that replace Intel CPUs

A new Bloomberg report claims Apple is working on its own CPUs for the Mac, with the intent to ultimately replace the Intel chips in its computers with those it designs in-house. According to Bloomberg’s sources, the project (which is internally called Kalamata) is in the very early planning stages, but it has been approved by executives at the company.

Team Rittenhouse

Or Why The Heroes In Timeless Are Idiots

I just finished the first season of Timeless and I can’t help but feel the characters, and especially Lucy, are being irrational. (Some spoilers, but the only serious spoiler is in bullet 3 at the very end.)

What they know about Rittenhouse is that every generation is initially totally horrified and appalled at the thought of it and wishes it were destroyed, but eventually comes to see it as important and necessary once they have had sufficient time to think and evaluate the evidence. Yet this gives the characters no pause as to whether they themselves might be in the wrong.

Indeed, the only defensible reason to be an ardent supporter of a democratic system is that the evidence of the past few centuries makes a super strong case that democracies are good places to live and produce far more utility than dictatorships. But if you found out that the US had secretly been an oligarchy the whole time, the evidence would actually point the other way and suggest skepticism of non-oligarchical rule (or, well, influence).

The show’s writers try hard to make sure we see Rittenhouse as evil by making it hereditary and talking about good strong bloodlines, but once you’ve decided on a dictatorial system (or I guess technically a very small oligarchy), hereditary rule is probably not merely desirable but a necessity, as otherwise each subsequent set of rulers will conflict with the children of the last. By having 50+ members they can smooth out the ups and downs of monarchical hereditary rule and even work to expel the less able members, creating positive selection pressure. As for being from ‘good families’, well, every government needs a mythos to legitimize itself and inspire its members, even if it is small.

As for the bad acts that Rittenhouse is supposed to have committed in the show, I have three comments.

  1. Those bad acts actually pale beside the genuine injustices and horrors the legitimate democratic government of our country committed at the same times. I mean, WWI was basically a pointless slaughter of over 100,000 US soldiers, and then there are things like Tuskegee, Japanese internment, etc., etc.

    Even the recent ‘bad acts’, like killing Flynn’s family in the name of some kind of national necessity (or was it just impunity by some elements?), don’t seem out of line compared to the (IMO often justified) use of drone strikes, even given the occasional civilian casualties, to protect our national interest. If that’s the price we have to pay to get the kind of America we have, then it doesn’t seem particularly large.

    Especially when you compare that to the acts taken by the supposedly misguided but ultimately forgivable/noble Flynn in his campaign against them, including attempts to assist Nazis, commit massive terrorism with huge death tolls, etc.

  2. Rittenhouse (as I understand the extent of its power) is essentially analogous to the UK House of Lords before the reforms which made almost all the positions non-hereditary and let the Commons pass laws without approval from the Lords after a two-year (later one-year) wait. That always seemed like a pretty good system to me: the focus and primary power reflect the people, but you essentially (modulo a bit of self-interest) also have to convince a bunch of rando professional legislators who don’t have to please constituents.

  3. But even if Rittenhouse isn’t great at the moment, by the end of the season Lucy has cleaned out all the old power structure and is basically being offered the keys to the kingdom. It’s totally irrational for her not to just accept and try to use that power to make the world a better place and turn Rittenhouse into a force for good.

NRA Conferences Reduce Gun Injuries?

Misleading Reporting and Dubious Statistics

So the following letter is being widely reported online as if it were evidence for the importance of gun control. I’m skeptical of the results, as I detail in the next post, but even if one takes the results at face value, the letter is pretty misleading and the media reporting is nigh fraudulent.

In particular, if one digs into the appendix to the letter one finds the following statement: “many of the firearm injuries observed in the commercially insured patient population may reflect non-crime-related firearm injuries.” This is unsurprising, as using health insurance data means you are only looking at patients rich enough to be insured and willing to report their injury as firearm-related: so basically excluding anyone injured in the commission of a crime or who isn’t legally allowed to use a gun. Consistent with this, when the authors analyzed differences in crime rates they found no effect.

So even on its face this study would merely show that people who choose to use firearms are sometimes injured in that use. That might be a good reason to stay away from firearms yourself, but it is not an additional reason for regulation, as is being suggested in the media.

Moreover, if the effect is really just about safety at gun ranges, then it’s unclear whether the effect comes from lower use of such ranges or from the NRA conference encouraging greater care and best practices.

Reasons To Suspect The Underlying Study

Also, I’m pretty skeptical of the underlying claim in the study. The size of the effect claimed is huge relative to the number of people who attend an NRA conference. About 40% of US households are gun owners, but only ~80,000 people attend nationwide NRA conventions, or ~0.025% of the US population and ~0.0625% of US gun owners. Thus, for this statistic to be true because NRA members are busy at the conference, we would have to believe NRA conference attendees were a whopping 320 times more likely to inflict a gun-related injury than the average gun owner.
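Here is that back-of-envelope arithmetic spelled out (using the round numbers above, and assuming the study’s headline drop in injuries is on the order of 20%, which is what the 320x figure implies):

```python
us_population = 325_000_000      # rough figure for 2018
gun_owner_share = 0.40           # ~40% of households; used here as a crude owner share
nra_attendees = 80_000           # approximate nationwide convention attendance

gun_owners = us_population * gun_owner_share
attendee_share = nra_attendees / gun_owners
print(f"Attendees as a share of gun owners: {attendee_share:.4%}")  # roughly 0.06%

# If injuries drop on the order of 20% while only ~0.06% of gun owners are
# at the convention, each attendee must normally account for a wildly
# outsized share of injuries.
assumed_injury_drop = 0.20
implied_multiplier = assumed_injury_drop / attendee_share
print(f"Implied risk multiplier for attendees: ~{implied_multiplier:.0f}x")  # ~325, i.e. the ~320x above
```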

Now if we restrict our attention to homicides this is almost surely not the case. Attending an NRA convention requires a certain level of financial wealth and political engagement, which suggests membership in a socioeconomic class less likely to commit gun violence than the average gun owner. And indeed, the study finds no effect in terms of gun-related crime. Even if we look at non-homicides, gun deaths from suicides far outweigh those from accidents, and I doubt those who go to an NRA convention are really that much more suicidally inclined.

An alternative likely explanation is that the NRA schedules its conferences for certain times of the year when people are likely to be able to attend, and we are merely seeing seasonal correlations masquerading as effects of the NRA conference (a factor they don’t control for). Also, as they run many subgroup analyses and don’t report the results for census tracts and other possible subgroups, the possibility for p-hacking is quite real (see the toy simulation below). Looking at the graph they provide, I’m not exactly overwhelmed.
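To see why unreported subgroup analyses matter, here’s a toy simulation (pure null data, no numbers from the actual study) of how often at least one of twenty subgroups comes out ‘significant’ at p < 0.05 when there is no real effect anywhere:

```python
import random
from math import sqrt, erf
from statistics import mean, stdev

def two_sample_p(a, b):
    """Rough two-sample z-test p-value; adequate for a toy simulation."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = abs(mean(a) - mean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

random.seed(0)
n_sims, n_subgroups, false_alarms = 1_000, 20, 0
for _ in range(n_sims):
    # Every subgroup is drawn from the same distribution: no true effect.
    p_values = [
        two_sample_p([random.gauss(0, 1) for _ in range(100)],
                     [random.gauss(0, 1) for _ in range(100)])
        for _ in range(n_subgroups)
    ]
    false_alarms += any(p < 0.05 for p in p_values)

# Expect something like 1 - 0.95**20 ≈ 64% of runs to show a spurious "effect".
print(f"At least one 'significant' subgroup in {false_alarms / n_sims:.0%} of simulated studies")
```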

The claim gets harder to believe when one considers that people who attend NRA meetings almost surely don’t give up going to firing ranges during the meeting. Indeed, I would expect (though haven’t been able to verify) that there are any number of shooting range expeditions during the conference, which would actually mean many attendees are more likely to handle a gun during that time period.

Though once one realizes that the data set under consideration covers only those who make insurance claims relating to gun-related injuries, the claim is slightly more plausible, but only at the cost of undermining its significance. Deaths and suicides are much less likely to produce insurance claims, and the policy implications aren’t very clear if all we are seeing is a reduction in people injured because of incorrect gun grips (see the MythBusters episode about this; such injuries can be quite serious).

Artificial Intelligence And The Structure Of Thought

Why Your Self-Driving Car Won't Cause Armageddon

In recent years a number of prominent individuals have raised concerns about our ability to control powerful AIs. The idea is that once we create truly human-level, generally intelligent software, or AGI, computers will undergo an intelligence explosion and will be able to escape any constraints we place on them. This concern has perhaps been most thoroughly developed by Eliezer Yudkowsky.

Unlike the AI in bad science fiction, the concern isn’t that the AI will be evil or desire dominion the way humans do, but simply that it will be too good at whatever task we set it to perform. For instance, suppose Waymo builds an AI to run its fleet of self-driving cars. The AI’s task is to converse with passengers/app users and route its vehicles appropriately. Unlike more limited self-driving car software, this AI is programmed to learn the subtleties of human behavior so it can position a pool of cars in front of the stadium right before the game ends and helpfully show tourists the sights. On Yudkowsky’s vision the engineers achieve this by coding in a reward function that the software works to maximize (or equivalently a penalty function it works to minimize). For instance, in this case the AI might be punished based on negative reviews/frustrated customers, deaths/damage from accidents involving its vehicles, travel delays, and customers who choose to use a competitor rather than Waymo. I’m already skeptical that (super)human AI would have anything identifiable as a global reward/utility function, but on Yudkowsky’s picture AGI is something like a universal optimizer which is set loose to do its best to achieve rewards.
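For concreteness, here is the kind of scalar penalty Yudkowsky’s picture seems to presuppose. Everything below is hypothetical and purely illustrative (the metric names, the weights, and the idea that the objective would ever be a single hand-written scalar at all):

```python
from dataclasses import dataclass

@dataclass
class FleetMetrics:
    """Hypothetical daily metrics the routing AI is scored on."""
    bad_reviews: int
    injuries_or_deaths: int
    total_delay_minutes: float
    customers_lost: int

def penalty(m: FleetMetrics) -> float:
    """A single number the optimizer is imagined to drive toward zero.

    The worry in the text: a sufficiently clever optimizer notices that a
    world with no passengers at all scores zero on every term below.
    """
    return (1.0 * m.bad_reviews
            + 1_000_000.0 * m.injuries_or_deaths
            + 0.1 * m.total_delay_minutes
            + 50.0 * m.customers_lost)

print(penalty(FleetMetrics(bad_reviews=12, injuries_or_deaths=0,
                           total_delay_minutes=480.0, customers_lost=3)))
```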

The concern is that the AI would eventually realize that it could minimize its punishment by arranging for everyone to die in a global pandemic, since then there would be no bad reviews, lost customers or travel delays. Given the AI’s vast intelligence and massive data set, it would then hack into microbiology labs and manipulate the workers there to create a civilization-ending plague. Moreover, no matter what kind of firewalls or limitations we try to place on the AI, as long as it can somehow interact with the external world it will find a way around these barriers. Since it’s devilishly difficult to specify any utility function without such undesirable solutions, Yudkowsky concludes that AGI poses a serious threat to the human species.

Rewards And Reflection

The essential mechanism at play in all of Yudkowsky’s apocalyptic scenarios is that the AI examines its own reward function, realizes that some radically different strategy would offer even greater rewards, and proceeds to surreptitiously work to realize this alternate strategy. Now it’s only natural that a sufficiently advanced AI would have some degree of reflective access to its own design and internal deliberation. After all, it’s common for humans to reflect on our own goals and behaviors to help shape our future decisions, e.g., we might observe that if we continue to get bad grades we won’t get into the college we want and as a result decide that we need to stop playing World of Warcraft.

At first blush it might seem obvious that realizing its rewards are given by a certain function would induce an AI to maximize that function. One might even be tempted to claim this is somehow part of the definition of what it means for an agent to have a utility function, but that’s trading on an ambiguity between two notions of reward.

The sense of reward which gives rise to the worries about unintended satisfaction is that of positive reinforcement. It’s the digital equivalent of giving someone cocaine. Of course, if you administer cocaine to someone every time they write a blog post they will tend to write more blog posts. However, merely learning that cocaine causes a rewarding release of dopamine in the brain doesn’t cause people to go out and buy cocaine. Indeed, that knowledge could just as well have the exact opposite effect. Similarly, there is no reason to assume that merely because an AGI has a representation of its reward function it will try to reason out alternative ways to satisfy it. Indeed, indulging in anthropomorphizing for a moment, there is no reason to assume that an AGI will have any particular desire regarding rewards received by its future time states, much less adopt a particular discount rate.

Of course, in the long run, if a software program was rewarded for analyzing its own reward function and finding unusual ways to activate it, then it could learn to do so, just as people who are rewarded with pleasurable drug experiences can learn to look for ways to short-circuit their reward system. However, if that behavior is punished, e.g., humans intervene and punish the software when it starts recommending public transit, then the system will learn to avoid short-circuiting its reward pathways, just as people can learn to avoid addictive drugs. This isn’t to say that there is no danger here; left alone, an AGI, just like a teen with access to cocaine, could easily learn harmful reward-seeking behavior. However, since the system doesn’t start in a state in which it applies its vast intelligence to figure out ways to hack its reward function, the risk is far less severe.

Now, Yudkowsky might respond by saying he didn’t really mean the system’s reward function but its utility function. However, since we don’t tend to program machine learning algorithms by specifying the function they will ultimately maximize (or reflect on and try to maximize), it’s unclear why we would need to explicitly specify a utility function that doesn’t lead to unintended consequences. After all, Yudkowsky is the one trying to argue that it’s likely that AGI will have these consequences, so merely restating the problem in a space that has no intrinsic relationship to how one would expect AGI to be constructed doesn’t do anything to advance his argument. For instance, I could point out that, phrased in terms of the locations of fundamental particles, it’s really hard to specify a program that excludes apocalyptic arrangements of matter, but that wouldn’t do anything to convince you that AIs risk causing such apocalypses, since such specifications have nothing to do with how we expect an AI to be programmed.

The Human Comparison

Ultimately, we have only one example of a kind of general intelligence: the human brain. Thus, when evaluating claims about the dangers of AGI, one of the first things we should do is see if the same story applies to our brains and, if not, whether there is any special reason to expect our brains to be different.

Looking at the way humans behave, it’s striking how poorly Yudkowsky’s stories describe our behavior, even though evolution has shaped us in ways that make us far more dangerous than we should expect AGIs to be (we have self-preservation instincts, approximately coherent desires and beliefs, and are responsive to most aspects of the world rather than caring only about driving times or chess games). Time and time again we see that we follow heuristics and apply familiar mental strategies even when it’s clear that a different strategy would offer us greater activation of reward centers, greater reproductive opportunities, or any other plausible thing we are trying to optimize.

The fact that we don’t consciously try to optimize our reproductive success and instead apply a forest of frameworks and heuristics that we follow even when they undermine our reproductive success strongly suggests that an AGI will most likely function in a similarly layered, heuristic fashion. In other words, we shouldn’t expect intelligence to come as a result of some pure mathematical optimization but more as a layered cake of heuristic processes. Thus, when an AI responsible for routing cars reflects on its performance, it won’t see the pure mathematical question of how it can minimize such-and-such function, any more than we see the pure mathematical question of how we can cause dopamine to be released in a particular part of our brain or how we can have more offspring. Rather, just as we break up the world into tasks like ‘make friends’ or ‘get respect from peers’, the AI will reflect on a world represented in terms of pieces like ‘route car from A to B’ or ‘minimize congestion in area D’ that bias it towards a certain kind of solution and away from plots like avoiding congestion by creating a killer plague.

This isn’t to say there aren’t concerns. Indeed, as I’ve remarked elsewhere, I’m much more concerned about schizophrenic AIs than I am about misaligned AIs, but that’s enough for this post.