Thoughts on rationalism and the rationalist community from a skeptical perspective. The author rejects rationality in the sense that he believes it isn't a logically coherent concept, thinks the larger rationalist community is insufficiently critical of its own beliefs, and maintains that ELIEZER YUDKOWSKY IS NOT THE TRUE CALIPH.

Stop Calling Subjects Ethically Fraught

It's An Excuse, Not An Argument

Listening to the Last Week Tonight episode on gene editing (it’s pretty good) and seeing this debate about paying organ donors, I’m compelled to call out the practice of simply asserting that something is ethically fraught or troublesome.

Both with respect to not compensating organ donors (something which could save huge numbers of lives) and with respect to (mostly prospective) limits on eliminating genetic disease, or even bans on improvement, I think we let people who are simply uncomfortable with change off the hook by constantly repeating the supposed truism that the issue is ethically fraught or that there are serious ethical concerns. It’s basically a free pass that excuses the fact that they are putting their discomfort ahead of people’s welfare.

Under all the scenarios/conditions seriously being considered, no, there aren’t ethical concerns. Fears like a bank repossessing your kidney are no more relevant to the proposals on the table than the fear that creditors will enslave debtors is to paying wages. Similarly, concerns about racially motivated eugenics programs have no plausible relationship to any kind of gene therapy under even prospective consideration.

Of course, we should hear potential concerns about such policies just like we would for any other policy/technology. However, opponents should be on the spot to either shut up or come up with compelling arguments suggesting harms. The fact that the opponent of paying for organ donation in the WSJ piece is reduced to arguments like “The introduction of money for a precious good comes at the cost of the ability for one to aspire to virtue” makes me doubt they can come up with such arguments.

I’d add that I think philosophers are partially to blame on this point. As a matter of philosophical interest we correctly find clever new arguments seeking to show that paid organ donation is actually somehow problematically coercive or otherwise wrong more interesting than the obvious argument that it saves lives. However, just as physicists need to convey to the public that the very thing which makes theories deviating from the standard model interesting also makes them unlikely to be true, philosophers need to convey the same point about these clever ethical arguments.

How to Provide Better Incentives to Organ Donors

Three experts discuss strategies to address the shortage of organs available for people who need transplants.

Social Control and The Principal-Agent Problem

The Chinese Example And The Dangers Of Restricting Free Speech

This interesting post reminded me of my suspicion that a lot of the censorship in China isn’t the result of Xi Jinping’s crazed desire to be repressive. Almost certainly Xi would benefit from far less censorship, and may indeed benefit from media reports exposing misbehavior by low-level party officials, but the incentives of those with the power to control expression (both to show off their loyalty and to hide embarrassing events) mean that far more censorship gets implemented than Xi would ideally want.

I think this is an important lesson for those who want to limit our free speech (or academic freedoms) when it comes to issues of race, gender, harassment and the like. Even if the speech one intends to ban has little value and imposes great harms, one needs to keep in mind the risks posed by delegating the practical authority to determine what speech qualifies.

Politician’s Incentives Regarding Facebook

God, I hope not, but it sounds plausible.

The Peltzman Model of Regulation and the Facebook Hearings – Marginal REVOLUTION

If you want to understand the Facebook hearings it’s useful to think not about privacy or technology but about what politicians want. In the Peltzman model of regulation, politicians use regulation to trade off profits (wanted by firms) and lower prices (wanted by constituents) to maximize what politicians want, reelection.

Privacy Regulation Is Likely Unworkably Hard

Don't Count On The Government Regulating Facebook

Tyler Cowen provides a great analysis of one of the generic calls for regulating big data (and Facebook in particular). Put this together with his previous post pointing out that it would cost us each ~$80/year to use Facebook on a paid basis[1], and the two make a compelling case that there is no appetite in the US for serious laws protecting data privacy and that whatever laws we do get will probably do more harm than good.

To expand on Cowen’s point a little, let’s seriously consider for a moment what a world would look like where the law granted individuals broad rights to control how their information was kept and used. That would be a world where it would suddenly be very hard to conduct a little poll on your blog. Scott Alexander came up with some interesting hypotheses regarding brain functioning and transgender individuals by asking his readers to fill out a survey. But doing that survey meant collecting personal and medical information about his readers (their gender identification, age, other mental health diagnoses) and storing it for analysis. He certainly wouldn’t have bothered to do any such thing if he had been required to document regulatory compliance, include a mechanism for individuals to request their data be removed, or navigate complex consent and disclosure rules (now you’ve got to store emails and passwords, making things worse, and you risk liability if you become unable to delete info). And what about the concerned parent afraid that children in her town are getting sick too frequently? Will it now be so difficult for her to post a survey that we won’t discover the presence of environmental carcinogens?

One is tempted to respond that these cases are obviously different. These aren’t people using big data to track individuals but people choosing to share non-personally identifiable data on a survey. But how can we put that into a law and make it so obvious that bloggers don’t feel any need to consult attorneys before running a survey?

One might try to hang one’s hat on the fact that the surveys I described don’t record your email address or name[2]. However, if you don’t want repeated voting to be totally trivial, that means recording an IP address. Ask enough questions and you’ll end up deanonymizing everyone, and there is always a risk (oops, turns out there is only one 45 year old Broglida). On the other hand, if it’s ok as long as you don’t deliberately request real-world identifying information, the regulation is toothless: Google doesn’t really care what your name is; they just want your age, politics, click history, etc.
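To make the deanonymization worry concrete, here is a toy sketch with entirely made-up responses (not from any real survey): once a survey asks more than a couple of questions, combinations of innocuous answers quickly become unique, so an "anonymous" response table plus an IP or timestamp log can pick individuals out.

```python
# Toy illustration with fabricated data: count how many respondents have a
# unique combination of answers across just three questions.
from collections import Counter

responses = [
    ("45-54", "Florida", "nurse"),
    ("25-34", "Ohio",    "teacher"),
    ("25-34", "Ohio",    "teacher"),
    ("45-54", "Florida", "pilot"),
]

counts = Counter(responses)
unique = sum(1 for r in responses if counts[r] == 1)
print(f"{unique} of {len(responses)} respondents have a unique answer combination")
# Here 2 of 4 are already unique; with dozens of questions the fraction
# approaches 100%, which is the sense in which "enough questions deanonymizes
# everyone".
```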

Well, maybe the law should only cover passively collected data. That’s damn hard to define (why is a click on an AJAX link in a form different from a click on a link to a story?) and risks making normal HTTP server logs illegal. Besides, it’s a huge benefit to consumers that startups are able to see which design or UI visitors prefer. Checking whether users find a new theme or new video controls preferable (say, by serving it to 50% of them and seeing if they spend more time on the site) shouldn’t require looping in corporate counsel, or we make innovation and improvement hugely expensive. Moreover, users with special needs and other niche interests are likely to particularly suffer if there is no low-cost, hassle-free way of trying out alternate page versions and evaluating user response.
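For concreteness, here is a minimal sketch of the kind of routine experiment I have in mind; the split rule, traffic numbers and effect size are all invented for illustration. It is exactly this sort of low-stakes check that stops being worth doing if every variant needs a legal review.

```python
# Hypothetical 50/50 theme test: serve the new theme to half of visitors and
# compare average time on site. All numbers below are simulated, not real data.
import random
from statistics import mean

def assign_variant(visitor_id: int) -> str:
    """Deterministic ~50/50 split by visitor id (an assumed, illustrative rule)."""
    return "new_theme" if visitor_id % 2 == 0 else "old_theme"

random.seed(0)
sessions = {"new_theme": [], "old_theme": []}
for visitor_id in range(1000):
    variant = assign_variant(visitor_id)
    base = 125 if variant == "new_theme" else 115   # pretend the new theme helps a bit
    sessions[variant].append(max(0.0, random.gauss(base, 30)))

for variant, times in sessions.items():
    print(variant, round(mean(times), 1), "seconds on site (simulated)")
```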

Ultimately, we don’t really want the world that we could get by regulating data ownership. It’s not the world in which Facebook doesn’t have scary power. It’s the world where companies like Facebook have more scary power, because they have the resources to hire legal counsel and lobby for regulatory changes to ensure their practices stay technically legal while startups and potential competitors don’t have those advantages. Not only do we not want the world we would get by passing data ownership regulations, I don’t think most people even have a clear idea of why it would be a good thing. People just have a vague feeling of discomfort with companies like Facebook, not a clear conception of a particular harm to avoid, and that’s a disastrous situation for regulation.

Having said this, I do fear the power of companies like Facebook (and even governmental entities) to blackmail individuals based on the information they are able to uncover with big data. However, I believe the best response to this is more openness and, ideally, an open, standards-based social network that doesn’t leave everything in the hands of one company. Ultimately, that will mean less privacy and less protection for our data, but that’s why specifying the harm you fear really matters. If the problem is, as I fear, the unique leverage that being the sole possessor of this kind of data provides Facebook and/or governments, then the answer is to make sure they aren’t the sole possessor of anything.

Zeynep Tufekci’s Facebook solution – can it work? – Marginal REVOLUTION

Here is her NYT piece, I’ll go through her four main solutions, breaking up, paragraph by paragraph, what is one unified discussion: What would a genuine legislative remedy look like? First, personalized data collection would be allowed only through opt-in mechanisms that were clear, concise and transparent.


  1. Now, while a subscription-funded Facebook would surely be much, much cheaper, I think Cowen is completely correct when he points out that any fee-based system would hugely reduce the user base and therefore the value of using Facebook. Indeed, almost all of the benefit Facebook provides over any random blogging platform is simply that everyone is on it. Personally, I favor an open social graph, but this is even less protective of personal information.
  2. Even that is pretty limiting. For instance, it prevents running any survey that wants to be able to do a follow-up or simply email people their individual results.

More Confusion About Gender Equality

It's Never Been About Numerical Equality

So apparently the Swedish government is going to pay women to edit Wikipedia out of concern that Wikipedia contribution is heavily biased in favor of men. This misunderstands what’s desirable about gender equality in a serious way. While this may be nothing more than harmless idiocy, it provides an important warning about the need to take a hard look at programs designed to increase gender equity.

There is no intrinsic good to having the same number of women editing Wikipedia (or engaged in any particular career or activity) as men. Rather, there is a harm when people are denied the ability to pursue their passion or interest on account of bias or stereotypes about their gender.

Now, if one believes that some activity discriminates against interested women, one might think that artificially inducing women to participate (affirmative action, or even payment) is an effective long-term strategy to change attitudes, e.g., working with women will change the attitudes of men in the field and place women in positions of power so future women won’t face the same discrimination. However, Wikipedia actively encourages using unidentifiable user names, doesn’t require gender identification, and there is no evidence of a toxic bro-culture among frequent editors. Thus, there is no reason to think injecting more female editors into Wikipedia will reduce the amount of discrimination faced by women in the future. Indeed, even if you believe that women are underrepresented on Wikipedia because of discrimination or stereotyping, e.g., that women aren’t techie or that women aren’t experts, then paying women to edit Wikipedia is wasting money that could have been used to combat this actual harm.

Moreover, there is no particular evidence that the edits made by frequent Wikipedia editors are likely to be somehow slanted against women or otherwise convey a bias that this kind of program could be expected to rectify. Indeed, paying members of particular groups to edit Wikipedia is an assault on Wikipedia’s reliability. While I’m not particularly concerned about Swedish women, the underlying principle that no one should be able to pay to make Wikipedia more reflective of the views of a certain identity group is important. I mean, what happens to information about the Armenian genocide if Turkey decides that it should pay Turks to increase their representation on Wikipedia?

But why care about this at all? I mean so what if the Swedes blow some money stupidly? It’s not like men are suffering and need to be protected from the injustice of it all.

The reason we should care is that it shows, in a clear and incontrovertible fashion, how easily well-intentioned concern about gender equity can go off the rails. Given the potential blowback and the murkiness of the issues, there is a tendency to just take for granted that programs which claim to be about improving gender equity are at least plausibly targeted at that end. However, this case proves that even in the most public circumstances it’s dangerously easy for people to conflate ensuring numerical equality with increasing gender equality. Given that in many circumstances the risk isn’t merely wasted money but, as with affirmative action and quota programs, actively making things worse (e.g., by making people suspect female colleagues didn’t really earn their positions), we need to be far more careful that such programs are doing something worth those costs.

Not Enough Women at Wikipedia? | EconLog | Library of Economics and Liberty

by Pierre Lemieux …women need state encouragement to do some of the one million edits that are made on Wikipedia every day. Presumably, this will promote the liberation of women. The Swedish government, or at least its foreign minister, wants…

NRA Conferences Reduce Gun Injuries?

Misleading Reporting and Dubious Statistics

So the following letter is being widely reported online as if it were evidence for the importance of gun control. I’m skeptical of the results, as I detail in the next post, but even if one takes the results at face value the letter is pretty misleading and the media reporting is nigh fraudulent.

In particular, if one digs into the appendix to the letter one finds the following statement: “many of the firearm injuries observed in the commercially insured patient population may reflect non-crime-related firearm injuries.” This is unsurprising, as using health insurance data means you are only looking at patients well-off enough to be insured and willing to report their injury as firearm-related: so basically excluding anyone injured in the commission of a crime or who isn’t legally allowed to use a gun. Relatedly, the authors also analyzed differences in crime rates and found no effect.

So even on its face this study would merely show that people who choose to use firearms are sometimes injured in that use. That might be a good reason to stay away from firearms yourself, but it is not an additional reason for regulation, as is being suggested in the media.

Moreover, if the effect is really just about safety at gun ranges, then it’s unclear whether the effect comes from lower use of such ranges during the convention or from the NRA conference encouraging greater care and best practices.

Reasons To Suspect The Underlying Study

Also, I’m pretty skeptical of the underlying claim in the study. The size of the effect claimed is huge relative to the number of people who attend an NRA conference. About 40% of US households are gun owners, but only ~80,000 people attend the nationwide NRA convention, i.e., ~0.025% of the US population or ~0.0625% of US gun owners. Thus, for this statistic to be true because NRA members are busy at the conference, we would have to believe NRA conference attendees were a whopping 320 times more likely to inflict a gun-related injury than the average gun owner.
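To see where the 320× figure comes from, here is the back-of-the-envelope arithmetic from the paragraph above as a sketch. The ~20% injury decline used at the end is simply the drop implied by back-solving from these same shares and the 320× multiplier; it is an assumption for the arithmetic, not a number taken from the letter itself.

```python
# Reproduce the paragraph's arithmetic; all inputs come from the text above,
# except reported_drop, which is the assumed decline that back-solves to 320x.
us_population = 320_000_000
attendees = 80_000
gun_owner_share = 0.40  # "about 40% of US households are gun owners"

attendee_share_of_population = attendees / us_population                        # ~0.025%
attendee_share_of_gun_owners = attendee_share_of_population / gun_owner_share   # ~0.0625%
print(f"{attendee_share_of_population:.4%}", f"{attendee_share_of_gun_owners:.4%}")

# If injuries among gun owners drop by reported_drop while only this tiny share
# of owners is at the convention, attendees must account for that entire drop.
reported_drop = 0.20  # assumed ~20% decline, for illustration
required_multiplier = reported_drop / attendee_share_of_gun_owners
print(round(required_multiplier))  # ~320 times the average gun owner's injury rate
```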

Now, if we restrict our attention to homicides, this is almost surely not the case. Attending an NRA convention requires a certain level of financial wealth and political engagement, which suggests membership in a socioeconomic class less likely to commit gun violence than the average gun owner. And indeed, the study finds no effect in terms of gun-related crime. Even if we look at non-homicides, gun deaths from suicides far outweigh those from accidents, and I doubt those who go to an NRA convention are really that much more suicidally inclined.

An alternative, likely explanation is that the NRA schedules its conferences for certain times of the year when people are likely to be able to attend, and we are merely seeing seasonal correlations masquerading as effects of the NRA conference (a factor they don’t control for). Also, as they run a host of subgroup analyses and don’t report the results for census tracts and other possible subgroups, the possibility of p-hacking is quite real. Looking at the graph they provide, I’m not exactly overwhelmed.

The claim gets harder to believe when one considers the fact that people who attend NRA meetings almost surely don’t give up going to firing ranges during the meeting. Indeed, I would expect (though haven’t been able to verify) that there would be any number of shooting range expeditions during the conference and that this would actually mean many attendees would be more likely to handle a gun during that time period.

Though, once one realizes that the data set under consideration includes only those who make insurance claims relating to gun-related injuries, the claim becomes slightly more plausible, but only at the cost of undermining its significance. Deaths and suicides are much less likely to produce insurance claims, and the policy implications aren’t very clear if all we are seeing is a reduction in people injured because of incorrect gun grips (see the MythBusters episode about this; such injuries can be quite serious).

Artificial Intelligence And The Structure Of Thought

Why Your Self-Driving Car Won't Cause Armageddon

In recent years a number of prominent individuals have raised concerns about our ability to control powerful AIs. The idea is that once we create truly human-level, generally intelligent software, or AGI, computers will undergo an intelligence explosion and will be able to escape any constraints we place on them. This concern has perhaps been most thoroughly developed by Eliezer Yudkowsky.

Unlike the AI in bad science fiction, the concern isn’t that the AI will be evil or desire dominion the way humans do but simply that it will be too good at whatever task we set it to perform. For instance, suppose Waymo builds an AI to run its fleet of self-driving cars. The AI’s task is to converse with passengers/app users and route its vehicles appropriately. Unlike more limited self-driving car software, this AI is programmed to learn the subtleties of human behavior so it can position a pool of cars in front of the stadium right before the game ends and helpfully show tourists the sights. On Yudkowsky’s picture the engineers achieve this by coding in a reward function that the software works to maximize (or, equivalently, a penalty function it works to minimize). For instance, in this case the AI might be punished based on negative reviews/frustrated customers, deaths/damage from accidents involving its vehicles, travel delays, and customers who choose to use a competitor rather than Waymo. I’m already skeptical that (super)human AI would have anything identifiable as a global reward/utility function, but on Yudkowsky’s picture AGI is something like a universal optimizer which is set loose to do its best to achieve rewards.

The concern is that the AI would eventually realize that it could minimize its punishment by arranging for everyone to die in a global pandemic, since then there would be no bad reviews, lost customers or travel delays. Given the AI’s vast intelligence and massive data set, it would then hack into microbiology labs and manipulate the workers there to create a civilization-ending plague. Moreover, no matter what kind of firewalls or limitations we try to place on the AI, as long as it can somehow interact with the external world it will find a way around these barriers. Since it’s devilishly difficult to specify any utility function without such undesirable solutions, Yudkowsky concludes that AGI poses a serious threat to the human species.
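To make the worry concrete, here is a toy version of the penalty function sketched above. Every term and weight is hypothetical and invented for illustration; this is neither Waymo’s software nor Yudkowsky’s formalism, just the shape of the argument.

```python
# Toy penalty the hypothetical routing AI is trained to minimize.
# All terms and weights are made up for illustration.
def fleet_penalty(bad_reviews, accident_injuries, travel_delay_minutes, lost_customers):
    return (2.0 * bad_reviews
            + 1000.0 * accident_injuries
            + 0.1 * travel_delay_minutes
            + 5.0 * lost_customers)

# Ordinary operation: some complaints, an accident, some delays.
print(fleet_penalty(bad_reviews=40, accident_injuries=1,
                    travel_delay_minutes=5000, lost_customers=20))   # 1680.0

# The perverse optimum the argument worries about: a world with no riders left
# zeroes every term, so the literal minimum of this function is catastrophe.
print(fleet_penalty(bad_reviews=0, accident_injuries=0,
                    travel_delay_minutes=0, lost_customers=0))       # 0.0
```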

Rewards And Reflection

The essential mechanism at play in all of Yudkowsky’s apocalyptic scenarios is that the AI examines its own reward function, realizes that some radically different strategy would offer even greater rewards, and proceeds to surreptitiously work to realize this alternate strategy. Now, it’s only natural that a sufficiently advanced AI would have some degree of reflective access to its own design and internal deliberation. After all, it’s common for humans to reflect on our own goals and behaviors to help shape our future decisions, e.g., we might observe that if we continue to get bad grades we won’t get into the college we want and as a result decide that we need to stop playing World of Warcraft.

At first blush it might seem obvious that realizing its rewards are given by a certain function would induce an AI to maximize that function. One might even be tempted to claim this is somehow part of the definition of what it means for an agent to have a utility function, but that’s trading on an ambiguity between two notions of reward.

The sense of reward which gives rise to the worries about unintended satisfaction is that of positive reinforcement. It’s the digital equivalent of giving someone cocaine. Of course, if you administer cocaine to someone every time they write a blog post, they will tend to write more blog posts. However, merely learning that cocaine causes a rewarding distribution of dopamine in the brain doesn’t cause people to go out and buy cocaine. Indeed, that knowledge could just as well have the exact opposite effect. Similarly, there is no reason to assume that merely because an AGI has a representation of its reward function it will try to reason out alternative ways to satisfy it. Indeed, indulging in anthropomorphizing for a moment, there is no reason to assume that an AGI will have any particular desire regarding rewards received by its future time states, much less adopt a particular discount rate.

Of course, in the long run, if a software program were rewarded for analyzing its own reward function and finding unusual ways to activate it, then it could learn to do so, just as people who are rewarded with pleasurable drug experiences can learn to look for ways to short-circuit their reward system. However, if that behavior is punished, e.g., humans intervene and punish the software when it starts recommending public transit, then the system will learn to avoid short-circuiting its reward pathways, just as people can learn to avoid addictive drugs. This isn’t to say that there is no danger here; left alone, an AGI, just like a teen with access to cocaine, could easily learn harmful reward-seeking behavior. However, since the system doesn’t start in a state in which it applies its vast intelligence to figuring out ways to hack its reward function, the risk is far less severe.
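A tiny bandit-style learner (a hypothetical toy, not a claim about how any real AGI would be built) illustrates the point: an action that games the metric stops being chosen once it starts being punished, because its learned value estimate falls below that of ordinary behavior.

```python
# Value estimates for two actions of a toy agent; names and numbers are invented.
actions = {"route_cars_normally": 0.0, "game_the_metric": 0.0}
LEARNING_RATE = 0.2

def update(action, reward):
    """Simple exponential-moving-average value update."""
    actions[action] += LEARNING_RATE * (reward - actions[action])

# Phase 1: gaming the metric pays off, so its estimated value rises above normal routing.
for _ in range(20):
    update("route_cars_normally", 1.0)
    update("game_the_metric", 2.0)

# Phase 2: humans notice and punish the gaming behavior.
for _ in range(20):
    update("route_cars_normally", 1.0)
    update("game_the_metric", -5.0)

print(actions, "-> prefers:", max(actions, key=actions.get))
# After the punishment phase the agent prefers route_cars_normally.
```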

Now, Yudkowsky might respond by saying he didn’t really mean the system’s reward function but its utility function. However, since we don’t tend to program machine learning algorithms by specifying the function they will ultimately maximize (or reflect on and try to maximize), it’s unclear why we would need to explicitly specify a utility function that doesn’t lead to unintended consequences. After all, Yudkowsky is the one trying to argue that it’s likely that AGI will have these consequences, so merely restating the problem in a space that has no intrinsic relationship to how one would expect AGI to be constructed doesn’t do anything to advance his argument. For instance, I could point out that, phrased in terms of the locations of fundamental particles, it’s really hard to specify a program that excludes apocalyptic arrangements of matter, but that wouldn’t do anything to convince you that AIs risk causing such apocalypses, since such specifications have nothing to do with how we expect an AI to be programmed.

The Human Comparison

Ultimately, we have one example of a kind of general intelligence: the human brain. Thus, when evaluating claims about the dangers of AGI, one of the first things we should do is see if the same story applies to our brain and, if not, whether there is any special reason to expect our brains to be different.

Looking at the way humans behave, it’s striking how poorly Yudkowsky’s stories describe our behavior, even though evolution has shaped us in ways that make us far more dangerous than we should expect AGIs to be (we have self-preservation instincts, approximately coherent desires and beliefs, and are responsive to most aspects of the world rather than caring only about driving times or chess games). Time and time again we see that we follow heuristics and apply familiar mental strategies even when it’s clear that a different strategy would offer us greater activation of reward centers, greater reproductive opportunities or any other plausible thing we are trying to optimize.

The fact that we don’t consciously try to optimize our reproductive success, and instead apply a forest of frameworks and heuristics that we follow even when they undermine our reproductive success, strongly suggests that an AGI will most likely function in a similarly heuristic, layered fashion. In other words, we shouldn’t expect intelligence to come as a result of some pure mathematical optimization but more as a layered cake of heuristic processes. Thus, when an AI responsible for routing cars reflects on its performance, it won’t see the pure mathematical question of how it can minimize such and such function, any more than we see the pure mathematical question of how we can cause dopamine to be released in this part of our brain or how we can have more offspring. Rather, just as we break up the world into tasks like ‘make friends’ or ‘get respect from peers’, the AI will reflect on the world represented in terms of pieces like ‘route car from A to B’ or ‘minimize congestion in area D’ that bias it towards a certain kind of solution and away from plots like avoiding congestion by creating a killer plague.

This isn’t to say there aren’t concerns. Indeed, as I’ve remarked elsewhere, I’m much more concerned about schizophrenic AIs than I am about misaligned AIs, but that’s enough for this post.

Don’t Make Drug Companies Police Usage

Is this a ridiculous amount of opiates for a single small town to prescribe? Sure thing. But I find the idea of drug companies being held to task for this, and thus implicitly the idea that they should have done something to supply fewer pills to these pharmacies, deeply troubling.

I mean, how would that work out? The drug companies are (rightly) legally barred from seeing patient records and deciding who does and doesn’t deserve prescriptions, so all they could do is cut off the receiving pharmacies. Ok, so they could put pressure on the pharmacies to fill fewer prescriptions, but the pharmacies also don’t have patient records, so what that means is the pharmacies scrutinize you to see if you ‘look’ like someone who is abusing the prescription or like a ‘real’ patient. So basically being a minority or otherwise not looking like what the pharmacist expects a real pain patient to look like means you can’t get your medicine. Worse, the people scamming pills will be willing to use whatever tricks are necessary (faking pain, shaving their head, whatever) to elicit scripts, so it’s the legitimate users who are most likely to end up out in the cold.

While I also have reservations about the DEA intimidating doctors into not prescribing needed medicine, it is the government (which, I understand, is informed about the number of opiates being sold by various pharmacies) that should be investigating cases like this, not the drug maker. Personally, I think the solution isn’t, and never has been, controlling the supply; it has always been providing sufficient resources like methadone and buprenorphine maintenance so people who find themselves hooked can live normal lives.

Drug companies submerged WV in opioids: One town of 3,000 got 21 million pills

Drug companies hosed tiny towns in West Virginia with a deluge of addictive and deadly opioid pills over the last decade, according to an ongoing investigation by the House Energy and Commerce Committee. For instance, drug companies collectively poured 20.8 million hydrocodone and oxycodone pills into the small city of Williamson, West Virginia, between 2006 and 2016, according to a set of letters the committee released Tuesday.

AI Bias and Subtle Discrimination

Don't Incentivize Discrimination To Feel Better

This is an important point not just about AI software but about discussions of race and gender more generally. Accurately reporting (or predicting) facts that, all too often, are the unfortunate result of a long history of oppression or simple random variation isn’t bias.

Personally, I feel that the social norm which regards accurate observation of facts, such as (as mentioned in the article) racial differences in loan repayment rates conditional on wealth, as a reflection of bias is just a way of pretending society’s social warts don’t exist. Only by accurately reporting such effects can we hope to identify and rectify their causes, e.g., perhaps differences in treatment make employment less stable for certain racial groups, or whether or not the bank officer looks like you affects the likelihood of repayment. Our unwillingness to confront these issues places our personal interest in avoiding the risk of seeming racist/sexist over the social good of working out and addressing the causes of these differences.

Ultimately, the society I want isn’t the wink-and-a-nod culture in which people all mouth platitudes while we implicitly reward people for denying underrepresented groups loans or spots in colleges or whatever. I think we end up with a better society (not the best, see below) when the bank’s loan evaluation software spits out a number which bakes in all available correlations (even the racial ones) and the loan officer is rewarded for making good judgements of character independent of race, rather than a system where the software can’t consider that factor and we reward the loan officers who evaluate the character of applicants of color more negatively to compensate, or the bank executives who choose not to place branches in communities of color, and so on. Not only does the latter encourage a kind of wink-and-nod racism, but when banks optimize profits via subtle discrimination rather than explicit consideration of the numbers, one ends up creating a far higher barrier to minorities getting loans than a slight tick up in predicted default rate. If we don’t want to use features like the applicant’s race in decisions like loan offers, college acceptance, etc., we need to affirmatively acknowledge these correlations exist and ensure we don’t implement incentives to be subtly racist, e.g., evaluate a loan officer’s performance relative to the (all factors included) predicted default rate so we don’t implicitly reward loan officers and bank managers with biases against people of color (which itself imposes a barrier to minority loan officers).
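Here is a minimal sketch of that last incentive fix, with hypothetical field names and made-up numbers: score each officer against the default rate the all-factors-included model predicted for the loans they approved, so there is nothing to gain from quietly screening out applicants the model has already priced.

```python
# Sketch of risk-adjusted loan-officer scoring; data and field names are invented.
from statistics import mean

def officer_score(loans):
    """loans: dicts with 'predicted_default' (model probability, all factors
    included) and 'defaulted' (0 or 1 outcome). Lower is better; 0 means the
    portfolio defaulted exactly as often as the model expected."""
    return mean(loan["defaulted"] - loan["predicted_default"] for loan in loans)

portfolio = [
    {"predicted_default": 0.10, "defaulted": 0},
    {"predicted_default": 0.30, "defaulted": 1},
    {"predicted_default": 0.20, "defaulted": 0},
]
print(round(officer_score(portfolio), 3))  # 0.133: slightly worse than the model expected
```

Under a raw default-rate or raw profit target, by contrast, the easiest way for an officer to look good is to avoid whole groups of statistically riskier applicants; scoring against the model’s own prediction removes that incentive.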

In short, don’t let the shareholders and executives get away with passing the moral buck by saying ‘Ohh no, we don’t want to consider factors like race when offering loans’ and then turning around and using total profits as the incentive to ensure their employees do the discrimination for them. It may feel uncomfortable to openly acknowledge such correlates, but not only is doing so necessary to trace out the social causes of these ills, the other option is continued incentives for covert racism, especially the use of subtle social cues of being the ‘right sort’ to identify likely success, and that is what perpetuates the cycle.

 

A.I. ‘Bias’ Doesn’t Mean What Journalists Say it Means

In Florida, a criminal sentencing algorithm called COMPAS looks at many pieces of data about a criminal and computes the probability that they will commit new crimes. Judges use these risk scores in criminal sentencing and parole hearings to determine whether the offender should be kept in jail or released.

The Effect Of Self-Driving Cars On Schooling

In hindsight it often turns out that the biggest effect of a new technology is very different from what people imagined beforehand. I suggest that this may well be the case for self-driving cars.

Sure, the frequently talked-about effects like less time wasted in commutes or even the elimination of personal car ownership are nice, but I think self-driving cars might have an even larger effect by eliminating the constraint of proximity on schooling and socialization for children.

While adults often purchase homes quite far from their workplaces, proximity is a huge constraint on which schools students attend. In a few metropolises with extensive public transport systems it’s possible for older children to travel to distant schools (and, consequently, these cities often have more extensive school choice), but in most of the United States busing is the only practical means of transporting children whose parents can’t drive them to school. While buses need not take children to a nearby school, they are practically limited by the need to pick children up in a compact geographic area. A bus might be able to drive from downtown Chicago to a school in a suburb on the north side of the city, but you couldn’t, practically, bus students to their school of choice anywhere in the metropolitan area. Even in cases where busing takes students to better schools in remote areas, attending a school far from home has serious costs. How can you collaborate with classmates, play with school friends, attend after-school activities or otherwise integrate into the school peer group without a parent to drive you?

This all changes with self-driving cars. Suddenly proximity poses far less of a barrier to schooling and friendship. By itself this doesn’t guarantee change but it creates an opportunity to create a school system that is based on specialization and differing programs rather than geographic region.

Of course, we aren’t likely to see suburban schools opening their doors to inner-city kids at the outset. Everyone wants the best for their children, and education, at least at the high end, is a highly rivalrous good (it doesn’t really matter how well a kid scores objectively on the SAT, only that he scores better than the other kids). However, self-driving cars open up a whole world of possibility for specialty schools catering to students who excel at math and science, who have a particular interest in theater or music, or who need special assistance. Since such schools benefit wealthy, influential parents, they will be created and, by their very nature, be open to applicants from a wide geographic area.

No, this won’t fix the problem of poor educational outcomes in underprivileged areas, but it will offer a way out for kids who are particularly gifted or interested in certain areas. This might be the best that we can hope for if, as I suspect, who your classmates are matters more than good technology or even who your teachers are.

I should probably give credit to this interesting point, which suggests that school vouchers aren’t making schools better because they don’t result in school closures, for inspiring this post (and because I think it’s an insightful point).