In hindsight, it often turns out that the biggest effect of a new technology is very different from what people imagined beforehand. I suggest this may well be the case for self-driving cars.
Sure, the frequently discussed effects, like less time wasted in commutes or even the elimination of personal car ownership, are nice, but I think self-driving cars might have an even larger effect: eliminating the constraint of proximity in schooling and socialization for children.
While adults often purchase homes quite far from their workplaces, proximity is a huge constraint on which schools students attend. In a few metropolises with extensive public transport systems it's possible for older children to travel to distant schools (and, consequently, these cities often have more extensive school choice), but in most of the United States busing is the only practical way to transport children whose parents can't drive them to school. While buses need not take children to a nearby school, they are practically limited by the need to pick children up in a compact geographic area. A bus might be able to drive from downtown Chicago to a school in a suburb on the north side of the city, but you couldn't practically bus students to their school of choice anywhere in the metropolitan area. Even where busing takes students to better schools in remote areas, attending a school far from home has serious costs. How can you collaborate with classmates, play with school friends, attend after-school activities, or otherwise integrate into the school peer group without a parent to drive you?
This all changes with self-driving cars. Suddenly proximity poses far less of a barrier to schooling and friendship. By itself this doesn't guarantee change, but it creates an opportunity to build a school system based on specialization and differing programs rather than geographic region.
Of course, we aren't likely to see suburban schools opening their doors to inner-city kids at the outset. Everyone wants the best for their children, and education, at least at the high end, is a highly rivalrous good (it doesn't really matter how well a kid scores on the SAT in absolute terms, only that he scores better than the other kids). However, self-driving cars open up a whole world of possibility for specialty schools catering to students who excel at math and science, who have a particular interest in theater or music, or who need special assistance. Since such schools benefit wealthy, influential parents, they will be created and, by their very nature, will be open to applicants from a wide geographic area.
No, this won't fix the problem of poor educational outcomes in underprivileged areas, but it will offer a way out for kids who are particularly gifted in or drawn to certain areas. This might be the best we can hope for if, as I suspect, who your classmates are matters more than good technology or even who your teachers are.
I should probably credit this interesting point, suggesting that school vouchers aren't making schools better because they don't result in school closures, for inspiring this post (and because I think it's an insightful point).
An Ineffective Strategy With Worrying Implications
Wait, what? We are launching a DDOS attack against North Korea. Could we do anything more stupid? North Korea doesn't use the internet enough for this to seriously inconvenience the nation, while at the same time we legitimize the use of cyber attacks against civilian infrastructure as a way to settle international disputes. Dear god, this is a bad idea!
As the US reportedly conducts a denial-of-service attack against North Korea’s access to the Internet, the regime of Kim Jong Un has gained another connection to help a select few North Koreans stay connected to the wider world, thanks to a Russian telecommunications provider.
Machine Learning, Sensitive Information and Prenatal Hormones
So there's been some media attention recently to this study, which found it was possible to predict sexual orientation from facial photographs with 91% accuracy for men and 83% for women. Sadly, everyone is focusing on the misleading idea that we can somehow use this algorithm to decloak who is gay and who isn't, rather than the really interesting fact that this is suggestive of some kind of hormonal or developmental cause of homosexuality.
That isn't what the accuracy figure means. Rather, given 5 pictures of a gay man and 5 pictures of a straight man, 91% of the time the algorithm is able to correctly pick out the straight man. Those of us who remember basic statistics, with all those questions about false positive rates, should realize that, given the low rate of homosexuality in the population, this algorithm doesn't actually give very strong evidence of homosexuality at all. Indeed, one would expect that, if turned loose on a social network, the vast majority of individuals judged to be gay would be false positives. In combination with learning based on other signals, like your friends on social media, one could potentially do a much better job, but at the moment there isn't much real danger this tech could be used by anti-gay governments to identify and persecute individuals.
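To make the false-positive point concrete, here is a rough Bayes calculation. The sensitivity, specificity, and base-rate numbers are illustrative assumptions of mine, not figures from the paper (which reports pairwise accuracy, a different quantity):

```python
# Base-rate sketch: even a seemingly accurate detector flags mostly
# straight people when the trait it detects is rare.
sensitivity = 0.91  # assumed P(flagged | gay), for illustration only
specificity = 0.91  # assumed P(not flagged | straight), for illustration only
base_rate = 0.05    # rough assumed share of the population that is gay

true_pos = sensitivity * base_rate            # gay and flagged
false_pos = (1 - specificity) * (1 - base_rate)  # straight but flagged

# Positive predictive value: P(gay | flagged)
ppv = true_pos / (true_pos + false_pos)
print(f"P(gay | flagged) = {ppv:.2f}")  # about 0.35
```

Under these assumptions roughly two thirds of the people the detector flags would be straight, which is the sense in which "91% accuracy" gives much weaker evidence than it sounds.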
Also, I wish the media would be more careful about their terms. This kind of algorithm doesn't reveal private information; it reveals sensitive information that was inadvertently exposed publicly.
However, what I found particularly interesting was the claim in the paper that they were able to achieve a similar level of accuracy for photographs taken in a neutral setting. This, along with other aspects of the algorithm, strongly suggests the algorithm isn't just picking up on some gay/straight difference in what kinds of poses people find appealing. The researchers also generated a heat map of the parts of the image the algorithm focuses on, and while some of these do suggest grooming-based information about hair, eyebrows, or beard plays some role, the strong role played by the nose, cheeks, and corners of the mouth suggests that relatively immutable characteristics are pretty helpful in predicting orientation.
The authors acknowledge that personality has been found to affect facial features in the long run, so this is far from conclusive. I'd also add my own qualification that some selection effect might play a role, e.g., if less attractive gay men are less willing to post facial close-ups on dating sites or Facebook, the algorithm could be picking up on that. However, it is at least interestingly suggestive evidence for the prenatal hormone theory (or some other developmental theory) of homosexuality.
This is an interesting piece, but I couldn't disagree more with the title or the author's obvious feeling that there must be a cynical explanation for techies' distrust of government regulation.
Silicon Valley types are simply classical pragmatic libertarians. They aren't Ayn Rand-quoting objectivists who believe government intervention is unacceptable in principle. Rather, like most academic economists, they simply tend to feel that well-intentioned government regulation often has serious harmful side effects and isn't particularly likely to accomplish the desired goals.
I think this kind of skepticism flows naturally from a certain kind of quantitative, results-oriented mindset, and I expect you would find the same beliefs (to varying degrees) among the academic physicists, civil engineers, and others who share the same educational background and quantitative inclination as Silicon Valley techies. I'm sure the particular history of poorly understood tech regulation, like the original crypto wars in the 90s, plays a role, but I suspect it just amplified existing tendencies.
But by the 1990s, with the advent of the World Wide Web and the beginning of the tech industry’s march to the apex of the world’s economy, another Silicon Valley political narrative took root: techies as unapologetic libertarians, for whom the best government is a nearly nonexistent one.
So doesn't this suggest that you could just hang around on Bitcoin forums/chats/etc. and use that info to arbitrage your way into substantial sums, by using your knowledge of the likely resolution of Bitcoin forking/update discussions to predict prices, an edge that would already have been arbitraged away in developed markets?
I mean, I suppose there is the limitation on leverage. You can borrow for stock trades using your stocks as collateral, but I don't believe you can yet do the same with Bitcoin. Still, it seems like a good deal. Is there any other reason this won't work?
The recent (highly damaging) Wcry ransomware worm is derived from NSA code recently disclosed by hackers. This has led Microsoft (and others) to call on the government to disclose security vulnerabilities so they can be fixed, rather than stockpiling them for use in offensive hacking operations. However, I think the lesson we should learn from this incident is exactly the opposite.
This debate about how to balance the NSA's two responsibilities, protecting US computer systems from infiltration and gathering intelligence from foreign systems, is hardly new (and Bruce Schneier's take on it is worth reading). The US government is very much aware of this tension and has a special process, the vulnerabilities equities process (VEP), to decide whether or not to disclose a particular vulnerability. Microsoft is arguing that recent events illustrate just how much harm is caused by stockpiled vulnerabilities and, analogizing this incident to the use of stolen conventional weaponry, suggesting the government needs to take responsibility by always reporting vulnerabilities to vendors so they can be patched.
However, if anything, this incident illustrates the limitations of reporting vulnerabilities to vendors. Rather than being 0-days, the vulnerabilities used by the Wcry worm were already patched a month before the publication of the NSA exploits, and the circumstances of the patch suggest that the NSA, aware that it had been compromised, reported these vulnerabilities to Microsoft. Thus, rather than illustrating the dangers of stockpiling vulnerabilities, this incident reveals the limitations of reporting them. Even once vulnerabilities are disclosed, the difficulty of convincing users to update and the lack of support for older operating systems leave a great many users at risk. In contrast, once a patch is released (or even upon disclosure to a vendor), the vulnerability can no longer be used to collect intelligence from security-aware targets, e.g., classified systems belonging to foreign governments.
It is difficult not to interpret Microsoft's comments on this issue as an attempt to divert blame. After all, it is their code that is vulnerable, and it was their choice to cease support for Windows XP. To be fair, though, this is not the first time they have taken such a position publicly. Back in February Microsoft called for a "Digital Geneva Convention" under which governments would forswear "cyber-attacks that target the private sector or critical infrastructure or the use of hacking to steal intellectual property" and commit to reporting vulnerabilities rather than stockpiling them.
While there may be an important role for international agreement in this field, Microsoft's proposal seems hopelessly naive. There are good reasons why there has never been an effective international agreement barring spying, and they all apply here as well. Signatories to such a treaty have every incentive to loudly affirm it and then secretly continue to stockpile vulnerabilities and engage in offensive hacking. At first glance one might think we could at least leave the private sector out of this, but that ignores the fact that many technologies are dual purpose1, that the best way to access government secrets will frequently be to compromise email accounts hosted by private companies, and that government actors can put privately held big data to their own uses. Indeed, the second a government thought such a treaty was being followed, it would move all its top secret correspondence to (an in-country version of) something like Gmail.
Successful international agreements forswearing certain weapons or behaviors need to be verifiable and not (too) contrary to the interests of the great powers. The continued push to ban land mines is unlikely to succeed as long as mines are seen as important to many powerful countries' military strategies (including those of a majority of permanent Security Council members)2, and it is hard to believe that genuinely giving up stockpiled vulnerabilities and offensive hacking would be in the interests of Russia or China. Moreover, if a treaty isn't verifiable, there is no reason for countries not to defect and secretly fail to comply. While Microsoft proposes some kind of international cooperative effort to assign responsibility for attacks, it is hard to see how this wouldn't merely encourage false flag operations designed to trigger condemnation and sanctions against rivals. It is telling that the one aspect of such a treaty that would be verifiable, the provision banning theft of IP (at least for use by private companies rather than for national security purposes), is the only aspect Microsoft points to as actually having been the subject of a treaty (a 2015 US-China agreement).
While it isn't uncommon for idealistic individuals and non-profit NGOs to act as if treaties can magic away the realities of state interests and real-world incentives, I have trouble believing Microsoft is this naive. I could very well be wrong, but it's hard for me not to think their position is more about shifting blame for computer security problems than a thoughtful weighing of costs and benefits.
Of course, none of this is to say there isn't room for improvement in how the government handles computer security vulnerabilities. For instance, I'm inclined to agree with most of the reforms mentioned here. As for the broader question of whether we should tip the scales further toward reporting vulnerabilities instead of stockpiling them, I think that depends heavily on how frequently the vulnerabilities we find are the same ones found by our rivals, and how quickly our intelligence services can discover which vulnerabilities are known to our rivals. As such information is undoubtedly classified (for good reasons), it seems the best we can do is make sure Congress exercises substantial oversight and use the political process to encourage presidents to install leadership at the NSA who understand these issues.
Facial recognition technology can be used to identify spies, code advertisers use to surreptitiously identify and track customers is ideal for covert surveillance, and the software the NSA uses to monitor its huge data streams was built by private sector companies using much of the same technology used in various kinds of search engines. ↩
A less idealistic treaty that recognized the role of land mines in major military operations probably could have done more to safeguard civilians by instead banning persistent mines. As such a ban would actually favor the interests of the great powers (persistent mines are easier for low-tech actors to make), they would have helped enforce it rather than providing cover for irresponsible use of land mines. ↩