Thoughts on rationalism and the rationalist community from a skeptical perspective. The author rejects rationality in the sense that he believes it isn't a logically coherent concept, that the larger rationalist community is insufficiently critical of its beliefs, and that ELIEZER YUDKOWSKY IS NOT THE TRUE CALIPH.
Or Using Smartphones and Smart Speakers to Solve All Violent Crime
We are at the point, or very near it, where our technology could virtually eliminate unsolved violent crime. For instance, suppose we all constantly captured audio with our smart devices and uploaded it to the cloud. Our devices could then discard the uploaded buffer when we later log in and fail to report an emergency.1 Unlike biological memory, these recordings could be used reliably in court, and they would be captured even by murder victims.
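The dead-man's-switch logic described above could be sketched roughly as follows. This is a hypothetical illustration, not any real device's API; the class, method names, and the 48-hour deadline are all assumptions made for the example.

```python
import collections
import time

class AudioBuffer:
    """Sketch of a retained audio buffer that is discarded on check-in.

    If the owner fails to check in before the deadline, the buffer is
    flagged for preservation (e.g., as potential evidence)."""

    def __init__(self, retention_chunks=1000, checkin_deadline=48 * 3600):
        self.chunks = collections.deque(maxlen=retention_chunks)
        self.checkin_deadline = checkin_deadline  # seconds
        self.last_checkin = time.time()

    def record(self, chunk: bytes):
        # Oldest audio silently falls off the end of the deque.
        self.chunks.append(chunk)

    def check_in(self, emergency=False):
        # Owner logs in; absent an emergency, the old buffer is discarded.
        self.last_checkin = time.time()
        if not emergency:
            self.chunks.clear()

    def should_preserve(self, now=None) -> bool:
        # True if the owner failed to check in before the deadline.
        now = time.time() if now is None else now
        return now - self.last_checkin > self.checkin_deadline
```

The `should_preserve` check is what makes the scheme work: ordinary days leave no record behind, while a vanished owner leaves the buffer intact.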
Why don’t we use this kind of tech to take a bite out of crime? For the moment it might still strain our bandwidth and storage resources, but 5G and ever-cheaper storage make that a temporary obstacle; besides, even people who can already afford those resources aren’t so inclined. One might think it’s out of fear of a technical loss of privacy: what if Amazon, Google, or some hacker figures out how to access our buffered audio?
But that’s not really a convincing worry, since it’s pretty easy to protect the buffered audio better than our devices themselves are protected. After all, anyone who can hack our cell phones and smart speakers can already listen in directly, and we could split the secrets needed to decrypt the audio among multiple big tech companies. The real problem is that we would be creating a record that can be subpoenaed and used against us or our intimates without our permission.
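Splitting the decryption secret among several companies can be done with very simple cryptography. Here is a minimal sketch of an XOR-based n-of-n split: every share is required to recover the key, and any subset short of all of them is indistinguishable from random noise. (A real deployment would more likely use a threshold scheme such as Shamir's secret sharing; the function names here are my own.)

```python
import secrets

def split_key(key: bytes, parties: int) -> list:
    """Split a key into `parties` shares; all are needed to recover it."""
    # All but one share are uniformly random...
    shares = [secrets.token_bytes(len(key)) for _ in range(parties - 1)]
    # ...and the final share is the key XORed with all of them.
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def recover_key(shares: list) -> bytes:
    """XOR all shares back together to reconstruct the key."""
    key = bytes(len(shares[0]))
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key
```

So a subpoena (or hack) served on any one company, or any proper subset of them, yields nothing; only the cooperation of every key-holder reveals the audio.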
What we need to solve this problem is a digital legal analog of biological memory: a class of digital records that, like memories, need not be produced if their creator chooses not to produce them. Of course, it’s actually a bit more subtle, because we can all be compelled to testify; but when we do, we not only retain our 5th Amendment protections, we can also simply lie or evade. Together these prevent the fishing expeditions that a digital record would allow (e.g., “I’m sure my husband did something unsavory during the last 48 hours; let’s subpoena his audio record so we can use it against him in the divorce”).
One possibility is to use such records only as a supercharger for testimony (unless the individual who created them has died). In other words, the only access to the data would be to let its creator review the tape and describe what happened in a deposition, which a third party (a special master?) would then check for perjury and accuracy against the actual tape. Maybe that’s not the best solution, but we need something that lets us treat our digital memories like our organic ones with respect to our control over their revelation.
One might naturally worry that friends and family members (who commit a great deal of violent crime) could scheme to impersonate you and clear the incriminating audio. But if we always keep a buffer long enough that a failure to report us missing during that window would itself be suspicious, we can at least minimize the risk. ↩
Tyler Cowen provides a great analysis of one of the generic calls for regulating big data (and Facebook in particular). Put this together with his previous post pointing out that it would cost each of us roughly $80/year to use Facebook on a paid basis.1 Taken together, they make a compelling case that there is no appetite in the US for serious data-privacy laws and that whatever laws we do get will probably do more harm than good.
To expand on Cowen’s point a little, let’s seriously consider for a moment what a world would look like in which the law granted individuals broad rights to control how their information was kept and used. That would be a world where it would suddenly be very hard to conduct a little poll on your blog. Scott Alexander came up with some interesting hypotheses regarding brain function and transgender individuals by asking his readers to fill out a survey. But doing that survey meant collecting personal and medical information about his readers (their gender identification, age, other mental health diagnoses) and storing it for analysis. He certainly wouldn’t have bothered to do any such thing if he had been required to document regulatory compliance, include a mechanism for individuals to request that their data be removed, or navigate complex consent and disclosure rules (now you’ve got to store emails and passwords, making things worse, and you risk liability if you become unable to delete info). And what about the concerned parent afraid the children in her town are getting sick too frequently? Will it now be so difficult for her to post a survey that we won’t discover the presence of environmental carcinogens?
One is tempted to respond that these cases are obviously different: these aren’t people using big data to track individuals but people choosing to share non-personally-identifiable data on a survey. But how can we put that into a law and make it so obvious that bloggers don’t feel any need to consult attorneys before running a survey?
One might try to hang one’s hat on the fact that the surveys I described don’t record your email address or name.2 However, if you don’t want repeated voting to be totally trivial, that means recording an IP address. With enough questions you’ll end up deanonymizing everyone anyway, and there is always a risk (oops, turns out there is only one 45-year-old Broglida). On the other hand, if it’s OK so long as you don’t deliberately request real-world identifying information, the regulation is toothless: Google doesn’t really care what your name is; they just want your age, politics, click history, etc.
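The deanonymization worry is easy to demonstrate concretely. The sketch below uses made-up survey rows (not real data) to show how a few combined answers can single out one respondent even when no name or email was ever collected:

```python
# Hypothetical survey responses; no names or emails were collected.
survey = [
    {"age": 45, "state": "FL", "gender": "F", "diagnosis": "none"},
    {"age": 45, "state": "FL", "gender": "M", "diagnosis": "ADHD"},
    {"age": 32, "state": "FL", "gender": "F", "diagnosis": "none"},
    {"age": 32, "state": "GA", "gender": "F", "diagnosis": "none"},
]

def matching_rows(rows, **attrs):
    """Return the survey rows consistent with the given attributes."""
    return [r for r in rows if all(r[k] == v for k, v in attrs.items())]

# One attribute narrows the pool only a little...
broad = matching_rows(survey, state="FL")            # 3 of 4 rows
# ...but a few combined attributes pick out exactly one person,
# exposing everything else they answered (here, a diagnosis).
unique = matching_rows(survey, age=45, state="FL", gender="M")
```

If an attacker already knows someone's age, state, and gender from public sources, the "anonymous" survey hands them that person's remaining answers.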
Well, maybe the law should cover only passively collected data. That’s damn hard to define already (why is a click on an AJAX link in a form different from a click on a link to a story?) and risks making normal HTTP server logs illegal. Besides, it’s a huge benefit to consumers that startups are able to see which design or UI visitors prefer. Checking whether users prefer a new theme or new video controls (say, by serving it to 50% of them and seeing whether they spend more time on the site) shouldn’t require looping in corporate counsel, or we make innovation and improvement hugely expensive. Moreover, users with special needs and other niche interests are likely to suffer particularly if there is no low-cost, hassle-free way of trying out alternate page versions and evaluating user response.
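The 50/50 split test described above is about as simple as data collection gets, which is why it would be a shame to bury it in compliance work. A minimal sketch (function names and numbers are illustrative, not any real analytics API): hash an anonymous visitor id to assign a variant deterministically, then compare average time on site.

```python
import hashlib
from statistics import mean

def assign_variant(visitor_id: str) -> str:
    """Deterministically assign a visitor to one of two variants."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return "new_theme" if int(digest, 16) % 2 == 0 else "control"

def average_time_on_site(times_by_variant: dict) -> dict:
    """Map variant name -> mean seconds on site for that variant."""
    return {variant: mean(times) for variant, times in times_by_variant.items()}
```

Hashing the visitor id (rather than storing a random assignment) means the same visitor always sees the same variant without the site keeping any per-user record at all.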
Ultimately, we don’t really want the world that we would get by regulating data ownership. It’s not the world in which Facebook doesn’t have scary power. It’s the world where companies like Facebook have more scary power, because they have the resources to hire legal counsel and lobby for regulatory changes that keep their practices technically legal, while startups and potential competitors don’t have those advantages. Not only do we not want the world we would get by passing data-ownership regulations; I don’t think most people even have a clear idea of why it would be a good thing. People just have a vague feeling of discomfort with companies like Facebook, not a clear conception of a particular harm to avoid, and that’s a disastrous situation for regulation.
Having said this, I do fear the power of companies like Facebook (and even governmental entities) to blackmail individuals based on the information they are able to uncover with big data. However, I believe the best response to this is more openness and, ideally, an open, standards-based social network that doesn’t leave everything in the hands of one company. Ultimately, that will mean less privacy and less protection for our data, but that’s why specifying the harm you fear really matters. If the problem is, as I fear, the unique leverage that being the sole possessor of this kind of data gives Facebook and/or governments, then the answer is to make sure they aren’t the sole possessor of anything.
Here is her NYT piece. I’ll go through her four main solutions, breaking up, paragraph by paragraph, what is one unified discussion: “What would a genuine legislative remedy look like? First, personalized data collection would be allowed only through opt-in mechanisms that were clear, concise and transparent.”
Now, while a subscription-funded Facebook would surely be much, much cheaper, I think Cowen is completely correct when he points out that any fee-based system would hugely reduce the user base and therefore the value of using Facebook. Indeed, almost all of the benefit Facebook provides over any random blogging platform is simply that everyone is on it. Personally, I favor an open social graph, but that is even less protective of personal information. ↩
Even that is pretty limiting. For instance, it prevents running any survey that wants to be able to do a follow-up or simply email people their individual results. ↩