A Norm Against Partisan Smearing?

Reading Reich’s book (Who We Are and How We Got Here) really drives home to me just how tempting it is to collapse into tribal cheering (e.g., cheering on your genes/genetic history/etc. as the best) and how important our norms against racism are in limiting this.

It makes me wonder if we couldn’t develop similarly strong norms about not cheering on your political/social tribe in the same manner. It’s a more delicate situation since we need to preserve the ability to disagree and offer useful criticism. However, it still seems to me that we might be able to cultivate a norm which strongly disapproved of trying to make the other side look bad or implying they are improperly motivated/biased.

I mean, of course, we won’t actually get rid of hypocrisy or self-serving beliefs, but if alleging bad faith by other ideologies required the same kind of extreme caution we demand for claims about racial differences, it might make a big difference.

Failing Business 101

The Idiotic Idea Of Apple Competing With Intel

There is a rumor going around that Apple may try to replace the Intel chips in its computers with its own in-house chips. Now, it’s certainly conceivable that Apple will offer a cheap low-end laptop based on the chips it uses for the iPhone and iPad. Indeed, that’s probably a great opportunity. However, the idea that Apple might switch completely to its own in-house silicon is such a bad business idea that I have to assume it won’t try.

I mean, suppose for a moment that Apple thought it could outdo Intel and AMD in designing high-end processors. What should Apple do? Well, it could design processors in-house just for its own computers, limiting its potential profits and assuming substantial risk if it turns out to be wrong. Alternatively, it could spin off a new processor design company (perhaps with some kind of cooperation agreement) which could sell its processors to all interested parties while limiting Apple’s risk exposure. Now, the latter option is clearly preferable, and since it seems pretty implausible that Intel and AMD are so badly run as to make such a venture attractive, it would be even less attractive to try to compete with Intel in-house.

Now why doesn’t this same argument apply to Apple’s choice to design its own ARM chips for the iPhone? First, Apple was able to buy state-of-the-art IP to start from, which wouldn’t be available if it were designing a high-performance desktop/laptop CPU. Second, because of the high degree of integration in mobile devices there were real synergies Apple could realize by designing the chip and phone in combination, e.g., implementing custom hardware to support various iPhone functions. For desktops and high-end laptops there are no such pressures. There is plenty of space to put any dedicated hardware in another chip, and there are no special Apple-specific features that would be particularly valuable to implement in the CPU.

On the other hand, a cheap(er) laptop that could run iPad apps could be a great deal. Just don’t expect them to replace Intel chips in high-end systems.

Apple is actively working on Macs that replace Intel CPUs

A new Bloomberg report claims Apple is working on its own CPUs for the Mac, with the intent to ultimately replace the Intel chips in its computers with those it designs in-house. According to Bloomberg’s sources, the project (which is internally called Kalamata) is in the very early planning stages, but it has been approved by executives at the company.

Team Rittenhouse

Or Why The Heroes In Timeless Are Idiots

I just finished the first season of Timeless and I can’t help but feel the characters, and especially Lucy, are being irrational. (Some spoilers, but the only serious spoiler is in bullet 3 at the very end.)

What they know about Rittenhouse is that every generation is initially totally horrified and appalled at the thought of it and wishes it were destroyed, yet eventually comes to see it as important and necessary once they have had sufficient time to think and evaluate the evidence. Somehow this gives our heroes no pause as to whether they themselves might be in the wrong.

Indeed, the only defensible reason to be an ardent supporter of a democratic system is that the evidence of the past few centuries makes a super strong case that democracies are good places to live and produce far more utility than dictatorships. But if you found out that the US had secretly been an oligarchy the whole time, the evidence would actually point the other way and suggest skepticism of non-oligarchical rule (or, well, influence).

The show writers try hard to make sure we see Rittenhouse as evil by making it hereditary and having characters talk about good strong bloodlines, but once you’ve decided on a dictatorial system (or, I guess, technically a very small oligarchy) hereditary rule is probably not merely desirable but a necessity, as otherwise each subsequent set of rulers will conflict with the children of the last. By having 50+ members they can smooth out the ups and downs of monarchical hereditary rule and even work to expel the less able members, creating positive selection pressure. As for being from ‘good families’, well, every government needs a mythos to legitimize itself and inspire its members, even a small one.

As for the supposed bad acts that Rittenhouse is said to have committed in the show, I have three comments.

  1. Those bad acts actually pale beside the genuine injustices and horrors the legitimate democratic government of our country committed at the same times. I mean, WWI was basically a pointless slaughter of over 100,000 US soldiers, and then there are things like Tuskegee, the Japanese internments, etc., etc.

    Even the recent ‘bad acts’, like killing Flynn’s family in the name of some kind of national necessity (or was it just impunity by some elements?), don’t seem out of line compared to the (IMO often justified) use of drone strikes to protect our national interest, even given the occasional civilian casualties. If that’s the price we have to pay to get the kind of America we have, it doesn’t seem particularly large.

    Especially when you compare that to the acts taken by the supposedly misguided but ultimately forgivable/noble Flynn in his campaign against them, including attempts to assist Nazis, committing massive terrorism with huge death tolls, etc.

  2. Rittenhouse (as I understand the extent of its power) is essentially analogous to the UK House of Lords before the reforms which made almost all the positions non-hereditary and let the Commons pass laws without approval from the Lords after a two-, then one-year wait. That always seemed like a pretty good system to me: the focus and primary power reflect the people, but you essentially (modulo a bit of self-interest) also have to convince a bunch of rando professional legislators who don’t have to please constituents.

  3. But even if Rittenhouse isn’t great at the moment, at the end of the season Lucy has cleaned out all the old power structure and is basically being offered the keys to the kingdom. It’s totally irrational for her not to just accept, and try to use, that power to make the world a better place and turn Rittenhouse into a force for good.

NRA Conferences Reduce Gun Injuries?

Misleading Reporting and Dubious Statistics

So the following letter is being widely reported online as if it were evidence for the importance of gun control. I’m skeptical of the results, as I detail in the next post, but even if one takes the results at face value, the letter is pretty misleading and the media reporting is nigh fraudulent.

In particular, if one digs into the appendix to the letter one finds the following statement: “many of the firearm injuries observed in the commercially insured patient population may reflect non-crime-related firearm injuries.” This is unsurprising: using health insurance data means you are only looking at patients rich enough to be insured and willing to report their injury as firearms-related, basically excluding anyone injured in the commission of a crime or who isn’t legally allowed to use a gun. Consistent with this, the authors also analyzed differences in crime rates and found no effect.

So even on its face this study would merely show that people who choose to use firearms are sometimes injured in that use. That might be a good reason to stay away from firearms yourself, but it is no additional reason for regulation, as is being suggested in the media.

Moreover, if the effect is really just about safety at gun ranges, it’s unclear whether the effect comes from lower use of such ranges during the conference or from the NRA conference encouraging greater care and best practices.

Reasons To Suspect The Underlying Study

Also, I’m pretty skeptical of the underlying claim in the study. The size of the effect claimed is huge relative to the number of people who attend an NRA conference. I mean, about 40% of US households are gun owners, but only ~80,000 people attend nationwide NRA conventions: ~.025% of the US population, or ~.0625% of US gun owners. Thus, for this statistic to be true because NRA members are busy at the conference, we would have to believe NRA conference attendees were a whopping 320 times more likely to inflict a gun-related injury than the average gun owner.
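To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The population and attendance figures are the rough numbers above; the ~20% relative decline in injuries during conventions is my assumption about the study’s headline effect size, not a figure quoted in the letter itself.

```python
# Back-of-the-envelope check of the 320x figure. The ~20% relative
# decline is an assumed effect size, not a number from the letter.
us_population = 320e6        # rough US population
attendees = 80_000           # approximate nationwide NRA convention attendance
gun_owner_share = 0.40       # household gun-ownership rate, applied loosely
                             # to individuals as the post does

gun_owners = us_population * gun_owner_share          # ~128M
share_of_population = attendees / us_population       # ~0.025%
share_of_gun_owners = attendees / gun_owners          # ~0.0625%

assumed_decline = 0.20  # assumed relative drop in injuries during conventions
required_risk_multiple = assumed_decline / share_of_gun_owners

print(f"{share_of_population:.4%} of population, "
      f"{share_of_gun_owners:.4%} of gun owners, "
      f"{required_risk_multiple:.0f}x the average owner's injury rate")
```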

Now if we restrict our attention to homicides this is almost surely not the case. Attending an NRA convention requires a certain level of financial wealth and political engagement, which suggests membership in a socioeconomic class less likely to commit gun violence than the average gun owner. And indeed, the study finds no effect in terms of gun-related crime. Even if we look to non-homicides, gun deaths from suicides far outweigh those from accidents, and I doubt those who go to an NRA convention are really that much more suicidally inclined.

An alternative likely explanation is that the NRA schedules its conferences for certain times of the year when people are likely to be able to attend, and we are merely seeing seasonal correlations masquerading as effects of the NRA conference (a factor they don’t control for). Also, as they ran a host of subgroup analyses and don’t report the results for census tracts and other possible subgroups, the possibility of p-hacking is quite real. Looking at the graph they provide, I’m not exactly overwhelmed.

The claim gets harder to believe when one considers the fact that people who attend NRA meetings almost surely don’t give up going to firing ranges during the meeting. Indeed, I would expect (though haven’t been able to verify) that there are any number of shooting-range expeditions during the conference, which would actually mean many attendees are more likely to handle a gun during that period.

Though, once one realizes that the data set under consideration covers only those who make insurance claims relating to gun-related injuries, the claim becomes slightly more plausible, but only at the cost of undermining its significance. Deaths and suicides are much less likely to produce insurance claims, and the policy implications aren’t very clear if all we are seeing is a reduction in people injured because of incorrect gun grips (see the MythBusters episode about this; such injuries can be quite serious).

Artificial Intelligence And The Structure Of Thought

Why Your Self-Driving Car Won't Cause Armageddon

In recent years a number of prominent individuals have raised concerns about our ability to control powerful AIs. The idea is that once we create truly human-level generally intelligent software, or AGI, computers will undergo an intelligence explosion and will be able to escape any constraints we place on them. This concern has perhaps been most thoroughly developed by Eliezer Yudkowsky.

Unlike the AI in bad science fiction, the concern isn’t that the AI will be evil or desire dominion the way humans do, but simply that it will be too good at whatever task we set it to perform. For instance, suppose Waymo builds an AI to run its fleet of self-driving cars. The AI’s task is to converse with passengers/app users and route its vehicles appropriately. Unlike more limited self-driving car software, this AI is programmed to learn the subtleties of human behavior so it can position a pool of cars in front of the stadium right before the game ends and helpfully show tourists the sights. On Yudkowsky’s vision, the engineers achieve this by coding in a reward function that the software works to maximize (or, equivalently, a penalty function it works to minimize). For instance, in this case the AI might be punished based on negative reviews/frustrated customers, deaths/damage from accidents involving its vehicles, travel delays, and customers who choose to use a competitor rather than Waymo. I’m already skeptical that (super)human AI would have anything identifiable as a global reward/utility function, but on Yudkowsky’s picture AGI is something like a universal optimizer set loose to do its best to achieve rewards.
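To make the setup concrete, here is a minimal sketch of the kind of global penalty function this picture imagines the engineers writing. The function name, inputs, and weights are all invented for illustration; nothing here reflects any actual Waymo system.

```python
# A toy version of the global penalty function Yudkowsky's picture
# posits. All names and weights are invented for illustration.
def fleet_penalty(bad_reviews: int, accident_harm: float,
                  delay_hours: float, lost_customers: int) -> float:
    """Weighted sum of everything the fleet AI is 'punished' for."""
    return (1.0 * bad_reviews
            + 1000.0 * accident_harm
            + 10.0 * delay_hours
            + 5.0 * lost_customers)

# The worry in a nutshell: a universal optimizer notices that a world
# with no riders at all scores fleet_penalty(0, 0, 0, 0) == 0.0,
# a 'solution' no engineer intended.
print(fleet_penalty(bad_reviews=3, accident_harm=0.1,
                    delay_hours=2.5, lost_customers=4))
```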

The concern is that the AI would eventually realize it could minimize its punishment by arranging for everyone to die in a global pandemic, since then there would be no bad reviews, lost customers or travel delays. Given the AI’s vast intelligence and massive data set, it would then hack into microbiology labs and manipulate the workers there into creating a civilization-ending plague. Moreover, no matter what kind of firewalls or limitations we try to place on the AI, as long as it can somehow interact with the external world it will find a way around these barriers. Since it’s devilishly difficult to specify any utility function without such undesirable solutions, Yudkowsky concludes that AGI poses a serious threat to the human species.

Rewards And Reflection

The essential mechanism at play in all of Yudkowsky’s apocalyptic scenarios is that the AI examines its own reward function, realizes that some radically different strategy would offer even greater rewards, and proceeds to surreptitiously work to realize this alternate strategy. Now, it’s only natural that a sufficiently advanced AI would have some degree of reflective access to its own design and internal deliberations. After all, it’s common for humans to reflect on our own goals and behaviors to help shape our future decisions; e.g., we might observe that if we continue to get bad grades we won’t get into the college we want, and as a result decide that we need to stop playing World of Warcraft.

At first blush it might seem obvious that realizing its rewards are given by a certain function would induce an AI to maximize that function. One might even be tempted to claim this is somehow part of the definition of what it means for an agent to have a utility function, but that’s trading on an ambiguity between two notions of reward.

The sense of reward which gives rise to the worries about unintended satisfaction is that of positive reinforcement. It’s the digital equivalent of giving someone cocaine. Of course, if you administer cocaine to someone every time they write a blog post, they will tend to write more blog posts. However, merely learning that cocaine causes a rewarding release of dopamine in the brain doesn’t cause people to go out and buy cocaine. Indeed, that knowledge could just as well have the exact opposite effect. Similarly, there is no reason to assume that merely because an AGI has a representation of its reward function it will try to reason out alternative ways to satisfy it. Indeed, indulging in anthropomorphizing for a moment, there is no reason to assume that an AGI will have any particular desire regarding rewards received by its future time-states, much less adopt a particular discount rate.

Of course, in the long run, if a software program were rewarded for analyzing its own reward function and finding unusual ways to activate it, then it could learn to do so, just as people who are rewarded with pleasurable drug experiences can learn to look for ways to short-circuit their reward system. However, if that behavior is punished, e.g., humans intervene and punish the software when it starts recommending public transit, then the system will learn to avoid short-circuiting its reward pathways, just as people can learn to avoid addictive drugs. This isn’t to say that there is no danger here: left alone, an AGI, just like a teen with access to cocaine, could easily learn harmful reward-seeking behavior. However, since the system doesn’t start in a state in which it applies its vast intelligence to figuring out ways to hack its reward function, the risk is far less severe.

Now, Yudkowsky might respond by saying he didn’t really mean the system’s reward function but its utility function. However, since we don’t tend to program machine learning algorithms by specifying the function they will ultimately maximize (or reflect on and try to maximize), it’s unclear why we would need to explicitly specify a utility function that doesn’t lead to unintended consequences. After all, Yudkowsky is the one trying to argue that AGI is likely to have these consequences, so merely restating the problem in a space that has no intrinsic relationship to how one would expect AGI to be constructed doesn’t do anything to advance his argument. For instance, I could point out that, phrased in terms of the locations of fundamental particles, it’s really hard to specify a program that excludes apocalyptic arrangements of matter, but that wouldn’t do anything to convince you that AIs risk causing such apocalypses, since such specifications have nothing to do with how we expect an AI to be programmed.

The Human Comparison

Ultimately, we have one example of a kind of general intelligence: the human brain. Thus, when evaluating claims about the dangers of AGI, one of the first things we should do is see whether the same story applies to our brains and, if not, whether there is any special reason to expect AGIs to be different.

Looking at the way humans behave, it’s striking how poorly Yudkowsky’s stories describe our behavior, even though evolution has shaped us in ways that make us far more dangerous than we should expect AGIs to be (we have self-preservation instincts, approximately coherent desires and beliefs, and are responsive to most aspects of the world rather than caring only about driving times or chess games). Time and time again we see that we follow heuristics and apply familiar mental strategies even when it’s clear that a different strategy would offer us greater activation of reward centers, greater reproductive opportunities, or any other plausible thing we are trying to optimize.

The fact that we don’t consciously try to optimize our reproductive success, and instead apply a forest of frameworks and heuristics that we follow even when they undermine our reproductive success, strongly suggests that an AGI will most likely function in a similarly heuristic, layered fashion. In other words, we shouldn’t expect intelligence to come as a result of some pure mathematical optimization but more as a layered cake of heuristic processes. Thus, when an AI responsible for routing cars reflects on its performance, it won’t see the pure mathematical question of how to minimize such-and-such function, any more than we see the pure mathematical question of how to cause dopamine to be released in this part of the brain or how to have more offspring. Rather, just as we break up the world into tasks like ‘make friends’ or ‘get respect from peers’, the AI will reflect on a world represented in terms of pieces like ‘route car from A to B’ or ‘minimize congestion in area D’ that bias it toward a certain kind of solution and away from plots like avoiding congestion by creating a killer plague.

This isn’t to say there aren’t concerns. Indeed, as I’ve remarked elsewhere, I’m much more concerned about schizophrenic AIs than I am about misaligned AIs, but that’s enough for this post.

Don’t Make Drug Companies Police Usage

Is this a ridiculous amount of opiates for a single small town to prescribe? Sure. But I find the idea that drug companies should be held to task for this, and thus implicitly the idea that they should have done something to supply fewer pills to these pharmacies, deeply troubling.

I mean, how would that work out? The drug companies are (rightly) legally barred from seeing patient records and deciding who does and doesn’t deserve prescriptions, so all they could do is cut off the receiving pharmacies. OK, so they could put pressure on the pharmacies to fill fewer prescriptions, but the pharmacies also don’t have patient records, so what that means is the pharmacies scrutinize you to see if you ‘look’ like someone who is abusing the prescription or like a ‘real’ patient. So basically being a minority, or otherwise not looking like what the pharmacist expects a real pain patient to look like, means you can’t get your medicine. Worse, the people scamming pills will be willing to use whatever tricks are necessary (faking pain, shaving their head, whatever) to elicit scripts, so it’s the legitimate users who are most likely to end up out in the cold.

While I also have reservations about the DEA intimidating doctors into not prescribing needed medicine, it is the government (which, I understand, is informed about the number of opiates being sold by various pharmacies) that should be investigating cases like this, not the drug maker. Personally, I think the solution isn’t, and never has been, controlling the supply; it’s providing sufficient resources, like methadone and buprenorphine maintenance, so people who find themselves hooked can live normal lives.

Drug companies submerged WV in opioids: One town of 3,000 got 21 million pills

Drug companies hosed tiny towns in West Virginia with a deluge of addictive and deadly opioid pills over the last decade, according to an ongoing investigation by the House Energy and Commerce Committee. For instance, drug companies collectively poured 20.8 million hydrocodone and oxycodone pills into the small city of Williamson, West Virginia, between 2006 and 2016, according to a set of letters the committee released Tuesday.

There Is No Conflict Theory

Or Some People Are Just Wrong

So usually I find Scott Alexander’s posts pretty illuminating, but while his recent post on Conflict vs. Mistake Theories raises lots of interesting questions, I think it fundamentally errs in trying to fit the type of extreme Marxist thinking he is describing into a framework of beliefs about the world and actions taken to advance those beliefs. While I think Scott appreciates this difficulty and attempts to wrestle with it, e.g., where he suggests the conflict-theory take is best exemplified by the Baffler article saying that public choice theory is racist, ultimately his devotion to applying norms of charity to the other side leads him astray.

It’s not that there aren’t people like the conflict theorist Scott posits. I know there are a number of radical university professors who think to themselves, “Given the oppressive political structure and the power held by the elite, the most effective way to bring about change isn’t to engage in rational argument but to bring political or even physical force to bear.” However, for the most part the people Scott is trying to describe aren’t just mistake theorists who happen to believe it’s intentional action by elites, rather than the difficulty of governing, that makes the world bad. No, such a theory would predict that conflict theorists retire back to their coffeehouses and perform cost-benefit calculations about the benefits of holding a particular protest or adopting a particular style of advocacy.

In other words, conflict theorists aren’t mistake theorists who hide their true colors so as not to give the elites free ammunition while engaging in the same kinds of considerations as mistake theorists behind the scenes. No, fundamentally, most of the behavior Scott is seeking to describe is about emotional responses, not a considered judgement that such emotional displays will best accomplish their ends.

Is The Foreign Corrupt Practices Act Appropriate?

Do We Really Respect A Nation's Sovereignty When We Decide How Their Laws Should Be Understood And Enforced?

I’m rarely one to agree with Trump and disagree with Tyler Cowen, but I’m inclined to think we should eliminate the Foreign Corrupt Practices Act rather than merely implement the minor changes he suggests. At a gut level I find the idea that we are imposing our norms about how law, public office, etc. should work on other countries unpalatable, and at a more cerebral level I feel that we shouldn’t put people in prison, or even fine companies, without a compelling reason to think it serves some important social good. My mind could be changed by substantial evidence that this law improves welfare in other countries, but there is no a priori reason to think it will reduce, rather than increase, corruption overseas (e.g., consider the game-theoretic effect of providing insurance to the bribe taker that the bribe giver can’t turn them in).

If the question were whether we should help overwhelmed countries who find their anti-corruption efforts foiled by American companies’ wealth and global reach, that would be a different matter, and I support assigning DOJ resources to assist worthy local corruption prosecutions. I’d even favor a law which allowed a foreign government to request prosecution in a US court for bribery by US companies when it felt its own courts weren’t capable of doing the job. However, this law doesn’t leave the question of prosecution and appropriate sanctions to the jurisdiction where the crime took place but substitutes our American sensibilities about the badness of bribery, and even about the role of laws regarding bribery, for local understanding.

One might object that this law is only triggered when the practice at hand is illegal under local law. That’s true, but not all laws mean the same thing. Imagine the UK had a law which imposed massive fines or a 10-year penalty on any UK citizen living in the States who intentionally ‘evaded’ local tax laws. Now it’s true that in the States we too regard many types of tax fraud as a big deal, but there is a societal acceptance (perhaps a bad one) that evading state sales tax by ordering products from out of state isn’t a big deal the way cheating on income tax might be, and any US enforcement of such laws would reflect that understanding, while UK enforcement of a hypothetical foreign tax-evasion law would not. Similar points could be made about laws with harsh penalties for consuming illegal drugs in a foreign country: often, as with decriminalization, what the laws on a country’s books say and how the society there understands the system to work don’t agree.

In short, respecting the sovereignty[1] of other states requires letting them decide on the relation between their explicitly codified law and actual enforcement/social understanding, and the framework of this law seems to coopt that understanding.

Also, it’s not even clear that the net effect of this law will be to reduce bribery and corruption abroad. If, as might be expected, foreign companies in corrupt countries tend to be less corrupt than the locals, the net effect of such a law might be to favor local companies that face no deterrence from the US government. Since the structure of the law punishes bribes paid by US companies, or by the corporations they control or hire, but not bribes paid by customers or by local companies engaged in arm’s-length transactions, it creates an incentive for the most corrupt locals to start businesses they wouldn’t have otherwise.[2] Indeed, the very fact that a socially accepted system of bribery imposes a barrier that keeps US firms out of the market may even make public corruption reforms unpopular for protectionist reasons.

Finally, one needs to ask about the game-theoretic effect of such a law. A really effective way to clamp down on bribery is to turn people who have given bribes into witnesses against the official (e.g., by offering them immunity in this or another case, or even a reward), and often the bribe taker will be in far more jeopardy than the bribe giver, potentially exposing the taker to blackmail. However, add the potential for charges by the US government, which the local prosecutor can’t bargain away but which are unlikely to be brought at all as long as no local rats, and you have a very nice game-theoretic mechanism for ensuring that your bribe giver doesn’t rat you out (threaten to report any company that bribed you to the US DOJ if you get convicted; it might even be a good way of ensuring someone pays for your defense).
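A toy payoff comparison makes the mechanism explicit. All the magnitudes are invented; the point is only the sign flip in the bribe giver’s incentive once FCPA exposure enters the picture.

```python
# Toy payoffs for the bribe giver's choice: stay quiet or turn witness.
# All magnitudes are invented for illustration.
def rat_payoff(local_immunity: float, fcpa_exposure: float) -> float:
    # Turning witness wins local immunity, but with the FCPA it also
    # invites US charges the local prosecutor cannot bargain away.
    return local_immunity - fcpa_exposure

stay_quiet = 0.0  # baseline: say nothing, likely never charged

for exposure in (0.0, 10.0):  # without vs. with FCPA jeopardy
    better_to_rat = rat_payoff(local_immunity=5.0,
                               fcpa_exposure=exposure) > stay_quiet
    print(f"FCPA exposure {exposure}: rat? {better_to_rat}")
# Without the FCPA, ratting beats silence; with it, silence wins:
# exactly the insurance the corrupt official wants the bribe giver to have.
```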

Whether or not any of these effects actually play out in the real world is anyone’s guess, but that uncertainty alone should be reason enough not to be (even potentially) handing out 20-year prison sentences. That is in addition to all the other problems with the law that Cowen notes.

How to Root Out Corruption Without Introducing More

Are you worried there may be corruption in the American executive branch today, yet also fearful that the tools for rooting out such malfeasance may be abused? If so, welcome to the dilemmas surrounding the Foreign Corrupt Practices Act.


  1. Now ultimately, of course, I only believe in respect for the sovereignty of countries (or the right of self-determination) because it’s a good heuristic for positive results, but it’s a pretty damn good one. Not only do attempts to intervene in foreign cultures rarely work, they tend to create a good deal of resentment against whoever does the intervening. This is true whether the intervention is directly political or merely cultural, such as refusing to do business in the customary way in that place, and the FCPA certainly raises that possibility. Even disregarding resentment, when it comes to legal regimes there is a much stronger reason to respect sovereignty: penalties imposed by a foreign potentate rarely provoke compliance or respect, so they drag along all the harms of law enforcement without the social benefits that make those costs worth paying. 
  2. For instance, maybe a certain degree of bribery is expected to win contracts to supply the government with services, and all local companies bidding simply pay the expected bribes. If US suppliers simply entered the market and complied with these de facto rules for doing business locally, there would be little effect on corruption. On the other hand, if they refuse but offer superior products at better prices, there is a strong incentive for those locals who are best at greasing palms to set up a company which simply subcontracts the work to the Americans after bribing the government (without telling the Americans about the bribe), and since such a company is literally in the business of bribing, and faces competitors in that business, it is more likely to increase local corruption than if the US company had simply played along. 

Keep Executing The Innocent

4% Innocent Executions Is A Price Worth Paying For A Spotlight On Injustice

Since 4% of death row inmates are innocent, we should keep the theoretical death penalty. I don’t want to execute anyone, and ideally everyone on death row would die of old age before being executed, but it’s only because of the death penalty that we have so much information about, and public interest in, the failings of our criminal justice system. I think it’s only as a result of safeguards introduced because of worries about incorrect executions (DNA retesting) and fixes to forensic and legal failings that we don’t have more innocent people spending huge fractions of their lives in prison.

I don’t think being executed is much worse than life in a US prison. Given a choice between a coin flip between going free and being executed, and the certainty of life in prison, I’d take the coin flip. Hell, I’d take 9:1 odds. Other people’s preferences differ, but I find it hard to believe that the death penalty is more than 10 times as bad as life in prison (and intuitions here are clouded by our irrationally strong drive to survive rather than being a pure utility judgement), and only a very small percentage of all convicts get the death penalty.
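For the curious, here is one way to make the implied utility comparison precise. The framing (normalizing freedom to 0 and life in prison to -1) is my own, for illustration, not the author’s.

```python
# Expected-utility reading of the coin-flip claim. The normalization
# (freedom = 0, life in prison = -1) is my framing for illustration.
u_free, u_life = 0.0, -1.0

def gamble_value(p_executed: float, u_death: float) -> float:
    return p_executed * u_death + (1 - p_executed) * u_free

# Taking the gamble at probability p of execution implies
# p * u_death >= u_life, i.e. u_death >= u_life / p.
for p in (0.5, 0.9):
    u_death_at_indifference = u_life / p
    assert abs(gamble_value(p, u_death_at_indifference) - u_life) < 1e-9
    print(f"p(executed)={p}: execution at most "
          f"{-u_death_at_indifference:.2f}x as bad as life in prison")
# Coin flip -> execution at most 2x as bad; 9:1 odds -> at most ~1.11x.
```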

Without the death penalty there won’t be any ‘oh shit, was that guy innocent?’ moments or double checks before an execution. At every point the justice system will just delay or avoid reconsidering any issues, and without the spotlight provided by arguments of innocence made by death-penalty convicts, people will just assume what’s easy to believe: our criminal justice system gets it right and the innocent are rarely convicted. If the price to pay for this is just 4% incorrect executions, I think it’s worth paying.

Of course, it should be noted that the rate of innocent people being convicted will be much higher outside of death penalty cases. Inmates on death row had cases that not only went to trial but received far more scrutiny than even those non-death-penalty cases that do go to trial. Personally, I wouldn’t be surprised if the true rate of people in prison for crimes they didn’t commit were as high as 10-20%.

Study suggests that 4% of the people we put on death row are innocent

How well does the US justice system work? Given that many states still carry out the death penalty, it’s a rather significant question. Some biostatisticians have teamed up with lawyers in an attempt to provide a scientific answer to the question. Based on their figures, at least 4.1 percent of the individuals sentenced to death will eventually be exonerated.


AI Bias and Subtle Discrimination

Don't Incentivize Discrimination To Feel Better

This is an important point, not just about AI software but about discussions of race and gender more generally. Accurately reporting (or predicting) facts that, all too often, are the unfortunate result of a long history of oppression or simple random variation isn’t bias.

Personally, I feel that the social norm which regards accurate observation of facts, such as (as mentioned in the article) racial differences in loan repayment rates conditional on wealth, as a reflection of bias is just a way of pretending society’s social warts don’t exist. Only by accurately reporting such effects can we hope to identify and rectify their causes, e.g., perhaps differences in treatment make employment less stable for certain racial groups, or whether or not the bank officer looks like you affects your likelihood of repayment. Our unwillingness to confront these issues places our personal interest in avoiding the risk of seeming racist/sexist over the social good of working out and addressing the causes of these differences.

Ultimately, the society I want isn’t the wink-and-a-nod culture in which people all mouth platitudes while we implicitly reward people for denying underrepresented groups loans or spots in colleges or whatever. I think we end up with a better society (not the best, see below) when the bank’s loan evaluation software spits out a number which bakes in all available correlations (even the racial ones) and rewards the loan officer for making good judgements of character independent of race. Contrast that with a system where the software can’t consider that factor and we reward the loan officers who evaluate the character of applicants of color more negatively to compensate, or the bank executives who choose not to place branches in communities of color, and so on. Not only does this encourage a kind of wink-and-nod racism, but when banks optimize profits via subtle discrimination rather than explicit consideration of the numbers, one ends up creating a far higher barrier to minorities getting loans than a slight tick up in predicted default rate. If we don’t want to use features like applicant race in decisions like loan offers or college acceptance, we need to affirmatively acknowledge these correlations exist and ensure we don’t implement incentives to be subtly racist, e.g., evaluate a loan officer’s performance relative to the (all-factors-included) predicted default rate so we don’t implicitly reward loan officers and bank managers with biases against people of color (which itself imposes a barrier to minority loan officers).
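Here is a toy sketch of the incentive scheme just suggested: score loan officers against a model’s all-factors-included predicted default rate rather than against raw profit. The data structures and numbers are hypothetical, purely to illustrate the mechanism.

```python
# Hypothetical sketch: evaluate loan officers relative to a model's
# all-factors-included predicted default rate, not raw profit.
from dataclasses import dataclass

@dataclass
class Loan:
    defaulted: bool
    predicted_default: float  # model probability, all correlations baked in

def officer_score(book: list[Loan]) -> float:
    """Positive score = fewer defaults than the model predicted.

    Because the benchmark already conditions on every correlate the model
    knows (including sensitive ones), an officer gains nothing by turning
    away applicants from groups that look riskier on paper; only judgement
    beyond the model moves the score.
    """
    expected_defaults = sum(loan.predicted_default for loan in book)
    actual_defaults = sum(loan.defaulted for loan in book)
    return expected_defaults - actual_defaults

print(officer_score([Loan(False, 0.10), Loan(True, 0.30), Loan(False, 0.05)]))
# 0.45 expected vs. 1 actual default: score -0.55 (underperformed the model)
```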

In short, don’t let the shareholders and executives get away with passing the moral buck by saying ‘Oh no, we don’t want to consider factors like race when offering loans’ and then turning around and using total profits as the incentive that ensures their employees do the discrimination for them. It may feel uncomfortable openly acknowledging such correlations, but not only is that necessary to trace out the social causes of these ills, the other option is continued incentives for covert racism, especially the use of subtle social cues of being the ‘right sort’ to identify likely success, and that is what perpetuates the cycle.


A.I. ‘Bias’ Doesn’t Mean What Journalists Say it Means

In Florida, a criminal sentencing algorithm called COMPAS looks at many pieces of data about a criminal and computes the probability that they will commit new crimes. Judges use these risk scores in criminal sentencing and parole hearings to determine whether the offender should be kept in jail or released.