Thoughts on rationalism and the rationalist community from a skeptical perspective. The author rejects rationality in the sense that he believes it isn’t a logically coherent concept, holds that the larger rationalist community is insufficiently critical of its own beliefs, and maintains that ELIEZER YUDKOWSKY IS NOT THE TRUE CALIPH.

Ethereum Eschatology and Bitcoin Bankruptcy

Regulatory Arbitrage and Governmental Support For Cryptocurrency Alternatives

So I’ve been thinking a bit about cryptocurrencies lately and I don’t think the future is very promising for Bitcoin, Ethereum and other pure cryptocurrencies. I’ve always been a big fan of these currencies (though don’t get me started on the idiocy of companies using blockchain everywhere), but I think they are doomed in the not too distant future. However, this is only because I am convinced it won’t be long before we have the option to realize all (or at least most of) the major benefits of cryptocurrencies without the kludge and overhead of the blockchain, the dangerous price volatility and the unreliability/general sleaziness of many cryptocurrency exchanges.

Now much of current cryptocurrency value is the result of pure speculative interest. People are making a big bet that Bitcoin or Ethereum will take off and surge in value. While I highly recommend this Last Week Tonight episode mocking the HODL gang and other idiocy in cryptocurrency investing, it’s not a fundamentally unreasonable bet. It’s just an extremely high-risk bet that eventually non-speculators[1] will buy out the speculators at well above (enough to balance the risk) the current market price. It’s a bet that the currency will prove to be (at least) so useful/desirable that normal economic actors will see fit to hold far more value in the cryptocurrency than its current market capitalization of $151 billion BTC/$63 billion ETH. Given that $5 trillion is being held in physical currency and $60 trillion is held in bank accounts, if you think there is a decent chance that Bitcoin or Ethereum will be adopted as the global currency then its valuation might not be absurd.
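To make the bet concrete, here’s a back-of-the-envelope sketch using the figures above; the assumption that ‘winning’ means capturing 10% of physical currency holdings is mine, purely for illustration:

```python
# Back-of-the-envelope: what adoption probability justifies the price?
# Market-size figures are the ones quoted above; the 10% capture
# scenario is an illustrative assumption.
btc_market_cap = 151e9       # current BTC market capitalization, USD
physical_currency = 5e12     # value held worldwide in physical currency
win_value = 0.10 * physical_currency  # "win" = absorb 10% of cash holdings

# Ignoring discounting and risk premia, a risk-neutral speculator breaks
# even when p * win_value equals today's market cap.
break_even_p = btc_market_cap / win_value
print(f"break-even adoption probability: {break_even_p:.1%}")  # ~30.2%
```

On those (made-up) terms the market is pricing in something like a one-in-three chance of substantial adoption, which is why the valuation is a high-risk bet rather than an obviously absurd one.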

However, let’s ask what it is that cryptocurrencies offer the non-speculator. It seems to me there are several attributes that make them desirable.

  1. Cryptocurrencies offer finality in payments, e.g., unlike credit cards you don’t need to worry that the payment you received will be cancelled by the payor or reversed as fraudulent.
  2. Cryptocurrencies let you pay people who wouldn’t (or can’t be bothered to) get PayPal merchant accounts or US bank accounts.
  3. Relative freedom from government monitoring.
  4. Smart contracts. I can enter into cryptocurrency contracts that are enforced regardless of what a court thinks and even if local law enforcement is non-existent.
  5. Cryptocurrency schemes don’t require any kind of trust in government currency or a government system.

Frankly, 5 isn’t a serious consideration. It matters to a few people who want to show off their crypto-anarchist credentials, but generally having a central bank behind one’s money is an advantage (stability etc.), so much so that other cryptocurrencies are trying to build in similar systems. If your concern is a hedge against inflation or governmental collapse, you are better off buying gold, which a desperate government can’t attack the way it could a cryptocurrency (a combined legal and technical attack by a motivated government would seriously threaten any cryptocurrency). Besides, you can still use gold if the internet fails.

But notice that, excepting 5, all these advantages are really just avoidance of regulation. I don’t think there would be much demand for cryptocurrencies if it were legal to make a version of PayPal where payments were completely final (even if they later turned out to be fraudulent), all records of transfers were immediately deleted, no one was turned away (marijuana growers, people in countries under sanctions and even conmen all got to keep their accounts) and the government couldn’t easily monitor accounts or determine whose account was whose.

Now some of this is just about enabling illegal activity (which also has value insofar as it lets individuals replace organized crime in the drug trade). However, strange as it might seem, there is real and substantial value in monetary exchanges with fewer protections against fraud and theft. In high-trust, relatively low-value transactions in countries with strong legal systems such protections are a bonus, but they make it virtually impossible to make deals in low-trust situations or when the seller can’t absorb a loss. For instance, as a tourist I couldn’t buy a high-value good (say a found meteorite) from a villager I encounter, because even if he could accept credit card payments he doesn’t have the means to contest a claim of fraud I might later make, so, without cash, we can’t reach a mutually beneficial deal.

What puts current cryptocurrencies at risk is the fact that at any point any of the hundreds of sovereign governments on Earth could choose to offer an alternative digital payment system capturing most of these benefits. At any time Montenegro could sit down with Goldman Sachs and some IT guys and launch Montenegro digital cash. Individuals from around the world could open up numbered accounts on the MontCash website and transfer money in or out of these accounts using credit cards or bank transfers. The MontCash app (or API) would then function exactly as PayPal does today except that it would have numbered accounts (instead of, or as well as, accounts in individual names), wouldn’t allow chargebacks or canceled transactions (absent a final court judgment) and wouldn’t require troublesome certifications to accept money at scale. In other words MontCash would just be a trusted bookkeeper maintaining a list of account balances.
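For concreteness, here is a minimal sketch of what that bookkeeping core amounts to; MontCash is of course hypothetical, and so is every name in the code:

```python
import secrets

class MontCashLedger:
    """Toy sketch of a centralized 'trusted bookkeeper' (all names hypothetical).

    Numbered accounts, balances kept in a stable currency, and transfers
    that are final the moment they are recorded; no chargeback path exists
    by design.
    """

    def __init__(self):
        self.balances = {}  # account number -> balance in cents

    def open_account(self) -> str:
        acct = secrets.token_hex(8)  # a numbered account is just a credential
        self.balances[acct] = 0
        return acct

    def deposit(self, acct: str, cents: int) -> None:
        self.balances[acct] += cents  # funded by card/bank transfer off-ledger

    def transfer(self, src: str, dst: str, cents: int) -> None:
        if self.balances[src] < cents:
            raise ValueError("insufficient funds")
        # A single atomic balance update; once applied there is nothing to
        # reverse short of a court order operating on the ledger itself.
        self.balances[src] -= cents
        self.balances[dst] += cents

ledger = MontCashLedger()
a, b = ledger.open_account(), ledger.open_account()
ledger.deposit(a, 10_000)
ledger.transfer(a, b, 2_500)
```

The point of the sketch is how little machinery is needed: no mining, no consensus protocol, just a database and a policy of finality, which is exactly why any government could stand this up quickly.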

Of course, diplomatic pressure would ensure that no government offered a completely untraceable, totally anonymous system like this, but for 99% of users it would be just as good (indeed better in some respects than Bitcoin’s publicly trackable transactions) if MontCash released the accounts linked to certain payments, deposits etc. only in response to a subpoena/warrant or for use in terrorism cases. Many governments might not particularly like the fact that accounts are simply numbered and can be used by whoever has the right credentials, but if it appeared that real cryptocurrencies were gaining serious adoption (as necessary to vindicate their current valuation) then a system like MontCash would start to seem like an appealing alternative. After all, unlike Bitcoin, MontCash would still allow accounts to be seized with valid court orders, be more convenient to subpoena for transfers to/from given credit cards/bank accounts than the fluctuating legion of cryptocurrency exchanges and, most importantly, offer the carrot of secret counterterrorism access. 99% of users wouldn’t care much if the NSA/GCHQ etc. got some degree of secret access to the financial data feed provided it wasn’t shared with tax collectors or drug enforcement, while the counterterrorism/intelligence benefits of having not only all transactions and accounts used to purchase or sell MontCash but also logs of where the app/API was used and on what kind of device would be invaluable.

Even though it might not be universally loved, the potential for massive profit by whichever country decides to give this a go is a very strong incentive. Not only could they collect a tiny percentage of each transaction, they would also earn huge amounts of interest on total deposits. They would even have a compelling reason to allow numbered accounts not associated with any individual, since they would get to keep all funds in such accounts when the owner loses their password (or cryptographic key or whatever). It’s hard to imagine that no country would take up this opportunity if they saw a true cryptocurrency gaining legitimacy. A system like MontCash would be far more attractive to most normal users as it could offer accounts denominated in various stable currencies (dollars, euros etc.), greater user-friendliness and more flexibility (you could potentially set daily transaction limits for your account, give up some degree of anonymity for password recovery options etc.), not to mention solving the long transaction times and high overhead costs (paid for in fees rewarded to miners) of cryptocurrencies.

In short, it’s hard to imagine that cryptocurrencies will win the day when, for everyone but the hardcore technoanarchist, their needs can be better met by a system that governments would see as less bad and could bring into being at any time.


  1. It’s not possible to maintain a rate of return substantially outpacing global economic growth indefinitely, and eventually even the most irrational speculators will realize the good times are over and either liquidate their investments to speculate elsewhere or store their value in a safe asset. If, at this point, there isn’t sufficient non-speculative investment in the cryptocurrency to support its price, the price will crash as speculators race to sell.
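As a rough illustration of why such returns can’t be sustained (round numbers of my own choosing, not from any source):

```python
# Illustrative only: a 30%/yr speculative return vs. ~3%/yr world growth,
# starting from a ~$150B market cap and ~$80T gross world product
# (round numbers of my own choosing).
cap, world = 150e9, 80e12
years = 0
while cap < world:
    cap *= 1.30
    world *= 1.03
    years += 1
print(years)  # 27: within three decades the asset would exceed world output
```

At some point well before that absurdity the returns must fall toward ordinary growth rates, which is when the speculators head for the exits.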

Politician’s Incentives Regarding Facebook

God I hope not, but it sounds plausible.

The Peltzman Model of Regulation and the Facebook Hearings – Marginal REVOLUTION

If you want to understand the Facebook hearings it’s useful to think not about privacy or technology but about what politicians want. In the Peltzman model of regulation, politicians use regulation to trade off profits (wanted by firms) and lower prices (wanted by constituents) to maximize what politicians want: reelection.
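To make the model concrete, here is one standard way to write it down (my own formalization of the textbook Peltzman setup, not taken from the linked post): the politician chooses the regulated price p to maximize political support M, which falls as constituents pay higher prices and rises with firm profits π(p) that can be converted into campaign support.

```latex
\max_{p} \; M\bigl(p,\, \pi(p)\bigr), \qquad
\text{FOC:}\quad
\underbrace{\frac{\partial M}{\partial p}}_{<0}
\;+\;
\underbrace{\frac{\partial M}{\partial \pi}}_{>0}\,\pi'(p) \;=\; 0 .
```

At the optimum the politician stops short of both the competitive price (which would generate no profits to convert into support) and the monopoly price (which would enrage constituents), trading a bit of each group’s surplus for reelection.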

Failing Business 101

The Idiotic Idea Of Apple Competing With Intel

There is a rumor going around that Apple may try to replace the Intel chips in its computers with its own in-house chips. Now, it’s certainly conceivable that Apple will offer a cheap low-end laptop based on the chips it uses for the iPhone and iPad. Indeed, that’s probably a great opportunity. However, the idea that Apple might switch completely to its own in-house silicon is such a bad business idea that I have to assume they won’t try.

I mean, suppose for a moment that Apple thought it could outdo Intel and AMD in designing high-end processors. What should Apple do? It could design processors in-house just for its own computers, limiting its potential profits and assuming substantial risk if it turns out to be wrong. Alternatively, it could spin off a new processor design company (perhaps with some kind of cooperation agreement) which could sell its processors to all interested parties while limiting Apple’s risk exposure. The latter option is clearly preferable, and since it seems pretty implausible that Intel and AMD are so badly run as to make such a venture attractive, it would be even less attractive to try to compete with Intel in-house.

Now why doesn’t this same argument apply to Apple’s choice to design its own ARM chips for the iPhone? First, Apple was able to buy state-of-the-art IP to start from, which wouldn’t be available if it were designing a high-performance desktop/laptop CPU. Second, because of the high degree of integration in mobile devices there were real synergies Apple could realize by designing the chip and phone in combination, e.g., implementing custom hardware to support various iPhone functions. For desktops and high-end laptops there are no such pressures. There is plenty of space to put any dedicated hardware in another chip and no special Apple-specific features that would be particularly valuable to implement in the CPU.

On the other hand, a cheap(er) laptop that could run iPad apps could be a great deal. Just don’t expect Apple to replace Intel chips in its high-end systems.

Apple is actively working on Macs that replace Intel CPUs

A new Bloomberg report claims Apple is working on its own CPUs for the Mac, with the intent to ultimately replace the Intel chips in its computers with those it designs in-house. According to Bloomberg’s sources, the project (which is internally called Kalamata) is in the very early planning stages, but it has been approved by executives at the company.

Artificial Intelligence And The Structure Of Thought

Why Your Self-Driving Car Won't Cause Armageddon

In recent years a number of prominent individuals have raised concerns about our ability to control powerful AIs. The idea is that once we create truly human-level generally intelligent software, or AGI, computers will undergo an intelligence explosion and will be able to escape any constraints we place on them. This concern has perhaps been most thoroughly developed by Eliezer Yudkowsky.

Unlike the AI in bad science fiction, the concern isn’t that the AI will be evil or desire dominion the way humans do but simply that it will be too good at whatever task we set it to perform. For instance, suppose Waymo builds an AI to run its fleet of self-driving cars. The AI’s task is to converse with passengers/app users and route its vehicles appropriately. Unlike more limited self-driving car software, this AI is programmed to learn the subtleties of human behavior so it can position a pool of cars in front of the stadium right before the game ends and helpfully show tourists the sights. On Yudkowsky’s vision the engineers achieve this by coding in a reward function that the software works to maximize (or equivalently a penalty function it works to minimize). For instance, in this case the AI might be punished based on negative reviews/frustrated customers, deaths/damage from accidents involving its vehicles, travel delays and customers who choose to use a competitor rather than Waymo. I’m already skeptical that (super)human AI would have anything identifiable as a global reward/utility function, but on Yudkowsky’s picture AGI is something like a universal optimizer which is set loose to do its best to achieve rewards.
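For concreteness, a minimal sketch of the sort of penalty function this picture imagines; the metrics and weights are invented for illustration, not anything Waymo or Yudkowsky has specified:

```python
from dataclasses import dataclass

@dataclass
class FleetOutcomes:
    # Hypothetical daily metrics for the routing AI; all fields invented.
    negative_reviews: int
    accident_damage_usd: float
    total_delay_minutes: float
    customers_lost: int

def penalty(o: FleetOutcomes) -> float:
    """The scalar the optimizer-style AGI is imagined to minimize.

    Note the failure mode discussed below: every term is zero in a world
    with no riders at all, so 'no customers ever' minimizes the penalty.
    """
    return (2.0 * o.negative_reviews
            + 1e-3 * o.accident_damage_usd
            + 0.5 * o.total_delay_minutes
            + 10.0 * o.customers_lost)
```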

The concern is that the AI would eventually realize it could minimize its punishment by arranging for everyone to die in a global pandemic, since then there would be no bad reviews, lost customers or travel delays. Given the AI’s vast intelligence and massive data set, it would then hack into microbiology labs and manipulate the workers there to create a civilization-ending plague. Moreover, no matter what kind of firewalls or limitations we try to place on the AI, as long as it can somehow interact with the external world it will find a way around these barriers. Since it’s devilishly difficult to specify any utility function without such undesirable solutions, Yudkowsky concludes that AGI poses a serious threat to the human species.

Rewards And Reflection

The essential mechanism at play in all of Yudkowsky’s apocalyptic scenarios is that the AI examines its own reward function, realizes that some radically different strategy would offer even greater rewards and proceeds to surreptitiously work to realize this alternate strategy. Now it’s only natural that a sufficiently advanced AI would have some degree of reflective access to its own design and internal deliberation. After all, it’s common for humans to reflect on our own goals and behaviors to help shape our future decisions, e.g., we might observe that if we continue to get bad grades we won’t get into the college we want and as a result decide that we need to stop playing World of Warcraft.

At first blush it might seem obvious that realizing its rewards are given by a certain function would induce an AI to maximize that function. One might even be tempted to claim this is somehow part of the definition of what it means for an agent to have a utility function, but that’s trading on an ambiguity between two notions of reward.

The sense of reward which gives rise to the worries about unintended satisfaction is that of positive reinforcement. It’s the digital equivalent of giving someone cocaine. Of course, if you administer cocaine to someone every time they write a blog post they will tend to write more blog posts. However, merely learning that cocaine causes a rewarding release of dopamine in the brain doesn’t cause people to go out and buy cocaine. Indeed, that knowledge could just as well have the exact opposite effect. Similarly, there is no reason to assume that merely because an AGI has a representation of its reward function it will try to reason out alternative ways to satisfy it. Indeed, indulging in anthropomorphizing for a moment, there is no reason to assume that an AGI will have any particular desire regarding rewards received by its future time states, much less adopt a particular discount rate.

Of course, in the long run, if a software program were rewarded for analyzing its own reward function and finding unusual ways to activate it then it could learn to do so, just as people who are rewarded with pleasurable drug experiences can learn to look for ways to short-circuit their reward system. However, if that behavior is punished, e.g., humans intervene and punish the software when it starts recommending public transit, then the system will learn to avoid short-circuiting its reward pathways, just as people can learn to avoid addictive drugs. This isn’t to say that there is no danger here: left alone, an AGI, just like a teen with access to cocaine, could easily learn harmful reward-seeking behavior. However, since the system doesn’t start in a state in which it applies its vast intelligence to figuring out ways to hack its reward function, the risk is far less severe.
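The distinction matters in practice. Here is a minimal tabular Q-learning sketch (a toy corridor environment of my own construction): the scalar reward shapes the learned policy through the update rule, but nowhere does the agent hold a representation of the reward function that it could reflect on and scheme against:

```python
import random
from collections import defaultdict

# Tabular Q-learning on a 5-state corridor (a toy environment I made up).
# The reward enters only as a bare scalar inside the update; the agent
# has no object representing "the reward function" to reason about.
N, ACTIONS = 5, (-1, +1)           # states 0..4; move left or right
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration
Q = defaultdict(float)

for _ in range(500):
    s = 0
    while s != N - 1:                              # rightmost state ends episode
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0            # reward: a bare scalar signal
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

print(max(ACTIONS, key=lambda x: Q[(0, x)]))       # learned policy: +1 (right)
```

The agent is shaped by reinforcement exactly as described above, yet there is no step at which it could ‘examine its reward function’ and plot a shortcut; that capacity would itself have to be built and trained in.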

Now, Yudkowsky might respond by saying he didn’t really mean the system’s reward function but its utility function. However, since we don’t tend to program machine learning algorithms by specifying the function they will ultimately maximize (or reflect on and try to maximize), it’s unclear why we would need to explicitly specify a utility function that doesn’t lead to unintended consequences. After all, Yudkowsky is the one trying to argue that it’s likely that AGI will have these consequences, so merely restating the problem in a space that has no intrinsic relationship to how one would expect AGI to be constructed does nothing to advance his argument. For instance, I could point out that, phrased in terms of the locations of fundamental particles, it’s really hard to specify a program that excludes apocalyptic arrangements of matter, but that wouldn’t do anything to convince you that AIs risk causing such apocalypses, since such specifications have nothing to do with how we expect an AI to be programmed.

The Human Comparison

Ultimately, we have only one example of a kind of general intelligence: the human brain. Thus, when evaluating claims about the dangers of AGI one of the first things we should do is see if the same story applies to our brains and, if not, whether there is any special reason to expect our brains to be different.

Looking at the way humans behave, it’s striking how poorly Yudkowsky’s stories describe our behavior, even though evolution has shaped us in ways that make us far more dangerous than we should expect AGIs to be (we have self-preservation instincts, approximately coherent desires and beliefs, and are responsive to most aspects of the world rather than caring only about driving times or chess games). Time and time again we see that we follow heuristics and apply familiar mental strategies even when it’s clear that a different strategy would offer us greater activation of reward centers, greater reproductive opportunities or more of any other plausible thing we are trying to optimize.

The fact that we don’t consciously try to optimize our reproductive success, and instead apply a forest of frameworks and heuristics that we follow even when they undermine our reproductive success, strongly suggests that an AGI will most likely function in a similarly heuristic, layered fashion. In other words, we shouldn’t expect intelligence to come as a result of some pure mathematical optimization but more as a layered cake of heuristic processes. Thus, when an AI responsible for routing cars reflects on its performance, it won’t see the pure mathematical question of how to minimize such-and-such function any more than we see the pure mathematical question of how to cause dopamine to be released in a particular part of the brain or how to have more offspring. Rather, just as we break up the world into tasks like ‘make friends’ or ‘get respect from peers’, the AI will reflect on the world represented in terms of pieces like ‘route car from A to B’ or ‘minimize congestion in area D’ that bias it toward a certain kind of solution and away from plots like avoiding congestion by creating a killer plague.

This isn’t to say there aren’t concerns. Indeed, as I’ve remarked elsewhere, I’m much more concerned about schizophrenic AIs than I am about misaligned AIs, but that’s enough for this post.

AI Bias and Subtle Discrimination

Don't Incentivize Discrimination To Feel Better

This is an important point not just about AI software but about discussions of race and gender more generally. Accurately reporting (or predicting) facts that, all too often, are the unfortunate result of a long history of oppression or simple random variation isn’t bias.

Personally, I feel that the social norm which regards accurate observation of facts, such as (as mentioned in the article) racial differences in loan repayment rates conditional on wealth, as a reflection of bias is just a way of pretending society’s social warts don’t exist. Only by accurately reporting such effects can we hope to identify and rectify their causes, e.g., perhaps differences in treatment make employment less stable for certain racial groups, or whether or not the bank officer looks like you affects your likelihood of repayment. Our unwillingness to confront these issues places our personal interest in avoiding the risk of seeming racist/sexist over the social good of working out and addressing the causes of these differences.

Ultimately, the society I want isn’t the wink-and-a-nod culture in which people all mouth platitudes but implicitly reward each other for denying underrepresented groups loans or spots in colleges or whatever. I think we end up with a better society (not the best, see below) when the bank’s loan evaluation software spits out a number which bakes in all available correlations (even the racial ones) and rewards the loan officer for making good judgments of character independent of race. That beats the system where the software can’t consider that factor and we instead reward the loan officers who evaluate the character of applicants of color more negatively to compensate, or the bank executives who choose not to place branches in communities of color, and so on. Not only does the latter encourage a kind of wink-and-nod racism, but when banks optimize profits via subtle discrimination rather than explicit consideration of the numbers, one ends up creating a far higher barrier to minorities getting loans than a slight tick up in predicted default rate. If we don’t want to use features like applicant race in decisions like loan offers, college acceptance etc., we need to affirmatively acknowledge that these correlations exist and ensure we don’t implement incentives to be subtly racist, e.g., evaluate a loan officer’s performance relative to the (all-factors-included) predicted default rate so we don’t implicitly reward loan officers and bank managers with biases against people of color (which itself imposes a barrier to minority loan officers).
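As a minimal sketch of that evaluation scheme (all names and numbers invented for illustration): score each officer on realized defaults relative to the model’s all-factors-included predictions, so that turning away creditworthy applicants from any group yields no hidden reward:

```python
from collections import defaultdict

# Score loan officers on realized defaults *relative to* the model's
# all-factors-included predicted default probability, rather than on raw
# profit. All data here is invented for illustration.
loans = [
    # (officer, model_predicted_default_prob, actually_defaulted)
    ("alice", 0.04, False), ("alice", 0.10, True),  ("alice", 0.05, False),
    ("bob",   0.03, False), ("bob",   0.08, False), ("bob",   0.06, True),
]

expected, actual, counts = defaultdict(float), defaultdict(int), defaultdict(int)
for officer, p, defaulted in loans:
    expected[officer] += p
    actual[officer] += defaulted
    counts[officer] += 1

for officer in counts:
    # Residual near zero: the officer's judgment adds information without
    # any implicit reward for discriminating against whole groups.
    residual = (actual[officer] - expected[officer]) / counts[officer]
    print(officer, round(residual, 3))
```

Because the benchmark already includes every predictive factor, an officer can’t look good by systematically low-balling one group; they can only look good by genuinely judging individuals better than the model does.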

In short, don’t let the shareholders and executives get away with passing the moral buck by saying ‘Oh no, we don’t want to consider factors like race when offering loans’ and then turning around and using total profits as the incentive, ensuring their employees do the discrimination for them. It may feel uncomfortable openly acknowledging such correlations, but the alternative isn’t only failing to trace out the social causes of these ills; it’s continued incentives for covert racism, especially the use of subtle social cues of being the ‘right sort’ to identify likely success, and that is what perpetuates the cycle.


A.I. ‘Bias’ Doesn’t Mean What Journalists Say it Means

In Florida, a criminal sentencing algorithm called COMPAS looks at many pieces of data about a criminal and computes the probability that they will commit new crimes. Judges use these risk scores in criminal sentencing and parole hearings to determine whether the offender should be kept in jail or released.

The Effect Of Self-Driving Cars On Schooling

In hindsight it often turns out that the biggest effect of a new technology is very different from what people imagined beforehand. I suggest that this may well be the case for self-driving cars.

Sure, the frequently discussed effects like less time wasted in commutes or even the elimination of personal car ownership are nice, but I think self-driving cars might have an even larger effect by eliminating the constraint of proximity in schooling and socialization for children.

While adults often purchase homes quite far from their workplaces, proximity is a huge constraint on which schools students attend. In a few metropolises with extensive public transport systems it’s possible for older children to travel to distant schools (and, consequently, these cities often have more extensive school choice), but in most of the United States busing is the only practical means of transporting children whose parents can’t drive them to school. While buses need not take children to a nearby school, they are practically limited by the need to pick children up in a compact geographic area. A bus might be able to drive from downtown Chicago to a school in a suburb on the north side of the city, but you couldn’t, practically, bus students to their school of choice anywhere in the metropolitan area. Even in cases where busing takes students to better schools in remote areas, attending a school far from home has serious costs. How can you collaborate with classmates, play with school friends, attend after-school activities or otherwise integrate into the school peer group without a parent to drive you?

This all changes with self-driving cars. Suddenly proximity poses far less of a barrier to schooling and friendship. By itself this doesn’t guarantee change, but it creates an opportunity to build a school system based on specialization and differing programs rather than geographic region.

Of course, we aren’t likely to see suburban schools opening their doors to inner-city kids at the outset. Everyone wants the best for their children, and education, at least at the high end, is a highly rivalrous good (it doesn’t really matter how well a kid scores objectively on the SAT, only that he scores better than the other kids). However, self-driving cars open up a whole world of possibility for specialty schools catering to students who excel at math and science, who have a particular interest in theater or music, or who need special assistance. As such schools benefit wealthy, influential parents they will be created and, by their very nature, be open to applicants from a wide geographic area.

No, this won’t fix the problem of poor educational outcomes in underprivileged areas, but it will offer a way out for kids who are particularly gifted or interested in certain areas. This might be the best we can hope for if, as I suspect, who your classmates are matters more than good technology or even who your teachers are.

I should probably give credit to this interesting point suggesting that school vouchers aren’t making schools better because they don’t result in school closures for inspiring this post (and because I think it’s an insightful point).

Why Are We DDoSing North Korea?

An Ineffective Strategy With Worrying Implications

Wait, what? We are launching a DDoS attack against North Korea. Could we do anything more stupid? It’s not as if North Korea uses the internet enough for this to represent a serious inconvenience to the nation, while at the same time we legitimize the use of cyber attacks against civilian infrastructure as a way to settle international disputes. Dear god, this is a bad idea!

As US launches DDoS attacks, N. Korea gets more bandwidth from Russia

As the US reportedly conducts a denial-of-service attack against North Korea’s access to the Internet, the regime of Kim Jong Un has gained another connection to help a select few North Koreans stay connected to the wider world, thanks to a Russian telecommunications provider.

Algorithmic Gaydar

Machine Learning, Sensitive Information and Prenatal Hormones

So there’s been some media attention recently to this study, which found it could predict sexual orientation from facial photos with 91% accuracy for men and 83% for women. Sadly, everyone is focusing on the misleading idea that we can somehow use this algorithm to decloak who is gay and who isn’t, rather than the really interesting fact that this is suggestive of some kind of hormonal or developmental cause of homosexuality.

The reported accuracy isn’t what it sounds like. Rather, given 5 pictures of a gay man and 5 pictures of a straight man, 91% of the time the algorithm is able to correctly pick out the straight man. Those of us who remember basic statistics, with all those questions about false positive rates, should realize that, given the low rate of homosexuality in the population, this algorithm doesn’t actually give very strong evidence of homosexuality at all. Indeed, one would expect that, if turned loose on a social network, the vast majority of individuals judged to be gay would be false positives. In combination with learning based on other signals, like your friends on social media, one could potentially do a much better job, but at the moment there isn’t much real danger this tech could be used by anti-gay governments to identify and persecute individuals.
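To see how the base rate swamps the headline number, here’s a quick calculation; I’m generously treating the classifier as 91% sensitive and 91% specific (the paper’s figure is a pair-ranking accuracy, so even this overstates it) and assuming an illustrative 5% base rate:

```python
# Why most positives would be false positives at a plausible base rate.
# Sensitivity/specificity of 0.91 is a generous reading of the paper's
# pair-ranking accuracy; the 5% base rate is an illustrative assumption.
base_rate = 0.05
sensitivity = 0.91   # P(flagged | gay)
specificity = 0.91   # P(not flagged | straight)

true_pos = base_rate * sensitivity               # 0.0455
false_pos = (1 - base_rate) * (1 - specificity)  # 0.0855
ppv = true_pos / (true_pos + false_pos)
print(f"P(gay | flagged) = {ppv:.0%}")           # ~35%: most flags are wrong
```

Even under these generous assumptions roughly two-thirds of flagged individuals would be straight, and with realistic single-photo accuracy the positive predictive value would be far worse.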

Also, I wish the media would be more careful with their terms. This kind of algorithm doesn’t reveal private information; it reveals sensitive information inadvertently exposed publicly.

However, what I found particularly interesting was the claim in the paper that the authors were able to achieve a similar level of accuracy for photographs taken in a neutral setting. This, along with other aspects of the algorithm, strongly suggests the algorithm isn’t just picking up on some kind of gay/straight difference in what kinds of poses people find appealing. The researchers also generated a heat map of the parts of the image the algorithm focuses on, and while some regions do suggest that grooming-based information about hair, eyebrows or beard plays some role, the strong role played by the nose, cheeks and corners of the mouth suggests that relatively immutable characteristics are pretty helpful in predicting orientation.

The authors acknowledge that personality has been found to affect facial features in the long run, so this is far from conclusive. I’d also add my own qualification that there might be some selection effect at play, e.g., if homosexuals who consider themselves unattractive are less willing to use a facial closeup on dating sites/Facebook, the algorithm could be picking up on that. However, it is at least interestingly suggestive evidence for the prenatal hormone theory (or some other developmental theory) of homosexuality.

Silicon Valley Politics

This is an interesting piece, but I couldn’t disagree more with the title or the author’s evident feeling that there must be a cynical explanation for techies’ distrust of government regulation.

Silicon Valley types are simply classical pragmatic libertarians. They aren’t Ayn Rand-quoting objectivists who believe government intervention is unacceptable in principle. Rather, like most academic economists, they simply tend to feel that well-intentioned government regulation often has serious harmful side effects and isn’t particularly likely to accomplish its desired goals.

I think this kind of skepticism flows naturally from a certain kind of quantitative, results-oriented mindset, and I expect you would find the same kinds of beliefs (to varying degrees) among the academic physicists, civil engineers and others who share the same educational background and quantitative inclination as Silicon Valley techies. I’m sure the particular history of poorly understood tech regulation, like the original crypto wars in the 90s, plays a role, but I suspect it just amplified existing tendencies.

Silicon Valley’s Politics Revealed: Mostly Far Left (With a Twist)

But by the 1990s, with the advent of the World Wide Web and the beginning of the tech industry’s march to the apex of the world’s economy, another Silicon Valley political narrative took root: techies as unapologetic libertarians, for whom the best government is a nearly nonexistent one.

Bitcoin Arbitrage

So doesn’t this suggest that you could just hang around on Bitcoin forums/chats/etc. and use that info to arbitrage your way into substantial sums, by using your knowledge about the likely resolution of Bitcoin forking/update discussions to predict prices, the way such informational edges are already exploited in developed markets?

I mean, I suppose there is the limitation on leverage. You can borrow for stock trades using your stocks as collateral, but I don’t believe you can yet do the same with Bitcoin. Still, it seems like a good deal. Is there any other reason this won’t work?