Thoughts on rationalism and the rationalist community from a skeptical perspective. The author rejects rationality in the sense that he believes it isn’t a logically coherent concept, that the larger rationalism community is insufficiently critical of its beliefs and that ELIEZER YUDKOWSKY IS NOT THE TRUE CALIPH.
So the following letter is being widely reported online as if it were evidence for the importance of gun control. I’m skeptical of the results, as I detail in the next post, but even if one takes the results at face value the letter is pretty misleading and the media reporting is nigh fraudulent.
In particular, if one digs into the appendix to the letter one finds the following statement: “many of the firearm injuries observed in the commercially insured patient population may reflect non-crime-related firearm injuries.” This is unsurprising, as using health insurance data means you are only looking at patients wealthy enough to be insured and willing to report their injury as firearm-related: basically excluding anyone injured in the commission of a crime or who isn’t legally allowed to use a gun. Consistent with this, they also analyzed differences in crime rates and found no effect.
So even on its face this study would merely show that people who choose to use firearms are sometimes injured in that use. That might be a good reason to stay away from firearms yourself but it isn’t an additional reason for regulation, as is being suggested in the media.
Moreover, if the effect is really just about safety at gun ranges then it’s unclear whether the effect comes from lower use of such ranges or from the NRA conference encouraging greater care and best practices.
Reasons To Suspect The Underlying Study
Also, I’m pretty skeptical of the underlying claim in the study. The size of the effect claimed is huge relative to the number of people who attend an NRA conference. About 40% of US households are gun owners but only ~80,000 people attend nationwide NRA conventions: ~0.025% of the US population, or ~0.0625% of US gun owners. Thus, for this statistic to be true because NRA members are busy at the conference we would have to believe NRA conference attendees were a whopping 320 times more likely to inflict a gun-related injury than the average gun owner.
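As a sketch of the arithmetic (the population and attendance figures are the rough round numbers above; the ~20% headline reduction is my assumption about the study’s claimed effect size, so treat the output as illustrative):

```python
# Back-of-the-envelope check of the "320 times" figure.
# All inputs are rough round numbers; the 20% drop is an assumed
# headline effect size, not a value quoted from the study.
us_population = 320_000_000
gun_owners = int(0.40 * us_population)  # ~40% ownership applied to individuals
attendees = 80_000                      # rough NRA convention attendance

share_of_population = attendees / us_population  # ~0.025%
share_of_owners = attendees / gun_owners         # ~0.0625%

# If injuries drop ~20% while attendees are away, attendees must account
# for ~20% of owner-caused injuries despite being ~0.0625% of owners,
# i.e. their per-capita rate is (share of injuries)/(share of owners):
assumed_drop = 0.20
required_risk_ratio = assumed_drop / share_of_owners

print(f"{share_of_population:.4%} of population, "
      f"{share_of_owners:.4%} of owners, {required_risk_ratio:.0f}x")
# → 0.0250% of population, 0.0625% of owners, 320x
```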
Now if we restrict our attention to homicides this is almost surely not the case. Attending an NRA convention requires a certain level of financial wealth and political engagement which suggests membership in a socioeconomic class less likely to commit gun violence than the average gun owner. And indeed, the study finds no effect in terms of gun-related crime. Even if we look to non-homicides, gun deaths from suicides far outweigh those from accidents and I doubt those who go to an NRA convention are really that much more suicidally inclined.
An alternative likely explanation is that the NRA schedules its conferences for certain times of the year when people are likely to be able to attend, and we are merely seeing seasonal correlations masquerading as effects from the NRA conference (a factor they don’t control for). Also, since they run many subgroup analyses and don’t report the results for census tracts and other possible subgroups, the possibility of p-hacking is quite real. Looking at the graph they provide I’m not exactly overwhelmed.
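To see why unreported subgroup analyses matter, consider a toy simulation (the 40 subgroups are hypothetical, not taken from the study): if the true effect is zero everywhere, testing enough subgroups almost guarantees some ‘significant’ result.

```python
import random

random.seed(0)

# Toy multiple-comparisons simulation: 40 hypothetical subgroup tests where
# the true effect is zero everywhere. Under the null hypothesis each test's
# p-value is uniform on [0, 1], so each clears p < 0.05 with probability 5%.
n_subgroups = 40
n_runs = 10_000

runs_with_a_hit = sum(
    1 for _ in range(n_runs)
    if any(random.random() < 0.05 for _ in range(n_subgroups))
)

# Analytically the chance of at least one spurious "significant" subgroup
# is 1 - 0.95**40 ≈ 87%; the simulation lands close to that.
print(f"simulated: {runs_with_a_hit / n_runs:.1%}, "
      f"analytic: {1 - 0.95 ** n_subgroups:.1%}")
```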
The claim gets harder to believe when one considers the fact that people who attend NRA meetings almost surely don’t give up going to firing ranges during the meeting. Indeed, I would expect (though haven’t been able to verify) that there are any number of shooting-range expeditions during the conference, which would actually mean many attendees are more likely to handle a gun during that time period.
Though, once one realizes that the data set under consideration covers only those who make insurance claims relating to gun-related injuries, the claim becomes slightly more plausible, but only at the cost of undermining its significance. Deaths and suicides are much less likely to produce insurance claims, and the policy implications aren’t very clear if all we are seeing is a reduction in people injured because of incorrect gun grips (see the MythBusters episode about this; such injuries can be quite serious).
In recent years a number of prominent individuals have raised concerns about our ability to control powerful AIs. The idea is that once we create truly human-level, generally intelligent software, or AGI, computers will undergo an intelligence explosion and will be able to escape any constraints we place on them. This concern has perhaps been most thoroughly developed by Eliezer Yudkowsky.
Unlike the AI in bad science fiction, the concern isn’t that the AI will be evil or desire dominion the way humans do but simply that it will be too good at whatever task we set it to perform. For instance, suppose Waymo builds an AI to run its fleet of self-driving cars. The AI’s task is to converse with passengers/app users and route its vehicles appropriately. Unlike more limited self-driving car software, this AI is programmed to learn the subtleties of human behavior so it can position a pool of cars in front of the stadium right before the game ends and helpfully show tourists the sights. On Yudkowsky’s vision the engineers achieve this by coding in a reward function that the software works to maximize (or equivalently a penalty function it works to minimize). For instance, in this case the AI might be punished based on negative reviews/frustrated customers, deaths/damage from accidents involving its vehicles, travel delays and customers who choose to use a competitor rather than Waymo. I’m already skeptical that (super)human AI would have anything identifiable as a global reward/utility function but on Yudkowsky’s picture AGI is something like a universal optimizer which is set loose to do its best to achieve rewards.
The concern is that the AI would eventually realize that it could minimize its punishment by arranging for everyone to die in a global pandemic since then there would be no bad reviews, lost customers or travel delays. Given the AI’s vast intelligence and massive data set it would then hack into microbiology labs and manipulate the workers there to create a civilization-ending plague. Moreover, no matter what kind of firewalls or limitations we try and place on the AI, as long as it can somehow interact with the external world it will find a way around these barriers. Since it’s devilishly difficult to specify any utility function without such undesirable solutions, Yudkowsky concludes that AGI poses a serious threat to the human species.
Rewards And Reflection
The essential mechanism at play in all of Yudkowsky’s apocalyptic scenarios is that the AI examines its own reward function, realizes that some radically different strategy would offer even greater rewards and proceeds to surreptitiously work to realize this alternate strategy. Now it’s only natural that a sufficiently advanced AI would have some degree of reflective access to its own design and internal deliberation. After all, it’s common for humans to reflect on our own goals and behaviors to help shape our future decisions, e.g., we might observe that if we continue to get bad grades we won’t get into the college we want and as a result decide that we need to stop playing World of Warcraft.
At first blush it might seem obvious that realizing its rewards are given by a certain function would induce an AI to maximize that function. One might even be tempted to claim this is somehow part of the definition of what it means for an agent to have a utility function, but that trades on an ambiguity between two notions of reward.
The sense of reward which gives rise to the worries about unintended satisfaction is that of positive reinforcement. It’s the digital equivalent of giving someone cocaine. Of course, if you administer cocaine to someone every time they write a blog post they will tend to write more blog posts. However, merely learning that cocaine causes a rewarding release of dopamine in the brain doesn’t cause people to go out and buy cocaine. Indeed, that knowledge could just as well have the exact opposite effect. Similarly, there is no reason to assume that merely because an AGI has a representation of its reward function it will try and reason out alternative ways to satisfy it. Indeed, indulging in anthropomorphizing for a moment, there is no reason to assume that an AGI will have any particular desire regarding rewards received by its future time states, much less adopt a particular discount rate.
Of course, in the long run, if a software program were rewarded for analyzing its own reward function and finding unusual ways to activate it, then it could learn to do so, just as people who are rewarded with pleasurable drug experiences can learn to look for ways to short-circuit their reward system. However, if that behavior is punished, e.g., humans intervene and punish the software when it starts recommending public transit, then the system will learn to avoid short-circuiting its reward pathways, just as people can learn to avoid addictive drugs. This isn’t to say that there is no danger here; left alone, an AGI, just like a teen with access to cocaine, could easily learn harmful reward-seeking behavior. However, since the system doesn’t start in a state in which it applies its vast intelligence to figuring out ways to hack its reward function, the risk is far less severe.
Now, Yudkowsky might respond by saying he didn’t really mean the system’s reward function but its utility function. However, since we don’t tend to program machine learning algorithms by specifying the function they will ultimately maximize (or reflect on and try to maximize), it’s unclear why we need to explicitly specify a utility function that doesn’t lead to unintended consequences. After all, Yudkowsky is the one trying to argue that it’s likely that AGI will have these consequences, so merely restating the problem in a space that has no intrinsic relationship to how one would expect AGI to be constructed doesn’t do anything to advance his argument. For instance, I could point out that, phrased in terms of the locations of fundamental particles, it’s really hard to specify a program that excludes apocalyptic arrangements of matter, but that wouldn’t do anything to convince you that AIs risked causing such apocalypses, since such specifications have nothing to do with how we expect an AI to be programmed.
The Human Comparison
Ultimately, we have one example of a kind of general intelligence: the human brain. Thus, when evaluating claims about the dangers of AGI, one of the first things we should do is see whether the same story applies to our brain and, if not, whether there is any special reason to expect our brains to be different.
Looking at the way humans behave, it’s striking how poorly Yudkowsky’s stories describe our behavior, even though evolution has shaped us in ways that make us far more dangerous than we should expect AGIs to be (we have self-preservation instincts, approximately coherent desires and beliefs, and are responsive to most aspects of the world rather than caring only about driving times or chess games). Time and time again we see that we follow heuristics and apply familiar mental strategies even when it’s clear that a different strategy would offer us greater activation of reward centers, greater reproductive opportunities or any other plausible thing we are trying to optimize.
The fact that we don’t consciously try to optimize our reproductive success, and instead apply a forest of frameworks and heuristics that we follow even when they undermine our reproductive success, strongly suggests that an AGI will most likely function in a similarly layered, heuristic fashion. In other words, we shouldn’t expect intelligence to come as the result of some pure mathematical optimization but more as a layered cake of heuristic processes. Thus, when an AI responsible for routing cars reflects on its performance, it won’t see the pure mathematical question of how to minimize such and such function, any more than we see the pure mathematical question of how to cause dopamine to be released in this part of our brain or how to have more offspring. Rather, just as we break up the world into tasks like ‘make friends’ or ‘get respect from peers’, the AI will reflect on the world represented in terms of pieces like ‘route car from A to B’ or ‘minimize congestion in area D’ that bias it towards a certain kind of solution and away from plots like avoiding congestion by creating a killer plague.
This isn’t to say there aren’t concerns. Indeed, as I’ve remarked elsewhere, I’m much more concerned about schizophrenic AIs than I am about misaligned AIs, but that’s enough for this post.
Is this a ridiculous amount of opiates for a single small town to prescribe? Sure thing. But I find the idea of drug companies being held to task for this, and thus implicitly the idea that they should have done something to supply fewer pills to these pharmacies, deeply troubling.
I mean, how would that work out? The drug companies are (rightly) legally barred from seeing patient records and deciding who does and doesn’t deserve prescriptions, so all they could do is cut off the receiving pharmacies. Ok, so they could put pressure on the pharmacies to fill fewer prescriptions, but the pharmacies also don’t have patient records, so what that means is the pharmacies scrutinize you to see if you ‘look’ like someone who is abusing the prescription or a ‘real’ patient. So basically being a minority, or otherwise not looking like what the pharmacist expects a real pain patient to look like, means you can’t get your medicine. Worse, the people scamming pills will be willing to use whatever tricks are necessary (faking pain, shaving their head, whatever) to elicit scripts, so it’s the legitimate users who are most likely to end up out in the cold.
While I also have reservations about the DEA intimidating doctors into not prescribing needed medicine, it is the government (which, I understand, is informed about the number of opiates being sold by various pharmacies) that should be investigating cases like this, not the drug maker. Personally I think the solution isn’t, and never has been, controlling the supply but providing sufficient resources like methadone and buprenorphine maintenance so people who find themselves hooked can live normal lives.
Drug companies hosed tiny towns in West Virginia with a deluge of addictive and deadly opioid pills over the last decade, according to an ongoing investigation by the House Energy and Commerce Committee. For instance, drug companies collectively poured 20.8 million hydrocodone and oxycodone pills into the small city of Williamson, West Virginia, between 2006 and 2016, according to a set of letters the committee released Tuesday.
So usually I find Scott Alexander’s posts pretty illuminating, but while his recent post on Conflict vs. Mistake Theories raises lots of interesting questions, I think it fundamentally errs in trying to fit the type of extreme Marxist thinking he is describing into a framework of beliefs about the world and actions taken to advance those beliefs. While I think Scott appreciates this difficulty and attempts to wrestle with it, e.g., where he suggests the conflict theory take is best exemplified by the “Baffler’s article saying that public choice theory is racist”, ultimately his devotion to applying norms of charity to the other side leads him astray.
It’s not that there aren’t people like the conflict theorist Scott posits. I know there are a number of radical university professors who think to themselves, “Given the oppressive political structure and the power held by the elite, the most effective way to bring about change isn’t to engage in rational argument but to bring political or even physical force to bear.” However, for the most part the people Scott is trying to describe aren’t just mistake theorists who happen to believe it’s intentional action by elites, rather than the difficulty of governing, which makes the world bad. If they were, we would expect conflict theorists to retire back to their coffeehouses and perform cost-benefit calculations about the benefits of holding a particular protest or adopting a particular style of advocacy.
In other words, conflict theorists aren’t mistake theorists who hide their true colors so as not to give the elites free ammunition while engaging in the same kind of considerations as mistake theorists behind the scenes. No, fundamentally, most of the behavior Scott is seeking to describe is about emotional responses, not a considered judgement that such emotional displays will best accomplish their ends.
Do We Really Respect A Nation's Sovereignty When We Decide How Their Laws Should Be Understood And Enforced?
I’m rarely one to agree with Trump and disagree with Tyler Cowen, but I’m inclined to think we should eliminate the Foreign Corrupt Practices Act rather than merely implement the minor changes he suggests. At a gut level I find the idea that we are imposing our norms about how law, public office etc. should work on other countries unpalatable, and at a more cerebral level I feel that we shouldn’t put people in prison or even fine companies without a compelling reason to think it serves some important social good. My mind could be changed by substantial evidence that this law improves welfare in other countries, but there is no a priori reason to think it will reduce, rather than increase, corruption overseas (e.g. the game-theoretic aspect of providing insurance to the bribe taker that the bribe giver can’t turn them in).
If the question were whether we should help overwhelmed countries who find their anti-corruption efforts foiled by American companies’ wealth and global nature, that would be a different matter, and I support assigning DOJ resources to assist worthy local corruption prosecutions. I’d even favor a law which allowed a foreign government to request prosecution in a US court for bribery actions taken by US companies when it felt its own courts weren’t capable of doing the job. However, this law doesn’t leave the question of prosecution and appropriate sanctions to the jurisdiction where the crime took place but substitutes our American sensibilities about the badness of bribery, and even the role of laws regarding bribery, for local understanding.
One might object that this law is only triggered when the practice at hand is illegal under local law. That’s true, but not all laws mean the same thing. Imagine the UK had a law which imposed massive fines or a 10-year penalty on any UK citizen living in the States who intentionally ‘evaded’ local tax laws. Now it’s true that in the States we too regard many types of tax fraud as a big deal… but there is a societal acceptance (perhaps a bad one) that evading state sales tax by ordering products from out of state isn’t a big deal the way cheating on income tax might be, and any US enforcement of such laws will reflect that understanding, while UK enforcement of a hypothetical foreign tax evasion law would not. Similar points could be made about laws with harsh penalties for consuming illegal drugs in a foreign country: often, as with decriminalization, what the laws on a country’s books say and how the society there understands the system to work don’t agree.
In short, respecting the sovereignty1 of other states requires letting them decide on the relation between their explicitly codified law and actual enforcement/social understanding, and the framework for this law seems to co-opt that understanding.
Also, it’s not even clear that the net effect of this law will be to reduce bribery and corruption abroad. If, as might be expected, foreign companies in corrupt countries tend to be less corrupt than the locals, the net effect of such a law might be to favor local companies, which face no deterrence from the US government. Since the structure of the law punishes bribes paid by US companies, or the corporations they control or hire, but not bribes paid by customers or local companies engaged in arm’s-length transactions, it creates an incentive for the most corrupt locals to start businesses they wouldn’t have otherwise2. Indeed, the very fact that a socially accepted system of bribery imposes a barrier that keeps US firms out of the market may even make public corruption reforms unpopular for protectionist reasons.
Finally, one needs to ask what the game-theoretic effect of such a law is. A really effective way to clamp down on bribery is to turn people who have given bribes into witnesses against the official (e.g. by offering them immunity in this or another case, or even a reward); often the bribe taker is in far more jeopardy than the bribe giver, potentially leaving the official exposed to blackmail. However, add the potential for charges by the US government, which the local prosecutor can’t bargain away but which are unlikely to be brought at all as long as no one local rats, and you have a very nice game-theoretic mechanism for ensuring that your bribe giver doesn’t rat you out (threaten to report any company that bribed you to the US DOJ if you get convicted; it might even be a good way of ensuring someone pays for your defense).
Whether or not any of these effects actually plays out in the real world is anyone’s guess, but that uncertainty alone should be enough justification not to be (even potentially) handing out 20-year prison sentences. That is in addition to all the other problems with the law that Cowen notes.
Are you worried there may be corruption in the American executive branch today, yet also fearful that the tools for rooting out such malfeasance may be abused? If so, welcome to the dilemmas surrounding the Foreign Corrupt Practices Act.
Now ultimately, of course, I only believe in respect for the sovereignty of countries (or the right of self-determination) because it’s a good heuristic for positive results, but it’s a pretty damn good one. Not only do attempts to intervene in foreign cultures rarely work, they tend to create a good deal of resentment against whoever does the intervening. This is true whether the intervention is a direct political one or merely a cultural one, such as refusing to do business in the customary way in that place, and the FCPA certainly raises that possibility. Even disregarding resentment, when it comes to legal regimes there is a much stronger reason to respect sovereignty: penalties imposed by a foreign potentate rarely provoke compliance or respect, so they drag along all the harms of law enforcement without the social benefits that make those costs worth paying. ↩
For instance, maybe a certain degree of bribery is expected to win contracts to supply the government with services and all local companies bidding simply pay the expected bribes. If US suppliers simply entered the market and complied with these necessary rules for doing business locally there would be little effect on corruption. On the other hand, if they refuse but offer superior products at better prices, there is a strong incentive for those locals who are best at greasing palms to set up a company which simply subcontracts the work to the Americans after bribing the government (without telling the Americans about the bribe), and since such a company is literally in the business of bribing and faces competitors in that business, it is more likely to increase local corruption than if the US company had simply played along. ↩
4% Innocent Executions Is A Price Worth Paying For A Spotlight On Injustice
Since 4% of death row inmates are innocent, we should keep the theoretical death penalty. I don’t want to execute anyone, and ideally everyone on death row dies of old age before being executed, but it’s only because of the death penalty that we have so much information about, and public interest in, the failings of our criminal justice system. I think it’s only as a result of safeguards introduced because of worries about incorrect executions (DNA retesting) or fixes to forensic or legal failings that we don’t have more innocent people spending huge fractions of their lives in prison.
I don’t think being executed is much worse than life in a US prison. Given a choice between a coin flip between going free and being executed and the certainty of life in prison I’d take the coin flip. Hell, I’d take 9:1 odds. Other people’s preferences differ but I find it hard to believe that the death penalty is more than 10 times as bad as life in prison (and this is clouded by our irrationally strong drive to survive rather than a pure utility judgement) and a very small percentage of all convicts get the death penalty.
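The break-even arithmetic behind these preferences can be made explicit. A toy sketch (the utility scale is invented purely for illustration):

```python
# Toy expected-utility version of the coin-flip argument. The utility scale
# is invented for illustration; only relative magnitudes matter. With going
# free at 0 and life in prison at -1, a gamble (free with probability p,
# executed otherwise) matches certain life in prison exactly when execution
# is 1/(1-p) times as bad as life in prison.
def breakeven_multiplier(p_free: float) -> float:
    # Solve (1 - p_free) * u_executed = u_life with u_life = -1.
    return 1 / (1 - p_free)

# Preferring the coin flip implies rating execution less than 2x as bad as
# life in prison; preferring 9:1 odds implies less than 10x as bad.
print(f"{breakeven_multiplier(0.5):.0f}x vs {breakeven_multiplier(0.9):.0f}x")
# → 2x vs 10x
```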
Without the death penalty there won’t be any ‘oh shit, was that guy innocent’ moment or double-checks before execution. At any point the justice system will just delay or avoid reconsidering any issues, and without the spotlight provided by arguments of innocence made by death penalty convicts, people will just assume what’s easy to believe: our criminal justice system gets it right and the innocent are rarely convicted. If the price to pay for this is just 4% incorrect executions, I think it’s worth paying.
Of course, it should be noted that the rate of innocent people being convicted will be much higher outside of death penalty cases. Inmates on death row had cases that not only went to trial but received far more scrutiny than even those non-death-penalty cases that do go to trial. Personally, I wouldn’t be surprised if the true rate of people in prison for crimes they didn’t commit were as high as 10-20%.
How well does the US justice system work? Given that many states still carry out the death penalty, it’s a rather significant question. Some biostatisticians have teamed up with lawyers in an attempt to provide a scientific answer to the question. Based on their figures, at least 4.1 percent of the individuals sentenced to death will eventually be exonerated.
This is an important point not just about AI software but about discussions of race and gender more generally. Accurately reporting (or predicting) facts that, all too often, are the unfortunate result of a long history of oppression or simple random variation isn’t bias.
Personally, I feel that the social norm which regards accurate observation of facts such as (as mentioned in the article) racial differences in loan repayment rate conditional on wealth to be a reflection of bias is just a way of pretending society’s social warts don’t exist. Only by accurately reporting such effects can we hope to identify and rectify the causes, e.g., perhaps differences in treatment make employment less stable for certain racial groups or whether or not the bank officer looks like you affects likelihood of repayment. Our unwillingness to confront these issues places our personal interest in avoiding the risk of seeming racist/sexist over the social good of working out and addressing the causes of these differences.
Ultimately, the society I want isn’t the wink-and-a-nod culture in which people all mouth platitudes while we implicitly reward people for denying underrepresented groups loans or spots in colleges or whatever. I think we end up with a better society (not the best, see below) when the bank’s loan evaluation software spits out a number which bakes in all available correlations (even the racial ones) and rewards the loan officer for making good judgements of character independent of race. The alternative is a system where the software can’t consider that factor and we reward the loan officers who evaluate the character of applicants of color more negatively to compensate, or the bank executives who choose not to place branches in communities of color, and so on. Not only does this encourage a kind of wink-and-nod racism, but when banks optimize profits via subtle discrimination rather than explicit consideration of the numbers, one ends up creating a far higher barrier to minorities getting loans than a slight tick up in predicted default rate would. If we don’t want to use features like applicant race in decisions like loan offers, college acceptance etc., we need to affirmatively acknowledge these correlations exist and ensure we don’t implement incentives to be subtly racist, e.g., evaluate a loan officer’s performance relative to the (all-factors-included) default rate so we don’t implicitly reward loan officers and bank managers with biases against people of color (which itself imposes a barrier to minority loan officers).
In short, don’t let the shareholders and executives get away with passing the moral buck by saying ‘Ohh no, we don’t want to consider factors like race when offering loans’ and then turning around and using total profits as the incentive to ensure their employees do the discrimination for them. It may feel uncomfortable openly acknowledging such correlates, but not only is it necessary for tracing out the social causes of these ills, the other option is continued incentives for covert racism, especially the use of subtle social cues of being the ‘right sort’ to identify likely success, and that is what perpetuates the cycle.
In Florida, a criminal sentencing algorithm called COMPAS looks at many pieces of data about a criminal and computes the probability that they will commit new crimes. Judges use these risk scores in criminal sentencing and parole hearings to determine whether the offender should be kept in jail or released.
A number of people have raised concerns about intentionally trying to make contact with extraterrestrials. Most famously, Stephen Hawking warned that, based on the history of first contacts on Earth, we should fear enslavement, exploitation or annihilation by more advanced aliens, and the METI proposal to beam high-powered signals into space has drawn controversy as well as criticism from David Brin for METI’s failure to consult a broad range of experts. However, I’ve noticed a distinct lack of consideration of the potential benefits to alien life from such contact.
For instance, while the proposal to transmit the contents of the Google servers might limit our ability to trade in the future, it also potentially provides the aliens with whatever benefits they might get from our scientific insights or our historical experiences. If we were to receive a detailed account of an alien society’s struggle with climate change on their planet, that piece of data could be invaluable in choosing our own course, not to mention the benefit scientific advancements could offer.
Indeed, if, as many people seem to think, there is some extinction-level disaster waiting for civilizations once they reach, or slightly surpass, our current level of technology, then such preemptive broadcasts might be the only serious hope of getting at least one sapient species through this Great Filter. While it might be pretty unlikely that our transmission would start the chain of records from doomed civilizations that eventually pushes one species past the filter, the returns in utility from such an outcome are so massive that such considerations might well outweigh any effect on humanity in the utility calculus.
Anyway, given the huge potential upside of an intervention which might improve life across the entire galaxy (even if at very low probability), I was wondering if anyone has done even back-of-the-envelope calculations to estimate how funding projects that try to transmit useful data to extraterrestrials compares in cost-effectiveness to more earthly projects.
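For what it’s worth, here is the shape such a back-of-the-envelope calculation would take. Every input below is an invented placeholder, not an estimate anyone has defended:

```python
# Shape of a back-of-the-envelope cost-effectiveness estimate for broadcasting
# data to extraterrestrials. Every number is an invented placeholder; shifting
# any input by an order of magnitude shifts the conclusion by the same factor.
p_received_and_used = 1e-9           # chance any civilization receives and uses it
individuals_per_civilization = 1e10  # sapient individuals helped if one does
civilizations_in_range = 1e3         # civilizations the signal could reach
program_cost_usd = 1e8               # assumed cost of the transmission program

expected_individuals_helped = (
    p_received_and_used * individuals_per_civilization * civilizations_in_range
)
cost_per_individual = program_cost_usd / expected_individuals_helped

print(f"expected individuals helped: {expected_individuals_helped:,.0f}; "
      f"cost per individual: ${cost_per_individual:,.2f}")
# → expected individuals helped: 10,000; cost per individual: $10,000.00
```

With these made-up inputs the intervention looks roughly competitive with earthly charities, but dropping the first input to 1e-12 makes it a thousand times worse; that sensitivity is exactly why writing the calculation down is useful.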
So my understanding (which might be wrong) is that (with a few rare exceptions) the paleontological value of fossil bones is entirely a function of their 3D shape (and perhaps a small sample of the material they are made of) and the information about where and in what conditions they are found.
Given that we now have 3D scanners shouldn’t museums and universities be selling off the originals to finance more research? Or am I missing something?
I’d add that the failure to secure greater funding for new expeditions means we are constantly losing potential fossils to erosion, looters, damage etc. It’s crazy to think that the optimal overall scientific end is served by selling none of the fossils in institutional collections (even the low-value ones) while knowing that there are probably high-value fossils being lost because we aren’t finding them before they are damaged or the land is developed or whatever.
Also, one could simply include buy-back, borrowing or sampling clauses in any sale. Thus, at worst, when the museum wants to do later sampling it must buy the fossil back or partially compensate the current private owner, putting the museum in a strictly better situation.
I think something that is missing in recent conversations about sexual harassment is the fact that this is part of a larger phenomenon in which those with power can genuinely believe that their harassing behavior is ‘just good fun’ and that their victim doesn’t really mind.
It is the same thing we see when bullies (of either sex) tease their victims or when more popular friends denigrate the social failings of their less popular friends. Indeed, we see this in any number of contexts.
I think it’s important to understand this for a couple of reasons. First, if we want to actually fix the problem we need to understand that this isn’t just a matter of being a good person. Unless good people actively watch for this phenomenon, it seems they are psychologically vulnerable to thinking they are behaving appropriately despite causing real pain.
It’s also important because we need to recognize that this kind of bullying and mean treatment causes pain regardless of whether it has sexual overtones. There are extra concerns when sexual issues are thrown into the mix, but the basic problem remains the same. Also, recognizing it as part of a larger, non-gender-specific problem helps remove the distracting gender-war aspect and lets people of both genders focus on what makes things better rather than on how to demonize and blame the other sex.
Also, personally, I’d love to know what underlies this tendency. Despite being someone who has very much been the victim of this kind of behavior, it’s disgustingly easy to slip into it myself without noticing. It’s like there is a kind of intoxication of social status that inclines one to ignore the feelings and concerns of those with less status than ourselves.
But if all the recent social changes accomplish is to raise the relative social status of women as a group, without systematic change to make this behavior less common, all we will achieve in the long run is changing who is treated badly rather than actually making the world a substantially better place… and the next group on the bottom may not have the kind of internal cohesion and social power needed to bring the issue to public attention again.