Reply to Criticism on my EA Post

My previous post, “EA Has A Lying Problem”, received a lot of criticism, and I’d like to address some of it here.

I was very impressed by what I learned about EA discourse norms from preparing this post and responding to comments on it. I’m appreciating anew that this is a community where people really do the thing of responding directly to arguments, updating on evidence, and continuing to practice discourse instead of collapsing into verbal fights.  I’m going to try to follow that norm in this post.

Structurally, what I did in my previous post was

  • quote some EAs making comments on forums and Facebook
  • interpret what I think is the attitude behind those quotes
  • claim that the quotes show a pervasive problem in which the EA community doesn’t value honesty enough.

There are three possible points of failure in this argument:

  • The quotes don’t mean what I took them to mean
  • The views I claimed EAs hold are not actually bad
  • The quotes aren’t evidence of a broader problem in EA.

There’s also a possible prudential issue: that I may have, indeed, called attention to a real problem, but that my tone was too extreme or my language too sloppy, and that this is harmful.

I’m going to address each of these possibilities separately.

Possibility 1: The quotes don’t mean what I took them to mean

Case 1: Ben Todd’s Quotes on Criticism

I described Ben Todd as asking people to consult with EA orgs before criticizing them, and as heavily implying that it’s more useful for orgs to prioritize growth over engaging with the kind of highly critical people who are frequent commenters on EA debates.

I went on to claim that this underlying attitude is going to favor growth over course correction, and prioritize “movement-building” by gaining appeal among uncritical EA fans, while ignoring real concerns.

I said,

Essentially, this maps to a policy of “let’s not worry over-much about internally critiquing whether we’re going in the right direction; let’s just try to scale up, get a bunch of people to sign on with us, move more money, grow our influence.”  An uncharitable way of reading this is “c’mon, guys, our marketing doesn’t have to satisfy you, it’s for the marks!”  

This is a pretty large extrapolation from Todd’s actual comments, and I think I was putting words in his mouth that are much more extreme than anything he’d agree with. The quotes I pulled didn’t come close to proving that Todd actually wants to ignore criticism and pander to an uncritical audience.  It was wrong of me to give the impression that he’s deliberately pursuing a nefarious strategy.

And in the comments, he makes it clear that this wasn’t his intent and that he’s actually made a point of engaging with criticism:

Hi Sarah,

The 80,000 Hours career guide says what we think. That’s true even when it comes to issues that could make us look bad, such as our belief in the importance of the risks from artificial intelligence, or when [the] issues could be offputtingly complex, such as giving now vs. giving later and the pros and cons of earning to give. This is the main way we engage with users, and it’s honest.

As an organisation, we welcome criticism, and we post detailed updates on our progress, including our mistakes:

https://80000hours.org/about/credibility/evaluations/

https://80000hours.org/about/credibility/evaluations/mistakes/

I regret that my comments might have had the effect of discouraging important criticism.

My point was that public criticism has costs, which need to be weighed against the benefits of the criticism (whether or not you’re an act utilitarian). In extreme cases, organisations have been destroyed by memorable criticism that turned out to be false or unfounded. These costs, however, can often be mitigated with things like talking to the organisation first – this isn’t to seek permission, but to do things like check whether they’ve already written something on the topic, and whether your understanding of the issues is correct. For instance, GiveWell runs their charity reviews past the charity before posting, but that doesn’t mean their reports are always to the charity’s liking. I’d prefer a movement where people bear these considerations in mind as well, but it seems like they’re often ignored.

None of this is to deny that criticism is often extremely valuable.

I think this is plainly enough to show that Ben Todd is not anti-criticism. I’m also impressed that 80,000 Hours has a “mistakes page” in which they describe past failures (which is an unusual and especially praiseworthy sign of transparency in an organization).

Todd did, in his reply to my post, reiterate that he thinks criticism should face a high burden of proof because “organisations have been destroyed by memorable criticism that turned out to be false or unfounded.” I’m not sure this is a good policy; Ben Hoffman articulates some problems with it here.

But I was wrong to conflate this with an across-the-board opposition to criticism.  It’s probably fairer to say that Todd opposes adversarial criticism and prefers cooperative or friendly criticism (for example, he thinks critics should privately ask organizations to change their policies rather than publicly lambaste them for having bad policies).

I still think this is a mistake on his part, but when I framed it as “EA Leader says criticizing EA orgs is harmful to the movement”, I was exaggerating for effect, and I probably shouldn’t have done that.

Case 2: Robert Wiblin on Promises

I quoted Robert Wiblin on his interpretation of the Giving What We Can pledge, and interpreted Wiblin’s words to mean that he doesn’t think the pledge is morally binding.

I think this is pretty clear-cut and I interpreted Wiblin correctly.

The context there was that Alyssa Vance, in the original post, had said that many people might rationally choose not to take the pledge because unforeseen financial circumstances might make it inadvisable in future. She said that Wiblin had previously claimed that this was not a problem, because he didn’t view the pledge as binding on his future self:

…pledge taker Rob Wiblin said that, if he changed his mind about donating 10% every year being the best choice, he would simply un-take the pledge.

Wiblin doesn’t think that “maybe I won’t be able to afford to give 10% of my income in future” is a good enough reason for people to choose not to pledge 10% of their lifetime income, because if they ever did become poor, they could just stop giving.

Some commenters claimed that Wiblin doesn’t have a cavalier attitude towards promises; he just thinks that in extreme cases it’s okay to break them.  In Jewish ritual law, it’s permissible to break a commandment if it’s necessary to save a human life, but that doesn’t mean that the overall attitude towards the commandments is casual.

However, I think it does imply a cavalier attitude towards promises to say that you shouldn’t hesitate to make them on the grounds that you might not want to keep them.  If you don’t think, before making a lifelong pledge, that people should think “hmm, am I prepared to make this big a commitment?” and in some cases answer “no”, then you clearly don’t think that the pledge is a particularly strong commitment.

Case 3: Robert Wiblin on Autism

Does Robert Wiblin actually mean it as a pejorative when he speculates that maybe the reason some people are especially hesitant to commit to the GWWC pledge is that they’re on the autism spectrum?

Some people (including the person he said it to, who is autistic) didn’t take it as a negative.  And, in principle, if we aren’t biased against disabled people, “autistic” should be a neutral descriptive word, not a pejorative.

But we do live in a society where people throw around “autistic” as an insult to refer to anybody who supposedly has poor social skills, so in context, Wiblin’s comment does have a pejorative connotation.

Moreover, Wiblin was using the accusation of autism as a reason to dismiss the concerns of people who are especially serious about keeping promises. It’s equivalent to saying “your beliefs are the result of a medical condition, so I don’t have to take them seriously.”  He’s medicalizing the beliefs of those who disagree with him.  Even if his opponents are autistic, if he respected them, he’d take their disagreement seriously.

Case 4: Jacy Reese on Evidence from Intuition

I quoted Jacy Reese responding to criticism about his cost-effectiveness estimates by saying that the evidence base in favor of leafleting includes his intuition and studies that are better described as “evidence against the effectiveness of leafleting.”

His and ACE’s defense of the leafleting studies as merely “weak evidence” for leafleting is a matter of public record in many places. He definitely believes this.

Does he really think that his intuition is evidence, or did he just use ambiguous wording? I don’t know, and I’d be willing to concede that this isn’t a big deal.

Possibility 2: The views I accused EAs of holding are not actually bad.

Case 1: Dishonesty for the greater good might sometimes be worthwhile.

A number of people in the comments on my previous post argued that I need to weigh the harms of dishonest or misleading information against its benefits.

First of all, the fact that people are making these arguments at least partly belies the notion that all EAs oppose lying across the board; I’ll say more about the prevalence of these views in the community in the next section.

Holly Elmore:

What if, for the sake of argument, it *was* better to persuade easy marks to take the pledge and give life-saving donations than to persuade fewer people more gently and (as she perceives it) respectfully? How many lives is extra respect worth? She’s acting like this isn’t even an argument.

This is a more general problem I’ve had with Sarah’s writing and medical ethics in general – the fixation on meticulously informed consent as if it’s the paramount moral issue.

Gleb Tsipursky:

If you do not lie, that’s fine, but don’t pretend that you care about doing the most good, please. Just don’t. You care about being as transparent and honest as possible over doing the most good.

I’m including Gleb here, even though he’s been kicked out of the EA community, because he is saying the same things as Holly Elmore, who is a respected member of the community.  There may be more EAs out there sharing the same views.

So, cards on the table: I am not an act-utilitarian. I am a eudaimonistic virtue ethicist. What that means is that I believe:

  • The point of behaving ethically is to have a better life for yourself.
  • Dishonesty will predictably damage your life.
  • If you find yourself tempted to be dishonest because it seems like a good idea, you should instead trust that the general principle of “be honest” is more reliable than your guess that lying is a good idea in this particular instance.

(Does this apply to me and my lapses in honesty?  YOU BET.  Whenever it seems like a good idea at the time for me to deceive, I wind up suffering for it later.)

I also believe consent is really important.

I believe that giving money to charitable causes is a non-obligatory personal decision, while respecting consent to a high standard is not.

Are these significant values differences with many EAs? Yes, they are.

I wasn’t honest enough in my previous post about this, and I apologize for that. I should have owned my beliefs more candidly.

I also exaggerated for effect in my previous post, and that was misleading, and I apologize for that. Furthermore, in the comments, I claimed that I intended to exaggerate for effect; that was an emotional outburst and isn’t what I really believe. I don’t endorse dishonesty “for a good cause”, and on occasions when I’ve gotten upset and yielded to temptation, it has always turned out to be a bad idea that came back to bite me.

I do think that even if you are a more conventional utilitarian, there are arguments in favor of always being honest, not just when the local benefits outweigh the costs.

Eliezer Yudkowsky and Paul Christiano have written about why utilitarians should still have integrity.

One way of looking at this is rule-utilitarianism: there are gains from being known to be reliably trustworthy.

Another way of looking at this is the comment about “financial bubbles” I made in my previous post.  If utilitarians take their best guess about what action is the Most Good, and inflate public perceptions of its goodness so that more people will take that action, and encourage the public to further inflate perceptions of its goodness, then errors in people’s judgments of the good will expand without bound.  A highly popular bad idea will end up dominating most mindshare and charity dollars. However, if utilitarians critique each other’s best guesses about which actions do the most good, then bad ideas will get shot down quickly, to make room for good ones.
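To make the bubble comparison concrete, here is a minimal sketch in Python (my own illustration, not anything from the original post) of the two regimes: one where promoters inflate the perceived goodness of an action each round with no check against reality, and one where critics pull perceptions back toward the action’s true value. All the constants are illustrative assumptions.

    # Toy model of the "bubble" dynamic described above.
    # With pure social amplification, the error compounds without bound;
    # with even modest critique, it settles at a bounded level.

    TRUE_VALUE = 1.0       # the action's actual goodness (hypothetical units)
    HYPE = 1.05            # promoters inflate perceived goodness 5% per round
    CRITIQUE_WEIGHT = 0.2  # fraction of the gap to truth that critics close per round

    def perceived_goodness(rounds: int, critique: bool) -> float:
        perceived = 1.5    # start from an initial overestimate
        for _ in range(rounds):
            perceived *= HYPE                    # promotion inflates perception
            if critique:                         # critics compare perception to reality
                perceived -= CRITIQUE_WEIGHT * (perceived - TRUE_VALUE)
        return perceived

    print(perceived_goodness(100, critique=False))  # ~197: the error grows without bound
    print(perceived_goodness(100, critique=True))   # ~1.25: the error stays bounded

Under these toy assumptions, unchecked promotion multiplies the initial error indefinitely, while a community that keeps checking claims against ground truth caps it quickly; that is the sense in which mutual critique pops bubbles.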

Case 2: It’s better to check in with EA orgs before criticizing them

Ben Todd, and some people in the comments, have argued that it’s better to run critical blog posts by EA orgs before making those criticisms public.  This rhymes a little with the traditional advice to admonish friends in private but praise them in public, in order to avoid causing them to lose face.  The idea seems to be that public criticism will be less accurate and also that it will draw negative attention to the movement.

Now, some of the issues I criticized in my blog post have also been brought up by others, both publicly and privately, which is where I first heard about them. But I don’t agree with the basic premise in the first place.

Journalists check quotes with sources, it’s true, and usually get comment from the organizations they’re reporting on. But bloggers are not journalists, first of all.  A blog post is more like engaging in an extended conversation than reporting the news. Some of that conversation is with EA orgs and their leaders — this post, and the threads it pulls from, respond to writings of various degrees of “officialness” coming from EA orgs.  I think the public record of discussion is enough of a “source” for this purpose; we know what was said, by whom, and when, and there’s no ambiguity about whether the comments were really made.

What we don’t necessarily know without further discussion is what leaders of EA orgs mean, and what they say behind closed doors. It may be that their quotes don’t represent their intent.  I think this is the gist of what people saying “talk to the orgs in private” mean — if we talked to them, we’d understand that they’re already working on the problem, or that they don’t really have the problematic views they seem to have, etc.

However, I think this is an unfair standard.  “Talk to us first to be sure you’re getting the real story” is extra work for both the blogger and the EA org (do you really have to check in with GWWC every time you discuss the pledge?)

And it’s trying to massage the discussion away from sharp, adversarial critics. A journalist who got his stories about politics almost entirely from White House sources, and relied very heavily on his good relationship with the White House, would have a conflict of interest and would probably produce biased reporting. You don’t want all the discussion of EA to be coming from people who are cozy with EA orgs.  You don’t necessarily want all discussion to be influenced by “well, I talked to this EA leader, and I’m confident his heart’s in the right place.”

There’s something valuable about having a conversation going on in public. It’s useful for transparency and it’s useful for common knowledge. EA orgs like GiveWell and 80K are unusually transparent already; they’re engaging in open dialogue with their readers and donors, rather than funneling all communication through a narrow, PR-focused information stream.  That’s a remarkable and commendable choice.

But it’s also a risky one; because they’re talking a lot, they can incur reputational damage if they’re quoted unfavorably (as I quoted them in my previous post).  So they’re asking us, the EA and EA-adjacent community, to do some work in guarding their reputation.

I think this is not necessarily a fair thing to expect from everyone discussing an EA topic. Some people are skeptical of EA as a whole, and thus don’t have a reason to protect its reputation. Some people, like Alyssa in her post on the GWWC pledge, aren’t even accusing an org of doing anything wrong, just discussing a topic of interest to EAs like “who should and shouldn’t take the pledge?” She couldn’t reasonably have foreseen that this would be perceived as an attack on GWWC’s reputation.

I think, if an EA org says or does something in public that people find problematic, they should expect to be criticized in public, and not necessarily get a chance to check over the criticism first.

Possibility 3: The quotes I pulled are not strong evidence of a big problem in EA.

I picked quotes that I and a few friends had noticed offhand as unusually bad.  So, obviously, it’s not the same thing as a survey of EA-wide attitudes.

On the other hand, “picking egregious examples” is a perfectly fine way to suggest that there may be a broader trend. If you know that a few people in a Presidential administration have made racist remarks, for instance, it’s not out of line to suggest that the administration has a racism problem.

So, I stand behind cherry-picked examples as a way to highlight trends, in the context of “something suspicious is going on here, maybe we should pay attention to it.”

The fact that people are, in response to my post, defending the practice of lying for the greater good is also evidence that these aren’t entirely isolated cases.

Of course, it’s possible that the quotes I picked aren’t egregiously bad, but I think I’ve covered my views on that in the previous two sections.

I think that, given the Intentional Insights scandal, it’s reasonable to ask the question “was this just one guy, or is the EA community producing a climate that shelters bullshit artists?”  And I think there’s enough evidence to be suspicious that the latter is true.

Possibility 4: My point stands, but my tactics were bad

I did not handle this post like a pro.

I used the title “EA Has a Lying Problem”, which is inflammatory, and also (worse, in my view) not quite the right word. None of the things I quoted were lies. They were defenses of dishonest behavior, or, in Ben Todd’s case, what I thought was a bias against transparency and open debate. I probably should have called it “dishonesty” rather than “lying.”

In general, I think I was inflammatory in a careless rather than a pointed way. I do think it’s important to make bad things look bad, but I didn’t take care to avoid discrediting a vast swath of innocents, and that was wrong of me.

Then, I got emotional in the comments section, and expressed an attitude of “I’m a bad person who does bad things on purpose”, which is rude, untrue, and not a good look on me.

I definitely think these were mistakes on my part.

It’s also been pointed out to me that I could have raised my criticisms privately, within EA orgs, rather than going public with a potentially reputation-damaging post (damaging to my own reputation or to the EA movement’s).

I don’t think that would have been a good idea in my case.

When it comes to my own reputation, for better or for worse I’m a little reckless.  I don’t have a great deal of ability to consciously control how I’m perceived — things tend to slip out impulsively — so I try not to worry about it too much.  I’ll live with how I’m judged.

When it comes to EA’s reputation, I think it’s possible I should have been more careful. Some of the organizations I’ve criticized have done really good work promoting causes I care about.  I should have thought of that, and perhaps worded my post in a way that produced less risk of scandal.

On the other hand, I never had a close relationship with any EA orgs, and I don’t think internal critique would have been a useful avenue for me.

In general, I think I want to sanity-check my accusatory posts with more beta readers in future.  My blog is supposed to represent a pretty close match to my true beliefs, not just my more emotional impulses, and I should be more circumspect before posting stuff.

EA Has A Lying Problem

(I have since written a response to criticism of this post; see “Reply to Criticism on my EA Post” above.)

Why hold EA to a high standard?

“Movement drama” seems to be depressingly common — whenever people set out to change the world, they inevitably pick fights with each other, usually over trivialities.  What’s the point, beyond mere disagreeableness, of pointing out problems in the Effective Altruism movement? I’m about to start some movement drama, and so I think it behooves me to explain why it’s worth paying attention to this time.

Effective Altruism is a movement that claims that we can improve the world more effectively with empirical research and explicit reasoning. The slogan of the Center for Effective Altruism is “Doing Good Better.”

This is a moral claim (they say they are doing good) and a claim of excellence (they say that they offer ways to do good better).

EA is also a proselytizing movement. It tries to raise money, for EA organizations as well as for charities; it also tries to “build the movement”, increase attendance at events like the EA Global conference, get positive press, and otherwise get attention for its ideas.

The Atlantic called EA “generosity for nerds”, and I think that’s a fair assessment. The “target market” for EA is people like me and my friends: young, educated, idealistic, Silicon Valley-ish.

The origins of EA are in academic philosophy. Peter Singer and Toby Ord were the first to promote the idea that people have an obligation to help the developing world and reduce animal suffering, on utilitarian grounds.  The leaders of the Center for Effective Altruism, Giving What We Can, 80,000 Hours, The Life You Can Save, and related EA orgs, are drawn heavily from philosophy professors and philosophy majors.

What this means, first of all, is that we can judge EA activism by its own standards. These people are philosophers who claim to be using objective methods to assess how to do good; so it’s fair to ask “Are they being objective? Are they doing good? Is their philosophy sound?”  It’s admittedly hard for young organizations to prove they have good track records, and that shouldn’t count against them; but honesty, transparency, and sound arguments are reasonable to expect.

Second of all, it means that EA matters.  I believe that individuals and small groups who produce original thinking about big-picture issues have always had outsize historical importance. Philosophers and theorists who capture mindshare have long-term influence.  Young people with unusual access to power and interest in “changing the world” stand a good chance of affecting what happens in the coming decades.

So it matters if there are problems in EA. If kids at Stanford or Harvard or Oxford are being misled or influenced for the worse, that’s a real problem. They actually are, as the cliche goes, “tomorrow’s leaders.” And EA really seems to be prominent among the ideologies competing for the minds of the most elite and idealistic young people.  If it’s fundamentally misguided or vulnerable to malfeasance, I think that’s worth talking about.

Lying for the greater good

Imagine that you are a perfect act-utilitarian. You want to produce the greatest good for the greatest number, and, magically, you know exactly how to do it.

Wouldn’t a pretty plausible course of action be “accumulate as much power and resources as possible, so you can do even more good”?

Taken to an extreme, this would look indistinguishable from the actions of someone who just wants to acquire as much power as possible for its own sake.  Actually building Utopia is always something to get around to later; for now you have to build up your strength, so that the future utopia will be even better.

Lying and hurting people in order to gain power can never be bad, because you are always aiming at the greater good down the road, so anything that makes you more powerful should promote the Good, right?

Obviously, this is a terrible failure mode. There’s a reason J.K. Rowling gave her Hitler-like figure Grindelwald the slogan “For the Greater Good.”  Ordinary, children’s-story morality tells us that when somebody is lying or hurting people “for the greater good”, he’s a bad guy.

A number of prominent EA figures have made statements that seem to endorse lying “for the greater good.”  Sometimes these statements are arguably reasonable, taken in isolation. But put together, there starts to be a pattern.  It’s not quite storybook-villain-level, but it has something of the same flavor.

There are people who are comfortable sacrificing honesty in order to promote EA’s brand.  After all, if EA becomes more popular, more people will give to charity, and that charity will do good, and that good may outweigh whatever harm comes from deception.

The problem with this reasoning should be obvious. The argument would work just as well if EA did no good at all, and only claimed to do good.

Arbitrary or unreliable claims of moral superiority function like bubbles in economic markets. If you never check the value of a stock against some kind of ground-truth reality, if everyone only looks at its current price and buys or sells based on that, we’ll see prices being inflated based on no reason at all.  If you don’t insist on honesty in people’s claims of “for the greater good”, you’ll get hijacked into helping people who aren’t serving the greater good at all.

I think it’s worth being suspicious of anybody who says “actually, lying is a good idea” and has a bunch of intelligence and power and moral suasion on their side.

It’s a problem if a movement is attracting smart, idealistic, privileged young people who want to “do good better” and teaching them that the way to do the most good is to lie.  It’s arguably even more of a problem than, say, lobbyists taking young Ivy League grads under their wing and teaching them to practice lucrative corruption.  The lobbyists are appealing to the most venal among the youthful elite.  The nominally-idealistic movement is appealing to the most ethical, and corrupting them.

The quotes that follow are going to look almost reasonable. I expect some people to argue that they are in fact reasonable and innocent and I’ve misrepresented them. That’s possible, and I’m going to try to make a case that there’s actually a problem here; but I’d also like to invite my readers to take the paranoid perspective for a moment. If you imagine mistrusting these nice, clean-cut, well-spoken young men, or mistrusting Something that speaks through them, could you see how these quotes would seem less reasonable?

Criticizing EA orgs is harmful to the movement

In response to an essay on the EA forums criticizing the Giving What We Can pledge (a promise to give 10% of one’s income to charity), Ben Todd, the CEO of 80,000 Hours, said:

Topics like this are sensitive and complex, so it can take a long time to write them up well. It’s easy to get misunderstood or make the organisation look bad.

At the same time, the benefits might be slight, because (i) it doesn’t directly contribute to growth (if users have common questions, then add them to the FAQ and other intro materials) or (ii) fundraising (if donors have questions, speak to them directly).

Remember that GWWC is getting almost 100 pledges per month atm, and very few come from places like this forum. More broadly, there’s a huge number of pressing priorities. There’s lots of other issues GWWC could write about but hasn’t had time to as well.

If you’re wondering whether GWWC has thought about these kinds of questions, you can also just ask them. They’ll probably respond, and if they get a lot of requests to answer the same thing, they’ll probably write about it publicly.

With figuring out strategy (e.g. whether to spend more time on communication with the EA community or something else) GWWC writes fairly lengthy public reviews every 6-12 months.

He also said:

None of these criticisms are new to me. I think all of them have been discussed in some depth within CEA.

This makes me wonder if the problem is actually a failure of communication. Unfortunately, issues like this are costly to communicate outside of the organisation, and it often doesn’t seem like the best use of time, but maybe that’s wrong.

Given this, I think it also makes sense to run critical posts past the organisation concerned before posting. They might have already dealt with the issue, or have plans to do so, in which [case] posting the criticism is significantly less valuable (because it incurs similar costs to the org but with fewer benefits). It also helps the community avoid re-treading the same ground.

In other words: the CEO of 80,000 Hours thinks that people should “run critical posts past the organization concerned before posting”, but also thinks that it might not be worth it for GWWC to address such criticisms because they don’t directly contribute to growth or fundraising, and addressing criticisms publicly might “make the organization look bad.”

This cashes out to saying “we don’t want to respond to your criticism, and we also would prefer you didn’t make it in public.”

It’s normal for organizations not to respond to every criticism — the Coca-Cola company doesn’t have to respond to every internet comment that says Coke is unhealthy — but Coca-Cola’s CEO doesn’t go around shushing critics either.

Todd seems to be saying that the target market of GWWC is not readers of the EA forum or similar communities, which is why answering criticism is not a priority. (“Remember that GWWC is getting almost 100 pledges per month atm, and very few come from places like this forum.”) Now, “places like this forum” seems to mean communities where people identify as “effective altruists”, geek out about the details of EA, spend a lot of time researching charities and debating EA strategy, etc.  Places where people might question, in detail, whether pledging 10% of one’s income to charity for life is actually a good idea or not.  Todd seems to be implying that answering the criticisms of these people is not useful — what’s useful is encouraging outsiders to donate more to charity.

Essentially, this maps to a policy of “let’s not worry over-much about internally critiquing whether we’re going in the right direction; let’s just try to scale up, get a bunch of people to sign on with us, move more money, grow our influence.”  An uncharitable way of reading this is “c’mon, guys, our marketing doesn’t have to satisfy you, it’s for the marks!”  Jane Q. Public doesn’t think about details, she doesn’t nitpick, she’s not a nerd; we tell her about the plight of the poor, she feels moved, and she gives.  That’s who we want to appeal to, right?

The problem is that it’s not quite fair to Jane Q. Public to treat her as a patsy rather than as a peer.

You’ll see echoes of this attitude come up frequently in EA contexts — the insinuation that criticism is an inconvenience that gets in the way of movement-building, and movement-building means obtaining the participation of the uncritical.

In responding to a criticism of a post on CEA fundraising, Ben Todd said:

I think we should hold criticism to a higher standard, because criticism has more costs. Negative things are much more memorable than positive things. People often remember criticism, perhaps just on a gut level, even if it’s shown to be wrong later in the thread.

This misses the obvious point that criticism of CEA has costs to CEA, but possibly has benefits to other people if CEA really has flaws.  It’s a sort of “EA, c’est moi” narcissism: what’s good for CEA is what’s good for the Movement, which is what’s good for the world.

Keeping promises is a symptom of autism

In the same thread criticizing the Giving What We Can pledge, Robert Wiblin, the director of research at 80,000 Hours, said:

Firstly: I think we should use the interpretation of the pledge that produces the best outcome. The use GWWC and I apply is completely mainstream use of the term pledge (e.g. you ‘pledge’ to stay with the person you marry, but people nonetheless get divorced if they think the marriage is too harmful to continue).

A looser interpretation is better because more people will be willing to participate, and each person gain from a smaller and more reasonable push towards moral behaviour. We certainly don’t want people to be compelled to do things they think are morally wrong – that doesn’t achieve an EA goal. That would be bad. Indeed it’s the original complaint here.

Secondly: An “evil future you” who didn’t care about the good you can do through donations probably wouldn’t care much about keeping promises made by a different kind of person in the past either, I wouldn’t think.

Thirdly: The coordination thing doesn’t really matter here because you are only ‘cooperating’ with your future self, who can’t really reject you because they don’t exist yet (unlike another person who is deciding whether to help you).

One thing I suspect is going on here is that people on the autism spectrum interpret all kinds of promises to be more binding than neurotypical people do (e.g. https://www.reddit.com/r/aspergers/comments/46zo2s/promises/). I don’t know if that applies to any individual here specifically, but I think it explains how some of us have very different intuitions. But I expect we will be able to do more good if we apply the neurotypical intuitions that most people share.

Of course if you want to make it fully binding for yourself, then nobody can really stop you.

In other words: Rob Wiblin thinks that promising to give 10% of income to charity for the rest of your life, which the Giving What We Can website describes as “a promise, or oath, to be made seriously and with every expectation of keeping it”, does not literally mean committing to actually do that. It means that you can quit any time you feel like it.

He thinks that you should interpret words with whatever interpretation will “do the most good”, instead of as, you know, what the words actually mean.

If you respond to a proposed pledge with “hm, I don’t know, that’s a really big commitment”, you must just be a silly autistic who doesn’t understand that you could just break your commitment when it gets tough to follow!  The movement doesn’t depend on weirdos like you, it needs to market to normal people!

I don’t know whether to be more frustrated with the ableism or the pathologization of integrity.

Once again, there is the insinuation that the growth of EA depends on manipulating the public — acquiring the dollars of the “normal” people who don’t think too much and can’t keep promises.

Jane Q. Public is stupid, impulsive, and easily led.  That’s why we want her.

“Because I Said So” is evidence

Jacy Reese, a prominent animal-rights-focused EA, responded to some criticism of Animal Charity Evaluators’ top charities on Facebook as follows:

Just to note, we (or at least I) agree there are serious issues with our leafleting estimate and hope to improve it in the near future. Unfortunately, there are lots of things that fit into this category and we just don’t have enough research staff time for all of them.

I spent a good part of 2016 helping make significant improvements to our outdated online ads quantitative estimate, which now aggregates evidence from intuition, experiments, non-animal-advocacy social science, and veg pledge rates to come up with the “veg-years per click” estimate. I’d love to see us do something similar with the leafleting estimate, and strongly believe we should keep trying, rather than throwing our hands in the air and declaring weak evidence is “no evidence.”

For context here, the “leafleting estimate” refers to the rate at which pro-vegan leaflets cause people to eat less meat (and hence the impact of leafleting advocacy on reducing animal suffering).  The studies ACE used to justify the effectiveness of leafleting actually showed that leafleting was ineffective: an uncontrolled study of 486 college students shown a pro-vegetarianism leaflet found that only one student (0.2%) went vegetarian, while a controlled study conducted by ACE itself found that consumption of animal products was no lower in the leafleted group than in the control group.  The criticisms of ACE’s leafleting estimate were not merely that it was flawed, but that it literally fabricated numbers based on a “hypothetical.”  ACE publishes “top charities” that it claims are effective at saving animal lives; the leafleting effectiveness estimates are used to justify why people should give money to certain veganism-activist charities.  A made-up reason to support a charity isn’t “weak evidence”; it’s lying.
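As a back-of-the-envelope check (my own arithmetic, not ACE’s or its critics’), it’s worth asking what the uncontrolled study’s headline figure, 1 conversion out of 486 students, could even support at face value. A Wilson score interval is one standard way to bound the plausible conversion rate; the choice of method here is mine.

    from math import sqrt

    def wilson_interval(successes: int, n: int, z: float = 1.96):
        """95% Wilson score interval for a binomial proportion."""
        p_hat = successes / n
        denom = 1 + z**2 / n
        center = (p_hat + z**2 / (2 * n)) / denom
        half = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
        return center - half, center + half

    low, high = wilson_interval(1, 486)  # the uncontrolled leafleting study
    print(f"{low:.2%} to {high:.2%}")    # roughly 0.04% to 1.16%

Even on this maximally charitable reading, ignoring the lack of a control group, the data are consistent with a conversion rate of at most about one percent, and the controlled study found no effect at all. That is the sense in which critics describe these studies as evidence against leafleting rather than weak evidence for it.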

In that context, it’s exceptionally shocking to hear Reese talking about “evidence from intuition,” which is…not evidence.

Reese continues:

Intuition is certainly evidence in this sense. If I have to make quick decisions, like in the middle of a conversation where I’m trying to inspire someone to help animals, would I be more successful on average if I flipped a coin for my responses or went with my intuition?

But that’s not the point.  Obviously, my intuition is valuable to me in making decisions on the fly.  But my intuition is not a reason why anybody else should follow my lead. For that, I’d have to give, y’know, reasons.

This is what the word “objectivity” means. It is the ability to share data between people, so that each can independently judge for themselves.

Reese is committing the same kind of narcissistic fallacy we saw before: he forgets that his readers are not Jacy Reese, and therefore “Jacy Reese thinks so” is not a compelling reason to them.  Or perhaps he’s hoping that his donors can be “inspired” to give money to organizations run by his friends, simply because he tells them to.

In a Facebook thread on Harrison Nathan’s criticism of leafleting estimates, Jacy Reese said:

I have lots of demands on my time, and like others have said, engaging with you seems particularly unlikely to help us move forward as a movement and do more for animals.

Nobody is obligated to spend time replying to anyone else, and it may be natural to get a little miffed at criticism, but I’d like to point out the weirdness of saying that criticism doesn’t “help us move forward as a movement.”  If a passenger in your car says “hey, you just missed your exit”, you don’t complain that he’s keeping you from moving forward. That’s the whole point. You might be moving in the wrong direction.

In the midst of this debate somebody commented,

“Sheesh, so much grenade throwing over a list of charities!  I think it’s a great list!”

This is a nice, Jane Q. Public, kind of sentiment.  Why, indeed, should we argue so much about charities? Giving to charity is a nice thing to do.  Why can’t we all just get along and promote charitable giving?

The point is, though — it’s giving to a good cause that’s a praiseworthy thing to do.  Giving to an arbitrary cause is not a good thing to do.

The whole point of the “effective” in “Effective Altruism” is that we, ideally, care about whether our actions actually have good consequences or not. We’d like to help animals or the sick or the poor, in real life. You don’t promote good outcomes if you oppose objectivity.

So what? The issue of exploitative marketing

These are informal comments by EAs, not official pronouncements.  And the majority of discussion of EA topics I’ve seen is respectful, thoughtful, and open to criticism.  So what’s the big deal if some EAs say problematic things?

There are some genuine scandals within the EA movement that pertain to deceptive marketing.  Intentional Insights, a supposed “EA” organization led by history professor Gleb Tsipursky, used astroturfing, paid for likes and positive comments, made false claims about its social media popularity, falsely claimed affiliation with other EA organizations, and may have required its employees to “volunteer” large amounts of unpaid labor.

To their credit, CEA repudiated Intentional Insights; Will MacAskill’s excellent post on the topic argued that EA needs to clarify shared values and guard against people co-opting the EA brand to do unethical things.  One of the issues he brought up was

People engaging in or publicly endorsing ‘ends justify the means’ reasoning (for example involving plagiarism or dishonesty)

which is a perfect description of Tsipursky’s behavior.

I would argue that the problem goes beyond Tsipursky.  ACE’s claims about leafleting, and the way ACE’s advocates respond to criticism about it, are very plausibly examples of dishonesty defended with “ends justify the means” rhetoric.

More subtly, the most central effective altruism organizations and the custodians of the “Effective Altruism” brand are CEA and its offshoots (80,000 Hours and Giving What We Can), which are primarily focused on movement-building. And sometimes the way they do movement-building risks promoting an exploitative rather than cooperative relationship with the public.

What do I mean by that?

When you communicate cooperatively with a peer, you give them “news they can use.”  Cooperative advertising is a benefit to the consumer — if I didn’t know that there are salons in my neighborhood that specialize in cutting curly hair, then you, as the salon, are helping me by informing me about your services. If you argue cooperatively in favor of an action, you are telling your peer “hey, you might succeed better at your goals if you did such-and-such,” which is helpful information. Even making a request can be cooperative; if you care about me, you might want to know how best to make me happy, so when I express my preferences, I’m offering you helpful information.

When you communicate exploitatively with someone, you’re trying to gain from their weaknesses. Some of the sleazier forms of advertising are good examples of exploitation; if you make it very difficult to unsubscribe from your service, or make spammy websites whose addresses are misspellings of common website names, or make the “buy” button large and the “no thanks” button tiny, you’re trying to get money out of people’s forgetfulness or clumsiness rather than their actual interest in your product.  If you back a woman into an enclosed space and try to kiss her, you’re trying to get sexual favors as a result of her physical immobility rather than her actual willingness.

Exploitativeness is treating someone like a mark; cooperativeness is treating them like a friend.

A remarkable amount of EA discourse is framed cooperatively.  It’s about helping each other figure out how best to do good.  That’s one of the things I find most impressive about the EA movement — compared to other ideologies and movements, it’s unusually friendly, exploratory, and open to critical thinking.

However, if EA orgs, as they grow and professionalize, are deliberately targeting growth among less-critical, less-intellectually-engaged, lower-integrity donors while dismissing intelligent and serious critics (and I think some of the discussions I’ve quoted on the GWWC pledge suggest they are), then I worry that they’re trying to get money out of people’s weaknesses rather than gaining from their strengths.

Intentional Insights used the traditional tactics of scammy, lowest-common-denominator marketing. To a sophisticated reader, their site would seem lame, even if you didn’t know about their ethical lapses. It’s buzzwordy, clickbaity, and unoriginal.  And this isn’t an accident, any more than it’s an accident that spam emails have poor grammar. People who are fussy about quality aren’t the target market for exploitative marketing. The target market for exploitative marketing is and always has been the exceptionally unsophisticated.  Old people who don’t know how to use the internet; people too disorganized to cancel their subscriptions; people too impulsive to resist clicking on big red buttons; sometimes even literal bots.

The opposite approach, if you don’t want to drift towards a pattern of exploitative marketing, is to target people who seek out hard-to-fake signals of quality.  In EA, this would mean paying attention to people who have high standards in ethics and accuracy, and treating them as the core market, rather than succumbing to the temptation to farm metrics of engagement from whomever it’s easiest to recruit in the short-term.

Using “number of people who sign the GWWC pledge” as a metric of engagement in EA is nowhere near as shady as paying for Facebook likes, but I think there’s a similar flavor of exploitability between them.  You don’t want to be measuring how good you are at “doing good” by counting how many people make a symbolic or trivial gesture.  (And the GWWC pledge isn’t trivial or symbolic for most people…but it might become so if people keep insisting it’s not meant as a real promise.)

EAs can fight the forces of sleaze by staying cooperative — engaging with those who make valid criticisms, refusing the temptation to make strong but misleading advertising claims, respecting rather than denigrating people of high integrity, and generally talking to the public like we’re reasonable people.

CORRECTION

A previous version of this post used the name and linked to a Facebook comment by a volunteer member of an EA organization. He isn’t an official employee of any EA organization, and his views are his own, so he viewed this as an invasion of his privacy, and he’s right. I’ve retracted his name and the link.

Ain’t Gonna Study War No More

Epistemic Status: Personal

People are often confused when I say I’m an anarchist, and it takes a while to explain what I mean, so I think it may be worth posting about.

The thing is, I believe in peace.

Yep, this is the old-fashioned non-aggression principle.  Consent. Voluntariness. Mutual benefit. Live and let live. All that jazz.

Nation shall not lift up sword against nation, neither shall they learn war any more. 

For me, it’s not an abstract formalism set up to justify why taxes should be lower or something.  Peace is a state that you can be at, or not, with others. Peace is what happens when two cats take a nap next to each other in a square of sunlight, and they don’t bother each other, but are okay in each other’s company. Peace is a state of deep, secure, calm nonintrusion.  Peace is what my husband and I try to practice in our marriage, for instance.

Peace means that nobody is giving orders, implicitly or explicitly. It means when you speak to a person you’re saying “I think you would value hearing this”, not “I’m going to try to alter you.”  It’s a kind of politeness and respect for privacy.  To get it exactly right takes a lot of work, and everyone makes mistakes at it.  But it’s a beautiful thing when it can be achieved.

Peace is the precondition for individual perception or creation. You have to be left alone for long enough to have a mind of your own. A child who gets enough time to play and dream will start making things. Poking and prodding interrupts that process.

Adults also have to be left alone to make things, if we are ever going to have nice things.  If you don’t let people make factories or houses or drugs, we won’t have any.

And cruelty hurts. Harshness hurts. Normal sympathy tells us that.  All things being equal, being mean is bad. I don’t care that it sounds childish, that’s what I actually believe.

Ok, but non-peace is everywhere. The world contains wars and governments, and pushy assholes, and probably always will as long as there are people. And there may be necessary evils, situations where aggression is unavoidable. Isn’t it naive of me to just stand here saying “peace is good”?

This is the point when I have to make clear that I’m talking about a stance rather than a system.  The question is always “what do I do?”, “where do I place the Sarah in the world?”  I don’t have a God’s-eye view; my understanding literally comes out of my own brain, which is embedded in one person, who exists in the world.  So there’s no real principled separation between believing and doing.

What defines you, as an agent with bounded computation, is what you focus on and what simplifying heuristics you use. Defining the “typical case” vs. the “outliers” is an inevitable act of frame control, an ineluctable form of choice, so you may as well do it intentionally.

My stance is that my attention belongs on the win-win, peaceful, productive parts of the world. My stance is to place myself on the side of aspiring to higher standards and aiming for joyful wins.  I think that outlook is both well suited to me and significantly undervalued in the public.  We may need people in this world who are all about making harsh tradeoffs, protecting against tail risks, guarding the worst off, being leaders or guardians or sheepdogs — but that’s a completely different frame, a different stance, and I don’t think it’s really possible to see the world through both simultaneously.

In the frame I find healthiest, the “typical case” is that people are individual persons who fare best when their consent is respected, that by default “doing well” and “doing good” correlate, that “do your own thing, seek growth and happiness, don’t aggress upon others” is usually the best bet, and that cases where that’s not possible are the exceptions and aberrations, to be worked around as best as possible.  Peace is the main course, force is a bitter condiment that we must occasionally taste.

This is the lens through which Leopold Bloom sees the world:

But it’s no use, says he. Force, hatred, history, all that. That’s not life for men and women, insult and hatred. And everybody knows that it’s the very opposite of that that is really life.

— What? says Alf.

— Love, says Bloom. I mean the opposite of hatred.

I’ve often noticed a hope deficit — people are very quick to try to decide which necessary evil they have to accept, and it doesn’t occur to them to ask “what would it look like if things were just straightforwardly good?”  There’s a creative power in optimism that people don’t seem to appreciate enough.

I think I’m not going to do that much more digging into social-science topics, because they aren’t as amenable to finding peaceful wins.  The framing puts me in the position of asking “do I support inflicting this harm or that harm on millions of people?”  And this is ridiculous; I, personally, will never do such a thing, and don’t want to.  If I ever make a valuable contribution to the world it will almost certainly be through making, not ruling. So anything that sets things up as the question of “what would I do, if I were a ruler” is corrosive.  The world is a mixture of peace and war, but I want my part in it to be peaceful.

Haidt-Love Relationship

Epistemic status: personal, exhortatory, expressive

Jonathan Haidt has an ideology.  In his academic life, he poses positive questions, but he definitely has a normative position as well. And you can see this most clearly in his speeches to young people, which are sermons on Haidtism.

Here is an example.

In it, he contrasts “Coddle U” with “Strengthen U,” two archetypal colleges. He’s clearly arguing in favor of psychological resilience, and against fragility. Let’s leave aside the question of whether feminists and other activists are actually oversensitive weenies, and whether trigger warnings are actually coddling, and engage with his main point, that it is better not to be an oversensitive weenie.

Haidt seems to see this as self-evident. The emotionally weak are to be mocked; the emotionally strong are to be respected.

I don’t find it as obvious.

Fragility can have a certain charm. Sensitive, romantic, tender spirits can be quite attractive.  The soft-hearted can be quick to show kindness. The easily-bruised can be alert to problems that more thick-skinned folks ignore.  We usually trust people’s sincerity more when they are moved to strong emotion.  A frail, innocent person is often a lovable person.  And who wouldn’t want to be lovable?

“Do you want to be strong or do you want to be fragile?” takes us back to Nietzsche’s old conflict of Herrenmoral and Sklavenmoral.  Is it good to be successful, skilled, strong, powerful (as opposed to weak, cowardly, unhealthy, contemptible)?  Or is it good to be innocent, pure, gentle, kind (as opposed to oppressive, selfish, cruel)?

Of course it’s possible to be both kind and strong.  Herrenmoral and Sklavenmoral are both pre-rational viewpoints, more like aesthetics than actual ethics.  It’s a question of whether you want to be this:
[Image: John Everett Millais, Ophelia]

or this:

Ultimately, the consideration in favor of strength is simply that the world contains threats.  Fragility may make you lovable, but it can also make you dead.  You don’t get to appreciate the benefits of sensitivity and tenderness if you’re dead.

Being strong enough to do well at the practicalities of the world — physical safety and health, economic security, enough emotional stability not to put yourself or others at risk — is, up to a point, an unalloyed good.

Think of it as a gambler’s ruin situation. You have to win or save enough to stay in the game.  Strength helps you stay in the game.

And because strength is necessary for survival, there’s something to respect in the pro-strength aesthetic.

From the outside, it can seem kind of mean and elitist. You’re scorning people for failure and pain? You think you’re better than the rest of us, just because you’re pretty or smart or tough or happy?

But another way of looking at it is having respect for the necessities of life.  If you consider that starvation is a thing, you’ll remember that food is valuable, and you’ll feel gratitude to the farmers who grow it. In the same way, you can have respect for intelligence, respect for competence, respect for toughness, respect for all skills.  You can be glad for them, because human skill drives out the darkness of death, the hard vacuum of space that surrounds us, and excellent humans are pinpricks of flame in the dark.  You can love that hard brilliance.

And if respect can tinge into love, love can shade into enjoyment. You can enjoy being awesome, or knowing people who are awesome.  It can be exhilarating.  It can be a high and heady pleasure.

And from that vantage point, it’s possible to empathize with someone who, like Haidt, scorns weakness. Maybe, once you’ve been paying attention to the high points of human ability, anything else seems rather dingy.  Maybe you think “It’s so much nicer here upon the heights, why would you want to be down in the valley?”  Maybe some of the people who seem “elitist” actually just want to be around the people who light them up, and have developed high standards for that.

Not to say that there doesn’t exist shallow, vindictive status-grabbing.  But there are also people who aren’t like that, who just prefer the excellent to the mediocre.

Or, on a smaller scale, there are those who seek out “positive people” and avoid “toxic people” — they’re orienting towards success and away from failure, towards strength and away from weakness, and this is an understandable thing to do.

An addict trying to get her life together would try hard to avoid weakness, temptation, backsliding — and this would be a good thing, and any decent person would cheer for her.  That kind of motivation is the healthy thing that drives people to choose strength over fragility.

So Haidt’s basic premise — that you want to be more strong than fragile — is believable.

His prescriptions for achieving that are risk tolerance and minimizing the negative.

I’m going to reframe his ideas somewhat so they refer to individuals.  He’s talking about a top-down perspective — how schools can make students stronger. I have an issue with that, because I think that “improving” people against their will is ethically questionable, and especially trying to “make people tough” by exposing them to adversity, if they have no intrinsic desire to toughen and no input into the type of “adversity” involved, is probably counterproductive.  However, people self-improve all the time, they make themselves tougher, and that’s a more fruitful perspective, in my opinion.

Risk tolerance is the self-motivated version of “we’re not going to coddle you.” It would mean seeking out challenges, looking for criticism, engaging with “hard truths”, going on adventures.  Trying things to test your mettle.

It’s pretty obvious why this works: small amounts of damage cause you to develop stronger defenses. Exercise produces micro-tears in muscle, so it grows back stronger.  Vaccines made of weakened virus stimulate immunity to that virus.  Intermittent, all-out efforts against fear or failure are good for you.

(You’re still playing to stay in the game, so an adversity that takes you out of the game altogether is not good for you. This is why I think it works much better if the individual’s judgment and motivation are engaged.  Voluntary choice is important. Authorities trying to “toughen kids up” against their will can kill them.)

Minimizing the negative means mentally shrinking the sources of your distress. Haidt cites Marcus Aurelius, Boethius, the Buddha, and the tenets of cognitive behavioral therapy as pointing at the same universal truths.

Now, of course, Stoicism, Buddhism, and modern psychology have very different visions of the good life. The ideal Stoic is a good citizen; the ideal Buddhist is an ascetic; the ideal psychological subject is “well.”  The ideal Stoic is protective of his soul; the ideal Buddhist is aware that his “self” does not exist.  Trying to be a serious Stoic is quite different from trying to be a serious Buddhist, and it’s not clear what it would even mean to try to be the “ideal person” by the standards of cognitive behavioral therapy.

What these philosophies have in common is a lot simpler than that: it’s just “Don’t sweat the small stuff.”

Don’t freak out over trivial shit. Remember that it’s trivial.

Stoicism and Buddhism both use meditation as a tactic; both suggest focusing on impermanence and even death, to remind oneself that trivial shit will pass.  CBT’s tactic is disputation — arguing with your fears and frustrations, telling yourself that the problem is not that big a deal.

Marcus Aurelius in particular uses pride a lot as a tactic, encouraging you to view getting upset as beneath the dignity of your soul.

Of course, “Don’t sweat the small stuff” imposed from without is a bit insulting.  Who are you, authority figure, to say what is and isn’t important?  Aren’t you telling me to ignore real problems and injustices?

But seen from within, “don’t sweat the small stuff” is just another perspective on “focus on your goals and values.”

You want to stay in the game, remember? So you can win, whatever that means to you.  So survival matters, robustness matters, because that keeps you in the game.  Freaking out takes you hors de combat.

Haidt tends not to push too hard on Christianity, perhaps because his audience is secular, but it is a very common source of comfort that does, empirically, make people happier.  My impression of Christian positivity, from a non-theological perspective, is that it says the good outweighs the bad. The bad exists; but the good is stronger and bigger and wins in the end.  And this is another way of not freaking out over trivial shit, which is quite different in aesthetic from the others, and maybe underappreciated by secular people.  Instead of trying to shrink your troubles by minimizing or disputing them, you can make them seem less important by contrast to something vast and Good. In a similar, albeit secular, spirit, there’s Camus’ famous line, “In the midst of winter, I found there was, within me, an invincible summer.”

Stripped of the sneering and the political angle and the paternalism, what we have here is a pretty solid message.

It’s a good idea to become stronger; in order to do that, try hard things, and don’t freak out about trivial shit.

Now, I immediately imagine a dialogue with my Weenie Self resisting this idea.

But…that sounds AWFUL!  I don’t want to!

Well, the thing is, “I’m not currently doing X” is not a valid argument against doing X. If it were, nobody would ever have a reason to change their behavior.  We’d all just follow the gradients of our current stimuli wherever they led.  There’s no choice in that world, no deliberate behavior. “But I’m currently freaking out about trivial shit!” doesn’t actually mean that you shouldn’t want to freak out less in future.

I know. It’s weird.  This is a way of thinking about things consciously and explicitly, even when they feel kind of awkward and wrong.

How can it be right when it doesn’t feel right?!  I am currently experiencing a sense of certainty that this is a bad idea! You want me to trust a verbal argument over this overwhelming feeling of certainty?

This, believe it or not, is what people mean when they talk about reason!

Trusting an argument that is correct as far as you can tell, over your feelings, even very strong feelings.  Being consciously aware that a thing is a good idea, and doing it, even when it’s awkward and unnatural and feels wrong.  You’re not used to doing things this way, because you usually discipline yourself with more feelings — guilt or fear, usually.  But there’s a way of making yourself do hard things that starts, simply, with recognizing intellectually that the hard thing is a good idea.

You can make yourself like things that you don’t currently like!  You can make yourself feel things that you aren’t currently feeling!

This bizarre, robotic, abstract business of making decisions on the basis of thoughts rather than feelings is a lot less crazy than it, um, feels.  It’s a tremendous power.

Some people luck into it by being naturally phlegmatic. The rest of us look at them and think “Man, that would suck, having practically no feelings.  Feelings are the spice of life!”  But we can steal a bit of their power, with time and effort, without necessarily becoming prosaic ourselves.

My overall instinctive response to Haidtism is negative.  The ideology initially comes across as smug and superficial.  But upon reflection, I have come to believe that it is right to aim towards self-transcendence, to do hard things and not sweat the small stuff. And I’m resolving to be more charitable towards people who support that creed even when they rub me the wrong way stylistically.  Ultimately, I want to do the things that are good ideas, even when that means awkward, deliberate change.