EA Has A Lying Problem

I am currently writing up a response to criticism of this post and will have it up shortly.

Why hold EA to a high standard?

“Movement drama” seems to be depressingly common  — whenever people set out to change the world, they inevitably pick fights with each other, usually over trivialities.  What’s the point, beyond mere disagreeableness, of pointing out problems in the Effective Altruism movement? I’m about to start some movement drama, and so I think it behooves me to explain why it’s worth paying attention to this time.

Effective Altruism is a movement that claims that we can improve the world more effectively with empirical research and explicit reasoning. The slogan of the Center for Effective Altruism is “Doing Good Better.”

This is a moral claim (they say they are doing good) and a claim of excellence (they say that they offer ways to do good better.)

EA is also a proselytizing movement. It tries to raise money, for EA organizations as well as for charities; it also tries to “build the movement”, increase attendance at events like the EA Global conference, get positive press, and otherwise get attention for its ideas.

The Atlantic called EA “generosity for nerds”, and I think that’s a fair assessment. The “target market” for EA is people like me and my friends: young, educated, idealistic, Silicon Valley-ish.

The origins of EA are in academic philosophy. Peter Singer and Toby Ord were the first to promote the idea that people have an obligation to help the developing world and reduce animal suffering, on utilitarian grounds.  The leaders of the Center for Effective Altruism, Giving What We Can, 80,000 Hours, The Life You Can Save, and related EA orgs, are drawn heavily from philosophy professors and philosophy majors.

What this means, first of all, is that we can judge EA activism by its own standards. These people are philosophers who claim to be using objective methods to assess how to do good; so it’s fair to ask “Are they being objective? Are they doing good? Is their philosophy sound?”  It’s admittedly hard for young organizations to prove they have good track records, and that shouldn’t count against them; but honesty, transparency, and sound arguments are reasonable to expect.

Second of all, it means that EA matters.  I believe that individuals and small groups who produce original thinking about big-picture issues have always had outsize historical importance. Philosophers and theorists who capture mindshare have long-term influence.  Young people with unusual access to power and interest in “changing the world” stand a good chance of affecting what happens in the coming decades.

So it matters if there are problems in EA. If kids at Stanford or Harvard or Oxford are being misled or influenced for the worse, that’s a real problem. They actually are, as the cliche goes, “tomorrow’s leaders.” And EA really seems to be prominent among the ideologies competing for the minds of the most elite and idealistic young people.  If it’s fundamentally misguided or vulnerable to malfeasance, I think that’s worth talking about.

Lying for the greater good

Imagine that you are a perfect act-utilitarian. You want to produce the greatest good for the greatest number, and, magically, you know exactly how to do it.

Wouldn’t a pretty plausible course of action be “accumulate as much power and resources as possible, so you can do even more good”?

Taken to an extreme, this would look indistinguishable from the actions of someone who just wants to acquire as much power as possible for its own sake.  Actually building Utopia is always something to get around to later; for now you have to build up your strength, so that the future utopia will be even better.

Lying and hurting people in order to gain power can never be bad, because you are always aiming at the greater good down the road, so anything that makes you more powerful should promote the Good, right?

Obviously, this is a terrible failure mode. There’s a reason J.K. Rowling gave her Hitler-like figure Grindelwald the slogan “For the Greater Good.”  Ordinary, children’s-story morality tells us that when somebody is lying or hurting people “for the greater good”, he’s a bad guy.

A number of prominent EA figures have made statements that seem to endorse lying “for the greater good.”  Sometimes these statements are arguably reasonable, taken in isolation. But put together, there starts to be a pattern.  It’s not quite storybook-villain-level, but it has something of the same flavor.

There are people who are comfortable sacrificing honesty in order to promote EA’s brand.  After all, if EA becomes more popular, more people will give to charity, and that charity will do good, and that good may outweigh whatever harm comes from deception.

The problem with this reasoning should be obvious. The argument would work just as well if EA did no good at all, and only claimed to do good.

Arbitrary or unreliable claims of moral superiority function like bubbles in economic markets. If you never check the value of a stock against some kind of ground-truth reality, if everyone only looks at its current price and buys or sells based on that, we’ll see prices being inflated based on no reason at all.  If you don’t insist on honesty in people’s claims of “for the greater good”, you’ll get hijacked into helping people who aren’t serving the greater good at all.
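
If it helps to make the bubble analogy concrete, here is a toy simulation (a sketch in Python; every number in it is invented for illustration and has nothing to do with any EA data). When the only input to a price is the previous price plus hype, nothing tethers it; when each update is also checked against a ground-truth value, it stays roughly where it should be.

```python
import random

# Toy illustration of the bubble analogy above: one "price" is updated
# only from its own previous value plus hype; the other is also checked
# against a fixed ground-truth value each step. All numbers are made up.
random.seed(0)
FUNDAMENTAL_VALUE = 100.0

def next_price(price: float, check_ground_truth: bool) -> float:
    hype = random.gauss(0.01, 0.05)      # slight optimistic drift plus noise
    price = price * (1 + hype)
    if check_ground_truth:
        # pull partway back toward what the asset is actually worth
        price += 0.2 * (FUNDAMENTAL_VALUE - price)
    return price

unanchored = anchored = 100.0
for _ in range(500):
    unanchored = next_price(unanchored, check_ground_truth=False)
    anchored = next_price(anchored, check_ground_truth=True)

print(f"No reality check after 500 steps:       {unanchored:,.0f}")
print(f"Checked against ground truth each step: {anchored:,.0f}")
```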

I think it’s worth being suspicious of anybody who says “actually, lying is a good idea” and has a bunch of intelligence and power and moral suasion on their side.

It’s a problem if a movement is attracting smart, idealistic, privileged young people who want to “do good better” and teaching them that the way to do the most good is to lie.  It’s arguably even more of a problem than, say, lobbyists taking young Ivy League grads under their wing and teaching them to practice lucrative corruption.  The lobbyists are appealing to the most venal among the youthful elite.  The nominally-idealistic movement is appealing to the most ethical, and corrupting them.

The quotes that follow are going to look almost reasonable. I expect some people to argue that they are in fact reasonable and innocent and I’ve misrepresented them. That’s possible, and I’m going to try to make a case that there’s actually a problem here; but I’d also like to invite my readers to take the paranoid perspective for a moment. If you imagine mistrusting these nice, clean-cut, well-spoken young men, or mistrusting Something that speaks through them, could you see how these quotes would seem less reasonable?

Criticizing EA orgs is harmful to the movement

In response to an essay on the EA forums criticizing the Giving What We Can pledge (a promise to give 10% of one’s income to charity), Ben Todd, the CEO  of 80,000 Hours, said:

Topics like this are sensitive and complex, so it can take a long time to write them up well. It’s easy to get misunderstood or make the organisation look bad.

At the same time, the benefits might be slight, because (i) it doesn’t directly contribute to growth (if users have common questions, then add them to the FAQ and other intro materials) or (ii) fundraising (if donors have questions, speak to them directly).

Remember that GWWC is getting almost 100 pledges per month atm, and very few come from places like this forum. More broadly, there’s a huge number of pressing priorities. There’s lots of other issues GWWC could write about but hasn’t had time to as well.

If you’re wondering whether GWWC has thought about these kinds of questions, you can also just ask them. They’ll probably respond, and if they get a lot of requests to answer the same thing, they’ll probably write about it publicly.

With figuring out strategy (e.g. whether to spend more time on communication with the EA community or something else) GWWC writes fairly lengthy public reviews every 6-12 months.

He also said:

None of these criticisms are new to me. I think all of them have been discussed in some depth within CEA.

This makes me wonder if the problem is actually a failure of communication. Unfortunately, issues like this are costly to communicate outside of the organisation, and it often doesn’t seem like the best use of time, but maybe that’s wrong.

Given this, I think it also makes sense to run critical posts past the organisation concerned before posting. They might have already dealt with the issue, or have plans to do so, in which posting the criticism is significantly less valuable (because it incurs similar costs to the org but with fewer benefits). It also helps the community avoid re-treading the same ground.

In other words: the CEO of 80,000 Hours thinks that people should “run critical posts past the organization concerned before posting”, but also thinks that it might not be worth it for GWWC to address such criticisms because they don’t directly contribute to growth or fundraising, and addressing criticisms publicly might “make the organization look bad.”

This cashes out to saying “we don’t want to respond to your criticism, and we also would prefer you didn’t make it in public.”

It’s normal for organizations not to respond to every criticism — the Coca-Cola company doesn’t have to respond to every internet comment that says Coke is unhealthy — but Coca-Cola’s CEO doesn’t go around shushing critics either.

Todd seems to be saying that the target market of GWWC is not readers of the EA forum or similar communities, which is why answering criticism is not a priority. (“Remember that GWWC is getting almost 100 pledges per month atm, and very few come from places like this forum.”) Now, “places like this forum” seems to mean communities where people identify as “effective altruists”, geek out about the details of EA, spend a lot of time researching charities and debating EA strategy, etc.  Places where people might question, in detail, whether pledging 10% of one’s income to charity for life is actually a good idea or not.  Todd seems to be implying that answering the criticisms of these people is not useful — what’s useful is encouraging outsiders to donate more to charity.

Essentially, this maps to a policy of “let’s not worry over-much about internally critiquing whether we’re going in the right direction; let’s just try to scale up, get a bunch of people to sign on with us, move more money, grow our influence.”  An uncharitable way of reading this is “c’mon, guys, our marketing doesn’t have to satisfy you, it’s for the marks!”  Jane Q. Public doesn’t think about details, she doesn’t nitpick, she’s not a nerd; we tell her about the plight of the poor, she feels moved, and she gives.  That’s who we want to appeal to, right?

The problem is that it’s not quite fair to Jane Q. Public to treat her as a patsy rather than as a peer.

You’ll see echoes of this attitude come up frequently in EA contexts — the insinuation that criticism is an inconvenience that gets in the way of movement-building, and movement-building means obtaining the participation of the uncritical.

In responding to a criticism of a post on CEA fundraising, Ben Todd said:

I think we should hold criticism to a higher standard, because criticism has more costs. Negative things are much more memorable than positive things. People often remember criticism, perhaps just on a gut level, even if it’s shown to be wrong later in the thread.

This misses the obvious point that criticism of CEA has costs to CEA, but possibly has benefits to other people if CEA really has flaws.  It’s a sort of “EA, c’est moi” narcissism: what’s good for CEA is what’s good for the Movement, which is what’s good for the world.

Keeping promises is a symptom of autism

In the same thread criticizing the Giving What We Can pledge, Robert Wiblin, the director of research at 80,000 Hours, said:

Firstly: I think we should use the interpretation of the pledge that produces the best outcome. The use GWWC and I apply is completely mainstream use of the term pledge (e.g. you ‘pledge’ to stay with the person you marry, but people nonetheless get divorced if they think the marriage is too harmful to continue).

A looser interpretation is better because more people will be willing to participate, and each person gain from a smaller and more reasonable push towards moral behaviour. We certainly don’t want people to be compelled to do things they think are morally wrong – that doesn’t achieve an EA goal. That would be bad. Indeed it’s the original complaint here.

Secondly: An “evil future you” who didn’t care about the good you can do through donations probably wouldn’t care much about keeping promises made by a different kind of person in the past either, I wouldn’t think.

Thirdly: The coordination thing doesn’t really matter here because you are only ‘cooperating’ with your future self, who can’t really reject you because they don’t exist yet (unlike another person who is deciding whether to help you).

One thing I suspect is going on here is that people on the autism spectrum interpret all kinds of promises to be more binding than neurotypical people do (e.g. https://www.reddit.com/r/aspergers/comments/46zo2s/promises/). I don’t know if that applies to any individual here specifically, but I think it explains how some of us have very different intuitions. But I expect we will be able to do more good if we apply the neurotypical intuitions that most people share.

Of course if you want to make it fully binding for yourself, then nobody can really stop you.

In other words: Rob Wiblin thinks that promising to give 10% of income to charity for the rest of your life, which the Giving What We Can website describes as “a promise, or oath, to be made seriously and with every expectation of keeping it”, does not literally mean committing to actually do that. It means that you can quit any time you feel like it.

He thinks that you should interpret words with whatever interpretation will “do the most good”, instead of as, you know, what the words actually mean.

If you respond to a proposed pledge with “hm, I don’t know, that’s a really big commitment”, you must just be a silly autistic who doesn’t understand that you could just break your commitment when it gets tough to follow!  The movement doesn’t depend on weirdos like you, it needs to market to normal people!

I don’t know whether to be more frustrated with the ableism or the pathologization of integrity.

Once again, there is the insinuation that the growth of EA depends on manipulating the public — acquiring the dollars of the “normal” people who don’t think too much and can’t keep promises.

Jane Q. Public is stupid, impulsive, and easily led.  That’s why we want her.

“Because I Said So” is evidence

Jacy Reese, a prominent animal-rights-focused EA, responded to some criticism of Animal Charity Evaluators’ top charities on Facebook as follows:

Just to note, we (or at least I) agree there are serious issues with our leafleting estimate and hope to improve it in the near future. Unfortunately, there are lots of things that fit into this category and we just don’t have enough research staff time for all of them.

I spent a good part of 2016 helping make significant improvements to our outdated online ads quantitative estimate, which now aggregates evidence from intuition, experiments, non-animal-advocacy social science, and veg pledge rates to come up with the “veg-years per click” estimate. I’d love to see us do something similar with the leafleting estimate, and strongly believe we should keep trying, rather than throwing our hands in the air and declaring weak evidence is “no evidence.”

For context here, the “leafleting estimate” refers to the rate at which pro-vegan leaflets cause people to eat less meat (and hence the impact of leafleting advocacy on reducing animal suffering).  The studies ACE used to justify the effectiveness of leafleting actually showed that leafleting was ineffective: an uncontrolled study of 486 college students shown a pro-vegetarianism leaflet found that only one student (0.2%) went vegetarian, while a controlled study conducted by ACE itself found that consumption of animal products was no lower in the leafleted group than in the control group.  The criticisms of ACE’s leafleting estimate were not merely that it was flawed, but that it literally fabricated numbers based on a “hypothetical.”  ACE publishes “top charities” that it claims are effective at saving animal lives; the leafleting effectiveness estimates are used to justify why people should give money to certain veganism-activist charities.  A made-up reason to support a charity isn’t “weak evidence”, it’s lying.
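
To spell out the arithmetic behind that criticism, here is a minimal sketch in Python. The 486-student and one-convert figures are the ones quoted above; the confidence-interval step is my own illustration of why such a result is consistent with zero effect, not a description of ACE’s methodology.

```python
from scipy.stats import beta

# Figures from the uncontrolled leafleting study described above.
leaflets_handed_out = 486
conversions = 1

conversion_rate = conversions / leaflets_handed_out
print(f"Observed conversion rate: {conversion_rate:.2%}")  # about 0.21%

# Exact (Clopper-Pearson) 95% confidence interval on the true rate.
# With one success in 486 trials it spans roughly 0.005% to 1.1%,
# so the data are consistent with leafleting doing nothing at all.
lower = beta.ppf(0.025, conversions, leaflets_handed_out - conversions + 1)
upper = beta.ppf(0.975, conversions + 1, leaflets_handed_out - conversions)
print(f"95% CI for the true rate: [{lower:.3%}, {upper:.3%}]")
```

Nothing hinges on the exact interval; the point is simply that one convert out of a few hundred students cannot support a precise “veg-years per leaflet” figure.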

In that context, it’s exceptionally shocking to hear Reese talking about “evidence from intuition,” which is…not evidence.

Reese continues:

Intuition is certainly evidence in this sense. If I have to make quick decisions, like in the middle of a conversation where I’m trying to inspire someone to help animals, would I be more successful on average if I flipped a coin for my responses or went with my intuition?

But that’s not the point.  Obviously, my intuition is valuable to me in making decisions on the fly.  But my intuition is not a reason why anybody else should follow my lead. For that, I’d have to give, y’know, reasons.

This is what the word “objectivity” means. It is the ability to share data between people, so that each can independently judge for themselves.

Reese is making the same kind of narcissistic fallacy we saw before. Reese is forgetting that his readers are not Jacy Reese and therefore “Jacy Reese thinks so” is not a compelling reason to them.  Or perhaps he’s hoping that his donors can be “inspired” to give money to organizations run by his friends, simply because he tells them to.

In a Facebook thread on Harrison Nathan’s criticism of leafleting estimates, Jacy Reese said:

I have lots of demands on my time, and like others have said, engaging with you seems particularly unlikely to help us move forward as a movement and do more for animals.

Nobody is obligated to spend time replying to anyone else, and it may be natural to get a little miffed at criticism, but I’d like to point out the weirdness of saying that criticism doesn’t “help us move forward as a movement.”  If a passenger in your car says “hey, you just missed your exit”, you don’t complain that he’s keeping you from moving forward. That’s the whole point. You might be moving in the wrong direction.

In the midst of this debate somebody commented,

“Sheesh, so much grenade throwing over a list of charities!  I think it’s a great list!”

This is a nice, Jane Q. Public, kind of sentiment.  Why, indeed, should we argue so much about charities? Giving to charity is a nice thing to do.  Why can’t we all just get along and promote charitable giving?

The point is, though — it’s giving to a good cause that’s a praiseworthy thing to do.  Giving to an arbitrary cause is not a good thing to do.

The whole point of the “effective” in “Effective Altruism” is that we, ideally, care about whether our actions actually have good consequences or not. We’d like to help animals or the sick or the poor, in real life. You don’t promote good outcomes if you oppose objectivity.

So what? The issue of exploitative marketing

These are informal comments by EAs, not official pronouncements.  And the majority of discussion of EA topics I’ve seen is respectful, thoughtful, and open to criticism.  So what’s the big deal if some EAs say problematic things?

There are some genuine scandals within the EA movement that pertain to deceptive marketing.  Intentional Insights, a supposed “EA” organization led by history professor Gleb Tsipursky, engaged in astroturfing; Tsipursky paid for likes and positive comments, made false claims about his social media popularity, falsely claimed affiliation with other EA organizations, and may have required his employees to “volunteer” large amounts of unpaid labor for him.

To their credit, CEA repudiated Intentional Insights; Will MacAskill’s excellent post on the topic argued that EA needs to clarify shared values and guard against people co-opting the EA brand to do unethical things.  One of the issues he brought up was

People engaging in or publicly endorsing ‘ends justify the means’ reasoning (for example involving plagiarism or dishonesty)

which is a perfect description of Tsipursky’s behavior.

I would argue that the problem goes beyond Tsipursky.  ACE’s claims about leafleting, and the way ACE’s advocates respond to criticism about it, are very plausibly examples of dishonesty defended with “ends justify the means” rhetoric.

More subtly, the most central effective altruism organizations and the custodians of the “Effective Altruism” brand are CEA and its offshoots (80,000 Hours and Giving What We Can), which are primarily focused on movement-building. And sometimes the way they do movement-building risks promoting an exploitative rather than cooperative relationship with the public.

What do I mean by that?

When you communicate cooperatively with a peer, you give them “news they can use.”  Cooperative advertising is a benefit to the consumer — if I didn’t know that there are salons in my neighborhood that specialize in cutting curly hair, then you, as the salon, are helping me by informing me about your services. If you argue cooperatively in favor of an action, you are telling your peer “hey, you might succeed better at your goals if you did such-and-such,” which is helpful information. Even making a request can be cooperative; if you care about me, you might want to know how best to make me happy, so when I express my preferences, I’m offering you helpful information.

When you communicate exploitatively with someone, you’re trying to gain from their weaknesses. Some of the sleazier forms of advertising are good examples of exploitation; if you make it very difficult to unsubscribe from your service, or make spammy websites whose addresses are misspellings of common website names, or make the “buy” button large and the “no thanks” button tiny, you’re trying to get money out of people’s forgetfulness or clumsiness rather than their actual interest in your product.  If you back a woman into an enclosed space and try to kiss her, you’re trying to get sexual favors as a result of her physical immobility rather than her actual willingness.

Exploitativeness is treating someone like a mark; cooperativeness is treating them like a friend.

A remarkable amount of EA discourse is framed cooperatively.  It’s about helping each other figure out how best to do good.  That’s one of the things I find most impressive about the EA movement — compared to other ideologies and movements, it’s unusually friendly, exploratory, and open to critical thinking.

However, if there are signs that EA orgs, as they grow and professionalize, are deliberately targeting growth among less-critical, less-intellectually-engaged, lower-integrity donors while being dismissive towards intelligent and serious critics (and I think some of the discussions I’ve quoted on the GWWC pledge are such signs), then I worry that they’re trying to get money out of people’s weaknesses rather than gaining from their strengths.

Intentional Insights used the traditional tactics of scammy, lowest-common-denominator marketing. To a sophisticated reader, their site would seem lame, even if you didn’t know about their ethical lapses. It’s buzzwordy, clickbaity, and unoriginal.  And this isn’t an accident, any more than it’s an accident that spam emails have poor grammar. People who are fussy about quality aren’t the target market for exploitative marketing. The target market for exploitative marketing is and always has been the exceptionally unsophisticated.  Old people who don’t know how to use the internet; people too disorganized to cancel their subscriptions; people too impulsive to resist clicking on big red buttons; sometimes even literal bots.

The opposite approach, if you don’t want to drift towards a pattern of exploitative marketing, is to target people who seek out hard-to-fake signals of quality.  In EA, this would mean paying attention to people who have high standards in ethics and accuracy, and treating them as the core market, rather than succumbing to the temptation to farm metrics of engagement from whomever it’s easiest to recruit in the short-term.

Using “number of people who sign the GWWC pledge” as a metric of engagement in EA is nowhere near as shady as paying for Facebook likes, but I think there’s a similar flavor of exploitability between them.  You don’t want to be measuring how good you are at “doing good” by counting how many people make a symbolic or trivial gesture.  (And the GWWC pledge isn’t trivial or symbolic for most people…but it might become so if people keep insisting it’s not meant as a real promise.)

EAs can fight the forces of sleaze by staying cooperative — engaging with those who make valid criticisms, refusing the temptation to make strong but misleading advertising claims, respecting rather than denigrating people of high integrity, and generally talking to the public like we’re reasonable people.

CORRECTION

A previous version of this post used the name and linked to a Facebook comment by a volunteer member of an EA organization. He isn’t an official employee of any EA organization, and his views are his own, so he viewed this as an invasion of his privacy, and he’s right. I’ve retracted his name and the link.

131 thoughts on “EA Has A Lying Problem”

  1. There was a recent article posted by Dominic Cummings, who was the lead strategist of the Vote Leave (pro-Brexit) campaign in the UK. His analysis of the Brexit campaign is extremely interesting, both for his insights into how to approach the common people on issues and for the “flavor” of his thinking. His thought patterns are not exactly LW Approved Rationality™ but they clearly draw from many of the same pools of thought that most of us draw from. I’ll link it at the bottom, but fair warning it is long (but very worth it).

    The pro-Brexit campaign was successful. It also strategically used some figures and methods that were debatably dishonest, in that, if you fully wanted to understand the situation, they are not the figures that you would personally land on to think with (I’m specifically referring to the gross £350M per week that the UK pays the EU, which should be netted against funds received from the EU). EA organizations are in essentially a similar situation to that of the Vote Leave campaign. They want to convince the great mass of humanity to undertake some action, but John Q Public is not interested and perhaps not even capable of understanding the nuances involved. In some cases things are so complicated that no one fully understands them. Ultimately, with our current state of knowledge and individual intellectual capacity, there is no way to get people to be fully informed on any topic, especially one as abstract and irrelevant to normal daily living as the EA movement.

    Convincing the masses that don’t understand a thing that it is still a good idea has a long, long history. That it is easiest to do this with careful deception is unfortunate, but ultimately you have to work with the reality that you have. (That’s the story people tell themselves to feel better)

    Effective altruism is not like binary political decisions. Morally flexible EAs see that short-term strategies work in other political work, and they think they are building a movement that’s similar, but not all things that rhyme are the same. Right now, effective altruism doesn’t have winners and losers, it has winners and people who don’t give a shit. As long as there aren’t people who lose out by you giving your money to charity, EA is in the long-term game, where you need scrupulous honesty and credibility building, and using short-term deceptive tactics screws the whole movement. Some EAs are pattern matching wrongly, or are discounting the future too highly, however, and they are screwing everything up by thinking they are in the short-term game where tactical dishonesty wins.

    I worry, however, that the trend towards more explicitly political action / activism from EAs (see InIn’s new plan, or Open Philanthropy’s focus area on political topics, and ACE to an extent) screws up this whole dynamic and makes genuine truth-seeking no longer a relevant or important part of EA. Once we move from the world of “I’m doing good with my money” to the world of “how can we convince other people to pay for the things I value”, philosophy becomes a lot harder and just generally more screwed up.

    I’m not an expert in EA, and I don’t participate in any EA communities, so this is just a quasi-outsider’s perspective and should be taken with a grain of salt.

    Here’s the link I referenced, read it: https://dominiccummings.wordpress.com/2017/01/09/on-the-referendum-21-branching-histories-of-the-2016-referendum-and-the-frogs-before-the-storm-2/

  2. Hi Sarah,

    This is important criticism, and I’m glad it’s being discussed.

    I’m quoted in the piece, and I’d like to add two quick notes about that section only. First, I’m a volunteer leader of an EA chapter in DC, not an employee of Giving What We Can or CEA. They don’t consult me to make decisions, and I have no ability to speak on their behalf.

    Second, I responded to the facebook comment quoted in the piece clarifying that I do not think Nick Cooney was intentionally misleading people, nor would I endorse his doing so. The quoted comment was meant as a hypothetical concession to Harrison’s position, not a description of fact. I can understand how that could be confusing, but in my defense… it was a facebook comment and I clarified what I meant later in the thread.

    If any of my impulsively-written facebook comments can be presented as evidence of ‘deep problems with the EA movement’, then I’ll stop posting. That sort of incentive doesn’t seem like it will lead to a better EA movement though…

    • I am sorry, I hadn’t realized you weren’t an employee of GWWC. I’ve removed your name and the link to the FB post, and put a correction at the bottom of my post.

      • Thanks, Sarah, I appreciate that! If you wouldn’t mind, I’d like you to delete these comments as well. I’m happy to defend my comments, but I’d rather not have that conversation here.

  4. Much of this is troubling.
    However, I think a distinction should be made between criticisms that say that an EA-promoted activity is harmful, useless, or worse than whatever it’s replacing, and those that say it’s merely oversold. If it’s the former, then it’s a serious problem and to say “look at all the pledges we’re getting” would miss the point entirely. But if it’s the latter, then it could be valid to say that even though whatever you’re doing might be suboptimal and some alternative would be better if you were just starting out, given that the organization already exists and is doing good, changing course would be bad. It’s like if someone well into their career discovers that they’d have made more money if they had started in finance than in software engineering – it doesn’t follow that they should let go of whatever they’ve built up and switch to finance now. If EAs gave up the current Pledge, they’d risk not creating a comparably popular alternative, and end up doing less good. So if the critic agrees that the activity is good overall, it’s legitimate to appeal to concerns about growth.
    That said, it doesn’t justify clamping down on criticism.

    Speaking of honesty and Robert Wiblin, shortly after the election he approvingly quoted a passage from this Medium post, including this bit: “We need to beware not to become divided…, we need to avoid getting lost in arguing through facts and logic, and counter the populist messages of passion and anger with our own similar messages. We need to understand and use social media. We need to harness a different fear”. Endorsing that is basically displaying a giant red flag with “DON’T TRUST ME” on it.

    • What’s the evidence for “approvingly” in the comment above? Rob often shares (and quotes from) articles he doesn’t agree with, and the comment he left on it was just “Intellectuals intellectualise, then get steamrollered” (an empirical rather than evaluative claim).

      • His later comment on the post seems to be endorsement: “If you want to stick to tech R&D or something then stay rational.

        If you want to engage in politics you’re going to have to deal with the reality of the process. And I don’t think it is safe for us all to ignore politics.”

  5. There are many problems with this post– the fact that all the evidence is from facebook and forum posts and is heavily extrapolated from, for one– but the most central problem is Sarah’s refusal to question her own personal ethics. What if, for the sake of argument, it *was* better to persuade easy marks to take the pledge and give life-saving donations than to persuade fewer people more gently and (as she perceives it) respectfully? How many lives is extra respect worth? She’s acting like this isn’t even an argument. When Ben talks about weighing the costs of harm to the movement, he’s taking an ethical position on this question. Movements like EA most often die by being picked apart from the inside– it *is* a serious risk, and keeping it alive might be worth some compromises. Sarah is also taking an ethical position, that this is never okay and our lack of “respect” for the public is all the condemnation she needs to offer.

    If you are not an act-utilitarian, then, yeah, I think you have a philosophical disagreement with a lot of EAs. It would be dishonest of most of the people in EA to say they didn’t think lying was ethically correct in some circumstances. Most people think this. I am one of the more gung-ho people for honesty I know in the movement, and I actually try to live my life without lying at all, but honesty compels me to admit that I think lying is the right thing to do in some scenarios. (I also think there is a fashion among ethical people to think that being honest means putting things in the worst possible light– equating “honesty” with steelmanning the worst interpretations instead of the best. They reject trying to be persuasive as trying to be manipulative. This is holding yourself to a high standard, but I don’t think it’s the most honest approach to have a negative bias.)

    Once we allow that lying or spin is not always the wrong move, we have a legitimate argument about when to use it. I think we shouldn’t for reasons that Sarah gives: 1) it makes us less able to track the truth, and 2) it makes us less trustworthy. THEN there’s the question of whether EA leaders and orgs actually *are* lying and lying unethically.

      • In the interests of clarity: no, I am not an act-utilitarian, and I’m not even really “inside the movement”, apart from having some EA friends.

    • I agree that the ends can justify the (dishonest) means, but what ends are we talking about?
      If the “end” we have in mind is growing the currently constituted EA as a movement, then bad leafletting research and iffy pledges aren’t that bad. But if our goal is actually making the world the best place we can make it, we have to think much longer term.

      On a long time scale, EA is in its infancy. We know very little about what’s actually effective, compared to what we’ll know in the years to come. Ultimately, the impact of EA will depend on our ability to actually discover the best interventions through research, and then (and only then) convincing everyone to support them. Any deviation from 100% honesty and rationality at this stage not only harms our ability to do the former, but also depletes the reserves of credibility we would need to draw for the latter.

      I see Effective Altruism as a project for decades and decades to come. 2017 is way too early to start deviating from scrupulous integrity.

    • And it might also be the case that stealing money to give to charity is the best thing a specific individual can do in a specific situation, but I don’t think it’s a good policy to endorse or a good idea to start planning the best ways to steal money.

    • I think this is a reasonable concern, and my ethics are fairly close to act-utilitarianism.

      The main reason I suspect lying is dramatically overvalued & honesty is dramatically undervalued comes from the question of “why do we need EA in the first place?” After all, in a sane world, organizations would be reasonably good at achieving their stated goals.

      For me, I think the essential problem is that it’s incredibly difficult to get an organization to stay honest enough to make interesting discoveries. There are tons of pressures that veer groups toward anti-intellectual norms and away from honesty, to the extent that groups can’t make good decisions because they can’t even assess the actual ground truth. Charity Navigator isn’t constrained by ill intentions, they’re constrained by the limits of their epistemology.

      If EA wants to have a truly outsized impact, it needs to manage to gain resources while not succumbing to anti-intellectualism. This is incredibly difficult, I can’t think of any social movement that has achieved this, and I definitely do not get the impression that EA is on track to succeed.

      • Obligatory comment where I point out that by “anti-intellectualism” Satvik does not mean the thing that is usually called that but rather the sort of default social dynamic whereby a group ceases to accomplish what it is notionally about and becomes purely a medium for personal politics, which destroys good epistemic practice, but not because it believes such to be bad.

      • @sniffnoy it sounds like you think my position is “individuals in groups will develop priorities which are orthogonal to good epistemic practice, and therefore fail to achieve it.”–is that right?

        My position is closer to “individuals in groups will generally develop behaviors that actively shut down what looks like signs of intellectualism.” An example that would be relevant 15 years ago is calling people who seem to read a lot or be interested in computers nerds in a derogatory way.

        (I don’t think talking about intent or beliefs really makes sense at the group level, while talking about norms & automatic behaviors does.)

      • Ah, I see! OK, that’s a little different than what I thought you meant, yes.

        What I was talking about was… well, I didn’t previously have a good name for it, but I’ll steal a term from Freddie deBoer here and call it “social capture”. (I may be using it a little differently from him, but whatever.) This is sort of the default state that every group degenerates into without active effort to constantly ground things in reality. There are multiple processes by which this happens (although each seems to make the others easier, so they tend to eventually co-occur), but the result is the same — the group becomes solely about personal politics and ingroup/outgroup politics and such. Status in the group is achieved by social maneuvering, not by any sort of competence. Etc. Freddie seems to have been talking about a fairly open form of this but obviously usually it becomes expressed through other things.

        I assumed you were also talking about social capture. It looks like instead you were talking about a particular thing that can, I’m going to assert, occur as a result of social capture. I’ll follow you and call it “anti-intellectualism”, although I’m still a little uncertain how well it coincides with the usual use of that term.

        I think this is a common result of social capture basically for straightforward “Why Nerds Are Unpopular” reasons. Once social capture has occurred, an easy way to gain status in the group is to go after a group that’s unpopular, and an easy way to be unpopular is to not perform the necessary maneuvering to remain popular — such as, say, because you’re focused on something else, like actually trying to get the right answer.

        But I’m not sure that that particular manifestation of it always occurs, or at least certainly not so openly. Remember that social capture commonly operates by corrupting things. If the group is an intellectual one, then installing openly anti-intellectual norms may be hard, but corrupting the existing norms may be much easier. An idea will be judged to be good if its proponents are popular — oh, we have norms stating that ideas should be judged to be based on the evidence for them? Then evidence will be judged in a manner so as to produce the same result. Etc.

        Basically I think that social capture is the fundamental problem, and the sort of anti-intellectualism you describe an outgrowth of it. (Though anti-intellectualism considered more generally may also come about by other means, I think.) And I think social capture is seriously destructive one way or another, regardless of whether that sort of anti-intellectualism results, because things like trying to actually ground things in external reality and care about competence are things that get in its way and thus are things that it destroys (or corrupts). I guess open anti-intellectualism makes it worse, but I’m not sure it much matters by that point. I think the point to intervene is the point where you can stop social capture in the first place; stopping it from progressing to anti-intellectualism doesn’t seem as relevant.

    • There’s a thing I’ve been starting to understand about activism in politics.

      In one frame, moderate policymakers have to take into account political facts such as whose support they lose if they do different things. From this frame, activists making extreme demands can seem unreasonable, since the politicians who rely on their support are already extracting as much as they can from the other side, as implied by the median voter theorem. Sometimes this is the case – for instance, in the US, Democrats rejected Nixon’s universal health care proposal, because they hoped for more, and ended up with much less for a long time: http://ihpi.umich.edu/news/nixoncare-vs-obamacare-u-m-team-compares-rhetoric-reality-two-health-plans

      But, sometimes you have to *be* the political facts. Not be “reasonable” by the first frame, because nothing changes that way. Single-issue voters with strong litmus tests do this. The US Civil Rights movement was another version of this – they forced the issue, against the advice of sympathetic moderate liberals ( http://crookedtimber.org/2013/01/21/the-white-moderate-the-greatest-threat-to-freedom/ ), because they thought they could get more – and they were right.

      If your argument is that we have to lie because of the political landscape, then I’m through identifying with the reasonable decisionmakers. You don’t automatically get to make me adopt the same frame as the people who are endorsing a strategy involving some amount of misrepresenting the facts or overstating promises. I *don’t know* what their intentions or plans are in detail. I wasn’t consulted. So, you’d have to actually make the case to me that the harm done to my interests due to misinforming me is outweighed by the benefits due to misinforming the marks.

      But, it’s going to be somewhat difficult to get me to believe what you’re saying if your strategy is known to include some amount of misinformation. And until you overcome this hurdle, I’m a fact on your political landscape. If I can’t trust EA orgs not to misrepresent how much evidence there is for things, or how seriously they expect pledges to be taken, then I’ll complain, openly. If that would harm the EA movement, then that’s just a potential harm EAs will have to take into account when deciding whether to misrepresent things. Just like I try to avoid lying lest it harm *my* reputation.

      • This view of lying is close to my own personal ethics, and I agree with what you are saying. But what Sarah is pointing out here is not a case of lying. It’s a case of EA orgs presenting the pledge in a positive light and a case of EAs being frustrated with a hit piece on ACE from a DxE person (DxE has a long history of sabotaging effective animal activism strategies because it doesn’t like accommodationism. They base their strategy on “the history of social movements”, which I don’t consider as scientific an attempt as ACE’s, even when ACE is imperfectly realized). The only issue I would agree is close to lying is misrepresenting the effectiveness of leaflets. iirc, Ben was being super-nice to a person who had asinine criticisms of the pledge and trying to limit damage to the public’s perception of EA due to people making easily-addressed criticisms publicly without checking them out first.

        EA has to grapple with the norms of rationalist culture, which tend to go against group cohesion as a way of scoring status points. This can be a good system, but it’s also got its downsides. Considering the harm nitpicking does to EA’s brand is not underhanded or deceptive– it’s weighing all the consequences of your actions. It seems Sarah doesn’t think it’s virtuous to consider anything but finding out the truth when making these statements. I wonder if she feels the same way about people who bring up the idea that vaccines cause autism over and over again– at some point, you have to consider public perception of the conversation and, yes, you have to consider that the public cares about different things than you do.

        “The problem is that it’s not quite fair to Jane Q. Public to treat her as a patsy rather than as a peer.” How is this self-evidently a problem? What if the general public isn’t and doesn’t want to be our peers? How is this not just trying to communicate effectively?

  6. I wonder if some drift away from the honesty we might wish for is simply inevitable for problems that are too hard.

    Let’s say I’m sympathetic to EA and to animal welfare, but there simply aren’t any tried and tested animal interventions on the level of AMF. I may be OK with just putting that money aside and deciding to “come back next year when there’s better research”, but if you work for Animal Charity Evaluators it’s very hard to admit that any useful results are years or decades away. Inevitably, the space and the conversation are going to be dominated by people who can claim to at least be doing *something*, not by those who admit the problem isn’t solved yet. I feel the same way about AI safety: MIRI is definitely doing *something*, and the meta-problem of evaluating whether MIRI’s work is useful is too hard right now for anyone to gain credibility trying to solve it.

    To echo RD above, the main takeaway for me is that **EAs should avoid politics like the plague**. There was actual discussion among prominent EAs this year of endorsing Clinton’s campaign as an effective charity. Even if helping a good political candidate is “effective” in the short game, it would mean suicide in the long one. If EAs are getting mind-killed over leafletting, what would happen to truth if we started meddling in politics?

    • I dunno, the correct answers to some political issues should be pretty damn obvious, even if a lot of people are being idiots or selfish bastards when it comes to them. For example, reasonable people can disagree on the best way to address climate change, but anyone who says global warming is a hoax is clearly unsuitable for any public office where they can affect environmental policy.

      • Except if you think that even though anthropogenic climate change is real, the current consensus on how to combat it is too economically costly, and appointing a GW denier is a way to push that consensus back.

        Can you objectively evaluate whether the above position is reasonable? Note that I do not endorse this (or Trump’s EPA appointment), I just think that “pretty obvious” is a phrase that should ring alarm bells in the quest for objective effective altruism. Nothing is “obvious” that 50% of the country disagrees with. It may well be true, but not obvious.

        Climate change policy is exactly the sort of fight that EAs should stay out of. There is so much politically motivated dirty fighting going on in this area that EAs will either have to fight dirty (i.e. lie and cheat) or be ineffectual. This is neither the strength nor the mission of EA.

    • This seems like a good argument for explicitly revising one’s view of at least one of:
      – What standards of evidence it’s reasonable to use (e.g. you might decide that RCTs are too high a standard and use logical arguments from plausible premises, or reference class forecasting, instead).
      – Whether there are appealing animal welfare interventions.
      – Whether you should be aiming for interventions with direct impact at all (vs interventions that explicitly test hypotheses – see this essay by Chris Blattman for more on this: https://chrisblattman.com/2016/07/19/14411/ ).

      The whole point of looking for evidence is that eventually you might change your mind about something.

    • I strongly disagree.

      To me, the takeaway is “don’t get fancy, don’t get clever, just try to figure out what the best thing is and honestly report the state of your knowledge.” To me, one of the biggest ways that EA improves on generic charity is a *rigorous* commitment to honesty, including talking about where you’re uncertain and flaws of your preferred interventions. A GiveWell report on a charity isn’t going to hide the charity’s weaknesses– even weaknesses that are extremely embarrassing for the charity. So I think that if you sincerely believe that campaigning for Hillary Clinton is the best charity, you should lay out your evidence, listen to other people’s criticism, and update as needed. If you think that campaigning for Hillary is the best thing and you talk about mosquito nets instead, that’s still deception.

  7. I have feelings similar to the feelings here, though I am a fairly peripheral EA. Seeing the leafleting study critiques come out has been interesting for me, since I always felt like the numbers I’d seen quoted for the effectiveness of vegetarian outreach were jarringly different than my experience of reality – people becoming vegetarian seems like a pretty rare event, so getting a convert per couple dozen leaflets felt wildly unintuitive.

    Similarly, I’ve felt a little nervous about the pledge drive-y stuff around GWWC recently, particularly the proportion of pledge-takers that seem to be students who’ve never had a stable income. I waited until I’d had an income for a year or two before I took the pledge, because I wanted to try giving 10% for a while, and really seeing how it felt, before I committed to doing it forever. Students are more impressionable than average. They are legal adults and have every right to take a pledge, but I always feel uncomfortable urging anyone to commit to anything permanently. It feels to me like that sort of thing should be carefully felt out over time, rather than endorsed in a moment of passion. I don’t mean that students never should! Many current college students are much more mature than I am currently, I’m sure, and fully able to appreciate and endorse a promise of that scale. But as a demographic I don’t think college students are particularly ethical targets for: “hey it’s cool and good to commit to this thing permanently!”

    Basically, I feel like we’re hungry for good metrics on the success of EA, and ‘GWWC members’ seems like a good proxy for some of that, but as a GWWC member (and plodding ol’ AMF giver so far), I’d rather feel like most other members have a clear idea about what they’re getting into and a thorough intention to never go back on their promise.

    • I think getting students to take the pledge is a great strategy *because* they’ve never had a stable income. It bypasses the problem of loss aversion. How different do you think your life would be if you happened to make 10% less money each year? Probably not very. Do you feel strongly compelled to change your job or lifestyle to find a way to make 10% more? Probably not. But deciding to give up money you already feel like you have in the bag is a lot harder than realizing that most first worlders could give 10% and do a ton of good without enduring too much hardship themselves.

      This is a more general problem I’ve had with Sarah’s writing and medical ethics in general– the fixation on meticulously informed consent as if it’s the paramount moral issue. It’s sanctified status quo bias and loss aversion. It’s not unethical to nudge someone to give away some of their money to help other people a lot more, even if that person might not have done it without your urging (or if they might have not done it if they knew some asinine criticism existed). I think you need to weigh the ethics of not “respecting” pledgers against the many more people you aren’t advocating for who would benefit from the pledge. This is on a continuum with “greater good” atrocities, but not all continua are slippery slopes. And we are all on this continuum whether we like it or not.

      • > meticulously informed consent as if it’s the paramount moral issue. It’s sanctified status quo bias and loss aversion.

        Thought of the day:

        In the absence of Free Will, informed consent is merely status quo bias.

      • Just to be clear, I meant that the level of scrupulosity about consent sanctifies status quo bias, not that informed consent is meaningless. Making sure that people are making the decision without undue influence is good, but it’s not the only good thing. Helping people in poverty is pretty good, too. I think a certain amount of pushing people to help a certain number of poor people is a risk worth taking.

      • I like this point. Thank you.
        On further consideration, I think I’m less worried about poor wealthy-nation-inhabiting college students being boondoggled or peer-pressured into giving 10% after manipulative nudging. I’m more worried about college students deciding to take the pledge and not sticking to it in a way that erodes the pledge as a serious commitment.
        When I frame it that way, though, my worry sounds overblown to myself. I suppose it’s probably fine.
        I continue to have more minor marketing quibbles, though. For example, mentioning in a short list of qualities the pledge has in a promotional email that it’s “not legally binding” seems off. Sure, it’s true that the pledge is not legally binding. But that shouldn’t be a selling point if you intend with every confidence to keep it!

  8. I’m grateful for this post. Honesty seems undervalued in EA.

    An act-utilitarian justification for honesty in EA could run along the lines of most answers to the question, “how likely is it that strategic dishonesty by EAs would dissuade Good Ventures-sized individuals from becoming EAs in the future, and how much utility would strategic dishonesty generate directly, in comparison?” It’s easy to be biased towards dishonesty, since it’s easier to think about (and quantify!), say, the utility the movement might get from having more peripheral-to-EA donors, than it is to think about the utility the movement would get from not pushing away would-be EAs who care about honesty.

    I’ve never been confident enough to publicly say anything when I’ve seen EAs and ostensibly-EA-related organizations acting in a way that I suspect is dishonest enough to cause significant net harm. I think that I’d be happy if you linked to this post from LW and the EA forum, since I’d like for it to be more socially acceptable to kindly nudge EAs to be more honest.

  9. The first two examples discussed here seem simply to be weird uncharitable misreadings of mundane comments. This post comes closer to dishonesty in its tenuous glossing of the authors it quotes than the original posts do!

    The EA Vegan discussions of putatively effective charities certainly are rife with borderline dishonesty or exceptional incompetence, as is well documented with their use and misuse of terrible studies which prove the opposite of what they said.

    Jacy Reese is not doing anything problematic with his talk of intuitions, which you pick up on above, though. This is a perfectly ordinary way of describing how we all set priors and formulate Fermi estimates based on reasonable Guesstimation. The suggestion that there’s anything more going on here seems unmotivated.

    InIn is a particularly weird case since, as you allude to, they were largely rejected by the EA community before receiving a stunning, outright denunciation from its highest echelons. To my mind the case for rejecting InIn based on supposed dishonesty and dodgy practices was largely vapid, which says more about the EA community than any of the charges in this post. But even without taking that stance, as an example, it seems to suggest that EA is excessively sensitive to any hints of dishonesty, the opposite of the thesis of this post.

    None of the examples in the post seem to suggest act utilitarian justified dishonesty.
    As an aside, the InIn case, even if it’s not really an EA case, does raise an issue which I think is more salient and explanatory. When you have to communicate a very complicated idea to a person who is very much not able to understand it all, and it is very important that you get them to understand as much of it as you can, complex tradeoffs arise between saying things which are strictly accurate but won’t be understood (indeed, their misinterpretation will leave people less informed) and saying simpler things which are not strictly true but which produce closer understanding. This is a more common case which better captures the InIn dilemma about simple messaging, and it is not about act-utilitarian dishonesty.

  10. I’m the one Rob was replying to, the one who has been donating more than 10% for several years and yet hasn’t dared to take the pledge because he isn’t absolutely sure whether he might not one day be in a situation where it would be detrimental. (And I don’t perceive his comment as ableist – it even made me a little proud to be part of a tribe with such a nice attribute as particularly valuing commitments.) I’m the person who goes around quoting Parfit’s Hitchhiker and Robert Axelrod’s research into cooperation in prisoner’s dilemmas. The EA Foundation has made cooperation a central piece of its suffering-focused ethics, and I like to geek out with them about it, and I generally agree with Paul Christiano’s Integrity for Consequentialists (and he’s a well-known EA). I try to always use just the right hedges because I feel iffy about miscommunicating my level of confidence. I empathize with several moral frameworks, and I’m certainly constrained by one that you could call either rule utilitarian or deontological, which I defend with arguments from game theory and with Chesterton’s fence.

    That said, I’ve repeatedly been in the situation that the people you quote were in, and I can empathize with them only too well, particularly the ones attacked by Harrison Nathan’s piece. I’m probably somewhat above average sensitive, and I’ve taken to using veganism as a proxy to find people who are probably also sensitive enough that I can be vulnerable around them. Consequently, I can imagine how hard it must be for activists like me to read a piece like Harrison’s that is addressed at them and at the work they’ve been doing for years. I might be crying or taking modafinil (to tone down my emotions) halfway through it.

    I consider myself a bit of an insider of the movement at this point, particularly the antispeciesism part of it, and I’m as sure that the valuable parts of the article would’ve fit into a few paragraphs with friendly wording as most of my peers are sure that global warming is not a hoax (so around 90% sure perhaps). There are some good suggestions in there that ACE would’ve been delighted to hear. There is also a lot of repetition of critiques that ACE itself has been struggling to emphasize strongly enough, there is nirvana fallacy, and there is a lot of ivory tower idealism that is completely unrealistic without a hundred times the funding over several years. Had it been me, I would’ve emailed ACE to volunteer to update its articles rather than writing something like this. Writing such an aggressive harangue is just as nasty, immature, and uncooperative as the shaming of animal activists by other animal activists that Melanie Joy has written about. That is what deserves to be criticized, the way Melanie did it – not the comments of the people who try to very cautiously explain why an organization has not responded to something in full on every forum.

    The harangues that were written against an organization that I used to run were much less intelligent and much shorter, but they were just as unhelpful, unnecessary, and downright harmful, and I felt just as hurt, demotivated, and powerless as a result. They dictated the framing, which made it hard to respond to them without sounding phony, and responding itself would only expose even more people to their misinformation. (It was on Facebook.) Had I continued to work for the organization and scaled it up, I would’ve been exposed to much more of that, and I don’t know if I could’ve taken it. Please don’t make it worse for or hold it against the people who are in this situation right now.

    So I want to echo something that Holly said about negativity (and also about moral cooperation, which is not only important for utilitarians). When I started studying linguistics, one of the first things that we learned is that communication is a fundamentally cooperative act. There is no communication without a readiness to cooperate. (It starts with choosing a common language.) Please put yourself into the shoes of the people you want to criticize until you understand well where they are coming from, what they know, what they are exposed to, what pressures (also moral pressures) they feel, and only then, when you understand them well, allow yourself to critique them.

    • I have also been in the situation of working for an organization that I cared deeply about and being constantly at risk of being ridiculed or condemned for how we went about pursuing our goals. Sometimes it was justified; sometimes it was unfair. But I gained a real understanding of the situation of feeling unsupported in trying to practice your ideals.

      I really don’t think that anybody should stop working on their projects merely because a stranger says it isn’t up to some supposed quality standard. I don’t think artists should stop drawing because someone insults their work, and I don’t think activists should quit if someone doubts their efficacy. I know it is much harder for the “man in the arena” than the critic, and in the long run, makers will always matter more than critics. I hope I haven’t pressured anyone into giving up work that they believe is important.

      I wrote this essay in something of a spirit of “defense against proselytizing.” If someone says to me, “I’m doing the most good! follow me!” then I have the right to say “really? show me how that works.” And if their explanation is unsatisfying, and especially if I see something that looks hypocritical there, I have the right to go “yeah, no, I’m not following you.”

      • > I have the right to go “yeah, no, I’m not following you.”

        Well that’s fine, but you made a sweeping claim in the title and the conclusion when in reality – as other commenters have noted, so I don’t need to repeat it – the evidence doesn’t bear out any of it. I’ve posted Melanie’s article (linked in my comment above) on Facebook. Alternatively I could’ve written an article “EA Has a Shaming Problem,” but I don’t think a few instances here and there – and yours perhaps not even intending to shame – make for an endemic problem.

        While I intuitively lump your article into the same “shaming” category as Harrison’s, it was his article that I was talking about above when I worried about its impact on the activists. Yours is much more civil in tone, and your topic is one where everyone can, with minimal effort, form their own opinion of whether they want to call something “lying.” Harrison’s is a statistical tour de force that takes a lot of knowledge to put into context and see how little value it (for the most part) adds, knowledge of (1) ACE’s messaging over the past years, always stressing how uncertain their estimates are and that they should not be taken literally, (2) ACE’s amazing openness for constructive criticism (e.g., from me), even when responding to it takes time, (3) the value of hits-based giving, conscious of the great uncertainties involved, (4) the scarcity of research in animal advocacy and ACE’s work to change that, and (5) the importance of not getting paralyzed by uncertainty. It is thus the better example of the intellectual, subtle type of shaming.

        Jacy was responding to that, and Ben was responding to a critique I wouldn’t call shaming; it just rehashed some of the disadvantages of the particular pledge format, which GWWC found did not outweigh the advantages. What they did was a balancing act of trying to be honest on the one hand – and their honest opinion is probably that they’re fricking annoyed with the whole nonsense (but I don’t speak for them) – and diplomatic and kind on the other, because they know that some people have not been there with them in 2012 or so when they discussed these issues for the first time.

        Rob’s example is even more personal to me because he went on the tightrope walk of trying to describe exactly the degree of commitment that the pledge implied for a neurotypical person by likening it to marriage vows and the tightrope walk of nudging me to consider that my view may be the more unusual one without being insulting. I value that he did that (and that it worked), and I hate to see the comment used against him now.

        By the way, if you get divorce statistics of the EA movement, you can make a much stronger claim of “lying” being rampant here than with these examples. Rob, after all, is still following through on his pledge, but there are probably a whole bunch of EAs who got divorced at some point.

        Just to reiterate, because it seems to me we may’ve misunderstood each other here: my thesis was that you may write something very similar to what Jacy and Ben wrote if you were in their situation, and Harrison’s piece is the one that I criticize so harshly above, not yours.

        (I’m kind of ignoring InIn here. That’s because I don’t know much about it. But Gleb seemed really nice when I chatted with him once. I trust that their intentions, at least, are good.)

      • Thanks for the song; I’ll make a note to try it next time. 🙂

        I hope I haven’t been too harsh in my comments. I generally prefer to avoid discussions where I can’t be sure that I’m being friendly throughout (as read by the interlocutor). You certainly signal with your article that you agree with me on my interpretation of commitments, and that is an aspect of general cooperative behavior that I like a lot. This should not be the only article of yours that I read, because if I knew you better, I would surely know that you are a wonderful person, and a person of great integrity.

      • PDV, yeah, I’ve followed that. Well-intentioned people can do really dumb things. No pun intended. ^^

    • If a company had lied, tried to clamp down on criticism, or dismissed stringent promise-keeping as autistic, would you defend them by saying that being criticized is stressful and that we should put ourselves in their shoes? Probably not, and I don’t see why this should be any different. If you’re worried about the impact of criticism on your emotional health, it’d be better not to put yourself in a position in which you might have to deal with that. Moreover, if an organization is engaging in deceptive practices or tactics, I don’t want to cooperate with it, and demotivating it would be good.

      • Sorry to just quote the most famous Laconic response, but I can’t help it: “If.” (I hope you can read it as funny rather than dismissive. :-/)

        My second response to Sarah should explain that I just don’t see any of these lies, this censorship, or this dismissiveness. If I did, I would be on board with your conclusion.

        “It’d be better not to put yourself in a position in which you might have to deal with that”: Thanks for your consideration. I have taken it into account too, but if I can have an outsized impact, the emotional toll may be worth it. And perhaps it’ll even train my resilience, so that I’ll emerge as a more versatile activist.

      • If you don’t think that lies/censorship/dismissiveness are currently a problem for EA, then that’s what a response should be about. If there’s an honesty problem, it doesn’t matter why they’re lying, it’s a problem that deserves criticism and where negativity is appropriate, and if there isn’t, then the problem isn’t that the critics are negative but that they’re wrong. “Put yourself in their shoes” sounds like excuse-making, but in this case there’s either a serious problem where such an excuse would be inadequate, or there’s not a problem and the excuse is irrelevant.

      • Did you read my second response to Sarah? I tried to clarify that what I was arguing was that putting oneself in their shoes may reveal that what seems like evading criticism is really very well-contained annoyance with someone who doesn’t follow the protocol of general decency: first talk to the group or organization you have a problem with, or to other experts, and only when it turns out that they don’t have a good explanation, then perhaps, if absolutely necessary, pillory them. Pillory first is a toxic policy that deserves to be criticized.

      • That protocol deprives others of the benefit of seeing the concerned person’s thoughts, and the organization’s responses to them, and if the criticism attracts a lot of support, it’s an indication that the organization should take the issue more seriously. In a worse case, asking them privately gives them an opportunity to prepare a coverup or a PR strategy. Discussion that starts with criticism isn’t pillory, it’s productive truthseeking, at least in our circles, and norms that push negative views underground are dangerous.

      • We’re framing this in different mental contexts, I think. There are probably contexts where the approach that you defend is the appropriate one. Maybe a big oil company doesn’t deserve a heads-up when it routinely disregards safety protocols to cut costs; I don’t know. There are surely situations where it’s the best approach.

        My framing is more like this: A few friends meet in a restaurant. They greet each other with a hug, and chat briefly about the one friend’s pet who had surgery and is recovering, and about the other friend’s partner, who has to come to terms with the observation that SSRIs seem to really help them. Then they order drinks, and start discussing the topic they met up to discuss, namely, how to save the world.

        With this framing, it’s hopefully easier to see why I think that it’s inappropriate to assume something bad about one friend and “expose” it in uncharitable terms to the others before the first had a chance to explain it.

        I feel close to the people we’re discussing here and trust them – the ones I know and other by association – and I feel that I know them well enough that they deserve the friend treatment.

      • Friend treatment is supposed to be gotten from friends. I agree that if EA were a small tightly-knit friend group, it’d be appropriate to deal with disagreements in the way you describe – but it’s a large community where most people are acquaintances or strangers to each other, and with so many organizations and agents that it’s not feasible to get a proper measure of the participants’ character. It’s typically more similar to business: you hope that the companies you’re dealing with are giving you a decent deal and aren’t trying to scam you, but you don’t have any particular connections to them, so their honesty and openness are a higher priority than their well-being. I hope that GWWC, 80k Hours, and others are respecting and practicing norms of honesty, promise-keeping, and so on, but if they aren’t, they deserve to be exposed for it. Of course, they should be treated fairly and their explanations should be listened to and considered, but as Sarah said below, criticism is an adversarial activity.

        I understand that it’s unpleasant to see your friends accused, but that’s no reason for those of us who aren’t their friends to treat them any differently.

      • Then I think I disagree strongly on two counts: (1) I think that almost always the default (in the absence of knowing someone) should be the friend treatment, and that it should take significant evidence to the contrary before this prior is abandoned, and (2) that criticism should almost always be a cooperative and not an adversarial activity.

        The first has served me well, I think, without exception. The opposite, if actually practiced by many people, would lead to a toxic environment where you’re guilty until proven innocent, which would be paralyzing to any sort of enterprising spirit.

        The second has served me well in that people are much more open to actually changing their behavior when you don’t first back them into a corner and force them to get all their defenses up. It is a common practice among rationalists to give each other honest, friendly feedback (that takes some effort to phrase and structure just right, which is a good thing) that is meant to help the other and is understood that way too.

      • The kind of friend treatment you describe is desirable when you care about a person as an individual towards whom you feel affection and relatively strong levels of benevolence, and whom you know personally and well (e.g. you know their character and like it). If anything, it’d be toxic to act as if you had this kind of relationship with someone when you really don’t, as is the case for a stranger. I’m extremely wary of norms and environments that expect friend-like treatment of strangers and acquaintances, because when successful it’s very cognitively and emotionally taxing while being unrewarding, and when unsuccessful it’s a breeding ground for resentment and enmity, and either way it’s not good for respecting boundaries.

        Fortunately, one need not treat someone like a friend to be fair towards them. A person in the role of critic seeks to uncover problems with another or an organization, and the other party addresses these criticisms if they want to and see them as important. That’s the kind of environment where issues aren’t pushed under the carpet or “resolved” in smoke-filled back rooms. Of course, criticism can be conducted fairly or unfairly, and in the latter case there’s an opportunity for counter-criticism. It’s a healthy process overall, compared to the alternative.

        As for persuading people to change their wrong behavior, I agree that pointing it out to them quietly and non-threateningly is most effective, but to what extent is that the goal? I see this kind of criticism as being more for the benefit of onlookers: for example, not “80k Hours, change your evil ways!” but “People of EA, beware 80k Hours’ evil ways!”.

      • I think we’re meeting somewhere. Yay! To demonstrate the positive effects of the shift in tone (and I did not plan this): the last time I was disagreeing with Jon of ACE on something, I naturally included all the things that I suspect may bias me. I observe now that I didn’t do that here, so here we go:

        While I’m not particularly embarrassed about being wrong, I am terribly embarrassed about being aggressively or lecturingly wrong. Extreme example: I asked my partner not to throw away jars of peanut butter that still had about 10% of the peanut butter in them. Turns out it wasn’t her who threw it away. Terribly embarrassed!

        Since I still disagree with people and their behavior, I am likely to have cultivated a form of criticism that is maximally cooperative/nonaggressive. Since I don’t have much of a choice in the matter because of the feeling of embarrassment, I may have selectively collected any evidence that supports my modus operandi. Since I’ve hardly ever tried anything else, and since I count no effect as neutral, I’ve had lots of good experience with the approach. It doesn’t even matter that I don’t use a counterfactual for the “comparison,” because the counterfactual would be dominated by the terrible embarrassment.

        If I counted no effect as negative, then I would be more disappointed, e.g., about the limited reach of my article “The Attribution Moloch.” It points to two important problems – the first particularly in animal advocacy, the second in EA – and is the product of comprehensive feedback of several very smart insiders of the movements.

        Plus, I select my social circles based on preferences like these, preferences for the absence of aggression in any form. Naturally, my approach works really well with people who are so much like me. The last time I met someone who was more aggressive (and wrong about a few things), it was a bad and terribly futile experience for me. I’ve tried hard to avoid contact with her.

        So just as this shows that I’m probably not the adaptable socialite who could be thrown into a completely different community and immediately code-switch to the most effective behavior in the new context, it also shows that the EA and LW communities are the result of my long search and selection process. So as I downgrade my confidence that my approach is generally the most viable one, I upgrade or leave unchanged my confidence that my approach is the best one for this community. Being inflammatory might’ve gotten me greater reach, but it is a low-status thing to do in this community, so all that reach would’ve translated into less respect for my opinions generally.

        “I see this kind of criticism as being more for the benefit of onlookers:” In this case here, there is no benefit for the onlookers. On the contrary, a cursory reader will get an absurd impression of EA. There may be situations where exposés are called for, but even in those cases, consistently steelmanning and allowing the other enough time to respond before publication are vital parts of the process.

    • I question “There are some good suggestions in there that ACE would’ve been delighted to hear”. I am one of the people Harrison cited as having criticized ACE’s leafletting numbers, in a post I wrote in 2015. I did talk to Jacy Reese about it. He was perfectly engaging, said ACE was already thinking about these things and trying to figure out what to do… and absolutely nothing happened. The other piece Harrison cites was in 2013, shortly after ACE was founded (I think before it was even called ACE). Other people have voiced similar things for a long time. Nothing happened until Harrison Nathan wrote that polemic, and even then it wasn’t much. Maybe that’s because he went too far, but given the drumbeat of people voicing the strongest criticism in a much nicer way, I don’t think that’s likely.

      • Okay, thanks, I’ll update a little on that. But I still think that the better mantra to apply here is the famous, “Never attribute to malice that which is adequately explained by lack of staff capacity.”

      • > Nothing happened until Harrison Nathan wrote that polemic, and even then it wasn’t much

        Didn’t they remove Vegan Outreach as a top charity a couple years ago, largely due to the fact that they thought leafleting was much less effective than they used to?

        I can’t find the original posts from when they were downgraded, but the page now says “We have some concerns that Vegan Outreach has relied too heavily on poor sources of evidence to determine the effectiveness of leafleting as compared to other interventions.”

  11. If a company had lied, tried to clamp down on criticism, or dismissed stringent promise-keeping as autistic, would you defend them by saying that being criticized is stressful and that we should put ourselves in their shoes? Probably not, and I don’t see why this should be any different. If you’re worried about the impact of criticism on your emotional health, it’d be better not to put yourself in a position in which you might have to deal with that. Moreover, if an organization is engaging in deceptive practices or tactics, I don’t want to cooperate with it, and demotivating it would be good.

  12. This is a good and important critique. I’d heard about the problems with InIn and the leafletting dishonesty from ACE, but not that it was also affecting groups I care more about like 80k.

    Thanks for writing this.

  13. I think this post (particularly the title) is more dishonest than the cited examples of dishonesty. That’s not a particularly high bar, since I don’t consider the cited comments themselves dishonest. Nonetheless, I think most of the comments are taken out of context here in a way that will predictably mislead people, and most of the issues raised here are in fact raised and answered by the same people/orgs in nearby comments on the respective threads.

    Before I get into the point-by-point detail of what I mean by that, I think it’s worth saying that the main way this post caused me to update is that I now more-strongly think that people like Rob, Ben and Jacy, i.e. identifiable members of well-known EA organisations, should not respond on comment threads unless they are willing and able to take the time to be utterly meticulous about what they are saying so that nobody can ever take their statements out of context. Since I very much doubt it is worth their time to do that, the practical effect would be that they would comment much less frequently, if at all. People like me will take the hit instead. Or maybe they won’t, if they can potentially be misidentified as a leader as Chris was above. This might be the desired effect of your post! If you think avoiding even the slightest hint of ‘dishonesty’ from core EA organisations is overwhelmingly important, then the most obvious way for them to achieve that is to say nothing. But for anyone who thinks back-and-forth interaction with EA leaders is important and valuable, this is no way to achieve that goal. Discussions can’t happen without both sides being willing to take reasonably charitable interpretations of comments that in isolation might sound ‘off’.

    Now, point by point:
    1. Ben Todd on public criticism:
    You characterize this as saying “we don’t want to respond to your criticism, and we also would prefer you didn’t make it in public.”
    As far as ‘public’ goes, Ben says you should run the criticism past the org before posting. He didn’t say ‘you should ask the org for permission’ or ‘you shouldn’t post publicly because public criticism is bad, only criticize privately’. He just thinks that talking to GWWC before posting will likely lead to higher-quality criticism because (a) you might discover your criticism is about to be addressed and/or (b) it avoids re-running old debates. Alyssa’s main points had in fact been made by Michael Dickens one month earlier in the same venue. So I think he’s probably right. Don’t you? And just to avoid any doubt about his intended meaning, he actually clarifies the harm/permission point later on in the thread:

    >Most importantly, I don’t think criticizing GWWC publicly is harmful in expectation, just that it has costs, so is sometimes harmful.

    >Second, I think a policy of discussing criticisms with GWWC before making them public reduces these harms, so is a reasonable policy for people to consider. But, I’m not saying you need GWWC’s permission to post criticism.

    As far as responsiveness goes, if Ben didn’t want to respond to this criticism for any reason (time, irritation, stress, PR, whatever), there was an extremely simple way for him to implement that, namely not responding. Furthermore, in the section you quote Ben notes that GWWC *will* probably respond to e-mailed criticism, as well as noting it down for future reference in case the same thing comes up lots of times. It’s hard to square either of these things with your description of them as not wanting to respond to criticism.

    2. Rob on the GWWC pledge:
    Firstly, for the benefit of your readers I think it’s worth quoting in full what GWWC says about the pledge here. Rob also did this, and his comments should therefore be read in that context.

    >The Pledge is not a contract and is not legally binding. It is, however, a public declaration of lasting commitment to the cause. It is a promise, or oath, to be made seriously and with every expectation of keeping it. All those who want to become a member of Giving What We Can must make the Pledge and report their income and donations each year.

    >If someone decides that they can no longer keep the Pledge (for instance due to serious unforeseen circumstances), then they can simply contact us and cease to be a member. They can of course rejoin later if they renew their commitment. Obviously taking the Pledge is something to be considered seriously, but we understand if a member can no longer keep it.

    So it’s explicitly allowed to drop the pledge if serious negative things happen to you. It has always been allowed. This isn’t news to anyone who’s actually read the GWWC website. It was news to some of the other posters on the thread though, which led to Rob’s further comment about *why* GWWC has structured the pledge in a way that explicitly allows people to untake it, namely to make it more consistent with how most people think about other serious pledges (such as marriage). GWWC’s website taken fully literally says that you should be willing to untake the pledge in some circumstances. So Rob pointing this out (to an admittedly surprised thread) can’t be ‘lying’; he’s just the messenger here. At best maybe you can argue that GWWC is somehow lying, maybe by not drawing sufficient attention to the section I quoted? That feels like a stretch though.

    3. Jacy on leafleting:
    Here I think I mostly just disagree with you on the object level.

    >But that’s not the point. Obviously, my intuition is valuable to me in making decisions on the fly. But my intuition is not a reason why anybody else should follow my lead. For that, I’d have to give, y’know, reasons.

    Jacy has spent maybe two orders of magnitude more time than I have thinking about the impact of leafleting and sifting through the admittedly-poor data, both anecdotal and otherwise. At that point, his intuition clearly is evidence in the Bayesian sense of the word ‘evidence’, and for the same reason it is a reason why I should follow his lead if I want to help animals. But if I do want to put in the extra time to check the reasoning for myself, I can go to the ACE website….

    >This is what the word “objectivity” means. It is the ability to share data between people, so that each can independently judge for themselves.

    where I will discover that both of the study results you cite (1 in 486 going vegetarian in uncontrolled study, 0 effect in controlled study) are readily available. I found them in <15 seconds once I'd navigated to their page on leafleting. And I know for a fact that many people have read those results and then 'independently judged' that donating to leaflets isn't for them. Is it possible that some people never bothered to read the page, even though it's fairly short (much shorter than, say, Givewell's charity reviews), and so made a choice they wouldn't have made if they'd been in full possession of the facts? Of course. But that hardly passes the threshold for 'lying'; for that ACE would need to have *at least* hidden the disappointing studies somewhere out of sight, or preferably deleted them altogether. After all, if ACE thinks the studies are weak but that leafleting is good in spite of this weakness, what else are they supposed to do but recommend the intervention and clearly state the weak data? Not recommending an intervention they honestly support would also be dishonest.

    4. Intentional Insights
    Not much to say here. InIn pushed the bounds of what EAs were comfortable with for several months in several ways; many people discussed among themselves in private how uncomfortable they were with InIn's approach; then a prominent EA wrote a public criticism on Facebook and it blew up. After that came a forum post detailing the conclusions of the Facebook thread, and CEA's post repudiating them. I mean, what do you think would have happened here if EA *didn't* have a lying problem? Because I find that sequence hard to reconcile with the title.

    P.S. One of the underlying assumptions running through this post is that EA organisations are mostly focused on gaining the support of 'Jane Q'; a hypothetical uncritical, non-detail-oriented, emotionally-moved member of the public. So I thought I might as well give my impression of the opinion of many EA leaders, namely that people like Jane Q giving 10% is not where EA will achieve most of its impact. Instead they expect most of the impact to come from a small highly-committed minority of the movement doing fairly dramatic/exceptional things; the obvious and extreme example would be Good Ventures/OPP. This view can be fairly criticized on its own terms, for instance it feeds into the common accusation that EA is elitist and uninterested in reaching the wider public. But given that is a common view, it would be incredibly self-defeating to prioritize the wider public over the nit-picking highly-committed deeply-thoughtful core of EA, and partly for this reason I very much doubt that this is what they are consciously doing.

    • 1. No, I don’t think you should in general run criticism of an organization by them before posting. Criticism, especially of activism, is often an adversarial activity, and doesn’t work if the majority of it is done gently and behind closed doors; there’s also value to the “speaking out”, “picking fights” kind of disagreement. I think my disagreement with and disapproval of Ben Todd stands.

      2. Rob Wiblin wasn’t lying about the content of the pledge. The pledge is not a legally binding contract but a personally binding oath. The problem with Wiblin’s statement is that he doesn’t think people have to *keep* oaths that they bind themselves to. I think they do, and I think he exposed himself as the kind of person who doesn’t believe in promise-keeping, and that should speak for itself. The connection to “lying” here is that it is a lie to say you will do a thing and then not do it.

      3. ACE doesn’t hide the disappointing leafleting studies; I linked directly to ACE’s page citing and summarizing them. But ACE uses those studies (and some questionable statistical extrapolations from them) as evidence of the unusual cost-effectiveness of vegan advocacy charities. Because leafleting is so cheap, this leads to the extraordinary claim that, if you view human and animal lives as even close to comparable, advocating veganism/vegetarianism is way more cost-effective than, say, interventions targeted at the global poor. In other words: ACE uses studies that *fail to show leafleting is better than nothing at all* as inputs into a chain of reasoning that comes out with the claim that leafleting is a super important activity. This is shady.

      4. I think CEA handled the Intentional Insights issue well. I include it as an example of a failure mode that arose within the EA community. Yes, the community successfully self-policed and that’s great, but the fact that it happened at all means it could happen again.

      5. If you’re right about EA leaders’ priorities, then that’s good news from my point of view…mostly. It’s still possible to treat a donor like a mark even if he’s rich, and even an elitist movement can go astray if they do that.

      • For what it’s worth, I had the same reaction as Alex.

        It’s standard journalistic practice (and common courtesy) to run articles by people who are quoted/critiqued in them before publication. I don’t think the New York Times is trying to pull one over on Jane Q. Public when they reach out to Obama for comment before publishing an article on his latest executive action, but even if you dislike asking people to comment before publication “lying” doesn’t seem like the right term.

      • First of all, I am not a journalist, this is a blog post.
        Second of all, I am quoting people’s public comments on the internet, not verbal comments that they could reasonably expect were private.
        Third of all, it actually IS a major problem for a free press that increasingly journalists have come to depend so heavily on White House access that they have essentially become more like PR than independent investigators.

      • This is a sidepoint but, fwiw, that’s not standard journalistic practice in my experience. They’ll usually ask for comments on the general topic of the piece but my impression is that they will usually not let the subjects of a piece see the piece itself before publication and that a lot of outlets treat this as a matter of ethics. (I think they are wrong).

        Fwiw, I actually agree that it’s often (although not always) worth running criticism by its target before making it public. I definitely think it’s a reasonable request for an organization to make although I don’t think the request should always be honored.

      • “Yes, the community successfully self-policed and that’s great, but the fact that it happened at all means it could happen again.”
        Um, what? What community are you envisioning that never has problems arise?

  14. Hi Sarah,

    The 80,000 Hours career guide says what we think. That’s true even when it comes to issues that could make us look bad, such as our belief in the importance of the risks from artificial intelligence, or when the issues could be offputtingly complex, such as giving now vs. giving later and the pros and cons of earning to give. This is the main way we engage with users, and it’s honest.

    As an organisation, we welcome criticism, and we post detailed updates on our progress, including our mistakes:
    https://80000hours.org/about/credibility/evaluations/
    https://80000hours.org/about/credibility/evaluations/mistakes/

    I regret that my comments might have had the effect of discouraging important criticism.

    My point was that public criticism has costs, which need to be weighed against the benefits of the criticism (whether or not you’re an act utilitarian). In extreme cases, organisations have been destroyed by memorable criticism that turned out to be false or unfounded. These costs, however, can often be mitigated with things like talking to the organisation first – this isn’t to seek permission, but to do things like check whether they’ve already written something on the topic, and whether your understanding of the issues is correct. For instance, GiveWell runs their charity reviews past the charity before posting, but that doesn’t mean their reports are always to the charity’s liking. I’d prefer a movement where people bear these considerations in mind as well, but it seems like they’re often ignored.

    None of this is to deny that criticism is often extremely valuable.

    Ben

    • First of all, thanks for engaging in this discussion publicly. It’s important. I also agree with you very strongly that false criticism is bad. If someone’s credulous enough to publicly signal-boost strong criticism with tenuous evidence, without bothering to check what the other side’s story is, then I do think they’re behaving somewhat badly.

      It’s also great news to me that 80K has a mistakes page – I didn’t realize that. I’ll start mentioning 80K as one of the EA orgs that I know to follow this basic good practice.

      That said, I think you’re proposing a norm that might effectively discourage a lot of valuable criticism. I don’t think you *intend* this consequence, but I think that it would be a consequence of your suggestions as I understand them.

      You gave the example of GiveWell. GiveWell runs its reviews past charities in part because it needs access to these charities’ internal records in order to evaluate them. This creates substantial risk of revealing sensitive info that the charity wouldn’t have voluntarily disclosed to the public. GiveWell understandably wants to retain access to this information, so it tries not to leave them worse off than they’d be without participating in the review process. (Disclosure: I’ve worked for GiveWell in the past, though I have no current affiliation with it. I think the above is publicly available info.)

      I think it’s reasonable to ask anyone with access to inside, sensitive info to run their criticism by the relevant organization first, as long as the organization doesn’t seem likely to try to silence whistleblowers. I haven’t heard of anything like that about 80K, so if I were to write something up about 80K’s internal practices, I would want to give 80K plenty of time to take a look at it first, even if it weren’t mainly critical. I would do this because I would prefer not to be embarrassingly wrong. Similarly, I try to let people know if I plan to quote them, in case there’s context I’m missing, or they want to point me to more recent public writing on the topic, etc.

      The question of criticizing things that are already public is quite different, though. If we’re going to ask people criticizing public behavior to check first to make sure there’s not secret information that’s relevant, then that imposes an asymmetrical burden on critics. No one’s proposing that GWWC should have checked in with all the potential critics before rolling out a version of the pledge; that would have obviously been impractical. Before GWWC “owned” the pledge, there was neither a burden on “pro” nor on “anti” discussion. But after GWWC declared itself in favor of the pledge and asked people to take it, under this heuristic GWWC has something like a presumptive right to review discussion before it goes public.

      The forum post you were commenting on made four major points. Three were considering the Giving What We Can pledge from the perspective of a prospective pledge-taker considering publicly available information, not from the perspective of the Giving What We Can organization. The fourth was considering it from the perspective of EA as a whole. While of course Alyssa could surely have done more to ask EAs what they think about these things, I don’t see how the internal affairs of GWWC would be much more relevant here than any of several other information sources.

      It’s also odd to implicitly criticize Alyssa for repeating old criticism, when that old criticism does not seem to have substantially informed the discourse, and not criticize others for failing to update. If we keep rehashing old debates, that’s a structural problem with the discourse, and the blame does not necessarily fall on those who are repeating points that people around them don’t behave like they’ve heard of.

      • I feel like Ben gave two very specific reasons for speaking to the org in question first, and this reply ignores them*. I would summarize his given reasons as:

        1. There may already be plans in place to deal with your criticism, in which case your criticism will incur most of the harms of criticism but very few of the benefits, because the thing you wanted to change via your criticism is already changing.
        2. The org has very likely heard similar things before and/or debated them internally before. Their internal debate has likely advanced beyond yours. Of course, that doesn’t automatically make them right and you wrong, but it probably does mean you will learn something by talking to people who have spent maybe an order of magnitude more time than you on these issues. They might even have written about it before, in which case you have a reference. If they have written about it before, there might be critics of the previous post who you can draw on. Very likely this will help you strengthen your criticism at the very least by avoiding making embarrassing mistakes, but also by letting you be more concrete about what the positions are.

        You have pointed out a third orthogonal reason for running things by orgs first, namely the issue of internal/sensitive information. I agree that this is a good reason, I agree it contributes to why Givewell does what they do (though I don’t think it’s the primary reason), and I agree it doesn’t apply to the Alyssa/GWWC case. But proving that a third unrelated reason doesn’t apply isn’t even close to explaining why 1 and 2 either don’t apply or aren’t important; it seems like a red herring, albeit probably an unintentional one.

        To argue the other way concretely, though: I think (1) applies in this specific case because, based on my own inside information, I expect that at least one of Alyssa’s strands of criticism will be rendered irrelevant (or at least need to be thoroughly re-worded) within a year, due to a process that had begun before she made her post. I think this was roughly 80% likely to happen both before and after the post.
        I also think (2) applies because it became apparent in the ensuing discussion that Alyssa had made at least one embarrassing mistake, namely she’d missed a bunch of metrics that other orgs use (to her credit, she edited the post to reflect this in the case of 80k). I don’t think this would have happened if literally anyone at CEA had been able to see the post beforehand, even informally. I think there are a bunch of other ways some interaction could have strengthened the criticism as well, but I’m just focusing on the most straightforward one here.

        I expect (1) and (2) to virtually always apply; what we should be arguing about is their strength and value, weighing that against the costs of following the policy. We can’t do that because people can’t bring themselves to even acknowledge the points, pretending that this is a one-sided policy debate. Unfortunately, that makes this debate highly unconstructive. Personally, I think (2) is especially valuable because using past material is the only way I think these debates can advance, as opposed to re-treading ground from 2012.

        *In your defense, it’s not just you; everyone seems to be ignoring them. I’m actually picking on you because your generally constructive tone suggests you might not ignore them if I point them out to you again.

      • Thanks, Alexander. I think that those are relevant considerations, but not ones that are specific to *criticism*. I’ve taken a while to realize that this is actually a crux for me, and apologize for being initially unclear. I think my true objection isn’t that these are bad things to look at, but that asking critics but not supporters to check to make sure what they’re saying is actually true is incompatible with the excuse that one hasn’t addressed potential criticism because one has limited time.

      • I can’t speak for Ben, but personally my first reaction to the ‘we should be symmetric between critics and supporters’ point was immediately to say ‘ok, fine, ask supporters to check in first as well’. After further consideration, I still think that. Your argument seems to be:

        1. Praise should have (and does have) low costs.
        2. Asymmetric costs between criticism and praise are bad.
        3. Criticism should have low costs.

        Whereas I’d like to raise the cost of both praise and criticism, i.e. improve the quality of the debate. As far as I can tell, you haven’t provided any justification for (1), and without it (2) does not appear to imply (3).

        Long-form praise of EA organisations (excluding by members of the orgs themselves, who obviously meet the ‘check in first’ standard) is already vanishingly rare, but I’m really quite happy to make it even more rare in order to eliminate a huge amount of low-quality criticism which I think is crowding out the few strands of high-quality criticism.

  15. On this very same day I saw someone who said their finances were so dire that they were putting their GWWC donations on credit cards they cannot afford to pay off with their income. Following Wiblin’s advice and stopping the donations temporarily is the healthy thing to do, because continuing is worse than doing nothing (you’re wasting much more in future donations by getting wrecked by credit card debt). Following your suggested, more literal interpretation of the pledge would ruin this person. So what was the problem with the looser interpretation of the pledge again?

    The way autism was referenced seemed like an off-hand side note. Is it really reasonable for someone to argue that autists take pledges too seriously based only on anecdotes from a reddit thread? I imagine Wiblin himself would easily concede this assertion due to the poor evidence, so don’t get too upset about it.

    • The problem here is social pressure to take the pledge, combined with a cavalier attitude towards ways this might harm or render less effective people who take the exact wording seriously. If GWWC doesn’t mean the pledge to be quite this demanding, they should change the wording of the pledge, not dismiss criticism of the pledge’s demandingness as evidence of a mental disorder. (To be fair, Rob works for 80K, not GWWC; I’d like to see an official CEA or GWWC response to this issue.)

      • I’ll repost what I posted on that facebook thread. We’ll put either a version of this or the piece that Will plans to write on a more official GWWC platform so it’s easier to find.

        Some questions have come up recently about what it means to take a lifetime pledge and what happens when someone cannot keep it.

        The Giving What We Can FAQ page says:
        “The Pledge is not a contract and is not legally binding. It is, however, a public declaration of lasting commitment to the cause. It is a promise, or oath, to be made seriously and with every expectation of keeping it. All those who want to become a member of Giving What We Can must make the Pledge and report their income and donations each year.
        If someone decides that they can no longer keep the Pledge (for instance due to serious unforeseen circumstances), then they can simply contact us and cease to be a member. They can of course rejoin later if they renew their commitment. Obviously taking the Pledge is something to be considered seriously, but we understand if a member can no longer keep it.”

        This is a difficult topic to communicate about because different people need different advice, and a person who’s inclined to take things casually might need different advice than someone who’s very strict with themselves about promises and goals. When you think of “someone who breaks a pledge,” you might think of someone who didn’t take it seriously in the first place. In our experience, the people who have contacted us to tell us that they need to withdraw from their pledge are not people who took this lightly or who just found it inconvenient. They’re conscientious people who take the pledge very seriously, have sometimes gone to extreme lengths to try to keep it despite hardship, and at this point are not able to. In some cases they’ve rejoined after getting back on their feet.

        In these cases, we want to talk through the possibilities with them. The expectation is that Giving What We Can members will donate on an ongoing basis rather than letting “donation debt” build up, but there’s not a strict rule about how often you have to donate. In cases of temporary hardship, a member is often able to stick to the intent of the pledge by making up donations later in the next year or two.

        If you’re thinking about pledging, we recommend you check out the FAQ (including a new section on “What should I think about before pledging?”) and that you feel free to talk to us at community@givingwhatwecan.org. If you’ve already pledged and aren’t sure if you’re able to keep up a level of donation that’s consistent with the spirit of the pledge, please talk with us and we’ll try to clear things up.
        https://www.givingwhatwecan.org/about-us/frequently-asked-questions/

      • As I said below, I’m encouraged by Will’s plan to write on this topic. (I am also somewhat disappointed that it seems to have taken this much of a fuss to get this visibly prioritized.)

    • I do not think people in this situation should be ruined. I appreciate that GWWC seems to be sympathetic to their plight and it’s kind of them to reach out and say that the pledge shouldn’t be binding.

      But let’s look at what’s actually happening. A few people get themselves in real trouble due to giving too much to charity, in the context of a community which strongly pressures people to “do the most good”, chiefly through charitable giving. I know the person in question, and she has scrupulosity issues (obsession with imaginary sins); these kinds of psychological issues are common among people who are at risk of getting in over their head with charity. For a typical person, social pressure to give is pretty harmless. For a scrupulous person, it’s toxic.

      EA official doctrine has typically been pretty good about trying to mitigate the harm of their activity to people with scrupulosity issues. But occasionally, these kinds of tragedies do still happen. And I don’t think it’s because of the wording of the pledge, one way or another. The problem is that being in a culture that is obsessed with “doing good” through altruism is inherently toxic for people who have scrupulosity issues, and these people are going to either self-harm or have to spend enormous emotional energy *stopping* themselves from self-harming. And *I* find that concerning. I’m not sure what EAs ought to do about it, but what *I* want to do about it is make it more visible that taking the pledge is optional and being an EA, for that matter, is optional, and one doesn’t have to.

      • That’s not really a problem with lying then, is it? Are you just saying the pledge shouldn’t exist for people like your friend…? Because to my mind, Rob clarifying that the pledge is not legally binding and *should* be broken under some circumstances (just like a marriage) is exactly the right way to address the possibility of scrupulous unwillingness to break the pledge. But then you make it sound like foreseeing circumstances where you’d have to break your pledge means you’re dishonest from the start.

        I am a person with scrupulosity issues that mostly manifested in (vegan) diet and depressive realism-type “honesty” before I got involved in EA. EA is one of the only communities I’m aware of that gives a satisfying, healthy answer to scrupulosity that isn’t just “give up and don’t take morality so seriously.” I don’t understand what you are suggesting the community do to help your scrupulous friend other than not exist. Not encourage others to engage in moral behavior? Dishonestly focus on personal happiness as if we believe that matters more than the outsized amount of good you could be doing for others? (For the record, I think the consensus in EA is that personal happiness is a great asset for helping others and helping others is a source of deep and lasting happiness, so the two aren’t in conflict regardless of whether you rank as more important than the other.)

        Who does not know that it is optional to be an EA? Optional to what, being a good person?

  16. Good article; I’ve always been skeptical of EA but didn’t have as much concrete data as you. I wouldn’t call them liars; I’d say that they often have convoluted and self-contradictory belief systems. Furthermore, I’d say they haven’t analyzed exactly what ‘good’ is and are unaware of perspectives on morality outside of the mainstream.

    For example, it’s not clear to me that any EA organization exists to distribute birth control technology around the world. I’ve heard EAs say they want to save the most babies per dollar, distributing birth control does not fit into that sophomoric moral calculus.

    I also question their effectiveness: an EA told me that he spent $3,000 to get a mosquito net to Africa (I don’t remember the exact details; it could have involved more than that, and this was years ago, so perhaps they’re more effective now). I know that a round-trip ticket to parts of Africa, renting a car, and buying a mosquito net costs less than $3,000 and could be considered a vacation. Of course, if I could get $30,000 from ten people I could take a lot of that as profit, which is probably what most EA organizers do. They are the biggest beneficiaries of this alleged altruism.

    Of course, you can unpackage altruism into a convoluted, self-contradictory scam, but that would be RANDom 😉

    • I think you or the person you talked to might be confusing the cost per mosquito net delivered (which is estimated to be much less than $3,000) with GiveWell’s estimate of the cost per life-saved equivalent for the Against Malaria Foundation, which is indeed around $3,400 if I recall correctly.

  17. Hi,

    Thanks for writing this post!

    I think that the issue of honesty for people who are consequentialist-sympathetic is very important. Insofar as pure consequentialists don’t place any intrinsic disvalue on promise-keeping or honesty, they are likely to be trusted less as a result – which is a very bad thing if you want to do good in the world! This makes it *extra* important for consequentialist-sympathetic groups to place great emphasis on honesty and promise-keeping, and try to cultivate personalities where not being honest is very difficult psychologically for them. (Also note that I think that, because of moral uncertainty, even people who are highly sympathetic to consequentialism should still place some intrinsic value on promise-keeping and honesty). I hope to write something on this in the future.

    I’m not sure I’d draw the lesson of ‘EA has a lying problem’ from the examples you give. Even if the negative interpretation of Ben and Rob’s comments were correct, I wouldn’t call that ‘lying’, but rather “shushing criticism” and “being insufficiently clear on an important topic.” (Bearing in mind that neither are employees of or spokespeople for GWWC.) These might be bad things, and we shouldn’t have bad things in the community, but I’d only call the former ‘lying’, and most people think that lying is a lot worse than the things you described. Apologies if I’m just being pedantic, but I’m always concerned with criticism causing in-group/out-group dynamics (leading to fewer changes of behavior in light of feedback overall), and terms like ‘lying’ need to be handled with care. I also find their comments much less problematic than you do (agreeing with AGB above); but it’s perfectly possible that I read their comments in light of knowing them well, which makes me reach for one interpretation of those comments whereas other people reach for another. I didn’t take the interpretation that you describe in the post.

    Anyway, I’m sorry to focus on that detail. I do think that “EAs should be really concerned about being disingenuous and here are some examples that really risk that” is an important thing to write about.

    There does seem to me to be more confusion around the pledge than I had realized, and I’ll write up something soon on how to understand the pledge, but it’ll take me a little bit to write it up in a way I’m happy with. In the meantime: I think that the analogy with wedding vows is pretty good – marriage is a firm intent, it’s not a trivial commitment, you’re doing a bunch of stuff to make it hard to back out of this commitment at a later date, you’re going to make life plans on the assumption that you’ll keep the commitment, and you’re making a commitment to act in a certain way even when it’s against your own later interest to do so. But, at the same time, you know that there’s a chance you won’t live up to the commitment, or that things will change sufficiently that being married to this person really really won’t make sense any more. In general, there’s quite a bit of space between “this is an absolutely binding commitment that I can never renege on” and “I can quit any time I feel like it”. I think that most promises, including serious promises like marriage, are in that space in between the two – having promised to do X provides a very significant, but not overriding, reason to do X.

    • Thanks for the reply!

      Yes, I was aware that I was being inflammatory, and it’s quite possible that a different phrasing would have been more precise, but I did want to start a debate.

      I believe that marriage is the way you describe it for many people. For me, it’s a literal lifelong oath. I make very very few literal lifelong oaths, and the ones I do make are for real. I think the capacity to make a truly binding promise is a neat trick that more people should look into.

      • “Yes, I was aware that I was being inflammatory, and it’s quite possible that a different phrasing would have been more precise, but I did want to start a debate.”

        This seems like a weird stance to take, given the thrust of your criticism. Do you think ‘starting a debate’ justifies imprecisely phrased inflammatory writing?

      • If I have the desired political effect, I am willing to take the hit to my reputation and credibility.

      • Yes. If I were able to maintain the moral high ground that would probably have been better, but I’m not.

      • @Sarah

        I appreciated your post a lot, but I appreciated it largely because it sounded like it came from someone with a very strong moral compass, highly sensitive to wrong, who was trying to point it out to others who are less sensitive. Your admission that you’re engaging in the same behavior you’re criticizing breaks that.

        My internal reaction right now is just like a sputtering “you can’t do that!!”, although since you said you are willing to take the credibility hit I don’t know how to adequately convey it. I find it extremely objectionable to criticize something in terms that make it sound like a fundamental moral failure while doing the same thing yourself. (Criticizing some bad behavior which you know is bad and you sometimes fail to prevent in yourself is fine, I think, especially if you disclose that this is something you struggle with as well. But this isn’t that.)

        If EA “has a lying problem”, then you “have a lying problem”. I understand that your position may be that it’s more okay for an individual blogger to be dishonest than a social movement that claims a lot of moral authority. But I think this should make us trust your blog, and this post in particular, less.

        (Why trust this post less? Because
        – it involves some subjective interpretation of statements which others interpret differently. Your post is evidence that at least some people interpret them as evidence of dishonesty. But if your post is not honest, that evidence is not good.
        – this makes it more likely that you intentionally went looking for the worst statements to cobble together a pattern to condemn
        – the post sounds like you consider this behavior extremely objectionable and unacceptable. But if you do this yourself, you undermine that.
        – if you don’t actually consider this behavior all that bad, then it seems like you have some other reason for disliking EA and are being dishonest about your motives.)

        I still think this post was valuable because it drew attention to an actual problematic attitude and caused a good number of EAs to reaffirm a commitment to norms of honesty and I think it will have a good effect on honesty in EA. But I have a problem with all dishonesty, including apparently yours.

        (I’m really sorry that my first interaction with you is so hostile and adversarial. I have really appreciated many of your posts, fact posts in particular, and I hope you keep posting and I intend to keep reading and hopefully engage more amicably in the future. I just feel really strongly about this.)

      • You are right to oppose all dishonesty. I’ll give some thought to whether I want to delete or retract this post at some time. Unfounded accusations are usually bad, I agree. I don’t currently know whether mine are.

      • @Sarah

        Thanks for your response. I know I have no business trying to tell you how to run your blog, but I want to say that despite my strongly worded criticism, I would be sad if you deleted the post – as I said above, I think it has been useful in getting people to take issues like this more seriously. Whether you should publish a retraction depends on whether you think it’s true.

        I hope things are okay, anyway.

      • Thanks for responding!

        “Yes, I was aware that I was being inflammatory, and it’s quite possible that a different phrasing would have been more precise, but I did want to start a debate.”

        I hate to say “tu quoque,” but it does seem to me that that’s exactly the sort of instrumentally-justified misrepresentation of views that we both want to avoid.

        Final thing I should say (responding not to you but to a general feeling from the comments) is: no-one at CEA / Giving What We Can thinks that everyone should take the pledge, or even that everyone who calls themselves an EA should take the pledge. Some people have much less secure financial situations than I do. Some people are planning to do good via low-paid direct work such that pledging won’t make sense for them. Some people don’t like pledges. In every case – more power to you. That the pledge doesn’t make sense for everyone doesn’t mean it doesn’t make sense for anyone.

      • (My response was written before SC’s replies. I’m astonished and dismayed by her comments above.)

    • The nice thing about marriage is that you can write your own vows. I will almost certainly never be comfortable taking a marriage vow that promises literally lifelong commitment. In fact, I never actually promise anyone anything, because I know that I often fail to do things I intend to do; I try to say things like “I’ll do my best”. I don’t think I’m alone in this.

      So I’m not actually comfortable taking a lifelong donation pledge either. My plan instead is to do the “try giving” thing at 10% indefinitely, renewing every year – same intended effect, less risk of moral conflict later if I change my mind about this being a good idea. If the “try giving” option didn’t exist, I’d be complaining a lot more about the inability to “write my own vows” on this.

    • I think it’s worth distinguishing between these claims:

      (1) The quotes from Ben Todd and Rob Wiblin show that they personally lied.
      (2) The quotes from Ben Todd and Rob Wiblin are evidence or contribute to a lying problem in EA.

      Seems to me like (1) is pretty dubious, but (2) may be right, depending on exactly what you mean by a “lying problem.” Shushing criticism imposes costs on those calling out lies, which lowers the cost to the liar of lying. Likewise, a cavalier attitude towards the question of whether pledges are for real contributes to a culture in which people make promises to be agreeable, and one can’t count on promises being kept.

      Because of this, I’m very happy to hear that you’re planning on writing up a clarification of the pledge.

    • On wedding vows, I think there’s a bit of goalpost-moving here, hopefully unintentional. Rob brought up wedding vows in the comments to the original forum post as an example of a commitment people do break from time to time without being extreme outliers. This was in response to Alyssa’s point that a lifelong commitment to giving 10% might be a bad idea if you have substantial uncertainty about what your life will be like in a couple of decades such that there’s a non-negligible chance that keeping such a commitment would interfere with your ability to pursue the highest-impact option:

      >Second, the GWWC pledge uses the phrase “for the rest of my life or until the day I retire”. This is a very long-term commitment; since most EAs are young (IIRC, ~50% of pledge takers were students when they took it), it might often be for fifty years or more. As EA itself is so young (under five years old, depending on exact definitions), so rapidly growing, and so much in flux, it’s probably a bad idea to “lock in” fixed strategies, for the same reason that people who take a new job every month shouldn’t buy a house. This is especially true for students, or others who will shortly make large career changes (as 80,000 Hours encourages). People in that position have very little information about their life in 2040, and are therefore in a bad position to make binding decisions about it. In response to this argument, pledge taker Rob Wiblin said that, if he changed his mind about donating 10% every year being the best choice, he would simply un-take the pledge. However, this is certainly not encouraged by the pledge itself, which says “for the rest of my life” and doesn’t contemplate leaving.

      The relevant thing isn’t whether the pledge, like marriage vows, might be somewhere in between an entirely nonbinding statement of intent and an unbreakable vow. It’s whether the pledge is sufficiently binding that considerations like the ones Alyssa brings up are strong arguments against taking the pledge. I think that the analogous considerations around marriage are quite strong, but are sometimes counterbalanced by the very high potential value of being able to extract a reciprocal commitment from another party. I certainly wouldn’t dismiss such an argument by saying that sometimes people divorce.

      • > On wedding vows, I think there’s a bit of goalpost-moving here, hopefully unintentional.

        I’m confused, which party seemed to be moving goalposts here? (At first I thought you meant by this that divorce was a bad analogy, but then you seemed to agree with the analogy at the end of your comment.)

      • Here’s the shift I see:

        (0) Alyssa makes the argument that the GWWC pledge is not right for some people because a lifelong pledge constrains behavior in ways that it’s hard to guarantee will be good.
        (1) Rob heavily implies that wedding vows are an example of a promise meant somewhat seriously, like the GWWC pledge, that is nonetheless sufficiently breakable that Alyssa’s argument that we should be reluctant to take lifelong pledges is unwarranted.
        (2) Critics such as Sarah point out that this seems to be taking wedding vows surprisingly unseriously, and that it seems like a bad sign if they’re a good example of the sort of thing where you wouldn’t worry at the beginning about the risks of making a lifelong commitment.
        (3) People like Will agree with Sarah that wedding vows are serious commitments to be entered into with care, costly to break, but seem to me to be implying that the fact that they’re still breakable *at all* saves Rob’s argument.

        This last bit doesn’t quite follow, and based on Will’s comments above, it seems like he doesn’t really endorse that conclusion. If so, it seems like he ought to agree that Rob’s implied argument is not really a relevant criticism of Alyssa’s original argument, and I’d like to see that explicitly affirmed.

        Here’s what looks somewhat like goalpost shifting to me:
        (0) Alyssa implicitly poses the question, “Is the GWWC pledge sufficiently binding that we should be worried about making such a lifelong commitment?”
        (1) Rob equates the bindingness of the GWWC pledge with the bindingness of wedding vows, and then implies that they are not sufficiently binding that Alyssa’s argument is relevant.
        (2) Sarah points out that this implies a surprisingly dim view of wedding vows.
        (3) Will implicitly substitutes the question, “are wedding vows completely unbreakable?”.

      • Benquo: Alyssa didn’t just argue your point (0), she also mentioned Rob personally and said

        >In response to this argument, pledge taker Rob Wiblin said that, if he changed his mind about donating 10% every year being the best choice, he would simply un-take the pledge. However, this is certainly not encouraged by the pledge itself, which says “for the rest of my life” and doesn’t contemplate leaving.

        And this is what Rob quoted and responded to, not the rest of that paragraph about the pledge being too inflexible. So I would summarize the sequence as:
        (0) Alyssa says the pledge doesn’t contemplate leaving.
        (1) Rob points out that the GWWC website explicitly does contemplate the possibility of leaving.
        (2) Some other people (Telofy) say that being willing to un-take pledges doesn’t meet the definition of ‘pledge’, and is problematic because it ‘commits one to only ever cooperating with agents that are so highly value-aligned that there is no conceivable way you could defect against them’
        (3) Rob counters with, among other things, the example of wedding vows as a ‘pledge’ which is strong enough to be meaningful but weak enough to be breakable under certain circumstances. With this I think he intends to highlight that viewing pledges as completely unbreakable is not the mainstream usage of the word.
        (4) Sarah quotes Rob’s (3) (but not (1)), saying “In other words: Rob Wiblin thinks that promising to give 10% of income…does not literally mean committing to actually do that. It means that you can quit any time you feel like it.”.
        (5) Will reiterates the comparison to wedding vows and implicitly points out that ‘you can quit any time you feel like it’ is an unreasonable characterization of wedding vows, because people take serious meaningful steps in order to keep them.

        I don’t really see any goalpost-moving here. I’m curious if you still do. I would agree that the goalposts had moved if I agreed with your (0), so I think that’s the crux. But I’m curious why you chose a (0) which was not the thing Rob actually quoted; at least personally I didn’t think it was ambiguous which part he was responding to.

      • Alexander, in the paragraph immediately prior to the one discussing the lifelong nature of the pledge, Alyssa specifically addressed circumstances where the pledge might not be infeasible to fulfill, but might still be a bad idea. It seems to me as though the obvious reading is that she had this in mind when bringing up the problem of making a lifelong commitment when you don’t know what your opportunities are going to be.

        Alyssa was talking about the pledge itself, which is a straightforward statement that one will keep giving 10% until retirement or death. Leaving is not mentioned in the text of the pledge. Rob brought up an FAQ, which does contemplate leaving. The FAQ mentions that the pledge is not *legally* binding, and says that you “can” cease to be a member, specifically bringing up the prospect of “serious unforeseen circumstances.”

        Overall, this seems entirely consistent to me with a reading whereby the pledge is morally binding barring circumstances that make it pretty much infeasible to fulfill. You don’t have to be an absolutist about promises (I’m not) to be surprised when someone thinks that a sufficient condition for reneging is that it no longer seems like a good idea, rather than the somewhat stricter condition that it seems like such a hardship that you don’t see how you can avoid taking the painful step of breaking a serious promise.

        Also, the FAQ is not on the pledge page, nor is the link to the FAQ prominently featured, nor does the FAQ show up prominently when I click through the links to the signup page or the “more info” button. It’s part of the sitewide FAQ linked in the site footer. I don’t think this is necessarily a bad setup, but it does mean that it’s important to consider how the pledge is most prominently described when thinking about its public meaning, not just the sum of all discussion available *somewhere*. Presumably there is a reason why the main pledge page does not in fact contemplate leaving.

      • I think I agree with nearly everything you just said. The way you describe the pledge is the way I treat the pledge. But basically I don’t think Rob was responding to the earlier points/paragraph (your (0)), he just saw a misconception about the impossibility of leaving (my (0)) being given airtime and wanted to point out that it wasn’t true, precisely because the setup of the website means that it’s more likely to be ignorance than malice leading to that misconception. If I’m wrong and it was intended as a response to your characterisation of Alyssa’s point, then I agree it’s a poor one, so maybe we can agree to disagree on what was being responded to and then we don’t seem to disagree on anything else.

        On your very last point, when I’m thinking about how the pledge will be interpreted, rather than speculating I would tend to think about how the (many) people I know who have taken the pledge actually do interpret it. I’ve yet to actually meet anyone who treats it even as seriously as wedding vows, let alone more seriously, though I’m certain those people are out there. If I was hearing lots of people say, e.g., ‘I want to take the pledge but can’t because what if I get very sick’, or ‘I wish I hadn’t taken the pledge because now it’s forcing me to make financially unwise decisions’, then I’d already be pushing for a change to the pledge or a more prominent position for the FAQ or both. But I don’t. Maybe British people just read this stuff differently; I really don’t know. By the sounds of Will’s post, he’s similarly surprised at the extent of the confusion. I strongly expect that this surprise, plus some generic bias towards simplicity, is the only reason it’s not in the pledge text.

  18. The post does raise some valid concerns, though I don’t agree with a lot of the framing. I don’t think of it in terms of lying. I do, however, see that the existing incentive structure is significantly at odds with epistemic virtue and truth-seeking. It’s remarkable that many EA orgs have held themselves to reasonably high standards despite not having strong incentives to do so.

    In brief:

    * EA orgs’ and communities’ growth metrics are centered around numbers of people and quantity of money moved. These don’t correlate much with epistemic virtue.
    * (more speculative) EA orgs’ donors/supporters don’t demand much epistemic virtue. The orgs tend to hold themselves to higher standards than their current donors.
    * (even more speculative; not much argument offered) Even long-run growth metrics don’t correlate too well with epistemic virtue.
    * Quantifying (some aspects of) quality and virtue into metrics seems to me to have the best shot at changing the incentive structure here.

    The incentive structure of the majority of EA-affiliated orgs has centered around growth metrics related to number of people (new pledge signups, number of donors, number of members), and money moved (both for charity evaluators and for movement-building orgs). These are the headline numbers they highlight in their self-evaluations and reports, and these are the numbers that people giving elevator pitches about the orgs use (“GiveWell moved more than $100 million in 2015” or “GWWC has (some number of hundreds of millions) in pledged money”). Some orgs have slightly different metrics, but still essentially ones that rely on changing the minds of large numbers of people: 80,000 Hours counts Impact-Adjusted Significant Plan Changes, and many animal welfare orgs count numbers of converts to veganism (or recruits to animal rights activism) through leafleting.

    These incentives don’t directly align with improved epistemic virtue! In many cases, they are close to orthogonal. In some cases, they are correlated but not as much as you might think (or hope!).

    I believe the incentive alignment is strongest in cases where you are talking about moving moderate to large sums of money per donor in the present, for a reasonable number of donors (e.g., a few dozen donors giving hundreds of thousands of dollars). Donors who are donating those large sums of money are selected for being less naive (just by virtue of having made that much money) and the scale of donation makes it worth their while to demand high standards. I think this is related to GiveWell having relatively high epistemic standards (though causality is hard to judge).

    With that said, the organizations I am aware of in the EA community hold themselves to much higher standards than (as far I can make out) their donor and supporter base seems to demand of them. My guess is that GiveWell could have been a LOT more sloppy with their reviews and still moved pretty similar amounts of money as long as they produced reviews that pattern-matched a well-researched review. (I’ve personally found their review quality improved very little from 2014 to 2015 and much more from 2015 to 2016; and yet I expect that the money moved jump from 2015 to 2016 will be less, or possibly even negative). I believe (with weaker confidence) that similar stuff is true for Animal Charity Evaluators in both directions (significantly increasing or decreasing review quality won’t affect donations that much). And also for Giving What We Can: the amount of pledged money doesn’t correlate that well with the quality or state of their in-house research.

    The story I want to believe, and that I think others also want to believe, is some version of a just-world story: in the long run epistemic virtue ~ success. Something like “Sure, in the short run, taking epistemic shortcuts and bending the truth leads to more growth, but in the long run it comes back to bite you.” I think there’s some truth to this story: epistemic virtue and long-run growth metrics probably correlate better than epistemic virtue and short-run growth metrics. But the correlation is still far from perfect.

    My best guess is that unless we can get a better handle on epistemic virtue and quantify quality in some meaningful way, the incentive structure problem will remain.

  19. There are lots of comments here so I want to focus on two quick issues:
    First, it’s important not to mix our emotional commitments and desirability biases with claims about what is true or effective. There may be good reasons to treat Jane Q. Public as a peer rather than a mark and not to use deceptive/pressure tactics, but the mere fact that they seem unappealing isn’t a good argument against them. Many of the most important/utility-producing features of modern society, e.g., paying people less who have less talent/ability or didn’t have the chance to be educated, are intuitively unappealing, and ‘treating people as a mark’ is a practice that is wildly successful in traditional advertising (just show them your product with some skimpily dressed chicks and rock music). (Though arguably the true ‘mark’ is the EA nerds whom the charity is addressing, and the talk of tricking Jane Q. Public is just a way to let them feel superior.)

    Secondly, if one is going to raise concerns about the dangers of lying one should be careful in how one does it. The kind of lying you seem to be worried about isn’t bad because it risks gathering power and putting it to harmful uses. It is bad because it is likely to be detected and cost credibility.

    Also, the mere fact that something *could* be used in a negative way isn’t an argument (in itself) that it is bad. What is true is that ‘accumulating power in the fashion mentioned is dangerous (even if unlikely to be discovered) because of the substantial risk that your future self or future organizational leaders will misuse the power’. It is still true that if one could somehow commit to using power so gathered in a good fashion it would be a good thing to do.

  20. Part of this essay is about confusion about how binding GWWC’s pledge is. I have a comment about that. For clarity, I’m a random Internet schmoe, and not a part of GWWC or EA.

    Suppose that Alice has a blog, and she posts that she spent some time with John, who “is a cool cat”. Bob reads this post and understands that Alice endorses John as a fun person to spend time with. Eve reads the post and responds with an angry comment, because John is not a cat, and he isn’t suffering from hypothermia even a little bit.

    Maybe someone tries to explain that “cool cat”, in this context, means “fun person to hang out with”. Eve doubles down: a cat is a feline, and anyone who uses that word to describe a person is being dishonest.

    In this situation we say that Eve is indulging in linguistic prescriptivism.

    I’m getting that feeling from this post. We have a quote from Robert Wiblin saying: “The use GWWC and I apply is completely mainstream use of the term pledge…”, and we have a response from OP talking about “what the word pledge actually means”.

    For myself, all the instances of “pledge” I have encountered have been in the sense of “yeah, you’re promising to seriously try, but if you can’t do this it’s not world-ending”. Examples include the Pledge Of Allegiance and the PBS Pledge Drive. So when I read GWWC’s pledge, I understand that the pledge is meant to be fairly binding but not unbreakably so. When I read their FAQ Entry #42 (https://www.givingwhatwecan.org/about-us/frequently-asked-questions/#42-how-does-it-work-is-it-legally-binding), which OP quotes exactly half of (she omits the half that would undermine her argument), it’s super clear to me that this is what GWWC means the pledge to be.

    So: GWWC attempted to communicate an idea to me, and I unambiguously and clearly understood the idea they were trying to communicate, but here’s this blog post saying that GWWC has a lying problem, because they used a word in a way that OP doesn’t agree with. :-/

    If someone took a poll (of EA forumgoers, or GWWC members, or whatever), and asked them if GWWC’s pledge was meant to be eternally unbreakable, and it turned out that a large fraction of those polled had believed it was meant in that way, that would change my mind. If someone took this poll and found that a very small fraction had believed this, I think that should change OP’s mind.

    ————-

    For what it’s worth, I do agree with this essay that the Animal Charity Evaluators thing is sketchy and bad. I hope that gets fixed.

    • Alyssa’s original criticism of the lifelong nature of the pledge was entirely consistent with a pledge that is “fairly binding but not unbreakably so.” The FAQ gives “serious unforeseen circumstances” as an example of a reason to renege, but not opportunities to have more impact that only make sense (or where you can be more effective) if you renege on the pledge, which is the sort of case Alyssa discussed in the prior paragraph of her EA Forum post.

      In that context, when someone responds by saying that pledges, such as wedding vows, are frequently broken, they’re saying by Gricean implication that such pledges are sufficiently OK to break that Alyssa’s criticism is irrelevant or very weak. I think you can disagree with that without insisting on an extremely nonstandard definition of “pledge”. The right thing to do – which Will MacAskill has now said he’ll do (see his above comment) – is to clarify what the pledge means, not just diagnose the people who had a different impression with a mental disorder.

  21. Issue 1: (note – in this particular comment I got a bit emotional, and then wasn’t sure how to rewrite it to undo that. I have other comments that will be more matter-of-fact)

    The title and tone of this post are playing with fire, i.e. courting controversy, in a way that (I think, but am not sure) undermines its goals.

    A: There’s the fact that describing these as “lying” seems approximately as true as the first two claims, which other people have mentioned. In a post about holding ourselves to high standards, this is kind of a big deal. Others have mentioned this.

    B: Personal integrity/honesty is only *one* element you need to have a good epistemic culture. Other elements you need include trust, and respect for people’s time, attention, and emotions.

    Just as every decision to bend the truth has consequences, every decision to inflame emotions has consequences, and these can be just as damaging.

    I assume (hope) it was a deliberate choice to use a provocative title that’d grab attention. I _think_ part of the goal was to punish the EA Establishment for not responding well to criticism and attempting to control said criticism.

    That may not be a _bad_ choice. Maybe it’s necessary but it’s a questionable one.

    The default world (see: modern politics, and news) is a race to the bottom of outrage and manufactured controversy. People _love_ controversy. *I* love controversy. I felt an urge to share this article on facebook and say things off the cuff about it. I resisted, because I think it would be harmful to the epistemic integrity of EA.

    Maybe it’s necessary to write a provocative title with a hazy definition of “lying” in order to get everyone’s attention and force a conversation. (In the same way it may be _necessary_ to exaggerate global warming by 4x to get Jane Q Public to care). But it is certainly not the platonic ideal of the epistemic culture we need to build.

  22. Meta note:

    Sarah – the font/formatting of your blog comments feels harder to parse than on other sites (such as LW or effective-altruism.com), largely because of the large font size and spacing. This may be a totally random aesthetic preference on my part, but since all the comments are _long_, having to scroll multiple pages to read them (and keep track of who is replying to whom) creates a cognitive overhead that makes it harder to keep the entire conversation in my working memory.

    (I tried zooming out the page so I could see more at once, which sort of helped, but since wordpress forces everything into a fairly small column, it didn’t help as much as I wanted)

    *If* other people also thought this and if it’s actually practical on wordpress (and if you want to encourage commenting on your blog, instead of LW or the EA forums), you might consider tweaking the line-height, font-size or other css attributes. (Although if you don’t have wordpress pro the only option may be to change the theme which may have a bunch of consequences you don’t like).

    • I roughly agree. I’d recommend dropping font size to 16px, line height to 25px, keeping width the same.

      Selecting a different font may also be good, but that’s trickier and anyway this one is roughly ok.
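
      For concreteness, here’s a minimal sketch of the kind of tweak I mean, assuming the theme exposes a custom-CSS option (e.g. Appearance → Customize → Additional CSS) and that comment text lives under a `.comment-content` class – that selector is a guess, so check the actual theme markup before using it:

      ```css
      /* Hypothetical example only: .comment-content is an assumed selector;
         the numeric values are the ones suggested above. */
      .comment-content p {
          font-size: 16px;   /* down from the current larger size */
          line-height: 25px; /* tighter vertical spacing, width unchanged */
      }
      ```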

      (source: am a UX designer)

      (also: good post)

  23. Issue 2: Running critical pieces by the people you’re criticizing is necessary, if you want a good epistemic culture. (That said, waiting indefinitely for them to respond is not required. I think “wait a week” is probably a reasonable norm)

    Reasons and considerations:

    a) they may have already seen and engaged with a similar form of criticism before. If that’s the case, it should be the critic’s responsibility to read up on it, and make sure their criticism is saying something new. Or, that it’s addressing the latest, best thoughts on the part of the person-being-criticized. (See Eliezer’s 4 layers of criticism)

    b) you may not understand their reasons well. Especially with something off-the-cuff on facebook. The principle of charity is crucial because our natural tendency is to engage with weaker versions of ideas.

    c) you may be wrong about things. *Because* our kind have trouble cooperating (we tend to criticize a lot), it’s important for criticism of Things We Are Currently Trying to Coordinate On to be made as-accurate-as-possible through _private_ channels before unleashing the storm.

    Controversial things are intrinsically “public facing” (see: Scott Alexander’s post on Trump that he specifically asked people not to share and disabled comments on, but which Ann Coulter ended up retweeting). Because it is controversial it may end up being people’s first exposure to Effective Altruism.

    Similar to my issue 1, I think Sarah intended this post as tit-for-tat punishment for EA Establishment not responding enough to criticism. Assuming I’m correct about that, I disagree with it on two grounds:

    – I frankly think Ben Todd’s post (the one most contributing to “EA establishment is defecting on meta-level epistemic discourse norms”) was totally fine. GWWC has limited time. Everyone has limited time. Dealing with critics is only one possible use of their time and it’s not at all obvious to me it’s the best one on the margin. Ben even notes a possible course of action: communicate better about discussions GWWC has already had.

    – Even in the maximally uncharitable interpretation of Ben’s comments… it’s _still_ important to run things by People You Are Criticizing Who Are In the Middle of a Project That Needs Coordination, for the reasons I said. If you’re demanding time/attention/effort on the part of people running extensive projects, you should put time/effort into making sure your criticism is actually moving the dialog forward.

    • So, the thing is, you’re assuming that I’m part of the community that’s “trying to coordinate” on EA. I never was an EA. EA became popular in my friend group, but I never liked it. I don’t have to be gentle with it, just because I know people who believe in it.

      If your friend writes a story and you want to help her make it better, you want to provide “constructive criticism.” But these people aren’t my friends and I’m not trying to help them do their jobs better. I don’t like what they’re doing in the first place.

  24. Right now you are talking to a person whose mind is out of control and who won’t make sense for at least the next few days.

  25. Occurs to me that, judging by your older posts, you seem to be familiar with Slate Star Codex (SSC). This post looks like a perfect example of toxoplasma (see “The Toxoplasma of Rage” on SSC):

    1. It’s written to be maximally inflammatory, especially the title. (Section 1 of the SSC article.)
    2. It draws on evidence that is so tenuous that endorsing it is strong anti-EA signaling – there would be no counter-signaling in endorsing something that everyone agrees with. (Sections 2 & 3 of the SSC article.)
    3. It’s on a topic that is highly charged. Every utilitarianism FAQ I remember reading had a section on why honesty/integrity/cooperation/etc. are crucial for every utilitarian who is not alone in their world. EAF has made it part of its “sequences” to rebut this naive conception of consequentialism. (Section 4 of the SSC article.)
    4. You draw on comments from discussions that a lot of people were involved in, so that they will feel personally addressed, like I did. (No particular section.)

    So you created a parasite that threatens to do what Owen Cotton-Barratt in his wisdom warned of years ago, turn something as ipso facto good as EA into something controversial.

    In turn, it will do nothing to promote honesty. In contrast, you already admitted to sacrificing your moral “high ground,” so it has already started to erode honesty: “Which means that it’s not a coincidence that the worst possible flagship case for fighting police brutality and racism is the flagship case that we in fact got. It’s not a coincidence that the worst possible flagship cases for believing rape victims are the ones that end up going viral. It’s not a coincidence that the only time we ever hear about factory farming is when somebody’s doing something that makes us almost sympathetic to it. It’s not coincidence, it’s not even happenstance, it’s enemy action. Under Moloch, activists are irresistably incentivized to dig their own graves.”

    (Oh, and I’m being overly dramatic. I don’t think all this will have any noticeable long-term effect.)

  26. Posted the following elsewhere, would like my position on this noted here.

    You know, this is a topic I’m really interested in, and I’d like to take part in the discussion, but I don’t feel that comfortable wading in because I feel personally ill-treated by the author of the post, and I think I am biased against them.

    Ironically, concern over this scenario is one of the reasons I responded challengingly to Harrison’s original criticism. Public, long-form criticism has *LOTS* of first-mover advantages, one of which being that the targets of your criticism are likely to be emotionally affected by the criticism itself.

    Harshly-worded, long-form criticism like this piece, and Harrison’s ACE critique, are written at the author’s leisure, out of the public eye, and without much uninvited scrutiny or outside evaluation.

    Responses to long-form criticism, on the other hand, are time-bound, high-pressure, emotionally-charged events in a context that is set by the opponent. Respond too slowly, and you are tried in absentia. Make a rhetorical misstep in an attempt to respond quickly, and you are judged again. Responses to criticism often require repeating the criticism itself, triggering mere-exposure effects. Multiple criticisms create compounding cognitive costs for the target, and this feature is especially present in long-form blog posts. People who read the criticism may not read a response, while anyone who reads the response will be reasonably likely to read the criticism.

    Responding to criticism is often time-consuming and may require explaining complex background knowledge or accurately describing detailed historical contexts. Any errors in explaining the background information, or the historical context, will likely be scrutinized and punished. Some people will be naturally better at responding to criticism, quickly and without compromising emotions, than others, and the best responders will not necessarily be the people who are most right. Criticism of organizations like ACE or 80k has asymmetrical stakes; criticism that falls flat costs the author a little bit of credibility, maybe, and possibly some ire from defenders of the organization. Criticism that gets a lot of attention, or attention from important entities (donors, perhaps?), can cost the targets their jobs, or the organization its existence. Again, the criticism that takes off in this way is not necessarily *good* criticism. Critical blog pieces, especially long-form posts, can also spread far beyond what the author originally intended. Remember that time Ann Coulter tweeted an SSC article? I take these features to be largely orthogonal to the actual quality of the criticism. They are structural characteristics of ‘criticizing’ and don’t depend on any particular feature of any specific criticism.

    I strongly believe that these features of public criticism, as well as others that I’m choosing to omit, should lead us to hold criticism to a *much* higher standard than non-critical descriptive writing. I am *not* saying criticism is bad or unimportant! I *am* saying that there are structural features of criticism which favor the critic, and these features are hazardous enough to warrant a higher standard for criticism. I recognize that there are good reasons to think differently about this; criticism is hard and people are probably too unwilling to do it. It’s good to reward criticism, and we should definitely applaud good-faith efforts to give constructive feedback. That said, I firmly believe critics should make every effort to ensure that long form ‘callout’ pieces like Sarah’s and Harrison’s are the best work they can reasonably produce, and that they have been completely fair to their targets and at least *sought* other perspectives. If it is worth writing the criticism, it is worth writing it well.

    There are ways to diminish the first-mover advantage of critics, but neither Harrison’s post nor Sarah’s made use of them. In particular: requesting comment from the targeted parties (which is *not* the same as seeking approval), steelmanning the target’s case (perhaps in addition to criticizing a less-flattering interpretation that the author thinks is more likely), and ensuring that the initial criticism is clearly written, well-researched, and only makes use of accusation, sarcasm, and other inflammatory tactics when absolutely warranted. I’d personally also like any intentionally inflammatory statements to be justified in an addendum or footnote, but that may be too much to ask.

    I think Harrison’s piece was much closer to being ideal criticism than Sarah’s, but his conduct in other contexts gave me pause. Sarah’s criticism I won’t further comment on, because, again, I feel a bit bruised and hostile toward her right now, and I would not trust myself to be totally objective.

      • My post does not attempt to address the content of Sarah’s post, I’m trying to point to a feature of criticism more generally that I believe is important. Better discourse is better for the side of truth. My intention is to provide meta-level feedback that could improve the discourse. I agree with some of Sarah’s object level claims, but disagree with how she communicated them. I think better norms about this kind of external criticism would help *both* sides get a fair hearing.

      • Your proposed remedy is extremely one-sided, would make it easier to praise than to criticize, and provides cover for criticized organizations to not engage with criticism unless it conforms to their specific preferred format.

        As I said above, this makes it read very much like excuses to not address the concerns, with no real justification.

  27. Sarah’s post highlights some of the essential tensions at the heart of Effective Altruism.

    Do we care about “doing the most good that we can” or “being as transparent and honest as we can”? These are two different value sets. They will sometimes overlap, and in other cases will not.

    And please don’t say that “we do the most good that we can by being as transparent and honest as we can” or that “being as transparent and honest as we can” is best in the long term. Just don’t. You’re simply lying to yourself and to everyone else if you say that. If you can’t imagine a scenario where “doing the most good that we can” or “being as transparent and honest as we can” are opposed, you’ve just suffered from a failure mode by [flinching away](http://lesswrong.com/lw/21b/ugh_fields/) from the truth.

    So when push comes to shove, which one do we prioritize? When we have to throw the switch and have the trolley crush either “doing the most good” or “being as transparent and honest as we can,” which do we choose?

    For a toy example, say you are talking to your billionaire uncle on his deathbed and trying to convince him to leave money to AMF instead of his current favorite charity, the local art museum. You know he would respond better if you exaggerate the impact of AMF. Would you do so, whether lying by omission or in any other way, in order to get much more money for AMF, given that no one else would find out about this situation? What about if you know that other family members are standing in the wings and ready to use all sorts of lies to advocate for their favorite charities?

    If you do not lie, that’s fine, but don’t pretend that you care about doing the most good, please. Just don’t. You care about being as transparent and honest as possible over doing the most good.

    If you do lie to your uncle, then you do care about doing the most good. However, you should consider at what price point you will not lie – at this point, [we’re just haggling](http://quoteinvestigator.com/2012/03/07/haggling/).

    The people quoted in Sarah’s post (including myself) all highlight how doing the most good sometimes involves not being as transparent and honest as we can. Different people have different price points, that’s all. We’re all willing to bite the bullet and sometimes send that trolley over transparency and honesty – whether by questioning the value of public criticism, as Ben does, appealing to emotions, as Rob does, or using intuition as evidence, as Jacy does – for the sake of what we believe is the most good.

    As a movement, EA has a big problem with believing that ends never justify the means. Yes, sometimes ends do justify the means – at least if we care about doing the most good. We can debate whether we are mistaken about the ends not justifying the means, but using insufficient means to accomplish the ends is just as bad as using excessive means to get to the ends. If we are truly serious about doing the most good as possible, we should let our end goal be the North Star, and work backward from there, as opposed to hobbling ourselves by preconceived notions of “intellectual rigor” at the cost of doing the most good.

    • Suppose an EA said something like “When push comes to shove, do we prioritize not stealing from people or doing the most good? Don’t say that the former is the way to do the latter; if you do, you’re lying to yourself. If you don’t steal, that’s fine, but don’t pretend to want to do the most good.” I’d start locking up my valuables when they’re around – and not expect them to do much good, either. The point being, while there are some situations in which there are tradeoffs between truthfulness and other values, it’s significantly atypical (in a non-dysfunctional environment), and when someone sees themselves as having to make that tradeoff, they’re much more likely to have made a mistake somewhere than to be correct, and so they end up sacrificing both. That’s why rule consequentialism beats act consequentialism.
      Which is not to say that the best rule is “never lie, no matter what”. But nor is it “lie when it seems to serve the greater good”.
