Reply to Criticism on my EA Post

My previous post, “EA Has A Lying Problem”, received a lot of criticism, and I’d like to address some of it here.

I was very impressed by what I learned about EA discourse norms while preparing this post and responding to the comments on my previous one. I’m appreciating anew that this is a community where people really do the thing of responding directly to arguments, updating on evidence, and continuing to practice discourse instead of collapsing into verbal fights. I’m going to try to follow that norm in this post.

Structurally, what I did in my previous post was

  • quote some EAs making comments on forums and Facebook
  • interpret what I think is the attitude behind those quotes
  • claim that the quotes show a pervasive problem in which the EA community doesn’t value honesty enough.

There are three possible points of failure in this argument:

  • The quotes don’t mean what I took them to mean
  • The views I claimed EAs hold are not actually bad
  • The quotes aren’t evidence of a broader problem in EA.

There’s also a possible prudential issue: that I may have, indeed, called attention to a real problem, but that my tone was too extreme or my language too sloppy, and that this is harmful.

I’m going to address each of these possibilities separately.

Possibility 1: The quotes don’t mean what I took them to mean

Case 1: Ben Todd’s Quotes on Criticism

I described Ben Todd as asking people to consult with EA orgs before criticizing them, and as heavily implying that it’s more useful for orgs to prioritize growth over engaging with the kind of highly critical people who are frequent commenters on EA debates.

I went on to claim that this underlying attitude is going to favor growth over course correction, and prioritize “movement-building” by gaining appeal among uncritical EA fans, while ignoring real concerns.

I said,

Essentially, this maps to a policy of “let’s not worry over-much about internally critiquing whether we’re going in the right direction; let’s just try to scale up, get a bunch of people to sign on with us, move more money, grow our influence.”  An uncharitable way of reading this is “c’mon, guys, our marketing doesn’t have to satisfy you, it’s for the marks!”  

This is a pretty large extrapolation from Todd’s actual comments, and I think I was putting words in his mouth that are much more extreme than anything he’d agree with. The quotes I pulled didn’t come close to proving that Todd actually wants to ignore criticism and pander to an uncritical audience.  It was wrong of me to give the impression that he’s deliberately pursuing a nefarious strategy.

And in the comments, he makes it clear that this wasn’t his intent and that he’s actually made a point of engaging with criticism:

Hi Sarah,

The 80,000 Hours career guide says what we think. That’s true even when it comes to issues that could make us look bad, such as our belief in the importance of the risks from artificial intelligence, or when the issues could be offputtingly complex, such as giving now vs. giving later and the pros and cons of earning to give. This is the main way we engage with users, and it’s honest.

As an organisation, we welcome criticism, and we post detailed updates on our progress, including our mistakes:

https://80000hours.org/about/credibility/evaluations/

https://80000hours.org/about/credibility/evaluations/mistakes/

I regret that my comments might have had the effect of discouraging important criticism.

My point was that public criticism has costs, which need to be weighed against the benefits of the criticism (whether or not you’re an act utilitarian). In extreme cases, organisations have been destroyed by memorable criticism that turned out to be false or unfounded. These costs, however, can often be mitigated with things like talking to the organisation first – this isn’t to seek permission, but to do things like check whether they’ve already written something on the topic, and whether your understanding of the issues is correct. For instance, GiveWell runs their charity reviews past the charity before posting, but that doesn’t mean their reports are always to the charity’s liking. I’d prefer a movement where people bear these considerations in mind as well, but it seems like they’re often ignored.

None of this is to deny that criticism is often extremely valuable.

I think this is plainly enough to show that Ben Todd is not anti-criticism. I’m also impressed that 80,000 Hours has a “mistakes page” in which they describe past failures (which is an unusual and especially praiseworthy sign of transparency in an organization).

Todd did, in his reply to my post, reiterate that he thinks criticism should face a high burden of proof because “organisations have been destroyed by memorable criticism that turned out to be false or unfounded.” I’m not sure this is a good policy; Ben Hoffman articulates some problems with it here.

But I was wrong to conflate this with an across-the-board opposition to criticism.  It’s probably fairer to say that Todd opposes adversarial criticism and prefers cooperative or friendly criticism (for example, he thinks critics should privately ask organizations to change their policies rather than publicly lambaste them for having bad policies).

I still think this is a mistake on his part, but when I framed it as “EA Leader says criticizing EA orgs is harmful to the movement”, I was exaggerating for effect, and I probably shouldn’t have done that.

Case 2: Robert Wiblin on Promises

I quoted Robert Wiblin on his interpretation of the Giving What We Can pledge, and interpreted Wiblin’s words to mean that he doesn’t think the pledge is morally binding.

I think this is pretty clear-cut and I interpreted Wiblin correctly.

The context there was that Alyssa Vance, in the original post, had said that many people might rationally choose not to take the pledge because unforeseen financial circumstances might make it inadvisable in future. She said that Wiblin had previously claimed that this was not a problem, because he didn’t view the pledge as binding on his future self:

pledge taker Rob Wiblin said that, if he changed his mind about donating 10% every year being the best choice, he would simply un-take the pledge.

Wiblin doesn’t think that “maybe I won’t be able to afford to give 10% of my income in future” is a good enough reason for people to choose not to pledge 10% of their lifetime income, because if they ever did become poor, they could just stop giving.

Some commenters claimed that Wiblin doesn’t have a cavalier attitude towards promises; he just thinks that in extreme cases it’s okay to break them.  In Jewish ritual law, it’s permissible to break a commandment if it’s necessary to save a human life, but that doesn’t mean that the overall attitude to the commandments is casual.

However, I think it does imply a cavalier attitude towards promises to say that you shouldn’t hesitate to make them on the grounds that you might not want to keep them.  If you don’t think, before making a lifelong pledge, that people should think “hmm, am I prepared to make this big a commitment?” and in some cases answer “no”, then you clearly don’t think that the pledge is a particularly strong commitment.

Case 3: Robert Wiblin on Autism

Does Robert Wiblin actually mean it as a pejorative when he speculates that maybe the reason some people are especially hesitant to commit to the GWWC pledge is that they’re on the autism spectrum?

Some people (including the person he said it to, who is autistic) didn’t take it as a negative.  And, in principle, if we aren’t biased against disabled people, “autistic” should be a neutral descriptive word, not a pejorative.

But we do live in a society where people throw around “autistic” as an insult to refer to anybody who supposedly has poor social skills, so in context, Wiblin’s comment does have a pejorative connotation.

Moreover, Wiblin was using the accusation of autism as a reason to dismiss the concerns of people who are especially serious about keeping promises. It’s equivalent to saying “your beliefs are the result of a medical condition, so I don’t have to take them seriously.”  He’s medicalizing the beliefs of those who disagree with him.  Even if his opponents are autistic, if he respected them, he’d take their disagreement seriously.

Case 4: Jacy Reese on Evidence from Intuition

I quoted Jacy Reese responding to criticism about his cost-effectiveness estimates by saying that the evidence base in favor of leafleting includes his intuition and studies that are better described as “evidence against the effectiveness of leafleting.”

His, and ACE’s, defense of the leafleting studies as merely “weak evidence” for leafleting, is a matter of public record in many places. He definitely believes this.

Does he really think that his intuition is evidence, or did he just use ambiguous wording? I don’t know, and I’d be willing to concede that this isn’t a big deal.

Possibility 2: The views I accused EAs of holding are not actually bad.

Case 1: Dishonesty for the greater good might sometimes be worthwhile.

A number of people in the comments to my previous post are making the argument that I need to weigh the harms of dishonest or misleading information against its benefits.

First of all, the fact that people are making these arguments at least partly belies the notion that all EAs oppose lying across the board; I’ll say more about the prevalence of these views in the community in the next section.

Holly Elmore:

What if, for the sake of argument, it *was* better to persuade easy marks to take the pledge and give life-saving donations than to persuade fewer people more gently and (as she perceives it) respectfully? How many lives is extra respect worth? She’s acting like this isn’t even an argument.

This is a more general problem I’ve had with Sarah’s writing and medical ethics in general– the fixation on meticulously informed consent as if it’s the paramount moral issue.

Gleb Tsipursky:

If you do not lie, that’s fine, but don’t pretend that you care about doing the most good, please. Just don’t. You care about being as transparent and honest as possible over doing the most good.

I’m including Gleb here, even though he’s been kicked out of the EA community, because he is saying the same things as Holly Elmore, who is a respected member of the community.  There may be more EAs out there sharing the same views.

So, cards on the table: I am not an act-utilitarian. I am a eudaimonistic virtue ethicist. What that means is that I believe:

  • The point of behaving ethically is to have a better life for yourself.
  • Dishonesty will predictably damage your life.
  • If you find yourself tempted to be dishonest because it seems like a good idea, you should instead trust that the general principle of “be honest” is more reliable than your guess that lying is a good idea in this particular instance.

(Does this apply to me and my lapses in honesty?  YOU BET.  Whenever it seems like a good idea at the time for me to deceive, I wind up suffering for it later.)

I also believe consent is really important.

I believe that giving money to charitable causes is a non-obligatory personal decision, while respecting consent to a high standard is not.

Are these significant values differences with many EAs? Yes, they are.

I wasn’t honest enough in my previous post about this, and I apologize for that. I should have owned my beliefs more candidly.

I also exaggerated for effect in my previous post, and that was misleading, and I apologize for that. Furthermore, in the comments, I claimed that I intended to exaggerate for effect; that was an emotional outburst and isn’t what I really believe. I don’t endorse dishonesty “for a good cause”, and on occasions when I’ve gotten upset and yielded to temptation, it has always turned out to be a bad idea that came back to bite me.

I do think that even if you are a more conventional utilitarian, there are arguments in favor of being honest always and not just when the local benefits outweigh the costs.

Eliezer Yudkowsky and Paul Christiano have written about why utilitarians should still have integrity.

One way of looking at this is rule-utilitarianism: there are gains from being known to be reliably trustworthy.

Another way of looking at this is the comment about “financial bubbles” I made in my previous post.  If utilitarians take their best guess about what action is the Most Good, and inflate public perceptions of its goodness so that more people will take that action, and encourage the public to further inflate perceptions of its goodness, then errors in people’s judgments of the good will expand without bound.  A highly popular bad idea will end up dominating most mindshare and charity dollars. However, if utilitarians critique each other’s best guesses about which actions do the most good, then bad ideas will get shot down quickly, to make room for good ones.
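To make the bubble dynamic concrete, here is a toy simulation. It is entirely my own sketch, with made-up parameters (fifty agents, a 5% hype rate, noisy private evidence), not anything drawn from an EA source. The point is just the shape of the two regimes: if everyone repeats the loudest claim with a little added hype, the error compounds geometrically; if everyone checks the loudest claim against their own independent evidence, estimates stay anchored near the truth.

```python
import random

# Toy model (my own sketch, not from any EA source): agents publicly
# estimate the value of an intervention whose true value is 1.0.
# Without critique, each agent repeats the loudest public claim with a
# little added hype; with critique, each agent weighs the loudest claim
# against their own noisy private evidence.

TRUE_VALUE = 1.0
N_AGENTS = 50
N_ROUNDS = 30

def simulate(critique: bool) -> float:
    """Return the average public estimate after N_ROUNDS of discussion."""
    estimates = [TRUE_VALUE + random.gauss(0, 0.2) for _ in range(N_AGENTS)]
    for _ in range(N_ROUNDS):
        loudest = max(estimates)  # the most-hyped claim gets the mindshare
        for i in range(N_AGENTS):
            if critique:
                # Critics regress the loudest claim toward independent evidence.
                evidence = TRUE_VALUE + random.gauss(0, 0.2)
                estimates[i] = 0.2 * loudest + 0.8 * evidence
            else:
                # Boosters repeat the loudest claim, slightly inflated.
                estimates[i] = loudest * 1.05
    return sum(estimates) / N_AGENTS

random.seed(0)
print("with critique:   ", round(simulate(True), 2))   # stays near the true value
print("without critique:", round(simulate(False), 2))  # several times the true value, still rising
```

It’s a cartoon, of course, but it’s the mechanism I have in mind: hype compounds, and critique regresses estimates back toward the evidence.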

Case 2: It’s better to check in with EA orgs before criticizing them

Ben Todd, and some people in the comments, have argued that it’s better to run critical blog posts by EA orgs before making those criticisms public.  This rhymes a little with the traditional advice to admonish friends in private but praise them in public, in order to avoid causing them to lose face.  The idea seems to be that public criticism will be less accurate and also that it will draw negative attention to the movement.

Now, some of the issues I criticized in my blog post have also been brought up by others, both publicly and privately, which is where I first heard about them. But I don’t agree with the basic premise in the first place.

Journalists check quotes with sources, that’s true, and usually get quotes from the organizations they’re reporting on. But bloggers are not journalists, first of all.  A blog post is more like engaging in an extended conversation than reporting the news. Some of that conversation is with EA orgs and their leaders — this post, and the discussions it draws on, respond to writings of various degrees of “officialness” coming from EA orgs.  I think the public record of discussion is enough of a “source” for this purpose; we know what was said, by whom, and when, and there’s no ambiguity about whether the comments were really made.

What we don’t necessarily know without further discussion is what leaders of EA orgs mean, and what they say behind closed doors. It may be that their quotes don’t represent their intent.  I think this is the gist of what people saying “talk to the orgs in private” mean — if we talked to them, we’d understand that they’re already working on the problem, or that they don’t really have the problematic views they seem to have, etc.

However, I think this is an unfair standard.  “Talk to us first to be sure you’re getting the real story” is extra work for both the blogger and the EA org (do you really have to check in with GWWC every time you discuss the pledge?)

And it’s trying to massage the discussion away from sharp, adversarial critics. A journalist who got his stories about politics almost entirely from White House sources, and relied very heavily on his good relationship with the White House, would have a conflict of interest and would probably produce biased reporting. You don’t want all the discussion of EA to be coming from people who are cozy with EA orgs.  You don’t necessarily want all discussion to be influenced by “well, I talked to this EA leader, and I’m confident his heart’s in the right place.”

There’s something valuable about having a conversation going on in public. It’s useful for transparency and it’s useful for common knowledge. EA orgs like GiveWell and 80K are unusually transparent already; they’re engaging in open dialogue with their readers and donors, rather than funneling all communication through a narrow, PR-focused information stream.  That’s a remarkable and commendable choice.

But it’s also a risky one; because they’re talking a lot, they can incur reputational damage if they’re quoted unfavorably (as I did in my previous post).  So they’re asking us, the EA and EA-adjacent community, to do some work in guarding their reputation.

I think this is not necessarily a fair thing to expect from everyone discussing an EA topic. Some people are skeptical of EA as a whole, and thus don’t have a reason to protect its reputation. Some people, like Alyssa in her post on the GWWC pledge, aren’t even accusing an org of doing anything wrong, just discussing a topic of interest to EAs like “who should and shouldn’t take the pledge?” She couldn’t reasonably have foreseen that this would be perceived as an attack on GWWC’s reputation.

I think, if an EA org says or does something in public that people find problematic, they should expect to be criticized in public, and not necessarily get a chance to check over the criticism first.

Possibility 3: The quotes I pulled are not strong evidence of a big problem in EA.

I picked quotes that I and a few friends had noticed offhand as unusually bad.  So, obviously, it’s not the same thing as a survey of EA-wide attitudes.

On the other hand, “picking egregious examples” is a perfectly fine way to suggest that there may be a broader trend. If you know that a few people in a Presidential administration have made racist remarks, for instance, it’s not out of line to suggest that the administration has a racism problem.

So, I stand behind cherry-picked examples as a way to highlight trends, in the context of “something suspicious is going on here, maybe we should pay attention to it.”

The fact that people are, in response to my post, defending the practice of lying for the greater good is also evidence that these aren’t entirely isolated cases.

Of course, it’s possible that the quotes I picked aren’t egregiously bad, but I think I’ve covered my views on that in the previous two sections.

I think that, given the Intentional Insights scandal, it’s reasonable to ask the question “was this just one guy, or is the EA community producing a climate that shelters bullshit artists?”  And I think there’s enough evidence to be suspicious that the latter is true.

Possibility 4: My point stands, but my tactics were bad

I did not handle this post like a pro.

I used the title “EA Has a Lying Problem”, which is inflammatory, and also (worse, in my view), not quite the right word. None of the things I quoted were lies. They were defenses of dishonest behavior, or, in Ben Todd’s case, what I thought was a bias against transparency and open debate. I probably should have called it “dishonesty” rather than “lying.”

In general, I think I was inflammatory in a careless rather than a pointed way. I do think it’s important to make bad things look bad, but I didn’t take care to avoid discrediting a vast swath of innocents, and that was wrong of me.

Then, I got emotional in the comments section, and expressed an attitude of “I’m a bad person who does bad things on purpose”, which is rude, untrue, and not a good look on me.

I definitely think these were mistakes on my part.

It’s also been pointed out to me that I could have raised my criticisms privately, within EA orgs, rather than going public with a potentially reputation-damaging post (damaging to my own reputation or to the EA movement.)

I don’t think that would have been a good idea in my case.

When it comes to my own reputation, for better or for worse I’m a little reckless.  I don’t have a great deal of ability to consciously control how I’m perceived — things tend to slip out impulsively — so I try not to worry about it too much.  I’ll live with how I’m judged.

When it comes to EA’s reputation, I think it’s possible I should have been more careful. Some of the organizations I’ve criticized have done really good work promoting causes I care about.  I should have thought of that, and perhaps worded my post in a way that produced less risk of scandal.

On the other hand, I never had a close relationship with any EA orgs, and I don’t think internal critique would have been a useful avenue for me.

In general, I think I want to sanity-check my accusatory posts with more beta readers in future.  My blog is supposed to represent a pretty close match to my true beliefs, not just my more emotional impulses, and I should be more circumspect before posting stuff.


45 thoughts on “Reply to Criticism on my EA Post”

  1. By and large this seems like a pretty reasonable reply. A couple of things jumped out at me. First: “a blog post is more like engaging in an extended conversation than reporting the news”, and second: “’Talk to us first to be sure you’re getting the real story’ is extra work for both the blogger and the EA org.”

    More conversations do create more work, and if there are too many it doesn’t scale. But, if you’re making conversation in an attempt to influence EA organizations, this is creating more work for them anyway; or at least it will if you’re successful in being heard! So you probably shouldn’t assume up front that the issues you want to discuss are so unimportant that it’s not worth their time to have a private conversation. This is self-defeating. Better to be direct about it, and let them decide whether they want to have the conversation. (Particularly if they’ve said that’s what they wanted.)

    Or to put it another way: I’m not saying you’re protesting, but there is a model of activism that says we protest in public in order to get a “seat at the table” to start negotiations. But if the door is already open, maybe you want to walk through it?

    • Posts like Sarah’s (and public criticism in general) don’t just influence the person or group you’re criticizing. They also inform the interested public. That is immensely valuable. “Walking through the door” (i.e. taking the criticism private) removes that tremendous benefit.

      I am not associated with any EA organization, nor am I within any social circles close enough to those orgs as would enable me to hear via the private grapevine about the problems and trends Sarah discusses. But I sure do want to know about those problems and trends. What’s more, if EA organizations would prefer that they be criticized in private, rather than have people like me find out about stuff like this from posts like Sarah’s, then I want to know all the more.

      To put it another way: if you set out to do good, that’s fine and well; but that does not mean that everyone in the world will, nor should, join with you to help you in your task in the way most convenient for you. Organizations like GiveWell, GWWC, etc., absolutely should be subjected to serious, informed, public criticism. Such criticism (if it’s accurate, reasoned, etc.) is of immediate benefit to the public, regardless of whether it is of any help whatsoever to the organizations being criticized.

      • It might be worth emphasizing again that the suggestion was ‘talk to them to get their input before going public’ rather than ‘talk to them and then don’t go public’. Excluding the case where the critic’s objections are so thoroughly answered by the organisation that they don’t want to raise them in public anymore, you’d still get to hear about it when it does go public, just with a small delay, extra context and some extra amount of work. Worrying about the extra work, as Sarah does, seems reasonable.

        Reflecting on my own experiences though, maybe even in the unlikely case where a critic completely changes their mind it would generally be useful for the critic to go public explaining what they did think and why the conversation changed their mind. Certainly I’ve occasionally had issues with CEA, raised them with CEA, been impressed by the changes they plan to make in response, and then seen people around me raise the same criticisms a little later (normally before the changes are actually implemented). It’s possible I missed an opportunity to raise the general level of information/discourse in those cases.

      • @Alexander Gordon-Brown:

        I agree, in principle, with your reflection. I definitely wouldn’t want to exclude “the case where the critic’s objections are so thoroughly answered by the organisation that they don’t want to raise them in public anymore”.

        The problem, of course, is that the scenario where the critic goes to the org in question, has their criticism answered, and then writes up the whole “how I changed my mind” story, exacerbates the “extra work for the critic” issue. I predict that trying to adopt such a policy, as a critic, in practice drastically reduces how much criticism you actually end up publishing. This would be a bad thing.

        Who benefits from this more considerate approach to criticism? Well, the organization which is the target of the criticism, certainly. But do I, as a member of the public, care about that? I do not. Their interests are not mine. Who else? I benefit, possibly, if the result is that I get more accurate information, on the whole. But this will only be the case if the increased accuracy of publicly available information outweighs the in-practice suppressive effect on criticism (as described in previous paragraph).

        And consider this also: criticism isn’t just about *facts*. It’s also about *perspective*. This very post on which we’re commenting illustrates this, with several examples of criticism which boils down not to accusation of previously-unrevealed unsavory facts, but of condemnatory judgment of publicly-available facts. This kind of criticism cannot gain accuracy from being checked with its target. So the public does not benefit from the critic having gone through private channels first.

        Let critics go directly to the criticism. Let the targets respond in public. Let the critics respond in turn with updates to their evaluation, if any. And let the public judge.

  2. I was struck by how the previous post did not have an epistemic status (and for that matter, neither does this one); this seems like a case where it would have been particularly useful.

    Also, though it may be a tangent, I doubt the following somewhat:

    If you know that a few people in a Presidential administration have made racist remarks, for instance, it’s not out of line to suggest that the administration has a racism problem.

    Or rather, while I agree that it’s probably not “out of line” (i.e. a violation of discourse norms) to make such a suggestion, I doubt that it will actually be true. To be more precise, I’m wary of a tendency to overestimate what the “racism problem” would imply about the beliefs of the average member of such a hypothetical administration. It seems to pattern match to a general problem of “rounding people off to extremes”, which I think has bad consequences — as I have remarked upon elsewhere.

  3. Thanks for this post Sarah. I was very critical of the first post, and while I still on-balance disagree with your object-level assessment of a couple of points, nothing here strikes me as unreasonable or ill-founded. I think it’s a useful contribution to an ongoing debate about what norms EA should or should not have, and it’s helped me to update my views.

    On a more personal note, my experience is that it’s very difficult to publicly admit, as you do here, that you “expressed an attitude of “I’m a bad person who does bad things on purpose”, which is rude, untrue, and not a good look on me”. I don’t know you, so I obviously don’t know whether you find it as difficult as I do, but assuming you do I’m impressed.

  4. The discussion of talking to organizations first reminds me of security vulnerability disclosure. There too the health of the discourse contends with various reputation issues and the potential for true information to do harm.

    The solution that community settled on is that vulnerabilities are reported to the author first. Ideally, the author then issues a patch, waits for their users to apply it, and then publishes crediting the researcher, who is then free to speak at length. If the author does not act promptly (the longest delay I’ve heard of is six months) then the researcher is entitled to publish loud and angry.

    This compromise was reached after several years of experimentation and seems to work pretty well for everyone. The software publishers who are best about good behavior wrt these customs also have the best security records.

    I’m not sure exactly what the equivalent of issuing a patch is here, but where there is one, we may be able to learn from that experience.

  5. I’d strongly prefer not to get involved in a lengthy discussion on this topic right now, but since there was a question about my views, I’d like to answer.

    “Does he really think that his intuition is evidence, or did he just use ambiguous wording?”

    I do think my intuition — and other people’s — is evidence. By evidence, I mean it’s information that if accounted for will lead someone to form more accurate beliefs about the world. This is entirely an empirical belief. I’d change my mind on this if I repeatedly saw intuition-assisted beliefs being less accurate than non-intuition-assisted beliefs, all else equal.

    I think the strength of intuition evidence varies widely. My intuition about what ingredients will taste good together in cooking is pretty good, and I’d hate to cook without it, but I hardly trust my intuition on economics in the Middle East. Of course, if my intuition is all I have to work with to form beliefs about Middle East economics, I’d sooner rely on it than make completely uneducated guesses. Intuition also varies widely across people, e.g. between me and an economist specializing in the Middle East.

    “His, and ACE’s, defense of the leafleting studies as merely ‘weak evidence’ for leafleting, is a matter of public record in many places. He definitely believes this.”

    Yes, I think current leafleting studies are weak evidence. With a lot of uncertainty, I think they’re even weaker evidence than intuition and weaker evidence than using higher quality RCTs from other fields, such as Get Out The Vote advocacy, and connecting it to veg leafleting with intuitive conversion rates. I also believe that we need to use all the evidence we have available to make decisions about how to do the most good. Yes, most all of that is weak, but if we can just bump our likelihood of success from 55% to 56%, that’s still a difference of potentially billions or more animals helped.

  6. Hi Sarah, I’d just like to thank you for bringing this important issue to my attention. I am mortified to find out that anyone read my comment as derogatory towards anyone, as that was the precise opposite of my intention. I hope you’ll forgive the delay in my reply as I have been away on annual leave.

    “But we do live in a society where people throw around “autistic” as an insult to refer to anybody who supposedly has poor social skills, so in context, Wiblin’s comment does have a pejorative connotation.”

    I used the term ‘on the autism spectrum’, which I thought was respectful or at least neutral.

    “Moreover, Wiblin was using the accusation of autism as a reason to dismiss the concerns of people who are especially serious about keeping promises.”

    It was not my intention to do that. That any view is disproportionately held by people on the autism spectrum is not itself an argument that it is wrong (or right). I figured that in this community such a thing would go without saying.

    Rather I was speculating about why I and others could have such different i) intuitions about the strength of commitment implied by the word ‘pledge’, as a pure matter of social linguistic convention, and ii) moral intuitions about how bad it is to break a pledge relative to other moral considerations, such as giving up the opportunity to switch to more effective opportunities to help other people that become available.

    I figured this observation might encourage a more ‘live and let live’ attitude to such differences – at least it did for me. That seems to be the main conclusion of research into intercultural and interpersonal differences in the ‘moral foundations theory’: that we can’t really expect everyone to share our exact intuitions in moral matters and just have to get along anyway.

    I’m sorry that what I said came across the wrong way to others – I truly was only trying to understand how people come to different viewpoints, and wasn’t trying to minimize anyone’s perspective.

    I am at least relieved to find out that the person I was writing to interpreted it just as I intended and did not take any offence.

    Thanks again,

    Rob

    • I used the term ‘on the autism spectrum’, which I thought was respectful or at least neutral.

      It’s not the term that’s insulting, though.

      I can’t speak for Sarah, but I don’t find anything wrong with the term “autistic”; it’s descriptive, it’s clear, it’s fine. The problem is not the word, but the thought behind it: that I only believe what I believe because I am autistic — and that you therefore do not need to engage with my beliefs. By explaining my beliefs as the product of autism, you explain them away; by saying “well, sure, autistic people believe this, being autistic and all”, you save yourself having to explain what, exactly, is wrong with those beliefs, if anything, and why you believe differently, and why people should agree with you and not with me.

      It is, in short, an instance of Bulverism.

      • I engaged with the belief substantively throughout the rest of the thread, precisely because I don’t think someone believing something because they are on the spectrum (even if you could ever be confident that that were the case) is a reason to think it is false.

    • I was quite surprised to hear you describe that thread as “I truly was only trying to understand how people come to different viewpoints.” The autism comment was not the only thing you said in that thread I found dismissive. Phrases like “That’s the only sensible way to act”, “plainly dominates”, “obviously” also contributed to my impression that you thought your way was obviously correct and other people needed to stop bothering you about it.

      I’m relieved to hear that that’s not the case.

    • For what it’s worth, my problem wasn’t with the term “autistic” and most people I know don’t have a problem with it; I definitely wasn’t advocating for more euphemisms.

  7. On Ben: You say, “I still think this is a mistake on his part, but when I framed it as “EA Leader says criticizing EA orgs is harmful to the movement”, I was exaggerating for effect, and I probably shouldn’t have done that.” But that’s not all of what you said, is it? You singled out Ben by name in a post entitled “EA has a lying problem.”

    On Rob: It is not “clear-cut” at all that Rob doesn’t consider the pledge “morally binding.” Comparing the pledge to a marriage does not indicate that the pledge is not a serious commitment undertaken with every intention of keeping it. Some people are cavalier about marriage, sure, but the solution to that is not to insist that marriage should be considered an Unbreakable Vow. All legal contracts can be broken under some circumstances– otherwise entering a contract would be a form of slavery. The point of making a contract is not that it will be literally impossible for you not to fulfill it– it’s that you’re setting a penalty if you don’t, such as having to pay for breach of contract, losing collateral, or being held responsible in court. That’s how marriage is *legally* (only you can know if you’ve given it every possible chance in your heart). You are legally bound and have to go through a difficult legal process to become unbound. This gives the moral pledge weight. The divorce analogy for unpledging is perfectly apt and does not indicate a lack of seriousness at all. This is leaving aside the important point that GWWC sets the rules for their pledge, not *your* unusually specific and strict definition of a pledge.

    You say: “However, I think it does imply a cavalier attitude towards promises to say that you shouldn’t hesitate to make them on the grounds that you might not want to keep them. If you don’t think, before making a lifelong pledge, that people should think “hmm, am I prepared to make this big a commitment?” and in some cases answer “no”, then you clearly don’t think that the pledge is a particularly strong commitment.”

    False. Where did Rob say he would unpledge if he didn’t want to keep donating? IIRC, Alyssa was not talking about people who anticipate not *wanting* to donate in the future, but people who wouldn’t be *able* to due to unforeseen circumstances.

    • False. Where did Rob say he would unpledge if he didn’t want to keep donating?

      Right here: http://effective-altruism.com/ea/14i/contra_the_giving_what_we_can_pledge/9g5

      And here: http://effective-altruism.com/ea/14i/contra_the_giving_what_we_can_pledge/960

      Alyssa Vance also quotes Rob as saying this in the original post here: http://effective-altruism.com/ea/14i/contra_the_giving_what_we_can_pledge/

      However, the reference is to a convo on Facebook, which I don’t know how to find or whether I could access it. Feel free to check on this for yourself, if you have access.

      • Both of these statements are perfectly reasonable. He’s not saying he would stop because he didn’t want to (by which I was thinking, *merely* didn’t want to give the money), but if he believed it was not accomplishing the altruistic goal he set out to fulfill. That’s not giving up because you just don’t feel like it, which is what I took Sarah to mean.

        “I’ve taken the pledge because I think it’s a morally good thing to do and it’s useful to have commitment strategies to help you live up to what you think is right. I expect to follow through, because I expect to believe that keeping the pledge is the right thing for me to do.”

        “If it turns out to be bad, I will no longer do it, because there’s no point having a commitment device to prompt you to follow through on something you don’t think you should do. That’s the only sensible way to act.”

      • “The question is under what conditions you can break a pledge, as it’s ambiguous.

        I think ‘this pledge no longer accomplishes the underlying goal which motivated my past self to take it’ is a generally acceptable reason, and rightly so. Your past self would have wanted to write in such an exit clause if they had anticipated it (or had the flexibility), so there’s no breakdown in cooperation.”

        This is closer to “Rob would stop pledging if he didn’t want to.” There’s still an important difference between giving up on the pledge because it’s convenient or more fun to have the extra money for yourself and determining that the pledge is not the best thing you can do for the world.

      • The original Facebook thread was only about whether I would untake the pledge if I no longer believed it was *morally* best to donate 10% of my income, not if it was prudentially bad for me (e.g. I wanted to buy fancy new things for myself). As was most of the discussion on the forum thread.

        I would only untake the pledge if I believed it was morally harmful for me to continue fulfilling it, as my taking it was implicitly premised on it being morally desirable for me to give away 10% of my income. I find such a scenario pretty far-fetched though. I would not untake it for personal non-moral gain.

        Cite: https://www.facebook.com/events/177675359369053/permalink/181505212319401/?comment_id=181591775644078&reply_comment_id=182348602235062&comment_tracking=%7B%22tn%22%3A%22R%22%7D

      • > Both of these statements are perfectly reasonable. He’s not saying he would stop because he didn’t want to (by which I was thinking, *merely* didn’t want to give the money), but if he beieved it was not accomplishing the altruistic goal he set out to fulfill.

        I think this highlights a key discrepancy in how people are viewing this. In my mind, there’s no point in pledging something unless you otherwise might not do it. In this particular case, I see it as people trying to commit themselves to giving, even if it seems hard later. No one thinks you should donate when it’s hurting the world, but it’s easy to talk yourself into the idea that it’s hurting the world when it’s just hurting you.

        I literally don’t understand why you would pledge something, conditional on it being a good way to accomplish a goal you have. I also think it’s incorrect and misleading to count pledges taken with that conditional as lifelong for purposes of calculating impact.

  8. As for your comments on me:
    You say: “A number of people in the comments to my previous post are making the argument that I need to weigh the harms of dishonest or misleading information against its benefits.”

    Totally agree.

    You say: “First of all, the fact that people are making these arguments at least partly belies the notion that all EAs oppose lying across the board”

    1) Who ever said this? 2) Why are you bringing this up as if opposing lying across the board is the only way not to have a “lying problem”?

    Thank you for quoting me fairly. I said things that you could easily have quote-mined, but you didn’t, and I appreciate that. However, you did quote me next to Gleb for no apparent reason other than to call my credibility into question by association. I totally agree with what you quoted Gleb as saying, but I resent you comparing our statements as if to suggest that Gleb (an EA ex-communicant) acts similarly to me (a respected member of EA) in general.

    “I’m including Gleb here, even though he’s been kicked out of the EA community, because he is saying the same things as Holly Elmore, who is a respected member of the community. There may be more EAs out there sharing the same views.”

    What purpose does quoting Gleb serve here other than to hold me and EA guilty by association with Gleb?

    “I do think that even if you are a more conventional utilitarian, there are arguments in favor of being honest always and not just when the local benefits outweigh the costs.”

    This is, in fact, what I think, as I said in my original comment. Looking back, I might have been unclear (i.e. I should have said “Once we allow that lying or spin is not always the wrong move BY DEFINITION” instead of just “Once we allow that lying or spin is not always the wrong move” as if I thought we routinely face these decisions). Fwiw, I believe act utilitarianism is (essentially) the truth about morality; however, the best way to act on that knowledge is by behaving as a rule utilitarian.

    I sort of ran together the issues of “what is being honest?” and “should we ALWAYS be honest?” A big reason I disagreed with your post so strongly is that I do not think persuasion is necessarily dishonesty or manipulation (and if it was, your OP would have its pants on fire). Being too harsh on others is a good practice if you want to avoid errors in their favor, but it’s not exactly honesty, either. I think that’s what you want EA to do to itself, and that would only harm the cause without really satisfying anyone (since I strongly suspect this is not your real problem with EA). I think the risk of errors or fudging in EA’s favor has to be weighed against the risk of EA dying because of an inability to meet its own impossible standards of discourse or attract any new people.

  9. “I also exaggerated for effect in my previous post, and that was misleading, and I apologize for that. Furthermore, in the comments, I claimed that I intended to exaggerate for effect; that was an emotional outburst and isn’t what I really believe. I don’t endorse dishonesty “for a good cause”, and on occasions when I’ve gotten upset and yielded to temptation, it has always turned out to be a bad idea that came back to bite me.”

    You can’t just assert that there is never any conflict between utility and honesty. They are very often in conflict, such that I agree lying can never be a good general purpose policy. You have to own the price of unflinching honesty, like when that Nazi is at your door asking for the Jew in your attic. You’re dodging hypotheticals which I was engaging with. At least be clear about that if you’re going to criticize me.

    You’re tempting me to excoriate you for your “lying problem” because apparently you have lied before, and you had “emotional outbursts” that were basically lies on your own blog.

    • That sentiment has been a core value of Western civilization ever since Socrates argued that injustice harms the doer.

      But I’m not surprised by your attitude; in fact, it pattern-matches to my stereotype of EA. I think it also encapsulates the aspect(s) of EA that I don’t like, and I suspect something similar may be true of Sarah. (I mention this because you asked Sarah in the comments on the previous post to state her true rejection of EA; this seems like a candidate.)

      • Yeah, that’s my true rejection of EA. I’m not as frank about it as I would like, most of the time, but…yeah. The problem is the A.

      • What I find immoral is the idea that ultimately morality is about living a better life for yourself. I strongly agree that living ethically correlates with a better life for one’s self.

        Would you defend the idea that other people don’t have ultimate value? I find the moral importance of others to be a glaring omission when someone lays out the premises of their morality.

      • Oh, I think I owe you an explanation. Of course I think other people matter. I think that what motivates me to act on other people’s behalf is their importance to me, which is often quite large. I also think people matter to themselves and deserve to seek *their* happiness; I, Sarah, am not in a special frame of reference. It’s about radiating circles of concern, not literally being indifferent to the welfare of others.

      • “Yeah, that’s my true rejection of EA. I’m not as frank about it as I would like, most of the time, but…yeah. The problem is the A.”

        I cannot tell if you are serious about this or if you think it’s clear that you are joking or something. If you are serious, then that’s really all I need to know not to take your moral critiques to heart.

      • I’m not joking, but explaining why it’s not crazy would take some time, and I had forgotten how non-obvious it is. Obviously you have no obligation to take anything I write to heart.

      • To be fair, believing that the point of morality is living a better life for yourself doesn’t exclude being an EA (though it excludes utilitarian justifications for it). To the extent that strangers are important to you, EA has a lot to say about how best to help them. It’s similar to maintaining your own health – it’s not important enough to always forgo eating unhealthy food, but you still care about it to some degree, so it’s useful to know how to take care of it effectively.

      • @Holly Elmore:

        Would you defend the idea that other people don’t have ultimate value? I find the moral importance of others to be a glaring omission when someone lays out the premises of their morality.

        “Ultimate value” is a confusing notion to which we don’t really have access, and a word like “ultimate” invokes Far Mode; whereas I tend to think ethics are best reasoned about using Near Mode. The principle is that people generally make better decisions when they’re thinking concretely; thus insofar as ethics is about making decisions, things closer to oneself serve as a better foundation of ethics, because when people think about their own lives, they think more concretely than when they think about “other people” (particularly when the “other people” are just an abstraction).

        I sympathize with those who argue that the world would improve a lot more if people increased their effectiveness at improving their own lives, than if they increased their explicit endorsement of helping others. Obviously it would be great if people were effective at helping large numbers of others, but my model suggests that in most cases having a prayer of achieving this requires (in addition to all kinds of other advantages) having one’s psychological house in order to a degree much greater than is common.

      • @blacktrance:

        To be fair, believing that the point of morality is living a better life for yourself doesn’t exclude being an EA (though it excludes utilitarian justifications for it)

        This may be logically correct, but EA is a movement, not (just) a theory. Movements are social clusters, and EA is a social cluster centered around people who (among other things) espouse utilitarianism.

        Clusters don’t have sharp boundaries, and to be sure, the edges of EA lead pretty smoothly into the general rationalist diaspora, where EA-resembling people with other philosophies can be perfectly at home.

      • According to the 2015 survey, only 56% of EAs identified as utilitarian, and 34% saw EA as only an opportunity and not an obligation. So while the utilitarian view is central, many in the movement hold a different view.

    • I feel that Sarah has cooperated on a few non-obvious, yet highly important conversational norms here: admitting that you’re not as frank as you’d like is a subtle way of reassuring your conversational partners. Admitting that you exaggerated for effect when you really didn’t is cooperative if others feel attacked, since, on a gut level, it’s somewhat harder to feel attacked by someone if they’ve just admitted to making a mistake.

      The claim in the parent comment to the reply I’m writing here was prefaced with “I find”, and I think this is similarly cooperative. I also think that stating that someone holds an immoral sentiment is a socially weighty thing to do. The author of the parent comment to this reply posted to Facebook a screenshot of their own comment, komponisto’s comment, and Sarah’s reply, with the caption, “For anyone who is still taking Sarah Constantin’s recent attacks on EA seriously”. I feel that posting this caption to Facebook was violent. I don’t like it that either of these two choices disincentivises future helpful critiques of EA, because I think that nuances in how one comes across create incentives for future writers that are much stronger than we’d intuitively realize. On the inside, my disapproval more simply feels like: “I don’t approve, because I want people to be nice”; however, I think that there are strong incentive-based utilitarian reasons to feel that this sort of social violence is bad, as well.

      • Sorry, I didn’t see this reply earlier. I thought it was very reasonable up until calling my posting the screenshot “violent.” I am very against hyperbole about violence on the grounds that it waters down the term.

        I thought it was important for people to know Sarah’s true rejection of EA. And I was, frankly, floored by her response to my comment. I thought for sure she would respond by clarifying that, of course, morality is about the importance of other people.

        I was perhaps wrong to suggest that Sarah shouldn’t be taken “seriously” because she has a different terminal value than I or most EAs have. But I do think that viewing lying as more egregious than not fundamentally valuing the moral importance of others makes zero sense (not that she doesn’t, but I think that’s a huge omission when stating your premises). A lot of people who were alarmed by what Sarah alleged about EA would be interested to know this.

      • Also, I felt pretty unfairly attacked by Sarah quoting me alongside Gleb to discredit me. I think this was far more aggressive than me sharing a screencap on my Facebook (as opposed to my blog, where it wouldn’t get buried on my wall in a day or two) without tagging Sarah (idk if she’s even on Facebook) and without providing context (so that it only made sense to people who were already in on the debate).

        Perhaps it was still wrong to post. I’ll consider that. But I don’t think it was unfair or uncalled for based on Sarah’s actions.

      • > I am very against hyperbole about violence on the grounds that it waters down the term.

        This is quite reasonable of you! While I still agree with the points made in my original comment, I used it to *selectively* emphasize quotes in a way that was harmful to you, and this was wrong of me.

    • Same. To be more precise, I was very critical of many framings and details of the first post, but I agreed that dishonesty in the spirit of (naive) act consequentialism is a problem that exists and is important to address. Maybe the discussion was off to a suboptimal start with the first post, but I’m glad you started it and now helped channel it in a productive direction!
