The Right to Be Wrong

Epistemic Status: pretty confident

Zvi recently came out with a post “You Have the Right to Think”, in response to Robin Hanson’s “Why be Contrarian?”, itself a response to Eliezer Yudkowsky’s new book Inadequate Equilibria.  All of these revolve around the question of when you should think you “know better” or can “do better” than the status quo of human knowledge or accomplishment.  But I think there’s a lot of conflation of different kinds of “should” going on.

Yudkowsky’s book, and Hanson’s post, are mostly about epistemic questions — when are you likely to get the right answer by examining an issue yourself vs. trusting experts?

Inadequate Equilibria starts with the canonical example of when you can’t outperform the experts — betting on the stock market — and explains the logic of efficient markets, then goes on to look at what kinds of situations deviate from efficient markets such that an individual could outperform the collective intelligence of everyone who’s tried so far.  For instance, in certain situations you might well be able to find a DIY treatment for your health problem that works better than anything your doctor would prescribe — but due to the same incentive problems that prevented medical consensus from finding that treatment, you probably wouldn’t be able to get it adopted as the standard of care in the mass market.

Hanson mostly agrees with Yudkowsky’s analysis, except on some points where he thinks the argument for individual judgment being reliable is weaker.

Zvi seems to be talking about a different thing altogether when he talks about the “rights” that people have.

When he says “You have the right to disagree even when others would not, given your facts and reasoning, update their beliefs in your direction” or “You have the right to believe that someone else has superior meta-rationality and all your facts and reasoning, and still disagree with them”, I assume he’s not saying that you’d be more likely to get the right answer to a question in such cases — I think that would be false.  If we posit someone who knows better than me in every relevant way, I’d definitionally be more likely to get the right answer by listening to her than disagreeing with her!

So, what does it mean to have a right to disagree even when it makes you more likely to be wrong?  How can you have a right to be wrong?

I can think of two simple meanings and one subtle meaning.

The Right To Your Opinion

The first sense in which you “have a right to be wrong” is social and psychological.

It’s a basic tenet of free and pluralistic societies that you have the legal right to believe a false thing, and express your belief.  It is not a crime to write a horoscope column.  You can’t be punished by force just for being wrong.  “Bad argument gets counterargument.  Does not get bullet.  Never.  Never ever never for ever.”

And tolerant, pluralist cultures generally don’t believe in doing too much social punishment of people for being wrong, either.  It’s human to make mistakes; it’s normal for people to disagree and not be able to resolve the disagreement; if you shame people as though being wrong is horribly taboo, your community is going to be a more disagreeable and stressful place. (Though some communities are willing to make that tradeoff in exchange for higher standards of common knowledge.)

If you are regularly stressed out and scared that you’ll be punished by other people if they find out you believe a wrong thing, then either you’re overly timid or you’re living in an oppressive environment.  If fear of punishment or ostracism comes up regularly when you’re in the process of forming an opinion, I think that’s too much fear for critical thinking to work properly at all; and the mantra “I have the right to my opinion” is a good counterweight to that.

Discovery Requires Risking Mistakes

The second sense in which you have a “right to be wrong” is prudential.

You could ensure that you’d never be wrong by never venturing an opinion on anything.  But going all the way to this extreme is, of course, absurd — you’d never be able to make a decision in your life!  The most effective way to accomplish any goal always involves some decision-making under uncertainty.

And attempting more difficult goals involves more risk of failure. Scientists make a lot of hypotheses that get falsified; entrepreneurs and engineers try a lot of ideas that don’t work; artists make a lot of sketches that wind up in the wastebasket.  Comfort with repeated (hopefully low-stakes) failure is essential for succeeding at original work.

Even from a purely epistemic perspective, if you want to have the most accurate possible model of some part of the world, the best strategy is going to involve probabilistically believing some wrong things; you get information by testing guesses and seeing where you’re mistaken.  Minimizing error requires finding out where your errors are.
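
To make that last point concrete, here is a minimal toy sketch (mine, not the post’s; the coin, the numbers, and both “agents” are invented for illustration): an agent that keeps venturing a concrete guess and updating on what it observes drives its error down over time, while an agent that refuses to commit to any opinion stays exactly as wrong as it started.

```python
# Toy illustration (hypothetical, not from the post): venturing guesses and
# updating on what you observe vs. refusing to commit to any opinion at all.
import random

random.seed(0)
true_bias = 0.72          # the hidden fact both agents are trying to learn
heads, tails = 1, 1       # Beta(1, 1) prior: start out maximally uncertain

for flip in range(1, 201):
    outcome = random.random() < true_bias

    # The "venturing" agent states a concrete best guess every round...
    guess = heads / (heads + tails)
    # ...and updates on each observed flip, so it keeps finding out
    # exactly where (and by how much) its guesses were off.
    if outcome:
        heads += 1
    else:
        tails += 1

    # The "no opinion" agent never commits, so it never learns anything.
    abstainer_guess = 0.5

    if flip in (1, 10, 50, 200):
        print(f"flip {flip:3d}: venturing error = {abs(guess - true_bias):.3f}, "
              f"'no opinion' error = {abs(abstainer_guess - true_bias):.3f}")
```

The venturing agent’s guesses are never exactly right, but its errors are precisely what let them shrink; the abstainer is never caught out, and never improves.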

Note, though, that from this prudential perspective, it’s not a good idea to have habits or strategies that systematically bias you towards being wrong.  In the “right to your opinion” sense, you have a “right” to epistemic vices, in that nobody should be attacking you for them; but in this goal-oriented sense, they’re not going to help you succeed.

Space Mom Accepts All Her Children

The third sense in which you have a “right to be wrong” is a little weirder, so please bear with me.

There’s a mental motion you can do, when you’re trying to get the right answer or do the right thing, where you’re trying very hard to stay on the straight path, and any time you slip off, you violently jerk yourself back on track.  You have an aversion to wrongness.

I have an intuition that this is…inefficient, or mistaken, somehow.

Instead, there’s a mental motion where you have peripheral vision, and you see all the branching paths, and consider where they might go — all of them are possible, all of them are in some cosmic sense “okay” — and you perform some kind of optimization procedure among the paths and then go along the right path smoothly and without any jerks.

Or, consider the space of all mental objects, all possible thoughts or propositions or emotions or phenomena or concepts.  Some of these are true statements; some of them are false statements. Most of them are unknown, or not truth-apt in the first place.  Now, you don’t really want to label the false ones as true — that would just be error.  But all of them, true or false or neither or unknown, are here, hanging like constellations in this hypothetical heaven. You can look at them, consider them, call some of them pretty.  You don’t need to have an aversion response to them. They are “valid”, as the kids say; even if they don’t have the label “true” on them, they’re still here in possibility-space and that’s “okay”.

In a lot of traditions, the physical metaphor for “good” is high and bright.  Like the sun, or a mountaintop. The Biblical God is described as high and bright, as are the Greek Olympians or the Norse gods; in Indian and Chinese traditions a lot of divine or idealized entities are represented as high and bright; in ordinary English we talk about an idealistic person as “high-minded” and everybody knows that the “light side of the Force” is the side of the good guys.

To me, the “high and bright” ideal feels connected to the pattern of seeking a goal, seeking truth, trying not to err.

But there are also traditions in which “high and bright” needs to be balanced with another principle whose physical metaphor is dark and vast.  Like the void of space, or the deeps of the sea.  Like yin as a complement to yang, or prakrti as a complement to purusa, or emptiness as a complement to form.  

The “high and bright” stuff is value — knowledge, happiness, righteousness, the things that people seek and benefit from.  The “dark and vast” stuff is possibility.  Room to breathe. Freedom. Potential. Mystery. Space.

You can feel trapped by only seeking value — you can feel like you lack the “space to be wrong”.  But it’s not really that you want to be wrong, or that you want the opposite of value; what you want is this sense of “enough room to move”.

It’s something like Keats’ “negative capability”:

…at once it struck me, what quality went to form a Man of Achievement especially in Literature & which Shakespeare possessed so enormously—I mean Negative Capability, that is when man is capable of being in uncertainties, Mysteries, doubts, without any irritable reaching after fact & reason—Coleridge, for instance, would let go by a fine isolated verisimilitude caught from the Penetralium of mystery, from being incapable of remaining content with half knowledge.

or something like the “mother” Ahab perceives behind God:

But thou art but my fiery father; my sweet mother, I know not. Oh, cruel! what hast thou done with her? There lies my puzzle; but thine is greater. Thou knowest not how came ye, hence callest thyself unbegotten; certainly knowest not thy beginning, hence callest thyself unbegun. I know that of me, which thou knowest not of thyself, oh, thou omnipotent. There is some unsuffusing thing beyond thee, thou clear spirit, to whom all thy eternity is but time, all thy creativeness mechanical. Through thee, thy flaming self, my scorched eyes do dimly see it. Oh, thou foundling fire, thou hermit immemorial, thou too hast thy incommunicable riddle, thy unparticipated grief.

The womb of nature, the dark vastness of possibility — Space Mom, so to speak — is not the opposite of reason and righteousness so much as the dual to these things, the space in which they operate.  The opposite of being right is being wrong, and nobody really wants that per se.  The complement to being right is something like “letting possibilities arise” or “being curious.” Generation, as opposed to selection.  Opening up, as opposed to narrowing down.

The third sense in which you have the “right to be wrong” is a lived experience, a way of thinking, something whose slogan would be something like “Possibility Is.”

If you have a problem with “gripping too tight” on goals or getting the right answer, if it’s starting to get oppressive and rigid, if you can’t be creative or even perceive that much of the world around you, you need Space Mom.  The impulse to assert “I have the right to disagree even with people who know better than me” seems like it might be a sign that you’re suffocating from a lack of Space Mom.  You need openness and optionality and the awareness that you could do anything within your powers, even the imprudent or taboo things.  You need to be free as well as to be right.

 

26 thoughts on “The Right to Be Wrong”

  1. Excellent. Thank you for writing this!

    I was definitely motivated by all three of these things, and I explain more in the comments to my post (I plan to try and pull it into a full post, but wanted to make sure it at least got out there, and took advantage of the looser standards of comments to do that).

    I’m worried about people being punished for being wrong, but even more than that, about people being punished for what others see as wrong even when they’re right, or for being “wrong” in the sense of not having the right and proper formal, outside-view-approved justifications for their beliefs, regardless of their truth-value or of whether they should in fact believe that given their evidence. I felt a strong need to push back on that.

    I’m worried that people will consider it, and increasingly do consider it, socially unacceptable to think for themselves and draw their own conclusions about what might or might not work or what is worth doing or trying, especially without formal outside-view-approved justifications, and that they will internalize that unacceptability. I felt a strong need to push back on that, too.

    I’m worried also about the third thing, especially, that people will shut themselves or others down with such arguments, that ideas and data and reasoning will be destroyed rather than shared, that creativity will wither, that even more force will be applied than it already is to trying to ponder exactly what everyone else is pondering and to blindly do the things other people do, lest one be struck down for their immodesty or what not. That people will feel they *do not have the right to think*. Even I can feel people saying *to me*, no, you have no right to think without a license, and you certainly don’t have the right to disagree without one that specifically qualifies this particular disagreement, and the case for it needs to stand up in virtual licensing court.

    If I was feeling that way, I could only imagine how under such attack others must be feeling. They needed to hear the Applause Lights and Rousing Speech of the other side to be reminded they not only didn’t need no one’s damned permission, it was the right thing to do!

    So the need to push back, right away and fast and hard and without qualifications and formal arguments, was strong, to the point where I felt I literally would be *not OK* emotionally until I did so, and that this was something worth spending points on.

    If I had to raise objections to this post, I would raise two. The basic one is that I instinctively worry about the name “space mom” as in people joking about someone else running back to their space mom, which is purely a (hopefully quite minor) naming issue. The other is that I do not think that such immodest action has to make you more likely to be wrong! There are cases where it makes you more likely to be correct, or to have a better probability estimate, to use each and every one of my rights/duties, and I wrote in detail about some such examples in the comments when I was asked about having someone with superior meta-rationality.

    I intentionally pushed right to the edge with the statements. In particular, there were two that seem like “this is rarely a good idea, and most of the time that it is actually done, it’s still rarely a good idea.” One was the one challenged here and in my comments, that someone else has superior meta-rationality; the other was the case of the unrealistically good and trustworthy expert consensus. For the meta-rationality one, it was to point out the hidden assumptions – the use of sufficient compute, the sharing of tons of implied knowledge and internal perspectives and intuitions and models, and the full trust in both directions – that make the rule that you need to fully go with the other person’s assessment actually hold. In the expert case, it was to point out that even if the experts are (almost) fully deserving of your trust *in general* and you know that, you can still devote a lot more compute to a given question and/or have data that they have not looked at and have no reason to know they should look at, because they have no reason to trust *you* that it’s important, and thus you can and should occasionally still think they’re wrong, and more than that, that it shouldn’t feel like you need to give an Official Outside-View-Approved Reason in either case. Especially, I don’t want to have to say “I don’t trust you enough” or “I don’t think you trust me enough” or “you haven’t thought about this enough” or “you don’t have this particular fact” in order to feel entitled not to jump 100% to the estimates of the person with the best meta-rationality.

    • a.) Space Mom is a flippant way of putting it, because most of the inspiration is from Indian traditions but I don’t want to use a term of art or a religious concept incorrectly.
      b.) Obviously being “immodest” can sometimes make you more correct; I wanted to talk about the hard case where it doesn’t.

    • “the case for it needs to stand up in virtual licensing court.”

      That point is particularly problematic because the actual jurisprudence of this virtual court leaves much to be desired, taking the form of sneers rather than arguments, bets not grounded in other bets, etc.

  2. Overall this seems important and mostly right. Here’s what I see your three categories as:
    What should social policy be towards others whom we believe to be mistaken?
    What is the correct allocation of an ideal reasoner’s epistemic budget, between hypothesis-pruning and hypothesis-generation?
    What is the correct balance of the two psychological forces in actual humans that you call “High and Bright” and “Space Mom” respectively?

    But I think you’re missing some things, and especially that the last question conflates a few things that really need to be thought of separately. I’ll put them in separate comments to keep length manageable.

    • What should social policy be towards others whom we believe to be mistaken?

      We have to take into account our indexical uncertainty about which of us is wrong, but more than that, it can be advantageous to be able to interact peacefully even with people who are persistently wrong about some things, so long as they observe basic protocols of nonviolence.

      However, this trades off against two things, which it seems like you and Eliezer are both missing.

      First, not all wrongness is innocent error. Sometimes people are lying, consciously or unconsciously. This is violence directed at the listener to control their behavior. Even advertising that makes no false claims is often in this category, when it raises the salience of something for basically adversarial reasons. (Hard sells and infomercials are less like this, branding is more like this.) If it never ever never for ever gets bullet, then eventually a bunch of thugs barge into your nice unwalled garden and ruin it.

      Second, some types of dissent undermine the political order that enables us to interact with one another peacefully. Eliezer (in the piece you link that says bad argument never gets bullet) quotes Deuteronomy to the effect that if someone tries to encourage you to worship a foreign god, you should not listen to them, and should instead publicly stone them. He then conflates this with a general decree to punish critics. This is a totally implausible reading to anyone who’s actually bothered to pay attention to the Bible; ancient Israelite prophets frequently claimed that Yahweh’s instructions had been wrongly construed, and that the dominant power structure (including both kings and the priesthood) was in error. They seem to have been a sufficiently protected class that kings and priests would sometimes yell at them, but rarely physically injure them.

      The correct contemporary analogue to advocating the worship of a foreign god, is advocating cooperation with a foreign government. The modern analogue to stoning the person introducing the worship of foreign gods, would be imposing legal sanctions against Facebook for colluding with Russian intelligence services to manipulate American election results. If you can’t tell the difference between that and punishing criticism, then you don’t know how to have a sane walled garden.

      • Hm, I think I actually still disagree with your exceptions there, and think it’s actively dangerous to, for example, impose legal sanctions against Facebook for the things that actually happened in the 2016 election.

        Fraud actually *is* an exception where false speech should be punished, but there are good reasons why in the US you can be sued for fraud but not for “lying” and certainly not for *subconscious* lying. Those categories are far too manipulable.

      • in the US you can be sued for fraud but not for “lying” and certainly not for *subconscious* lying

        In some cases you can face civil or criminal penalties for failure to disclose or act properly on some sorts of information, based in practice on whether you reasonably should have been able to figure it out, rather than your self-report. This implies that the courts don’t actually recognize a right to self-deceive. Obviously not all lies or self-deceptions are punished, but that’s not feasible anyhow. There are lots of laws implying a strong public-policy interest in preventing deception in some domains.

      • I saw you ask the question about his interpretation of that line on Less Wrong, as well. My response would be that he is saying that there exists a class of thing that says you have an obligation to believe in it, follow its dictates, and reinforce belief in it and its dictates, at least among some group that has accepted the thing or had the thing imposed upon it. Many religions fall into this category, and some other things do as well, e.g. communism. They say: if someone who was previously your brother tries to encourage you to worship a foreign God, kill him. And that this technique is no-good-very-bad, and the things that fall into this category are no-good-very-bad for doing this. Argument gets counter-argument.

        One could also make the case that sufficiently strong social sanction against such folks is effectively the same thing; e.g. that if SJWs were to take anyone who advocated against one of their positions as someone whom everyone is to be sanctioned socially for not socially punishing in the most extreme fashion, that the fact that no one is literally being stoned in the village square (at least, not yet) is not as meaningful a distinction as one might think.

        I do not think the point was “the current USA government is doing this thing” because it clearly isn’t and he doesn’t think it is (and would very much like to keep it that way).

        I’ve stopped thinking there’s a ‘safe’ way to approach such questions. Yes, drawing a hard line at physical violence is a good idea and we should keep doing that, but you can’t say that no non-physical attacks can be met with legal sanction (e.g. physical violence), or sufficiently strong social sanction that we’re getting close, because solve for the equilibrium. It’s just a hard problem, literally everything in meme space is trying to kill us if we let it get out of hand, and we need to muddle through as best we can.

        In the particular case of Facebook’s actions, I think going after them for this particular set of violations is silly if you think their typical use cases are acceptable, but that’s a details-matter answer where I think advertising sales are what they are. Either you think ads are an OK way to fund the infrastructure of everyone’s social life, or you don’t, and you can guess which category I fall under on that…

      • It’s worth mentioning that most historical cases where the US has violated free speech norms (Alien & Sedition Acts, the 1920s Red Scare, McCarthyism, etc) have been justified by the argument that the “dissent” was actually collaboration with a foreign & hostile power. The movement against the Vietnam War involved *far* more Soviet manipulation than the 2016 election involved Russian manipulation. If colluding indirectly with enemy governments against the US were a crime that superseded ordinary free-speech rights then basically all my schoolteachers would have been guilty. I don’t think we want to live in that world.

    • What is the correct allocation of an ideal reasoner’s epistemic budget, between hypothesis-pruning and hypothesis-generation?

      Please let’s distinguish accountability-minimization from error-minimization. Just because you haven’t made a statement on the record about a thing doesn’t mean you don’t have implied beliefs about it – even if only an uninformed maximum entropy distribution. And an uninformed maximum entropy distribution makes LOTS of errors, relative to a model that actually fits the data. Having specific hypotheses is a wrongness-reduction mechanism, even as it increases the rate at which someone can call you out on being wrong. It creates opportunities for faster error-correction.

      To confuse these is to conflate healing and damage. You’re already wrong about the things where you have “no opinion,” this is unavoidable, it’s just a matter of how to efficiently become less wrong.

    • What is the correct balance of the two psychological forces in actual humans that you call “High and Bright” and “Space Mom” respectively?

      This seems to be mixing together a few dimensions best thought of separately:

      Reward-seeking vs pain-avoiding
      Legibility vs illegibility
      Authority/dominance vs autonomy
      Exploit vs explore

      For instance, someone addicted to a simple computer game like Candy Crush is hooked on some sort of reward signal, doing a simple activity rather than exploring the vast dark hypothesis space. This feels like it’s mostly well-described by your “High and Bright” category, but it is very much not an “aversion to wrongness.”

      • I would say “reward-seeking vs. pain-avoiding” doesn’t apply at all. Legibility and authority/dominance are often causally related (what the authority can’t understand, it can’t forbid) and exploration always starts illegibly (you really can’t explain new things unless they’ve had a little time to develop organically.)

      • That’s engineering. Sorry, physically exploring a new location can totally be legible. The point in the story of space exploration where things were not actually explainable would have to be a lot earlier, like Goddard before he had a rocket prototype or something.

      • On reward-seeking vs pain-avoiding, here’s the bit that made it seem relevant:

        There’s a mental motion you can do, when you’re trying to get the right answer or do the right thing, where you’re trying very hard to stay on the straight path, and any time you slip off, you violently jerk yourself back on track. You have an aversion to wrongness.

        On the other hand you also seem to tie it to “seeking value.”

      • I don’t think the physical/nonphysical distinction explains that very well – we learned a bunch from the space program. I’m trying to point to the thing where sometimes you need a model with strong predictions in order to locate interesting-to-explore spaces. Most possible assemblies of metal and rocket fuel and humans fail pretty badly at teaching us anything about the moon.

  3. I think you’re also wrong on how the Biblical god is described:

    And the Lord spake unto you out of the midst of the fire: ye heard the voice of the words, but saw no similitude; only ye heard a voice. – Deuteronomy 4:12

    And he said, Go forth, and stand upon the mount before the Lord. And, behold, the Lord passed by, and a great and strong wind rent the mountains, and brake in pieces the rocks before the Lord; but the Lord was not in the wind: and after the wind an earthquake; but the Lord was not in the earthquake:
    And after the earthquake a fire; but the Lord was not in the fire: and after the fire a still small voice. – 1 Kings 19:11-12

  4. A somewhat off-topic note on conversational norms: I did not find the large number of mid-sized posts as responses to be easier to read or follow than one large post with an itemized list would have been.
    So that’s some information, but with a sample size of one, not that much.

  5. Have you read Camille Paglia’s Sexual Personae? She talks about the big fight between Sky Father and Earth Mother. I like this new take on the division! You’re identifying the Sky-Father with the bad elements of White, Earth (now Space) Mom with the good elements of Red. You are on to something there! A big part of the gender dynamic is patriarchy vs freedom.

    You said on the MTG color wheel discussion that Green was the most alien color to you. I think your problem is a blanking-out of Green due to some horror-disgust-distaste reaction. Hence Space Mom instead of Earth Mom, and a refusal to acknowledge the extent to which the division is Blue-vs-Green. That’s the side of the division that Paglia found most prominent in her exploration of representations of Earth Mom vs Sky Dad. I think that Green rejoinders to Blue are often powerful, and the sacrifice of Green in favor of Blue is an unwitting surrender by women as a whole. Too willing to see the grass as greener on the other side, they give up old pain, discomfort, and mess, but with them the old power. The throne room is judged a prison by the princess who longs for a vagrant’s freedom.

    Well. Blue and Red have triumphed, White is a timid retreating mess holding on to a dwindling handful of side-issues, and Green has been swept away by the pyroclastic flows of rapid change. Expect that to change – expect tradition and authority and connection to nature to return, in some form or other. The question is not whether but how. Close your eyes and heart to it, and it becomes less likely it will take a form compatible with your interests and values.

  6. Incentives in the USA being what they are…I am in a weird dilemma. I have had multiple concussions and occupational blast exposure, and showed symptoms consistent with early-stage CTE.

    I implemented a self-designed treatment protocol centered around a compound being sold under DSHEA for an unrelated purpose, and at present my symptoms are completely resolved.

    If the effectiveness of the compound should become widely known, the price will rise as the compound is somewhat difficult to produce, it will be classed a drug, derivatives will be developed and patented…and given the difficulty of diagnosing CTE, I assess that my own already tenuous access to the compound will be placed at risk.

    So, plenty of people suffer from CTE, but I do not, and doing anything to alter this sad state of affairs puts me at risk, and lines someone else’s pockets.

    I enjoyed Eliezer’s book.
