The Right to Be Wrong

Epistemic Status: pretty confident

Zvi recently came out with a post “You Have the Right to Think”, in response to Robin Hanson’s “Why be Contrarian?”, itself a response to Eliezer Yudkowsky’s new book Inadequate Equilibria.  All of these revolve around the question of when you should think you “know better” or can “do better” than the status quo of human knowledge or accomplishment.  But I think there’s a lot of conflation of different kinds of “should” going on.

Yudkowsky’s book, and Hanson’s post, are mostly about epistemic questions — when are you likely to get the right answer by examining an issue yourself vs. trusting experts?

Inadequate Equilibria starts with the canonical example of when you can’t outperform the experts — betting on the stock market — and explains efficient markets, and then goes on to look into what kinds of situations deviate from efficient markets such that an individual could outperform the collective intelligence of everyone who’s tried so far.  For instance, you might well be able to find a DIY treatment for your health problem that works better than anything your doctor would prescribe you, in certain situations — but due to the same incentive problems that prevented medical consensus from finding that treatment, you probably wouldn’t be able to get it to become the standard of care in the mass market.

Hanson mostly agrees with Yudkowsky’s analysis, except on some points where he thinks the argument for individual judgment being reliable is weaker.

Zvi seems to be talking about a different thing altogether when he talks about the “rights” that people have.

When he says “You have the right to disagree even when others would not, given your facts and reasoning, update their beliefs in your direction” or “You have the right to believe that someone else has superior meta-rationality and all your facts and reasoning, and still disagree with them”, I assume he’s not saying that you’d be more likely to get the right answer to a question in such cases — I think that would be false.  If we posit someone who knows better than me in every relevant way, I’d definitionally be more likely to get the right answer by listening to her than disagreeing with her!

So, what does it mean to have a right to disagree even when it makes you more likely to be wrong?  How can you have a right to be wrong?

I can think of two simple meanings and one subtle meaning.

The Right To Your Opinion

The first sense in which you “have a right to be wrong” is social and psychological.

It’s a basic tenet of free and pluralistic societies that you have the legal right to believe a false thing, and express your belief.  It is not a crime to write a horoscope column.  You can’t be punished by force just for being wrong.  “Bad argument gets counterargument.  Does not get bullet.  Never.  Never ever never for ever.”

And tolerant, pluralist cultures generally don’t believe in doing too much social punishment of people for being wrong, either.  It’s human to make mistakes; it’s normal for people to disagree and not be able to resolve the disagreement; if you shame people as though being wrong is horribly taboo, your community is going to be a more disagreeable and stressful place. (Though some communities are willing to make that tradeoff in exchange for higher standards of common knowledge.)

If you are regularly stressed out and scared that you’ll be punished by other people if they find out you believe a wrong thing, then either you’re overly timid or you’re living in an oppressive environment.  If fear of punishment or ostracism comes up regularly when you’re in the process of forming an opinion, I think that’s too much fear for critical thinking to work properly at all; and the mantra “I have the right to my opinion” is a good counterweight to that.

Discovery Requires Risking Mistakes

The second sense in which you have a “right to be wrong” is prudential.

You could ensure that you’d never be wrong by never venturing an opinion on anything.  But going all the way to this extreme is, of course, absurd — you’d never be able to make a decision in your life!  The most effective way to accomplish any goal always involves some decision-making under uncertainty.

And attempting more difficult goals involves more risk of failure. Scientists make a lot of hypotheses that get falsified; entrepreneurs and engineers try a lot of ideas that don’t work; artists make a lot of sketches that wind up in the wastebasket.  Comfort with repeated (hopefully low-stakes) failure is essential for succeeding at original work.

Even from a purely epistemic perspective, if you want to have the most accurate possible model of some part of the world, the best strategy is going to involve probabilistically believing some wrong things; you get information by testing guesses and seeing where you’re mistaken.  Minimizing error requires finding out where your errors are.

Note, though, that from this prudential perspective, it’s not a good idea to have habits or strategies that systematically bias you towards being wrong.  In the “right to your opinion” sense, you have a “right” to epistemic vices, in that nobody should be attacking you for them; but in this goal-oriented sense, they’re not going to help you succeed.

Space Mom Accepts All Her Children

The third sense in which you have a “right to be wrong” is a little weirder, so please bear with me.

There’s a mental motion you can do, when you’re trying to get the right answer or do the right thing, where you’re trying very hard to stay on the straight path, and any time you slip off, you violently jerk yourself back on track.  You have an aversion to wrongness.

I have an intuition that this is…inefficient, or mistaken, somehow.

Instead, there’s a mental motion where you have peripheral vision, and you see all the branching paths, and consider where they might go — all of them are possible, all of them are in some cosmic sense “okay” — and you perform some kind of optimization procedure among the paths and then go along the right path smoothly and without any jerks.

Or, consider the space of all mental objects, all possible thoughts or propositions or emotions or phenomena or concepts.  Some of these are true statements; some of them are false statements. Most of them are unknown, or not truth-apt in the first place.  Now, you don’t really want to label the false ones as true — that would just be error.  But all of them, true or false or neither or unknown, are here, hanging like constellations in this hypothetical heaven. You can look at them, consider them, call some of them pretty.  You don’t need to have an aversion response to them. They are “valid”, as the kids say; even if they don’t have the label “true” on them, they’re still here in possibility-space and that’s “okay”.

In a lot of traditions, the physical metaphor for “good” is high and bright.  Like the sun, or a mountaintop. The Biblical God is described as high and bright, as are the Greek Olympians or the Norse gods; in Indian and Chinese traditions a lot of divine or idealized entities are represented as high and bright; in ordinary English we talk about an idealistic person as “high-minded” and everybody knows that the “light side of the Force” is the side of the good guys.

To me, the “high and bright” ideal feels connected to the pattern of seeking a goal, seeking truth, trying not to err.

But there are also traditions in which “high and bright” needs to be balanced with another principle whose physical metaphor is dark and vast.  Like the void of space, or the deeps of the sea.  Like yin as a complement to yang, or prakrti as a complement to purusa, or emptiness as a complement to form.  

The “high and bright” stuff is value — knowledge, happiness, righteousness, the things that people seek and benefit from.  The “dark and vast” stuff is possibility.  Room to breathe. Freedom. Potential. Mystery. Space.

You can feel trapped by only seeking value — you can feel like you lack the “space to be wrong”.  But it’s not really that you want to be wrong, or that you want the opposite of value; what you want is this sense of “enough room to move”.

It’s something like Keats’ “negative capability”:

…at once it struck me, what quality went to form a Man of Achievement especially in Literature & which Shakespeare possessed so enormously—I mean Negative Capability, that is when man is capable of being in uncertainties, Mysteries, doubts, without any irritable reaching after fact & reason—Coleridge, for instance, would let go by a fine isolated verisimilitude caught from the Penetralium of mystery, from being incapable of remaining content with half knowledge.

or something like the “mother” Ahab perceives behind God:

But thou art but my fiery father; my sweet mother, I know not. Oh, cruel! what hast thou done with her? There lies my puzzle; but thine is greater. Thou knowest not how came ye, hence callest thyself unbegotten; certainly knowest not thy beginning, hence callest thyself unbegun. I know that of me, which thou knowest not of thyself, oh, thou omnipotent. There is some unsuffusing thing beyond thee, thou clear spirit, to whom all thy eternity is but time, all thy creativeness mechanical. Through thee, thy flaming self, my scorched eyes do dimly see it. Oh, thou foundling fire, thou hermit immemorial, thou too hast thy incommunicable riddle, thy unparticipated grief.

The womb of nature, the dark vastness of possibility — Space Mom, so to speak — is not the opposite of reason and righteousness so much as the dual to these things, the space in which they operate.  The opposite of being right is being wrong, and nobody really wants that per se.  The complement to being right is something like “letting possibilities arise” or “being curious.” Generation, as opposed to selection.  Opening up, as opposed to narrowing down.

The third sense in which you have the “right to be wrong” is a lived experience, a way of thinking, something whose slogan would be something like “Possibility Is.”

If you have a problem with “gripping too tight” on goals or getting the right answer, if it’s starting to get oppressive and rigid, if you can’t be creative or even perceive that much of the world around you, you need Space Mom.  The impulse to assert “I have the right to disagree even with people who know better than me” seems like it might be a sign that you’re suffocating from a lack of Space Mom.  You need openness and optionality and the awareness that you could do anything within your powers, even the imprudent or taboo things.  You need to be free as well as to be right.

 


Psycho-Conservatism: What it Is, When to Doubt It

Epistemic status: I’m being emphatic for clarity here, not because I’m super confident.

I’m noticing that a large swath of the right-of-center infovore world has come around to a kind of consensus, and nobody has named it yet.

Basically, I’m pointing at the beliefs that Jonathan Haidt (The Righteous Mind, The Happiness Hypothesis), Jordan Peterson (Maps of Meaning), and Geoffrey Miller (The Mating Mind) have in common.

All of a sudden, it seemed like everybody “centrist” or “conservative” I knew was quoting Haidt or linking videos by Peterson.

(In absolute terms Peterson isn’t that famous — his videos get hundreds of thousands of YouTube views, about half as popular as the most popular Hearthstone streamer. There’s a whole universe of people who aren’t in the culture-and-politics fandom at all.  But among those who are, Peterson seems highly influential.)

All three of these men are psychology professors, which is why I’m calling the intersection of their views psycho-conservatism. Haidt is a social psychologist, Peterson is a Jungian, and Miller is an evolutionary psychologist.

Psycho-conservatism is mostly about human nature.  It says that humans have a given, observable, evolved nature; that this nature isn’t always pretty (we are frequently irrational, deceptive, and self-centered); and that human nature’s requirements place limits on what we can do with culture or society.  Often, traditional wisdom is valuable because it is a good fit for human nature. Often, utopian modern changes in society fail because they don’t fit human nature well.

This is, of course, a small-c conservative view: it looks to the past for inspiration, and it’s skeptical of radical changes. It differs from other types of conservatism in that it gets most of its evidence from psychology — whether using empirical experiments (as Haidt does) or evolutionary arguments (as Miller does).  Psycho-conservatives have a great deal of respect for religion, but they don’t speak on religious grounds themselves; they’re more likely to argue that religion is adaptive or socially beneficial or that we’re “wired for it.”

Psycho-conservatism is also methodologically skeptical.  In the wake of the psychology replication crisis, it’s reasonable to become very, very doubtful of the social sciences in general.  What do we really know about what makes people tick? Not much.  In such an environment, it makes sense to drastically raise your standards for evidence.  Look for the most replicated and hard-to-fudge empirical findings.  (This may lead you to the literature on IQ and behavioral genetics, and heritable, stable phenomena like the Big Five personality dimensions.) Look for commonalities between cultures across really long time periods.  Look for evidence about the ancestral environment, which constituted most of humans’ time on Earth. Try to find ways to sidestep the bias of our present day and location.

This is the obvious thing to do, as a first pass, in an environment of untrustworthy information.

It’s what I do when I try to learn about biology — err on the side of being pickier, look for overwhelming and hard-to-fake evidence, look for ideas supported by multiple independent lines of evidence (especially evolutionary evidence and evidence across species).

If you do this with psychology, you end up with an attempt to get a sort of core summary of what we can be most confident about in human nature.

Psycho-conservatives also wind up sharing a set of distinctive political and cultural concerns:

  • Concern that modern culture doesn’t meet most people’s psychological needs.
  • A fair amount of sympathy for values like authority, tradition, and loyalty.
  • Belief that science on IQ and human evolution is being suppressed in favor of less accurate egalitarian theories.
  • Belief that illiberal left-wing activism on college campuses is an important social problem.
  • Disagreement with most contemporary feminism, LGBT activism, and anti-racist activism.
  • A general attitude that it’s better to be sunny, successful, and persuasive than aggrieved; disapproval of the “culture of victimhood.”
  • Basically no public affiliation with the current Republican Party.
  • Moderation or silence on “traditional” political controversies like abortion, gov’t spending, war, etc.
  • Interest in building more national or cultural unity (as opposed to polarization).

Where are the weaknesses in psycho-conservatism?

I just said above that a skeptical methodology regarding “human nature” makes a lot of sense, and is kind of the obvious epistemic stance. But I’m not really a psycho-conservative myself.  So where might this general outlook go wrong?

1. When we actually do know what we’re talking about.

If you used evolved tacit knowledge, the verdict of history, and only the strongest empirical evidence, and were skeptical of everything else, you’d correctly conclude that in general, things shaped like airplanes don’t fly.  The reason airplanes do fly is that if you shape their wings just right, you hit a tiny part of the parameter space where lift can outbalance the force of gravity.  “Things roughly like airplanes” don’t fly, as a rule; it’s airplanes in particular that fly.

Highly skeptical, conservative methodology gives you rough, general rules that you can be pretty confident won’t be totally wrong. It doesn’t rule out exceptions that your first-pass methods would miss.  For instance, in the case of human nature:

  • It could turn out that one can engineer better-than-historically-normal outcomes even though, as a general rule, most things in the reference class don’t work
    • Education and parenting don’t empirically matter much for life outcomes, but there may be exceptional teaching or parenting methods — just don’t expect them to be easy to implement en masse
    • Avoiding lead exposure massively increases IQ; there may be other biological interventions out there that allow us to do better than “default human nature”
  • Some minority of people are going to have “human natures” very different from your rough, overall heuristics; statistical phenomena don’t always apply to individuals
  • Modern conditions, which are really anomalous, can result in behaviors being adaptive that really weren’t in ancestral or historical conditions, so “deep history” or evolutionary arguments for what humans should do are less applicable today

Basically, the heuristics you get out of methodological conservatism make sense as a first pass, but while they’re robust, they’re very fuzzy.  In a particular situation where you know the details, it may make sense to say “no thanks, I’ve checked the ancestral wisdom and the statistical trends and they don’t actually make sense here.”

2. When psycho-conservatives don’t actually get the facts right.

Sometimes, your summary of “cultural universals” isn’t really universal.  Sometimes, your experimental studies are on shaky ground. (Haidt’s Moral Foundations don’t emerge organically from factor analysis the way Big Five personality traits do.)  Even though the overall strategy of being skeptical about human nature makes sense, the execution can fail in various places.

Conservatives tend to think that patriarchy is (apart from very recently) a human universal, but it really isn’t; hunter-gatherer and hoe cultures have done without it for most of humanity’s existence.

Lots of people assume that government is a human universal, but it isn’t; nation-states are historically quite modern, and even monarchy is far from universal. (Germanic tribes as well as hunter-gatherers and pastoralists around the world were governed by councils and war-leaders rather than kings; Medieval Iceland had a fairly successful anarchy; the Bible is a story of pastoralist tribes transitioning to monarchy, and the results are not represented sympathetically!)

It’s hard to actually compensate for parochial bias and look for a genuinely universal property of human nature, and psycho-conservatives deserve critique when they fail at that mission.

3. When a principle is at stake.

Knowledge of human nature can tell you the likely consequences of what you’re doing, and that should inform your strategy.

But sometimes, human nature is terrible.

All the evidence in the world that people usually do something, or that we evolved to do something, doesn’t mean we should do it.

The naturalistic fallacy isn’t exactly a fallacy; natural behaviors are far more likely to be feasible and sustainable than arbitrary hypothetical behaviors, and so if you’re trying to set ideal norms you don’t want them to be totally out of touch with human nature.  But I tend to think that human values emerge and expand from evolutionary pressures rather than being bound wholly to them; we are godshatter.

Sometimes, you gotta say, “I don’t care about the balance of nature and history, this is wrong, what we should do is something else.”  And the psycho-conservative will say “You know you’re probably gonna fail, right?”

At which point you smile, and say, “Probably.”