Do Rational People Exist?

Does it make sense to talk about “rational people”?

That is, is there a sub-population of individuals who consistently exhibit less cognitive bias and better judgment under uncertainty than average people?  Do these people have the dispositions we’d intuitively associate with more thoughtful habits of mind?  (Are they more flexible and deliberative, less dogmatic and impulsive?)

And, if so, what are the characteristics associated with rationality?  Are rational people more intelligent? Do they have distinctive demographics, educational backgrounds, or neurological features?

This is my attempt to find out what the scientific literature has to say about the question.  (Note: I’m going to borrow heavily from Keith Stanovich, as he’s the leading researcher in individual differences in rationality. My positions are very close, if not identical, to his, though I answer some questions that he doesn’t cover.)

A minority of people avoid cognitive biases

Most of the standard tests for cognitive bias find that most study participants “fail” (display bias) but a minority “pass” (give the rational or correct answer).

The Wason Selection Task is a standard measure of confirmation bias.  Fewer than 10% of subjects got it right in Wason’s original experiment.[1]
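To make the task concrete, here is a minimal Python sketch of its logic (using the standard vowel/even-number version; the specific card faces are illustrative):

```python
# Wason Selection Task, standard version.
# Rule to test: "If a card has a vowel on one side,
# it has an even number on the other."
# Four cards show: E, K, 4, 7. Which must you flip?

def could_falsify(face: str) -> bool:
    """A card can falsify 'vowel -> even' only if it might hide a
    vowel/odd pairing: it shows a vowel (the hidden number might be
    odd) or an odd number (the hidden letter might be a vowel)."""
    if face.isalpha():
        return face.upper() in "AEIOU"  # vowel: must flip
    return int(face) % 2 == 1           # odd number: must flip

for face in ["E", "K", "4", "7"]:
    print(face, "flip" if could_falsify(face) else "leave")

# Correct answer: flip E and 7. Most subjects instead flip E and 4,
# looking for confirming rather than falsifying evidence -- hence
# the task's use as a measure of confirmation bias.
```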

The “feminist bank teller” question, famous from Kahneman and Tversky’s experiments, is a measure of the conjunction fallacy. Only about 10% of subjects got it right; even among students in the decision science program of the Stanford Business School, who had taken advanced courses in statistics, probability, and decision theory, only 15% did.[2]
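The underlying rule is simple: for any events $A$ and $B$,

$$P(A \wedge B) = P(A)\,P(B \mid A) \le P(A),$$

so “Linda is a bank teller and a feminist” can never be more probable than “Linda is a bank teller,” however representative the conjunction feels.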

Overconfidence bias shows up in 88% of study participants. [6]

The Cognitive Reflection Test measures the ability to avoid choosing intuitive-but-wrong answers.  Only 17% of subjects get all three questions right. [7]
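The best-known CRT item shows the pattern: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. The intuitive answer for the ball’s price, 10 cents, is wrong, as writing out the constraint shows:

$$b + (b + 1.00) = 1.10 \implies 2b = 0.10 \implies b = 0.05.$$

The ball costs 5 cents, and the bat $1.05.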

The incidence of framing effects appears to be lower and more variable. Stanovich’s experiments find a framing effect in 30-50% of subjects.[3]  The frequency of framing effects in Kahneman and Tversky’s experiments occupies roughly the same range. [4]  The incidence of the sunk cost fallacy in experiments is only about 25%. [5]

On 15 cognitive bias questions developed by Keith Stanovich, subjects’ accuracy rates ranged from 92.2% (for a gambler’s fallacy question) to 15.6% (for a sample size neglect question). The average score was 6.88 (46%) with a standard deviation of 2.32.  [8]

Many standard measures of cognitive bias find that a minority of subjects get the correct answer. There is significant individual variation in cognitive bias.

Correlation between cognitive bias tasks

Is “rationality” a cluster in thingspace?  “Rational” only makes sense as a descriptor of people if the same people are systematically better at cognitive bias tasks across the board.  This appears to be true.

Stanovich found that the Cognitive Reflection Test was correlated (r = 0.49) with score on the 15-question cognitive bias test.  Performance also correlated (r = 0.41) with IQ, measured with the Wechsler Abbreviated Scale of Intelligence. [8]
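For a sense of scale: a correlation of r = 0.49 implies r² ≈ 0.24, so CRT score accounts for roughly a quarter of the variance in performance on the bias battery: substantial, but far from the whole story.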

Stanovich also found that four rational thinking tasks (a syllogistic reasoning task, a Wason selection task, a statistical reasoning task, and an argument evaluation task) were correlated at the 0.001 significance level (r = 0.2-0.4). These were also correlated with SAT score (r = 0.53) and more weakly with math background (r = 0.145).[9]

Stanovich found, however, that many types of cognitive bias tests failed to correlate with measures of intelligence such as SAT or IQ scores. [10]

The Cognitive Reflection Test was found to be significantly correlated with correct responses on the base rate fallacy, conservatism, and overconfidence bias, but not with the endowment effect. [12]

Philip Tetlock’s “super-forecasters” — the top 2% most successful predictors on current-events questions in IARPA’s Good Judgment Project — outperformed the average by 65% and the best learning algorithms by 35-60%.  The best forecasters scored significantly higher than average on IQ, the Cognitive Reflection Test, and political knowledge. [11]
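Forecasting tournaments like the Good Judgment Project typically grade forecasters with Brier scores, the mean squared error between stated probabilities and realized outcomes (0 is perfect; lower is better), and improvement figures like those above are naturally read as relative reductions in this score. A minimal sketch, with made-up numbers:

```python
# Brier score: mean squared error between probabilistic forecasts
# (0-1) and binary outcomes (0 or 1). Lower is better.
# The forecasts below are invented for illustration.

def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

sharp = brier_score([0.90, 0.80, 0.95], [1, 1, 1])  # confident and right: ~0.018
vague = brier_score([0.50, 0.50, 0.50], [1, 1, 1])  # pure hedging: 0.25
print(sharp, vague)
```

A forecaster who cuts the average Brier score by 65% is, on this metric, far closer to the truth on the same questions.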

Correct responses on probability questions correlate with lower rates of the conjunction fallacy. [16]

In short, there appears to be significant correlation among a variety of tests of cognitive bias. Higher IQ is also correlated with avoiding cognitive biases, though many individual cognitive biases are uncorrelated with IQ, and the variation in cognitive bias is not fully explained by IQ differences.  The Cognitive Reflection Test is correlated with less cognitive bias and with IQ, as well as with forecasting ability.  There’s a compelling case that “rationality” is a distinct skill, related to intelligence and math or statistics ability.

Cognitive bias performance and dispositions

“Dispositions” are personal qualities that reflect one’s priorities — “curiosity” would be an example of a disposition.  Performance on cognitive bias tests is correlated with the types of dispositions we’d associate with being a thoughtful and reasonable person.

People scoring higher on the Cognitive Reflection Test are more patient, as measured by how willing they are to wait for a larger financial reward.[7]

Higher scores on the Cognitive Reflection Test also correlate with utilitarian thinking (as measured by willingness to throw the switch in the trolley problem). [13]

Belief in the paranormal is correlated with higher rates of the conjunction fallacy. [17]

Score on rational thinking tasks (argument evaluation, syllogisms, and statistical reasoning) is correlated (r = 0.413) with score on a Thinking Dispositions questionnaire, which measures Actively Open-Minded Thinking, Dogmatism, Paranormal Beliefs, and so on. [9]

Basically, it appears that lower rates of cognitive bias correlate with certain behavioral traits one could intuitively characterize as “reasonable.”  Such people are less dogmatic and more open-minded. They’re less likely to believe in the supernatural. They behave more like ideal economic actors.  Most of this seems to add up to being more WEIRD (Western, educated, industrialized, rich, and democratic), though this may be a function of the features that researchers chose to investigate.

Factors correlating with cognitive bias

Men score higher on the Cognitive Reflection Test than women — the group that answers all three questions correctly is two-thirds men, while the group that answers all three questions wrong is two-thirds women. [7]

Scientists [14] and mathematicians [15] performed no better than undergraduates on the Wason Selection Task, though mathematics undergraduates did better than history undergraduates.

Autistic adolescents are less susceptible to the conjunction fallacy than neurotypical adolescents. [18]

Correct responses on conjunction fallacy and base rate questions correlate with better performance on “No-Go” tasks and with greater N2 amplitude, an EEG measure believed to reflect executive inhibition ability. [19]  Response inhibition is thought to be based in the striatum and associated with striatal dopamine receptors.

Variants of the COMT gene predict susceptibility to confirmation bias. [20]  COMT encodes an enzyme involved in the degradation of dopamine. The Met variant of the Val/Met polymorphism makes the enzyme less efficient, which increases prefrontal cortex activation and working memory for abstract rules.  Met carriers exhibited more confirmation bias (p = 0.005).

There doesn’t seem to be that much data on the demographic characteristics of the most and least rational people.

There’s some suggestive neuroscience on the issue; the ability to avoid intuitive-but-wrong choices has to do with executive function and impulsivity, while the ability to switch tasks and avoid being anchored on earlier beliefs has to do with prefrontal cortex learning.  As we’ll see later, Stanovich (independently of the neuroscience evidence) categorizes cognitive biases into two distinct types, more or less matching this distinction between “consciously avoiding the intuitive-but-wrong answer” skills and the “considering that you might be wrong” skills.

Is there a hyper-rational elite?

It seems clear that there’s such a thing as individual variation in rationality, that people who are more rational in one area tend to be more rational in others, and that rationality correlates with the kinds of things you’d expect: intelligence, mathematical ability, and a flexible cognitive disposition.

It’s not obvious that “cognitive biases” are a natural category — some are associated with IQ, while some aren’t, and it seems quite probable that different biases have different neural correlates. But tentatively, it seems to make sense to talk about “rationality” as a single phenomenon.

A related question is whether there exists a small population of extreme outliers with very low rates of cognitive bias, a rationality elite.  Tetlock’s experiments seem to suggest this may be true — that there are an exceptional 2% who forecast significantly better than average people, experts, or algorithms.

In order for the “rationality elite” hypothesis to be generally valid, we’d have to see the same people score exceptionally high on a variety of cognitive bias tests.  There doesn’t yet appear to be evidence to confirm this.

Stanovich’s tripartite model

Stanovich proposes dividing “System 2”, or the reasoning mind, into two further parts: the “reflective mind” and the “algorithmic mind.”  The reflective mind engages in self-skepticism; it interrupts processes and asks “is this right?”  The algorithmic mind handles working memory and cognitive processing capacity; it is what IQ tests and SATs measure.

This would explain why some cognitive biases, but not others, correlate with IQ.  Intelligence does not protect against myside bias, the bias blind spot, sunk costs, and anchoring effects.  Intelligence is correlated with various tests of probabilistic reasoning (base rate neglect, probability matching), tests of logical reasoning (belief bias, argument evaluation), expected value maximization in gambles, overconfidence bias, and the Wason Selection Task.

One might argue that the skills that correlate with intelligence are tests of symbolic manipulation skill, the ability to consciously follow rules of logic and math, while the skills that don’t correlate with intelligence require cognitive flexibility, the ability to change one’s mind and avoid being tied to past choices.

Stanovich talks about “cognitive decoupling”, the ability to block out context and experiential knowledge and just follow formal rules, as a main component of both performance on intelligence tests and performance on the cognitive bias tests that correlate with intelligence.  Cognitive decoupling is the opposite of holistic thinking. It’s the ability to separate, to view things in the abstract, to play devil’s advocate.

Cognitive flexibility, for which the “actively open-minded thinking scale” is a good proxy measure, is the ability to question your own beliefs.  It predicts performance on a forecasting task, because the open-minded people sought more information. [21]  Less open-minded individuals are more biased towards their own first opinions and do less searching for information.[22]  Actively open-minded thinking increases with age (in middle schoolers) and correlates with cognitive ability.[23]

Under this model, people with high IQs, and especially people with training in probability, economics, and maybe explicit rationality, will be better at the cognitive bias skills that have to do with cognitive decoupling, but won’t be better at the others.

Speculatively, we might imagine that there is a “cognitive decoupling elite” of smart people who are good at probabilistic reasoning and score high on the cognitive reflection test and the IQ-correlated cognitive bias tests. These people would be more likely to be male, more likely to have at least undergrad-level math education, and more likely to have utilitarian views.  Speculating a bit more, I’d expect this group to be likelier to think in rule-based, devil’s-advocate ways, influenced by economics and analytic philosophy.  I’d expect them to be more likely to identify as rational.

I’d expect them not to be much better than average at avoiding the cognitive biases uncorrelated with intelligence. The cognitive decoupling elite would be just as prone to dogmatism and anchoring as anybody else.  However, the subset who were also cognitively flexible would probably be noticeably better at predicting the future.  Tetlock’s finding that the most accurate political pundits are “foxes,” not “hedgehogs,” seems to be related to this idea of the “reflective mind.”  Most smart abstract thinkers are not especially open-minded, but those who are get things right a lot more often than everybody else.

It’s also important to note that experiments on cognitive bias pinpoint a minority, but not a tiny minority, of less biased individuals.  17% of college students across all schools studied, but 45% of students at MIT, got all three questions on the Cognitive Reflection Test right.  MIT has about 120,000 living alumni; 27% of Americans have a bachelor’s or professional degree.  The number of Americans who would get the Cognitive Reflection Test right is probably on the order of a few million; that is, a few percent of the total population.  Obviously, conjunctively adding more cognitive bias tests should narrow down the population of ultra-rational people further, but we’re not talking about a tiny elite cabal here.  In the terminology of my previous post, the evidence points to the existence of unusually rational people, but only at the “One-Percenter” level.  If there are Elites, Ultra-Elites, and beyond, we don’t yet have the tests to detect them.
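A back-of-the-envelope version of that estimate (all inputs are rough figures from the text, and extending the student rate to all degree holders is an assumption):

```python
# Fermi estimate: how many Americans would ace the Cognitive
# Reflection Test? Inputs are approximations, not precise data.

us_population     = 320e6
share_with_degree = 0.27   # bachelor's or professional degree
crt_perfect_rate  = 0.17   # college students acing all three items
                           # (assumed to extend to graduates generally)

perfect_scorers = us_population * share_with_degree * crt_perfect_rate
print(f"{perfect_scorers / 1e6:.0f} million people, "
      f"{perfect_scorers / us_population:.0%} of the population")
# ~15 million, ~5%: the right order of magnitude for "a few percent"
```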

Conclusion

Yes, there are people who are consistently less cognitively biased than average.  They are a minority, but not a tiny minority.  They are smarter and more reasonable than average.  When you break down the measures of cognitive bias into two types, you find that intelligence is correlated with measures of ability to reason formally, but not with measures of ability to question one’s own judgment; the latter are more correlated with dispositions like “active open-mindedness.”  There’s no evidence to suggest that there’s a very small (e.g. less than 1% of the population) group of extremely rational people, probably because we don’t have enough experimental power to detect extremes of performance on cognitive bias tests.

References

[1] Wason, Peter C. “Reasoning about a rule.” The Quarterly Journal of Experimental Psychology 20.3 (1968): 273-281.

[2] Tversky, Amos, and Daniel Kahneman. “Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment.” Psychological Review 90.4 (1983): 293.

[3] Stanovich, Keith E., and Richard F. West. “Individual differences in framing and conjunction effects.” Thinking & Reasoning 4.4 (1998): 289-317.

[4] Tversky, Amos, and Daniel Kahneman. “Rational choice and the framing of decisions.” Journal of Business (1986): S251-S278.

[5] Friedman, Daniel, et al. “Searching for the sunk cost fallacy.” Experimental Economics 10.1 (2007): 79-104.

[6] West, R. F., & Stanovich, K. E. (1997). The domain specificity and generality of overconfidence: Individual differences in performance estimation bias. Psychonomic Bulletin & Review, 4, 387-392.

[7] Frederick, Shane. “Cognitive reflection and decision making.” Journal of Economic Perspectives 19.4 (2005): 25-42.

[8] Toplak, Maggie E., Richard F. West, and Keith E. Stanovich. “The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks.” Memory & Cognition 39.7 (2011): 1275-1289.

[9] Stanovich, K. E., & West, R. F. (1998). Individual differences in rational thought. Journal of Experimental Psychology: General, 127, 161-188

[10] Stanovich, K. E., West, R. F., & Toplak, M. E. (2011).  Intelligence and rationality.  In R. J. Sternberg & S. B. Kaufman (Eds.), Cambridge Handbook of Intelligence (pp. 784-826).  New York: Cambridge University Press.

[11] Ungar, Lyle, et al. “The Good Judgment Project: A Large Scale Test of Different Methods of Combining Expert Predictions.” 2012 AAAI Fall Symposium Series. 2012.

[12] Hoppe, Eva I., and David J. Kusterer. “Behavioral biases and cognitive reflection.” Economics Letters 110.2 (2011): 97-100.

[13] Paxton, Joseph M., Leo Ungar, and Joshua D. Greene. “Reflection and reasoning in moral judgment.” Cognitive Science 36.1 (2012): 163-177.

[14] Griggs, Richard A., and Sarah E. Ransdell. “Scientists and the selection task.” Social Studies of Science 16.2 (1986): 319-330.

[15] Inglis, Matthew, and Adrian Simpson. “Mathematicians and the selection task.” Proceedings of the 28th International Conference on the Psychology of Mathematics Education. Vol. 3. 2004.

[16] Benassi, Victor A., and Russell L. Knoth. “The intractable conjunction fallacy: Statistical sophistication, instructional set, and training.” Journal of Social Behavior & Personality (1993).

[17] Rogers, Paul, Tiffany Davis, and John Fisk. “Paranormal belief and susceptibility to the conjunction fallacy.” Applied Cognitive Psychology 23.4 (2009): 524-542.

[18] Morsanyi, Kinga, Simon J. Handley, and Jonathan S. B. T. Evans. “Decontextualised minds: Adolescents with autism are less susceptible to the conjunction fallacy than typically developing adolescents.” Journal of Autism and Developmental Disorders 40.11 (2010): 1378-1388.

[19] De Neys, Wim, et al. “What makes a good reasoner?: Brain potentials and heuristic bias susceptibility.” Proceedings of the Annual Conference of the Cognitive Science Society. Vol. 32. 2010.

[20] Doll, Bradley B., Kent E. Hutchison, and Michael J. Frank. “Dopaminergic genes predict individual differences in susceptibility to confirmation bias.” The Journal of Neuroscience 31.16 (2011): 6188-6198.

[21] Haran, Uriel, Ilana Ritov, and Barbara A. Mellers. “The role of actively open-minded thinking in information acquisition, accuracy, and calibration.” Judgment & Decision Making 8.3 (2013).

[22] Baron, Jonathan. “Beliefs about thinking.” Informal Reasoning and Education (1991): 169-186.

[23] Kokis, Judite V., et al. “Heuristic and analytic processing: Age trends and associations with cognitive ability and cognitive styles.” Journal of Experimental Child Psychology 83.1 (2002): 26-52.

Exit, Voice, and Empire

Economist Albert O. Hirschman introduced the concepts of “voice” and “exit” to describe how people respond to dissatisfaction with firms or political institutions.

If you don’t like the way your group works, you can exercise voice by participating in the decision-making process: voting, registering grievances, lobbying, writing letters to the editor, making your case in a meeting, and so on.  Or, you can exercise exit by leaving the group: emigrating, quitting your job, buying from a different company, forking the project, starting your own meetup, etc.

Exit, in many ways, is more attractive than voice. Voice requires conflict, persuasion, coalition-building: in short, politics. Voice is slow; exit is fast. Voice is often coercive; exit is peaceful.  Voice is messy; exit is clean. Balaji Srinivasan thinks exit is just plain better than voice.

In politics, ideas like seasteading, intentional communities, free cities, federalism, Archipelago, and so on, which revolve around a patchwork of voluntary communities, are based on increasing the role of exit relative to voice.  In democracies, most of what we think of as “politics” is voice.  A whole nation votes on whether we choose X or Y.  Instead, some say, we should side-step the conflict by letting the X-lovers have X and the Y-lovers have Y.  Let people vote with their feet or their dollars.

The problem with exit is that it’s not always practical to fragment groups into ever smaller splinters. There are returns to scale in large companies. There are network effects to living in large cities that become commercial and cultural hubs.  There are advantages to having a common language, common technological conventions, shared communication networks, and so on, across wide numbers of people.  And when one group is hugely dominant and successful, it’s more in your interest to try to shift it slightly towards your point of view than to try to “build your own” from scratch.

As long as there are network effects and advantages to large-scale organizations, there will be reasons to use voice rather than exit.

Empires are the original large-scale organizations. And empires have provided many historical prototypes for the advantages of unified institutions: Roman roads and Roman law.  Qin Shi Huang’s unification of the writing system, weights, and measures. The railroads, telegraphs, and trade routes of the British Empire.  The metric system and the Napoleonic Code.

Empires, by definition, do not rule over a single “people”, so they must accommodate cultural diversity. Sometimes they have been remarkably tolerant.  (The Jews have traditionally remembered Darius and Alexander fondly.)  Imperial rules have a quality of impartiality compared with local customs; they must be applicable to a vast and diverse population.

It’s often in your interest to belong to an empire.  The empire has the technology, the comforts of civilization, the military power.  Secede, and you’ll be “free”, but poor, provincial, and vulnerable.

There are profound problems with academic science, for instance. That doesn’t mean it’s obvious that one should just do science outside of academia.  The universities still have the top people, the funds, the equipment, and so on.  It’s not clear in every case that your new fledgling institute will do better than the old Leviathan.

You might choose voice over exit if you want the dominant institution to be more inclusive. Is it better to give gays civil marriage, or for gays to champion non-marital romantic arrangements?  The pro-marriage argument says that marriage is the dominant social institution, it comes with useful advantages, so gays should want to be included in that institution.  Gay marriage proponents want in, not out.  Marriage is nifty; it’s easier to gain access to an existing nifty thing than to create an alternative of equal niftiness.

Imperial structures — by which I mean, rules and institutions meant to work at large scale, for diverse populations — can be made more universal, abstract, and customizable, up to a point.  Chinese writing is standardized; Chinese pronunciation is local.  That which needs to be shared across an empire should follow a single rule; everything else should be up to local or individual choice.  Think of this like a structure with a steel skeleton clothed in colorful tissue paper.  A few rules are firm and universal; everything else is up to choice.

This heuristic shows up everywhere there are network effects, not just in politics.  Think of any social media platform.  The steel skeleton is the site’s code: a page/feed/tumblr/etc. has a certain structure.  The tissue paper is the content, which is endlessly customizable.  The skeleton is not truly impartial: the structure of the site does shape the culture.  But it aims at impartiality; it has an impartial flavor. Wikipedia is more successful than Conservapedia because Wikipedia is structured to be as universal and neutral as possible.
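As a toy illustration of the skeleton-and-tissue-paper pattern (all names here are invented), think of the platform as fixing a small universal schema while treating the content as an opaque payload:

```python
# Hypothetical sketch: a platform's "steel skeleton" is a rigid,
# universal schema; the "tissue paper" is freeform content the
# platform does not dictate.

from dataclasses import dataclass, field

@dataclass
class Post:
    # Skeleton: identical for every user, hard to change.
    author_id: int
    timestamp: float
    permalink: str
    # Tissue paper: endlessly customizable, opaque to the platform.
    content: dict = field(default_factory=dict)
```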

In an imperial structure, there isn’t much voice.  The rules are hard to change, and the emperor’s power is absolute.  Ideally, though, the rules are somewhat abstract or unbiased.  This allows them to be more persistent over time: the strength of different factions may rise and fall, and you’d like to have a structure that endures those shifts.  This makes some kinds of “tolerance” or “inclusiveness” or “cosmopolitanism” very much in the interest of the empire.

A limited form of exit is used within the empire, to choose local customs within subgroups that still have access to the imperial resources and play by the “skeleton” of imperial ground rules.  True exit, leaving the empire altogether, is much more costly, and usually not worth it.

Empires, I think, basically map to: no voice; plenty of freedom to mini-“exit” within the boundaries of the empire; true exit carries a high opportunity cost for the emigrant but is harmless to the empire.

Small, tight-knit personal communities have pretty much the opposite structure.  Imagine a five-person startup, or a nuclear family.  Voice obviously plays a big role here — if you don’t like something your spouse is doing, you talk to them about it.  When you have a group so small that it would cease to function if it split, communal cooperation begins to make sense.  It even makes sense to have the intuition that unanimity or consensus should be necessary for a decision; if one disgruntled person could destroy a project, it’s important to make sure everybody’s on board.

Internal diversity is impractical in very small groups; if you’re making turkey for Thanksgiving, everybody has to eat turkey, or at least be satisfied with a plate full of sides.  Hard-and-fast rules also don’t make a lot of sense.  The right thing to do in a situation is always a function of the people involved. Things get very granular at small scales, and it matters that Bob can’t stand Alice and Eve is having a family emergency and Dave is being a prima donna but he’s the best geneticist we have.

So very small, personal groups are more like: lots of voice; no mini-“exit”; true exit is dangerous to the group but can be cheap for the emigrant.

Should you reform or reject a failing institution?

Would you rather operate in something more like an empire or more like a family?

In a family, you can negotiate if you don’t like how things are being done.  In an empire, you can (up to a point) go off and do your own thing, but the ground rules of the empire are rigid.  An empire has the advantages of scale — network effects, organizational infrastructure, lots of resources.  A family has the advantages of smallness — it can take account of individual needs and situations, it’s “closer to the ground.”

What I’d like to propose is taking account of tradeoffs and being aware of what tactics are appropriate to what situations.

You can’t have a “national discussion about X” because America is a nation of 300 million people, not a friend cluster.  You also can’t split up your meetup group or activist organization every time somebody has a disagreement, because you won’t have a group any more.

Exit always has costs. If you leave an empire, you lose its large-scale resources. If you leave a family, you can break the family.  Exit is worth it if you can easily get what you need outside the group, but it’s not free.

Balaji’s idea of tech companies building better alternative versions of existing institutions is promising, but not because exit is always awesome. Rather, it works to the extent that technological infrastructure can substitute for institutional infrastructure. If what you really need to run a school, say, is superstar teachers, good programmers, and adaptive learning algorithms (in the Coursera/Khan Academy vein), then the infrastructure of the public school system or traditional academia is just not very useful, and you can exit without much opportunity cost.

It’s telling that Balaji talks about web-based education, not about homeschooling, which is also a form of exit from public school. But homeschooling is not scalable — you do it one family at a time.  That makes it harder to make homeschooling a real alternative for vast numbers of people.  Using the tech industry to make independent education convenient and memetically viral — that’s a different story.  It has the potential to make independent education into a new kind of “empire.”

I’d say that Silicon Valley is a growing empire (or interlocking collection of empires) that is beginning to poach people from the post-New Deal American empire(s).  It’s not about people leaving the big city to homestead on the lonesome prairie; it’s people leaving the big city to go to another big city.

Exit from a big institution is easy in two kinds of situations: either you don’t need big institutions, or you have another big institution to emigrate to.  “Tune in, turn on, drop out” means telling people they don’t need an empire at all.  “Go to App Academy, not college” means telling people they can switch from one empire to a different one.  They’re both forms of exit, but they’re structurally very different.

My own view is that empires are very useful in a lot of contexts, and that the ideal (not always attainable) way to deal with a dying empire is to build a new empire to compete with it.  Radical decentralization (like 19th-century homesteading) tends not to last forever; people will always be building cities, businesses will always be trying to become big, frontiers get populated, there are normal human pressures towards centralization.  Institutions start off small and scrappy, grow to mature success, and then become cargo-culted and corrupt.  It doesn’t make sense to fight that life cycle; it makes sense to join it, by being the scrappy upstart David taking on an already-failing Goliath.