# Uncertainty and Confidence

To have a probability distribution over events is sort of like being infinitely opinionated.

Event 1? Oh, I’ll give it 4% probability. Event 2? Hmm, I think that’s a 10%. Event 3? Wow, that’s unlikely, I’ll give it 0.1%.  You’ve got an opinion about everything under the sun.

If you are highly uncertain, your probability distribution is very flat; you believe that lots of things are possible and you aren’t sure which is true.  If you are highly certain, your probability distribution is pointy; you believe that one thing is much more likely to be true than the others.
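One way to make “flat” versus “pointy” concrete is Shannon entropy: a flat distribution has high entropy, a pointy one low. A toy sketch in Python (the example distributions are invented for illustration):

```python
import math

def entropy(dist):
    """Shannon entropy in bits: high for flat distributions, low for pointy ones."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

flat = [0.25, 0.25, 0.25, 0.25]    # highly uncertain: everything equally possible
pointy = [0.97, 0.01, 0.01, 0.01]  # highly certain: one dominant hypothesis

print(entropy(flat))    # 2.0 bits, the maximum for four outcomes
print(entropy(pointy))  # ~0.24 bits
```

The flat believer and the pointy believer are both fully opinionated; they just differ in how their probability mass is spread.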

But even if you are very uncertain, very full of doubt, you are still opinionated.  You have the opinion that there are many possible correct answers, all about equally likely.  For example, you could believe that the field of possible candidates for President in 2016 is very wide, and argue vigorously against someone who says “No, it’s pretty much sewn up for Hillary Clinton.” To have a flat probability distribution over candidates is to have a distinct belief — the belief that it is not sewn up for Hillary Clinton or anyone else.

To be uncertain is different from the state of lacking confidence.

Suppose you suffer a setback.  You fail, publicly, humiliatingly. Your confidence takes a hit. Now, when people ask you your opinion, you say “I don’t know.” Not “well, I’m uncertain, it could be a lot of different things,” but “I don’t have a probability distribution at all. I don’t have hypotheses, I don’t have opinions.”  Of course you shouldn’t have opinions — you’re a failure! It would be socially inappropriate to claim the right to an opinion.

Psychologically, we have a sense of the “unknown” that’s more profoundly blank than mere uncertainty.  The Ellsberg Paradox points at this. People will prefer a known risk to an unknown risk, even when this causes them to lose in expected value.  A good Bayesian, when given unknown odds, would have some prior, some guess at what the odds might be, and would act accordingly.  But that’s not what people do in real life. When given unknown odds, people shudder and turn away.
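The Bayesian move can be made concrete with Ellsberg’s two-urn setup: one urn is a known 50/50 mix of red and black balls, the other an unknown mix. Under a uniform prior over compositions (an assumption chosen for illustration), the predictive probability of drawing red is identical for both urns, so a good Bayesian has no reason to shudder:

```python
from fractions import Fraction

# Urn B holds 100 balls, red plus black, in unknown proportion.
# A Bayesian with a uniform prior over the 101 possible compositions
# computes a predictive probability of drawing red:
prior = Fraction(1, 101)
p_red_unknown = sum(prior * Fraction(k, 100) for k in range(101))

print(p_red_unknown)  # 1/2, the same as the "known" 50/50 urn
```

Yet experimentally, most people pay a premium to bet on the known urn.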

Knightian uncertainty is the attempt to put a mathematical formalism to this kind of lack of confidence.  And once you try to articulate it rigorously, you’ll notice that it requires rejecting the basic axioms of probability — in the case of Dempster-Shafer theory, the assumption of sigma-additivity.
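To see what rejecting additivity looks like, here is a minimal Dempster-Shafer sketch (the masses are invented for illustration): mass can be left uncommitted on the whole frame, so the belief in an event and the belief in its complement need not sum to 1.

```python
# A Dempster-Shafer mass function over {rain, dry}. Mass on the whole
# frame {rain, dry} represents "I don't know" -- committed to neither.
mass = {
    frozenset({"rain"}): 0.2,
    frozenset({"dry"}): 0.3,
    frozenset({"rain", "dry"}): 0.5,  # uncommitted "ignorance" mass
}

def belief(event, mass):
    """Bel(A): total mass committed to subsets of A."""
    return sum(m for focal, m in mass.items() if focal <= event)

bel_rain = belief(frozenset({"rain"}), mass)  # 0.2
bel_dry = belief(frozenset({"dry"}), mass)    # 0.3
print(bel_rain + bel_dry)  # 0.5 < 1: additivity fails
```

That leftover 0.5 is exactly the “profoundly blank” region that ordinary probability theory refuses to leave blank.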

My interpretation is that Knightian uncertainty doesn’t actually make sense.  It leads to predictable ways to construct situations in which your decision theory loses.  It’s a flaw — an exploitable flaw — in human psychology.
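The classic construction is Ellsberg’s three-color urn: 30 red balls and 60 black-or-yellow balls in unknown proportion. The typical ambiguity-averse preferences (bet on red over black, but on black-or-yellow over red-or-yellow) cannot be rationalized by any single probability assignment, which is the opening a bookie needs to price a package of bets with a guaranteed loss. A brute-force check:

```python
# Ellsberg's three-color urn: 30 red balls, 60 black-or-yellow in unknown mix.
# Typical preferences: bet-on-red over bet-on-black (ambiguity aversion),
# but bet-on-black-or-yellow over bet-on-red-or-yellow.
p_red = 30 / 90
found_consistent = False
for black in range(61):            # black balls: 0..60, yellow = 60 - black
    p_black = black / 90
    p_yellow = (60 - black) / 90
    prefers_red = p_red > p_black                       # first preference
    prefers_by = p_black + p_yellow > p_red + p_yellow  # second preference
    if prefers_red and prefers_by:
        found_consistent = True

print(found_consistent)  # False: no assignment satisfies both preferences
```

The second preference reduces to p_black > p_red, the exact negation of the first; the preferences contradict each other no matter what the hidden mix is.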

The attitude that considers it “arrogant” to “be opinionated” is coming from the model of high confidence and low confidence. People who have earned the right to be confident are allowed to have opinions; everyone else shouldn’t pontificate.

Ordinary probability theory says “but implicitly you have an opinion, or at least a hypothesis, any time you make a decision! You do have an opinion on whether extraterrestrials exist, even if you’ve never explicitly thought about it; you obviously don’t think they’re a serious threat, otherwise you’d be putting a lot more effort into defense against alien invasion.”

In the probabilistic worldview, the “right to an opinion” doesn’t make sense. Everyone, smart or dumb, right or wrong, has working hypotheses; that’s what it means to have a mind at all.  You might still be very uncertain, but you work under uncertainty, you take expected values under uncertainty. You may face uncertain risks, but you make a guess as to how bad they’re likely to be, in order to act at all.  You move in darkness.

It’s not arrogant, in itself. You’re not claiming you’re knowledgeable relative to other people. It’s just impossible not to have a best guess, a thought, however rough.  You have to think something, at least until you learn something new and think something else.

The intuition behind Knightian uncertainty really wants there to be a way to insist, “No, really, I don’t know whether there are aliens, I don’t even have a guess, I think nothing, I can say nothing.”

And, while I’m not sure exactly why, I find this chilling.

## 9 thoughts on “Uncertainty and Confidence”

1. You are not sure exactly why, but you do have an idea about why you find it chilling for anyone to want to have the right to remain silent at all. You chose to not share your not-so-thought-through opinion about it. You are exercising the right you don’t want others to have. That’s what’s so chilling about it, right? I am ready to be wrong, but I have the right to guess. 🙂

2. Ben Kuhn says:

An interesting thing I’ve picked up from reading some literature on automated market making is the idea that agents can have a distinction between “internal probabilities”–the kind that you can force people to have with things like Cox’s theorem–and “actionable probabilities” that you would be (for instance) willing to bet on.

These can be different basically due to pervasive adverse selection, and I suspect that this is part of what people find compelling about Knightian uncertainty: it’s a heuristic to avoid being adversely selected. For instance, if someone asks me what I think about extraterrestrials, it’s generally because they have thought more about it than I have, and so there’s asymmetric information that makes it unfavorable for me (on expectation) to commit myself to a number.

Another reason to claim Knightian uncertainty is bounded computation. Sometimes when I say “I have no opinion on X” I really mean “I think my opinion on X is extremely unstable under more reflection about it.” I’ve had several quite frustrating conversations with “rationalists” where I tried and failed to explain that their questions about my internal probabilities weren’t going to be informative for this reason.

• Siddharth Nishar says:

These are really good points. Thank you for sharing them.

3. I think that the intuition of “confidence” is that propositions aren’t hypotheses. People are hypotheses held by the collective. Each person is the hypothesis that that person is the leader and others should follow, imitate and support the leader. A person’s level of confidence is the strength of that hypothesis. This is the only sort of hypothesis that a collective of humans of the default tribal type *can* have.

4. Thanks for the thoughtful post and the introduction to some math which I had not heard of. That said, I wonder if your assumptions and choice of formalism are not leading you astray.

When people talk about “the right to have an opinion” they seem to usually mean “the right to advocate that opinion in public.” If someone believes that they are influenced by opinions which they hear, that assessing these opinions takes energy, and that opinions on a matter vary widely in accuracy, it’s rational for them to encourage only people whose opinions are better than average to express them. This is not about status, but about attention.

If Theodore wants to believe true things about Alexander the Great, and thinks that details which are first attested in the last thousand years are much less likely to be correct than details attested in antiquity, it’s rational for him to ignore details first attested in the last thousand years and to discourage people who accept them from speaking. If mathematicians differ about the relationship between the sets P and NP, it would certainly be reasonable to see a strong opinion as arrogant unless it is backed up (by a proof, by research on related problems which was well-received by peers, or by something else).

Of course, people often use the heuristic that sources with status are more credible, and this heuristic has limits. But epistemology is definitely a hard problem.

5. Will says:

There is a reason that people use Dempster-Shafer theory: it “wins” in the problem of sensor fusion — it’s less brittle to conflicting evidence than doing Bayesian updates.
Also, human psychology is only exploitable by Dutch book if people are actually willing to take the bet. It’s not clear people would do this.
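For readers who haven’t seen it, the sensor-fusion machinery referred to here is Dempster’s rule of combination, which fuses two mass functions by multiplying agreeing masses and renormalizing away conflict. A minimal sketch (the sensor readings are invented for illustration):

```python
def combine(m1, m2):
    """Dempster's rule: fuse two mass functions, renormalizing away conflict."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to incompatible hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    return {k: v / (1 - conflict) for k, v in combined.items()}

# Two sensors report on a target's type, each hedging with mass on the full frame.
frame = frozenset({"plane", "bird"})
s1 = {frozenset({"plane"}): 0.7, frame: 0.3}
s2 = {frozenset({"plane"}): 0.6, frame: 0.4}
fused = combine(s1, s2)
print(fused[frozenset({"plane"})])  # ~0.88: agreement strengthens the shared hypothesis
```

Each sensor’s hedge (mass on the whole frame) is exactly the uncommitted “ignorance” that a plain Bayesian update has no slot for.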

6. Cool.

As an aside, I personally find the notion of Knightian uncertainty to be empowering and liberating. (I’m not speaking to mathematical formalism.) Knightian uncertainty is an explicit acknowledgment that reality and “outcomes” don’t come prepackaged in composable, mutually exclusive chunks. We *impose* chunks on reality, if we can and if we dare. Sometimes you can, and you get civilization. Sometimes you can’t, and you get authoritarian high modernism and a horrible mess.

Language, science, software, computability theory, probability theory, etc., are the ongoing attempts to take the Knightian uncertainty out of reality. Maybe, someday, artificial intelligence will get us arbitrarily close to completely papering over reality with composable, mutually exclusive symbolic chunks. But concepts like Knightian uncertainty, apeiron, etc., explicitly remind us that the map is not the territory. (Preaching to choir, here.)

For me, Knightian uncertainty is a psychoactive concept that reminds me to look behind formalisms and apprehend as-yet-uncomputable territory as directly as I can with my puny brain. Then, formalisms work *for* me instead of distracting me from an attempt to figure out what’s actually going on.

For me, Knightian uncertainty is not about what I can’t say, refuse to say, or refuse to try to say. For me, it’s a mental move that encompasses what I can say, what I can apprehend but can’t yet say, my tacit (ironic) model of unknown unknowns (this is physically instantiated in my brain with a phenomenological marker, if not necessarily iconic phenomenological structure), and the boundaries and interactions between all of these things. It’s not a copout; it’s an invitation and a challenge.