The Tale of Alice Almost: Strategies for Dealing With Pretty Good People

Suppose you value some virtue V and you want to encourage people to be better at it.  Suppose also you are something of a “thought leader” or “public intellectual” — you have some ability to influence the culture around you through speech or writing.

Suppose Alice Almost is much more V-virtuous than the average person — say, she’s in the top one percent of the population at the practice of V.  But she’s still exhibited some clear-cut failures of V.  She’s almost V-virtuous, but not quite.

How should you engage with Alice in discourse, and how should you talk about Alice, if your goal is to get people to be more V-virtuous?

Well, it depends on what your specific goal is.

Raising the Global Median

If your goal is to raise the general population’s median V level (for instance, if V is “understanding of how vaccines work” and your goal is to increase the proportion of people who vaccinate their children), you want to support Alice straightforwardly.

Alice is way above the median V level. It would be great if people became more like Alice. If Alice is a popular communicator, signal-boosting Alice is more likely to help your cause than to harm it.

For instance, suppose Alice makes a post telling parents to vaccinate their kids, but she gets a minor fact wrong along the way.  It’s still OK to quote or excerpt the true part of her post approvingly, or to praise her for coming out in favor of vaccines.

Even spreading the post with the incorrect statement included, while it’s definitely suboptimal for the cause of increasing the average person’s understanding of vaccines, is probably net positive, rather than net negative.

Raising the Median Among The Virtuous

What if, instead, you’re trying to promote V among a small sub-community who excel at it?  Say, the top 1% of the population in terms of V-virtue?

You might do this if your goal only requires a small number of people to practice exceptional virtue. For instance, having an effective volunteer military doesn’t require all Americans to exhibit the virtues of a good soldier, only those who sign up for military service.

Now, within the community you’re trying to influence, Alice Almost isn’t way above average any more.  Alice is average. 

That means you want to push people, including Alice, to be better than Alice is today.  Sure, Alice is already pretty V-virtuous compared to the general population, but by the community’s standards, the general population is pathetic.

In this scenario, it makes sense to criticize Alice privately if you have a personal relationship with her.  It also makes sense to, at least sometimes, publicly point out how the Alice Almosts of the community are falling short of the ideal of V.  (Probably without naming names, unless Alice is already a famous public figure.)

Additionally, it makes sense to allow Alice to bear the usual negative consequences of her actions, and to publicly argue against anyone trying to shield her from those consequences. For instance, if people who exhibit Alice-like failures of V are routinely fired from their jobs in your community, and Alice gets fired and her supporters are outraged about it, then it makes sense for you to argue that Alice deserved to be fired.

It does not make sense here to express outrage at Alice’s behavior, or to “punish” her as though she had committed a community norm violation.  Alice is normal — that means that behavior like Alice’s happens all the time, and that the community does not currently have effective, reliably enforced norms against behavior like hers.

Now, maybe the community should have stronger norms against her behavior!  But you have to explicitly make the case for that.  If you go around saying “Alice should be jailed because she did X”, and X isn’t illegal under current law, then you are wrong.  You first have to argue that X should be illegal.

If Alice’s failures of V-virtue are typical, then you do want to communicate the message that people should practice V more than Alice does.  But this will be news to your audience, not common knowledge, since many of them are no better than Alice.  To communicate effectively, you’ll have to take a tone of educating or sharing information: “Alice Almost, a well-known member of our community, just did X.  Many of us do X, in fact. But X is not good enough. We shouldn’t consider X okay any more. Here’s why.”

Enforcing Community Norms

What if Alice is inside the community of top-1%-V-virtue you care about, but noticeably worse than average at V or violating community standards for V?

That’s an easy case. Enforce the norms! That’s what they’re there for!

Continuing to enforce the usual penalties against failures of V, making it common knowledge that you do so, and supporting others who enforce penalties keeps the “floor” of V in your community from falling, whether through deterrence or expulsion or both.

In terms of tone, it now makes sense for you to communicate in a more “judgmental” way, because it’s common knowledge that Alice did wrong.  You can say something like “Alice did X.  As you know, X is unacceptable/forbidden/substandard in our community. Therefore, we will be penalizing her in such-and-such a way, according to our well-known, established traditions/code/policy.”

Splintering off a “Remnant”

The previous three cases treated the boundaries of your community as static. What if we made them dynamic instead?

Suppose you’re not happy with the standard of V-virtue of “the top 1% of the population.”  You want to create a subcommunity with an even higher standard — let’s say, drawing from the top 0.1% of the population.

You might do this, for instance, if V is “degree of alignment/agreement with a policy agenda”, and you’re not making any progress with discourse/collaboration between people who are only mostly aligned with your agenda, so you want to form a smaller task force composed of a core of people who are hyper-aligned.

In that case, Alice Almost is normal for your current community, but she’s notably inferior in V-virtue compared to the standards of the splinter community you want to form.

Here, not only do you want to publicly criticize actions like Alice’s, but you even want to spend most of your time talking about how the Alice Almosts of the world fall short of the ideal V, as you advocate for the existence of your splinter group.  You want to reach out to the people who are better at V than Alice, even if they don’t know it themselves, and explain to them what the difference between top-1% V-virtue and top 0.1% V-virtue looks like, and why that difference matters.  You’re, in effect, empowering and encouraging them to notice that they’re not Alice’s peers any more, they’ve leveled up beyond her, and they don’t have to make excuses for her any more.

Just like in the case where Alice is a typical member of your community and you want to push your community to do better, your criticisms of Alice will be news to much of your audience, so you have to take an “educational/informational” tone. Even the people in the top 0.1% “remnant” may not be aware yet that there’s anything wrong with Alice’s behavior.

However, you’re now speaking primarily to the top 0.1%, not the top 1%, so you can now afford to be somewhat more insulting towards Alice.  You’re trying to create norms for a future community in which Alice’s behavior will be considered unacceptable/substandard, so you can start to introduce the frame where Alice-like behavior is “immoral”, “incompetent”, “outrageous”, or otherwise failing to meet a reasonable person’s minimum expectations.

Expanding Community Membership

Let’s say you’re doing just the opposite. You think your community is too selective.  You want to expand its boundaries to, say, a group drawn from the top 10% of the population in V-virtue.  Your goals may require you to raise the V-levels of a wider audience than you’d been speaking to before.

In this case, you’re more or less in the same position as in the first case where you’re just trying to raise the global median.  You should support Alice Almost (as much as possible without yourself imitating or compounding her failures), laud her as a role model, and not make a big public deal about the fact that she falls short of the ideal; most of the people you’re trying to reach fall short even farther.

What if Alice is Diluting Community Values?

Now, what if Alice Almost is the one trying to expand community membership to include people lower in V-virtue … and you don’t agree with that?

Now, Alice is your opponent.

In all the previous cases, the worst Alice did was drag down the community’s median V level, either directly or by being a role model for others.  But we had no reason to suppose she was optimizing for lowering the median V level of the community.  Once Alice is trying to “popularize” or “expand” the community, that changes. She’s actively trying to lower median V in your community — that is, she’s optimizing for the opposite of what you want.

This means that, not only should you criticize Alice, enforce existing community norms that forbid her behavior, and argue that community standards should become stricter against Alice-like, 1%-level failures of V-virtue, but you should also optimize against Alice gaining more power generally.

(But what if Alice succeeds in expanding the community size 10x and raising V levels within that larger community enough that its median V level still increases from where it is now? Wouldn’t Alice’s goals be aligned with your goals then?  Yeah, but we can assume we’re in a regime where increasing V levels is very hard — a reasonable assumption if you think about the track record of trying to teach ethics or instill virtue in large numbers of people — so such a huge persuasive/rhetorical win is unlikely.)

Alice, for her part, will see you as optimizing against her goals (she wants to grow the community and you want to prevent that) so she’ll have reason to optimize generally against you gaining more power.

Alice Almost and you are now in a zero-sum game.  You are direct opponents, even though both of you are, compared to the general population, very high in V-virtue.

Alice Almost in this scenario is a Sociopath, in the Chapman sense — she’s trying to expand and dilute the subculture.   And Sociopaths are not just a little bad for the survival of the subculture, they are an existential threat to it, even though they are only a little weaker in the defining skills/virtues of the subculture than the Geeks who founded it.  In the long run, it’s not about where you are, it’s where you’re aiming, and the Sociopaths are aiming down.

Of course, getting locked into a zero-sum game is bad if you can avoid it.  Misidentifying Alice as a Sociopath when she isn’t, or missing an opportunity to dialogue with her and come to agreement about how big the community really needs to be, is costly.  You don’t want to be hasty or paranoid in reading people as opponents.  But there’s a very, very big difference between how you deal with someone who just happened to do something that blocked your goal, and how you deal with someone who is persistently optimizing against your goal.

Humans Who Are Not Concentrating Are Not General Intelligences

Recently, OpenAI came out with a new language model that automatically synthesizes text, called GPT-2.

It’s disturbingly good.  You can see some examples (cherry-picked, by their own admission) in OpenAI’s post and in the related technical paper.

I’m not going to write about the machine learning here, but about the examples and what we can infer from them.

The scary thing about GPT-2-generated text is that it flows very naturally if you’re just skimming, reading for writing style and key, evocative words.  The “unicorn” sample reads like a real science press release. The “theft of nuclear material” sample reads like a real news story. The “Miley Cyrus shoplifting” sample reads like a real post from a celebrity gossip site.  The “GPT-2” sample reads like a real OpenAI press release. The “Legolas and Gimli” sample reads like a real fantasy novel. The “Civil War homework assignment” reads like a real C-student’s paper.  The “JFK acceptance speech” reads like a real politician’s speech.  The “recycling” sample reads like a real right-wing screed.

If I just skim, without focusing, they all look totally normal. I would not have noticed they were machine-generated. I would not have noticed anything amiss about them at all.

But if I read with focus, I notice that they don’t make a lot of logical sense.

For instance, in the unicorn sample:

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Wait a second, “Ovid” doesn’t refer to a “distinctive horn”, so why would naming them “Ovid’s Unicorn” be naming them after a distinctive horn?  Also, you just said they had one horn, so why are you saying they have four horns in the next sentence?

While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.”

Wait, unicorns originated from the interbreeding of humans and … unicorns?  That’s circular, isn’t it?

Or, look at the GPT-2 sample:

We believe this project is the first step in the direction of developing large NLP systems without task-specific training data. That is, we are developing a machine language system in the generative style with no explicit rules for producing text.

Except the second sentence isn’t a restatement of the first sentence — “task-specific training data” and “explicit rules for producing text” aren’t synonyms!  So saying “That is” doesn’t make sense.

Or look at the LOTR sample:

Aragorn drew his sword, and the Battle of Fangorn was won. As they marched out through the thicket the morning mist cleared, and the day turned to dusk.

Yeah, day doesn’t turn to dusk in the morning.

Or in the “resurrected JFK” sample:

(1) The brain of JFK was harvested and reconstructed via tissue sampling. There was no way that the tissue could be transported by air. (2) A sample was collected from the area around his upper chest and sent to the University of Maryland for analysis. A human brain at that point would be about one and a half cubic centimeters. The data were then analyzed along with material that was obtained from the original brain to produce a reconstruction; in layman’s terms, a “mesh” of brain tissue.

His brain tissue was harvested…from his chest?!  A human brain is one and a half cubic centimeters?!

So, ok, this isn’t actually human-equivalent writing ability. OpenAI doesn’t claim it is, for what it’s worth — I’m not trying to diminish their accomplishment, that’s not the point of this post.  The point is, if you skim text, you miss obvious absurdities.  The point is OpenAI HAS achieved the ability to pass the Turing test against humans on autopilot.

The point is, I know of a few people, acquaintances of mine, who, even when asked to try to find flaws, could not detect anything weird or mistaken in the GPT-2-generated samples.

There are probably a lot of people who would be completely taken in by literal “fake news”, as in, computer-generated fake articles and blog posts.  This is pretty alarming.  Even more alarming: unless I make a conscious effort to read carefully, I would be one of them.

Robin Hanson’s post Better Babblers is very relevant here.  He claims, and I don’t think he’s exaggerating, that a lot of human speech is simply generated by “low order correlations”, that is, generating sentences or paragraphs that are statistically likely to come after previous sentences or paragraphs:

After eighteen years of being a professor, I’ve graded many student essays. And while I usually try to teach a deep structure of concepts, what the median student actually learns seems to mostly be a set of low order correlations. They know what words to use, which words tend to go together, which combinations tend to have positive associations, and so on. But if you ask an exam question where the deep structure answer differs from answer you’d guess looking at low order correlations, most students usually give the wrong answer.

Simple correlations also seem sufficient to capture most polite conversation talk, such as the weather is nice, how is your mother’s illness, and damn that other political party. Simple correlations are also most of what I see in inspirational TED talks, and when public intellectuals and talk show guests pontificate on topics they really don’t understand, such as quantum mechanics, consciousness, postmodernism, or the need always for more regulation everywhere. After all, media entertainers don’t need to understand deep structures any better than do their audiences.

Let me call styles of talking (or music, etc.) that rely mostly on low order correlations “babbling”. Babbling isn’t meaningless, but to ignorant audiences it often appears to be based on a deeper understanding than is actually the case. When done well, babbling can be entertaining, comforting, titillating, or exciting. It just isn’t usually a good place to learn deep insight.
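To make “low order correlations” concrete, here’s a minimal sketch (my own illustration, not Hanson’s model, and nothing like GPT-2’s actual architecture): a bigram “babbler” that learns only which word tends to follow which word, then generates text by chaining those statistics together.

```python
# A toy "babbler": generates text purely from low order correlations,
# i.e. which word tends to follow which word in a training corpus.
# Illustrative only; GPT-2 is a large transformer, not a bigram table.
import random
from collections import defaultdict

def train_bigrams(text):
    """For each word, record every word observed to follow it."""
    words = text.split()
    following = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        following[current_word].append(next_word)
    return following

def babble(following, start_word, length=20):
    """Generate text by repeatedly sampling a statistically likely next word."""
    word = start_word
    output = [word]
    for _ in range(length - 1):
        candidates = following.get(word)
        if not candidates:  # dead end: this word was never followed by anything
            break
        word = random.choice(candidates)  # samples in proportion to observed frequency
        output.append(word)
    return " ".join(output)

corpus = "the weather is nice and the weather is warm and how is your mother"
print(babble(train_bigrams(corpus), "the"))
# e.g. "the weather is warm and how is your mother"
```

Text produced this way has the right local texture (the words go together) while carrying no deep structure at all, which is exactly Hanson’s point; GPT-2 does the same next-token-prediction task with a vastly more powerful model, so its output holds together over much longer stretches.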

I used to half-joke that the New Age Bullshit Generator was actually useful as a way to get myself to feel more optimistic. The truth is, it isn’t quite good enough to match the “aura” or “associations” of genuine, human-created inspirational text. GPT-2, though, is.

I also suspect that the “lyrical” or “free-associational” function of poetry is adequately matched by GPT-2.  The autocompletions of Howl read a lot like Allen Ginsberg — they just don’t imply the same beliefs about the world.  (Moloch whose heart is crying for justice! sounds rather positive.)

I’ve noticed that I cannot tell, from casual conversation, whether someone is intelligent in the IQ sense.

I’ve interviewed job applicants, and perceived them all as “bright and impressive”, but found that the vast majority of them could not solve a simple math problem.  The ones who could solve the problem didn’t appear any “brighter” in conversation than the ones who couldn’t.

I’ve taught public school teachers, who were incredibly bad at formal mathematical reasoning (I know, because I graded their tests), to the point that I had not realized humans could be that bad at math — but it had no effect on how they came across in friendly conversation after hours. They didn’t seem “dopey” or “slow”, they were witty and engaging and warm.

I’ve read the personal blogs of intellectually disabled people — people who, by definition, score poorly on IQ tests — and they don’t read as any less funny or creative or relatable than anyone else.

Whatever ability IQ tests and math tests measure, I believe that lacking that ability doesn’t have any effect on one’s ability to make a good social impression or even to “seem smart” in conversation.

If “human intelligence” is about reasoning ability, the capacity to detect whether arguments make sense, then you simply do not need human intelligence to create a linguistic style or aesthetic that can fool our pattern-recognition apparatus if we don’t concentrate on parsing content.

I also noticed, upon reading GPT-2 samples, just how often my brain slides from focused attention to just skimming. I read the paper’s sample about Spanish history with interest, and the GPT-2-generated text was obviously absurd. My eyes glazed over during the sample about video games, since I don’t care about video games, and the machine-generated text looked totally unobjectionable to me. My brain is constantly making evaluations about what’s worth the trouble to focus on, and what’s ok to tune out. GPT-2 is actually really useful as a *test* of one’s level of attention.

This is related to my hypothesis in https://srconstantin.wordpress.com/2017/10/10/distinctions-in-types-of-thought/ that effortless pattern-recognition is what machine learning can do today, while effortful attention and explicit reasoning (which seems to be a subset of effortful attention) are generally beyond ML’s current capabilities.

Beta waves in the brain are usually associated with focused concentration or active or anxious thought, while alpha waves are associated with the relaxed state of being awake but with closed eyes, before falling asleep, or while dreaming. Alpha waves sharply reduce after a subject makes a mistake and begins paying closer attention. I’d be interested to see whether ability to tell GPT-2-generated text from human-generated text correlates with alpha waves vs. beta waves.
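As a hedged sketch of what that proposed analysis might look like (the subject numbers below are hypothetical placeholders, not real EEG or quiz data), one could compute each subject’s alpha/beta band-power ratio and correlate it with their accuracy at labeling samples as GPT-2-generated or human-written:

```python
# Hypothetical analysis sketch: does a subject's alpha/beta power ratio
# predict how well they distinguish GPT-2 text from human text?
# All numbers below are made-up placeholders for real measurements.
import numpy as np
from scipy.stats import pearsonr

alpha_power = np.array([12.0, 9.5, 15.2, 8.1, 11.3])          # per-subject alpha band power
beta_power = np.array([6.1, 7.8, 5.0, 9.2, 6.9])               # per-subject beta band power
detection_accuracy = np.array([0.55, 0.80, 0.45, 0.90, 0.60])  # fraction of samples labeled correctly

alpha_beta_ratio = alpha_power / beta_power  # higher ratio ~ more relaxed, less focused

r, p = pearsonr(alpha_beta_ratio, detection_accuracy)
print(f"correlation between alpha/beta ratio and detection accuracy: r={r:.2f} (p={p:.2f})")
```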

The first-order effects of highly effective text-generators are scary. It will be incredibly easy and cheap to fool people, to manipulate social movements, etc. There’s a lot of opportunity for bad actors to take advantage of this.

The second-order effects might well be good, though. If only conscious, focused logical thought can detect a bot, maybe some people will become more aware of when they’re thinking actively vs not, and will be able to flag when they’re not really focusing, and distinguish the impressions they absorb in a state of autopilot from “real learning”.

The mental motion of “I didn’t really parse that paragraph, but sure, whatever, I’ll take the author’s word for it” is, in my introspective experience, absolutely identical to “I didn’t really parse that paragraph because it was bot-generated and didn’t make any sense so I couldn’t possibly have parsed it”, except that in the first case, I assume that the error lies with me rather than the text.  This is not a safe assumption in a post-GPT-2 world. Instead of “default to humility” (assume that when you don’t understand a passage, the passage is true and you’re just missing something) the ideal mental action in a world full of bots is “default to null” (if you don’t understand a passage, assume you’re in the same epistemic state as if you’d never read it at all.)

Maybe practice and experience with GPT-2 will help people get better at doing “default to null”?