Suppose you value some virtue V and you want to encourage people to be better at it. Suppose also you are something of a “thought leader” or “public intellectual” — you have some ability to influence the culture around you through speech or writing.
Suppose Alice Almost is much more V-virtuous than the average person — say, she’s in the top one percent of the population at the practice of V. But she’s still exhibited some clear-cut failures of V. She’s almost V-virtuous, but not quite.
How should you engage with Alice in discourse, and how should you talk about Alice, if your goal is to get people to be more V-virtuous?
Well, it depends on what your specific goal is.
Raising the Global Median
If your goal is to raise the general population’s median V-level (for instance, if V is “understanding of how vaccines work” and your goal is to increase the proportion of people who vaccinate their children), you want to support Alice straightforwardly.
Alice is way above the median V level. It would be great if people became more like Alice. If Alice is a popular communicator, signal-boosting her is more likely to help than to harm your cause.
For instance, suppose Alice makes a post telling parents to vaccinate their kids, but she gets a minor fact wrong along the way. It’s still OK to quote or excerpt the true part of her post approvingly, or to praise her for coming out in favor of vaccines.
Even spreading the post with the incorrect statement included, while it’s definitely suboptimal for the cause of increasing the average person’s understanding of vaccines, is probably net positive, rather than net negative.
Raising the Median Among The Virtuous
What if, instead, you’re trying to promote V among a small sub-community who excel at it? Say, the top 1% of the population in terms of V-virtue?
You might do this if your goal only requires a small number of people to practice exceptional virtue. For instance, having an effective volunteer military doesn’t require all Americans to exhibit the virtues of a good soldier, just the ones who sign up for military service.
Now, within the community you’re trying to influence, Alice Almost isn’t way above average any more. Alice is average.
That means you want to push people, including Alice, to be better than Alice is today. Sure, Alice is already pretty V-virtuous compared to the general population, but by the community’s standards, the general population is pathetic.
In this scenario, it makes sense to criticize Alice privately if you have a personal relationship with her. It also makes sense to, at least sometimes, publicly point out how the Alice Almosts of the community are falling short of the ideal of V. (Probably without naming names, unless Alice is already a famous public figure.)
Additionally, it makes sense to allow Alice to bear the usual negative consequences of her actions, and to publicly argue against anyone trying to shield her from normal consequences. For instance, if people who exhibit Alice-like failures of V are routinely fired from their jobs in your community, and Alice gets fired and her supporters get outraged about it, it makes sense for you to argue that Alice deserved to be fired.
It does not make sense here to express outrage at Alice’s behavior, or to “punish” her as though she had committed a community norm violation. Alice is normal — that means that behavior like Alice’s happens all the time, and that the community does not currently have effective, reliably enforced norms against behavior like hers.
Now, maybe the community should have stronger norms against her behavior! But you have to explicitly make the case for that. If you go around saying “Alice should be jailed because she did X”, and X isn’t illegal under current law, then you are wrong. You first have to argue that X should be illegal.
If Alice’s failures of V-virtue are typical, then you do want to communicate the message that people should practice V more than Alice does. But this will be news to your audience, not common knowledge, since many of them are no better than Alice. To communicate effectively, you’ll have to take a tone of educating or sharing information: “Alice Almost, a well-known member of our community, just did X. Many of us do X, in fact. But X is not good enough. We shouldn’t consider X okay any more. Here’s why.”
Enforcing Community Norms
What if Alice is inside the top-1%-V-virtue community you care about, but is noticeably worse than average at V, or is violating community standards for V?
That’s an easy case. Enforce the norms! That’s what they’re there for!
Continuing to enforce the usual penalties against failures of V, making it common knowledge that you do so, and supporting others who enforce penalties, keeps the “floor” of V in your community from falling, whether by deterrence or expulsion or both.
In terms of tone, it now makes sense for you to communicate in a more “judgmental” way, because it’s common knowledge that Alice did wrong. You can say something like “Alice did X. As you know, X is unacceptable/forbidden/substandard in our community. Therefore, we will be penalizing her in such-and-such a way, according to our well-known, established traditions/code/policy.”
Splintering off a “Remnant”
The previous three cases treated the boundaries of your community as static. What if we made them dynamic instead?
Suppose you’re not happy with the standard of V-virtue of “the top 1% of the population.” You want to create a subcommunity with an even higher standard — let’s say, drawing from the top 0.1% of the population.
You might do this, for instance, if V is “degree of alignment/agreement with a policy agenda”, and you’re not making any progress with discourse/collaboration between people who are only mostly aligned with your agenda, so you want to form a smaller task force composed of a core of people who are hyper-aligned.
In that case, Alice Almost is normal for your current community, but she’s notably inferior in V-virtue compared to the standards of the splinter community you want to form.
Here, not only do you want to publicly criticize actions like Alice’s, but you even want to spend most of your time talking about how the Alice Almosts of the world fall short of the ideal V, as you advocate for the existence of your splinter group. You want to reach out to the people who are better at V than Alice, even if they don’t know it themselves, and explain to them what the difference between top-1% V-virtue and top 0.1% V-virtue looks like, and why that difference matters. You’re, in effect, empowering and encouraging them to notice that they’re not Alice’s peers any more, they’ve leveled up beyond her, and they don’t have to make excuses for her any more.
Just like in the case where Alice is a typical member of your community and you want to push your community to do better, your criticisms of Alice will be news to much of your audience, so you have to take an “educational/informational” tone. Even the people in the top 0.1% “remnant” may not be aware yet that there’s anything wrong with Alice’s behavior.
However, you’re now speaking primarily to the top 0.1%, not the top 1%, so you can now afford to be somewhat more insulting towards Alice. You’re trying to create norms for a future community in which Alice’s behavior will be considered unacceptable/substandard, so you can start to introduce the frame where Alice-like behavior is “immoral”, “incompetent”, “outrageous”, or otherwise failing to meet a reasonable person’s minimum expectations.
Expanding Community Membership
Let’s say you’re doing just the opposite. You think your community is too selective. You want to expand its boundaries to, say, a group drawn from the top 10% of the population in V-virtue. Your goals may require you to raise the V-levels of a wider audience than you’d been speaking to before.
In this case, you’re more or less in the same position as in the first case where you’re just trying to raise the global median. You should support Alice Almost (as much as possible without yourself imitating or compounding her failures), laud her as a role model, and not make a big public deal about the fact that she falls short of the ideal; most of the people you’re trying to reach fall short even farther.
What if Alice is Diluting Community Values?
Now, what if Alice Almost is the one trying to expand community membership to include people lower in V-virtue … and you don’t agree with that?
Now, Alice is your opponent.
In all the previous cases, the worst Alice did was drag down the community’s median V level, either directly or by being a role model for others. But we had no reason to suppose she was optimizing for lowering the median V level of the community. Once Alice is trying to “popularize” or “expand” the community, that changes. She’s actively trying to lower median V in your community — that is, she’s optimizing for the opposite of what you want.
This means that, not only should you criticize Alice, enforce existing community norms that forbid her behavior, and argue that community standards should become stricter against Alice-like, 1%-level failures of V-virtue, but you should also optimize against Alice gaining more power generally.
(But what if Alice succeeds in expanding the community size 10x while also raising V levels within the larger community enough that the median V level still increases from where it is now? Wouldn’t Alice’s goals be aligned with your goals then? Yeah, but we can assume we’re in a regime where increasing V levels is very hard — a reasonable assumption if you think about the track record of trying to teach ethics or instill virtue in large numbers of people — so such a huge persuasive/rhetorical win is unlikely.)
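(To make that parenthetical concrete, here’s a toy median calculation. The V scores are made up, purely for illustration: expansion alone drags the median down, and it only ends up higher than today if the newcomers’ V rises dramatically.)

```python
import statistics

# Made-up V scores, purely illustrative.
current = [10, 10, 11, 11, 12, 12, 13, 13, 14, 14]   # today's community

# Expanding 10x with newcomers at typical-population V levels tanks the median:
diluted = current + [1] * 90
print(statistics.median(current), statistics.median(diluted))   # 12.0 vs 1.0

# Only if Alice also raises the newcomers' V a great deal does the median
# end up higher than it is now:
uplifted = current + [15] * 90
print(statistics.median(uplifted))                               # 15.0
```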
Alice, for her part, will see you as optimizing against her goals (she wants to grow the community and you want to prevent that) so she’ll have reason to optimize generally against you gaining more power.
Alice Almost and you are now in a zero-sum game. You are direct opponents, even though both of you are, compared to the general population, very high in V-virtue.
Alice Almost in this scenario is a Sociopath, in the Chapman sense — she’s trying to expand and dilute the subculture. And Sociopaths are not just a little bad for the survival of the subculture, they are an existential threat to it, even though they are only a little weaker in the defining skills/virtues of the subculture than the Geeks who founded it. In the long run, it’s not about where you are, it’s where you’re aiming, and the Sociopaths are aiming down.
Of course, getting locked into a zero-sum game is bad if you can avoid it. Misidentifying Alice as a Sociopath when she isn’t, or missing an opportunity to dialogue with her and come to agreement about how big the community really needs to be, is costly. You don’t want to be hasty or paranoid in reading people as opponents. But there’s a very, very big difference between how you deal with someone who just happened to do something that blocked your goal, and how you deal with someone who is persistently optimizing against your goal.
Interesting thought exercise. Reminds me of the spat between Sam Harris and Ezra Klein, how two people with such (relatively) similar political views could become such bitter enemies and find so much to vehemently disagree about.
I tend to advocate conciliation in most circumstances.
I’ve seen the following basic pattern play out too many times:
1. 5 man butterfly collector club wants more people.
2. No one wants to join, because people would rather not collect butterflies, they have the internet.
3. They ask people why they don’t want to join.
4. It is awkward to tell butterfly collectors that no one shares their passion, so they point to the smelliest butterfly collector and say he is driving everyone away.
5. The butterfly collectors drive that guy out.
6. Go to 1, but the club is now a 4 man butterfly collector club.
Alice is part of your team. Forgive small faults from a person who shares your norms. You have no guarantee that her absence will cause others to arrive. My experience suggests it will simply shrink your ranks.
The problem construction seems a little odd to me. If this is an abstracted recasting of a real-world situation that I’m not involved in, that might explain why it seems odd, but the two goals that are cast as being in opposition don’t really seem opposed to me.
Like, there is some value V, and you want to maximize the amount of V in the world. Most people don’t care about V at all and produce V that rounds to zero. Maybe 1% of the population cares about V and produces 10V per capita. Maybe 0.1% of the population works much harder or cares more or is just more lucky in a way externally indistinguishable from work / caring and produces 100V per capita.
If you take the 1% and motivate them more, and they increase V production by 10%, you have more V. If you expand the population of people who care about V and bring in another 10% of the population, each of whom produces 1V per capita, you also have more V.
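To make the arithmetic concrete, here’s a toy calculation; the population size and per-capita numbers are made up, just to show that either lever raises total V:

```python
# Toy numbers, purely illustrative: a population of 100,000 people.
population = 100_000

# Baseline: 1% of the population care about V and produce 10 V per capita;
# everyone else produces V that rounds to zero.
producers = 0.01 * population            # 1,000 people
baseline_total = producers * 10          # 10,000 V

# Lever A: motivate the existing producers to produce 10% more V each.
total_a = producers * 10 * 1.10          # 11,000 V

# Lever B: recruit another 10% of the population at 1 V per capita.
new_producers = 0.10 * population        # 10,000 people
total_b = baseline_total + new_producers * 1    # 20,000 V

print(baseline_total, total_a, total_b)  # either lever increases total V
```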
The best solution is to just do both. If for some reason you can’t do both at once, then do one this year and do the other next year. The problem statement looks at this as a one-time thing, but in reality time just keeps passing. You can do different things at different times and there will be a next time.
In some sense this is an explore / exploit problem. Bringing more people into V production community is analogous to exploring. Pushing people who care about V to produce more V is exploiting. You need to do both and you need to move back and forth between doing each because consistently doing one or the other leaves opportunities untapped.
Your model is already smuggling in some assumptions I don’t buy. Like, that if you change the boundaries of the community, everyone’s behavior stays the same. There are effects like role-modeling (people imitate other members of their social circles) and reciprocity (some behaviors, like cooperation, are much harder to practice if the people around you aren’t also doing them).
Also, “maximize V” isn’t actually what we’re doing. We’re maximizing some function that’s increasing in V, but different people may disagree on what that function *is*, especially if they have different models of how you *use* V or different preferences for what to do with it. The difference between the person who wants to expand the community and the person who doesn’t is that they’re maximizing different functions, which are both increasing in V.
I find it odd that you think these are separate cases. Your decision on whether or not to criticize Alice will probably operate on all of these levels at once. You can’t do one without the others.
In general, the dominant effect is likely to happen at the level of the general populace. If Alice is at the 99th percentile in V-virtue and you call her out on her failings, you’re sending a signal to the bottom 95+% that trying to be more V-virtuous is unrewarding because you can’t be satisfied by any realistic effort. This is how SJWs contributed to Trump’s election.
BTW I am *not* vagueblogging about a particular interpersonal situation. This comes up very often as an issue & I think it’s important to solve in general, i.e. the “question of moderates”, the question of whether the person who is *almost* right is your ally or your enemy.
It think it’s fair to say my take may have some implicit assumptions, but I think they aren’t a totally crazy set of simplifying assumption. If value V is specifically reciprocity, then your concern seems reasonable, but for most outward facing values across a really wide range of areas (everything from reduction of third world poverty, production of more punk music, funding of scientific research, etc) I feel like the actual effect of more or less V should swamp the secondary effects.
I also think it’s fair to point out that you want to maximize functions increasing in V rather than directly maximize V, but again I wonder if the ‘conflict’ is being framed in the wrong way. Like, if you are maximizing V*H*S and Alice Almost is maximizing (V+D)^O, then the fact that you want to increase V by exhorting existing producers of V to raise production while Alice wants to motivate more people to begin V production isn’t a conflict. The conflict is that you value H and S and Alice values D and O. In this case, it seems like potentially a straightforward, ordinary values difference, and the agreement on V is a red herring. You have a conflict of HS vs DO. But if you want to maximize V*H*S and Alice wants to maximize V*(H+S), then your values are ‘close enough’ and focusing on things that boost V seems like an obvious point of alliance.
If what you value is log(Max(V1, V2, V3, V4,… Vn))*H*S and Alice values Sum(V1, V2, V3, V4,… Vn)*H*S then your framing makes a lot more sense, but I’d almost say the V is so transformed that I don’t know if I would say you and Alice really both value V there.
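A quick sketch of that last distinction, with made-up V scores and H, S held fixed: under log(Max(…)) only the single best producer matters, so recruiting modest new producers changes nothing, while under Sum(…) every recruit adds something.

```python
import math

# Hypothetical per-member V levels (numbers are mine, purely illustrative).
core = [100, 90, 80]            # a small, high-V community
expanded = core + [1] * 50      # the same core plus many modest recruits

H, S = 1.0, 1.0                 # held fixed so only the V term varies

def log_max_value(vs):
    # "Only the best producer matters."
    return math.log(max(vs)) * H * S

def sum_value(vs):
    # "Every producer adds something."
    return sum(vs) * H * S

print(log_max_value(core), log_max_value(expanded))  # ~4.61 vs ~4.61: expansion adds nothing
print(sum_value(core), sum_value(expanded))          # 270.0 vs 320.0: expansion helps
```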
// btw I didn’t mean to accuse of vague blogging or anything like that, framing just felt odd to me, so I figured I might be missing context
It’s clear that before PBS etc astrologers had 99th percentile knowledge about stars. Different values from astronomers.
I liked this post. Good framing/decomposition
It seems a little quick to conclude Alice is a sociopath because she actively wants to recruit members with lower V. For instance, maybe Alice believes that the benefits to people outside the community from being more inclusive more than make up for the harms to the community.
To what extent does the person wanting to increase V need to be a good exemplar of V?