Contrite Strategies and The Need For Standards

Epistemic Status: Confident

There’s a really interesting paper from 1996 called The Logic of Contrition, which I’ll summarize here.  In it, the authors identify a strategy called “Contrite Tit For Tat”, which does better than either Pavlov or Generous Tit For Tat in Iterated Prisoner’s Dilemma.

In Contrite Tit For Tat, the player doesn’t only look at what he and the other player played on the last turn, but also at another variable, the standing of the players, which can be good or bad.

If Bob defected on Alice last round but Alice was in good standing, then Bob’s standing switches to bad, and Alice defects against Bob.

If Bob defected on Alice last round but Alice was in bad standing, then Bob’s standing stays good, and Alice cooperates with Bob.

If Bob cooperated with Alice last round, Bob keeps his good standing, and Alice cooperates.
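In code, these standing rules are compact. Here’s a minimal sketch (the encoding is mine, not the paper’s formalism; moves are "C"/"D", standing is "good"/"bad", and everyone starts in good standing):

```python
def update_standing(my_move, opponent_standing):
    """Defecting on an opponent in good standing is unprovoked and
    costs you your good standing. Defecting on an opponent in bad
    standing is justified punishment, and cooperating is contrite:
    both keep (or restore) good standing. In a game loop, both
    players' standings should be updated from the standings as they
    were *before* the current round."""
    if my_move == "D" and opponent_standing == "good":
        return "bad"
    return "good"

def ctft_move(opponent_standing):
    """cTFT cooperates unless the opponent is in bad standing, i.e.
    it retaliates only against unprovoked defection."""
    return "D" if opponent_standing == "bad" else "C"
```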

This allows two Contrite Tit For Tat players to recover quickly from accidental defections without defecting against each other forever:

D/C -> C/D -> C/C

But, unlike Pavlov, it consistently resists the “always defect” strategy:

D/C -> D/D -> D/D -> D/D …

Like TFT (Tit For Tat) and unlike Pavlov and gTFT (Generous Tit For Tat), cTFT (Contrite Tit For Tat) can invade a population of all Defectors.

A related contrite strategy is Remorse.  Remorse cooperates only if it is in bad standing, or if both players cooperated in the previous round. In other words, Remorse is more aggressive; unlike cTFT, it can attack cooperators.
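Encoded the same way, Remorse’s rule is a literal transcription of the sentence above (my paraphrase; the paper pins down details, such as the opening move, that the one-sentence version leaves open):

```python
def remorse_move(my_standing, my_last_move, opponent_last_move):
    # Cooperate only when contrite (in bad standing) or after a
    # mutually cooperative round; defect in every other situation.
    # Standing updates work as in the cTFT sketch above.
    if my_standing == "bad":
        return "C"
    if my_last_move == "C" and opponent_last_move == "C":
        return "C"
    return "D"
```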

Against the strategy “always cooperate”, cTFT always cooperates but Remorse alternates cooperating and defecting:

C/C -> C/D -> C/C -> C/D …

And Remorse defends effectively against defectors:

D/C -> D/D -> D/D -> D/D…

But if one Remorse accidentally defects against another, recovery is more difficult:

C/D -> D/C -> D/D -> C/D -> …

If the Prisoner’s Dilemma is repeated a large but finite number of times, cTFT is an evolutionarily stable state in the sense that you can’t do better for yourself when playing against a cTFT player by doing anything that deviates from what cTFT would recommend. This implies that no other strategy can successfully invade a population of all cTFT’s.

Remorse can sometimes be invaded by strategies better at cooperating with themselves, while Pavlov can sometimes be invaded by Defectors, depending on the payoff matrix; but for all Prisoner’s Dilemma payoff matrices, cTFT resists invasion.

Defector and a similar strategy called Grim Trigger (if a player ever defects on you, keep defecting forever) are evolutionarily stable, but not good outcomes — they result in much lower scores for everyone in the population than TFT or its variants.  By contrast, a whole population that adopts cTFT, gTFT, Pavlov, or Remorse on average gets the payoff from cooperating each round.

The bottom line is, adding “contrition” to TFT makes it quite a bit better, and allows it to keep pace with Pavlov in exploiting TFT’s, while doing better than Pavlov at exploiting Defectors.

This is no longer true if we add noise in the perception of good or bad standing; contrite strategies, like TFT, can get stuck defecting against each other if they erroneously perceive bad standing.

The moral of the story is that there’s a game-theoretic advantage to having not only reciprocity (TFT) but also standards (cTFT); in fact, reciprocity alone is not enough to outperform strategies like Pavlov, which don’t map well onto human moral maxims.

What do I mean by standards?

There’s a difference between saying “Behavior X is better than behavior Y” and saying “Behavior Y is unacceptable.”

The concept of “unacceptable” behavior functions like the concept of “standing” in the game theory paper.  If I do something “unacceptable” and you respond in some negative way (you get mad or punish me or whatever), I’m not supposed to retaliate against your negative response; I’m supposed to accept it.

Pure reciprocity results in blood feuds — “if you kill one of my family I’ll kill one of yours” is perfectly sound Tit For Tat reasoning, but it means that we can’t stop killing once we’ve started.

Arbitrary forgiveness fixes that problem and allows parties to reconcile even if they’ve been fighting, but introduces the new problem that now you’re vulnerable to an attacker who just won’t quit.

Contrite strategies are like having a court system. (Though not an enforcement system!  They are still “anarchist” in that sense — all cTFT bots are equal.)  The “standing” is an assessment attached to each person of whether they are in the wrong and thereby restricted in their permission to retaliate.

In general, for actions not covered by the legal system and even for some that are, we don’t have widely shared standards of acceptable vs. unacceptable behavior.  We’re aware (and especially so given the internet) that these standards differ from subculture to subculture and context to context, and we’re often aware that they’re arbitrary, and so we have enormous difficulty getting widely shared clarity on claims like “he was deceptive and that’s not OK”.  Because…was he deceptive in a way that counts as fraud? Was it just “puffery” of the kind that’s normal in PR?  Was it a white lie to spare someone’s feelings?  Was it “just venting” and thus not expected to be as nuanced or fact-checked as more formal speech?  What level or standard of honesty could he reasonably have been expected to be living up to?

We can’t say “that’s not OK” without some kind of understanding that he failed to live up to a shared expectation.  And where is that bar?  It’s going to depend on who you ask and what local context they’re living in.  And not only that: because nobody is keeping track of where even the separate, local standards are, standards will eventually drop to the lowest common denominator if they aren’t made explicit.

MBTI isn’t science but it’s illustrative descriptively, and it seems to me that the difference between “Perceivers” and “Judgers”, which is basically the difference between the kinds of people who get called “judgmental” in ordinary English and the people who don’t, is that “Judgers” have a clear idea of where the line is between “acceptable” and “unacceptable” behavior, while Perceivers don’t.  I’m a Perceiver, and I’ve often had this experience where someone is saying “that’s just Not OK” and I’m like “whoa, where are you getting that? I can certainly see that it’s suboptimal, this other thing would be better, but why are you drawing the line for acceptability here instead of somewhere else?”

The lesson of cTFT is that having a line in the first place, having a standard that you can either be in line with or in violation of, has survival value.

 

The Pavlov Strategy

Epistemic Status: Common knowledge, just not to me

The Evolution of Trust is a deceptively friendly little interactive game.  Near the end, there’s a “sandbox” evolutionary game theory simulator. It’s pretty flexible. You can do quick experiments in it without writing code. I highly recommend playing around.

One of the things that surprised me was a strategy the game calls Simpleton, also known in the literature as Pavlov.  In certain conditions, it works pretty well — even better than tit-for-tat or tit-for-tat with forgiveness.

Let’s set the framework first. You have a Prisoner’s Dilemma-type game.

  • If both parties cooperate, they each get +2 points.
  • If one cooperates and the other defects, the defector gets +3 points and the cooperator gets -1 point.
  • If both defect, both get 0 points.

This game is iterated — you’re randomly assigned to a partner and you play many rounds.   Longer rounds reward more cooperative strategies; shorter rounds reward more defection.

It’s also evolutionary — you have a proportion of bots each playing their strategies, and after each round, the bots with the most points replicate and the bots with the least points die out.  Successful strategies will tend to reproduce while unsuccessful ones die out.  In other words, this is the Darwin Game.

Finally, it’s stochastic — there’s a small probability that any bot will make a mistake and cooperate or defect at random.
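Here’s a minimal sketch of one noisy match under these rules (the strategy interface and the 5% error rate are my own choices, not the game’s):

```python
import random

# Payoffs from the rules above: (my move, their move) -> my points.
PAYOFF = {("C", "C"): 2, ("C", "D"): -1, ("D", "C"): 3, ("D", "D"): 0}

ERROR_RATE = 0.05  # chance that a bot misplays its intended move

def noisy(move):
    """With small probability, flip the intended move at random."""
    if random.random() < ERROR_RATE:
        return "D" if move == "C" else "C"
    return move

def play_match(strategy_a, strategy_b, rounds=20):
    """Iterate the dilemma. Each strategy is a function mapping
    (opponent's history, own history) to "C" or "D"."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = noisy(strategy_a(history_b, history_a))
        move_b = noisy(strategy_b(history_a, history_b))
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b
```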

Now, how does Pavlov work?

Pavlov starts off cooperating.  If the other player cooperates with Pavlov, Pavlov keeps doing whatever it’s doing, even if it was a mistake; if the other player defects, Pavlov switches its behavior, even if it was a mistake.

In other words, Pavlov:

  • cooperates when you cooperate with it, except by mistake
  • “pushes boundaries” and keeps defecting when you cooperate, until you retaliate
  • “concedes when punished” and cooperates after a defect/defect result
  • “retaliates against unprovoked aggression”, defecting if you defect on it while it cooperates.
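In the match interface sketched above, Pavlov is just “win-stay, lose-shift”:

```python
def pavlov(opponent_history, own_history):
    """Win-stay, lose-shift: open with cooperation; keep your own last
    move if the opponent cooperated, flip it if the opponent defected."""
    if not own_history:
        return "C"
    last_own, last_opp = own_history[-1], opponent_history[-1]
    if last_opp == "C":
        return last_own                      # "win": stay
    return "D" if last_own == "C" else "C"   # "lose": shift
```

All four bullets above fall out of this one rule: mutual cooperation is stable, accidental defection against a cooperator gets repeated greedily, mutual defection flips back to cooperation, and unprovoked defection triggers a switch to defecting.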

If there’s any randomness, Pavlov is better at cooperating with itself than Tit-For-Tat. One accidental defection and two Tit-For-Tats are stuck in an eternal defect cycle, while two Pavlovs forgive each other and wind up back in a cooperate/cooperate pattern.

Moreover, Pavlov can exploit CooperateBot (if it defects by accident, it will keep greedily defecting against the hapless CooperateBot, while Tit-For-Tat will not) but still exerts some pressure against DefectBot (defecting against it half the time, compared to Tit-For-Tat’s consistent defection).

The interesting thing is that Pavlov can beat Tit-For-Tat or Tit-for-Tat-with-Forgiveness in a wide variety of scenarios.

If there are only Pavlov and Tit-For-Tat bots, Tit-For-Tat has to start out outnumbering Pavlov quite significantly in order to win. The same is true for a population of Pavlov and Tit-For-Tat-With-Forgiveness.  It doesn’t change if we add in some Cooperators or Defectors either.

Why?

Compared to Tit-For-Tat, Pavlov cooperates better with itself.  If two Tit-For-Tat bots are paired, and one of them accidentally defects, they’ll be stuck in a mutual defection equilibrium.  However, if one Pavlov bot accidentally defects against its clone, we’ll see

C/D -> D/D -> C/C

which recovers a mutual-cooperation equilibrium and picks up more points.

Compared to Tit-For-Tat-With-Forgiveness, Pavlov cooperates *worse* with itself (it takes longer to recover from mistakes) but it “exploits” TFTWF’s patience better. If Pavlov accidentally defects against TFTWF, the result is

D/C -> D/C -> D/D -> C/D -> D/D -> C/C,

which leaves Pavlov with a net gain of 1 point per turn over the first five turns (before a cooperative equilibrium is reached), compared to TFTWF’s 1/5 point per turn.

If TFTWF accidentally defects against Pavlov, the result is

C/D -> D/C -> D/C -> D/D -> C/D

which cycles eternally (until the next mistake), getting Pavlov an average of 5/4 points per turn, compared to TFTWF’s 1/4 point per turn.

Either way, Pavlov eventually overtakes TFTWF.
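These per-turn averages are easy to check mechanically. A small helper of my own, reusing the payoff matrix from above:

```python
PAYOFF = {("C", "C"): 2, ("C", "D"): -1, ("D", "C"): 3, ("D", "D"): 0}

def per_turn_average(sequence):
    """sequence: list of (Pavlov's move, TFTWF's move) pairs.
    Returns each side's mean points per turn."""
    pavlov = sum(PAYOFF[(p, t)] for p, t in sequence) / len(sequence)
    tftwf = sum(PAYOFF[(t, p)] for p, t in sequence) / len(sequence)
    return pavlov, tftwf

# First five turns after Pavlov's accidental defection:
print(per_turn_average([("D","C"), ("D","C"), ("D","D"), ("C","D"), ("D","D")]))
# -> (1.0, 0.2): 1 point per turn for Pavlov, 1/5 for TFTWF

# The repeating four-turn cycle after TFTWF's accidental defection:
print(per_turn_average([("D","C"), ("D","C"), ("D","D"), ("C","D")]))
# -> (1.25, 0.25): 5/4 per turn for Pavlov, 1/4 for TFTWF
```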

If you add enough DefectBots to a mix of Pavlovs and TFT’s (and it takes a large majority of the total population being DefectBots), TFT can win, because it’s more resistant to DefectBots than Pavlov is.  Pavlov cooperates with DefectBots half the time; TFT never does, except by mistake.

Pavlov isn’t perfect, but it performs well enough to hold its own in a variety of circumstances.  An adapted version of Pavlov won the 2005 iterated game theory tournament.

Why, then, don’t we actually talk about it, the way we talk about Tit-For-Tat?  If it’s true that moral maxims like the Golden Rule emerge out of the fact that Tit-For-Tat is an effective strategy, why aren’t there moral maxims that exemplify the Pavlov strategy?  Why haven’t I even heard of Pavlov until now, despite having taken a game theory course once, when everybody has heard of Tit-For-Tat and has an intuitive feeling for how it works?

In Wedekind and Milinski’s 1996 experiment with human subjects, playing an iterated prisoner’s dilemma game, a full 70% of them engaged in Pavlov-like strategies.  The human Pavlovians were smarter than a pure Pavlov strategy — they eventually recognized the DefectBots and stopped cooperating with them, while a pure-Pavlov strategy never would — but, just like Pavlov, the humans kept “pushing boundaries” when unopposed.

Moreover, humans basically divided themselves into Pavlovians and Tit-For-Tat-ers; they didn’t switch strategies between game conditions where one strategy or another was superior, but just played the same way each time.

In other words, it seems fairly likely not only that Pavlov performs well in computer simulations, but that humans do have some intuitive model of Pavlov.  And, even more suggestively, it might be that “there are two kinds of people” — some people always play Pavlov while others always play Tit-For-Tat.

Human players are more likely to use generous Tit-For-Tat strategies rather than Pavlov when they have to play a working-memory game at the same time as they’re playing iterated Prisoner’s Dilemma.  In other words, Pavlov is probably more costly in working memory than generous Tit for Tat.

If you look at all 16 theoretically possible strategies that only have memory of the previous round, and let them evolve, evolutionary dynamics can wind up quite complex and oscillatory.

A population of TFT players will be invaded by more “forgiving” strategies like Pavlov, who in turn can be invaded by DefectBot and other uncooperative strategies, which again can be invaded by TFT, which thrives in high-defection environments.  If you track the overall rate of cooperation over time, you get very regular oscillations, though these are quite sensitive to variation in the error and mutation rates, and nonperiodic (chaotic) behavior can occur in some regimes.
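A sketch of that kind of experiment: enumerate the 16 deterministic memory-one strategies and evolve their population shares with discrete replicator dynamics (all parameter values and the cooperative opening are my assumptions):

```python
import itertools
import random

MOVES = ("C", "D")
PAYOFF = {("C", "C"): 2, ("C", "D"): -1, ("D", "C"): 3, ("D", "D"): 0}
OUTCOMES = [("C", "C"), ("C", "D"), ("D", "C"), ("D", "D")]
ERROR = 0.01  # per-move mistake rate

# A memory-one strategy is one response to each possible last-round
# outcome (my move, their move): 2**4 = 16 deterministic strategies.
STRATEGIES = list(itertools.product(MOVES, repeat=4))

def respond(strategy, my_last, their_last):
    move = strategy[OUTCOMES.index((my_last, their_last))]
    if random.random() < ERROR:  # noisy execution
        move = "D" if move == "C" else "C"
    return move

def mean_payoff(s1, s2, rounds=200):
    """s1's average points per round against s2. Both sides open as
    if the previous round had been mutual cooperation (an assumption)."""
    m1 = m2 = "C"
    total = 0
    for _ in range(rounds):
        m1, m2 = respond(s1, m1, m2), respond(s2, m2, m1)
        total += PAYOFF[(m1, m2)]
    return total / rounds

def replicator_step(shares):
    """Strategies reproduce in proportion to fitness. The +2 shift
    keeps weights positive, since payoffs range from -1 to 3."""
    fitness = [sum(q * mean_payoff(s, t) for q, t in zip(shares, STRATEGIES))
               for s in STRATEGIES]
    weighted = [q * (f + 2) for q, f in zip(shares, fitness)]
    total = sum(weighted)
    return [w / total for w in weighted]
```

Iterating replicator_step from a uniform population and tracking the overall cooperation rate is enough to see the boom-and-bust oscillations described above; their shape depends a lot on ERROR and on how you add mutation.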

This is strangely reminiscent of Peter Turchin’s theory of secular cycles in history.  Periods of peace and prosperity alternate with periods of conflict and poverty; empires rise and fall.  Periods of low cooperation happen at the fall of an empire/state/civilization; this enables new empires to rise when a subgroup has better ability to cooperate with itself and fight off its enemies than the surrounding warring peoples; but in peacetime, at the height of an empire, more forgiving and exploitative strategies like Pavlov can emerge, which themselves are vulnerable to the barbaric defectors.  This is a vastly simplified story compared to the actual mathematical dynamics or the actual history, of course, but it’s an illustrative gist.

The big takeaway from learning about evolutionary game theory is that it’s genuinely complicated from a player’s perspective.

“It’s complicated” sometimes functions as a curiosity-stopper: if you want to protect your intellectual “territory” without putting yourself out of a job, you conclude “more research is needed” instead of looking at the data you have and drawing preliminary conclusions.

That isn’t the kind of “complexity” I’m talking about here.  Chaos in dynamical systems has a specific meaning: the system is so sensitive to initial conditions that even a small measurement error in determining where it starts means you cannot even approximately predict where it will end up.

“Chaos: When the present determines the future, but the approximate present does not approximately determine the future.”

Optimal strategy depends sensitively on who else is in the population, how many errors you make, and how likely strategies are to change (or enter or leave).  There are a lot of moving parts here.

Argue Politics* With Your Best Friends

Epistemic Status: I endorse this strongly but don’t think I’m being original or clever at all.

Until recently — yesterday, in fact — I was seriously wrong about something.

I thought that it was silly when I saw people spending lots of energy arguing with their closest friends who almost completely agreed with them, but not quite.

That’s some People’s Front Of Judaea shit, I thought.  Don’t you know that guy you’re arguing with so vehemently is your friend?  He likes you!  He’s a pretty good guy!  He even shares your values and models, almost completely! He’s only wrong about this one, itty bitty, relatively abstract thing!

Meanwhile, there are people out there in the world who don’t share your values. And there are people out there who are actually evil and do awful things.

It’s like “ok, saying mean things about Muslims can be bad, but being a Muslim terrorist is a hell of a lot worse! Why do the people who are so quick to penalize Islamophobic speech never have anything bad to say about actual mass murder?  C’mon, get a sense of proportion!”

I still think, obviously, that really bad actions are worse than slightly bad actions.

But I was seriously misunderstanding why people argue with their close friends.

Have you noticed my mistake yet?  Give it a moment.

. . .

. . .

. . .

Ok, here it is.

Arguing is not a punishment.

Again.

Arguing is not a punishment.

Sure, serious wrongdoing should be penalized, and socially disapproved of, more than mild wrongdoing.   (Murder is worse than prejudiced speech.)

Also, fixing big problems should take priority over fixing little problems. (Saving money on rent is worth more of your attention than saving money on apples.)

But let’s frame it differently.

Cooperation is really valuable. Stable cooperation, that is: cooperation such that even in the future, when you know each other better and you’ve had more time to think, you’ll still want to cooperate.

Trust is really valuable, and scarce.  Justified trust, that is; when you can rely on what somebody says to be true and base your decisions on information you get from them.

Having “true friends” — people you can cooperate with and trust, stably, to a high degree — is valuable.

Yeah, you can get along and even thrive in a low-trust environment if you have the right skills for it.  Havamal, the medieval Icelandic wisdom literature, attributed to the god Odin, is my favorite advice for how to be a savvy customer in a low-trust world. (Exercise for the reader: think about how it applies to the replication crisis in science.) But especially in a low-trust world, true friends are valuable, as Havamal will remind you again and again.

How do you get more trust and cooperation with your friends?

It’s a hard problem; I haven’t solved it or even really started trying yet. The following are just ideas at the conceptual level, rather than things I’ve found successful.

But communicating with them to get on the same page is clearly part of the puzzle.  Cooperation means “you and I agree to do X, and then we follow through and actually do X.”  The part about willingness to follow through is about loyalty, conscientiousness, motivation, integrity, all those kinds of virtues.  The part about agreeing to do X, though?  That’s not possible unless you both clearly understand what X is, which is much harder than it sounds!  It takes a lot of discussion, in my experience and from what I’ve heard, to get people on the same page about what exactly they’ve committed to doing.

Moreover, if I don’t understand why X is so important to you, and I say “yeah, ok, sure, X”, and then I go home and back to my life, but X still seems pointless to me, then I’m going to be less motivated to do X.

Because we didn’t have the argument about “is X pointless or not?”

We didn’t resolve it. We let it drop, to be nice, because we’re friends and we like each other.  But we didn’t get on the same page, and now a ball got dropped and you’re unhappy with me.

That getting-on-the-same-page process is not a punishment.

It’s something you’d only do with a friend close enough that you really might cooperate on work that you care about getting done.  (Mundane example: household chores.  Gotta get on the same page about who’s responsible for what!  Negotiating for fewer/different responsibilities is better than shirking!  That can be a really hard thing to internalize, though.)

“I spend more time communicating and getting on the same page with my friends than I do on having discussions with people I hate” — frame it that way, and suddenly that doesn’t sound like pointless infighting, it sounds mature and practical, right?

Of course you’d focus most on clarifying communication with your closest friends! They’re the people you’re most likely to be able to cooperate with!

Ok, so what kind of agreement is most valuable and attainable?  After all, nobody, even your closest friends, agrees with you on everything.

Short term, the answer is obvious: agreement on the details that are practical and relevant to the tasks you share.  Share an apartment?  Gotta come to agreement on chores, and share world-models relevant for those. (It’s no good if I agree to sweep but I don’t know where we keep the broom.)

But how about the long-run and more meta problem of living in a low-cooperation world itself?

Here’s one example: we’re in a real trade war with China now. Chinese investment in the US dropped 92 percent in the first half of 2018!  I’ve tuned out financial markets for most of my life, but I’m essentially a professional fundraiser now, and let me tell you, a drop in Chinese-US investment that drastic affects a US organization’s ability to raise capital.  Trade wars, like real wars, can come along all of a sudden and destroy value. Cooperation in this sense is less about singing kumbaya and more about not taking a wrecking ball to your own house.  The Hobbesian war of all against all ruins things that people were trying to build.

You want collaborators on fixing that kind of a problem?

The relevant things to agree (and disagree!) on are about the nature of cooperation and trust themselves. How are alliances and coalitions formed and maintained and broken?  How, and how well, do enforcement mechanisms and incentive strategies work?  You can think of these questions through the lenses of a number of fields:

  • game theory
  • evolutionary psychology
  • some branches of economics (mechanism design, public choice, price theory in general)
  • international relations (I know none of this)
  • Marxism (I haven’t read Marx either, but I’ve heard that his class analysis can be seen as applied iterated game theory, where a “class” refers to a coalition)

In all cases, the things to get on the same page about are positive, not normative; aspects of fundamental theory, not immediate policy.

We want long-term cooperation, right? That means fundamentals need to be gotten right. Why? If you focus on object-level policy, it’s too easy for your friend to concur without agreeing (“I agree we should do X, but not with your reason for doing X”), which means that on the next policy question that comes up, your friend might not even concur!

(I have a friend — a good guy! a smart guy! — who concurs with me on 100% of object-level political controversies, and in every case, he concurs for a reason I think is dumb.  You may know someone like that too.  For the purposes of building long-term cooperation, your friend Mr. Concur is harder to get on the same page with, and thus lower priority to have discussions with, than your friend Ms. Dissent, who starts with the same premises as you but takes them in a totally different direction.  This is counterintuitive, because often you will initially get along better with Mr. Concur! That is because the mechanism that produces “getting along with” and makes friendships closer or weaker is itself a short-term, object-level policy! For instance, people in the same political tribe are nicer to each other.)

So, that’s why fundamental principles, not immediate policy.

Why positive and not normative?  So you’ll avoid unnecessary hostility.

Hostility, after all, in game-theory-land, is what it feels like from the inside to decide that your interests are opposed to someone else’s.  You can come to this conclusion mistakenly.  To avoid becoming hostile by mistake, first try to clearly understand and communicate what the landscape of interests and incentives even looks like.  That’s what professional negotiators harp on all the time — more often than most people assume, it’s in your interests to keep asking clarifying questions until you understand wtf is going on, and stay cordial enough to keep talking until you understand wtf is going on, because that increases the odds you’ll find a mutually agreeable deal, should one exist.  (Notwithstanding this, there are cases in which obfuscating your negotiating position is in your interest.  That’s less true, I expect, the more meta you go.  Another reason to start with foundations rather than policies.)

Sticking around for a technical discussion is, itself, a gesture of trust. It invests resources.

That’s why it’s hard to get this stuff started. As I write this, I haven’t washed up yet, I’m not cleaning the house or reading science papers or adding stuff to the LRI blog, and I’m ignoring my baby (who, luckily, is happily playing with his toys and smiling at me every so often.)  I’m of the opinion that laying these things out in writing is one of the better ways I have to start coordinated conversations, but, let’s be real, it does involve being a little…spendthrift.  Feeling like “sure, I can afford to do this.”  I’m also reading Law’s Order, currently.  That’s also a resource investment into this whole maybe-doomed “understand the micro-foundations of politics” goal, and it also looks kinda like goofing off, and lookit, aren’t there already economists for this who do it better?  I’m in a remarkably privileged position at the moment when I have a bunch of time flexibility, and something tells me that this is one of the ways I want to be using it.  It is kind of the future of humanity, after all.  But actually spending hours chatting merrily — or furiously — with a friend about what is effectively politics for nerds — well, that’s what people usually call “wasting time”, isn’t it?

It’s not a waste if you do it well.  But I get that there are a lot of incentives pushing against it.

What friendly theory talk has going for it is the very long term — getting to be the future’s equivalent of Confucius or Boethius and their friends, or maybe even the Amoraim — and the very short term, in which it’s fun to hang out with your friends and talk about interesting things and have some sense that you’re getting somewhere.

Example question to explore:

The nitty-gritty of the  “forgiveness” part of “tit-for-tat-with-forgiveness” in iterated games.  There are a lot of slightly different variants of this, I know, which are viable enough to see play.  Algorithms for recovery of cooperation after defection — how do different ones work? Advantages or disadvantages?  Do any of them correspond to known human behaviors or historical/current institutions?  As a practical matter, what kind of heuristics do people use as to whether or how to revive relationships with friends that have grown distant, pitch to leads that have gone cold, collect debts that have gone unpaid for a long time, etc?

 

Player vs. Character: A Two-Level Model of Ethics

Epistemic Status: Confident

This idea is actually due to my husband, Andrew Rettek, but since he doesn’t blog, and I want to be able to refer to it later, I thought I’d write it up here.

In many games, such as Magic: The Gathering, Hearthstone, or Dungeons and Dragons, there’s a two-phase process. First, the player constructs a deck or character from a very large sample space of possibilities.  This is a particular combination of strengths and weaknesses and capabilities for action, which the player thinks can be successful against other decks/characters or at winning in the game universe.

The choice of character often determines the strategies that character can use in the second phase, which is actual gameplay.  In gameplay, the character can only use the affordances that it’s been previously set up with.

This means that there are two separate places where a player needs to get things right: first, in designing a strong character/deck, and second, in executing the optimal strategies for that character/deck during gameplay.

(This is in contrast to games like chess or go, which are single-level; the capacities of black and white are set by the rules of the game, and the only problem is how to execute the optimal strategy. Obviously, even single-level games can already be complex!)

The idea is that human behavior works very much like a two-level game.

The “player” is the whole mind, choosing subconscious strategies.  The “elephant“, not the “rider.”  The player is very influenced by evolutionary pressure; it is built to direct behavior in ways that increase inclusive fitness.  The player directs what we perceive, do, think, and feel.

The player creates what we experience as “personality”, fairly early in life; it notices what strategies and skills work for us and invests in those at the expense of others.  It builds our “character sheet”, so to speak.

Note that even things that seem like “innate” talents, like the savant skills or hyperacute senses sometimes observed in autistic people, can be observed to be tightly linked to feedback loops in early childhood. In other words, savants practice the thing they like and are good at, and gain “superhuman” skill at it.  They “practice” along a faster and more hyperspecialized path than what we think of as a neurotypical “practicing hard,” but it’s still a learning process.  Savant skills are more rigidly fixed and seemingly “automatic” than non-savant skills, but they still change over time — e.g. Stephen Wiltshire, a savant artist who manifested an ability to draw hyper-accurate perspective drawings in early childhood, has changed and adapted his art style as he grew up, and even acquired new savant talents in music.  If even savant talents are subject to learning and incentives/rewards, certainly ordinary strengths, weaknesses, and personality types are likely to be “strategic” or “evolved” in this sense.

The player determines what we find rewarding or unrewarding.  The player determines what we notice and what we overlook; things come to our attention if it suits the player’s strategy, and not otherwise.  The player gives us emotions when it’s strategic to do so.  The player sets up our subconscious evaluations of what is good for us and bad for us, which we experience as “liking” or “disliking.”

The character is what executing the player’s strategies feels like from the inside.  If the player has decided that a task is unimportant, the character will experience “forgetting” to do it.  If the player has decided that alliance with someone will be in our interests, the character will experience “liking” that person.  Sometimes the player will notice and seize opportunities in a very strategic way that feels to the character like “being lucky” or “being in the right place at the right time.”

This is where confusion often sets in. People will often protest “but I did care about that thing, I just forgot” or “but I’m not that Machiavellian, I’m just doing what comes naturally.”  This is true, because when we talk about ourselves and our experiences, we’re speaking “in character”, as our character.  The strategy is not going on at a conscious level. In fact, I don’t believe we (characters) have direct access to the player; we can only infer what it’s doing, based on what patterns of behavior (or thought or emotion or perception) we observe in ourselves and others.

Evolutionary psychology refers to the player’s strategy, not the character’s. (It’s unclear which animals even have characters in the way we do; some animals’ behavior may all be “subconscious”.)  So when someone speaking in an evolutionary-psychology mode says that babies are manipulating their parents to not have more children, for instance, that obviously doesn’t mean that my baby is a cynically manipulative evil genius.  To him, it probably just feels like “I want to nurse at night. I miss Mama.”  It’s perfectly innocent. But of course, this has the effect that I can’t have more children until I wean him, and that’s to his interest (or, at least, it was in the ancestral environment when food was more scarce.)

Szaszian or evolutionary analysis of mental illness is absurd if you think of it as applying to the character — of course nobody wakes up in the morning and decides to have a mental illness. It’s not “strategic” in that sense. (If it were, we wouldn’t call it mental illness, we’d call it feigning.)  But at the player level, it can be fruitful to ask “what strategy could this behavior be serving the person?” or “what experiences could have made this behavior adaptive at one point in time?” or “what incentives are shaping this behavior?”  (And, of course, externally visible “behavior” isn’t the only thing the player produces: thoughts, feelings, and perceptions are also produced by the brain.)

It may make more sense to frame it as “what strategy is your brain executing?” rather than “what strategy are you executing?” since people generally identify as their characters, not their players.

Now, let’s talk morality.

Our intuitions about praise and blame are driven by moral sentiments. We have emotional responses of sympathy and antipathy, towards behavior of which we approve and disapprove. These are driven by the player, which creates incentives and strategic behavior patterns for our characters to play out in everyday life.  The character engages in coalition-building with other characters, forms and breaks alliances with other characters, honors and shames characters according to their behavior, signals to other characters, etc.

When we, speaking as our characters, say “that person is good” or “that person is bad”, we are making one move in an overall strategy that our players have created.  That strategy is the determination of when, in general, we will call things or people “good” or “bad”.

This is precisely what Nietzsche meant by “beyond good and evil.”  Our notions of “good” and “evil” are character-level notions, encoded by our players.

Imagine that somewhere in our brains, the player has drawn two cartoons, marked “hero” and “villain”, that we consult whenever we want to check whether to call another person “good” or “evil.” (That’s an oversimplification, of course, it’s just for illustrative purposes.)  Now, is the choice of cartoons itself good or evil?  Well, the character checks… “Ok, is it more like the hero cartoon or the villain cartoon?”  The answer is “ummmm….type error.”

The player is not like a hero or a villain. It is not like a person at all, in the usual (character-level) sense. Characters have feelings! Players don’t have feelings; they are beings of pure strategy that create feelings.  Characters can have virtues or vices! Players don’t; they create virtues or vices, strategically, when they build the “character sheet” of a character’s skills and motivations.  Characters can be evaluated according to moral standards; players set those moral standards.  Players, compared to us characters, are hyperintelligent Lovecraftian creatures that we cannot relate to socially.  They are beyond good and evil.

However! There is another, very different sense in which players can be evaluated as “moral agents”, even though our moral sentiments don’t apply to them.

We can observe what various game-theoretic strategies do and how they perform.  Some, like “tit for tat”, perform well on the whole.  Tit-for-tat-playing agents cooperate with each other. They can survive pretty well even if there are different kinds of agents in the population; and a population composed entirely of tit-for-tat-ers is stable and well-off.

While we can’t call cellular automata performing game strategies “good guys” or “bad guys” in a sentimental or socially-judgmental way (they’re not people), we can totally make objective claims about which strategies dominate others, or how strategies interact with one another. This is an empirical and theoretical field of science.

And there is a kind of “”morality”” which I almost hesitate to call morality because it isn’t very much like social-sentiment-morality at all, but which is very important, which simply distinguishes the performance of different strategies.  Not “like the hero cartoon” or “like the villain cartoon”, but “win” and “lose.”

At this level you can say “look, objectively, people who set up their tables of values in this way, calling X good and Y evil, are gonna die.”  Or “this strategy is conducting a campaign of unsustainable exploitation, which will work well in the short run, but will flame out when it runs out of resources, and so it’s gonna die.”  Or “this strategy is going to lose to that strategy.”  Or “this strategy is fine in the best-case scenario, but it’s not robust to noise, and if there are any negative shocks to the system, it’s going to result in everybody dying.”

“But what if a losing strategy is good?” Well, if you are in that value system, of course you’ll say it’s good.  Also, you will lose.

Mother Teresa is a saint, in the literal sense: she was canonized by the Roman Catholic Church. Also, she provided poor medical care for the sick and destitute — unsterilized needles, no pain relief, conditions in which tuberculosis could and did spread.  Was she a good person? It depends on your value system, and, obviously, according to some value systems she was.  But it seems that a population that takes Mother Teresa as its ideal (relative to, say, Florence Nightingale) will be a population with more deaths from illness, not fewer, and more pain, not less.  A strategy that says “showing care for the dying is better than promoting health” will lose to one that actually can reward actions that promote health.  (To be fair, for most of human history we didn’t have ways to heal the sick that were clearly better than Mother Teresa’s, and even today we don’t have credit-allocation systems that reliably reward the things that keep people alive and healthy; it would be wrong to dump on Catholicism too much here.)  That’s the “player-level” analysis of the situation.

Some game-theoretic strategies (what Nietzsche would call “tables of values”) are more survival-promoting than others.  That’s the sense in which you can get from “is” to “ought.”  The Golden Rule (Hillel’s, Jesus’s, Confucius’s, etc) is a “law” of game theory, in the sense that it is a universal, abstract fact, which even a Lovecraftian alien intelligence would recognize, that it’s an effective strategy, which is why it keeps being rediscovered around the world.

But you can’t adjudicate between character strategies just by being a character playing your strategy.  For instance, a Democrat usually can’t convert a Republican just by being a Democrat at him. To change a player’s strategy is more like “getting the bodymind to change its fundamental assessments of what is in its best interests.”  Which can happen, and can happen deliberately and with the guidance of the intellect! But not without some…what you might call, wiggling things around.

The way I think the intellect plays into “metaprogramming” the player is indirect; you can infer what the player is doing, do some formal analysis about how that will play out, comprehend (again at the “merely” intellectual level) if there’s an error or something that’s no longer relevant/adaptive, plug that new understanding into some change that the intellect can effect (maybe “let’s try this experiment”), and maybe somewhere down the chain of causality the player’s strategy changes. (Exposure therapy is a simple example, probably much simpler than most: add some experiences of the thing not being dangerous, and the player determines it really isn’t dangerous and stops generating fear emotions.)

You don’t get changes in player strategies just by executing social praise/blame algorithms though; those algorithms are for interacting with other characters.  Metaprogramming is… I want to say “cold” or “nonjudgmental” or “asocial” but none of those words are quite right, because they describe character traits or personalities or mental states and it’s not a character-level thing at all.  It’s a thing Lovecraftian intelligences can do to themselves, in their peculiar tentacled way.

 

Norms of Membership for Voluntary Groups

Epistemic Status: Idea Generation

One feature of the internet that we haven’t fully adapted to yet is that it’s trivial to create voluntary groups for discussion.  It’s as easy as making a mailing list, group chat, Facebook group, Discord server, Slack channel, etc.

What we don’t seem to have is a good practical language for talking about norms on these mini-groups — what kind of moderation do we use, how do we admit and expel members, what kinds of governance structures do we create.

Maybe this is a minor thing to talk about, but I suspect it has broader impact. Over the past few decades, voluntary membership in organizations has declined in the US — we’re less likely to be members of the Elks or of churches or bowling leagues — so lots of people who don’t have any experience in founding or participating in traditional types of voluntary organizations are now finding themselves engaged in governance without even knowing that’s what they’re doing.

When we do this badly, we get “internet drama.”  When we do it really badly, we get harassment campaigns and calls for regulation/moderation at the corporate or even governmental level.  And that makes the news.  It’s not inconceivable that Twitter moderation norms affect international relations, for instance.

It’s a traditional observation about 19th century America that Americans were eager joiners of voluntary groups, and that these groups were practice for democratic participation.  Political wonks today lament the lack of civic participation and loss of trust in our national and democratic institutions. Now, maybe you’ve moved on; maybe you’re a creature of the 21st century and you’re not hoping to restore trust in the institutions of the 20th. But what will be the institutions of the future?  That may well be affected by what formats and frames for group membership people are used to at the small scale.

It’s also relevant for the future of freedom.  It’s starting to be a common claim that “give people absolute ‘free speech’ and the results are awful; therefore we need regulation/governance at the corporate or national level.”  If you’re not satisfied with that solution (as I’m not), you have work to do — there are a lot of questions to unpack like “what kind of ‘freedom’, with what implementational details, is the valuable kind?”, “if small-scale voluntary organizations can handle some of the functions of the state, how exactly will they work?”, “how does one prevent the outcomes that people consider so awful that they want large institutions to step in to govern smaller groups?”

Thinking about, and working on, governance for voluntary organizations (and micro-organizations like online discussion groups) is a laboratory for figuring this stuff out in real time, with fairly low resource investment and risk. That’s why I find this stuff fascinating and wish more people did.

The other place to start, of course, is history, which I’m not very knowledgeable about, but intend to learn a bit.  David Friedman is the scholar I’m familiar with who’s studied historical governance and legal systems with an eye to potential applicability to building voluntary governance systems today; I’m interested in hearing about others. (Commenters?)

In the meantime, I want to start generating a (non-exhaustive) list of types of norms for group membership, to illustrate the diversity of how groups work and what forms “expectations for members” can take.

We found organizations based on formats and norms that we’ve seen before.  It’s useful to have an idea of the range of formats that we might encounter, so we don’t get anchored on the first format that comes to mind.  It’s also good to have a vocabulary so we can have higher-quality disagreements about the purpose & nature of the groups we belong to; often disagreements seem to be about policy details but are really about the overall type of what we want the group to be.

Civic/Public Norms

  • Roughly everybody is welcome to join, and free to do as they like in the space, so long as they obey a fairly minimalist set of ground rules & behavioral expectations that apply to everyone.
  • We expect it to be easy for most people to follow the ground rules; you have to be deviant (really unusually antisocial) to do something egregious enough to get you kicked out or penalized.
  • If you dislike someone’s behavior but it isn’t against the ground rules, you can grumble a bit about it, but you’re expected to tolerate it. You’ll have to admit things like “well, he has a right to do that.”
  • Penalties are expected to be predictable, enforced the same way towards all people, and “impartial” (not based on personal relationships). If penalties are enforced unfairly, you’re not expected to tolerate it — you can question why you’re being penalized, and kick up a public stink, and it’s even praiseworthy to do so.
  • Examples: “rule of law”, public parks and libraries, stores and coffeeshops open to the public, town hall meetings

Guest Norms

  • The host can invite, or not invite, anyone she chooses, based on her preference.  She doesn’t have to justify her preferences to anyone.  Nobody is entitled to an invitation, and it’s very rude to complain about not being invited.
  • Guests can also choose to attend or not attend, based on their preferences, and they don’t have to justify their preferences to anyone either; it’s rude to complain or ask for justification when someone declines an invitation.
  • Personal relationships and subjective feelings, in particular, are totally legitimate reasons to include or exclude someone.
  • The atmosphere within the group is expected to be pleasant for everyone.  If you don’t want to be asked to leave, you shouldn’t do things that will predictably bother people.
  • Hosts are expected to be kind and generous to guests; guests are expected to be kind and generous to the host and each other; the host is responsible for enforcing boundaries.
  • Criticizing other people at the gathering itself is taboo. You’re expected to do your critical/judgmental pruning outside the gathering, by deciding whom you will invite or whether you’ll attend.
  • We don’t expect that everyone will be invited to be a guest at every gathering, or that everyone will attend everything they’re invited to. It can be prestigious to be invited to some gatherings, and embarrassing to be asked to leave or passed over when you expected an invitation, but it’s normal to just not be invited to some things.
  • Examples: private parties, invitation-only events, consent ethics for sex

Kaizen Norms

  • Members of the group are expected to be committed to an ideal of some kind of excellence and to continually strive to reach it.
  • Feedback or critique on people’s performance is continuous, normal, and not considered inherently rude. It’s considered praiseworthy to give high-quality feedback and to accept feedback willingly.
  • Kaizen groups may have very specific norms about the style or format of critique/feedback that’s welcome, and it may well be considered rude to give feedback in the wrong style.
  • Receiving some negative feedback or penalties is normal and not considered a sign of failure or shame.  What is shameful is responding defensively to negative feedback.
  • You can lose membership in the group by getting too much negative feedback (in other words, failing to live up to the minimum standards of the group’s ideal.)  It’s not expected to be easy for most people to meet these standards; they’re challenging by design.  The group isn’t expected to be “for everyone.”
  • The feedback and incentive processes are supposed to correlate tightly to the ideal. It’s acceptable and even praiseworthy to criticize those processes if they reward and punish people for things unrelated to the ideal.
  • Conflict about things unrelated to the ideal isn’t taboo, but it’s somewhat discouraged as “off-topic” or a “distraction.”
  • Examples: competitive/meritocratic school and work environments, sports teams, specialized religious communities (e.g. monasteries, rabbinical schools)

Coalition Norms

  • The degree to which one is “welcome” in the coalition is the degree to which one is loyal, i.e. contributes resources to the coalition.  (Either by committing one’s own resources or by driving others to contribute their resources.   The latter tends to be more efficient, and hence makes you more “welcome.”)
  • Membership is a matter of degree, not a hard-and-fast boundary.  The more solidly loyal a member you are, the more of the coalition’s resources you’re entitled to.  (Yes, this means membership is defined recursively, like PageRank; a toy sketch follows this list.)
  • People can be penalized or expelled for not contributing enough, or for doing things that have the effect of preventing the coalition gaining resources (like making it harder to recruit new members.)
  • Conflict, complaint, and criticism over the growth of the coalition (and whether people are contributing enough, or whether they’re taking more than their fair share) is acceptable and even praiseworthy; criticisms about other things are discouraged, because they make people less willing to contribute resources or pressure others to do so.
  • Membership in the coalition is considered praiseworthy.  Non-membership is considered shameful.
  • Examples: political coalitions, proselytizing religions
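On the PageRank comparison above: here is a toy version of the recursion, entirely my own construction, in which your “welcomeness” mixes your direct contribution with the welcomeness of the members you recruited:

```python
def welcomeness(contributions, recruited, iterations=50, damping=0.85):
    """contributions[i]: resources member i commits directly.
    recruited[i]: indices of members whom i drove to contribute.
    Like PageRank, the score is a fixed point: how 'welcome' you are
    depends on the scores of those you brought in. All numbers here
    are hypothetical."""
    n = len(contributions)
    total = sum(contributions)
    score = [c / total for c in contributions]
    for _ in range(iterations):
        new = [(1 - damping) * contributions[i] / total
               + damping * sum(score[j] for j in recruited[i])
               for i in range(n)]
        norm = sum(new)
        score = [s / norm for s in new]
    return score

# e.g. member 2 contributes little directly but recruited members 0 and 1:
print(welcomeness([5.0, 1.0, 0.5], [[], [], [0, 1]]))
```

In this toy model, recruiting scores higher than direct contribution, which matches the observation above that driving others to contribute tends to make you more “welcome.”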

Tribal Norms

  • Membership in the group is defined by an immutable, unchosen characteristic, like sex or heredity (or, to a lesser extent, geographic location.)  It is difficult to join, leave, or be expelled from the group; you are a member as a matter of fact, regardless of what you want or how you behave.
  • It’s not considered shameful not to be a member of the group; after all, it isn’t up to you.
  • Since expulsion is difficult, behavioral norms for the group are maintained primarily by persuasion/framing, reward, and punishment, so these play a larger role than they do in voluntary groups.  Important norms are framed as commandments or simply how things are.
  • Examples: families, public schools, governments, traditional cultures

Some comparisons-and-contrasts:

Honor and Shame

Kaizen and Guest group norms say that being a member of the group is an honor and comes with high expectations, but that not being a member is normal and not especially shameful.

Civic norms say that being a member of the group is normal and easy to attain, but not being a member is shameful, because it indicates egregiously bad behavior.

Coalition norms say that being a member is an honor and comes with high expectations and that not being a member is shameful.  This means that most people will have something to be ashamed of.

Tribal norms say that being a member is not an honor (though it may be a privilege), and that not being a member is no shame.

Protest

Civic and Kaizen norms say that it’s okay to protest “unfair” treatment by the governing body.  In a Civic context, “fair” means “it’s possible for everyone to stay out of trouble by following the rules” — it’s okay for rules to be arbitrary, but they should be clear and consistent and not so onerous that most people can’t follow them.  In a Kaizen context, “fair” means “corresponding to the ideal” — it’s okay to “not do things by the book”  if that gets you better performance, but it’s not okay if you’re rewarding bad performance and punishing good.

Guest and Coalition norms say that it’s not okay to protest “unfair” treatment; if you get kicked out, arguing can’t help you get back in.  Offering the decisionmakers something they value might work, though.

In Tribal norms, protest and argument can be either licit or taboo; it depends on the specific tribe and its norms.

Examples of debates that are about what type of group you want to be in:

Asking for “inclusiveness” is usually a bid to make the group more Civic or Coalitional.

Making accusations of “favoritism” is usually a bid to make the group more Civic or Kaizen.

Complaining about “problem members” is usually a bid to make the group more Coalitional, Guest, or Kaizen.

Not A Taxonomy

I don’t think these are the definitive types of groups. The idea is to illustrate how you can have different starting assumptions about what kind of thing the group is for. (Is it for achieving a noble goal? For providing a public forum or service open to all? For meeting the needs of its members?)

I suspect these kinds of aims are prior to mechanisms (things like “what is a bannable offense?” or “what incentive systems do we set up?”).  Before diving into the technical stuff about the rules of the game, you want to ask what kinds of outcomes or group dynamics you want the “game structure” to achieve.

 

Playing Politics

Epistemic Status: Guesses Based on Personal Experience

Lately I’ve been going through a family of learning experiences in the world of how to get things done cooperatively.  It’s hard for me.  Even very basic things in this area have been stumping me, overwhelming me, leaving me way more tired and drained than I’d expect. My productivity has gone to hell and — worse — I didn’t even notice for a while.  This is hard stuff, and rarely written about by the people for whom it’s hard, so my hope is that processing in public helps someone. I generally think that data-sharing is good and helpful.

Collective Deliberation Isn’t Working For Me

At a conference, I was in a room full of people having a really good discussion. I wanted to get people together to have a follow-up discussion later — nothing elaborate, just a room with whiteboards and snacks and maybe moving towards some action items.

What I did:

  • Passed around a sheet for emails to sign up
  • Sent out an email proposing the parameters of the event
  • Waited for people to propose dates that worked for them.

Radio silence.

Somebody else suggested a poll where people could put down their preferred times and dates.  Out of thirteen people, five signed up.  Nobody volunteered “ok, we’re doing it on this date then,” so I did.  I reserved a conference room at my office and bought a bunch of snacks.

The front door was locked on the weekend and my key card didn’t work even though it was supposed to, so I had to switch locations at the last minute. It wouldn’t have mattered anyhow, because one person showed up on time, and one other person several hours late.

Conclusion: it is harder than I thought to get ten people to show up in a room and talk to each other.

And I probably shouldn’t have expected an event to coalesce naturally from the mailing list.  I have a strong “egalitarian” instinct that if I’m trying to do something with a group and in some sense for the benefit of everyone in the group, then I shouldn’t be too “bossy” in terms of unilaterally declaring what we’re all going to do.  But if I leave it up to the group to discuss, it seems like they generally…don’t.

I’m also on a policy committee for a community organization, and it’s been a whole lot of heartache because I want to change some things about our policies and internal processes, and the process of trying to communicate that has resulted in a lot of hurt feelings, mine and other people’s.

The first thing I did was write up a document explaining why I thought the existing policies were harmful, and share it with the mailing list.  This resulted in DRAMA because people heard it as a personal accusation.  (I never meant to imply that my fellow committee members were bad people, but I felt strongly about the policy changes and my writing tone may have come out angrier than I intended.)

In retrospect, I should never have led with complaints — I should have started by proposing solutions.  My intention had been to raise the issues I cared about while minimizing bossiness — this is an organization for the benefit of a larger community, and I’m only one member of a committee, so I thought it would leave more degrees of freedom open to the group to say “here’s why the existing policies have problems, what do you think we should do?” rather than “here’s how I’d suggest improving the existing policies.”  I thought this was the considerate way to communicate.  But from the committee’s perspective, it must have sounded like “You’re doing it wrong. Here’s a bunch more work you have to do to fix it. You’re welcome!”  They were actually much more receptive once I wrote up a revised set of policies that I’d be happier with.  Once again, being “unbossy” and hoping that collaborative discussion would resolve the issue was a total failure, because people had less bandwidth to engage in discussion than I’d anticipated.

Private Discussions Are A Flawed Solution

I’ve noticed that in a lot of deliberative bodies or organizations, the real decision-making doesn’t happen in groups.  (Meanwhile Madison is grappling with the fact that/ Not every issue can be settled in committee.)  The people who have “real power” meet in private and hash things out off the record.  Nobody really shares their full thoughts on the internet or on an email list.  It’s not necessarily “secrecy”, but it’s secrecy-adjacent.

I know this is how things are frequently done, but it bothers me.  When an issue is officially the jurisdiction of a committee, everyone on the committee is equally entitled to be part of the discussion, and entitled to know what’s going on; having secret side conversations creates a hierarchy between those “in the know” and those who aren’t.  (No-one else was in the room where it happened/ the room where it happened/ the room where it happened.)  Still more, when your project is supposed to be for the sake of, and with the participation of, a broader community, it seems like fairness demands being transparent with that community.

Maybe this is just the geek-kid issue, or what people today tend to call the geek social fallacies.  I’m deeply uncomfortable when I see what looks like an elite subgroup, a group of “cool kids” or “VIPs” or whatever, talking behind closed doors because hoi polloi just wouldn’t understand. I mean, yes, sometimes people wouldn’t understand!  I get it. There do exist people who will be offended by my honest opinion (god knows), or who literally aren’t bright enough or knowledgeable enough to contribute to a discussion.  I understand why it’s easier to talk in private with people who are already more-or-less on the same page.  But still…there’s a pattern that gives me the willies. It’s “elites get to know what’s going on, randos are kept out of the loop,” and even when somebody says that I qualify as an elite, not a rando, it still bothers me, because I’m much more comfortable having rights than being favored.

This is part of what gives me a bad feeling about the discourse around “demon threads” (that is: big, addictive internet debates) and in praise of “taking things private”, where tensions will be easier to defuse.  There are real costs to acrimonious debate, in time and emotional energy, and I appreciate that people are trying to find ways to reduce those costs.  But I feel nervous about anything that looks like it’s trying to sweep real conflicts under the rug.  It’s like “don’t fight in front of the children” — except that in this case the members of the public are being placed in the role of “the children,” whether or not we want to be.

I occasionally find myself in situations where I feel I’m being asked to take a sort of Straussian stance — if you want to get important things done, you can’t be totally transparent about what you’re doing, because the general public will stop you.  I’m not sure these people are wrong.  But I really hope they are.  I have a bad feeling about maintaining information asymmetries as a general policy.  I have a dangerous temperamental temptation towards concealment — it’s just “minor” stuff like trying to hide my failures, but in the long run, that’s neither ethical nor practical — so I’ve developed a counter-tendency towards transparency, as a sort of partial safeguard.  If I tell people what I’m up to, early and often, I can’t slip down the road of dishonesty.

Therapeutic Language: Another Flawed Solution

Peace is good, all things being equal. Fighting hurts.  And many fights are unnecessary, born of misunderstanding more than actual disagreement. I’ve seen this a lot firsthand.  It’s much more likely that someone literally doesn’t comprehend your idea than that they oppose it.

And one of the most common types of misunderstanding is when people falsely assume you are damning them as a person.  This is something I learned from Malcolm Ocean, who gave me the first really clear explanation I ever got as to what people are doing when they use NVC (Nonviolent Communication) or Circling language or other types of very careful and mannered speech to avoid the perception of blame or judgment.  Surely, I asked him, sometimes you do need to judge?  To distinguish between good and bad behavior?  To enforce norms?

After a while, we came up with this analogy:

There’s a difference between saying “You’re fired” and “You’re fired, and also fuck you.”

In the course of life, one absolutely does have to say things like “you’re fired.”  Or “you can’t behave like that in this space”, “this work does not merit publication”, or “I don’t want to go on a date with you.”  In other words, drawing boundaries is necessary for life.  But drawing boundaries doesn’t always have to involve damning someone, as though sending them to Hell, utterly condemning their essential being.  (What Madeleine L’Engle would call X-ing.)  One can fire a person from a job, or reject their manuscript, or turn them down romantically, without saying “it is bad that you exist and you should hate yourself.”  One can even, I believe, convict someone of a crime, or kill them in self-defense, without damning them, while wishing that they had not done the thing that forced you to draw an extremely severe boundary.

Boundaries are necessary; self-defense is necessary; damning people might not be necessary, and I’m inclined to believe it isn’t.

And yet, people do damn each other, very frequently; and even more frequently, as a result of these bad experiences, they assume they’re being damned when they’re merely being criticized.  “You did a thing with negative consequences” gets read as “your essence is stained, you are a Terrible Person, it’s time to hate yourself.”  So, as an imperfect attempt to forestall these misunderstandings, people have developed these extremely artificial locutions that, yes, make you sound like a therapist, and, yes, aren’t as natural as just speaking in plain language.  But the hope is that they create enough distance to allow people to avoid immediately jumping to the conclusion that you’re accusing them of being Generally Terrible and Worthy of Eternal Hellfire.

Of course, the human mind being devious and wily at figuring out how to make us miserable, it’s possible to be easily set off by therapeutic language itself!  It turns out I have such a sensitivity.  “You’re insinuating that I’m having bad feelings — this means you’re saying that I’m Weak and Can’t Hack It and need Special Treatment — which means you’re calling me Generally Terrible!  Screw you!”   (This isn’t completely irrational; it is the appropriate norm for situations like work or school, where hiding physical and mental pain is expected and where people are penalized for failing to do so.)

Now, of course, I do have bad feelings sometimes, being a human.  And, a lot of the time, the person using therapeutic language is trying to deal productively with that fact of the matter, rather than condemning me for it — they’ve moved on to Step 2, What Do We Do Now, while I’m still on Step 1, Is Sarah Terrible Y/N?

But you really can’t have good conversations while anyone’s still on Step 1.  If you haven’t yet resolved “Do You Think I’m Terrible?” with a resounding “No,” then every other conversation that’s nominally about some topic will actually be about the vital issue of Do You Think I’m Terrible?

And, because the human mind is devious, Step 1 doesn’t stay resolved; you have to keep reaffirming it, because people will forget.  You have to put what seems like a colossal amount of unsubtle effort into saying “I like you and I think you’re good” in order to keep discussions from becoming about “I’m good and not terrible! See, I’ll prove it!”

I have not mastered this art, or even close, but I basically agree with the need for it.

I have totally observed people being blunt and irreverent without hurting others’ feelings, while still having very productive discussions — but I think what’s going on is not that these people don’t validate each other, but that they validate each other very well, by means other than therapeutic language.  Some people can get away with speaking styles that are very “offensive” by conventional standards, but that’s because they also show deep affection and regard for the people they’re talking to.

I think there are people who are more robust than others at independently maintaining a sense that they’re Okay and Good and Liked and Valid (and that’s great!) but I don’t think this in any way disproves the need for validation, any more than the existence of plants proves that organisms don’t need chemical energy.

Nobody (Exactly) Agrees With You

I’ve been struggling a bunch with the fact that people seem to disagree fractally and at every turn.  It’s really, really hard to get exact alignment on worldviews and desires, to the point that I’m beginning to doubt it’s possible.  I see someone who seems to see part of the world the same way I do, and I go “can we talk? can we be buds? can we be twinsies? are we on the same team?” and then I realize “oh, no, outside of this tiny little area, they…really don’t agree with me at all.  Dammit.”

It would be nice to have someone to talk to who was basically the same person as you, right?  Someone you could just melt into, the way all of humanity melted into a single sea of neon-orange thought-fluid in that anime (Neon Genesis Evangelion).

But, in my experience, that just keeps not happening.  Friendship and mutual respect, sure, I’m very fortunate to have lots of that; but merging doesn’t happen.  There’s always me, or the other person, saying “no, not exactly” instead of “yes, and”.

Is it just that I’m unusual?  Surely people who build movements get people to agree with each other?

The thing is, I’m starting to suspect they don’t.  I recently went to TEDWomen, and saw a bunch of talks about activism and organizing, including by such luminaries as Dolores Huerta and Marian Wright Edelman.  And here are some takeaways I got from them:

  • Activists view the main goal as fighting apathy, that is, getting people to participate, literally activating people.  Getting people to show up to vote or show up to a protest or to raise issues in conversations.
  • Everybody in a coalition supports everybody else. It’s very “all for one and one for all.” They explicitly talk about how you shouldn’t allow anyone to frame things as “the environment” vs “women’s issues” vs “labor issues” vs “immigration” — everyone’s encouraged to push for everyone’s agenda together, for every sub-group in the progressive coalition.
  • Activists endorse being moved more by individual stories and art and emotional appeals than by facts and figures.  They don’t just talk about how “emotional appeals work better on the public” but they talk about how emotional appeals and personal connections work on themselves.

If you think of everybody’s beliefs as a forest of trees, where consequences branch out from premises, then “trying to get agreement” is building trees as big as they can get and trying to hash out what’s going on when two people’s trees differ. What seems to be going on in an activist frame is not building out the trees very big at all, only getting agreement on rather basic things like “children shouldn’t live in poverty” and trying to move straight to voting and fundraising and other object-level actions, without really hashing out in much detail “ok, what ways of avoiding child poverty are effective and/or morally acceptable?”  They recognize that getting people to participate at all is difficult (in my shoes, they would have invested a lot more effort in getting people to show up to the event), and they don’t seem to even try to get people to agree in a deep sense, to agree on world-models and general principles and moral foundations.
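To make the tree metaphor concrete, here’s a toy sketch in Python.  Everything in it is invented for illustration — the names BeliefNode and agreement_depth, and the example beliefs, are mine, not anything from the talks.  It just shows how two people can match at the root of a belief tree, slogan-level, while diverging one branch down:

    # Toy model of the "forest of belief trees" metaphor.
    # All names here are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class BeliefNode:
        claim: str
        consequences: list["BeliefNode"] = field(default_factory=list)

    def agreement_depth(a: BeliefNode, b: BeliefNode) -> int:
        """Count how many levels down two belief trees stay identical."""
        if a.claim != b.claim:
            return 0
        child_depths = [agreement_depth(x, y)
                        for x, y in zip(a.consequences, b.consequences)]
        return 1 + (min(child_depths) if child_depths else 0)

    # Two people share the root premise but branch apart immediately.
    alice = BeliefNode("children shouldn't live in poverty",
                       [BeliefNode("so we should expand cash transfers")])
    bob = BeliefNode("children shouldn't live in poverty",
                     [BeliefNode("so we should deregulate housing")])

    print(agreement_depth(alice, bob))  # 1: they agree on the slogan only

In this frame, the activist strategy is to check only that the roots match and then move straight to action; the deep alignment I keep looking for would be a large agreement_depth across whole worldviews, which seems to be vanishingly rare.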

Just because everyone is shouting the same slogan doesn’t mean they really agree with each other.  They agree on the slogan.  It might mean different things to different people.  That’s not necessarily a bad thing, but it’s worth being aware that it isn’t true unity.

The Greek for “with one accord” is ὁμοθυμαδόν, which appears frequently in the New Testament; it means literally “same passion” or “same spirit” (thymos, the seat of courage and emotion that lives in the heart).  “Unanimity” is nearly an exact translation into Latin: “one spirit.”  You can have large groups of people who feel the same, who are filled with the same passion.  It is much harder for all those people to have the same belief structure, to stay on the same page on the nitty-gritty details.  Just getting groups of people to “weak unanimity,” namely, active participation, good will, and agreement on ideal goals, is a challenging full-time job by itself — and it doesn’t even touch getting worldview alignment.

The Cost of Complaint

One weird and maybe trivial thing that’s been nagging at me is trying to get a handle on the underlying worldview expressed by the Incredibles movies.  Yeah, it’s pop culture, but there’s clearly an attempt to communicate a moral, and it’s a weird one.

Sure, there’s the inspiring, defiant pro-superhero note of “people shouldn’t be pressured to hide their excellence”, which often gets labeled Randian (but could just as easily be Nietzschean or Harrison Bergeron-esque).

But it gets weird when you look at the villains.  The villains of both movies are genius technologists.  Syndrome, the villain of the first movie, is a bitter, pimpled male nerd, resentful of superheroes’ elevated status, who wants to provide technology to give everyone superpowers.  Evelyn Deavor, the villain of the second movie, is a bitter, urbane, worldly feminist, a technologist who dislikes the way technology has “dumbed down” its users, resentful of the public’s passive reliance on screens and superheroes.  For plot reasons, of course, both supervillains pull dangerous stunts that put the public at risk, and need to be stopped by the superheroes.  But their motivation, weirdly enough, is actually to empower humanity.  Syndrome is, effectively, a transhumanist, while Evelyn is an “ethical techie” type reminiscent of the people at the Center for Humane Technology.  Their obsession is using their talents and hard work to make all people more self-reliant and capable of greater things — a mission that would actually sit well with Rand or Nietzsche, and, outside the world of the films, could easily work as a heroic cause.

What’s wrong with the villains, in the world of The Incredibles, is that they’re grouchy.  They’re social critics. They complain.

Notice that, before we know she’s a villain, Evelyn tries to get Mrs. Incredible to commiserate about sexism; the heroine doesn’t take the bait.  Before his villainous reveal, Syndrome is a whiny kid who wants to be Mr. Incredible’s sidekick and complains about not getting to tag along.  And the initial controversy that drove superheroes underground was a suicidal man who sued Mr. Incredible for saving his life.  The common thread among the antagonists is unhappiness.  (And the misfit, gender-ambiguous, Tumblresque minor superheroes in Incredibles 2 are depicted as not exactly antagonists but vulnerable to being coopted by the villainous Evelyn because of their unhappiness.)

Also, notice that Brad Bird is taking a very firm stance in favor of optimism and against gloom, in the Incredibles movies and others; his movies overtly defend his creative choice to keep things positive and brightly colored in a world where critical acclaim usually comes in shades of gray. (The antagonist in Ratatouille, not accidentally, is a restaurant critic.)  I think it’s really that simple: Brad Bird likes unity and positivity, and doesn’t like complaining.  Critics like the New Yorker’s Richard Brody are right to see a threat in the Incredibles movies — their real enemy is criticism.

(If you look at Brad Bird’s actual words, he isn’t any kind of a libertarian or Randian, and says so; he’s a centrist, he’s big on finding common ground, staying positive, focusing on unity, and so on.)

It’s almost impossible to talk about the world intelligently while refraining from any complaint.  Try finding a blog to read that never criticizes society, from any direction.  Where you find interesting and articulate people, you’ll find people who express dissatisfaction with things as they are.  There’s no principled way to say “hey I think everyone’s pretty much right,” because people don’t remotely agree with each other if you ask about any details at all.

And yet, people (like Bird, but also like me, and like many) get heartsick when we’re exposed to too much complaint or disagreement.  Moods are contagious, and criticism is very often depressing, for all we try to tell ourselves that it’s merely an intellectual awareness.  Sometimes I feel like “for god’s sake, World, for once could you give me a social context where literally nobody expresses dislike or disapproval about anything?  Could we have a Happy Zone please?”

But I’m genuinely not sure if that’s possible.  It may be a feature of language or logic itself that it’s hard to talk at all if you restrict yourself firmly to avoiding critical speech.  I certainly would have a hard time sticking strictly to Happy Zone rules.

I don’t have solutions here.  I’m just trying to figure things out.  It ought to be possible, I think, to deliberate and collaborate with people, allowing “the group” to decide, rather than just deciding what I want individually and letting people collaborate with me to the extent that it sounds good to them.  I know how to be an individualist; I’m trying to learn how to also do the collective thing, “voice” rather than “exit”.  But I’m just stumped by the fact that people want different things, and think different things, and actual, far-reaching unity doesn’t seem to exist.