In Defense of Individualist Culture

Epistemic Status: Pretty much serious and endorsed.

College-educated Western adults in the contemporary world mostly live in what I’d call individualist environments.

The salient feature of an individualist environment is that nobody directly tries to make you do anything.

If you don’t want to go to class in college, nobody will nag you or yell at you to do so. You might fail the class, but this is implemented through a letter you get in the mail or on a registrar’s website.  It’s not a punishment, it’s just an impersonal consequence.  You can even decide that you’re okay with that consequence.

If you want to walk out of a talk in a conference designed for college-educated adults, you can do so. You will never need to ask permission to go to the bathroom. If you miss out on the lecture, well, that’s your loss.

If you slack off at work, in a typical office-job environment, you don’t get berated. And you don’t have people watching you constantly to see if you’re working. You can get bad performance reviews, you can get fired, but the actual bad news will usually be presented politely.  In the most autonomous workplaces, you can have a lot of control over when and how you work, and you’ll be judged by the results.

If you have a character flaw, or a behavior that bothers people, your friends might point it out to you respectfully, but if you don’t want to change, they won’t nag, cajole, or bully you about it. They’ll just either learn to accept you, or avoid you. There are extremely popular advice columns that try to teach this aspect of individualist culture: you can’t change anyone who doesn’t want to change, so once you’ve said your piece and they don’t listen, you can only choose to accept them or withdraw association.

The basic underlying assumption of an individualist environment or culture is that people do, in practice, make their own decisions. People believe that you basically can’t make people change their behavior (or, that techniques for making people change their behavior are coercive and thus unacceptable.)  In this model, you can judge people on the basis of their decisions — after all, those were choices they made — and you can decide they make lousy friends, employees, or students.  But you can’t, or shouldn’t, cause them to be different, beyond a polite word of advice here and there.

There are downsides to these individualist cultures or environments.  It’s easy to wind up jobless or friendless, and you don’t get a lot of help getting out of bad situations that you’re presumed to have brought upon yourself. If you have counterproductive habits, nobody will guide or train you into fixing them.

Captain Awkward’s advice column is least sympathetic to people who are burdens on others — the depressive boyfriend who needs constant emotional support and can’t get a job, the lonely single or heartbroken ex who just doesn’t appeal to his innamorata and wants a way to get the girl.  His suffering may be real, and she’ll acknowledge that, but she’ll insist firmly that his problems are not others’ job to fix.  If people don’t like you — tough! They have the right to leave.

People don’t wholly “make their own decisions”.  We are, to some degree, malleable, by culture and social context. The behaviorist or sociological view of the world would say that individualist cultures are gravely deficient because they don’t put any attention into setting up healthy defaults in environment or culture.  If you don’t have rules or expectations or traditions about food, or a health-optimized cafeteria, you “can” choose whatever you want, but in practice a lot of people will default to junk.  If you don’t have much in the way of enforcement of social expectations, in practice a lot of people will default to isolation or antisocial behavior. If you don’t craft an environment or uphold a culture that rewards diligence, in practice a lot of people will default to laziness.  “Leaving people alone”, says this argument, leaves them in a pretty bad place.  It may not even be best described as “leaving people alone” — it might be more like “ripping out the protections and traditions they started out with.”

Lou Keep, I think, is a pretty good exponent of this view, and summarizer of the classic writers who held it. David Chapman has praise for the “sane, optimistic, decent” societies that are living in a “choiceless mode” of tradition, where people are defined by their social role rather than individual choices.  Duncan Sabien is currently trying to create a (voluntary) intentional community designed around giving up autonomy in order to be trained/social-pressured into self-improvement and group cohesion.  There are people who actively want to be given external structure as an aid to self-mastery, and I think their desires should be taken seriously, if not necessarily at face value.

I see a lot of writers these days raising problems with modern individualist culture, and it may be an especially timely topic. The Internet is a novel superstimulus, and it changes more rapidly, and affords people more options, than ever before.  We need to think about the actual consequences of a world where many people are in practice being left alone to do what they want, and clearly not all the consequences are positive.

But I do want to suggest some considerations in favor of individualist culture — that often-derided “atomized modern world” that most of us live in.

We Aren’t Clay

It’s a common truism that we’re all products of our cultural environment. But I don’t think people have really put together the consequences of the research showing that it’s not that easy to change people through environmental cues.

  • Behavior is very heritable. Personality, intelligence, mental illness, and social attitudes are all well established as being quite heritable.  The list of the top ten most replicated findings in behavioral genetics starts with “all psychological traits show significant and substantial genetic influence”, which Eric Turkheimer has called the “First Law of behavioral genetics.”  A significant proportion of behavior is also explained by “nonshared environment”, which means it isn’t genetic and isn’t a function of the family you were raised in; it could include lots of things, from peers to experimental error to individual choice.
  • Brainwashing doesn’t work. Cult attrition rates are high, and “brainwashing” programs of POWs by the Chinese after the Korean War didn’t result in many defections.
  • There was a huge boom in the 1990’s and 2000’s in “priming” studies — cognitive-bias studies that showed that seemingly minor changes in environment affected people’s behavior.  A lot of these findings didn’t replicate. People don’t actually walk slower when primed with words about old people. People don’t actually make different moral judgments when primed with words or videos of cleanliness or disgusting bathrooms.  Being primed with images of money doesn’t make people more pro-capitalist.  Girls don’t do worse on math tests when primed with negative stereotypes. Daniel Kahneman himself, who publicized many of these priming studies in Thinking, Fast and Slow, wrote an open letter to priming researchers warning that they’d have to start replicating their findings or lose credibility.
  • Ego depletion failed to replicate as well; using willpower doesn’t make you too “tired” to use willpower later.
  • The Asch Conformity Experiment was nowhere near as extreme as casual readers generally think: the majority of people didn’t change their answers to wrong ones to conform with the crowd, only 5% of people always conformed, and 25% of people never conformed.
  • The Sapir-Whorf Hypothesis has generally been found to be false by modern linguists: the language one speaks does not determine one’s cognition. For instance, people who speak a language that uses a single word for “green” and “blue” can still visually distinguish the colors green and blue.

Scott Alexander said much of this before, in Devoodooifying Psychology.  It’s been popular for many years to try to demonstrate that social pressure or subliminal cues can make people do pretty much anything.  This seems to be mostly wrong.  The conclusion you might draw from the replication crisis along with the evidence from behavioral genetics is “People aren’t that easily malleable; instead, they behave according to their long-term underlying dispositions, which are heavily influenced by inheritance.”  People may respond to incentives and pressures (the Milgram experiment replicated, for instance), but not to trivial external pressures, and they can actually be quite resistant to pressure to wholly change their lives and values (becoming a cult member or a Communist.)

Those who study culture think that we’re all profoundly shaped by culture, and to some extent that may be true. But not as much or as easily as social scientists think.  The idea of mankind as arbitrarily malleable is an appealing one to marketers, governments, therapists, or anyone who hopes that it’s easy to shift people’s behavior.  But this doesn’t seem to be true.  It might be worth rehabilitating the notion that people pretty much do what they’re going to do.  We’re not just swaying in the breeze, waiting for a chance external influence to shift us. We’re a little more robust than that.

People Do Exist, Pretty Much

People try to complicate the notion of “person” — what is a person, really? Do individuals even exist?  I would argue that a lot of this is not as true as it sounds.

A lot of theorists suggest that people have internal psychological parts (Plato, Freud, Minsky, Ainslie) or are part of larger social wholes (Hegel, Heidegger, lots and lots of people I haven’t read).  But these, while suggestive, are metaphors and hypotheses. The basic, boring fact, usually too obvious to state, is that most of your behavior is proximately caused by your brain (except for reflexes, which are controlled by your spinal cord.)  Your behavior is mostly due to stuff inside your body; other people’s behavior is mostly due to stuff inside their bodies, not yours.  You do, in fact, have much more control over your own behavior than over others’.

“Person” is, in fact, a natural category; we see people walking around and we give them names and we have no trouble telling one person apart from another.

When Kevin Simler talks about “personhood” being socially constructed, he means a role, like “lady” or “gentleman”: the default assumptions that are made about people in a given context. This is a social phenomenon — of course it is, by design!  He’s not literally arguing that there is no such entity as Kevin Simler.

I’ve seen Buddhist arguments that there is no self, only passing mental states.  Derek Parfit has also argued that personal identity doesn’t exist.  I think that if you weaken the criterion of identity to statistical similarity, you can easily say that personal identity pretty much exists.  People pretty much resemble themselves much more than they resemble others. The evidence for the stability of personality across the lifespan suggests that people resemble themselves quite a bit, in fact — different timeslices of your life are not wholly unrelated.

Self-other boundaries can get weird in certain mental conditions: psychotics often believe that someone else is implanting thoughts inside their heads, people with DID have multiple personalities, and some kinds of autism involve a lot of suggestibility, imitation, and confusion about what it means to address another person.  So it’s empirically true that the sense of identity can get confused.

But that doesn’t mean that personal identity doesn’t usually work in the “normal” way, or that the normal way is an arbitrary convention. It makes sense to distinguish Alice from Bob by pointing to Alice’s body and Bob’s body.  It’s a distinction that has a lot of practical use.

If people do pretty much exist and have lasting personal characteristics, and are not all that malleable by small social or environmental influences, then modeling people as individual agents who want things isn’t all that unreasonable, even if it’s possible for people to have inconsistent preferences or be swayed by social pressure.

And cultural practices which acknowledge the reality that people exist — for example, giving people more responsibility for their own lives than they have over other people’s lives — therefore tend to be more realistic and attainable.


How Ya Gonna Keep Em Down On The Farm

Traditional cultures are hard to keep, in a modern world.  To be fair, pro-traditionalists generally know this.  But it’s worth pointing out that ignorance is inherently fragile.  As Lou Keep points out, beliefs that magic can make people immune to bullets can be beneficial, as they motivate people to pull together and fight bravely, and thus win more wars. But if people find out the magic doesn’t work, all that benefit gets lost.

Is it then worth protecting gri-gri believers from the truth?  Or protecting religious believers from hearing about atheism?  Really? 

The choiceless mode depends on not being seriously aware that there are options outside the traditional one.  Maybe you’ve heard of other religions, but they’re not live options for you. Your thoughts come from inside the tradition.

Once you’re aware that you can pick your favorite way of life, you’re a modern. Sorry. You’ve got options now.

Which means that you can’t possibly go back to a premodern mindset unless you are brutally repressive about information about the outside world, and usually not even then.  Thankfully, people still get out.

Whatever may be worth preserving or recreating about traditional cultures, it’s going to have to be aspects that don’t need to be maintained by forcible ignorance.  Otherwise it’ll have a horrible human cost and be ineffective.

Independence is Useful in a Chaotic World

Right now, anybody trying to build a communitarian alternative to modern life is in an underdog position.  If you take the Murray/Putnam thesis seriously — that Americans have less social cohesion now than they did in the mid-20th century, and that this has had various harms — then that’s the landscape we have to work with.

Now, that doesn’t mean that communitarian organizations aren’t worth building. I participate in a lot of them myself (group houses, alloparenting, community events, mutual aid, planning a homeschooling center and a baugruppe).  Some Christians are enthusiastic about a very different flavor of community participation and counterculture-building called the Benedict Option, and I’m hoping that will work out well for them.

But, going into such projects, you need to plan for the typical failure modes, and the first one is that people will flake a lot.  You’re dealing with moderns! They have options, and quitting is an option.

The first antidote to flaking that most people think of — building people up into a frenzy of unanimous enthusiasm so that it doesn’t occur to them to quit — will probably result in short-lived and harmful projects.

Techniques designed to enhance group cohesion at the expense of rational deliberation — call-and-response, internal jargon and rituals, cults of personality, suppression of dissent  — will feel satisfying to many who feel the call of the premodern, but aren’t actually that effective at retaining people in the long term.  Remember, brainwashing isn’t that strong.

And we live in a complicated, unstable world.  When things break, as they will, you’d like the people in your project to avoid breaking.  That points in the direction of valuing independence. If people need a leader’s charisma to function, what are they going to do if something happens to the leader?

Rewarding Those Who Can Win Big

A traditionalist or authoritarian culture can help people by guarding against some kinds of failure (families and churches can provide a social safety net, rules and traditions can keep people from making mistakes that ruin their lives), but it also constrains the upside, preventing people from creating innovations that are better than anything within the culture.

An individualist culture can let a lot of people fall through the cracks, but it rewards people who thrive on autonomy. For every abandoned and desolate small town with shrinking economic opportunity, there were people who left that small town for the big city, people whose lives are much better for leaving.  And for every seemingly quaint religious tradition, there are horrible abuse scandals under the surface.  The freedom to get out is extremely important to those who aren’t well-served by a traditional society.

It’s not that everything’s fine in modernity. If people are getting hurt by the decline of traditional communities — and they are — then there’s a problem, and maybe that problem can be ameliorated.

What I’m saying is that there’s a certain kind of justice that says “at the very least, give the innocent and the able a chance to win or escape; don’t trade their well-being for that of people who can’t cope well with independence.”  If you can’t end child abuse, at least let minors run away from home. If you can’t give everybody a great education, at least give talented broke kids scholarships.  Don’t put a ceiling on anybody’s success.

Immigrants and kids who leave home by necessity (a lot of whom are LGBT and/or abused) seem to be rather overrepresented among people who make great creative contributions.  “Leaving home to seek your freedom and fortune” is kind of the quintessential story of modernity.  We teach our children songs about it.  Immigration and migration are where a lot of the global growth in wealth comes from.  It was my parents’ story — an immigrant who came to America and a small-town girl who moved to the city.  It’s also inherently a pattern that disrupts traditions and leaves small towns with shrinking populations and failing economies.

Modern, individualist cultures don’t have a floor — but they don’t have a ceiling either. And there are reasons for preferring not to allow ceilings. There’s the justice aspect I alluded to before — what is “goodness” but the ability to do valuable things, to flourish as a human? And if some people are able to do really well for themselves, isn’t limiting them in effect punishing the best people?

Now, this argument isn’t an exact fit for real life.  It’s certainly not the case that everything about modern society rewards “good guys” and punishes “bad guys”.

But it works as a formal statement. If the problem with choice is that some people make bad choices when not restricted by rules, then the problem with restricting choice is that some people can make better choices than those prescribed by the rules. The situations are symmetrical, except that in the free-choice scenario, the people who make bad choices lose, and in the restricted scenario, the people who make good choices lose.  Which one seems more fair?

There’s also the fact that in the very long run, only existence proofs matter.  Does humanity survive? Do we spread to the stars?  These questions are really about “do at least some humans survive?”, “do at least some humans develop such-and-such technology?”, etc.  That means allowing enough diversity or escape valves or freedom so that somebody can accomplish the goal.  You care a lot about not restricting ceilings.  Sure, most entrepreneurs aren’t going to be Elon Musk or anywhere close, but if the question is “does anybody get to survive/go to Mars/etc”, then what you care about is whether at least one person makes the relevant innovation work.  Playing to “keep the game going”, to make sure we actually have descendants in the far future, inherently means prioritizing best-case wins over average-case wins.


I’m not arguing that it’s never a good idea to “make people do things.”  But I am arguing that there are reasons to be hesitant about it.

It’s hard to make people do what you want; you don’t actually have that much influence in the long term; people in their healthy state generally are correctly aware that they exist as distinct persons; surrendering judgment or censoring information is pretty fragile and unsustainable; and restricting people’s options cuts off the possibility of letting people seek or create especially good new things.

There are practical reasons why “leave people alone” norms became popular, despite the fact that humans are social animals and few of us are truly loners by temperament.

I think individualist cultures are too rarely explicitly defended, except with ideological buzzwords that don’t appeal to most people. I think that a lot of pejoratives get thrown around against individualism, and I’ve spent a lot of time getting spooked by the negative language and not actually investigating whether there are counterarguments.  And I think counterarguments do actually exist, and discussion should include them.




Regulatory Arbitrage for Medical Research: What I Know So Far

Epistemic status: pretty ignorant. I’m sharing now because I believe in transparency.

I’ve been interested in the potential of regulatory arbitrage (that is, relocating to less regulated polities) for medical research for a while. Getting drugs or devices FDA-approved is expensive and extremely slow.  What if you could speed it up by going abroad to do your research?

I talked to some people who work in the field, and so far this is my distillation of what I got out of those conversations.  It’s a very rough draft and I expect to learn more.

Q: Why don’t pharma companies already run trials in developing countries?

A: They do! A third of clinical trials run by US-based pharma companies are outside the US, and that number is rapidly growing — a more than 2000% increase over the past two decades. Labor costs in India, China, and Russia are much lower, and it’s easier to recruit participants in countries where a clinical trial may be the only chance people have to get access to the latest treatments.

But in order to sell to American markets, those overseas trials still have to be conducted to FDA standards (with correspondingly onerous reporting requirements.) Many countries, like China, are starting to harmonize their regulatory standards with the FDA.  It’s not the Wild West.

Q: Ok, but why not sell drugs to foreign countries and bypass the US entirely?

A: The US is by far the biggest pharmaceutical market. As of 2014, US sales made up about 38% of global pharmaceutical sales; the European market was about 31%, and is roughly as tightly regulated. The money in pharma comes from selling to the developed world, which has strict standards for demonstrating safety and efficacy.

Q: Makes sense. But why not run cheap, preliminary, unofficial trials just to confirm for yourself whether drugs work, before investing in bigger and more systematic FDA-compliant trials for the successful ones?

A: I don’t know for sure, but it seems like pharma companies are generally not very interested in choosing their drug portfolio based on the likely efficacy of early-stage drug candidates.  When I’ve tried to do research into how they decide which drug candidates to pursue through clinical trials, what I found was that there’s a lot of portfolio management: mathematical models, sometimes quite complex, based on discounted cash flow analysis.  A drug candidate is treated as a random variable which has some distribution over future returns, based on the market size and the average success rate of trials.

What doesn’t seem to be involved in the decision-making process is analysis of which drug candidates are more likely to succeed in trials than others. Most drug candidates don’t work: 92% of preclinical drug candidates fail to be efficacious when tested in humans, and that attrition rate is only growing.  As clinical trials grow more expensive, failed trials are a serious and increasing drag on the pharma industry, but I’m not sure there’s interest in trying to cut those costs by choosing drug candidates more selectively.
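The kind of portfolio math described above can be sketched in a few lines. This is a toy illustration, not an actual industry model; every number below is hypothetical, except that the success probability loosely reflects the 92% preclinical failure rate mentioned above:

```python
def expected_npv(peak_annual_sales, years_on_market, trial_cost,
                 p_success=0.08, discount_rate=0.10, years_to_market=8):
    """Expected discounted value of one drug candidate, treated as a
    random variable: with probability p_success it reaches market and
    earns a revenue stream; otherwise the trial cost is simply lost."""
    # Discount the revenue stream back to the present, starting after
    # the (hypothetical) years it takes to get through trials.
    revenue = sum(
        peak_annual_sales / (1 + discount_rate) ** (years_to_market + t)
        for t in range(years_on_market)
    )
    return p_success * revenue - trial_cost

# Note what's missing: p_success is the same flat base rate for every
# candidate. Nothing here models which candidates are likelier to
# succeed -- which is exactly the gap described above.
```

The point of the sketch is that a model like this ranks candidates by market size and cost, not by any candidate-specific estimate of efficacy.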

On the few occasions when I’ve tried to pitch to large pharma companies the idea of trying to “pick winners” among early-stage drugs based on data analysis (of preclinical results, the past performance of the drug class, whatever), the idea was rejected.

Investors in biotech startups, of course, do try to pick winners among preclinical drug candidates; but an investor told me that, based on his experience, it wouldn’t be much easier to raise money if you had a successful but non-FDA-compliant preliminary human trial than if you had no human trials at all.

My impression is that (perhaps as a rational reaction to high rates of noise or fraud) decisionmakers in the industry aren’t very interested in making bets based on weak or preliminary evidence, and tend to round it down to no evidence at all.

Q: So are there any options left for trying to do medical research outside of an onerous regulatory environment?

A: Well, one option is legal exemptions. For example, the FDA’s Rare Disease Program can offer faster options for reviewing applications for a drug candidate that treats a life-threatening disease where no adequate treatment exists.

Another option is selling supplements, which do not need FDA approval. You need to make sure they’re safe, you can’t sell controlled substances, and you can’t claim that supplements treat any disease, but other than that, for better or worse, you can sell what you want.  One company, Elysium Health, is actually trying to develop serious anti-aging therapies and market them as supplements; Leonard Guarente, one of the pioneers of geroscience and the head of MIT’s aging lab, is the co-founder.

The problem with supplements, of course, is that you can’t sell them as treatments. Aging isn’t legally a disease, and the FDA is not approving anti-aging therapies, so Elysium’s model makes sense. But if you had a cure for cancer, you’d have a hard time selling it as a supplement without running afoul of the law.

There’s also medical tourism, which is a $10bn industry as of 2012, and expected to reach $32bn by 2019.  Most medical tourism is for conventional medical procedures, especially cosmetic surgery and dentistry, as customers seek cheaper options abroad.  Sometimes there are also experimental procedures, like stem cell therapies, though a lot of those are fraudulent and dangerous.  It might be possible to open a high-quality translational-research clinic in a developing country, and eventually collect enough successful results to advertise it globally as a medical tourism destination.  The key challenge, from what people in the field tell me, is to get the official blessing of the local government.

Q: Could you do it on a ship?

A: Maybe, but it would be hard.

Yes, technically international waters are not under any country’s jurisdiction.  But if a government really doesn’t want you doing your thing, they can still stop you. Pirate radio (unlicensed radio broadcasting from ships in international waters) was technically legal in the 1960’s, when it was very popular in the UK, but by 1967 the legal loophole had been shut down.

Also, ships are in the water. If you compare a cruise ship to a building of equivalent square-footage, the ship needs to be staffed with people with nautical expertise, and it needs more regular maintenance.  In most situations, I’d expect it to be much more expensive to run a ship clinic than a land clinic.

There’s also the sobering example of BlueSeed, which was to be a cruise ship where international entrepreneurs could live and work in international waters, without the need for a US visa. It was put “on hold” in 2013 due to lack of investor funding.  And, obviously, a “floating condo/office” is a much easier goal than a “floating clinic.”

Q: Would cryptocurrencies help?

A: Noooooo. No no no no no.

You’re probably thinking about black markets, which are risky in themselves; and anyway, cryptocurrencies do not help with black markets because they are not anonymous.

Bitcoin’s own documentation helpfully points out that Bitcoin is not anonymous.  It is incredibly not anonymous.  It is literally a public record of all your transactions.  Ross Ulbricht of Silk Road, tragically, didn’t understand this.

Q: So, can regulatory arbitrage work?

A: It’s definitely not trivial, but I haven’t ruled it out yet. The medical tourism model currently seems like the most promising method.

I think that transparency would be essential to any big win — yes, there’s lots of shady gray-market stuff out there, but even aside from ethical concerns, if you have to fly under the radar, it’s hard to grow big.  If you’re doing clinical research, it’s impossible to get anything done unless you’re transparent with the scientific community.  If you’re trying to push medical tourism towards the mainstream, you have to inspire trust in patients.  Controversy is inevitable, but if a model like this can work at all, the results would have to be good enough to speak for themselves.


  1. I have a Twitter feed. It’s just journal articles (and commentary on them); I don’t use it as a social network, but if you want to see what’s on my mind, check it out.  For instance, what are the implications if most polygenic traits are affected by nearly all genes?
  2. I quit cross-posting to LessWrong because the discussion didn’t seem that good and I didn’t have the energy to single-handedly try to shift the flow. That may be changing now that they’re setting up a new, more troll-proof website, now in private beta. I’ll see how it goes and link when it’s open to the public.
  3. I highly recommend Lapham’s Quarterly, a magazine that brings together excerpts from historical and contemporary writers on a common theme. It’s an easy way to get some perspective, since we live in a really ahistorical culture.
  4. Elizabeth of Aceso Under Glass is now trying to go pro with her writing and research:

    My passion is the things I do for this blog- research, modeling, writing.  So obviously a lot of my newfound free time will go here.  But I’d also like to look for paid opportunities to use those skills.  If you are or know of someone who needs writing or research like I do for this blog (deep scientific investigation, synthesizing difficult sources into something easy to read, effectiveness analysis, media reviews, all of these together), please reach out to me via elizabeth at this domain.   Have a thing you really want me to blog about?  Now’s a good time to ask.

If you like my lit reviews, and want to commission someone to research the answer to a question, go to Elizabeth. She’s excellent, and she actually has the time and opportunity to do freelance work, which I currently don’t.

Momentum, Reflectiveness, Peace

Epistemic Status: Personal

I’ve been writing a lot lately about the mental habits that make calm and reflection possible. This is because a lot of “rationality” seems to depend on dispositions — things like the propensity to question your first assumptions, seek new information, examine evidence in a fair or dispassionate manner, and so on.  It’s very difficult to be motivated towards reflective behavior if you’re so upset that the mental motion of “stop and think” is impossible for you.  Knowing about cognitive biases isn’t much use if you don’t want to do anything except your default reactions to stimuli.

Reflectiveness, I think, is simply the capacity to question, “Is this what I want to be doing?”  The opposite of reflectiveness is momentum: when you feel like “whatever I happen to be doing, I want to keep doing it, good and hard!”  Reflectiveness is “Hmm, could things be otherwise than they are?” Momentum is “Things shall be exactly as they are! Except more so!”

Social media feedback loops are an example of momentum. You happened to start fooling around on social media, so you want to continue.  Similarly, you notice that something is beginning to trend, so you want to jump on the trend and ride it higher.  This is momentum in the sense of the momentum term in a stochastic process.

I suspect that psychological reactance and momentum are linked. When you think, “whatever I’m doing, I don’t want to change, and if you suggest I change, I’ll only do it more!” there’s something of a momentum flavor.

“Do whatever is being done, but more so” is what Michele Reilly calls “pragmatism”:

Pragmatism creates a call for conformity, implicit pressure for agreement and unquestioning support for whatever is representative of power. Its philosophy is a submission to threats.  Intellectualism as I am using the term, points directly away from those things.

Reflectiveness, then, is “consider what is not being done, what is not representative of power, what is not in agreement with the default.”  Consider deviations and alternatives and original approaches.  Consider whether the current direction of society might not be optimal. Consider whether what you’re doing might not be for the best.  Consider whether the last thing you read might not be correct. Consider whether to turn in a different direction.

This is the mental motion of “stop, think, ask a question.”

As I understand it, it is similar to sattva, the peaceful, aware state of mind.  Like air, it is mobile; it can change direction.  Like air, it is light; it feels mildly pleasant to be intellectually engaged.

But getting to reflectiveness is often scary and threatening. If you really want something at the moment, you have to let go long enough to think about “do I want to want this?” If you are doing something at the moment, you have to stop long enough to think about “do I want to do this?”  And if you had to change your behavior, or change an entire chunk of the world, that would be a lot of work.  The prospect of extra work, or of stepping back from the object of your present desire, is really stressful.

My current hack towards reflectiveness is to simply start with the stop.

Rest is the first thing. Sleep deprivation makes people more emotionally reactive and less reflective.  I found that a day of focused rest — deliberately spending all day sleeping whenever I wanted, eating as much as I wanted, quietly daydreaming or meditating without talking to anyone or consuming any media, and focusing on regaining a sense of wellness and satiety — was really helpful.

A related thing is cultivating a sort of contentment. “All is well, literally everything is fine, I don’t have to do anything except be.  Everything can be left in peace.”

I know that there are a lot of problems with contentment, if I were to present it as a totalizing philosophy. Lots of people are not fine. Many things are worth doing. Eternal apathy isn’t most people’s idea of a great life plan.

But I’m not thinking of contentment as the whole of one’s life or mind. I’m thinking of it as a base. There is a very low-level sense of “things are all right, I can rest and be nourished, I am welcome in the universe” that I think is probably important for living things.  And to cultivate that base, sometimes you have to stop doing things and rest your body and mind.  You don’t have to do anything right now. No obligations bind. You can rest in peace and freedom.

And out of that restful state, sometimes reflectiveness becomes more accessible. For instance, if you believe you don’t have an obligation to act on a particular idea you read about, you can begin to merely consider it, abstractly, hypothetically. With a certain airy gentleness.

(In a weird way, I think this may be akin to Kant’s notion of public reason. He says that in a state with a sovereign strong enough that one can be certain that mere intellectual discussion of reforms won’t lead to revolution, it becomes possible to actually achieve “enlightened” reforms, slowly and over time, whereas revolutions tend to merely replace one form of arbitrary power with another.  Similarly, if you can merely consider an idea intellectually, while temporarily promising yourself that you don’t have to do anything about it, then in the long run you might become more able to change your behavior on the basis of such reflections.)

Cultivating this sense of restful, contented peace made it more possible for me to engage with ideas without feeling pressured to agree with them.  If lots of alternatives are possible, but none are obligatory, then entertaining hypothetical concepts is a rather gossamer-light experience, like looking at a soap bubble or a rainbow.

It’s also easier to behave with gentleness and self-restraint to other people, if you tap into that sense of eternal peace; people can put no duties upon you, they are simply fellow-creatures sharing the world with you, and you can separate from them if you like.

I’ll have to wait and see if this leads to more thorough abilities to consider alternatives and act on the basis of reflection, but it seems promising.

My current motto is “Turn — slowly.”  I can only adapt slowly, improve slowly, originate useful ideas slowly.  I still need stretches of rest and peace. A slow positive trajectory is still worth it. (And can be more productive in the long run. I get dramatically more work done after rest.)  Turning slowly towards truth seems to be the best way available.

Update on Sepsis: Donations Probably Unnecessary

Epistemic Status: Pretty Confident

So, remember how I was urging people to donate for a randomized controlled trial of a new treatment for sepsis?

I’ve been informed by some people who work with the Open Philanthropy Project, whose research into giving opportunities I really respect, that there are already foundations which are likely to fund an RCT for the treatment.  This means that donations from private individuals are no longer necessary.

(A quick rundown of the logic behind this: if you’re trying to give “optimally”, you want to pay attention to the marginal returns of your dollars.  If you give the first dollar to a great opportunity that nobody else will fund, your marginal impact is huge. If you give a dollar to the same great opportunity, but somebody else has already pledged $10M, then your dollar has become a lot less useful, because pretty much any goal has diminishing marginal returns on investment.  If your motivation for giving to charity is achieving a goal as cheaply as possible, you should move away from charities that are already adequately funded, and towards opportunities that are underfunded. This is a simple idea but it took me a surprisingly long time to understand!)
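The marginal-returns logic can be made concrete with a toy model. The logarithmic impact curve below is purely an illustrative assumption, not anything a real charity evaluator uses; any curve with diminishing returns makes the same qualitative point:

```python
import math

def impact(funding):
    """Toy concave impact curve: each extra dollar helps less than the last."""
    return math.log1p(funding)

def marginal_impact(funding, dollar=1.0):
    """Extra impact bought by one more dollar at a given funding level."""
    return impact(funding + dollar) - impact(funding)

first = marginal_impact(0)                  # your dollar as the first dollar in
after_pledge = marginal_impact(10_000_000)  # your dollar on top of a $10M pledge
# Under this made-up curve, the first dollar buys millions of times
# more impact than a dollar added after the pledge.
```

The exact ratio is an artifact of the invented curve; the conclusion that survives any concave curve is the one in the text: move your dollar toward the underfunded opportunity.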

If you already gave to Eastern Virginia Medical School for the sepsis trial, your money’s not refundable, but it’s still being dedicated to sepsis research.

General implications I’d draw from this:

  • This is another example, as GiveWell and OpenPhil have found many times, of the principle that finding good giving opportunities is hard. It’s hard for the same reason finding good investment opportunities is hard. If something is obviously great, there’s a good chance that professionals have already invested in it. If something is undervalued, it’s probably not obviously great (it’s at least likely to be controversial.)
  • This is a positive update on the success of the philanthropic community, esp. in medicine.  Drug companies may not have an incentive to fund trials of cheap, unpatentable treatments, but perhaps foundations do.
  • Unfortunately for those of us on the awkward and scruffy side, this suggests that talking to rich people is a useful skill in finding out what’s actually going on in the world.


Kindness Against The Grain

Epistemic Status: Unformed Thoughts

I’ve heard from a number of secular-ish sources (Carse, Girard, Arendt) that the essential contribution of Christianity to human thought is the concept of forgiveness.  (Ribbonfarm also has a recent post on the topic of forgiveness.)

I have never been a Christian and haven’t even read all of the New Testament, so I’ll leave it to commenters to recommend Christian sources on the topic.

What I want to explore is the notion of kindness without a smooth incentive gradient.

Most human kindness is incentivized. We do things for others, and get things in return. Contracts and favors alike are reciprocal actions.  And this makes a lot of sense, because trade is sustainable. Systems of game-theoretic agents that do some variant of tit-for-tat exchange tend to thrive, compared to agents that are freeloaders or altruists. Freeloaders can only exploit so long until they destroy the system they’re exploiting, or suffer from the retribution of tit-for-tat players; pure altruists burn themselves out quickly.

Sometimes kindness is reciprocated at the genetic rather than the personal level (see kin selection.)

Sometimes it’s reciprocated by long-term or indirect means — you can sometimes get social credit for being kind, even if the person you help can’t directly reciprocate. A reputation for generosity to allies and innocents makes you look strong and worth allying with, so you come out ahead in the long run.

And one of the ways we implement the incentives towards kindness in practice is through sympathy. When we see another’s suffering, we feel an urge to be kind to them, and a warm fuzzy reward if we help them.  That way, kindness is feasible along local emotional incentive gradients.

But, of course, sympathy itself is carefully optimized to make sure we only sympathize with those whom we’d come out ahead by helping. Sympathy is not merely a function of suffering. It is easier to sympathize with children than with adults, with the grateful than the ungrateful, with those who have experienced culturally acceptable “grounds for sympathy” (such as divorce, loss of a loved one, illness, job loss, crime victimization, car trouble, or fatigue, according to this sociological study).  We sympathize more with those whose suffering is perceived as unjust — though this may be something of a circular notion.

This leaves out certain forms of suffering.

  • The stranger, who is not part of your group, will receive less sympathy.  So will the outsider or social deviant.
  • The person with a permanent problem that can’t be easily fixed will eventually receive less sympathy, because he cannot be restored to happiness and put in a position to show gratitude or return favors.
  • The overly self-reliant person will receive less sympathy; if sympathy is like a “credit account”, the person who has never opened one will be offered less credit than one who maintains a modest balance. We require vulnerability and a show of weakness before our sympathy will turn on.
  • The angry or assertive person who does not show gratitude or deference will receive less sympathy.  Appeasement displays evoke sympathy and reconciliation.
  • The person whose suffering takes an illegible form will receive less sympathy.

To be a recipient of sympathy one must be both weak and strong; weak, to show one really has received a misfortune; strong, to show one can be a useful ally someday. Children are the perfect example, because they are small and vulnerable today, but can grow to be strong as adults.  The victims of temporary and easily reversible bad luck are in a similar position: vulnerable today, but soon to be restored to strength.  Permanently disadvantaged adults, people who may be poor/disabled/nonwhite/etc and have developed the self-reliance or resentment associated with coping with long-term deprivation that isn’t going away, are less easy to sympathize with.

Some of this has been shown experimentally; subjects in an experiment who viewed other subjects appearing to receive electric shocks were less sympathetic when they were told the shocks would continue in a subsequent session than when they were told the shocks had ended, or when they were told that their choices could stop the shocks. Permanent suffering is less sympathetic than temporary or fixable suffering.

Sympathy provides an immediate emotional incentive to respond to suffering with kindness, and it’s pretty well calibrated to be “good game theory” — but it’s not perfect by any means.

Cooperation Without Sympathy

Imagine a space alien — a grotesque creature, one whose appearance makes you want to vomit — offers you a deal. Let’s say this alien is, like the creatures in Octavia Butler’s Xenogenesis trilogy, a “gene trader”, one who can splice DNA with its bodily organs, and has a drive towards genetic engineering analogous to what Earth animals experience as a sex drive.  If you have “sex” with the alien and produce part-alien babies, it will give you and your children access to the vastly advanced powers in its alien genes, in exchange for gratifying its biological urge and allowing it to benefit from your genes.

From an intuitive standpoint, this is grotesque. The alien is not sexy. You cannot feel compassion for its desires to trade genes with you. It feels violating, disgusting, unacceptable. You were never evolved to want to breed with aliens.

And yet the game theory is sound. Superpowers are a grand thing to have. Even sexiness exists as a way to incentivize you to have strong children — and your alien children will undoubtedly be strong.

It’s a game-theoretic win-win but not a sympathetic win-win. Other humans will not find your alien babies sympathetic, or your choice to cooperate with the aliens a pro-social one.

It’s a sort of betrayal against your fellow humans, in that you are breaking the local game of “sex is between humans” and unilaterally gaining superpowered alien babies; but it’s a choice that any human could make as easily as you, so you aren’t leaving others permanently worse off, or depleting a valuable commons. Since all humans would be better off with alien genes, it’s not really a “defection” if you take the lead in doing something that would be beneficial if done by everyone.

Butler is really good at expressing how a “peaceful win-win” — on paper, an obviously correct choice — can feel disgusting.  Sympathy incentives can’t get you to win-win cooperation, if the thing that the other person wants is not something that you can imagine wanting.

This is an example of incentives for cooperation being present but not smooth.  It is in your interest to “gene trade”, but you only know that intellectually; you cannot be guided to it naturally through sympathy.

In the same way, helping someone “unsympathetic” but valuable is a “good investment” but doesn’t feel like it.  You often hear about this in disability contexts. “All you have to do is give me a relatively cheap accommodation and suddenly I become way more productive! How is this not a good deal for you?”  Well, it may be a good deal, but it is not a sympathetic deal, because people’s mental accounting doesn’t match reality. If they think that the person “ought to be able to” get along without the accommodation, sympathy doesn’t provoke them to help. And if they don’t have a strong intuitive sense of people being plastic, so that they function differently in different environments, they don’t really intuitively believe that a blind person can be an expert programmer if given a screen reader, for instance.  Abstractly it’s a good deal, but concretely it’s not being guided smoothly by emotional gradients; it requires an act of detached cognition.

In practice, you can guide a situation back to sympathy, and that’s usually the best way to get the trade done. Try to play up the sympathetic qualities of the trade partner, try to analogize the requested action to things that are considered moral duties in one’s social context.  Try to set up emotional guardrails, engineer the social environment so the deal can be done without abstract thought.

But this isn’t really feasible for a single individual to do.  If you’re alone and nobody wants to help you, even if you reciprocate, because you’re not a “sympathetic character”, you can’t reshape social pressures to make yourself sympathetic all by yourself.  If we aren’t going to brutally destroy the lives of valuable people who don’t already have a posse, somebody is going to have to think, to go beyond gradient-following.

I think that to get the best results, thought is actually necessary.  By “thought” I mean the God’s-eye view, the long-view, the ability to ask “where do I want to go?” and potentially have an answer that isn’t “whichever way I’m currently going.” But what emotional or psychological or behavioral scaffolding promotes thought?  We are, after all, made of meat.  Since sometimes humans do think, there must be a way to build thought out of meat.  I’m still trying to understand how that is done.

Forgiveness and the Very Long Term

Forgiveness, on a structural level, is choosing not to call in a debt. I’m entitled to compensation, according to the rules of whatever game I’m playing, but I don’t demand it.

Forgiveness is a local loss to the forgiver. If everyone forgave everything all the time, it wouldn’t be remotely sustainable.

But a little bit of forgiveness is useful, in exactly the same way that bankruptcy is useful.  Bankruptcy means that there’s a floor to how much debt you can get in, which allows loss-averse humans to be willing to take on debt at all, which means that more high-expected-value investments get made.

Tit-for-tat with forgiveness outperforms plain tit-for-tat.
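This claim can be illustrated with a toy simulation: two tit-for-tat players in a noisy iterated prisoner’s dilemma, where an intended cooperation is occasionally garbled into a defection. The payoff matrix is the standard textbook one; the 5% noise rate and 10% forgiveness rate are arbitrary illustrative choices:

```python
import random

random.seed(0)
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
NOISE = 0.05  # chance that an intended cooperation comes out as defection

def play(forgiveness, rounds=10_000):
    """Two identical tit-for-tat players; each answers a defection with
    defection, except it forgives with probability `forgiveness`.
    Returns the first player's average payoff per round."""
    a_last, b_last = "C", "C"
    total = 0
    for _ in range(rounds):
        a = "C" if b_last == "C" or random.random() < forgiveness else "D"
        b = "C" if a_last == "C" or random.random() < forgiveness else "D"
        if a == "C" and random.random() < NOISE:
            a = "D"  # noise garbles a's intended cooperation
        if b == "C" and random.random() < NOISE:
            b = "D"  # noise garbles b's intended cooperation
        total += PAYOFF[(a, b)]
        a_last, b_last = a, b
    return total / rounds

plain = play(forgiveness=0.0)
generous = play(forgiveness=0.1)
```

With zero forgiveness, a single garbled move echoes back and forth and eventually locks both players into mutual defection (which is absorbing here, since noise only corrupts cooperation); a modest forgiveness rate breaks the echo, and the generous player ends up with a visibly higher average payoff.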

You can also think of forgiveness as a function of time. If you expect that someone will be net positive to you in the long run, you can accept them costing you in the short run, and not demand payment now. In other words, you extend them cheap credit.  As your time horizon goes to infinity (or your discount rate goes to zero), it can become possible to not demand payment at all, to forgive the loan entirely.  If it doesn’t matter whether they pay you back tomorrow, or in a hundred years, or in a thousand, but you expect them to be able to pay someday, then you don’t really need the repayment at any time, and you can drop it.
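The same point in textbook discounting terms, with arbitrary example numbers:

```python
def present_value(repayment, rate, years):
    """Value today of a repayment received `years` from now,
    at a constant per-year discount rate."""
    return repayment / (1 + rate) ** years

# At a 5% discount rate, timing matters enormously:
present_value(100, 0.05, 1)     # about 95 today
present_value(100, 0.05, 100)   # under a dollar today
# As the discount rate goes to zero, timing stops mattering:
present_value(100, 0.0, 1)      # 100.0
present_value(100, 0.0, 1000)   # 100.0
```

At rate zero, repayment in year one and repayment in year one thousand are worth exactly the same, so there is no particular time at which you need to collect; in the limit, forgiving the debt costs you nothing.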

This is sort of similar to the heuristic of “be tolerant and kind to all persons, you never know when they might be valuable.” The fairy tales and myths about being kind to strangers and old ladies, in case they’re gods in disguise. You don’t want to burn bridges with anybody, you don’t want to kick anybody wholly out of the game, if you expect that eventually (and eventually may be very long indeed, and perhaps not within your lifetime), this will pay off.

Tit-for-tat or reinforcement-learning or behaviorism — reward what you want to see, punish what you don’t — makes a lot of sense, except when you factor in time and death. If you punish someone so hard that they die before they have a chance to turn around and improve, you’ve lost them.

And, on a more abstract level: it can make sense to disincentivize the slightly worse thing in general, that’s how evolution works, but that leads to things like rare languages dying out. Yes, it’s perfectly rational to speak Spanish rather than Zapotec, and Zapotec-speakers need to make a living too, but my inner Finite and Infinite Games says “wouldn’t you like to preserve Zapotec from dying out altogether? Couldn’t it come in handy someday?”  Language preservation is an example of preserving a “loser” because, if the world went on forever, nothing would be permanently guaranteed to lose.

It’s like having a slightly noisy update mechanism. Mostly, you reinforce what works and penalize what doesn’t. But sometimes, or to a small degree, you forgive, you rescue someone or something that would ordinarily be penalized, and save it, in case you need it later. In gradient descent, a little stochasticity keeps you from getting stuck in local minima. In economics, a little bankruptcy or the occasional jubilee keeps you from getting stuck in stagnant, monopolistic conditions. You don’t ruthlessly weed out the “bad” all the time.
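The “slightly noisy update mechanism” can be sketched as a toy hill-climber on a made-up two-peaked landscape (maximizing here, to match the reinforce/penalize framing; all the numbers are arbitrary):

```python
import random

random.seed(1)

def fitness(x):
    # Made-up landscape: a local peak at x=2 (height 4) and a higher
    # peak at x=8 (height 9), separated by a valley.
    return -(x - 2) ** 2 + 4 if x < 5 else -(x - 8) ** 2 + 9

def climb(forgiveness, steps=5000):
    """Greedy hill-climbing, except a losing move is occasionally
    'forgiven' (accepted anyway) with probability `forgiveness`."""
    x = 2.0  # start exactly on the lower peak
    best = fitness(x)
    for _ in range(steps):
        candidate = x + random.uniform(-3, 3)
        if fitness(candidate) > fitness(x) or random.random() < forgiveness:
            x = candidate
        best = max(best, fitness(x))
    return best

greedy = climb(forgiveness=0.0)  # never leaves the lower peak: best stays 4
noisy = climb(forgiveness=0.1)   # crosses the valley to the higher peak
```

The greedy climber rejects every move away from its peak, so it can never reach the better one; forgiving a small fraction of losing moves lets it wander through the valley, after which ordinary greedy steps carry it up the higher peak. Simulated annealing uses the same trick.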

Sometimes you throw some resources at someone who “doesn’t deserve them” just in case you’re wrong, or to get out of the nasty feedback loops where someone behaves badly in response to being treated badly.  If you unilaterally gave them some help, you might allow them to escape into a cooperative, reciprocal-benefit situation, which you’d actually like better!  Even if this didn’t work one particular time, doing it in general, at some frequency, might in expectation work out in your favor.

A sense of the very long term may also make sympathy easier, because in the very long term nothing is permanent and everything is eventually mutable. If permanent suffering is what makes people unsympathetic, then a sense of the very long term makes it possible to realize that under different circumstances that person might become fine, and thus their suffering is ultimately the “temporary kind” that can elicit sympathy.  “The stone that the builders rejected/ has become the cornerstone” — well, if you wait long enough, that might actually happen. Things could change; the “loser”’s or “villain”’s status on the bottom is not eternal; so with a long-enough-term mindset it’s not actually appropriate to treat him as definitively a “loser” or a “villain.”

Forgiveness can be a lot easier to implement than “cooperation without sympathy”, which requires you to actually ascertain where win-wins are, with your mind. You can mindlessly add a little forgiveness to a system.  Machine-learning algorithms can do it.  Which may make it a useful tool in the process of “trying to build thought out of meat.”


Finite and Infinite

Epistemic Status: Really, really informal

After being told I really need to read Finite and Infinite Games for god knows how long, I finally went and did it.  I’m still processing.

In my earlier post I described a polarity between “survival thinking” (which is physical-reality-oriented, man-vs-nature, serious, and non-competitive) vs “sexual-selection thinking” (which is social-reality-oriented, focuses on the human world, frivolous, and competitive.)

James Carse, in Finite and Infinite Games, sets up a completely different polarity, between infinite game-playing (which is open-ended, playful, and non-competitive) vs. finite game-playing (which is definite, serious, and competitive).

Like a lot of philosophical thinkers, he’s rather negative on trying to win social games, and believes that humanity-wide survival problems require us to set such games aside.  He’s also quite positive on thinking — self-awareness, going meta, asking “do I want to be doing what I’m doing?”

The difference from my earlier post is that he believes the key alternative to unproductive social-status games is playfulness and open-endedness, rather than seriousness, strenuousness, or survival-level urgency.  I think this is possible; it’s also possible that both playfulness and survival pressure provoke people into abandoning social-status jockeying; it’s possible that there are many other things that do so.

My Understanding of what James Carse Wants

Playing an infinite game, in my own words, means “Let things continue and get weird”.

Culture has a tendency to get “out of hand”, to shift definitions, to break once-hallowed rules.  Living things evolve over time. Languages shift. Peoples migrate.  Individuals don’t fit perfectly into demographic or ideological categories.  As I understand it, an infinite player embraces change and tries to keep life/culture/humanity going, knowing that whatever future form it takes will be unfamiliar.

In human relationships this would mean keeping lines of communication open and trying to allow space for deep, weird, vulnerable, unfamiliar, or surprising interactions to arise.  It would also mean questioning or poking fun at fixed or narrowly competitive behavior patterns.

In grammar this would mean being descriptivist rather than prescriptivist.

In art this would mean playing with new forms and dialoguing with old formalisms.

In technological social-engineering this would mean trying to design platforms that promote unexpected and fertile rather than reliable and predictable behavior.

In politics it would mean actively seeking to preserve diversity, questioning the assumptions of nationalism or officialness, lots of meta stuff combined with a desire not to lock anybody out of the discourse.

Carse’s examples of evil are genocides: the permanent silencing of an entire people.  He’s against absolutism because it will try to destroy those parts of reality that don’t fit its system. Sometimes those parts are people.  The Native Americans, or the Romany, or the Jews, didn’t fit into somebody’s system.

If you imagine actually being a 19th-century Anglo-American, and imagine that you don’t really like Indians — they’re often pagan, they’re not of your civilization, they cause you a not-insignificant amount of danger and inconvenience — and imagine what it would be like to think “yes, but they’re people, they exist, they have a point of view that may matter, it’s worth trying to work things out with them rather than utterly destroying them” — I think that gives something of the flavor of what Carse wants people to do. Octavia Butler’s Lilith’s Brood is a good example of what cooperating with literal outer-space aliens would feel like — cooperating and trading and negotiating with creatures whose values you cannot empathize with at all and which seriously creep you out.  “Mutual benefit” sounds nice when you have easy sympathy with the other party; it is stranger and less intuitive when you don’t.  To keep playing with the truly alien seems to be part of what infinite games are about.

My Resistance to FIG

I have some inner resistance to pretty much all philosophical or spiritual thinkers, and Carse is no exception. He’s favorably disposed to thinking, and unfavorably disposed to most temporal rewards, like pretty much all philosophers. I like temporal rewards, and am less inclined towards detached meta-level thinking than most of the people who like these kinds of books.  I like a hot meal and a soft bed and a cat to pet.  So that’s one thing.

I also don’t like any creed which requires me to have unbounded energy, and “playfulness” or “openness to change and ambiguity” are both things that cost energy. (The latter because it takes cognitive effort to wrap one’s mind around complex or unpredictable things.) Carse complains that traveling in airplanes and hotels insulates you from change, from truly experiencing the foreignness of foreign places. He’s not wrong. It’s just that if you are very tired, you do not want anything to surprise you if you can possibly avoid it.  I am usually tired, so an ideal of the world as being eternally surprising sounds exhausting.  Finite games end, and that is a great deal of their appeal; after they’re over, you get to stop playing.

I do appreciate the message of “if you want to simplify your life, you don’t get to do it by controlling other people.”  Trying to live in such a way that you don’t ruin other people’s games just because you don’t want to play.

Actual Flaws in FIG

I don’t think he has a complete theory of property. He’s talking about property primarily as a finite game that involves fairness and competition intuitions.  I don’t think that’s all it is. There’s also property-as-territoriality (something that even animals have), and property as an incentive-stable way of allocating resources.  I don’t think I’ve ever seen a complete and satisfying theory of why we have property and how far it can/should extend, not even Locke’s, but I really think it’s not disposed of as easily as Carse disposes of it.

I also think the “technology is limiting, gardens are infinite-game-like” stuff could use a little more nuance.  Not all technology is oriented towards centralized/standardized behavior patterns.  I think Carse would have been rather interested in blockchains and 3d printers if they had existed when he wrote the book.


In a comment on my earlier post, Komponisto said “The correct dichotomy, it seems to me, is not ‘work’ vs. ‘play/gossip’, but rather ‘work/play’ vs. ‘gossip’.” This also seems to be something Carse would agree with.  The creative stuff he values is play-like; he thinks seriousness is usually a “theatrical” device that’s basically used only for gossip-oriented status-games.

I think I was just wrong to imply that work is always serious.  (And certainly if I gave the impression that art is always frivolous, that was wrong! But I’ve never really believed that.)

I like Carse’s idea of storytelling as contrasted with lecturing; using communication to open up possibilities or food for thought rather than to shut down unwanted behaviors.  I was much more into that in the past (in my current state of constant exhaustion I normally don’t want to open up any future possibilities because then I’d have to deal with them) but it is genuinely beautiful to create a world for people to play in.

The Face of the Ice

Epistemic status: Casual

And in the end, the dominant factor in Gethenian life is not sex or any other human thing: it is their environment, their cold world. Here man has a crueler enemy even than himself.

I am a woman of peaceful Chiffewar, and no expert on the attraction of violence or the nature of war. Someone else will have to think this out. But I really don’t see how anyone could put much stock in victory or glory after he had spent a winter on Winter, and seen the face of the Ice.

–Ursula K. Le Guin, The Left Hand of Darkness

My father was a serious alpinist in his youth, and taught us a few things about mountains when we were kids.  For instance: you always say hello to people you pass on the mountain.

Why? Because the mountain can kill you.  “Hi” means “You are a human like me, and this mountain is our common threat. If you scream for help, I will come for you.”

There’s a certain solidarity between humans that emerges when we’re faced with a potentially hostile natural world.  If we passed on the street, we’d be strangers. There’d be no bond between us.  We might even engage in conflict, under the right circumstances. But on the mountain, all of that falls away.  By default, in the wilderness, a human face is a friendly face, a glad thing to see if you are lost or hurt.

If you are in the desert and you see someone obviously suffering from dehydration, you’ll share your extra water with them. It won’t feel like some kind of altruism or charity, it will feel obvious. It’s instructive, if you’re used to thinking of giving as an unpleasant duty, to experience some situations where it’s natural to be kind.  Kindness becomes practical and natural and obvious when the physical environment is hostile.  Suddenly everything becomes simple: it’s human life against bitter nature, and nothing else matters.  “All men are brothers” becomes a concrete reality.

There’s something clarifying about man vs. nature situations, even at the minimal level you can experience by hiking a technical scramble alone.  I’ve found that a certain alertness kicks in when I have to figure out where to put my hands and feet; I’m scared of falling, I have a heightened awareness of physical/spatial reality, and I don’t care at all about looking foolish or getting dirt on my clothes, because the important thing is to get off the damn mountain without any injuries.  I also am much less lazy; “get to the top” or “get to the bottom” make it feel natural to push a lot harder than “run X miles” or “lift X pounds,” almost as though reaching topographical milestones taps into some primal source of motivation.  Mountains make life more real than it usually is in civilization.

There’s a traditional overlap between mountain climbing and math; the Russians had their math camps in the Urals for decades. Alan Turing, Sophus Lie, Niels Abel, and many others were avid hikers; a disturbing number of mathematicians have died in hiking accidents.  If I can speculate about the connection, it might have something to do with love of solitude, tolerance for pain and effort, or this heightened-reality effect from spatial problem-solving.  I certainly get a disproportionate number of good ideas while on runs, bike rides, hikes, or long walks alone.

All traits in living things evolved either through natural selection or sexual selection — that is, either they make you less likely to die, or they make you likely to have more children.  (Compare survive-the-winter mode vs. grow-and-reproduce mode.) These are both important to inclusive fitness, but they sometimes work at cross-purposes — for example, sexual ornaments can become so large and unwieldy that they start to impair survival.

You can also map cognitive or behavioral traits to natural-selection vs. sexual-selection. For instance, dominance competitions and play-fighting (like stags butting antlers) are about sexual selection and competing for mates, but survival-related behaviors like escaping predators or figuring out how to survive natural hazards are about natural selection. We probably got much of our sensory processing and spatial awareness through natural selection, but social awareness is at least partially sexually selected.

Some behaviors are ambiguous. Was the evolution of hunting among humans due to the extra calories from meat (a natural-selection explanation) — or as a way of demonstrating fitness to mates (a sexual-selection explanation)?  Hunter-gatherers generally rely more on plant foods than animals, so maybe hunting large animals was more of an elaborate show of strength and bravery than a survival necessity; on the other hand, meat really is nutritious.  By analogy, I think it’s likely that human intelligence is part survival necessity, part ornament.

My intuition is that mountain-climbing and, generally, coping with wilderness, cultivates the parts of cognition that are about survival and natural selection, as opposed to competition and sexual selection.

There’s a kind of sexual-selection-related aggression in which fighting is fun.  You’re a red-blooded, well-fed, fit person with lots of energy to burn and plenty of ego; you want to compete, and maybe to pick a fight.  You’re full of rajas.  Maybe you’re a little bit of a Cavalier.  The highest expression of this mindset I’ve seen so far is the Iliad (in Ian Johnston’s modern translation).

That is not the mindset that the wilderness cultivates. Survival-cognition is clear and practical and understated and a little cold. One of my coworkers has joked about the “Utah phenotype” of the kind of people who move to Utah for the outdoor endurance sports — small, unassuming-looking men, whom you wouldn’t guess as great athletes to look at, but who will keep up an absolutely crushing pace once you get them on a mountain.

War, contra Le Guin, is probably a blend of aggression and survival. There are several generations of men in my family who have been in the military, and they were small, serious, practical, extremely tough people; more survival-oriented than aggression-oriented (though of course survival sometimes means killing your enemies.) I suspect Ulysses S. Grant, for instance, of being this sort of fighter.  Pure aggression without survival is play-fighting, and modern warfare is definitely not that.

Technical skill is usually very survival-oriented, which I think is a big part of why the math/mountains connection is so strong.  The stories I’ve been told about what it’s like to be a sysadmin or ICU nurse suggest that this clarifying, no-bullshit, get-it-done attitude comes alive in their sort of man-vs-nature situations (well, man-vs-disease or man-vs-bug).  Status and competition and ego and sex simply don’t matter when you’re trying to keep people alive and their machines working.  You can’t lie to Reality or sweet-talk it or intimidate it, you have to figure out how it works.

Jeffrey Tucker has a nice essay contrasting the (rather reactionary) view that fighting gives drama and purpose to life (the sexual-selection mindset) with the struggle to accomplish productive goals and build a free advanced civilization, a “fight” that isn’t about fighting.  The latter can be painted in a dramatic and glorious light too, despite being a fundamentally different thing.

For serious survival goals — things like overcoming disease, poverty, and violence — you really need survival-mindset. I suspect that sexual-selection-mindset is healthy and natural and joyful and many people will have fuller lives if they cultivate it to some extent. But humanity-wide goals are pretty much entirely about the survival-related stuff. Meredith Patterson has some thoughts about her shift towards survival-mentality that I pretty much agree with; when your enemy is something like “poverty”, you have to avoid the attractive nuisances of interpersonal conflict.  Serious goals require seriousness — not like “never laughing or taking a break”, but staying realistic and not getting caught up in drama.


Antipsychotics Might Cause Cognitive Impairment

Epistemic Status: pretty rough, a first pass

A friend of mine gave a pretty terrifying description of his experience of being on antipsychotics for years as a child:

Antipsychotics can make you dumber.  So can a lot of other medications.  But with antipsychotics it isn’t the normal sort of drug-induced dumbness – feeling tired, or distracted, or mentally sluggish, say.  It’s more qualitative than that.  It’s like your capacity for abstract thought is reduced.

And one of the consequences of this is that you may lose the ability to notice that you have lost anything.  You agree to give the new med a try, and you start taking it, and then when you see your prescriber again you don’t report any problems because you’ve lost the ability to form thoughts like “my cognition has changed a lot recently, and the change coincided with the introduction of this new med.”

This can go on for years.  It did for me and for several people I know.

When I finally went off Risperdal – encouraged by my parents, I don’t remember really caring – it suddenly seemed obvious that I’d been cognitively altered for the past five years.  I didn’t remember the time before that very well (I had started Risperdal when I was about 10 years old), but there were objective indicators – for instance, I loved reading before Risperdal, and while on Risperdal I don’t think I read a single book cover-to-cover.

You’d think I would have noticed that I couldn’t read anymore.  Somehow I didn’t, for five years.  What did it feel like?  It’s hard to remember and also hard to describe.  Sort of a passivity.  The world acted upon me for mysterious reasons.  I did not draw correlations between present and past events, didn’t formulate ideas about the workings of things.  The present was simply given; I wasn’t frustrated when it refused to honor my theories.  “Reading is hard” was a datum, and was unpleasant, but I was not really surprised by it, or frustrated in the “this wasn’t supposed to happen!” way of abstract-reasoning-creatures.  It was a given datum and all I did was hope that given data would be pleasant and not unpleasant.

I think people should know that antipsychotics can do this.  They still may be worth trying, in certain situations.  But taking an antipsychotic is a special sort of decision, one that interferes with decision-making itself, like choosing to listen to the Sirens.

There are also cases of antipsychotics causing autistic catatonia, in which an autistic person, upon treatment with antipsychotics, suddenly loses speech and motor skills.  There are examples in a personal narrative and in published case studies.

So, a natural question is: does this happen often? Do antipsychotics actually cause cognitive problems?

And here I’m referring to long-term cognitive problems. Many medications, including most atypical antipsychotics, are sedating; nobody thinks as clearly when they’re sleepy.  But when you stop taking a sedative, you generally become alert again. Do antipsychotics have any permanent effects?

Now, a major confounder is that schizophrenia causes cognitive impairment in itself, and antipsychotics seem to slightly relieve those problems.

Overall, atypical antipsychotics seem to help cognition in schizophrenia

One study of 533 patients experiencing their first psychotic episode, randomized to either risperidone or haloperidol, found slight but significant improvements on most cognitive tests after 3 months of treatment with either drug; patients did not significantly worsen on any test.[1]

The CATIE trial, a randomized trial of 1460 schizophrenics given various antipsychotics, found significant (p < 0.001) improvement in a composite score consisting of speed, reasoning, working memory, verbal memory, and vigilance on all antipsychotic meds.[2]

There are dozens of studies like this. A meta-study found that various atypical antipsychotics improved cognitive functioning in roughly half of studies, most of which were short-term (6 weeks).[3]

Typical Vs. Atypical Side Effects

“Typical” antipsychotics are older drugs, like haloperidol, whose primary effect is dopamine antagonism, while “atypical” antipsychotics are newer drugs, like risperidone, olanzapine, clozapine, and quetiapine, with a wider variety of targets including serotonin (5-HT2A) antagonism, anticholinergic, and antihistamine effects.  The atypical drugs are often believed to be safer because they aren’t as likely to cause the motor disorders (extrapyramidal effects and tardive dyskinesia — that is, Parkinson’s-like stiffness and involuntary twitching movements) that the older drugs did, but the newer drugs often have other side effects (like sedation and large amounts of weight gain).

Clozapine is associated with an average of 14-25 pounds of weight gain over a period of several months of treatment; over 50% of patients become overweight when treated with clozapine.  A year of high-dose olanzapine causes an average of 26 pounds of weight gain.  Risperidone and quetiapine are associated with a weight gain of 4-5 pounds over the course of 5-6 weeks, which remains stable over long-term use.[4] At the higher end of weight gain, this is no longer a cosmetic issue, but a serious diabetes risk.

Recently, it’s been observed that “atypical” antipsychotics can cause motor problems too, and their safety advantages have been overstated.

The CATIE study found that 8% of patients on olanzapine or risperidone had extrapyramidal symptoms. All agents, typical and atypical, had rates of tardive dyskinesias of 13-17% after a year of follow-up. Akathisia rates for atypical antipsychotics are in the 10-20% range, compared to 20-52% with typical neuroleptics.[5]

While older studies found that atypical antipsychotics had lower rates of extrapyramidal effects than typical antipsychotics, these studies were comparing the new drugs to high-dose haloperidol, and the difference disappears when you compare to low-dose haloperidol or other typical antipsychotics (such as perphenazine, whose side effects are milder.)  The CATIE trial, which randomized schizophrenics to olanzapine, quetiapine, risperidone, perphenazine, or ziprasidone, found no differences in the incidence of extrapyramidal side effects. 12-month Parkinsonism rates were 37-44% for the four atypical antipsychotics and 37% for perphenazine. Akathisia rates were 26-35% for the atypical antipsychotics and 35% for perphenazine.  Tardive dyskinesia was rarer, but also not different — 1.1%-4.5% for the atypical antipsychotics and 3.3% for perphenazine.[6]

In a meta-study, second-generation antipsychotics had significantly less use of antiparkinson medication than haloperidol (RRs 0.17 for clozapine to 0.7 for risperidone), but not significantly less compared to low-potency typical antipsychotics (like perphenazine). Atypical antipsychotics caused significantly more weight gain than haloperidol, but not compared to low-potency typical antipsychotics.  Atypical antipsychotics caused the same amount of sedation as haloperidol and low-potency typical antipsychotics — except that clozapine causes more sedation than everything else.[7]

In other words, it’s definitely not true that the new drugs are safer than the old drugs across the board. Lower doses of, or better choices among, the old drugs would yield a comparable or strictly better side-effect profile.

As a rough rule of thumb, the stronger the antihistamine effects, the more sedation and weight gain (that’s clozapine and olanzapine), and the stronger the dopamine-antagonist effects, the more movement disorders there are (that’s haloperidol and risperidone).

Evidence that Antipsychotics Impair Cognitive Abilities

While the majority of studies of antipsychotics find improvements or no change on cognitive tests, there are some exceptions, particularly on tests that have to do with spatial or procedural learning.

A study of 25 first-episode schizophrenia patients found that risperidone worsened spatial working memory after 6 weeks on the drug, and the impairment persisted throughout a 1-year follow-up period.[8]

Procedural learning is impaired after months of treatment with haloperidol and risperidone — schizophrenic patients are slower to learn the Tower of Toronto task (though no difference is apparent after 6 weeks of treatment).  Olanzapine caused much less cognitive impairment (p < 0.001).[9]

A comparison of treatment-naive first-episode schizophrenics vs. schizophrenics treated with risperidone for six weeks found that the untreated patients were no worse at a procedural learning task than controls, while the treated patients were significantly worse.  This indicates that the medication, and not the schizophrenia, is responsible for the impairment.[10]

In a study of 35 schizophrenic and 45 control patients given a procedural learning task, the patients randomized to haloperidol performed worse than controls, while those randomized to risperidone or clozapine did not.[12]

In a study of 20 patients on haloperidol, 20 on risperidone, and 19 healthy controls tested on psychomotor tests related to driving ability, the medicated patients were significantly worse than controls on all tests, and haloperidol was worse than risperidone.[11]

Note that procedural learning — remembering how to complete a task, usually physical/motor, by practice — is associated with activity in the striatum and basal ganglia, regions that are densely innervated by dopamine neurons. This seems to fit with the fact that antipsychotics, in particular the most dopamine-inhibiting ones, particularly impair procedural learning.

Evidence that Tardive Dyskinesia Comes With Cognitive Impairment

Out of 28 studies identified in a meta-study, 22 reported patients with tardive dyskinesia to be more impaired along at least one cognitive measure.  The most common findings were an association with decreased orientation and memory (13 studies).  The finding persists even in studies that controlled for anticholinergic side effects.  Orofacial tardive dyskinesia seems to be especially associated with cognitive impairment (beta=0.23 of association with performance on the Trail Making Test, a measure of executive function and task switching).[13]

If cognitive dysfunction is a symptom of tardive dyskinesia, that supports the hypothesis that antipsychotic drugs cause cognitive dysfunction.

Animal Evidence that Antipsychotics Cause Cognitive Impairment

Animal studies give a usefully different perspective than human studies because you can ethically give antipsychotics to healthy animals. It may be that antipsychotics both remediate the cognitive problems caused by schizophrenia and cause additional cognitive problems of their own; with animals, you can see the latter in isolation.

Monkeys develop working memory deficits after 1-4 months of haloperidol administration (P = 0.0000004) and recover when given a D1 agonist.[15]

If you give a rhesus monkey haloperidol, it performs worse on a working-memory task (in which, if you can remember which window had a flashing light earlier, you get a treat when you stick your face in), and the higher the dose, the worse the accuracy.[16]

Haloperidol, olanzapine, risperidone, quetiapine, and clozapine all worsened marmosets’ performance at an object-retrieval task relative to baseline. Lurasidone, on the other hand, improved performance.[17]

Evidence that Antipsychotics Shrink Brains

A meta-study of longitudinal MRI effects of antipsychotics on brain volumes found that ventricular volumes increased 7.7-10.9% in treated patients, compared to 1.4% in controls; and that gray-matter or whole-brain volume decreased 1.2-2.9% per year in patients compared to 0.4 to 1% in controls.  [Note that ventricles are fluid-filled spaces in the skull cavity: growing ventricles means a shrinking brain.]

But is this just the result of schizophrenia itself causing brain damage? The evidence suggests not. Two studies of drug-naive patients showed a decrease in brain volume after onset of antipsychotics; two showed no effect.  Studies of chronically ill, untreated patients in India show no difference in brain volume vs. controls.  There’s no difference in volume in the brains of “high-risk” (pre-psychosis) patients compared to controls, including in the subgroup that went on to develop psychosis.  It may be that the reduction in brain volume that has usually been associated with schizophrenia is instead caused by antipsychotic use.[14]

Longer-term and higher-dose use of antipsychotics is associated with more gray matter and white-matter shrinkage, even after adjusting for illness severity, substance abuse, and follow-up duration.[18]

Long-term (17-27 month) exposure to antipsychotics (olanzapine and haloperidol) in macaque monkeys resulted in an 8-11% reduction in brain weight.[19]

The fact that antipsychotic-naive schizophrenics do not show a progressive decline in brain volume, and the fact that treating macaques with antipsychotics does cause a decline in brain volume, has led psychiatrist Joanna Moncrieff to argue that antipsychotics do not exert a “neuroprotective” effect on schizophrenia, and that schizophrenia is not a degenerative brain disease — instead, she believes that antipsychotics treat symptoms and also cause much of the brain damage we observe in schizophrenics.[20]

Personal Views

Before I went to the literature on this, I had a pretty negative view on antipsychotics, heavily colored by the personal stories I’ve heard of them working out very badly, and the rare cases of severe side effects like neuroleptic malignant syndrome.  My view was also colored by the fact that they’re often used on children and psychiatric inpatients as a coercive mechanism, against the will of the patients, regardless of whether they have any medically beneficial effect. My sympathies are always going to be with the victims of coercion who have, in many cases, pleaded eloquently to just be let alone.

On the other hand, schizophrenia is really bad. And, from what I can tell, we can be quite confident that antipsychotics reduce the positive symptoms (delusions and hallucinations).  I can believe there are situations where the benefits outweigh the costs.

I think the evidence that antipsychotics, including atypical antipsychotics, can cause cognitive impairment is pretty compelling.

Do they cause net cognitive impairment in schizophrenics? I don’t know.  Maybe they reduce negative symptoms enough to balance out the cognitive impairment.

Do they cause irreversible cognitive impairment? I don’t know.  I don’t think we have human evidence of what happens to brains when people go off antipsychotics, or take a dopamine agonist.

Is taking antipsychotics worse, or better, than leaving psychosis untreated? Can you split the difference by taking lower dosages or getting off meds sooner? I have no idea how I would even begin to answer this question, and the answer probably depends a lot on individual values.

Here’s stuff that I do think is sensible to do (keep in mind, I am not a doctor or any kind of psych professional):

  • Think about the tradeoffs of antipsychotics before you have a psychotic break. 
    • In some states, including California, you can write up a psychiatric advance directive in which you can specify what to do in the event you lose your mind, including medications you are not to be given.
  • Look up psychiatric meds that you’re prescribed and see what they do.
    • Sometimes antipsychotics will be prescribed for things besides psychosis, like depression or Tourette’s. Sometimes they work for those things! But they have the same kinds of side effect risks that they always do, and doctors don’t always tell you that.  Abilify? Is an antipsychotic! It can cause extrapyramidal side effects! It’s surprisingly common for people not to get told things like this.
  • Don’t take a judgmental, one-size-fits-all attitude unless you have correspondingly incredible data.
    • “Always meds!” and “Never meds!” are terrible oversimplifications. We don’t know what’s going on yet, so in the meantime, all anyone can do is to try to make the best judgments they can under conditions of colossal uncertainty (and often great stress).


[1]Harvey, Philip D., et al. “Treatment of cognitive impairment in early psychosis: a comparison of risperidone and haloperidol in a large long-term trial.” American Journal of Psychiatry 162.10 (2005): 1888-1895.

[2]Keefe, Richard SE, et al. “Neurocognitive effects of antipsychotic medications in patients with chronic schizophrenia in the CATIE Trial.” Archives of general psychiatry 64.6 (2007): 633-647.

[3]Meltzer, Herbert Y., and Susan R. McGurk. “The effects of clozapine, risperidone, and olanzapine on cognitive function in schizophrenia.” Schizophrenia bulletin 25.2 (1999): 233-256.

[4]Nasrallah, H. “A review of the effect of atypical antipsychotics on weight.” Psychoneuroendocrinology 28 (2003): 83-96.

[5]Shirzadi, Arshia A., and S. Nassir Ghaemi. “Side effects of atypical antipsychotics: extrapyramidal symptoms and the metabolic syndrome.” Harvard Review of Psychiatry 14.3 (2006): 152-164.

[6]Miller, Del D., et al. “Extrapyramidal side-effects of antipsychotics in a randomised trial.” The British Journal of Psychiatry 193.4 (2008): 279-288.

[7]Leucht, Stefan, et al. “Second-generation versus first-generation antipsychotic drugs for schizophrenia: a meta-analysis.” The Lancet 373.9657 (2009): 31-41.

[8]Reilly, James L., et al. “Adverse effects of risperidone on spatial working memory in first-episode schizophrenia.” Archives of General Psychiatry 63.11 (2006): 1189-1197.

[9]Purdon, Scot E., et al. “Procedural learning in schizophrenia after 6 months of double-blind treatment with olanzapine, risperidone, and haloperidol.” Psychopharmacology 169.3-4 (2003): 390-397.

[10]Harris, Margret SH, et al. “Effects of risperidone on procedural learning in antipsychotic-naive first-episode schizophrenia.” Neuropsychopharmacology 34.2 (2009): 468-476.

[11]Soyka, Michael, et al. “Effects of haloperidol and risperidone on psychomotor performance relevant to driving ability in schizophrenic patients compared to healthy controls.” Journal of psychiatric research 39.1 (2005): 101-108.

[12]Scherer, Hélene, et al. “Procedural learning in schizophrenia can reflect the pharmacologic properties of the antipsychotic treatments.” Cognitive and behavioral neurology 17.1 (2004): 32-40.

[13]Waddington, J. L., et al. “Cognitive dysfunction in schizophrenia: organic vulnerability factor or state marker for tardive dyskinesia?.” Brain and cognition 23.1 (1993): 56-70.

[14]Moncrieff, J., and J. Leo. “A systematic review of the effects of antipsychotic drugs on brain volume.” Psychological medicine 40.09 (2010): 1409-1422.

[15]Castner, Stacy A., Graham V. Williams, and Patricia S. Goldman-Rakic. “Reversal of antipsychotic-induced working memory deficits by short-term dopamine D1 receptor stimulation.” Science 287.5460 (2000): 2020-2022.

[16]Bartus, Raymond T. “Short-term memory in the rhesus monkey: Effects of dopamine blockade via acute haloperidol administration.” Pharmacology Biochemistry and Behavior 9.3 (1978): 353-357.

[17]Murai, Takeshi, et al. “Effects of lurasidone on executive function in common marmosets.” Behavioural brain research 246 (2013): 125-131.

[18]Ho, Beng-Choon, et al. “Long-term antipsychotic treatment and brain volumes: a longitudinal study of first-episode schizophrenia.” Archives of general psychiatry 68.2 (2011): 128-137.

[19]Dorph-Petersen, Karl-Anton, et al. “The influence of chronic exposure to antipsychotic medications on brain size before and after tissue fixation: a comparison of haloperidol and olanzapine in macaque monkeys.” Neuropsychopharmacology 30.9 (2005): 1649-1661.

[20]Moncrieff, Joanna. “Questioning the ‘neuroprotective’ hypothesis: does drug treatment prevent brain damage in early psychosis or schizophrenia?” (2011): 85-87.

Dwelling in Possibility

Epistemic Status: Intuitive, Casual

One of the things I’ve noticed in people who are farther along in business or management than I am, usually men with a “leaderly” mien, is a certain comfort with uncertainty or imperfection.

They can act relaxed even when their personal understanding of a situation is vague, when the future is uncertain, when the optimal outcome is unlikely.  This doesn’t mean they’re not motivated to get things done.  But they’re cool with a world in which a lot of things remain nebulous and unresolved at any given moment.

They’re able to produce low-detail, high-level, positive patter for a general audience.  They’re able to remain skeptical, expecting that most new ideas won’t work, without seeming sad about that.

Talking to someone like that, it feels like a smooth layer of butter has been spread over the world, where everything is pretty much normal and fine most of the time — not a crisis, not a victory, just normalcy.

This isn’t me.  If something I care about is unclear to me, it’ll bother me: either consciously (in which case I’ll try to learn more until I understand) or unconsciously (in which case it’ll be an unpleasant blank spot on my map that nags at me uncomfortably).

It also bothers me, as a radical, when I don’t see a path to my long-term goals being possible.  “Business as usual” feels not okay to me, much of the time.  I don’t like having a “forget about it, it’s Chinatown” attitude.  I don’t want to be a naive idiot, but I don’t want to be complacent either.


Being okay with vagueness seems to be a prerequisite for managing other people — after all, you can’t know every detail of everyone else’s job.  When I managed people, I struggled with that a lot. I couldn’t be sure a thing was done right unless I checked it for myself.  I’m pretty good at holding large systems in my head, but eventually organizations defeat even the most heroic attempt to micromanage them.

Being okay with uncertainty also seems to be a prerequisite for managing a portfolio of anything high-risk and high-reward — investments, sales leads, technologies to adopt, etc.  If you are elated every time an opportunity appears, and dejected every time it doesn’t work out, you’ll have a very hard time emotionally when dealing with a large volume of such opportunities. (My husband is a salesman and he’s long since stopped telling me about leads because I’ll get over-excited about every one of them.)

This reminds me of some of the stuff leadership coach Bryan Franklin says about paradox.  I don’t know if I can represent his ideas accurately, since he comes from a very different paradigm than mine, but I think he’s alluding to “both/and” thinking, the ability to simultaneously hold, for instance, the frame “this business is bound for incredible success” and “this business will fail unless we solve this problem.”

Consider the common example of a leader who needs to convince her followers that, while the team is experiencing significant challenges and there is a very real risk of failure, ultimately the team will prevail. There are two ways a lesser leader could falter in this moment. The first is to simply pander to negativism: agreeing with everyone’s feeling that the current situation is rough or hopeless, without offering any vision, possibility, or credible plan. This would be a good display of empathy, but it won’t lead anyone to change. The second mistake would be to hold the opposite view, that the future is bright and the current setbacks are illusory or insignificant. This could be seen superficially as inspiring, but more likely it will backfire because it will be dismissed as being noncredible and unrelatable to the lived reality of the employees.

A superior leader learns how to hold paradox: to believe, at the same time, that the situation is dire and hopeful, meeting employees where they’re at, but also convincing them of the actions they can take that will lead to a brighter future. The evidence is that things are bad (anyone denying this will be seen as a Pollyanna); and also, the evidence is that things are good (anyone denying this would be seen as a weak leader, lacking creativity to produce a positive way forward). Followers need to feel met in the reality that they are scared, yet they also need to be given a realistic expectation of future success.

When you’re confronted with a paradox, you are presented with a choice. You can either ignore it and take a side (believe one side of the statement is true while the other is false), or you can do what we call hold paradox, which is to believe both contradictory statements or implications simultaneously. It’s an expression of faith in a greater truth that is currently invisible to you, but resolves the paradox and allows for the truth of both sides to harmoniously coexist. This is what great leaders do.

Holding paradox is the ability to literally hold in your mind the truth and acknowledge, for example, your utter insignificance on a cosmic scale, and then without allowing that experience to dissipate, add to it the unmistakable truth of your profound significance to those you love.

Believing a literal paradox is believing something that is logically impossible, and so, obviously, I don’t want to do it.  But believing in lots of different possibilities at the same time, believing that a thing can be viewed from lots of different points of view — there might be some purchase in that.

The real world is parti-colored. It doesn’t have a single theme or mood or color scheme.  But to really know in your bones that lots of different things are possible is deeply scary to me. It feels like letting go of things that are important to me, like commitment or ambition or rigor or even personal identity. If I care about something, how can I allow myself to chill out about it? How can I allow myself to fully enter into the worldview of someone with the opposite belief?  Wouldn’t that be a betrayal? Wouldn’t that mean losing myself?

There’s a common thread between this notion and the views of people like Jonathan Haidt, who believe in worldview diversity, and people at the Integral Center, who believe that higher human developmental stages involve the ability to move fluidly between frames, and who sometimes connect this to business through books like Tribal Leadership.

All of them share a view that the principled or systematic person — the person who believes in one truth according to one set of principles — is weaker or less spiritually advanced than the person who sees things through multiple points of view.

In particular, one idea I picked up from Tribal Leadership is that if you believe a particular thing as an individual, you’ll be a weaker leader, because you’re just saying what you personally believe (which is selfish, in a sense, or at least private, and thus taken less seriously by others).  The leader has to be not just John Smith but the voice of Acme Corp.  Expressing your own thoughts (speaking as John Smith) has value coming from an individual contributor, but there’s a different, more facilitator-like, skill where you try to encourage dialogue or distill a common thread between different people’s views, and encourage teamwork and unity — and that’s leadership.

What I worry about, in all these kinds of philosophies, is that if I gain this balance, this ability to stay cool in the face of uncertainty and ignorance, this ability to engage with multiple perspectives, then I’ll no longer be able to be an individual with a particular point of view and set of goals and detailed knowledge of my areas of expertise.

Is it possible to love something, or pursue something, without freaking out about it?  Doesn’t equanimity trade off against passion?  Wouldn’t a person who kept their cool all the time be boring?  Wouldn’t a person who tried to “diversify worldviews” be inherently an unprincipled pragmatist?  Doesn’t the chillness of leaders sometimes look uncomfortably like privilege or elitism?  Isn’t there a lot of potential for manipulativeness when people try to “facilitate” for others?

And yet, pretty much every book about business success counsels equanimity — to a very high standard, from what I can see.  Seriously, flip through something like Emotional Intelligence 2.0 the next time you’re in an airport bookstore. Apparently ordinary middle managers have to be unbelievably good at handling emotional stress just to scrape by.

I have definitely seen chillness coexist with strong technical skill; quite a few people with that relaxed, leaderly affect are also top-notch at engineering or data science.  Accepting that some information “lives” in the “collective mind” of a group clearly doesn’t preclude knowing some things very well in your own mind and being able to execute well individually.

I’ve even seen a certain kind of chillness coexist with radical commitment. Rick Doblin, the founder of MAPS, has been steadily working for forty years on trying to promote research into the therapeutic use of psychedelics.  He’s a pleasant, mild-mannered family man; despite his controversial mission, he seems to bear no ill will to anyone, including the regulators he’s been trying to persuade to ease up drug restrictions.  He’s willing to collaborate with anyone, from any perspective or background, if it’ll help psychedelic research.  He’s my role model for how someone can be profoundly committed to a cause without being an angry or rigid person.  His way is like water wearing down a stone.

But I definitely have heard people tell me that equanimity cost them something — that they lost the chance to have a personal perspective and to want things for themselves when they learned to see things from all possible angles and be a facilitator for others.  I’ve seen people who are very good at sparking “interesting” conversations complain that they have a hard time connecting personally rather than remaining a third-party observer.

I’ve had occasions myself when I deliberately “took myself out of the picture” in order to hold space for others — and it worked pretty well, and was fun in its own way, and people responded well to it, but I had a strong intuition that this wasn’t what I wanted to spend the majority of my life doing.  I have a self, and it’s not going to like being cooped up forever.

So I’m genuinely uncertain.  Maybe leadership is fundamentally incompatible with stuff I want to keep? Maybe I just have hangups about harmless stuff, or resistance against working on things I’m naturally not good at? I wonder what older and more accomplished people would think about this issue.