In Defense of Individualist Culture

Epistemic Status: Pretty much serious and endorsed.

College-educated Western adults in the contemporary world mostly live in what I’d call individualist environments.

The salient feature of an individualist environment is that nobody directly tries to make you do anything.

If you don’t want to go to class in college, nobody will nag you or yell at you to do so. You might fail the class, but this is implemented through a letter you get in the mail or on a registrar’s website.  It’s not a punishment, it’s just an impersonal consequence.  You can even decide that you’re okay with that consequence.

If you want to walk out of a talk in a conference designed for college-educated adults, you can do so. You will never need to ask permission to go to the bathroom. If you miss out on the lecture, well, that’s your loss.

If you slack off at work, in a typical office-job environment, you don’t get berated. And you don’t have people watching you constantly to see if you’re working. You can get bad performance reviews, you can get fired, but the actual bad news will usually be presented politely.  In the most autonomous workplaces, you can have a lot of control over when and how you work, and you’ll be judged by the results.

If you have a character flaw, or a behavior that bothers people, your friends might point it out to you respectfully, but if you don’t want to change, they won’t nag, cajole, or bully you about it. They’ll just either learn to accept you, or avoid you. There are extremely popular advice columns that try to teach this aspect of individualist culture: you can’t change anyone who doesn’t want to change, so once you’ve said your piece and they don’t listen, you can only choose to accept them or withdraw association.

The basic underlying assumption of an individualist environment or culture is that people do, in practice, make their own decisions. People believe that you basically can’t make people change their behavior (or, that techniques for making people change their behavior are coercive and thus unacceptable.)  In this model, you can judge people on the basis of their decisions — after all, those were choices they made — and you can decide they make lousy friends, employees, or students.  But you can’t, or shouldn’t, cause them to be different, beyond a polite word of advice here and there.

There are downsides to these individualist cultures or environments.  It’s easy to wind up jobless or friendless, and you don’t get a lot of help getting out of bad situations that you’re presumed to have brought upon yourself. If you have counterproductive habits, nobody will guide or train you into fixing them.

Captain Awkward’s advice column is least sympathetic to people who are burdens on others — the depressive boyfriend who needs constant emotional support and can’t get a job, the lonely single or heartbroken ex who just doesn’t appeal to his innamorata and wants a way to get the girl.  His suffering may be real, and she’ll acknowledge that, but she’ll insist firmly that his problems are not others’ job to fix.  If people don’t like you — tough! They have the right to leave.

People don’t wholly “make their own decisions”.  We are, to some degree, malleable, by culture and social context. The behaviorist or sociological view of the world would say that individualist cultures are gravely deficient because they don’t put any attention into setting up healthy defaults in environment or culture.  If you don’t have rules or expectations or traditions about food, or a health-optimized cafeteria, you “can” choose whatever you want, but in practice a lot of people will default to junk.  If you don’t have much in the way of enforcement of social expectations, in practice a lot of people will default to isolation or antisocial behavior. If you don’t craft an environment or uphold a culture that rewards diligence, in practice a lot of people will default to laziness.  “Leaving people alone”, says this argument, leaves them in a pretty bad place.  It may not even be best described as “leaving people alone” — it might be more like “ripping out the protections and traditions they started out with.”

Lou Keep, I think, is a pretty good exponent of this view, and summarizer of the classic writers who held it. David Chapman has praise for the “sane, optimistic, decent” societies that are living in a “choiceless mode” of tradition, where people are defined by their social role rather than individual choices.  Duncan Sabien is currently trying to create a (voluntary) intentional community designed around giving up autonomy in order to be trained/social-pressured into self-improvement and group cohesion.  There are people who actively want to be given external structure as an aid to self-mastery, and I think their desires should be taken seriously, if not necessarily at face value.

I see a lot of writers these days raising problems with modern individualist culture, and it may be an especially timely topic. The Internet is a novel superstimulus, and it changes more rapidly, and affords people more options, than ever before.  We need to think about the actual consequences of a world where many people are in practice being left alone to do what they want, and clearly not all the consequences are positive.

But I do want to suggest some considerations in favor of individualist culture — that often-derided “atomized modern world” that most of us live in.

We Aren’t Clay

It’s a common truism that we’re all products of our cultural environment. But I don’t think people have really put together the consequences of the research showing that it’s not that easy to change people through environmental cues.

  • Behavior is very heritable. Personality, intelligence, mental illness, and social attitudes are all well established as being quite heritable.  The list of the top ten most replicated findings in behavioral genetics starts with “all psychological traits show significant and substantial genetic influence”, which Eric Turkheimer has called the “First Law of behavioral genetics.”  A significant proportion of behavior is also explained by “nonshared environment”, which means it isn’t genetic and isn’t a function of the family you were raised in; it could include lots of things, from peers to experimental error to individual choice.
  • Brainwashing doesn’t work. Cult attrition rates are high, and “brainwashing” programs of POWs by the Chinese after the Korean War didn’t result in many defections.
  • There was a huge boom in the 1990’s and 2000’s in “priming” studies — cognitive-bias studies that showed that seemingly minor changes in environment affected people’s behavior.  A lot of these findings didn’t replicate. People don’t actually walk slower when primed with words about old people. People don’t actually make different moral judgments when primed with words or videos of cleanliness or disgusting bathrooms.  Being primed with images of money doesn’t make people more pro-capitalist.  Girls don’t do worse on math tests when primed with negative stereotypes. Daniel Kahneman himself, who publicized many of these priming studies in Thinking, Fast and Slow, wrote an open letter to priming researchers warning that they’d have to start replicating their findings or lose credibility.
  • Ego depletion failed to replicate as well; using willpower doesn’t make you too “tired” to use willpower later.
  • The Asch Conformity Experiment was nowhere near as extreme as casual readers generally think: the majority of people didn’t change their answers to wrong ones to conform with the crowd, only 5% of people always conformed, and 25% of people never conformed.
  • The Sapir-Whorf Hypothesis has generally been found to be false by modern linguists: the language one speaks does not determine one’s cognition. For instance, people who speak a language that uses a single word for “green” and “blue” can still visually distinguish the colors green and blue.

Scott Alexander said much of this before, in Devoodooifying Psychology.  It’s been popular for many years to try to demonstrate that social pressure or subliminal cues can make people do pretty much anything.  This seems to be mostly wrong.  The conclusion you might draw from the replication crisis along with the evidence from behavioral genetics is “People aren’t that easily malleable; instead, they behave according to their long-term underlying dispositions, which are heavily influenced by inheritance.”  People may respond to incentives and pressures (the Milgram experiment replicated, for instance), but not to trivial external pressures, and they can actually be quite resistant to pressure to wholly change their lives and values (becoming a cult member or a Communist.)

Those who study culture think that we’re all profoundly shaped by it, and to some extent that may be true. But not as much, or as easily, as social scientists tend to assume.  The idea of mankind as arbitrarily malleable is an appealing one to marketers, governments, therapists, or anyone who hopes that it’s easy to shift people’s behavior.  But this doesn’t seem to be true.  It might be worth rehabilitating the notion that people pretty much do what they’re going to do.  We’re not just swaying in the breeze, waiting for a chance external influence to shift us. We’re a little more robust than that.

People Do Exist, Pretty Much

People try to complicate the notion of “person” — what is a person, really? Do individuals even exist?  I would argue that a lot of this is not as true as it sounds.

A lot of theorists suggest that people have internal psychological parts (Plato, Freud, Minsky, Ainslie) or are part of larger social wholes (Hegel, Heidegger, lots and lots of people I haven’t read).  But these, while suggestive, are metaphors and hypotheses. The basic, boring fact, usually too obvious to state, is that most of your behavior is proximately caused by your brain (except for reflexes, which are controlled by your spinal cord.)  Your behavior is mostly due to stuff inside your body; other people’s behavior is mostly due to stuff inside their bodies, not yours.  You do, in fact, have much more control over your own behavior than over others’.

“Person” is, in fact, a natural category; we see people walking around and we give them names and we have no trouble telling one person apart from another.

When Kevin Simler talks about “personhood” being socially constructed, he means a role, like “lady” or “gentleman”: the default assumptions that are made about people in a given context. This is a social phenomenon — of course it is, by design!  He’s not literally arguing that there is no such entity as Kevin Simler.

I’ve seen Buddhist arguments that there is no self, only passing mental states.  Derek Parfit has also argued that personal identity doesn’t exist.  I think that if you weaken the criterion of identity to statistical similarity, you can easily say that personal identity pretty much exists.  People pretty much resemble themselves much more than they resemble others. The evidence for the stability of personality across the lifespan suggests that people resemble themselves quite a bit, in fact — different timeslices of your life are not wholly unrelated.

Self-other boundaries can get weird in certain mental conditions: psychotics often believe that someone else is implanting thoughts inside their heads, people with DID have multiple personalities, and some kinds of autism involve a lot of suggestibility, imitation, and confusion about what it means to address another person.  So it’s empirically true that the sense of identity can get confused.

But that doesn’t mean that personal identity doesn’t usually work in the “normal” way, or that the normal way is an arbitrary convention. It makes sense to distinguish Alice from Bob by pointing to Alice’s body and Bob’s body.  It’s a distinction that has a lot of practical use.

If people do pretty much exist and have lasting personal characteristics, and are not all that malleable by small social or environmental influences, then modeling people as individual agents who want things isn’t all that unreasonable, even if it’s possible for people to have inconsistent preferences or be swayed by social pressure.

And cultural practices which acknowledge the reality that people exist — for example, giving people more responsibility for their own lives than they have over other people’s lives — therefore tend to be more realistic and attainable.

 

How Ya Gonna Keep Em Down On The Farm

Traditional cultures are hard to keep, in a modern world.  To be fair, pro-traditionalists generally know this.  But it’s worth pointing out that ignorance is inherently fragile.  As Lou Keep points out, beliefs that magic can make people immune to bullets can be beneficial, as they motivate people to pull together and fight bravely, and thus win more wars. But if people find out the magic doesn’t work, all that benefit gets lost.

Is it then worth protecting gri-gri believers from the truth?  Or protecting religious believers from hearing about atheism?  Really? 

The choiceless mode depends on not being seriously aware that there are options outside the traditional one.  Maybe you’ve heard of other religions, but they’re not live options for you. Your thoughts come from inside the tradition.

Once you’re aware that you can pick your favorite way of life, you’re a modern. Sorry. You’ve got options now.

Which means that you can’t possibly go back to a premodern mindset unless you are brutally repressive about information about the outside world, and usually not even then.  Thankfully, people still get out.

Whatever may be worth preserving or recreating about traditional cultures, it’s going to have to be aspects that don’t need to be maintained by forcible ignorance.  Otherwise it’ll have a horrible human cost and be ineffective.

Independence is Useful in a Chaotic World

Right now, anybody trying to build a communitarian alternative to modern life is in an underdog position.  If you take the Murray/Putnam thesis seriously — that Americans have less social cohesion now than they did in the mid-20th century, and that this has had various harms — then that’s the landscape we have to work with.

Now, that doesn’t mean that communitarian organizations aren’t worth building. I participate in a lot of them myself (group houses, alloparenting, community events, mutual aid, planning a homeschooling center and a baugruppe).  Some Christians are enthusiastic about a very different flavor of community participation and counterculture-building called the Benedict Option, and I’m hoping that will work out well for them.

But, going into such projects, you need to plan for the typical failure modes, and the first one is that people will flake a lot.  You’re dealing with moderns! They have options, and quitting is an option.

The first antidote to flaking that most people think of — building people up into a frenzy of unanimous enthusiasm so that it doesn’t occur to them to quit — will probably result in short-lived and harmful projects.

Techniques designed to enhance group cohesion at the expense of rational deliberation — call-and-response, internal jargon and rituals, cults of personality, suppression of dissent — will feel satisfying to many who feel the call of the premodern, but aren’t actually that effective at retaining people in the long term.  Remember, brainwashing isn’t that strong.

And we live in a complicated, unstable world.  When things break, as they will, you’d like the people in your project to avoid breaking.  That points in the direction of valuing independence. If people need a leader’s charisma to function, what are they going to do if something happens to the leader?

Rewarding Those Who Can Win Big

A traditionalist or authoritarian culture can help people by guarding against some kinds of failure (families and churches can provide a social safety net, rules and traditions can keep people from making mistakes that ruin their lives), but it also constrains the upside, preventing people from creating innovations that are better than anything within the culture.

An individualist culture can let a lot of people fall through the cracks, but it rewards people who thrive on autonomy. For every abandoned and desolate small town with shrinking economic opportunity, there were people who left that small town for the big city, people whose lives are much better for leaving.  And for every seemingly quaint religious tradition, there are horrible abuse scandals under the surface.  The freedom to get out is extremely important to those who aren’t well-served by a traditional society.

It’s not that everything’s fine in modernity. If people are getting hurt by the decline of traditional communities — and they are — then there’s a problem, and maybe that problem can be ameliorated.

What I’m saying is that there’s a certain kind of justice that says “at the very least, give the innocent and the able a chance to win or escape; don’t trade their well-being for that of people who can’t cope well with independence.”  If you can’t end child abuse, at least let minors run away from home. If you can’t give everybody a great education, at least give talented broke kids scholarships.  Don’t put a ceiling on anybody’s success.

Immigrants and kids who leave home by necessity (a lot of whom are LGBT and/or abused) seem to be rather overrepresented among people who make great creative contributions.  “Leaving home to seek your freedom and fortune” is kind of the quintessential story of modernity.  We teach our children songs about it.  Immigration and migration are where a lot of the global growth in wealth comes from.  It was my parents’ story — an immigrant who came to America and a small-town girl who moved to the city.  It’s also inherently a pattern that disrupts traditions and leaves small towns with shrinking populations and failing economies.

Modern, individualist cultures don’t have a floor — but they don’t have a ceiling either. And there are reasons for preferring not to allow ceilings. There’s the justice aspect I alluded to before — what is “goodness” but the ability to do valuable things, to flourish as a human? And if some people are able to do really well for themselves, isn’t limiting them in effect punishing the best people?

Now, this argument isn’t an exact fit for real life.  It’s certainly not the case that everything about modern society rewards “good guys” and punishes “bad guys”.

But it works as a formal statement. If the problem with choice is that some people make bad choices when not restricted by rules, then the problem with restricting choice is that some people can make better choices than those prescribed by the rules. The situations are symmetrical, except that in the free-choice scenario, the people who make bad choices lose, and in the restricted scenario, the people who make good choices lose.  Which one seems more fair?

There’s also the fact that in the very long run, only existence proofs matter.  Does humanity survive? Do we spread to the stars?  These questions are really about “do at least some humans survive?”, “do at least some humans develop such-and-such technology?”, etc.  That means allowing enough diversity or escape valves or freedom so that somebody can accomplish the goal.  You care a lot about not restricting ceilings.  Sure, most entrepreneurs aren’t going to be Elon Musk or anywhere close, but if the question is “does anybody get to survive/go to Mars/etc”, then what you care about is whether at least one person makes the relevant innovation work.  Playing to “keep the game going”, to make sure we actually have descendants in the far future, inherently means prioritizing best-case wins over average-case wins.

Upshots

I’m not arguing that it’s never a good idea to “make people do things.”  But I am arguing that there are reasons to be hesitant about it.

It’s hard to make people do what you want; you don’t actually have that much influence in the long term; people in their healthy state generally are correctly aware that they exist as distinct persons; surrendering judgment or censoring information is pretty fragile and unsustainable; and restricting people’s options cuts off the possibility of letting people seek or create especially good new things.

There are practical reasons why “leave people alone” norms became popular, despite the fact that humans are social animals and few of us are truly loners by temperament.

I think individualist cultures are too rarely explicitly defended, except with ideological buzzwords that don’t appeal to most people. I think that a lot of pejoratives get thrown around against individualism, and I’ve spent a lot of time getting spooked by the negative language and not actually investigating whether there are counterarguments.  And I think counterarguments do actually exist, and discussion should include them.

 

 

Regulatory Arbitrage for Medical Research: What I Know So Far

Epistemic status: pretty ignorant. I’m sharing now because I believe in transparency.

I’ve been interested in the potential of regulatory arbitrage (that is, relocating to less regulated polities) for medical research for a while. Getting drugs or devices FDA-approved is expensive and extremely slow.  What if you could speed it up by going abroad to do your research?

I talked to some people who work in the field, and so far this is my distillation of what I got out of those conversations.  It’s a very rough draft and I expect to learn more.

Q: Why don’t pharma companies already run trials in developing countries?

A: They do! A third of clinical trials run by US-based pharma companies are outside the US, and that number is rapidly growing — a more than 2000% increase over the past two decades. Labor costs in India, China, and Russia are much lower, and it’s easier to recruit participants in countries where a clinical trial may be the only chance people have to get access to the latest treatments.

But in order to sell to American markets, those overseas trials still have to be conducted to FDA standards (with correspondingly onerous reporting requirements.) Many countries, like China, are starting to harmonize their regulatory standards with the FDA.  It’s not the Wild West.

Q: Ok, but why not sell drugs to foreign countries and bypass the US entirely?

A: The US is by far the biggest pharmaceutical market. As of 2014, US sales made up about 38% of global pharmaceutical sales; the European market was about 31%, and is roughly as tightly regulated. The money in pharma comes from selling to the developed world, which has strict standards for demonstrating safety and efficacy.

Q: Makes sense. But why not run cheap, preliminary, unofficial trials just to confirm for yourself whether drugs work, before investing in bigger and more systematic FDA-compliant trials for the successful ones?

A: I don’t know for sure, but it seems like pharma companies are generally not very interested in choosing their drug portfolio based on the likely efficacy of early-stage drug candidates.  When I’ve tried to do research into how they decide which drug candidates to pursue through clinical trials, what I found was that there’s a lot of portfolio management: mathematical models, sometimes quite complex, based on discounted cash flow analysis.  A drug candidate is treated as a random variable which has some distribution over future returns, based on the market size and the average success rate of trials.
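The flavor of model described here can be sketched in a few lines. This is a hypothetical illustration, not any real company’s model: the candidate names, success probability, and dollar figures are all made up, and the single industry-average success rate is the point — nothing specific to a given candidate’s likely efficacy appears anywhere in the formula.

```python
# Hypothetical sketch of a risk-adjusted discounted-cash-flow ("rNPV")
# portfolio model of the kind described above. All figures are invented.

def risk_adjusted_npv(peak_annual_sales, years_on_market, trial_cost,
                      p_success=0.08, discount_rate=0.10, years_to_market=8):
    """Expected discounted value of one drug candidate.

    p_success is a flat industry-average rate (roughly matching the
    claim that ~92% of preclinical candidates fail in humans); it is
    applied identically to every candidate.
    """
    # Revenue stream, discounted back from the (distant) launch date
    revenue = sum(
        peak_annual_sales / (1 + discount_rate) ** (years_to_market + t)
        for t in range(years_on_market)
    )
    # Trial costs are sunk whether or not the drug succeeds
    return p_success * revenue - trial_cost

# A made-up portfolio, ranked purely by market size and trial cost:
candidates = {
    "oncology_A": risk_adjusted_npv(2_000e6, 10, 300e6),
    "cardio_B": risk_adjusted_npv(500e6, 10, 150e6),
    "rare_dz_C": risk_adjusted_npv(150e6, 10, 40e6),
}
ranked = sorted(candidates, key=candidates.get, reverse=True)
```

Under these invented numbers, only the biggest market clears its trial costs, and the ranking is driven entirely by market size, cost, and a shared base rate — which is exactly the decision procedure the conversations described.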

What doesn’t seem to be involved in the decision-making process is analysis of which drug candidates are more likely to succeed in trials than others. Most drug candidates don’t work: 92% of preclinical drug candidates fail to be efficacious when tested in humans, and that attrition rate is only growing.  As clinical trials grow more expensive, failed trials are a serious and increasing drag on the pharma industry, but I’m not sure there’s interest in trying to cut those costs by choosing drug candidates more selectively.

On the few occasions when I’ve tried to pitch to large pharma companies the idea of trying to “pick winners” among early-stage drugs based on data analysis (of preclinical results, the past performance of the drug class, whatever), the idea was rejected.

Investors in biotech startups, of course, do try to pick winners among preclinical drug candidates; but an investor told me that, based on his experience, it wouldn’t be much easier to raise money if you had a successful but non-FDA-compliant preliminary human trial than if you had no human trials at all.

My impression is that (perhaps as a rational reaction to high rates of noise or fraud) decisionmakers in the industry aren’t very interested in making bets based on weak or preliminary evidence, and tend to round it down to no evidence at all.

Q: So are there any options left for trying to do medical research outside of an onerous regulatory environment?

A: Well, one option is legal exemptions. For example, the FDA’s Rare Disease Program can offer faster options for reviewing applications for a drug candidate that treats a life-threatening disease where no adequate treatment exists.

Another option is selling supplements, which do not need FDA approval. You need to make sure they’re safe, you can’t sell controlled substances, and you can’t claim that supplements treat any disease, but other than that, for better or worse, you can sell what you want.  One company, Elysium Health, is actually trying to develop serious anti-aging therapies and market them as supplements; Leonard Guarente, one of the pioneers of geroscience and the head of MIT’s aging lab, is the co-founder.

The problem with supplements, of course, is that you can’t sell them as treatments. Aging isn’t legally a disease, and the FDA is not approving anti-aging therapies, so Elysium’s model makes sense. But if you had a cure for cancer, you’d have a hard time selling it as a supplement without running afoul of the law.

There’s also medical tourism, which is a $10bn industry as of 2012, and expected to reach $32bn by 2019.  Most medical tourism is for conventional medical procedures, especially cosmetic surgery and dentistry, as customers seek cheaper options abroad.  Sometimes there are also experimental procedures, like stem cell therapies, though a lot of those are fraudulent and dangerous.  It might be possible to open a high-quality translational-research clinic in a developing country, and eventually collect enough successful results to advertise it globally as a medical tourism destination.  The key challenge, from what people in the field tell me, is to get the official blessing of the local government.

Q: Could you do it on a ship?

A: Maybe, but it would be hard.

Yes, technically international waters are not under any country’s jurisdiction.  But if a government really doesn’t want you doing your thing, they can still stop you. Pirate radio (unlicensed radio broadcasting from ships in international waters) was technically legal in the 1960’s, when it was very popular in the UK, but by 1967 the legal loophole had been shut down.

Also, ships are in the water. If you compare a cruise ship to a building of equivalent square-footage, the ship needs to be staffed with people with nautical expertise, and it needs more regular maintenance.  In most situations, I’d expect it to be much more expensive to run a ship clinic than a land clinic.

There’s also the sobering example of BlueSeed, which was to be a cruise ship where international entrepreneurs could live and work in international waters, without the need for a US visa. It was put “on hold” in 2013 due to lack of investor funding.  And, obviously, a “floating condo/office” is a much easier goal than a “floating clinic.”

Q: Would cryptocurrencies help?

A: Noooooo. No no no no no.

You’re probably thinking about black markets, which are risky in themselves; and anyway, cryptocurrencies do not help with black markets because they are not anonymous.

Bitcoin helpfully points out that Bitcoin is not anonymous.  It is incredibly not anonymous.  It is literally a public record of all your transactions.  Ross Ulbricht of Silk Road, tragically, didn’t understand this.

Q: So, can regulatory arbitrage work?

A: It’s definitely not trivial, but I haven’t ruled it out yet. The medical tourism model currently seems like the most promising method.

I think that transparency would be essential to any big win — yes, there’s lots of shady gray-market stuff out there, but even aside from ethical concerns, if you have to fly under the radar, it’s hard to grow big.  If you’re doing clinical research, it’s impossible to get anything done unless you’re transparent with the scientific community.  If you’re trying to push medical tourism towards the mainstream, you have to inspire trust in patients.  Controversy is inevitable, but if a model like this can work at all, the results would have to be good enough to speak for themselves.

Miscellany

  1. I have a Twitter feed. It’s just journal articles (and commentary on them), I don’t use it as a social network, but if you want to see what’s on my mind, check it out.  For instance, what are the implications if most polygenic traits are affected by nearly all genes?
  2. I quit cross-posting to LessWrong because the discussion didn’t seem that good and I didn’t have the energy to single-handedly try to shift the flow. That may be changing now that they’re setting up a new, more troll-proof website, now in private beta. I’ll see how it goes and link when it’s open to the public.
  3. I highly recommend Lapham’s Quarterly, a magazine that brings together excerpts from historical and contemporary writers on a common theme. It’s an easy way to get some perspective, since we live in a really ahistorical culture.
  4. Elizabeth of Aceso Under Glass is now trying to go pro with her writing and research:

    My passion is the things I do for this blog- research, modeling, writing.  So obviously a lot of my newfound free time will go here.  But I’d also like to look for paid opportunities to use those skills.  If you are or know of someone who needs writing or research like I do for this blog (deep scientific investigation, synthesizing difficult sources into something easy to read, effectiveness analysis, media reviews, all of these together), please reach out to me via elizabeth at this domain.   Have a thing you really want me to blog about?  Now’s a good time to ask.

If you like my lit reviews, and want to commission someone to research the answer to a question, go to Elizabeth. She’s excellent, and she actually has the time and opportunity to do freelance work, which I currently don’t.

Momentum, Reflectiveness, Peace

Epistemic Status: Personal

I’ve been writing a lot lately about the mental habits that make calm and reflection possible. This is because a lot of “rationality” seems to depend on dispositions — things like the propensity to question your first assumptions, seek new information, examine evidence in a fair or dispassionate manner, and so on.  It’s very difficult to be motivated towards reflective behavior if you’re so upset that the mental motion of “stop and think” is impossible for you.  Knowing about cognitive biases isn’t much use if you don’t want to do anything except your default reactions to stimuli.

Reflectiveness, I think, is simply the capacity to question, “Is this what I want to be doing?”  The opposite of reflectiveness is momentum: when you feel like “whatever I happen to be doing, I want to keep doing it, good and hard!”  Reflectiveness is “Hmm, could things be otherwise than they are?” Momentum is “Things shall be exactly as they are! Except more so!”

Social media feedback loops are an example of momentum. You happened to start fooling around on social media, so you want to continue.  Similarly, you notice that something is beginning to trend, so you want to jump on the trend and ride it higher.  This is momentum in the sense of the momentum term in a stochastic process.
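As a toy illustration of that analogy (mine, and purely illustrative): a random walk whose increments carry a momentum term, so that whatever direction things happened to start moving, they tend to keep moving.

```python
import random

def walk(steps, momentum=0.0, seed=0):
    """Random walk where each step is fresh noise plus a fraction
    of the previous step (the momentum term)."""
    rng = random.Random(seed)
    position, last_step = 0.0, 0.0
    path = []
    for _ in range(steps):
        step = momentum * last_step + rng.gauss(0, 1)
        position += step
        last_step = step
        path.append(position)
    return path

# Same underlying noise, very different trajectories: with momentum
# near 1, early accidents get amplified into long-lived trends.
trending = walk(200, momentum=0.9)
reflective = walk(200, momentum=0.0)
```

The zero-momentum process stays loosely centered, so course corrections are cheap; the high-momentum one wanders much farther from wherever it started, on the strength of nothing but its own history.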

I suspect that psychological reactance and momentum are linked. When you think, “whatever I’m doing, I don’t want to change, and if you suggest I change, I’ll only do it more!” there’s something of a momentum flavor.

“Do whatever is being done, but more so” is what Michele Reilly calls “pragmatism”:

Pragmatism creates a call for conformity, implicit pressure for agreement and unquestioning support for whatever is representative of power. Its philosophy is a submission to threats.  Intellectualism, as I am using the term, points directly away from those things.

Reflectiveness, then, is “consider what is not being done, what is not representative of power, what is not in agreement with the default.”  Consider deviations and alternatives and original approaches.  Consider whether the current direction of society might not be optimal. Consider whether what you’re doing might not be for the best.  Consider whether the last thing you read might not be correct. Consider whether to turn in a different direction.

This is the mental motion of “stop, think, ask a question.”

As I understand it, it is similar to sattva, the peaceful, aware state of mind.  Like air, it is mobile; it can change direction.  Like air, it is light; it feels mildly pleasant to be intellectually engaged.

But getting to reflectiveness is often scary and threatening. If you really want something at the moment, you have to let go long enough to think about “do I want to want this?” If you are doing something at the moment, you have to stop long enough to think about “do I want to do this?”  And if you had to change your behavior, or change an entire chunk of the world, that would be a lot of work.  The prospect of extra work, or of stepping back from the object of your present desire, is really stressful.

My current hack towards reflectiveness is to simply start with the stop.

Rest is the first thing. Sleep deprivation makes people more emotionally reactive and less reflective.  I found that a day of focused rest — when I deliberately spent all day sleeping whenever I wanted, eating as much as I wanted, quietly daydreaming or meditating without talking to anyone or consuming any media, and focusing on regaining a sense of wellness and satiety — was really helpful.

A related thing is cultivating a sort of contentment. “All is well, literally everything is fine, I don’t have to do anything except be.  Everything can be left in peace.”

I know that there are a lot of problems with contentment, if I were to present it as a totalizing philosophy. Lots of people are not fine. Many things are worth doing. Eternal apathy isn’t most people’s idea of a great life plan.

But I’m not thinking of contentment as the whole of one’s life or mind. I’m thinking of it as a base. There is a very low-level sense of “things are all right, I can rest and be nourished, I am welcome in the universe” that I think is probably important for living things.  And to cultivate that base, sometimes you have to stop doing things and rest your body and mind.  You don’t have to do anything right now. No obligations bind. You can rest in peace and freedom.

And out of that restful state, sometimes reflectiveness becomes more accessible. For instance, if you believe you don’t have an obligation to act on a particular idea you read about, you can begin to merely consider it, abstractly, hypothetically. With a certain airy gentleness.

(In a weird way, I think this may be akin to Kant’s notion of public reason. He says that in a state with a sovereign strong enough that one can be certain that mere intellectual discussion of reforms won’t lead to revolution, it becomes possible to actually achieve “enlightened” reforms, slowly and over time, whereas revolutions tend to merely replace one form of arbitrary power with another.  Similarly, if you can merely consider an idea intellectually, while temporarily promising yourself that you don’t have to do anything about it, then in the long run you might become more able to change your behavior on the basis of such reflections.)

Cultivating this sense of restful, contented peace made it more possible for me to engage with ideas without feeling pressured to agree with them.  If lots of alternatives are possible, but none are obligatory, then entertaining hypothetical concepts is a rather gossamer-light experience, like looking at a soap bubble or a rainbow.

It’s also easier to behave with gentleness and self-restraint to other people, if you tap into that sense of eternal peace; people can put no duties upon you, they are simply fellow-creatures sharing the world with you, and you can separate from them if you like.

I’ll have to wait and see if this leads to more thorough abilities to consider alternatives and act on the basis of reflection, but it seems promising.

My current motto is “Turn — slowly.”  I can only adapt slowly, improve slowly, originate useful ideas slowly.  I still need stretches of rest and peace. A slow positive trajectory is still worth it. (And can be more productive in the long run. I get dramatically more work done after rest.)  Turning slowly towards truth seems to be the best way available.

Update on Sepsis: Donations Probably Unnecessary

Epistemic Status: Pretty Confident

So, remember how I was urging people to donate for a randomized controlled trial of a new treatment for sepsis?

I’ve been informed by some people who work with the Open Philanthropy Project, whose research into giving opportunities I really respect, that there are already foundations which are likely to fund an RCT for the treatment.  This means that donations from private individuals are no longer necessary.

(A quick rundown of the logic behind this: if you’re trying to give “optimally”, you want to pay attention to the marginal returns of your dollars.  If you give the first dollar to a great opportunity that nobody else will fund, your marginal impact is huge. If you give a dollar to the same great opportunity, but somebody else has already pledged $10M, then your dollar has become a lot less useful, because pretty much any goal has diminishing marginal returns on investment.  If your motivation for giving to charity is achieving a goal as cheaply as possible, you should move away from charities that are already adequately funded, and towards opportunities that are underfunded. This is a simple idea but it took me a surprisingly long time to understand!)
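The rundown above can be made concrete with a toy model. The logarithmic returns curve is purely my own illustrative assumption (the real point is only that the curve is concave, i.e. has diminishing returns), not anything OpenPhil actually uses:

```python
import math

def impact(total_funding):
    """Toy returns curve: total good done as a function of total dollars
    committed.  The logarithmic shape is an illustrative assumption;
    what matters is that it's concave (diminishing marginal returns)."""
    return math.log1p(total_funding)

def marginal_impact(already_pledged, your_gift=1.0):
    """How much extra good *your* dollars do, given what's already pledged."""
    return impact(already_pledged + your_gift) - impact(already_pledged)

first_dollar = marginal_impact(0)           # nobody else has funded this yet
late_dollar  = marginal_impact(10_000_000)  # someone already pledged $10M
print(first_dollar, late_dollar)
```

Under any concave curve, the same comparison comes out the same way: the dollar given to the unfunded opportunity accomplishes enormously more than the dollar added to an already-large pot.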

If you already gave to Eastern Virginia Medical School for the sepsis trial, your money’s not refundable, but it’s still being dedicated to sepsis research.

General implications I’d draw from this:

  • This is another example, as GiveWell and OpenPhil have found many times, of the principle that finding good giving opportunities is hard. It’s hard for the same reason finding good investment opportunities is hard. If something is obviously great, there’s a good chance that professionals have already invested in it. If something is undervalued, it’s probably not obviously great (it’s at least likely to be controversial.)
  • This is a positive update on the success of the philanthropic community, esp. in medicine.  Drug companies may not have an incentive to fund trials of cheap, unpatentable treatments, but perhaps foundations do.
  • Unfortunately for those of us on the awkward and scruffy side, this suggests that talking to rich people is a useful skill in finding out what’s actually going on in the world.

 

Kindness Against The Grain

Epistemic Status: Unformed Thoughts

I’ve heard from a number of secular-ish sources (Carse, Girard, Arendt) that the essential contribution of Christianity to human thought is the concept of forgiveness.  (Ribbonfarm also has a recent post on the topic of forgiveness.)

I have never been a Christian and haven’t even read all of the New Testament, so I’ll leave it to commenters to recommend Christian sources on the topic.

What I want to explore is the notion of kindness without a smooth incentive gradient.

Most human kindness is incentivized. We do things for others, and get things in return. Contracts and favors alike are reciprocal actions.  And this makes a lot of sense, because trade is sustainable. Systems of game-theoretic agents that do some variant of tit-for-tat exchange tend to thrive, compared to agents that are freeloaders or altruists. Freeloaders can only exploit so long until they destroy the system they’re exploiting, or suffer from the retribution of tit-for-tat players; pure altruists burn themselves out quickly.
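You can see this game theory in a toy round-robin simulation. The payoff numbers are the standard prisoner’s-dilemma values and the population mix is an arbitrary choice of mine, so treat this as a sketch rather than a result from any particular study:

```python
from collections import Counter

# Standard prisoner's-dilemma payoffs for the row player (T > R > P > S).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opp_history):
    return "C" if not opp_history else opp_history[-1]  # copy their last move

def freeloader(opp_history):
    return "D"  # always exploit

def altruist(opp_history):
    return "C"  # always give

def match(a, b, rounds=200):
    """Play an iterated game between strategies a and b; return total scores."""
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(hb), b(ha)  # each strategy sees the other's past moves
        ha.append(ma)
        hb.append(mb)
        sa += PAYOFF[(ma, mb)]
        sb += PAYOFF[(mb, ma)]
    return sa, sb

# A small society: mostly reciprocators, plus one freeloader and one altruist.
population = [tit_for_tat] * 5 + [freeloader, altruist]
totals = Counter()
for i, a in enumerate(population):
    for j, b in enumerate(population):
        if i < j:
            sa, sb = match(a, b)
            totals[a.__name__] += sa
            totals[b.__name__] += sb

counts = Counter(agent.__name__ for agent in population)
per_capita = {name: totals[name] / counts[name] for name in totals}
print(per_capita)
```

In a population dominated by reciprocators, the tit-for-tat players come out on top per capita: the freeloader harvests the lone altruist but gets shut out by everyone who retaliates.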

Sometimes kindness is reciprocated at the genetic rather than the personal level (see kin selection.)

Sometimes it’s reciprocated by long-term or indirect means — you can sometimes get social credit for being kind, even if the person you help can’t directly reciprocate. A reputation for generosity to allies and innocents makes you look strong and worth allying with, so you come out ahead in the long run.

And one of the ways we implement the incentives towards kindness in practice is through sympathy. When we see another’s suffering, we feel an urge to be kind to them, and a warm fuzzy reward if we help them.  That way, kindness is feasible along local emotional incentive gradients.

But, of course, sympathy itself is carefully optimized to make sure we only sympathize with those whom we’d come out ahead by helping. Sympathy is not merely a function of suffering. It is easier to sympathize with children than with adults, with the grateful than the ungrateful, with those who have experienced culturally acceptable “grounds for sympathy” (such as divorce, loss of a loved one, illness, job loss, crime victimization, car trouble, or fatigue, according to this sociological study).  We sympathize more with those whose suffering is perceived as unjust — though this may be something of a circular notion.

This leaves out certain forms of suffering.

  • The stranger, who is not part of your group, will receive less sympathy.  So will the outsider or social deviant.
  • The person with a permanent problem that can’t be easily fixed will eventually receive less sympathy, because he cannot be restored to happiness and put in a position to show gratitude or return favors.
  • The overly self-reliant person will receive less sympathy; if sympathy is like a “credit account”, the person who has never opened one will be offered less credit than one who maintains a modest balance. We require vulnerability and a show of weakness before our sympathy will turn on.
  • The angry or assertive person who does not show gratitude or deference will receive less sympathy.  Appeasement displays evoke sympathy and reconciliation.
  • The person whose suffering takes an illegible form will receive less sympathy.

To be a recipient of sympathy one must be both weak and strong; weak, to show one really has received a misfortune; strong, to show one can be a useful ally someday. Children are the perfect example, because they are small and vulnerable today, but can grow to be strong as adults.  The victims of temporary and easily reversible bad luck are in a similar position: vulnerable today, but soon to be restored to strength.  Permanently disadvantaged adults, people who may be poor/disabled/nonwhite/etc and have developed the self-reliance or resentment associated with coping with long-term deprivation that isn’t going away, are less easy to sympathize with.

Some of this has been shown experimentally; subjects in an experiment who viewed other subjects appearing to receive electric shocks were less sympathetic when they were told the shocks would continue in a subsequent session, versus when they were told the shocks had ended, or when they were told that their choices could stop the shocks. Permanent suffering is less sympathetic than temporary or fixable suffering.

Sympathy provides an immediate emotional incentive to respond to suffering with kindness, and it’s pretty well calibrated to be “good game theory” — but it’s not perfect by any means.

Cooperation Without Sympathy

Imagine a space alien — a grotesque creature, one whose appearance makes you want to vomit — offers you a deal. Let’s say this alien is, like the creatures in Octavia Butler’s Xenogenesis trilogy, a “gene trader”, one who can splice DNA with its bodily organs, and has a drive towards genetic engineering analogous to what Earth animals experience as a sex drive.  If you have “sex” with the alien and produce part-alien babies, it will give you and your children access to the vastly advanced powers in its alien genes, in exchange for gratifying its biological urge and allowing it to benefit from your genes.

From an intuitive standpoint, this is grotesque. The alien is not sexy. You cannot feel compassion for its desires to trade genes with you. It feels violating, disgusting, unacceptable. You were never evolved to want to breed with aliens.

And yet the game theory is sound. Superpowers are a grand thing to have. Even sexiness exists as a way to incentivize you to have strong children — and your alien children will undoubtedly be strong.

It’s a game-theoretic win-win but not a sympathetic win-win. Other humans will not find your alien babies sympathetic, or your choice to cooperate with the aliens a pro-social one.

It’s a sort of betrayal against your fellow humans, in that you are breaking the local game of “sex is between humans” and unilaterally gaining superpowered alien babies; but it’s a choice that any human could make as easily as you, so you aren’t leaving others permanently worse off, or depleting a valuable commons. Since all humans would be better off with alien genes, it’s not really a “defection” if you take the lead in doing something that would be beneficial if done by everyone.

Butler is really good at expressing how a “peaceful win-win” — on paper, an obviously correct choice — can feel disgusting.  Sympathy incentives can’t get you to win-win cooperation, if the thing that the other person wants is not something that you can imagine wanting.

This is an example of incentives for cooperation being present but not smooth.  It is in your interest to “gene trade”, but you only know that intellectually; you cannot be guided to it naturally through sympathy.

In the same way, helping someone “unsympathetic” but valuable is a “good investment” but doesn’t feel like it.  You often hear about this in disability contexts. “All you have to do is give me a relatively cheap accommodation and suddenly I become way more productive! How is this not a good deal for you?”  Well, it may be a good deal, but it’s not a sympathetic deal, because people’s mental accounting doesn’t match reality. If they think that the person “ought to be able to” get along without the accommodation, sympathy doesn’t prompt them to help. And if they don’t have a strong intuitive sense that people are plastic, that they function differently in different environments, they don’t really believe at a gut level that a blind person can be an expert programmer if given a screen reader, for instance.  Abstractly it’s a good deal, but concretely it’s not being guided smoothly by emotional gradients; it requires an act of detached cognition.

In practice, you can guide a situation back to sympathy, and that’s usually the best way to get the trade done. Try to play up the sympathetic qualities of the trade partner, try to analogize the requested action to things that are considered moral duties in one’s social context.  Try to set up emotional guardrails, engineer the social environment so the deal can be done without abstract thought.

But this isn’t really feasible for a single individual to do.  If you’re alone and nobody wants to help you, even if you reciprocate, because you’re not a “sympathetic character”, you can’t reshape social pressures to make yourself sympathetic all by yourself.  If we aren’t going to brutally destroy the lives of valuable people who don’t already have a posse, somebody is going to have to think, to go beyond gradient-following.

I think that to get the best results, thought is actually necessary.  By “thought” I mean the God’s-eye view, the long-view, the ability to ask “where do I want to go?” and potentially have an answer that isn’t “whichever way I’m currently going.” But what emotional or psychological or behavioral scaffolding promotes thought?  We are, after all, made of meat.  Since sometimes humans do think, there must be a way to build thought out of meat.  I’m still trying to understand how that is done.

Forgiveness and the Very Long Term

Forgiveness, on a structural level, is choosing not to call in a debt. I’m entitled to compensation, according to the rules of whatever game I’m playing, but I don’t demand it.

Forgiveness is a local loss to the forgiver. If everyone forgave everything all the time, it wouldn’t be remotely sustainable.

But a little bit of forgiveness is useful, in exactly the same way that bankruptcy is useful.  Bankruptcy means that there’s a floor to how much debt you can get in, which allows loss-averse humans to be willing to take on debt at all, which means that more high-expected-value investments get made.

Tit-for-tat with forgiveness outperforms plain tit-for-tat.
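This is a standard result for *noisy* iterated games, and it’s easy to check in a toy simulation. The payoffs, the 5% accident rate, and the 20% forgiveness rate below are arbitrary numbers of mine; the qualitative point is that without forgiveness, a single accidental defection can echo between two strict reciprocators indefinitely:

```python
import random

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def noisy_selfplay(forgiveness, rounds=2000, noise=0.05, seed=0):
    """Two tit-for-tat players face each other, but each intended move is
    accidentally flipped to defection with probability `noise`.  With
    probability `forgiveness`, a player cooperates anyway even though the
    opponent defected last round ('tit-for-tat with forgiveness').
    Returns the pair's combined score over all rounds."""
    rng = random.Random(seed)
    last_a, last_b = "C", "C"
    total = 0
    for _ in range(rounds):
        ma = "C" if last_b == "C" or rng.random() < forgiveness else "D"
        mb = "C" if last_a == "C" or rng.random() < forgiveness else "D"
        if rng.random() < noise:
            ma = "D"  # a slip of the hand
        if rng.random() < noise:
            mb = "D"
        total += PAYOFF[(ma, mb)] + PAYOFF[(mb, ma)]
        last_a, last_b = ma, mb
    return total

strict   = noisy_selfplay(forgiveness=0.0)  # one accident can echo indefinitely
generous = noisy_selfplay(forgiveness=0.2)  # occasional pardons break the echo
print(strict, generous)
```

In this setup the strict pair eventually falls into mutual defection and stays there, while the generous pair keeps climbing back to cooperation, so the generous pair ends up far better off.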

You can also think of forgiveness as a function of time. If you expect that someone will be net positive to you in the long run, you can accept them costing you in the short run, and not demand payment now. In other words, you extend them cheap credit.  As your time horizon goes to infinity (or your discount rate goes to zero), it can become possible to not demand payment at all, to forgive the loan entirely.  If it doesn’t matter whether they pay you back tomorrow, or in a hundred years, or in a thousand, but you expect them to be able to pay someday, then you don’t really need the repayment at any time, and you can drop it.
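The “discount rate goes to zero” limit is just ordinary exponential discounting, which is worth a two-line sketch (the $100 repayment and the 5% rate are placeholder numbers):

```python
def present_value(amount, years, rate):
    """Standard exponential discounting: what a future repayment
    is worth to you today, given your annual discount rate."""
    return amount / (1 + rate) ** years

# An impatient lender (5% discount rate) cares enormously about timing:
print(present_value(100, 1, 0.05), present_value(100, 1000, 0.05))

# As the discount rate goes to zero, timing stops mattering at all --
# repayment in a year and repayment in a millennium are worth the same:
print(present_value(100, 1, 0.0), present_value(100, 1000, 0.0))
```

At a zero discount rate, a debt that will someday be repaid is as good as repaid, which is the structural sense in which an infinite time horizon lets you drop the demand for repayment entirely.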

This is sort of similar to the heuristic of “be tolerant and kind to all persons, you never know when they might be valuable.” The fairy tales and myths about being kind to strangers and old ladies, in case they’re gods in disguise. You don’t want to burn bridges with anybody, you don’t want to kick anybody wholly out of the game, if you expect that eventually (and eventually may be very long indeed, and perhaps not within your lifetime), this will pay off.

Tit-for-tat or reinforcement-learning or behaviorism — reward what you want to see, punish what you don’t — makes a lot of sense, except when you factor in time and death. If you punish someone so hard that they die before they have a chance to turn around and improve, you’ve lost them.

And, on a more abstract level: it can make sense to disincentivize the slightly worse thing in general, that’s how evolution works, but that leads to things like rare languages dying out. Yes, it’s perfectly rational to speak Spanish rather than Zapotec, and Zapotec-speakers need to make a living too, but my inner Finite and Infinite Games says “wouldn’t you like to preserve Zapotec from dying out altogether? Couldn’t it come in handy someday?”  Language preservation is an example of preserving a “loser” because, if the world went on forever, nothing would be permanently guaranteed to lose.

It’s like having a slightly noisy update mechanism. Mostly, you reinforce what works and penalize what doesn’t. But sometimes, or to a small degree, you forgive, you rescue someone or something that would ordinarily be penalized, and save it, in case you need it later. In optimization, a little stochasticity keeps you from getting stuck at a merely local peak. In economics, a little bankruptcy or the occasional jubilee keeps you from getting stuck in stagnant, monopolistic conditions. You don’t ruthlessly weed out the “bad” all the time.
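Here’s a minimal sketch of that in an optimization setting. The landscape and the jump probability are made up, and the “forgiveness” here takes the form of occasionally *considering* an unpromising candidate rather than only looking nearby:

```python
import math
import random

def f(x):
    """A one-dimensional fitness landscape with a mediocre peak near x=1
    and a better peak near x=4, separated by a valley."""
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def climb(x, jump_prob, steps=3000, seed=2):
    """Greedy hill-climbing, except that with probability `jump_prob`
    the proposed candidate is a wild random point instead of a small
    local step.  Candidates are still accepted only if they improve f."""
    rng = random.Random(seed)
    best = x
    for _ in range(steps):
        if rng.random() < jump_prob:
            cand = rng.uniform(0.0, 6.0)    # occasionally consider a wild option
        else:
            cand = x + rng.gauss(0.0, 0.1)  # usually, a small local tweak
        if f(cand) > f(x):
            x = cand
        if f(x) > f(best):
            best = x
    return best

greedy = climb(0.0, jump_prob=0.0)   # gets stuck on the small peak near x=1
noisy  = climb(0.0, jump_prob=0.05)  # the occasional wild try finds the tall peak
print(f(greedy), f(noisy))
```

The purely local climber plateaus on the nearby mediocre peak; the climber that sometimes entertains “undeserving” candidates finds the much better one.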

Sometimes you throw some resources at someone who “doesn’t deserve them” just in case you’re wrong, or to get out of the nasty feedback loops where someone behaves badly in response to being treated badly.  If you unilaterally gave them some help, you might allow them to escape into a cooperative, reciprocal-benefit situation, which you’d actually like better!  Even if this didn’t work one particular time, doing it in general, at some frequency, might in expectation work out in your favor.

A sense of the very long term may also make sympathy easier, because in the very long term nothing is permanent and everything is eventually mutable. If permanent suffering is what makes people unsympathetic, then a sense of the very long term makes it possible to realize that under different circumstances that person might become fine, and thus their suffering is ultimately the “temporary kind” that can elicit sympathy.  “The stone that the builders rejected/ has become the cornerstone” — well, if you wait long enough, that might actually happen. Things could change; the “loser” or “villain” will not stay on the bottom forever; so with a long-enough-term mindset it’s not actually appropriate to treat him as definitively a “loser” or a “villain.”

Forgiveness can be a lot easier to implement than “cooperation without sympathy”, which requires you to actually ascertain where win-wins are, with your mind. You can mindlessly add a little forgiveness to a system.  Machine-learning algorithms can do it.  Which may make it a useful tool in the process of “trying to build thought out of meat.”

 

Finite and Infinite

Epistemic Status: Really, really informal

After being told I really need to read Finite and Infinite Games for god knows how long, I finally went and did it.  I’m still processing.

In my earlier post I described a polarity between “survival thinking” (which is physical-reality-oriented, man-vs-nature, serious, and non-competitive) and “sexual-selection thinking” (which is social-reality-oriented, focused on the human world, frivolous, and competitive.)

James Carse, in Finite and Infinite Games, sets up a completely different polarity, between infinite game-playing (which is open-ended, playful, and non-competitive) and finite game-playing (which is definite, serious, and competitive).

Like a lot of philosophical thinkers, he’s rather negative on trying to win social games, and believes that humanity-wide survival problems require us to set such games aside.  He’s also quite positive on thinking — self-awareness, going meta, asking “do I want to be doing what I’m doing?”

The difference from my earlier post is that he believes the key alternative to unproductive social-status games is playfulness and open-endedness, rather than seriousness, strenuousness, or survival-level urgency.  I think this is possible; it’s also possible that both playfulness and survival pressure provoke people into abandoning social-status jockeying; it’s possible that there are many other things that do so.

My Understanding of what James Carse Wants

Playing an infinite game, in my own words, means “Let things continue and get weird”.

Culture has a tendency to get “out of hand”, to shift definitions, to break once-hallowed rules.  Living things evolve over time. Languages shift. Peoples migrate.  Individuals don’t fit perfectly into demographic or ideological categories.  As I understand it, an infinite player embraces change and tries to keep life/culture/humanity going, knowing that whatever future form it takes will be unfamiliar.

In human relationships this would mean keeping lines of communication open and trying to allow space for deep, weird, vulnerable, unfamiliar, or surprising interactions to arise.  It would also mean questioning or poking fun at fixed or narrowly competitive behavior patterns.

In grammar this would mean being descriptivist rather than prescriptivist.

In art this would mean playing with new forms and dialoguing with old formalisms.

In technological social-engineering this would mean trying to design platforms that promote unexpected and fertile rather than reliable and predictable behavior.

In politics it would mean actively seeking to preserve diversity, questioning the assumptions of nationalism or officialness, lots of meta stuff combined with a desire not to lock anybody out of the discourse.

Carse’s examples of evil are genocides: the permanent silencing of an entire people.  He’s against absolutism because it will try to destroy those parts of reality that don’t fit its system. Sometimes those parts are people.  The Native Americans, or the Romany, or the Jews, didn’t fit into somebody’s system.

If you imagine actually being a 19th-century Anglo-American, and imagine that you don’t really like Indians — they’re often pagan, they’re not of your civilization, they cause you a not-insignificant amount of danger and inconvenience — and imagine what it would be like to think “yes, but they’re people, they exist, they have a point of view that may matter, it’s worth trying to work things out with them rather than utterly destroying them” — I think that gives something of the flavor of what Carse wants people to do. Octavia Butler’s Lilith’s Brood is a good example of what cooperating with literal outer-space aliens would feel like — cooperating and trading and negotiating with creatures whose values you cannot empathize with at all and which seriously creep you out.  “Mutual benefit” sounds nice when you have easy sympathy with the other party; it is stranger and less intuitive when you don’t.  To keep playing with the truly alien seems to be part of what infinite games are about.

My Resistance to FIG

I have some inner resistance to pretty much all philosophical or spiritual thinkers, and Carse is no exception. He’s favorably disposed to thinking, and unfavorably disposed to most temporal rewards, like pretty much all philosophers. I like temporal rewards, and am less inclined towards detached meta-level thinking than most of the people who like these kinds of books.  I like a hot meal and a soft bed and a cat to pet.  So that’s one thing.

I also don’t like any creed which requires me to have unbounded energy, and “playfulness” or “openness to change and ambiguity” are both things that cost energy. (The latter because it takes cognitive effort to wrap one’s mind around complex or unpredictable things.) Carse complains that traveling in airplanes and hotels insulates you from change, from truly experiencing the foreignness of foreign places. He’s not wrong. It’s just that if you are very tired, you do not want anything to surprise you if you can possibly avoid it.  I am usually tired, so an ideal of the world as being eternally surprising sounds exhausting.  Finite games end, and that is a great deal of their appeal; after they’re over, you get to stop playing.

I do appreciate the message of “if you want to simplify your life, you don’t get to do it by controlling other people.”  Trying to live in such a way that you don’t ruin other people’s games just because you don’t want to play.

Actual Flaws in FIG

I don’t think he has a complete theory of property. He’s talking about property as primarily a finite game that involves fairness and competition intuitions.  I don’t think that’s all it is. There’s also property-as-territoriality (something that even animals have), and property as an incentive-stable way of allocating resources.  I don’t think I’ve ever seen a complete and satisfying theory of why we have property and how far it can/should extend, not even Locke’s, but I really think the question can’t be disposed of as easily as Carse disposes of it.

I also think the “technology is limiting, gardens are infinite-game-like” stuff could use a little more nuance.  Not all technology is oriented towards centralized/standardized behavior patterns.  I think Carse would have been rather interested in blockchains and 3D printers if they had existed when he wrote the book.

Updates

In a comment on my earlier post, Komponisto said, “The correct dichotomy, it seems to me, is not ‘work’ vs. ‘play/gossip’, but rather ‘work/play’ vs. ‘gossip’.” This also seems to be something Carse would agree with.  The creative stuff he values is play-like; he thinks seriousness is usually a “theatrical” device that’s basically used only for gossip-oriented status-games.

I think I was just wrong to imply that work is always serious.  (And certainly if I gave the impression that art is always frivolous, that was wrong! But I’ve never really believed that.)

I like Carse’s idea of storytelling as contrasted with lecturing; using communication to open up possibilities or food for thought rather than to shut down unwanted behaviors.  I was much more into that in the past (in my current state of constant exhaustion I normally don’t want to open up any future possibilities because then I’d have to deal with them) but it is genuinely beautiful to create a world for people to play in.