Psycho-Conservatism: What it Is, When to Doubt It

Epistemic status: I’m being emphatic for clarity here, not because I’m super confident.

I’m noticing that a large swath of the right-of-center infovore world has come around to a kind of consensus, and nobody has named it yet.

Basically, I’m pointing at the beliefs that Jonathan Haidt (The Righteous Mind, The Happiness Hypothesis), Jordan Peterson (Maps of Meaning), and Geoffrey Miller (The Mating Mind) have in common.

All of a sudden, it seemed like everybody “centrist” or “conservative” I knew was quoting Haidt or linking videos by Peterson.

(In absolute terms Peterson isn’t that famous — his videos get hundreds of thousands of YouTube views, about half as popular as the most popular Hearthstone streamer. There’s a whole universe of people who aren’t in the culture-and-politics fandom at all.  But among those who are, Peterson seems highly influential.)

All three of these men are psychology professors, which is why I’m calling the intersection of their views psycho-conservatism. Haidt is a social psychologist, Peterson is a Jungian, and Miller is an evolutionary psychologist.

Psycho-conservatism is mostly about human nature.  It says that humans have a given, observable, evolved nature; that this nature isn’t always pretty (we are frequently irrational, deceptive, and self-centered); and that human nature’s requirements place limits on what we can do with culture or society.  Often, traditional wisdom is valuable because it is a good fit for human nature. Often, utopian modern changes in society fail because they don’t fit human nature well.

This is, of course, a small-c conservative view: it looks to the past for inspiration, and it’s skeptical of radical changes. It differs from other types of conservatism in that it gets most of its evidence from psychology — whether using empirical experiments (as Haidt does) or evolutionary arguments (as Miller does).  Psycho-conservatives have a great deal of respect for religion, but they don’t speak on religious grounds themselves; they’re more likely to argue that religion is adaptive or socially beneficial or that we’re “wired for it.”

Psycho-conservatism is also methodologically skeptical.  In the wake of the psychology replication crisis, it’s reasonable to become very, very doubtful of the social sciences in general.  What do we really know about what makes people tick? Not much.  In such an environment, it makes sense to drastically raise your standards for evidence.  Look for the most replicated and hard-to-fudge empirical findings.  (This may lead you to the literature on IQ and behavioral genetics, and heritable, stable phenomena like the Big Five personality dimensions.) Look for commonalities between cultures across really long time periods.  Look for evidence about the ancestral environment, which constituted most of humans’ time on Earth. Try to find ways to sidestep the bias of our present day and location.

This is the obvious thing to do, as a first pass, in an environment of untrustworthy information.

It’s what I do when I try to learn about biology — err on the side of being pickier, look for overwhelming and hard-to-fake evidence, look for ideas supported by multiple independent lines of evidence (especially evolutionary evidence and evidence across species.)

If you do this with psychology, you end up with an attempt to get a sort of core summary of what we can be most confident about in human nature.

Psycho-conservatives also wind up sharing a set of distinctive political and cultural concerns:

  • Concern that modern culture doesn’t meet most people’s psychological needs.
  • A fair amount of sympathy for values like authority, tradition, and loyalty.
  • Belief that science on IQ and human evolution is being suppressed in favor of less accurate egalitarian theories.
  • Belief that illiberal left-wing activism on college campuses is an important social problem.
  • Disagreement with most contemporary feminism, LGBT activism, and anti-racist activism.
  • A general attitude that it’s better to be sunny, successful, and persuasive than aggrieved; disapproval of the “culture of victimhood.”
  • Basically no public affiliation with the current Republican Party.
  • Moderation or silence on “traditional” political controversies like abortion, gov’t spending, war, etc.
  • An interest in building more national or cultural unity (as opposed to polarization).

Where are the weaknesses in psycho-conservatism?

I just said above that a skeptical methodology regarding “human nature” makes a lot of sense, and is kind of the obvious epistemic stance. But I’m not really a psycho-conservative myself.  So where might this general outlook go wrong?

1. When we actually do know what we’re talking about.

If you used evolved tacit knowledge, the verdict of history, and only the strongest empirical evidence, and were skeptical of everything else, you’d correctly conclude that in general, things shaped like airplanes don’t fly.  The reason airplanes do fly is that if you shape their wings just right, you hit a tiny part of the parameter space where lift can outbalance the force of gravity.  “Things roughly like airplanes” don’t fly, as a rule; it’s airplanes in particular that fly.

Highly skeptical, conservative methodology gives you rough, general rules that you can be pretty confident won’t be totally wrong. It doesn’t guarantee that there can’t be exceptions that your first-pass methods won’t reach.  For instance, in the case of human nature:

  • It could turn out that one can engineer better-than-historically-normal outcomes even though, as a general rule, most things in the reference class don’t work
    • Education and parenting don’t empirically matter much for life outcomes, but there may be exceptional teaching or parenting methods — just don’t expect them to be easy to implement en masse
    • Avoiding lead exposure massively increases IQ; there may be other biological interventions out there that allow us to do better than “default human nature”
  • Some minority of people are going to have “human natures” very different from your rough, overall heuristics; statistical phenomena don’t always apply to individuals
  • Modern conditions, which are really anomalous, can make behaviors adaptive that weren’t adaptive in ancestral or historical conditions, so “deep history” or evolutionary arguments about what humans should do are less applicable today

Basically, the heuristics you get out of methodological conservatism make sense as a first pass, but while they’re robust, they’re very fuzzy.  In a particular situation where you know the details, it may make sense to say “no thanks, I’ve checked the ancestral wisdom and the statistical trends and they don’t actually make sense here.”

2. When psycho-conservatives don’t actually get the facts right.

Sometimes, your summary of “cultural universals” isn’t really universal.  Sometimes, your experimental studies are on shaky ground. (Haidt’s Moral Foundations don’t emerge organically from factor analysis the way Big Five personality traits do.)  Even though the overall strategy of being skeptical about human nature makes sense, the execution can fail in various places.

Conservatives tend to think that patriarchy is (apart from very recently) a human universal, but it really isn’t; hunter-gatherer and hoe cultures have done without it for most of humanity’s existence.

Lots of people assume that government is a human universal, but it isn’t; nation-states are historically quite modern, and even monarchy is far from universal. (Germanic tribes as well as hunter-gatherers and pastoralists around the world were governed by councils and war-leaders rather than kings; Medieval Iceland had a fairly successful anarchy; the Bible is a story of pastoralist tribes transitioning to monarchy, and the results are not represented sympathetically!)

It’s hard to actually compensate for parochial bias and look for a genuinely universal property of human nature, and psycho-conservatives deserve critique when they fail at that mission.

3. When a principle is at stake.

Knowledge of human nature can tell you the likely consequences of what you’re doing, and that should inform your strategy.

But sometimes, human nature is terrible.

All the evidence in the world that people usually do something, or that we evolved to do something, doesn’t mean we should do it.

The naturalistic fallacy isn’t exactly a fallacy; natural behaviors are far more likely to be feasible and sustainable than arbitrary hypothetical behaviors, and so if you’re trying to set ideal norms you don’t want them to be totally out of touch with human nature.  But I tend to think that human values emerge and expand from evolutionary pressures rather than being bound wholly to them; we are godshatter.

Sometimes, you gotta say, “I don’t care about the balance of nature and history, this is wrong, what we should do is something else.”  And the psycho-conservative will say “You know you’re probably gonna fail, right?”

At which point you smile, and say, “Probably.”


Distinctions in Types of Thought

Epistemic status: speculative

For a while, I’ve had the intuition that current machine learning techniques, though powerful and useful, are simply not touching some of the functions of the human mind.  But before I can really get at how to justify that intuition, I would have to start clarifying what different kinds of thinking there are.  I’m going to be reinventing the wheel a bit here, not having read that much cognitive science, but I wanted to write down some of the distinctions that seem important and try to see whether they overlap. A lot of this is inspired by Dreyfus’ Being-in-the-World. I’m also trying to think about the questions raised in the post “What are Intellect and Instinct?”

Effortful vs. Effortless

In English, we have different words for perceiving passively versus actively paying attention.  To see vs. to look, to hear vs. to listen, to touch vs. to feel.  To go looking for a sensation means exerting a sort of mental pressure; in other words, effort. William James, in his Principles of Psychology, said “Attention and effort, as we shall see later, are but two names for the same psychic fact.”  He says, in his famous introduction to attention, that

Every one knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Localization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatterbrained state which in French is called distraction, and Zerstreutheit in German.

Furthermore, James divides attention into two types:

Passive, reflex, non-voluntary, effortless; or Active and voluntary.

In other words, the mind is always selecting certain experiences or thoughts as more salient than others; but sometimes this is done in an automatic way, and sometimes it’s effortful/voluntary/active.  A fluent speaker of a language will automatically notice a grammatical error; a beginner will have to try effortfully to catch the error.

In the famous gorilla experiment, where subjects instructed to count passes in a basketball game failed to notice a gorilla on the basketball court, “counting the passes” is paying effortful attention, while “noticing the gorilla” would be effortless or passive noticing.

Activity in “flow” (playing a musical piece by muscle memory, or spatially navigating one’s own house) is effortless; activities one learns for the first time are effortful.

Oliver Sacks’ case studies are full of stories that illustrate the importance of flow. People with motor disorders like Parkinson’s can often dance or walk in rhythm to music, even when ordinary walking is difficult; people with memory problems sometimes can still recite verse; people who cannot speak can sometimes still sing. “Fluent” activities can remain undamaged when similar but more “deliberative” activities are lost.

The author of intellectualizing.net thinks about this in the context of being an autistic parent of an autistic son:

Long ago somewhere I can’t remember, I read a discussion of knowing what vs. knowing how. The author’s thought experiment was about walking. Imagine walking with conscious planning, thinking consciously about each muscle and movement involved. Attempting to do this makes us terrible at walking.

When I find myself struggling with social or motor skills, this is the feeling. My impression of my son is the same. Rather than trying something, playing, experimenting he wants the system first. First organize and analyze it, then carefully and cautiously we might try it.

A simple example. There’s a curriculum for writing called Handwriting Without Tears. Despite teaching himself to read when barely 2, my son refused to even try to write. Then someone showed him this curriculum in which letters are broken down into three named categories according to how you write them; and then each letter has numbered strokes to be done in sequence. Suddenly my son was interested in writing. He approached it by first memorizing the whole Handwriting Without Tears system, and only then was he willing to try to write. I believe this is not how most 3-year-olds work, but this is how he works.

One simple study (“Children with autism do not overimitate”) had to do with children copying “unnecessary” or “silly” actions. Given a demonstration by an adult, autistic kids would edit out pointless steps in the demonstrated procedure. Think about what’s required to do this: the procedure has to be reconstructed from first principles to edit the silly out. The autistic kids didn’t take someone’s word for it, they wanted to start over.

The author and his son learn skills by effortful conscious planning that most people learn by “picking up” or “osmosis” or “flow.”

Most of the activity described by Heidegger’s Being and Time, and Dreyfus’ commentary Being-In-The-World, is effortless flow-state “skilled coping.”  Handling a familiar piece of equipment, like typing on a keyboard, is a prototypical example. You’re not thinking about how to do it except when you’re learning how for the first time, or if it breaks or becomes “disfluent” in some way.  If I’m interpreting him correctly, I think Dreyfus would say that neurotypical adults spend most of their time, minute-by-minute, in an effortless flow state, punctuated by occasions when they have to plan, try hard, or figure something out.

William James would agree that voluntary attention occupies a minority of our time:

There is no such thing as voluntary attention sustained for more than a few seconds at a time. What is called sustained voluntary attention is a repetition of successive efforts which bring back the topic to the mind.

(This echoes the standard advice in mindfulness meditation that you’re not aiming for getting the longest possible period of uninterrupted focus, you’re training the mental motion of returning focus from mind-wandering.)

Effortful attention can also be identified with the cognitive capacities that stimulants improve: reaction times shorten, and people distinguish and remember the stimuli in front of them better.

It’s important to note that not all focused attention is effortful attention. If you are playing a familiar piece on the piano, you’re in a flow state, but you’re still being “focused” in a sense; you’re noticing the music more than you’re noticing conversation in another room, you’re playing this piece rather than any other, you’re sitting uninterrupted at the piano rather than multitasking.  Effortless flow can be extremely selective and hyper-focused (like playing the piano), just as much as it can be diffuse, responsive, and easily interruptible (like navigating a crowded room).  It’s not the size of your window of salience that distinguishes flow from effortful attention, it’s the pressure that you apply to that window.

Psychologists often call effortful attention cognitive disfluency, and find that experiences of disfluency (such as a difficult-to-read font) improve syllogistic reasoning and reduce reliance on heuristics, while making people more likely to make abstract generalizations.  Disfluency improves results on measures of “careful thinking” like the Cognitive Reflection Test as well as on real-world high-school standardized tests, and also makes them less likely to confess embarrassing information on the internet. In other words, disfluency makes people “think before they act.”  Disfluency raises heart rate and blood pressure, just like exercise, and people report it as being difficult and reliably disprefer it to cognitive ease.  The psychology research seems consistent with there being such a thing as “thinking hard.”  Effortful attention occupies a minority of our time, but it’s prominent in the most specifically “intellectual” tasks, from solving formal problems on paper to making prudent personal decisions.

What does it mean, on a neurological or a computational level, to expend mental effort?  What, precisely, are we doing when we “try hard”?  I think it might be an open question.

Do the neural networks of today simulate an agent in a state of “effortless flow” or “effortful attention”, or both or neither?  My guess would be that deep neural nets and reinforcement learners are generally doing effortless flow, because they excel at the tasks that we generally do in a flow state (pattern recognition and motor learning.)

Explicit vs. Implicit

Dreyfus, as an opponent of the Representational Theory of Mind, believes that (most of) cognition is not only not based on a formal system, but not in principle formalizable.  He thinks you couldn’t possibly write down a theory or a set of rules that explain what you’re doing when you drive a car, even if you had arbitrary amounts of information about the brain and human behavior and arbitrary amounts of time to analyze them.

This distinction seems to include the distinctions of “declarative vs. procedural knowledge”, “know-what vs. know-how”, savoir vs. connaître.  We can often do, or recognize, things that we cannot explain.

I think this issue is related to the issue of interpretability in machine learning; the algorithm executes a behavior, but sometimes it seems difficult or impossible to explain what it’s doing in terms of a model that’s simpler than the whole algorithm itself.

The seminal 2001 article by Leo Breiman, “Statistical Modeling: The Two Cultures”, and Peter Norvig’s essay “On Chomsky and the Two Cultures of Statistical Learning” are about this issue.  The inverse square law of gravitation and an n-gram Markov model for predicting the next word in a sentence are both statistical models, in some sense; they allow you to predict the dependent variable given the independent variables. But the inverse square law is interpretable (it makes sense to humans) and explanatory (the variables in the model match up to distinct phenomena in reality, like masses and distances, and so the model is a relationship between things in the world.)

Modern machine learning models, like the n-gram predictor, have vast numbers of variables that don’t make sense to humans and don’t obviously correspond to things in the world. They perform well without being explanations.  Statisticians tend to prefer parametric models (which are interpretable and sometimes explanatory) while machine-learning experts use a lot of non-parametric models, which are complex and opaque but often have better empirical performance.  Critics of machine learning argue that a black-box model doesn’t bring understanding, and so is the province of engineering rather than science.  Defenders, like Norvig, flip open a random issue of Science and note that most of the articles are not discovering theories but noting observations and correlations. Machine learning is just another form of pattern recognition or “modeling the world”, which constitutes the bulk of scientific work today.
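To make the contrast concrete, here is a toy example — my own illustration, not something from Breiman or Norvig — of the kind of n-gram predictor mentioned above. It “works” by accumulating counts, but its many counts don’t correspond to grammatical rules or to anything a linguist would recognize as an explanation:

```python
# A toy bigram model: predict the next word as the one that most often
# followed the current word in the training text. Purely illustrative.
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words tend to follow it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    candidates = follows.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ate the fish"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # -> "cat", the most common successor of "the"
```

Scale that table up to billions of counts over longer contexts and you get a model that predicts well without ever containing anything like a rule of syntax.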

These are heuristic descriptions; these essays don’t make explicit how to test whether a model is interpretable or not. I think it probably has something to do with model size; is the model reducible to one with fewer parameters, or not?  If you think about it that way, it’s obvious that “irreducibly complex” models, of arbitrary size, can exist in principle — you can just build simulated data sets that fit them and can’t be fit by anything simpler.
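Here’s a minimal sketch of that idea, under my own toy assumptions: one dataset is generated by a simple two-parameter-ish law, the other is just a table of independent random values standing in for an “irreducibly complex” relationship. A small model compresses the first but not the second:

```python
# Compare how well low-parameter models fit "reducible" vs "irreducible" data.
# Toy illustration only; the random table stands in for an arbitrarily complex model.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)

reducible = 3.0 * x**2 + 0.5 + rng.normal(0, 0.01, x.size)  # simple law plus a little noise
irreducible = rng.normal(0, 1.0, x.size)                    # 200 values with no shorter description

def rms_error(y, degree):
    """Fit a polynomial of the given degree and return the RMS residual."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.sqrt(np.mean((y - np.polyval(coeffs, x)) ** 2)))

for name, y in [("reducible", reducible), ("irreducible", irreducible)]:
    print(name, [round(rms_error(y, d), 3) for d in (1, 2, 4, 8)])

# The "reducible" series is captured almost perfectly by a degree-2 model;
# the "irreducible" series keeps a large residual no matter how many terms we add,
# because there is no description of it much smaller than the data itself.
```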

How much of human thought and behavior is “irreducible” in this way, resembling the huge black-box models of contemporary machine learning?  Plausibly a lot.  I’m convinced by the evidence that visual perception runs on something like convolutional neural nets, and I don’t expect there to be “simpler” underlying laws.  People accumulate a lot of data and feedback through life, much more than scientists ever do for an experiment, so they can “afford” to do as any good AI startup does, and eschew structured models for open-ended, non-insightful ones, compensating with an abundance of data.

Subject-Object vs. Relational

This is a concept in Dreyfus that I found fairly hard to pin down, but the distinction seems to be operating upon the world vs. relating to the world. When you are dealing with raw material — say you are a potter with a piece of clay — you think of yourself as active and the clay as passive. You have a goal (say, making a pot) and the clay has certain properties; how you act to achieve your goal depends on the clay’s properties.

By contrast, if you’re interacting with a person or an animal, or even just an object with a UI, like a stand mixer, you’re relating to your environment.  The stand mixer “lets you do” a small number of things — you can change attachments or speeds, raise the bowl up and down, remove the bowl, fill it with food or empty it. You orient to these affordances. You do not, in the ordinary process of using a stand mixer, think about whether you could use it as a step-stool or a weapon or a painting tool. (Though you might if you are a child, or an engineer or an artist.) Ordinarily you relate in an almost social, almost animist, way to the stand mixer.   You use it as it “wants to be used”, or rather as its designer wants you to use it; you are “playing along” in some sense, being receptive to the external intentions you intuit.

And, of course, when we are relating to other people, we do much stranger and harder-to-describe things; we become different around them, we are no longer solitary agents pursuing purely internally-driven goals. There is such a thing as becoming “part of a group.”  There is the whole messy business of culture.

For the most part, I don’t think machine-learning models today are able to do either subject-object or relational thinking; the problems they’re solving are so simple that neither paradigm seems to apply.  “Learn how to work a stand mixer” or “Figure out how to make a pot out of clay” both seem beyond the reach of any artificial intelligence we have today.

Aware vs. Unaware

This is the difference between sight and blindsight.  It’s been shown that we can act on the basis of information that we don’t know we have. Some blind people are much better than chance at guessing where a visual stimulus is, even though they claim sincerely to be unable to see it.  Being primed by a cue makes blindsight more accurate — in other words, you can have attention without awareness.

Anosognosia is another window into awareness; it is the phenomenon when disabled people are not aware of their deficits (which may be motor, sensory, speech-related, or memory-related.) In unilateral neglect, for instance, a stroke victim might be unaware that she has a left side of her body; she won’t eat the left half of her plate, make up the left side of her face, etc.  Sensations may still be possible on the left side, but she won’t be aware of them.  Squirting cold water in the left ear can temporarily fix this, for unknown reasons.

Awareness doesn’t need to be explicit or declarative; we aren’t constantly putting things into words or formal systems as we go through ordinary waking life.  It also doesn’t need to be effortful attention; we’re still aware of the sights and sounds that enter our attention spontaneously.

Efference copy signals  seem to provide a clue to what’s going on in awareness.  When we act (such as to move a limb), we produce an “efference copy” of what we expect our sensory experience to be, while simultaneously we receive the actual sensory feedback. “This process ultimately allows sensory reafferents from motor outputs to be recognized as self-generated and therefore not requiring further sensory or cognitive processing of the feedback they produce.”  This is what allows you to keep a ‘still’ picture of the world even though your eyes are constantly moving, and to tune out the sensations from your own movements and the sound of your own voice.
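A crude sketch of the efference-copy idea — my own illustration, with made-up numbers and an assumed linear “forward model”: the brain predicts the sensory consequences of its own motor command and subtracts that prediction, so self-generated sensation is attenuated while externally caused surprises pass through.

```python
# Toy efference-copy cancellation: self-generated feedback is predicted and
# subtracted out; only the mismatch ("not me") reaches further processing.

def expected_feedback(motor_command):
    """Forward model: predict the sensation this command should produce (assumed gain)."""
    return 2.0 * motor_command

def perceived(motor_command, actual_sensation):
    """Sensation left over after the efference copy is subtracted."""
    return actual_sensation - expected_feedback(motor_command)

# Self-generated movement: sensation matches the prediction, so almost nothing gets through.
print(perceived(motor_command=1.0, actual_sensation=2.0))   # 0.0 -> tagged as self-caused
# An external bump during the same movement shows up as the unexplained remainder.
print(perceived(motor_command=1.0, actual_sensation=3.5))   # 1.5 -> tagged as externally caused
```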

Schizophrenics may be experiencing a dysfunction of this self-monitoring system; they have “delusions of passivity or thought insertion” (believing that their movements or thoughts are controlled from outside) or “delusions of grandeur or reference” (believing that they control things with their minds that they couldn’t possibly control, or that things in the outside world are “about” themselves when they aren’t.) They have a problem distinguishing self-caused from externally-caused stimuli.

We’re probably keeping track, somewhere in our minds, of things labeled as “me” and “not me” (my limbs are part of me, the table next to me is not), sensations that are self-caused and externally-caused, and maybe also experiences that we label as “ours” vs. not (we remember them, they feel like they happened to us, we can attest to them, we believe they were real rather than fantasies.)

It might be as simple as just making a parallel copy of information labeled “self,” as the efference-copy theory has it.  And (probably in a variety of complicated and as-yet-unknown ways), our brains treat things differently when they are tagged as “self” vs. “other.”

Maybe when experiences are tagged as “self” or labeled as memories, we are aware that they are happening to us.  Maybe we have a “Cartesian theater” somewhere in our brain, through which all experiences we’re aware of pass, while the unconscious experiences can still affect our behavior directly.  This is all speculation, though.

I’m pretty sure that current robots or ML systems don’t have any special distinction between experiences inside and outside of awareness, which means that for all practical purposes they’re always operating on blindsight.

Relationships and Corollaries

I think that, in order of the proportion of ordinary neurotypical adult life they take up, awareness  > effortful attention > explicit systematic thought. When you look out the window of a train, you are aware of what you see, but not using effortful attention or thinking systematically.  When you are mountain-climbing, you are using effortful attention, but not thinking systematically very much. When you are writing an essay or a proof, you are using effortful attention, and using systematic thought more, though perhaps not exclusively.

I think awareness, in humans, is necessary for effortful attention, and effortful attention is usually involved in systematic thought.  (For example, notice how concentration and cognitive disfluency improve the ability to generalize or follow reasoning principles.) I don’t know whether those necessary conditions hold in principle, but they seem to hold in practice.

Which means that, since present-day machine-learners aren’t aware, there’s reason to doubt that they’re going to be much good at what we’d call reasoning.

I don’t think classic planning algorithms “can reason” either; they hard-code the procedures they follow, rather than generating those procedures from simpler percepts the way we do.  It seems like the same sort of misunderstanding as it would be to claim a camera can see.

(As I’ve said before, I don’t believe anything like “machines will never be able to think the way we do”, only that they’re not doing so now.)

The Weirdness of Thinking on Purpose

It’s popular these days to “debunk” the importance of the “intellect” side of “intellect vs. instinct” thinking.  To point out that we aren’t always rational (true), are rarely thinking effortfully or explicitly (also true), can’t usually reduce our cognitive processes to formal systems (also true), and can be deeply affected by subconscious or subliminal processes (probably true).

Frequently, this debunking comes with a side order of sneer, whether at the defunct “Enlightenment” or “authoritarian high-modernist” notion that everything in the mind can be systematized, or at the process of abstract/deliberate thought itself and the people who like it.  Jonathan Haidt’s lecture on “The Rationalist Delusion” is a good example of this kind of sneer.

The problem with the popular “debunking reason” frame is that it distracts us from noticing that the actual process of reasoning, as practiced by humans, is a phenomenon we don’t understand very well yet.  Sure, Descartes may have thought he had it all figured out, and he was wrong; but thinking still exists even after you have rejected naive rationalism, and it’s a mistake to assume it’s the “easy part” to understand. Deliberative thinking, I would guess, is the hard part; that’s why the cognitive processes we understand best and can simulate best are the more “primitive” ones like sensory perception or motor learning.

I think it’s probably better to think of those cognitive processes that distinguish humans from animals as weird and mysterious and special, as “higher-level” abilities, rather than irrelevant and vestigial “degenerate cases”, which is how Heidegger seems to see them.  Even if the “higher” cognitive functions occupy relatively little time in a typical day, they have outsize importance in making human life unique.

Two weirdly similar quotes:

“Three quick breaths triggered the responses: he fell into the floating awareness… focusing the consciousness… aortal dilation… avoiding the unfocused mechanism of consciousness… to be conscious by choice… blood enriched and swift-flooding the overload regions… one does not obtain food-safety freedom by instinct alone… animal consciousness does not extend beyond the given moment nor into the idea that its victims may become extinct… the animal destroys and does not produce… animal pleasures remain close to sensation levels and avoid the perceptual… the human requires a background grid through which to see his universe… focused consciousness by choice, this forms your grid… bodily integrity follows nerve-blood flow according to the deepest awareness of cell needs… all things/cells/beings are impermanent… strive for flow-permanence within…”

–Frank Herbert, Dune, 1965

 

“An animal’s consciousness functions automatically: an animal perceives what it is able to perceive and survives accordingly, no further than the perceptual level permits and no better. Man cannot survive on the perceptual level of his consciousness; his senses do not provide him with an automatic guidance, they do not give him the knowledge he needs, only the material of knowledge, which his mind has to integrate. Man is the only living species who has to perceive reality, which means: to be conscious — by choice.  But he shares with other species the penalty for unconsciousness: destruction. For an animal, the question of survival is primarily physical; for man, primarily epistemological.

“Man’s unique reward, however, is that while animals survive by adjusting themselves to their background, man survives by adjusting his background to himself. If a drought strikes them, animals perish — man builds irrigation canals; if a flood strikes them, animals perish — man builds dams; if a carnivorous pack attacks them animals perish — man writes the Constitution of the United States. But one does not obtain food, safety, or freedom — by instinct.”

–Ayn Rand, For the New Intellectual, 1963

(bold emphasis added, ellipses original).

“Conscious by choice” seems to be pointing at the phenomenon of effortful attention, while “the unfocused mechanism of consciousness” is more like awareness.  There seems to be some intuition here that effortful attention is related to the productive abilities of humanity, our ability to live in greater security and with greater thought for the future than animals do.  We don’t usually “think on purpose”, but when we do, it matters a lot.

We should be thinking of “being conscious by choice” more as a sort of weird Bene Gesserit witchcraft than as either the default state or as an irrelevant aberration. It is neither the whole of cognition, nor is it unimportant — it is a special power, and we don’t know how it works.

Why I Quit Social Media

Epistemic Status: Personal

I’m not a Puritan. I eat dessert, enjoy a good cocktail, and socialize a lot.  I like fun, and I don’t think fun is wrong.

So I really understand the allergy to messages of ascetic self-denial.

But lately I’ve found it necessary to become more like this:

[Image: the Giustiniani Athena statue]

and less like this.

[Image: a woman crying, with mascara running]

More balanced, deliberate, reflective. Less needy, emotionally unstable, dramatic, and attention-seeking.

I’ve written about this process here, here, here, here, and here.  I’ve been at it for about a year.

Why is it worth being less of a drama queen?  In some ways, equanimity is a lot less fun.

But, ultimately, being a drama queen is a dependent’s lifestyle. It makes you unable to function on your own.  As a practical matter, I am not a dependent, and so I sometimes need to do things — in real life, outside of my own head, where the actual state of the world matters.

I also think it’s wrong to constantly interrupt things other people are doing to change the subject to All About Me.

And, basically, that’s what social media does. It distances you from reality, makes you focus on a shadow-world of opinions about opinions about opinions; it makes you more impulsive and emotionally unstable; it incentivizes derailing conversations to fish for ego-strokes.

I don’t dislike petty bullshit — I enjoy it all too much.  I could happily spend eternity picking fights and chasing drama if somehow that were feasible.

So I asked myself “is there, ultimately, anything wrong with living in a world of screams and shadows and impulse pleasures?  Do I actually care about anything else?” And the answer was “Unfortunately, yeah. I have to literally sustain my own life, and there are people I genuinely care about.  So…ok, reality matters.”

And if reality matters, obviously you shouldn’t be doing stuff that makes you into a moron.

Life after social media isn’t hard, in my experience.  Life without one pleasure isn’t miserable, because there are other pleasures. The brain’s pleasure mechanisms are damnably homeostatic; you adjust to about the same amount of average happiness, regardless of how intense or mild the pleasures in your daily life.  I miss the drama of social media now and then, but not most days.

I think if you consider yourself reality-oriented or “serious”, then quitting social media should be overdetermined.

I’m a little more ambivalent about all that — I’m the kind of person who might plug myself into the Experience Machine — but I think as long as we live on a planet with limited resources, a pure life of fantasy is suicidal, and at least sometimes we have to deal with reality.  And we should at least not mislead, or dissipate the efforts of, people who are trying to deal with reality.

Plus, even for dreamers like myself, I think there might someday be a better Annwfn than Facebook.

 

Life Updates Strike Again!

So, I have again changed jobs — I am now working at Starsky Robotics helping build self-driving trucks!

I’m still interested in biotech, but there are a lot of interesting problems in autonomous vehicles that I’m excited to work on. Also, this job is in San Francisco, unlike my last job, and since I am pregnant, I’d like to work in the same state as my future kid.

The great thing about California is that local laws allow freelancing, so I am once again open for business, to the extent that I have time.  If you have a question that could benefit from a personalized literature review, particularly in the biomedical sciences, I can do that, and you can contact me at srconstantin@gmail.com.

Currently my main freelance research project is surveying current experimental avenues for life extension, in cooperation with an investor who’s interested in funding longevity research. I love working on stuff like this, and will be updating this blog with progress.

I’m also going to start seriously preparing to get my cancer research book published, which, I’m told, involves doing your own marketing. I’m going to be looking for opportunities to write shorter articles for magazines/blogs and otherwise get publicity, so if you know of any good venues, please let me know.

Hoe Cultures: A Type of Non-Patriarchal Society

Epistemic status: mostly facts, a few speculations.

TW: lots of mentions of violence, abuse, and rape.

There is a tremendous difference, in pre-modern societies, between those that farmed with the plow and those that farmed with the hoe.

If you’re reading this, you live in a plow culture, or are heavily influenced by one. Europe, the Middle East, and most of Asia developed plow cultures. These are characterized by reliance on grains such as wheat and rice, which provide a lot of calories per acre in exchange for a lot of hard physical labor.  They also involve large working livestock, such as horses, donkeys, and oxen.

Hoe cultures, by contrast, arose in certain parts of sub-Saharan Africa, the Americas, southeast Asia, and Oceania.

Hoe agriculture is sometimes called horticulture, because it is more like planting a vegetable garden than farming.  You clear land with a machete and dig it with a hoe.  This works for crops such as bananas, breadfruit, coconuts, taro, yam, calabashes and squashes, beans, and maize.  Horticulturalists also keep domestic animals like chickens, dogs, goats, sheep, and pigs — but never cattle. They may hunt or fish.  They engage in small-scale home production of pottery and cloth.[1]

Hoe agriculture is extremely productive per hour of labor, much more so than preindustrial grain farming, but requires a vast amount of land for a small population. Horticulturists also tend to practice shifting cultivation, clearing new land when the old land is used up, rather than repeatedly plowing the same field — something that is only possible when fertile land is “too cheap to meter.”  Hoe cultures therefore have lots of leisure, but low population density, low technology, and few material objects.[1]

I live with a toddler, so I’ve seen a lot of the Disney movie Moana, which had a lot of consultation with Polynesians to get the culture right. This chipper little song is a pretty nice illustration of hoe culture: you see people digging with hoes, carrying bananas and fish, singing about coconuts and taro root, making pottery and cloth, and you see a pig and a chicken tripping through the action.

Hoe Culture and Gender Roles

Ester Boserup, in her 1970 book Woman’s Role in Economic Development [2], notes that in hoe cultures women do the hoeing, while in plow cultures men do the plowing.

This is because plowing is so physically difficult that men, with greater physical strength, have a comparative advantage at agricultural labor, while they have no such advantage in horticulture.

Men in hoe cultures clear the land (which is physically challenging; machete-ing trees is quite the upper-body workout), hunt, and engage in war. But overall, hour by hour, they spend most of their time in leisure.  (Or in activities that are not directly economically productive, like politics, ritual, or the arts.)

Women in hoe cultures, as in all known human cultures, do most of the childcare.  But hoeing is light enough work that they can take small children into the fields with them and watch them while they plant and weed. Plowing, hunting, and managing large livestock, by contrast, are forms of work too heavy or dangerous to accommodate simultaneous childcare.

The main gender difference between hoe and plow cultures is, then, that women in hoe cultures are economically productive while women in plow cultures are largely not.

This has strong implications for marriage customs.  In a plow culture, a husband supports his wife; in a hoe culture, a wife supports her husband.

Correspondingly, plow cultures tend to have a tradition of dowry (the bride’s parents compensate the groom financially for taking an extra mouth to feed off their hands) while hoe cultures tend to practice bride price (the groom compensates the bride’s family financially for the loss of a working woman) or bride service (the groom labors for the bride’s family, again as compensation for taking her labor.)

Hoe cultures are much more likely to be polygamous than plow cultures.  Since land is basically free, a man in a hoe culture is rich in proportion to how much labor he can accumulate — and labor means women. The more wives, the more labor.  In a plow culture, however, extra labor must come from men, which usually means hired labor, or slaves or serfs.  Additional wives would only mean more mouths to feed.

Because hoe cultures need women for labor, they allow women more autonomy.  Customs like veiling or seclusion (purdah) are infeasible when women work in the fields.  Hoe-culture women can usually divorce their husbands if they pay back the bride-price.

Barren women, widows, and unchaste women or rape victims in pre-modern plow cultures often face severe stigma (and practices like sati and honor killings) which do not occur in hoe cultures. Women everywhere are valued for their reproductive abilities, and men everywhere have an evolutionary incentive to prefer faithful mates; but in a hoe culture, women have economic value aside from reproduction, and thus society can’t afford to kill them as soon as their reproductive value is diminished.

“Matriarchy” is considered a myth by modern anthropologists; there is no known society, present or past, where women ruled. However, there are matrilineal societies, where descent is traced through the mother, and matrilocal societies, where the groom goes to live near the bride and her family.  All matrilineal and matrilocal societies in Africa are hoe cultures (though some hoe cultures are patrilineal and/or patrilocal.)[3]

The Seneca, a Native American people living around Lake Ontario, are a good example of a hoe culture where women enjoyed a great deal of power. [4] Traditionally, they cultivated the Three Sisters: maize, beans, and squash.  The women practiced horticulture, led councils, had rights over all land, and distributed food and household stores within the clan.  Descent was matrilineal, and marriages (which were monogamous) were usually arranged by the mothers. Of the Seneca wife, Henry Dearborn noted wistfully in his journal, “She lives with him from love, for she can obtain her own means of support better than he.”  Living, childrearing, and work organization were communal within a clan (living within a longhouse) and generally organized by elder women.

Hoe and Plow Cultures Today

A 2012 study [5] found that people descended from plow cultures are more likely to agree with the statements “When jobs are scarce, men should have more right to a job than women” and “On the whole, men make better political leaders than women do” than people descended from hoe cultures.

“Traditional plough-use is positively correlated with attitudes reflecting gender inequality and negatively correlated with female labor force participation, female firm ownership, and female participation in politics.”  This remains true after controlling for a variety of societal variables, such as religion, race, climate, per-capita GDP, history of communism, civil war, and others.

Even among immigrants to Europe and the US, history of ancestral plow-use is still strongly linked to female labor force participation and attitudes about gender roles.

Patriarchy Through a Materialist Lens

Friedrich Engels, in The Origin of the Family, Private Property and the State, was the first to argue that patriarchy was a consequence of the rise of (plow) agriculture.  Alesina et al. summarize him as follows:

He argued that gender inequality arose due to the intensification of agriculture, which resulted in the emergence of private property, which was monopolized by men. The control of private property allowed men to subjugate women and to introduce exclusive paternity over their children, replacing matriliny with patrilineal descent, making wives even more dependent on husbands and their property. As a consequence, women were no longer active and equal participants in community life.

Hoe societies (and hunter-gatherer societies) have virtually no capital. Land can be used, but not really owned, as its produce is unreliable or non-renewable, and its boundaries are too large to guard. Technology is too primitive for any tool to be much of a capital asset.  This is why they are poor in material culture, and also why they are egalitarian; nobody can accumulate more than his neighbors if there just isn’t any way to accumulate stuff at all.

I find the materialistic approach to explaining culture appealing, even though I’m not a Marxist.  Economic incentives — which can be inferred by observing the concrete facts of how a people makes its living — provide elegant explanations for the customs, traditions, and ideals that emerge in a culture.  We do not have to presume that those who live in other cultures are stupid or fundamentally alien; we can assume they respond to incentives just as we do.  And, when we see the world through a materialist lens, we do not hope to change culture by mere exhortation. Oppression occurs when people see an advantage in oppressing; it is subdued when the advantage disappears, or when the costs become too high.  Individual people can follow their consciences even when it differs from the surrounding pressures of their culture, but when we talk about aggregates and whole populations, we don’t expect personal heroism to shift systems by itself.

A materialist analysis of gender relations would say that women are not going to escape oppression until they are economically independent.  And, even in the developed world, women mostly are not.

Women around the world, including in America, are much more likely to live in poverty than men.  This is because women have lower-paying jobs and struggle to support single-mother households. Women everywhere do most of the childcare, and most women have children at some point in their lives, so an economy that does not allow a woman to support and care for children with her own labor is not an economy that will ever allow most women to be economically independent.

Just working outside the home does not make a woman economically independent. If a family is living in a “two-income trap”[6], in which the wife’s income is just enough to pay for the childcare she does not personally provide, then the wife’s net economic contribution to the family is zero.

Sure, much of the “gender pay gap” disappears after controlling for college major and career choice [7][8]. Men report more interest in making a lot of money and being leaders, while women report more interest in being helpful and working with people rather than things. But a lot of this is probably due to the fact that most women rationally assume that they will take time to raise children, and that their husband will be the primary breadwinner, so they are less likely to make early education and career choices on the basis of earning the most money.

Economist Claudia Goldin believes the main reason for the gender pay gap is the cost of temporal flexibility; women want more work flexibility in order to raise children, and so they are paid less.  Childless men and women have virtually no wage disparity.[9]

Since women who will ever have children (which is most women) are still usually economically dependent on men even in the developed world, and strongly disadvantaged if they don’t have a male provider, is it any wonder that women are still more submissive and agreeable, higher in neuroticism and mood disorders, and subject to greater pressure to appeal sexually?  Their livelihood still depends on finding a mate to support them.

In order to change the economic incentives to make women financially independent, it would have to be no big deal to be a single mother. This probably means an economy whose resources were shifted from luxury towards leisure. Mothers of young children need a lot of time away from economic work; if we “bought” time instead of fancy goods with our high-tech productivity gains, a single mother in a technological economy might be able to support children by herself.  But industrial-age workplaces are not set up to allow employees flexibility, and modern states generally put up heavy barriers to easy, flexible self-employment or ultra-frugal living, through licensing laws, zoning regulations, and quality regulations on goods.

Morality and Religion under Hoe Societies

It’s hard to trust what we read about hoe-culture mores, because these generally aren’t societies that develop writing, and what we read is filtered through the opinions of Western researchers or missionaries. But, as far as I can tell, they are mostly animist and polytheist cultures. There are many “spirits” or “gods”, some friendly and some unfriendly, but none supreme.  Magical practices (“if you do this ritual, you’ll get that outcome”)  seem to be common.

Monotheist and henotheist cultures (one god, or one god above all other gods, usually male) seem to be more of a plow-culture thing, though not all plow cultures follow that pattern.

The presence of goddesses doesn’t correlate much with the condition of women in a society, contrary to the (now falsified) belief that pre-agrarian societies were matriarchal and goddess-worshipping.

The Code of Handsome Lake is an interesting example of a moral and religious code written by a man from a hoe culture. Handsome Lake was a religious reformer among the Iroquois in the 18th century.  His Code is heavily influenced by Christianity (his account of Hell and of the apocalypse closely follow the New Testament and are not found in earlier Iroquois beliefs) but includes some distinctively Iroquois features.

Notably, he was strongly against spousal and child abuse, and in favor of family harmony, including this touching passage:

“Parents disregard the warnings of their children. When a child says, “Mother, I want you to stop wrongdoing,” the child speaks straight words and the Creator says that the child speaks right and the mother must obey. Furthermore the Creator proclaims that such words from a child are wonderful and that the mother who disregards then takes the wicked part. The mother may reply, “Daughter, stop your noise. I know better than you. I am the older and you are but a child. Think not that you can influence me by your speaking.” Now when you tell this message to your people say that it is wrong to speak to children in such words.”

Are Hoe Societies Good?

They’re not paradise. (Though, note that Adam and Eve were gardeners in Eden.)

As stated before, horticulturalists are poor. People in hoe cultures don’t necessarily have less to eat than their pre-modern agrarian peers, but they have less stuff, and they are much poorer than anyone in industrialized societies.

Polygamy also has distinct disadvantages.  It promotes venereal disease. It also excludes a population of unmarried men from society, which leads to violence and exposes the excluded men to poverty and isolation.

And you can’t replicate hoe societies across the globe even if you wanted to.  Hoe agriculture is so land-intensive that it couldn’t possibly support a population of seven billion.

Furthermore, while women in hoe societies have more autonomy and are subject to less gendered violence than women in pre-modern plow societies, it’s not clear how that compares to women in modern societies with rule of law. Hoe societies are still traditionalist and communitarian. Men’s and women’s spheres are still separate. Life in a hoe society is not going to exactly match a modern feminist’s ideal.  These aren’t WEIRD people, they’re something quite different, for better or for worse, and it’s hard to know exactly how the experience is different just by reading a few papers.

Hoe cultures are interesting not because we should model ourselves after them, but because they are an existence proof that non-patriarchal societies can exist for millennia.  Conservatives can always argue that a new invention hasn’t been proved stable or sustainable. Hoe cultures have been proved incredibly long-lasting.

References

[1] Braudel, Fernand. Civilization and Capitalism, 15th-18th Century: The Structure of Everyday Life. Vol. 1. Univ. of California Press, 1992.

[2] Boserup, Ester. Woman's Role in Economic Development. Earthscan, 2007.

[3] Goody, Jack, and Joan Buckley. "Inheritance and women's labour in Africa." Africa 43.2 (1973): 108-121.

[4] Jensen, Joan M. "Native American women and agriculture: A Seneca case study." Sex Roles 3.5 (1977): 423-441.

[5] Alesina, Alberto, Paola Giuliano, and Nathan Nunn. "On the origins of gender roles: Women and the plough." The Quarterly Journal of Economics 128.2 (2013): 469-530.

[6] Warren, Elizabeth, and Amelia Warren Tyagi. The Two-Income Trap: Why Middle-Class Parents Are Going Broke. Basic Books, 2007.

[7] Daymont, Thomas N., and Paul J. Andrisani. "Job preferences, college major, and the gender gap in earnings." Journal of Human Resources (1984): 408-428.

[8] Zafar, Basit. "College major choice and the gender gap." Journal of Human Resources 48.3 (2013): 545-595.

[9] Waldfogel, Jane. "Understanding the 'family gap' in pay for women with children." The Journal of Economic Perspectives 12.1 (1998): 137-156.

Patriarchy is the Problem

Epistemic Status: speculative. We’ve got some amateur Biblical exegesis in here, and some mentions of abuse.

I’m starting to believe that patriarchy is the root of destructive authoritarianism, where patriarchy simply means the system of social organization where families are hierarchical and headed by the father. To wit:

  • Patriarchy justifies abuse of wives by husbands and abuse of children by parents
  • The family is the model of the state; pretty much everybody, from Confucius to Plato, believes that governmental hierarchy evolved from familial hierarchy; rulers from George Washington to Ataturk are called “the father of his country”
  • There is no clear separation between hierarchy and abuse. The phenomenon of dominant/submissive behavior among primates closely parallels what humans would consider domestic abuse.

Abuse in Mammalian Context

A study of male vervet monkeys [1] gives an illustration of what I mean by abuse.

Serotonin levels closely track a monkey’s status in the dominance hierarchy. When a monkey is dominant, his serotonin is high, and is sustained at that high level by observing submissive displays from other monkeys.  The more serotonin a dominant monkey has in his system, the more affection and the less aggression he displays; you can see this experimentally by injecting him with a serotonin precursor. When a high status monkey is full of serotonin, he relaxes and becomes more tolerant towards subordinates[2]; the subordinates, feeling less harassed, offer him fewer submissive displays; this rapidly drops the dominant’s serotonin levels, leaving him more anxious and irritable; he then engages in more dominance displays; the submissive monkeys then display more submission, thereby raising the dominant’s serotonin level and starting all over again.

This cycle (known as regulation-dysregulation theory, or RDT) is basically the same as the cycle of abuse in humans, whose stages are rising tension (the dominant is low in serotonin), acute violence (dominance display), reconciliation/honeymoon (the dominant’s serotonin spikes after the subordinate submits), and calm (the dominant is high in serotonin and tolerant towards subordinates.)
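To make the loop easier to see, here is a crude toy simulation — my own sketch with arbitrary constants, not a model from the vervet study: the dominant’s serotonin is topped up by submissive displays and decays otherwise; when it drops below a threshold he escalates, the subordinate re-submits, and the cycle starts over.

```python
# Toy regulation-dysregulation (RDT) loop with made-up parameters.

def simulate_rdt(steps=10):
    serotonin = 1.0      # dominant's serotonin, arbitrary units
    submission = 1.0     # how much deference the subordinate is currently showing
    history = []
    for t in range(steps):
        serotonin = 0.8 * serotonin + 0.4 * submission  # sustained by deference, decays otherwise
        if serotonin < 0.9:                  # tension phase: deference has slackened
            behavior = "dominance display"   # acute aggression forces re-submission
            submission = 1.0
        else:                                # calm/honeymoon phase: tolerant
            behavior = "tolerant"
            submission = max(0.0, submission - 0.3)  # deference gradually slackens
        history.append((t, round(serotonin, 2), behavior))
    return history

for step in simulate_rdt():
    print(step)  # serotonin sawtooths; aggression recurs whenever deference fades
```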

In each case, tolerance extends only as long as submissive behavior continues.  Anger, threats, and violence are the result of any slackening of submissive displays.  I consider this to be a working definition of both dominance and abuse: the abuser is easily slighted and considers any lèse-majesté to be grounds for an outburst.

Most conditions of oppression among humans follow this pattern.  Slaves would be harshly punished for “disrespecting” masters, subordinates must show “respect” to gangsters and warlords on pain of violence, despots require rituals of submission or tribute, etc.  I believe it to be an ancient and even pre-human pattern.

The prototypical opposite of freedom, I think, is slavery, imprisonment, or captivity.  Concepts like “rights” are more modern and less universal. But even ancient peoples would agree that to be subject to the arbitrary will of another, and not free to physically escape from him, is an unhappy state. These are more or less the conditions that cause CPTSD (complex post-traumatic stress disorder) — kidnapping, imprisonment and institutionalization, concentration camps and POW camps, slavery, and domestic abuse — situations in which one is at another’s mercy for a prolonged period of time and unable to escape.

A captive subordinate must appease the abuser in order to avoid retaliation; this has a soul-warping effect. Symptoms of CPTSD include “a chronic and pervasive sense of helplessness, paralysis of initiative, shame, guilt, self-blame, a sense of defilement or stigma” and “attributing total power to the perpetrator, becoming preoccupied with the relationship to the perpetrator, including a preoccupation with revenge, idealization or paradoxical gratitude, seeking approval from the perpetrator, a sense of a special relationship with the perpetrator or acceptance of the perpetrator’s belief system or rationalizations.”  In other words, captives are at risk for developing something like Nietzsche’s “slave morality”, characterized by shame, submission, and appeasement towards the perpetrator.

Here’s John Darnielle talking about the thing:

“My stepfather wanted me to write Marxist poetry; if it didn’t serve the revolution, it wasn’t worthwhile.” I asked him what his mother thought, and he let out a sad laugh. “You have to understand the dynamic of the abused household. What you think doesn’t matter. Your thoughts are passing. They are positions you adopt to survive.”

The physical behaviors of shame (gaze aversion, shifty eyes, nervous smiles, downcast head, and slouched, forward-leaning postures)[3] are also common mammalian appeasement displays; subordinate monkeys and apes also have a “fear smile” and don’t meet the gaze of dominants.[4] It seems quite clear that the psychological problem of chronic shame following abuse comes from having to engage in prolonged appeasement behavior on pain of punishment.

A subordinate primate is not a healthy primate. Robert Sapolsky [5] has an overview article about how low-ranked primates are more stressed and more susceptible to disease in hierarchical species.

“When the hierarchy is stable in species where dominant individuals actively subjugate subordinates, it is the latter who are most socially stressed; this can particularly be the case in the most extreme example of a stable hierarchy, namely, one in which rank is hereditary. This reflects the high rates of physical and psychological harassment of subordinates, their relative lack of social control and predictability, their need to work harder to obtain food, and their lack of social outlets such as grooming or displacing aggression onto someone more subordinate.”

…The inability to physically avoid dominant individuals is associated with stress, and the ease of avoidance varies by ecosystem. The spatial constraints of a two-dimensional terrestrial habitat differ from those of a three-dimensional arboreal or aquatic setting, and living in an open grassland differs from living in a plain dense with bushes. As an extreme example, subordinate animals in captivity have many fewer means to evade dominant individuals than they would in a natural setting.

This coincides with the CPTSD model — social stress correlates with inability to escape.

The physiological results of social stress are cardiovascular and immune:

Prolonged stress adversely affects cardiovascular function, producing (i) hypertension and elevated heart rate; (ii) platelet aggregation and increased circulating levels of lipids and cholesterol, collectively promoting atherosclerotic plaque formation in injured blood vessels; (iii) decreased levels of protective high-density lipoprotein (HDL) cholesterol and/or elevated levels of endangering low-density lipoprotein (LDL) cholesterol; and (iv) vasoconstriction of damaged coronary arteries…In general, mild to moderate transient stressors enhance immunity, particularly the first phase of the immune response, namely innate immunity. Later phases of the stress response are immunosuppressive, returning immune function to baseline. Should the later phase be prolonged by chronic stress, immunosuppression can be severe enough to compromise immune activation by infectious challenges (47, 48). In contrast, a failure of the later phase can increase the risk of the immune overactivity that constitutes autoimmunity.

Autoimmune disorders and weakened disease resistance are characteristic of people with PTSD as well.

Being a captive abuse victim is bad for one’s physical and mental health.  While abuse is “natural” (it appears frequently in nature), it is bad for flourishing in a quite direct and unmistakable way.  Individuals are not, in general, better off under conditions of captivity and abuse.

This abuse/dominance/submission/CPTSD thing is basically about dysfunctions in the second circuit in Leary’s eight-circuit model.  It’s the part of the mind that forms intuitions about social power relations.  Every social interaction between humans has some dominance/submission content; this is normal and probably inevitable, given our mammalian heritage. But Leary’s model is somewhat developmental — to be stuck in the mindset of dominance/submission means that you cannot reach the “higher” functions, such as intellectual thought or more mature moral reasoning.  Prolonged abuse can make people so stuck in submission that they cannot think.

Morality-As-Submission vs. Morality-As-Pattern

Most primates have something like abuse, so I’d expect that all human societies have it too. Patriarchal societies have a normative form of abuse: if the hierarchical family is established as standard, then husbands have certain rights of control and violence over wives, and parents have certain rights of control and violence over children.  In societies with land ownership and monarchs, there are also rights of control and violence of landowners over serfs and slaves, and of rulers over subjects.  Historically, higher-population agrarian societies (think Sumer or neolithic China) had larger and firmer hierarchies than earlier hunter-gatherer and horticultural societies, and probably worse treatment of women.  As Sapolsky notes, stable and particularly inherited hierarchies put greater stress on subordinates. (More about that in a later post.)

To give a stereotypical picture, think of patriarchal agrarian society as Blue in the Spiral Dynamics paradigm.  (This is horoscopey and ahistorical but it gives good archetypes.)  Blue culture means grain cultivation, pyramids and ziggurats, god-kings, temple sacrifices, and the first codes of law.

Not all humans are descended from agrarian-patriarchal cultures, but almost all Europeans and Asians are.

When you have stability, high population, and accumulation of resources, as intensive agriculture allows, you begin to have laws and authorities in a much stronger sense than tribal elders.  Your kings can be richer; your monuments can last longer.  I believe that notions of the absolute and the eternal in morality or religion might develop alongside the ability to have physically permanent objects and lasting power.

And, so, I suspect that this is the origin of the belief that to do right means to obey the father/king, and the worship of supreme gods modeled after a father or king.

To say morality is obedience is not merely to say that it is moral to obey.  Rather, we’re talking about divine command theory.  Goodness is identified with the will of the dominant individual. Inside this headspace, you ask “but what would morality even be if it weren’t a rock to crush me or a chain to bind me?”  It’s fear and submission melded with a sense of the rightness and absolute legitimacy of the dominator.

The “Song of the Sea” is considered by modern Biblical scholars to be the chronologically oldest part of the Bible, dating from the 15th to 5th centuries BC, and echoing praise songs to Mesopotamian gods and kings. God is here no abstract principle or sole creator; he is a “man of war” who defeats other peoples and their gods in battle.  He is to be worshiped not because he is good but because he is fearsome.

But philosophers, even in patriarchal societies, have often had some notion of a “good” which is less like a terrifying warlord and more like a natural law, a pattern in the universe, something to discern rather than someone to submit to.

The ancient Egyptians had ma’at and the Chinese had Heaven, as concepts of abstract justice which wicked earthly rulers could fall short of.  The ancient Greeks had logos, a faculty of reason or speech that allowed one to discern what was good.

Plato neatly disposes of divine command theory in the Euthyphro: if “good” is simply what the gods want, then what should one do if the gods disagree? Since in Greek mythology the gods plainly do disagree, the Good must be something that lies beyond the mere opinion of a powerful individual, human or divine.

As Ben Hoffman put it:

When morality is seen as rules society imposes on us to keep us in line, the superego or parent part is the internalized voice of moral admonition. Likewise, I suspect that in contemporary societies this often includes the internalized voice of the schoolteacher telling you how to do the assignment. This internalized voice of authority feels like an external force compelling you. People often feel tempted to rebel against their own superego or internalized parent.

By contrast, logos and sattva are not seen as internalized narratives – they are described as perceptive faculties. You see what’s right, by seeing the deep structure of reality. The same thing that lets you see the deep patterns in mathematics, lets you see the deep decision-theoretic symmetries underlying truly moral behavior.

This is why it matters so much that theologians such as Maimonides and Augustine were so insistent on the point that God has no body and anthropomorphic references in the Bible are metaphors, and why this point had to be repeated so often and seemed so difficult for their contemporaries to grasp. (Seriously, read The Guide to the Perplexed. It explains separately how each individual Biblical reference to a body part of God is a metaphor — it’s a truly incredible amount of repetition.)

If God has no body, this means that modern (roughly post-Roman-Empire) Jews and Christians worship something more like a principle of goodness than a warlord, even if God is frequently likened to a father or king.  It’s not “might makes right”, but “right makes right.”

The abuse-victim logic of morality-as-submission can have no concept that might might not make right.

But more “mature” ethical philosophies, even if they emerge from authoritarian societies — Christian, Jewish, Confucian, Classical Greek, to name a few that I’m familiar with — can be used as grounds to oppose tyranny and abuse, because they contain the concept of a pattern of justice that transcends the will of any particular man.

Once you can generalize, once you can see pattern, once you notice that humans disagree and kings can be toppled, you have the potential to escape the second-circuit, primate-level, dominant/submissive paradigm.  You can ask “what is right?” and not just “who’s on top?”

An Example of Morality-As-Submission: The Golden Calf

It is generally bad scholarship to read the literal text of the Bible as evidence for what contemporary Jews or Christians believe; that ignores thousands of years of interpretation.  But if you just look at the Bible without context, raw, you can get some kind of an unfiltered impression of the mindset of whoever wrote it — which is quite different from how moderns (religious or not) think, but which still influences us deeply.

So let’s look at Exodus 32-34.

The People of Israel, impatient with Moses taking so long on Mount Sinai, build a golden calf and worship it. Now God gets mad.

7 And the LORD spoke unto Moses: ‘Go, get thee down; for thy people, that thou broughtest up out of the land of Egypt, have dealt corruptly; 8 they have turned aside quickly out of the way which I commanded them; they have made them a molten calf, and have worshipped it, and have sacrificed unto it, and said: This is thy god, O Israel, which brought thee up out of the land of Egypt.’ 9 And the LORD said unto Moses: ‘I have seen this people, and, behold, it is a stiffnecked people.

“Stiff-necked”, meaning stubborn. Meaning “you just do as you damn well please.”  Meaning “you have a will, you choose to do things besides obey me, and that is just galling.”  This is abuser/authoritarian logic: the abuser feels entitled to obedience and especially submission. To be stiff-necked is not to bow the neck.

10 Now therefore let Me alone, that My wrath may wax hot against them, and that I may consume them; and I will make of thee a great nation.’ 11 And Moses besought the LORD his God, and said: ‘LORD, why doth Thy wrath wax hot against Thy people, that Thou hast brought forth out of the land of Egypt with great power and with a mighty hand? 12 Wherefore should the Egyptians speak, saying: For evil did He bring them forth, to slay them in the mountains, and to consume them from the face of the earth? Turn from Thy fierce wrath, and repent of this evil against Thy people. 13 Remember Abraham, Isaac, and Israel, Thy servants, to whom Thou didst swear by Thine own self, and saidst unto them: I will multiply your seed as the stars of heaven, and all this land that I have spoken of will I give unto your seed, and they shall inherit it for ever.’ 14 And the LORD repented of the evil which He said He would do unto His people.

Moses pleads with God to remember his promises and not kill everyone. He even calls the plan of genocide “evil”!  And God, who is here not an implacable force of justice but out-of-control angry, calms down in response to the pleading and moderates his behavior.

But then Moses comes down the mountain, and he gets angry, and he has the sons of Levi slaughter, not everyone, but about 3000 men.

27 And he [Moses] said unto them: ‘Thus saith the LORD, the God of Israel: Put ye every man his sword upon his thigh, and go to and fro from gate to gate throughout the camp, and slay every man his brother, and every man his companion, and every man his neighbour.’ 28 And the sons of Levi did according to the word of Moses; and there fell of the people that day about three thousand men.

Notice how, if you’re at all familiar with abusive family dynamics, God is the primary abusive parent, and Moses is the less-abusive, appeasing parent, who tries to protect the children somewhat but still terrorizes them.

Now, God is going to make sure the Israelites know how grateful they should be for his mercy, and that they should beware lest he do anything worse:

1. And the LORD spoke unto Moses: ‘Depart, go up hence, thou and the people that thou hast brought up out of the land of Egypt, unto the land of which I swore unto Abraham, to Isaac, and to Jacob, saying: Unto thy seed will I give it– 2. and I will send an angel before thee; and I will drive out the Canaanite, the Amorite, and the Hittite, and the Perizzite, the Hivite, and the Jebusite– 3. unto a land flowing with milk and honey; for I will not go up in the midst of thee; for thou art a stiffnecked people; lest I consume thee in the way.’ 4. And when the people heard these evil tidings, they mourned; and no man did put on him his ornaments.  5 And the LORD said unto Moses: ‘Say unto the children of Israel: Ye are a stiffnecked people; if I go up into the midst of thee for one moment, I shall consume thee; therefore now put off thy ornaments from thee, that I may know what to do unto thee.’

Note the mourning and the refusal to put on ornaments. You have to show contrition, you can’t relax and make merry, as long as the parent is angry. It’s a submission behavior. The whole house has to be thrown into gloom until the parent says your punishment is over.

Now Moses goes into the Tent of Meeting to pray, very humbly, for God’s forgiveness of the people.  And here, in this context, is where you find the famous Thirteen Attributes of God’s Mercy.

6. And the LORD passed by before him, and proclaimed: ‘The LORD, the LORD, God, merciful and gracious, long-suffering, and abundant in goodness and truth;  7 keeping mercy unto the thousandth generation, forgiving iniquity and transgression and sin; and that will by no means clear the guilty; visiting the iniquity of the fathers upon the children, and upon the children’s children, unto the third and unto the fourth generation.’ 8. And Moses made haste, and bowed his head toward the earth, and worshipped. 9. And he said: ‘If now I have found grace in Thy sight, O Lord, let the Lord, I pray Thee, go in the midst of us; for it is a stiffnecked people; and pardon our iniquity and our sin, and take us for Thine inheritance.’

God is “long-suffering” because he doesn’t kill literally everyone, when he is begged not to.  This “mercy” is more like the “tolerance” that dominant primates display when they get “enough” appeasement behaviors from subordinates.  Of course, people have long taken this passage as an inspiration for real mercy and grace; but in context and without theological interpretation that is not what it looks like.

Now, there’s a long interval of the new tablets of the law being brought down, and instructions being given for the tabernacle and how to give sin-offerings. Eight days later, in Leviticus 10,  God’s explained how to give a sin-offering and Aaron and his sons are actually going to do it, to make atonement for their sins…

…and they do it WRONG.

1. And Nadab and Abihu, the sons of Aaron, took each of them his censer, and put fire therein, and laid incense thereon, and offered strange fire before the LORD, which He had not commanded them. 2. And there came forth fire from before the LORD, and devoured them, and they died before the LORD. 3. Then Moses said unto Aaron: ‘This is it that the LORD spoke, saying: Through them that are nigh unto Me I will be sanctified, and before all the people I will be glorified.’ And Aaron held his peace. 4. And Moses called Mishael and Elzaphan, the sons of Uzziel the uncle of Aaron, and said unto them: ‘Draw near, carry your brethren from before the sanctuary out of the camp.’  5 So they drew near, and carried them in their tunics out of the camp, as Moses had said. 6 And Moses said unto Aaron, and unto Eleazar and unto Ithamar, his sons: ‘Let not the hair of your heads go loose, neither rend your clothes, that ye die not, and that He be not wroth with all the congregation; but let your brethren, the whole house of Israel, bewail the burning which the LORD hath kindled. And ye shall not go out from the door of the tent of meeting, lest ye die; for the anointing oil of the LORD is upon you.’ And they did according to the word of Moses.

Not only does the appeasement ritual of the sin-offering have to be done, it has to be done exactly right, and if you make an error, the world will explode. And note  the form of the error — the priests take initiative, they light a fire that God didn’t specifically tell them to light.  “Did I tell you to light that?”  And now, since God is angry, nobody else is allowed to act upset about the punishment, lest they get in trouble too.

These are not abstract theological ideas that the authors got out of nowhere. These are things that happen in families.

Growing in Poisoned Soil

I don’t mean to make this an anti-religious rant, or imply that religious people systematically support domestic abuse and tyranny. It was, after all, the story of Exodus that inspired American slaves in their fight for freedom.

The point is that this pattern — abuser-logic and abuse-victim logic — is a recurrent feature in the moral intuitions of everyone in a culture with patriarchal roots.

Here we have punishment, not as a deterrent or as a natural consequence of wrong action, but as rage, the fury of an authority who didn’t get the proper “respect.”

Here we have appeasement of that rage interpreted as the virtue of “humility” or “atonement.”

Here we have an intuitive sense that even generic moral words like “should” or “ought” are blows; they are what a dominant individual forces upon a subordinate.

Look at Psalm 51.  This is a prayer of repentance; this is what David sings after he realizes that he was wrong to commit adultery and murder. Sensible things to repent of, no doubt. But the internal logic, though beautiful and emotionally resonant, is crazypants.

“Behold, I was brought forth in iniquity, and in sin did my mother conceive me.” (Wait, you didn’t do anything wrong when you were a fetus, we’re talking about what you did wrong just now.)

“Purge me with hyssop, and I shall be clean; wash me, and I shall be whiter than snow.”  (Yes, guilt does create a desire for cleansing; but you’re expecting God to do the washing?  Only an external force can make you clean?)

“Hide Thy face from my sins, and blot out all mine iniquities.”  (Um, I’m pretty sure your victim is still dead.)

“The sacrifices of God are a broken spirit; a broken and a contrite heart, O God, Thou wilt not despise.” (AAAAAAAAAAA.)

Even legitimate guilt for serious wrongdoing gets conflated with submission and “broken-spiritedness” and pleading for mercy and an intuition of innate taintedness.  This is how morality works when you process it through the second circuit, through the native mammalian intuitions around dominance/submission.

It’s natural, it’s human, it’s easy to empathize with — and it’s quite insane.

It’s also, I think, related to problems specific to women.

If women are traditionally subordinate in your society — and forty years of women’s lib is nowhere near enough to overcome thousands of years of tradition — then women will disproportionately suffer domestic abuse, and even those who don’t will still inherit the kinds of intuitions that captives always do.

A “good girl” is obedient, “innocent” (i.e. lacking in experience, especially sexual experience), and never takes initiative, because initiative can get you in trouble. A “good girl” internalizes that to be “good” is simply to submit and to appease and please.

How can you possibly eliminate those dysfunctions until you attack their roots?

Women have higher rates of depression and anxiety than men. Girl toddlers also have higher rates of shame in response to failure than boy toddlers [6].  Women also have a significantly lower salivary cortisol response to social stress than men.[7] Blunted cortisol response to stress is what you see in PTSD, chronic fatigue syndrome, and atypical depression, which are all more common in women than men; it occurs more in low-status individuals than high-status ones.[8][9] The psychological and physiological problems most specific to women are also the illnesses associated with low social status and chronic shame.

If we have a society that runs on shame and appeasement, especially for women, then women will be hurt.  Everything we do and think today, including modern liberalism, is built on a base that includes granting legitimacy to abusive power.  I don’t mean this in the sense of “everything is tainted, you must see the world through mud-colored glasses”, but in the sense that this is where our inheritance comes from, these influences are still visible, this is the soil we grew from.

It’s not trivial to break away and create alternatives. People do.  Every concept of goodness-as-pattern or of universal justice is an alternative to abuse-logic, which is always personal and emotional.  But it’s hard to break away completely.

References 

[1]McGuire, Michael T., M. J. Raleigh, and C. Johnson. “Social dominance in adult male vervet monkeys: Behavior-biochemical relationships.” Information (International Social Science Council) 22.2 (1983): 311-328.

[2]Gilbert, Paul, and Michael T. McGuire. “Shame, status, and social roles: Psychobiology and evolution.” (1998).

[3]Keltner, Dacher. “Signs of appeasement: Evidence for the distinct displays of embarrassment, amusement, and shame.” Journal of personality and social psychology 68.3 (1995): 441.

[4]Leary, Mark R., and Robin M. Kowalski. Social anxiety. Guilford Press, 1997.

[5]Sapolsky, Robert M. “The influence of social hierarchy on primate health.” Science 308.5722 (2005): 648-652.

[6]Lewis, Michael, Steven M. Alessandri, and Margaret W. Sullivan. “Differences in shame and pride as a function of children’s gender and task difficulty.” Child development 63.3 (1992): 630-638.

[7]Kirschbaum, Clemens, Stefan Wüst, and Dirk Hellhammer. “Consistent sex differences in cortisol responses to psychological stress.” Psychosomatic Medicine 54.6 (1992): 648-657.

[8]Gruenewald, Tara L., Margaret E. Kemeny, and Najib Aziz. “Subjective social status moderates cortisol responses to social threat.” Brain, behavior, and immunity 20.4 (2006): 410-419.

[9]Miller & Tangney. “Threat, Social-Evaluative, and Self-Conscious Emotion.” (1994).

Gleanings from Double Crux on “The Craft is Not The Community”

Epistemic status: This is a bunch of semi-remembered rephrasings of a conversation.

At the CFAR alumni reunion, John Salvatier and I had a public double crux on my last post.

A double crux is a technique CFAR invented, which I think is much better than a debate. The goal is to simply pin down where exactly two people disagree. This can take a while. Even the best, most respectful debates are adversarial: it’s my opinion vs. yours, and we see which is stronger in an (ideally fair) contest. A double crux is collaborative: we’re just trying to find which is the exact point of contention here, so that if we go on to have an actual debate we won’t be talking past each other.

John’s motivation for disagreeing with my post was that he didn’t think I should be devaluing the intellectual side of the “rationality community”. My post divided projects into community-building (mostly things like socializing and mutual aid) versus outward-facing (business, research, activism, etc.); John thought I was neglecting the importance of a community of people who support and take an interest in intellectual inquiry.

I agreed with him on that point — intellectual activity is important to me — but doubted that we had any intellectual community worth preserving.  I was skeptical that rationalist-led intellectual projects were making much progress, so I thought the reasonable thing to do was to start fresh.

John is actually working on an intellectual project of his own — he’s trying to explore what the building blocks of creative thinking are, and how creative thinking can be improved — and he thinks his work is productive/useful, so that seemed a good place to dig in deeper.

I mentioned that by a lot of metrics, his work doesn’t have a lot of output. He has done a lot of one-on-one conversations and informal experiments with people in the community, but there’s no writeup, and certainly no formal psychological research, papers, or collaboration with psychologists. How could an outsider possibly tell if there’s a real thing here?

John said that I might be over-valuing formality. He’s pretty confident that the “informal” phase of work — the part when you’re just playing with an idea, or planning out your strategy, before you sit down to execute — is actually the most important part, in the sense that it’s highest-leverage. After some discussion, I came to agree with him.

I’ve definitely had the experience that creative work is “bursty” — that most days you produce piles of junk, and some days you produce solid gold, whether it’s writing, math, or code. I’ve also heard this from other people, both friends and famous historical figures.  It also seems that when something’s going right about your “pre-work” cognitive processes — planning, imagining, even emotional attitudes — you do much better work at the formal, sit-down-and-produce-output stage.  Work goes hugely better when the “muse” is friendly.

John additionally believes that it’s possible to “train your muse” to help you work better, and said that learning to do this himself allowed him to contribute much better to open-source software projects (where he built a statistics library.)

He also pointed out that when it comes to dealing with the distant future, general-purpose and speculative cognitive processes will have to be more important than trained skills, because the future will contain unfamiliar situations that we haven’t trained for. People who excel at the sit-down-and-execute activities that help you succeed in your field aren’t necessarily going to be able to reason about the weirdness of a changing world.

(I agreed that the ability to “philosophize” well seems to be much rarer than the ability to execute well; I’ve seen many prominent computer scientists whose theories about general intelligence just don’t make sense.)

So the speculative, philosophical, imaginative stuff that comes before sitting down and executing is important for success, important for humanity, and maybe something we can learn to do better. John certainly thinks so, and wants the rationality community to be a sort of laboratory or nursery for these ideas.

It’s also true that formally executing on these ideas can be really hard, if you define “formally” strictly enough. Here’s Scott Alexander reflecting on the bureaucratic hell of trying to get a psychiatry study on human subjects approved by an IRB — when it only involved giving them a questionnaire!  If that’s what it takes to do academic experimental research on humans, I don’t want to claim that anybody who’s thinking about the human mind without publishing papers can be rounded down to “doing nothing.”

That still leaves us with the question of “how do I know — not an IRB, not the ‘general public’, but Sarah, your friendly acquaintance — that you’re making real progress?”  I’m still going to need to be shown some kind of results, if not peer-reviewed ones.  This is why I’m a fan of blogging and something in the neighborhood of “citizen science.”  If a programmer tests the speed of two different programs and writes up the results, code included, I believe them, and if I’m skeptical, I can try to duplicate their results. It’s in the spirit of the scientific method, even if it’s not part of the official edifice of Science(TM).

So, John and I still have an unresolved disagreement about the general status of these “how to think real good” projects in the community.  He thinks they’re moving forward; I still haven’t seen evidence that convinces me.  This is our “double crux” — both of us agree (the “double” part) that it’s the key (“crux”) to our disagreement.

But I definitely agree with John that if there were promising ways to “think real good” being developed in our community, then it would be important to support and encourage that exploration.

One interesting thing that we had in common was that we both viewed “community” from a strongly individualist standpoint. John said he would evaluate someone as a potential collaborator on a project pretty much the same way whether they were a community member or not — track records for success, recommendations from friends he respects, and so on.  The “community” is useful because it’s a social network that sometimes floats cool people to his attention.  Deeper notions of tribe or belonging didn’t seem to apply, at least concerning his intellectual aims.  He had no interest in kicking people out for not following community standards, or trying to get everybody in the community to be a certain way; if a person considered themselves “part of the community” but John couldn’t see benefit from associating with that person, he just wouldn’t associate.  This is not everybody’s point of view — in fact, some people might say that John’s idea of a community is equivalent to not having a community at all.  So a lot of the things that seem to spark a lot of debate these days — community standards, community norms, etc — just didn’t show up in this double-crux at all, because neither of us really had strong intuitions about governance or collective issues.

Mostly, I came away with a lot of food for thought about the reflection vs. execution thing.  If there’s a spectrum between musing about the thing and doing the thing, I’m pretty far towards the “musing” side relative to the general population, so I’d generally assumed that I do too much musing and not enough executing.  “Head-in-the-clouds dreamer” and “impractical intellectual” and all that.  (Introspection falls into this category too; thinking too much about your own psyche is “navel-gazing”.)  But reflecting well seems to be incredibly high-reward relative to the time and effort spent, for compounding reasons. Strategizing so that you work on the right project, or putting attention into your mental health now so that you’re systematically more productive in the future, has a much bigger impact than just spending one more marginal hour on the daily slog.  Reflecting and strategizing gave my friend Satvik much more success at work.

It’s always felt a little presumptuous to me — like “who am I to think about what I’m doing? I’m supposed to keep my head down, keep slogging, and not ask questions!  Isn’t it terribly selfish to wonder what helps me do my best, rather than just doing my duty?”  But that’s a set of norms that gets applied to children, soldiers, and laborers (and maybe it shouldn’t even then), not to people like me. My peers expect that a person who does “knowledge work” for a living and writes essays will, of course, reflect on what she’s doing.

So maybe I ought to be going back and reading what reflective people write, taking it seriously this time around. “The unexamined life is not worth living.”  What if you literally meant that?  What if thinking about stuff was not a half-forbidden luxury but the most important thing about being human?

The Craft is Not The Community

Epistemic status: argumentative. I expect this to start a discussion, not end it.

“Company culture” is not, as I’ve learned, a list of slogans on a poster.  Culture consists of the empirical patterns of what’s rewarded and punished within the company. Do people win promotions and praise by hitting sales targets? By coming up with ideas? By playing nice?  These patterns reveal what the company actually values.

And, so, with community cultures.

It seems to me that the increasingly ill-named “Rationalist Community” in Berkeley has, in practice, a core value of “unconditional tolerance of weirdos.”  It is a haven for outcasts and a paradise for bohemians. It is a social community based on warm connections of mutual support and fun between people who don’t fit in with the broader society.

I think it’s good that such a haven exists. More than that, I want to live in one.

I think institutions like sharehouses and alloparenting and homeschooling are more practical and humane than typical American living arrangements; I want to raise children with far more freedom than traditional parenting allows; I believe in community support for the disabled and mentally ill and mutual aid for the destitute.  I think runaways and sexual minorities deserve a safe and welcoming place to go.  And the Berkeley community stands a reasonable chance of achieving those goals!  We’re far from perfect, and we obviously can’t extend to include everyone (esp. since the cost of living in the Bay is nontrivial), but I like our chances. I think we may actually, in the next ten years, succeed at building an accepting and nurturing community for our members.

We’ve built, over the years, a number of sharehouses, a serious plan for a baugruppe, preliminary plans for an unschooling center, and the beginnings of mutual aid organizations and dispute resolution mechanisms.  We’re actually doing this.  It takes time, but there’s visible progress on the ground.

I live on a street with my friends as neighbors. Hardly anybody in my generation gets to say that.

What we’re not doing well at, as a community, is external-facing projects.

And I think it’s time to take a hard look at that, without blame or judgment.

The thing about external-outcome-oriented projects is that they require standards. You have to be able to reject people for incompetence, and expect results from your colleagues.  I don’t think there’s any other way to achieve goals.

That means that an external-oriented project can’t actually serve all of a person’s emotional needs.  It can’t give you unconditional love. It can’t promise you a vibrant social scene. It can’t give you a place of refuge when your life goes to hell.  It can’t replace family or community.

As Robert Frost said, “Home is the place where, when you have to go there, they have to take you in.”

But Tesla Motors and MIT don’t have to take you in. And they wouldn’t work if they did.

Internally focused groups, whose goals are about the well-being of their own members, are intrinsically different. You have to care more about inclusion, consensus, and making the process itself rewarding and enjoyable for the participants. If you’re organizing parties for each other, making the social group gel well and making everyone feel welcome is not a side issue — it’s part of the main goal.  A Berkeley community organization that didn’t serve the people who currently live in Berkeley and meet their needs would no longer be an organization for our community; you can’t fire the community and get another.  The whole point is benefiting these specific people.

An externally-focused goal, by contrast, can and should be “no respecter of persons” — you have to focus on achieving good outcomes, regardless of who’s involved.

So far, when members of our community focus on external goals, I think they’ve done much better when they haven’t tried to marry those goals with making community institutions.

Some rationalists have created successful startups and many more have successful careers in the tech industry — but these are basically never “rationalist endeavors”, staffed exclusively by community members or focused on serving this community.  And they shouldn’t be. If you want to build a company, you hire the most competent people for the job, not necessarily your friends or neighbors. A company is oriented towards an external outcome, and so has to be objective and strategic about that goal. It’s by nature outward-facing, not inward-facing to the community.

My own outward-facing goal is to make an impact on treating disease.  Mainly I’m working towards that through working in drug development — at a company which is by no means a “rationalist community project.” It shouldn’t be! What we need are good biologists and engineers and data scientists, regardless of what in-jokes they tell or who they’re friends with.

In the long run, I hope to work on things (like anti-aging or tighter bench-to-bedside feedback loops) that are somewhat more controversial. But I don’t think that changes the calculus. You still want the most competent people you can get, who are also willing to get on board with your mission. Idealism and radicalism don’t negate the need for excellence, if you’re working on an external goal.

Some other people in the community have more purely intellectual projects that are closer to Eliezer Yudkowsky’s original goals. To research artificial intelligence; to develop tools for training Tetlock-style good judgment; to practice philosophical discourse.  But I still think these are ultimately outcome-focused, external projects.

Artificial intelligence research is science, and requires the strongest possible computer scientists and engineers. (And perhaps cognitive scientists and philosophers.) To their credit, I think most people working on AI are aware of the need for expertise and are trying to attract great talent, but I still think it needs to be said.

“Good judgment” or reducing cognitive biases is social science, and requires people with expertise in psychology, behavioral economics, decision theory, cognitive science, and the like. It might also benefit from collaboration with people who work in finance, who (according to Tetlock’s research) are more effective than average at avoiding cognitive biases, and have a long tradition of valuing strategy and quantitative thinking.

Even philosophical discourse, in my opinion, is ultimately external-outcome-focused. For all that it’s hard to measure success, the people who want to create better discourse norms do have a concern with quality, and ultimately consider this a broad issue affecting modern society, not exclusively a Berkeley-local issue.  Progress on improving discourse should produce results (in the form of writing or teaching) that can be shared with the wider world. It might be worth prioritizing good humanists, writers, teachers, and scholars who have a track record of building high-quality conversations.

None of these projects need to be community-focused!  In fact, I think it would be better if they freed themselves from the Berkeley community and from the particular quirks and prejudices of this group of people. It doesn’t benefit your ability to do AI research that you primarily draw your talent from a particular social group.  It also doesn’t straightforwardly benefit the social group that there’s a lot of overlap with AI research.  (Is your research going to make you better at babysitting? Or cooking? Or resolving roommate drama?)

Cross-pollination between the Berkeley community and outcome-oriented projects would still be good. After all, ambitious people make good company!  I don’t think that the Bay Area is going to stop being a business and academic hub any time soon, and it makes sense for there to be friendships and relationships between people who primarily focus on community and people who primarily focus on external projects.  (After all, that’s one traditional division of labor in a marriage.)

But I think it muddies the water tremendously when people conflate community-building with external-facing projects.

Does maintaining good social cohesion within the Berkeley community actually advance the art of human rationality? I’m skeptical, because rationality training empirically doesn’t improve our scores on reasoning questions.  [I seem to recall, though I can’t find the source, that community members also don’t score higher than other well-educated people on the Cognitive Reflection Test, a standard measure of cognitive bias.] [ETA: I remembered wrong! As of the 2012 LessWrong survey, LessWrongers scored significantly better on cognitive bias questions than the participants in the original papers.  So it’s still possible, though not obvious, that we’re in some sense a more-rational-than-average community.]  If we’re not actually more rational than you’d expect in the absence of a community, why should rationality-promoters necessarily focus on community-building within Berkeley? Social cohesion is good for people who live together, but it’s a stretch to say that it promotes the cause of critical thinking in general.

Does having fun discussions with friends advance the state of human discourse?  Does building interesting psychological models and trying self-help practices advance the state of psychology?  Again, it’s really easy to confuse that with highbrow forms of just-for-fun socializing. Which are good in themselves, because they are enjoyable and rewarding for us!  But it’s disingenuous to call that progress in a global and objective sense.

I consider charismatic social crazes to be essentially a form of entertainment. People enjoy getting swept up in the emotional thrill of a cult of personality or mass movement for pretty much the same reasons they enjoy falling in love, watching movies, or reading adventure stories. Thrills are personal (they only create pleasure for the recipient and don’t spill over much to the wider world) and temporary (you can’t stay thrilled or entertained by the same thing forever).  Interpersonal thrills, unlike works of art, are inherently ephemeral; they last only as long as the personal relationship does.  These factors place limits on how much value can be derived from charisma alone, if it doesn’t build more lasting outcomes.

That means personality cults and mass enthusiasms belong in the “community-building” bucket, not the “outward-facing project” bucket. Even from a community perspective, you might not think they’re a great idea, and that’s a separate discussion. But I’m primarily pushing back against the idea that they can be world-saving projects.  Something that only affects us and the insides of our heads, without leaving any lasting products or documents that can be shared with the world, is a purely internal affair.  Essentially, it’s just a glorified personal relationship.  And so it should be evaluated on the basis of whether it’s good for the people involved and the people they have personal relationships with. You look at it wearing your “community member” hat, not your “world-changing” hat.  Even if it’s nominally a nonprofit or a corporation, or associated with some ideology, if it doesn’t produce something for the world at large, it’s a community institution.

(An analogy is fandom debates. Sometimes these pose as political activism, but they are really arguments about fiction, by fans and for fans, with barely any impact on the non-fandom world. Fandom is a leisure activity, and so fandom debates are also a leisure activity.  Real activism, as practiced by professionals, is work; it’s not always fun, has standards for competence, and has tangible external goals that matter to people other than the activists themselves.)

I think distinguishing external-facing goals from community goals sidesteps the eternal debates over “what should the rationalist community be, and who should be in it?”

I think, in practice, the people who go to the same events in Berkeley, live together, parent together, and regularly communicate with each other, form a community. That community exists and deserves the love and attention of the people who value being part of it.  Not for any external reason, but, as they say in Red Dawn, “because we live here.”  We are people, our quality of life matters, our friendships matter, and putting effort into making our lives good is valuable to us.  We won’t choose the universal best way of life for all mankind, because that doesn’t exist; we’ll have the community norms and institutions that suit us, which is what having a local community means.

But there are individual people who are dissatisfied because that particular community, as it exists today, is not well-suited to accomplishing their external-facing goals. And I think that’s also a valid concern, and the natural solution is to divorce those goals from the purely communitarian ones. If you wonder “why doesn’t anybody around here care about my goal?” the natural thing to do is to focus on finding collaborators who do care about your goal — who may not be here!

If you’re frustrated that this isn’t a community based around excellence, I think you’ll be more likely to find what you’re looking for in institutions that have external goals and standards for membership. Some of those exist already, and some are worth creating.

A local, residential community isn’t really equipped to be a team of superstars.  Certainly a multigenerational community can’t be a team of superstars — you can’t just exclude someone’s kid if they don’t make the cut.

I don’t want to overstate this — Classical Athens was a town, and it had a remarkable track record of producing human achievement. But even there, we were talking about a population of 300,000 people.  Most of them didn’t go down in history.  Most of them were the “populace” that Plato thought were not competent to rule.  90% of them weren’t even adult male citizens. I don’t know how you build a new Athens, but it’s important to remember that it’s going to contain a lot of farming and weaving along with the philosophy and poetry.

Small teams of excellent people, though, are pretty much the tried-and-true formula for getting external-facing things done, whether practical or theoretical.  And the usual evaluative tools of industry and academia are, I think, correct in outline: judge by track records, not by personal relationships; measure outcomes objectively; consider ideas that challenge your preconceptions; publish, or ship, your results.

I think more of us who have concrete external goals should be seeking these kinds of focused teams, and not relying on the residential community to provide them.

In Defense of Individualist Culture

Epistemic Status: Pretty much serious and endorsed.

College-educated Western adults in the contemporary world mostly live in what I’d call individualist environments.

The salient feature of an individualist environment is that nobody directly tries to make you do anything.

If you don’t want to go to class in college, nobody will nag you or yell at you to do so. You might fail the class, but this is implemented through a letter you get in the mail or on a registrar’s website.  It’s not a punishment, it’s just an impersonal consequence.  You can even decide that you’re okay with that consequence.

If you want to walk out of a talk in a conference designed for college-educated adults, you can do so. You will never need to ask permission to go to the bathroom. If you miss out on the lecture, well, that’s your loss.

If you slack off at work, in a typical office-job environment, you don’t get berated. And you don’t have people watching you constantly to see if you’re working. You can get bad performance reviews, you can get fired, but the actual bad news will usually be presented politely.  In the most autonomous workplaces, you can have a lot of control over when and how you work, and you’ll be judged by the results.

If you have a character flaw, or a behavior that bothers people, your friends might point it out to you respectfully, but if you don’t want to change, they won’t nag, cajole, or bully you about it. They’ll just either learn to accept you, or avoid you. There are extremely popular advice columns that try to teach this aspect of individualist culture: you can’t change anyone who doesn’t want to change, so once you’ve said your piece and they don’t listen, you can only choose to accept them or withdraw association.

The basic underlying assumption of an individualist environment or culture is that people do, in practice, make their own decisions. People believe that you basically can’t make people change their behavior (or, that techniques for making people change their behavior are coercive and thus unacceptable.)  In this model, you can judge people on the basis of their decisions — after all, those were choices they made — and you can decide they make lousy friends, employees, or students.  But you can’t, or shouldn’t, cause them to be different, beyond a polite word of advice here and there.

There are downsides to these individualist cultures or environments.  It’s easy to wind up jobless or friendless, and you don’t get a lot of help getting out of bad situations that you’re presumed to have brought upon yourself. If you have counterproductive habits, nobody will guide or train you into fixing them.

Captain Awkward’s advice column is least sympathetic to people who are burdens on others — the depressive boyfriend who needs constant emotional support and can’t get a job, the lonely single or heartbroken ex who just doesn’t appeal to his innamorata and wants a way to get the girl.  His suffering may be real, and she’ll acknowledge that, but she’ll insist firmly that his problems are not others’ job to fix.  If people don’t like you — tough! They have the right to leave.

People don’t wholly “make their own decisions”.  We are, to some degree, malleable by culture and social context. The behaviorist or sociological view of the world would say that individualist cultures are gravely deficient because they don’t put any attention into setting up healthy defaults in environment or culture.  If you don’t have rules or expectations or traditions about food, or a health-optimized cafeteria, you “can” choose whatever you want, but in practice a lot of people will default to junk.  If you don’t have much in the way of enforcement of social expectations, in practice a lot of people will default to isolation or antisocial behavior. If you don’t craft an environment or uphold a culture that rewards diligence, in practice a lot of people will default to laziness.  “Leaving people alone”, says this argument, leaves them in a pretty bad place.  It may not even be best described as “leaving people alone” — it might be more like “ripping out the protections and traditions they started out with.”

Lou Keep, I think, is a pretty good exponent of this view, and summarizer of the classic writers who held it. David Chapman has praise for the “sane, optimistic, decent” societies that are living in a “choiceless mode” of tradition, where people are defined by their social role rather than individual choices.  Duncan Sabien is currently trying to create a (voluntary) intentional community designed around giving up autonomy in order to be trained/social-pressured into self-improvement and group cohesion.  There are people who actively want to be given external structure as an aid to self-mastery, and I think their desires should be taken seriously, if not necessarily at face value.

I see a lot of writers these days raising problems with modern individualist culture, and it may be an especially timely topic. The Internet is a novel superstimulus, and it changes more rapidly, and affords people more options, than ever before.  We need to think about the actual consequences of a world where many people are in practice being left alone to do what they want, and clearly not all the consequences are positive.

But I do want to suggest some considerations in favor of individualist culture — that often-derided “atomized modern world” that most of us live in.

We Aren’t Clay

It’s a common truism that we’re all products of our cultural environment. But I don’t think people have really put together the consequences of the research showing that it’s not that easy to change people through environmental cues.

  • Behavior is very heritable. Personality, intelligence, mental illness, and social attitudes are all well established as being quite heritable.  The list of the top ten most replicated findings in behavioral genetics starts with “all psychological traits show significant and substantial genetic influence”, which Eric Turkheimer has called the “First Law of behavioral genetics.”  A significant proportion of behavior is also explained by “nonshared environment”, which means it isn’t genetic and isn’t a function of the family you were raised in; it could include lots of things, from peers to experimental error to individual choice.
  • Brainwashing doesn’t work. Cult attrition rates are high, and “brainwashing” programs of POWs by the Chinese after the Korean War didn’t result in many defections.
  • There was a huge boom in the 1990’s and 2000’s in “priming” studies — cognitive-bias studies that showed that seemingly minor changes in environment affected people’s behavior.  A lot of these findings didn’t replicate. People don’t actually walk slower when primed with words about old people. People don’t actually make different moral judgments when primed with words or videos of cleanliness or disgusting bathrooms.  Being primed with images of money doesn’t make people more pro-capitalist.  Girls don’t do worse on math tests when primed with negative stereotypes. Daniel Kahneman himself, who publicized many of these priming studies in Thinking, Fast and Slow, wrote an open letter to priming researchers warning that they’d have to start replicating their findings or lose credibility.
  • Ego depletion failed to replicate as well; using willpower doesn’t make you too “tired” to use willpower later.
  • The Asch Conformity Experiment was nowhere near as extreme as casual readers generally think: the majority of people didn’t change their answers to wrong ones to conform with the crowd, only 5% of people always conformed, and 25% of people never conformed.
  • The Sapir-Whorf Hypothesis has generally been found to be false by modern linguists: the language one speaks does not determine one’s cognition. For instance, people who speak a language that uses a single word for “green” and “blue” can still visually distinguish the colors green and blue.
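To make the terms in the first bullet concrete: in classical twin studies, heritability, shared environment, and nonshared environment are variance components, often estimated (to a first approximation) with Falconer’s formulas from the trait correlations of identical and fraternal twin pairs. Here’s a minimal sketch in Python; the correlations are invented placeholders, not real data.

```python
# Minimal sketch of Falconer's classical twin-study decomposition.
# The correlations below are illustrative placeholders, not real data.

r_mz = 0.60  # trait correlation between identical (MZ) twin pairs
r_dz = 0.35  # trait correlation between fraternal (DZ) twin pairs

h2 = 2 * (r_mz - r_dz)   # A: additive genetic variance ("heritability")
c2 = 2 * r_dz - r_mz     # C: shared (family) environment
e2 = 1 - r_mz            # E: nonshared environment, plus measurement error

print(f"heritability (A): {h2:.2f}")                    # 0.50 with these numbers
print(f"shared environment (C): {c2:.2f}")              # 0.10
print(f"nonshared environment + error (E): {e2:.2f}")   # 0.40
```

The relevant feature is that “nonshared environment” is defined residually: it’s whatever makes identical twins raised together differ, which is why it can cover anything from peers to measurement error to individual choice.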

Scott Alexander said much of this before, in Devoodooifying Psychology.  It’s been popular for many years to try to demonstrate that social pressure or subliminal cues can make people do pretty much anything.  This seems to be mostly wrong.  The conclusion you might draw from the replication crisis along with the evidence from behavioral genetics is “People aren’t that easily malleable; instead, they behave according to their long-term underlying dispositions, which are heavily influenced by inheritance.”  People may respond to incentives and pressures (the Milgram experiment replicated, for instance), but not to trivial external pressures, and they can actually be quite resistant to pressure to wholly change their lives and values (becoming a cult member or a Communist.)

Those who study culture think that we’re all profoundly shaped by culture, and to some extent that may be true. But not as much or as easily as social scientists think.  The idea of mankind as arbitrarily malleable is an appealing one to marketers, governments, therapists, or anyone who hopes that it’s easy to shift people’s behavior.  But this doesn’t seem to be true.  It might be worth rehabilitating the notion that people pretty much do what they’re going to do.  We’re not just swaying in the breeze, waiting for a chance external influence to shift us. We’re a little more robust than that.

People Do Exist, Pretty Much

People try to complicate the notion of “person” — what is a person, really? Do individuals even exist?  I would argue that a lot of this complication is less compelling than it sounds.

A lot of theorists suggest that people have internal psychological parts (Plato, Freud, Minsky, Ainslie) or are part of larger social wholes (Hegel, Heidegger, lots and lots of people I haven’t read).  But these, while suggestive, are metaphors and hypotheses. The basic, boring fact, usually too obvious to state, is that most of your behavior is proximately caused by your brain (except for reflexes, which are controlled by your spinal cord.)  Your behavior is mostly due to stuff inside your body; other people’s behavior is mostly due to stuff inside their bodies, not yours.  You do, in fact, have much more control over your own behavior than over others’.

“Person” is, in fact, a natural category; we see people walking around and we give them names and we have no trouble telling one person apart from another.

When Kevin Simler talks about “personhood” being socially constructed, he means a role, like “lady” or “gentleman”: the default assumptions that are made about people in a given context. This is a social phenomenon — of course it is, by design!  He’s not literally arguing that there is no such entity as Kevin Simler.

I’ve seen Buddhist arguments that there is no self, only passing mental states.  Derek Parfit has also argued that personal identity doesn’t exist.  I think that if you weaken the criterion of identity to statistical similarity, you can easily say that personal identity pretty much exists.  People pretty much resemble themselves much more than they resemble others. The evidence for the stability of personality across the lifespan suggests that people resemble themselves quite a bit, in fact — different timeslices of your life are not wholly unrelated.
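As a toy illustration of what “statistical similarity” could mean here, imagine giving each (hypothetical) person a stable underlying trait vector and treating each timeslice of their life as that vector plus noise; different slices of the same person then correlate far more strongly than slices of different people. All the numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: each person has a stable trait vector; each "timeslice"
# is that vector plus noise.  All numbers are invented.
n_people, n_traits, noise = 50, 5, 0.4
baselines = rng.normal(size=(n_people, n_traits))
slice_a = baselines + noise * rng.normal(size=(n_people, n_traits))
slice_b = baselines + noise * rng.normal(size=(n_people, n_traits))

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

within = np.mean([corr(slice_a[i], slice_b[i]) for i in range(n_people)])
between = np.mean([corr(slice_a[i], slice_b[(i + 1) % n_people])
                   for i in range(n_people)])

print(f"same person, different times: mean r = {within:.2f}")
print(f"different people:             mean r = {between:.2f}")
```

That weak, statistical sense of identity is all the argument here needs.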

Self-other boundaries can get weird in certain mental conditions: psychotics often believe that someone else is implanting thoughts inside their heads, people with DID have multiple personalities, and some kinds of autism involve a lot of suggestibility, imitation, and confusion about what it means to address another person.  So it’s empirically true that the sense of identity can get confused.

But that doesn’t mean that personal identity doesn’t usually work in the “normal” way, or that the normal way is an arbitrary convention. It makes sense to distinguish Alice from Bob by pointing to Alice’s body and Bob’s body.  It’s a distinction that has a lot of practical use.

If people do pretty much exist and have lasting personal characteristics, and are not all that malleable by small social or environmental influences, then modeling people as individual agents who want things isn’t all that unreasonable, even if it’s possible for people to have inconsistent preferences or be swayed by social pressure.

And cultural practices which acknowledge the reality that people exist — for example, giving people more responsibility for their own lives than they have over other people’s lives — therefore tend to be more realistic and attainable.

 

How Ya Gonna Keep Em Down On The Farm

Traditional cultures are hard to keep, in a modern world.  To be fair, pro-traditionalists generally know this.  But it’s worth pointing out that ignorance is inherently fragile.  As Lou Keep points out, beliefs that magic can make people immune to bullets can be beneficial, as they motivate people to pull together and fight bravely, and thus win more wars. But if people find out the magic doesn’t work, all that benefit gets lost.

Is it then worth protecting gri-gri believers from the truth?  Or protecting religious believers from hearing about atheism?  Really? 

The choiceless mode depends on not being seriously aware that there are options outside the traditional one.  Maybe you’ve heard of other religions, but they’re not live options for you. Your thoughts come from inside the tradition.

Once you’re aware that you can pick your favorite way of life, you’re a modern. Sorry. You’ve got options now.

Which means that you can’t possibly go back to a premodern mindset unless you are brutally repressive about information about the outside world, and usually not even then.  Thankfully, people still get out.

Whatever may be worth preserving or recreating about traditional cultures, it’s going to have to be aspects that don’t need to be maintained by forcible ignorance.  Otherwise it’ll have a horrible human cost and be ineffective.

Independence is Useful in a Chaotic World

Right now, anybody trying to build a communitarian alternative to modern life is in an underdog position.  If you take the Murray/Putnam thesis seriously — that Americans have less social cohesion now than they did in the mid-20th century, and that this has had various harms — then that’s the landscape we have to work with.

Now, that doesn’t mean that communitarian organizations aren’t worth building. I participate in a lot of them myself (group houses, alloparenting, community events, mutual aid, planning a homeschooling center and a baugruppe).  Some Christians are enthusiastic about a very different flavor of community participation and counterculture-building called the Benedict Option, and I’m hoping that will work out well for them.

But, going into such projects, you need to plan for the typical failure modes, and the first one is that people will flake a lot.  You’re dealing with moderns! They have options, and quitting is an option.

The first antidote to flaking that most people think of — building people up into a frenzy of unanimous enthusiasm so that it doesn’t occur to them to quit — will probably result in short-lived and harmful projects.

Techniques designed to enhance group cohesion at the expense of rational deliberation — call-and-response, internal jargon and rituals, cults of personality, suppression of dissent  — will feel satisfying to many who feel the call of the premodern, but aren’t actually that effective at retaining people in the long term.  Remember, brainwashing isn’t that strong.

And we live in a complicated, unstable world.  When things break, as they will, you’d like the people in your project to avoid breaking.  That points in the direction of  valuing independence. If people need a leader’s charisma to function, what are they going to do if something happens to the leader?

Rewarding Those Who Can Win Big

A traditionalist or authoritarian culture can help people by guarding against some kinds of failure (families and churches can provide a social safety net, rules and traditions can keep people from making mistakes that ruin their lives), but it also constrains the upside, preventing people from creating innovations that are better than anything within the culture.

An individualist culture can let a lot of people fall through the cracks, but it rewards people who thrive on autonomy. For every abandoned and desolate small town with shrinking economic opportunity, there were people who left that small town for the big city, people whose lives are much better for leaving.  And for every seemingly quaint religious tradition, there are horrible abuse scandals under the surface.  The freedom to get out is extremely important to those who aren’t well-served by a traditional society.

It’s not that everything’s fine in modernity. If people are getting hurt by the decline of traditional communities — and they are — then there’s a problem, and maybe that problem can be ameliorated.

What I’m saying is that there’s a certain kind of justice that says “at the very least, give the innocent and the able a chance to win or escape; don’t trade their well-being for that of people who can’t cope well with independence.”  If you can’t end child abuse, at least let minors run away from home. If you can’t give everybody a great education, at least give talented broke kids scholarships.  Don’t put a ceiling on anybody’s success.

Immigrants and kids who leave home by necessity (a lot of whom are LGBT and/or abused) seem to be rather overrepresented among people who make great creative contributions.  “Leaving home to seek your freedom and fortune” is kind of the quintessential story of modernity.  We teach our children songs about it.  Immigration and migration are where a lot of the global growth in wealth comes from.  It was my parents’ story — an immigrant who came to America and a small-town girl who moved to the city.  It’s also inherently a pattern that disrupts traditions and leaves small towns with shrinking populations and failing economies.

Modern, individualist cultures don’t have a floor — but they don’t have a ceiling either. And there are reasons for preferring not to allow ceilings. There’s the justice aspect I alluded to before — what is “goodness” but the ability to do valuable things, to flourish as a human? And if some people are able to do really well for themselves, isn’t limiting them in effect punishing the best people?

Now, this argument isn’t an exact fit for real life.  It’s certainly not the case that everything about modern society rewards “good guys” and punishes “bad guys”.

But it works as a formal statement. If the problem with choice is that some people make bad choices when not restricted by rules, then the problem with restricting choice is that some people can make better choices than those prescribed by the rules. The situations are symmetrical, except that in the free-choice scenario, the people who make bad choices lose, and in the restricted scenario, the people who make good choices lose.  Which one seems more fair?

There’s also the fact that in the very long run, only existence proofs matter.  Does humanity survive? Do we spread to the stars?  These questions are really about “do at least some humans survive?”, “do at least some humans develop such-and-such technology?”, etc.  That means allowing enough diversity or escape valves or freedom so that somebody can accomplish the goal.  You care a lot about not restricting ceilings.  Sure, most entrepreneurs aren’t going to be Elon Musk or anywhere close, but if the question is “does anybody get to survive/go to Mars/etc”, then what you care about is whether at least one person makes the relevant innovation work.  Playing to “keep the game going”, to make sure we actually have descendants in the far future, inherently means prioritizing best-case wins over average-case wins.
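One crude way to see why “existence proofs” favor variance over averages is to compare a policy with a better average outcome but a hard ceiling against a policy with a worse average but an uncapped right tail, and ask how often anyone at all clears a very high bar. The numbers in this sketch are invented; only the qualitative contrast matters.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_worlds, bar = 5_000, 500, 4.0  # "bar" = a breakthrough-level outcome

# Restricted policy: higher average, but outcomes are capped (a ceiling).
restricted = np.minimum(rng.normal(1.0, 0.5, (n_worlds, n_people)), 2.0)

# Free policy: lower average, but an uncapped right tail.
free = rng.normal(0.5, 1.5, (n_worlds, n_people))

print("mean outcome:             ",
      round(restricted.mean(), 2), "vs", round(free.mean(), 2))
print("P(someone clears the bar):",
      (restricted.max(axis=1) >= bar).mean(), "vs",
      (free.max(axis=1) >= bar).mean())
```

With these made-up parameters the capped policy wins on the average and loses completely on the “does anybody make it” question, which is the sense in which playing for existence proofs prioritizes best-case wins.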

Upshots

I’m not arguing that it’s never a good idea to “make people do things.”  But I am arguing that there are reasons to be hesitant about it.

It’s hard to make people do what you want; you don’t actually have that much influence in the long term; people in their healthy state generally are correctly aware that they exist as distinct persons; surrendering judgment or censoring information is pretty fragile and unsustainable; and restricting people’s options cuts off the possibility of letting people seek or create especially good new things.

There are practical reasons why “leave people alone” norms became popular, despite the fact that humans are social animals and few of us are truly loners by temperament.

I think individualist cultures are too rarely explicitly defended, except with ideological buzzwords that don’t appeal to most people. I think that a lot of pejoratives get thrown around against individualism, and I’ve spent a lot of time getting spooked by the negative language and not actually investigating whether there are counterarguments.  And I think counterarguments do actually exist, and discussion should include them.

 

 

Regulatory Arbitrage for Medical Research: What I Know So Far

Epistemic status: pretty ignorant. I’m sharing now because I believe in transparency.

I’ve been interested in the potential of regulatory arbitrage (that is, relocating to less regulated polities) for medical research for a while. Getting drugs or devices FDA-approved is expensive and extremely slow.  What if you could speed it up by going abroad to do your research?

I talked to some people who work in the field, and so far this is my distillation of what I got out of those conversations.  It’s a very rough draft and I expect to learn more.

Q: Why don’t pharma companies already run trials in developing countries?

A: They do! A third of clinical trials run by US-based pharma companies are outside the US, and that number is rapidly growing — a more than 2000% increase over the past two decades. Labor costs in India, China, and Russia are much lower, and it’s easier to recruit participants in countries where a clinical trial may be the only chance people have to get access to the latest treatments.

But in order to sell to American markets, those overseas trials still have to be conducted to FDA standards (with correspondingly onerous reporting requirements.) Many countries, like China, are starting to harmonize their regulatory standards with the FDA.  It’s not the Wild West.

Q: Ok, but why not sell drugs to foreign countries and bypass the US entirely?

A: The US is by far the biggest pharmaceutical market. As of 2014, US sales made up about 38% of global pharmaceutical sales; the European market was about 31%, and is roughly as tightly regulated. The money in pharma comes from selling to the developed world, which has strict standards for demonstrating safety and efficacy.

Q: Makes sense. But why not run cheap, preliminary, unofficial trials just to confirm for yourself whether drugs work, before investing in bigger and more systematic FDA-compliant trials for the successful ones?

A: I don’t know for sure, but it seems like pharma companies are generally not very interested in choosing their drug portfolio based on the likely efficacy of early-stage drug candidates.  When I’ve tried to do research into how they decide which drug candidates to pursue through clinical trials, what I found was that there’s a lot of portfolio management: mathematical models, sometimes quite complex, based on discounted cash flow analysis.  A drug candidate is treated as a random variable which has some distribution over future returns, based on the market size and the average success rate of trials.
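To make that concrete, here’s a minimal sketch of the kind of risk-adjusted expected-NPV (“eNPV”) calculation such portfolio models are built on: phase-by-phase success probabilities and discounted cash flows. Every number is an invented placeholder, not an industry figure.

```python
# Toy risk-adjusted NPV ("eNPV") model of a single drug candidate.
# All numbers are invented placeholders, not industry figures.

discount_rate = 0.10

# (phase, P(advance past this phase), cost in $M, year the cost is paid)
phases = [
    ("preclinical", 0.60,  10, 1),
    ("phase 1",     0.55,  25, 3),
    ("phase 2",     0.35,  60, 5),
    ("phase 3",     0.60, 250, 8),
    ("approval",    0.85,  50, 10),
]

peak_sales = 800      # $M/year if the drug launches
sales_years = 10
launch_year = 11

def discounted(amount, year):
    return amount / (1 + discount_rate) ** year

p_reach = 1.0         # probability the program is still alive to pay each cost
expected_cost = 0.0
for _, p_advance, cost, year in phases:
    expected_cost += p_reach * discounted(cost, year)
    p_reach *= p_advance          # survive this phase with probability p_advance

p_launch = p_reach
expected_revenue = p_launch * sum(
    discounted(peak_sales, launch_year + t) for t in range(sales_years)
)

print(f"P(launch): {p_launch:.3f}")
print(f"expected NPV: ${expected_revenue - expected_cost:,.0f}M")
```

The phase-transition probabilities in a model like this come from historical averages, not from anything specific to the candidate.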

What doesn’t seem to be involved in the decision-making process is analysis of which drug candidates are more likely to succeed in trials than others. Most drug candidates don’t work: 92% of preclinical drug candidates fail to be efficacious when tested in humans, and that attrition rate is only growing.  As clinical trials grow more expensive, failed trials are a serious and increasing drag on the pharma industry, but I’m not sure there’s interest in trying to cut those costs by choosing drug candidates more selectively.

On the few occasions when I’ve tried to pitch to large pharma companies the idea of trying to “pick winners” among early-stage drugs based on data analysis (of preclinical results, the past performance of the drug class, whatever), the idea was rejected.

Investors in biotech startups, of course, do try to pick winners among preclinical drug candidates; but an investor told me that, based on his experience, it wouldn’t be much easier to raise money if you had a successful but non-FDA-compliant preliminary human trial than if you had no human trials at all.

My impression is that (perhaps as a rational reaction to high rates of noise or fraud) decisionmakers in the industry aren’t very interested in making bets based on weak or preliminary evidence, and tend to round it down to no evidence at all.

Q: So are there any options left for trying to do medical research outside of an onerous regulatory environment?

A: Well, one option is legal exemptions. For example, the FDA’s Rare Disease Program can offer faster options for reviewing applications for a drug candidate that treats a life-threatening disease where no adequate treatment exists.

Another option is selling supplements, which do not need FDA approval. You need to make sure they’re safe, you can’t sell controlled substances, and you can’t claim that supplements treat any disease, but other than that, for better or worse, you can sell what you want.  One company, Elysium Health, is actually trying to develop serious anti-aging therapies and market them as supplements; Leonard Guarente, one of the pioneers of geroscience and the head of MIT’s aging lab, is the co-founder.

The problem with supplements, of course, is that you can’t sell them as treatments. Aging isn’t legally a disease, and the FDA is not approving anti-aging therapies, so Elysium’s model makes sense. But if you had a cure for cancer, you’d have a hard time selling it as a supplement without running afoul of the law.

There’s also medical tourism, which is a $10bn industry as of 2012, and expected to reach $32bn by 2019.  Most medical tourism is for conventional medical procedures, especially cosmetic surgery and dentistry, as customers seek cheaper options abroad.  Sometimes there are also experimental procedures, like stem cell therapies, though a lot of those are fraudulent and dangerous.  It might be possible to open a high-quality translational-research clinic in a developing country, and eventually collect enough successful results to advertise it globally as a medical tourism destination.  The key challenge, from what people in the field tell me, is to get the official blessing of the local government.

Q: Could you do it on a ship?

A: Maybe, but it would be hard.

Yes, technically international waters are not under any country’s jurisdiction.  But if a government really doesn’t want you doing your thing, they can still stop you. Pirate radio (unlicensed radio broadcasting from ships in international waters) was technically legal in the 1960’s, when it was very popular in the UK, but by 1967 the legal loophole had been shut down.

Also, ships are in the water. If you compare a cruise ship to a building of equivalent square-footage, the ship needs to be staffed with people with nautical expertise, and it needs more regular maintenance.  In most situations, I’d expect it to be much more expensive to run a ship clinic than a land clinic.

There’s also the sobering example of BlueSeed, which was to be a cruise ship where international entrepreneurs could live and work in international waters, without the need for a US visa. It was put “on hold” in 2013 due to lack of investor funding.  And, obviously, a “floating condo/office” is a much easier goal than a “floating clinic.”

Q: Would cryptocurrencies help?

A: Noooooo. No no no no no.

You’re probably thinking about black markets, which are risky in themselves; and anyway, cryptocurrencies do not help with black markets because they are not anonymous.

Bitcoin helpfully points out that Bitcoin is not anonymous.  It is incredibly not anonymous.  It is literally a public record of all your transactions.  Ross Ulbricht of Silk Road, tragically, didn’t understand this.

Q: So, can regulatory arbitrage work?

A: It’s definitely not trivial, but I haven’t ruled it out yet. The medical tourism model currently seems like the most promising method.

I think that transparency would be essential to any big win — yes, there’s lots of shady gray-market stuff out there, but even aside from ethical concerns, if you have to fly under the radar, it’s hard to grow big.  If you’re doing clinical research, it’s impossible to get anything done unless you’re transparent with the scientific community.  If you’re trying to push medical tourism towards the mainstream, you have to inspire trust in patients.  Controversy is inevitable, but if a model like this can work at all, the results would have to be good enough to speak for themselves.