Hoe Cultures: A Type of Non-Patriarchal Society

Epistemic status: mostly facts, a few speculations.

TW: lots of mentions of violence, abuse, and rape.

There is a tremendous difference, in pre-modern societies, between those that farmed with the plow and those that farmed with the hoe.

If you’re reading this, you live in a plow culture, or are heavily influenced by one. Europe, the Middle East, and most of Asia developed plow cultures. These are characterized by reliance on grains such as wheat and rice, which provide a lot of calories per acre in exchange for a lot of hard physical labor.  They also involve large working livestock, such as horses, donkeys, and oxen.

Hoe cultures, by contrast, arose in certain parts of sub-Saharan Africa, the Americas, southeast Asia, and Oceania.

Hoe agriculture is sometimes called horticulture, because it is more like planting a vegetable garden than farming.  You clear land with a machete and dig it with a hoe.  This works for crops such as bananas, breadfruit, coconuts, taro, yam, calabashes and squashes, beans, and maize.  Horticulturalists also keep domestic animals like chickens, dogs, goats, sheep, and pigs — but never cattle. They may hunt or fish.  They engage in small-scale home production of pottery and cloth.[1]

Hoe agriculture is extremely productive per hour of labor, much more so than preindustrial grain farming, but requires a vast amount of land for a small population. Horticulturalists also tend to practice shifting cultivation, clearing new land when the old land is used up, rather than repeatedly plowing the same field — something that is only possible when fertile land is “too cheap to meter.”  Hoe cultures therefore have lots of leisure, but low population density, low technology, and few material objects.[1]

I live with a toddler, so I’ve seen a lot of the Disney movie Moana, whose makers consulted extensively with Polynesians to get the culture right. One chipper little song is a pretty nice illustration of hoe culture: you see people digging with hoes, carrying bananas and fish, singing about coconuts and taro root, making pottery and cloth, and you see a pig and a chicken tripping through the action.

Hoe Culture and Gender Roles

Ester Boserup, in her 1970 book Woman’s Role in Economic Development [2], notes that in hoe cultures women do the hoeing, while in plow cultures men do the plowing.

This is because plowing is so physically difficult that men, with greater physical strength, have a comparative advantage at agricultural labor, while they have no such advantage in horticulture.

Men in hoe cultures clear the land (which is physically challenging; machete-ing trees is quite the upper-body workout), hunt, and engage in war. But overall, hour by hour, they spend most of their time in leisure.  (Or in activities that are not directly economically productive, like politics, ritual, or the arts.)

Women in hoe cultures, as in all known human cultures, do most of the childcare.  But hoeing is light enough work that they can take small children into the fields with them and watch them while they plant and weed. Plowing, hunting, and managing large livestock, by contrast, are forms of work too heavy or dangerous to accommodate simultaneous childcare.

The main gender difference between hoe and plow cultures is, then, that women in hoe cultures are economically productive while women in plow cultures are largely not.

This has strong implications for marriage customs.  In a plow culture, a husband supports his wife; in a hoe culture, a wife supports her husband.

Correspondingly, plow cultures tend to have a tradition of dowry (the bride’s parents compensate the groom financially for taking an extra mouth to feed off their hands) while hoe cultures tend to practice bride price (the groom compensates the bride’s family financially for the loss of a working woman) or bride service (the groom labors for the bride’s family, again as compensation for taking her labor.)

Hoe cultures are much more likely to be polygamous than plow cultures.  Since land is basically free, a man in a hoe culture is rich in proportion to how much labor he can accumulate — and labor means women. The more wives, the more labor.  In a plow culture, however, extra labor must come from men, which usually means hired labor, or slaves or serfs.  Additional wives would only mean more mouths to feed.

Because hoe cultures need women for labor, they allow women more autonomy.  Customs like veiling or seclusion (purdah) are infeasible when women work in the fields.  Hoe-culture women can usually divorce their husbands if they pay back the bride price.

Barren women, widows, and unchaste women or rape victims in pre-modern plow cultures often face severe stigma (and practices like sati and honor killings) which do not occur in hoe cultures. Women everywhere are valued for their reproductive abilities, and men everywhere have an evolutionary incentive to prefer faithful mates; but in a hoe culture, women have economic value aside from reproduction, and thus society can’t afford to kill them as soon as their reproductive value is diminished.

“Matriarchy” is considered a myth by modern anthropologists; there is no known society, present or past, where women ruled. However, there are matrilineal societies, where descent is traced through the mother, and matrilocal societies, where the groom goes to live near the bride and her family.  All matrilineal and matrilocal societies in Africa are hoe cultures (though some hoe cultures are patrilineal and/or patrilocal).[3]

The Seneca, a Native American people living around Lake Ontario, are a good example of a hoe culture where women enjoyed a great deal of power. [4] Traditionally, they cultivated the Three Sisters: maize, beans, and squash.  The women practiced horticulture, led councils, had rights over all land, and distributed food and household stores within the clan.  Descent was matrilineal, and marriages (which were monogamous) were usually arranged by the mothers. Of the Seneca wife, Henry Dearborn noted wistfully in his journal, “She lives with him from love, for she can obtain her own means of support better than he.”  Living arrangements, childrearing, and work organization were communal within a clan (which shared a longhouse) and were generally organized by elder women.

Hoe and Plow Cultures Today

A 2013 study [5] found that people descended from plow cultures are more likely than people descended from hoe cultures to agree with the statements “When jobs are scarce, men should have more right to a job than women” and “On the whole, men make better political leaders than women do.”

The authors find that “traditional plough-use is positively correlated with attitudes reflecting gender inequality and negatively correlated with female labor force participation, female firm ownership, and female participation in politics.”  This remains true after controlling for a variety of societal variables, such as religion, race, climate, per-capita GDP, history of communism, and civil war, among others.

Even among immigrants to Europe and the US, history of ancestral plow-use is still strongly linked to female labor force participation and attitudes about gender roles.

Patriarchy Through a Materialist Lens

Friedrich Engels, in The Origin of the Family, Private Property and the State, was the first to argue that patriarchy was a consequence of the rise of (plow) agriculture.  Alesina et al. [5] summarize him as follows:

He argued that gender inequality arose due to the intensification of agriculture, which resulted in the emergence of private property, which was monopolized by men. The control of private property allowed men to subjugate women and to introduce exclusive paternity over their children, replacing matriliny with patrilineal descent, making wives even more dependent on husbands and their property. As a consequence, women were no longer active and equal participants in community life.

Hoe societies (and hunter-gatherer societies) have virtually no capital. Land can be used, but not really owned, as its produce is unreliable or non-renewable, and its boundaries are too large to guard. Technology is too primitive for any tool to be much of a capital asset.  This is why they are poor in material culture, and also why they are egalitarian; nobody can accumulate more than his neighbors if there just isn’t any way to accumulate stuff at all.

I find the materialistic approach to explaining culture appealing, even though I’m not a Marxist.  Economic incentives — which can be inferred by observing the concrete facts of how a people makes its living — provide elegant explanations for the customs, traditions, and ideals that emerge in a culture.  We do not have to presume that those who live in other cultures are stupid or fundamentally alien; we can assume they respond to incentives just as we do.  And, when we see the world through a materialist lens, we do not hope to change culture by mere exhortation. Oppression occurs when people see an advantage in oppressing; it is subdued when the advantage disappears, or when the costs become too high.  Individual people can follow their consciences even when they differ from the surrounding pressures of their culture, but when we talk about aggregates and whole populations, we don’t expect personal heroism to shift systems by itself.

A materialist analysis of gender relations would say that women are not going to escape oppression until they are economically independent.  And, even in the developed world, women mostly are not.

Women around the world, including in America, are much more likely to live in poverty than men.  This is because women hold lower-paying jobs and often support single-mother households. Women everywhere do most of the childcare, and most women have children at some point in their lives, so an economy that does not allow a woman to support and care for children with her own labor is not an economy that will ever allow most women to be economically independent.

Just working outside the home does not make a woman economically independent. If a family is living in a “two-income trap”[6], in which the wife’s income is just enough to pay for the childcare she does not personally provide, then the wife’s net economic contribution to the family is zero.

Sure, much of the “gender pay gap” disappears after controlling for college major and career choice [7][8]. Men report more interest in making a lot of money and being leaders, while women report more interest in being helpful and working with people rather than things. But a lot of this is probably due to the fact that most women rationally assume that they will take time to raise children, and that their husband will be the primary breadwinner, so they are less likely to make early education and career choices on the basis of earning the most money.

Economist Claudia Goldin believes the main reason for the gender pay gap is the cost of temporal flexibility; women want more work flexibility in order to raise children, and so they are paid less.  Childless men and women have virtually no wage disparity.[9]

Since women who will ever have children (which is most women) are still usually economically dependent on men even in the developed world, and strongly disadvantaged if they don’t have a male provider, is it any wonder that women are still more submissive and agreeable, higher in neuroticism and mood disorders, and subject to greater pressure to appeal sexually?  Their livelihood still depends on finding a mate to support them.

In order to change the economic incentives so that women can be financially independent, it would have to be no big deal to be a single mother. This probably means an economy whose resources were shifted from luxury towards leisure. Mothers of young children need a lot of time away from economic work; if we “bought” time instead of fancy goods with our high-tech productivity gains, a single mother in a technological economy might be able to support children by herself.  But industrial-age workplaces are not set up to allow employees flexibility, and modern states generally put up heavy barriers to easy, flexible self-employment or ultra-frugal living, through licensing laws, zoning regulations, and quality regulations on goods.

Morality and Religion under Hoe Societies

It’s hard to trust what we read about hoe-culture mores, because these generally aren’t societies that develop writing, and what we read is filtered through the opinions of Western researchers or missionaries. But, as far as I can tell, they are mostly animist and polytheist cultures. There are many “spirits” or “gods”, some friendly and some unfriendly, but none supreme.  Magical practices (“if you do this ritual, you’ll get that outcome”)  seem to be common.

Monotheist and henotheist cultures (one god, or one god above all other gods, usually male) seem to be more of a plow-culture thing, though not all plow cultures follow that pattern.

The presence of goddesses doesn’t correlate much with the condition of women in a society, contrary to the (now falsified) belief that pre-agrarian societies were matriarchal and goddess-worshipping.

The Code of Handsome Lake is an interesting example of a moral and religious code written by a man from a hoe culture. Handsome Lake was a religious reformer among the Iroquois in the late 18th and early 19th centuries.  His Code is heavily influenced by Christianity (his accounts of Hell and of the apocalypse closely follow the New Testament and are not found in earlier Iroquois beliefs) but includes some distinctively Iroquois features.

Notably, he was strongly against spousal and child abuse, and in favor of family harmony, including this touching passage:

“Parents disregard the warnings of their children. When a child says, “Mother, I want you to stop wrongdoing,” the child speaks straight words and the Creator says that the child speaks right and the mother must obey. Furthermore the Creator proclaims that such words from a child are wonderful and that the mother who disregards them takes the wicked part. The mother may reply, “Daughter, stop your noise. I know better than you. I am the older and you are but a child. Think not that you can influence me by your speaking.” Now when you tell this message to your people say that it is wrong to speak to children in such words.”

Are Hoe Societies Good?

They’re not paradise. (Though, note that Adam and Eve were gardeners in Eden.)

As stated before, horticulturalists are poor. People in hoe cultures don’t necessarily have less to eat than their pre-modern agrarian peers, but they have less stuff, and they are much poorer than anyone in industrialized societies.

Polygamy also has distinct disadvantages.  It promotes venereal disease. It also excludes a population of unmarried men from society, which leads to violence and exposes the excluded men to poverty and isolation.

And you can’t replicate hoe societies across the globe even if you wanted to.  Hoe agriculture is so land-intensive that it couldn’t possibly support a population of seven billion.

Furthermore, while women in hoe societies have more autonomy and are subject to less gendered violence than women in pre-modern plow societies, it’s not clear how that compares to women in modern societies with rule of law. Hoe societies are still traditionalist and communitarian. Men’s and women’s spheres are still separate. Life in a hoe society is not going to exactly match a modern feminist’s ideal.  These aren’t WEIRD people, they’re something quite different, for better or for worse, and it’s hard to know exactly how the experience is different just by reading a few papers.

Hoe cultures are interesting not because we should model ourselves after them, but because they are an existence proof that non-patriarchal societies can exist for millennia.  Conservatives can always argue that a new invention hasn’t been proved stable or sustainable. Hoe cultures have been proved incredibly long-lasting.

References

[1] Braudel, Fernand. Civilization and Capitalism, 15th–18th Century: The Structure of Everyday Life. Vol. 1. University of California Press, 1992.

[2] Boserup, Ester. Woman’s Role in Economic Development. Earthscan, 2007.

[3] Goody, Jack, and Joan Buckley. “Inheritance and women’s labour in Africa.” Africa 43.2 (1973): 108–121.

[4] Jensen, Joan M. “Native American women and agriculture: A Seneca case study.” Sex Roles 3.5 (1977): 423–441.

[5] Alesina, Alberto, Paola Giuliano, and Nathan Nunn. “On the origins of gender roles: Women and the plough.” The Quarterly Journal of Economics 128.2 (2013): 469–530.

[6] Warren, Elizabeth, and Amelia Warren Tyagi. The Two-Income Trap: Why Middle-Class Parents Are Going Broke. Basic Books, 2007.

[7] Daymont, Thomas N., and Paul J. Andrisani. “Job preferences, college major, and the gender gap in earnings.” Journal of Human Resources (1984): 408–428.

[8] Zafar, Basit. “College major choice and the gender gap.” Journal of Human Resources 48.3 (2013): 545–595.

[9] Waldfogel, Jane. “Understanding the ‘family gap’ in pay for women with children.” The Journal of Economic Perspectives 12.1 (1998): 137–156.

Patriarchy is the Problem

Epistemic Status: speculative. We’ve got some amateur Biblical exegesis in here, and some mentions of abuse.

I’m starting to believe that patriarchy is the root of destructive authoritarianism, where patriarchy simply means the system of social organization where families are hierarchical and headed by the father. To wit:

  • Patriarchy justifies abuse of wives by husbands and abuse of children by parents
  • The family is the model of the state; pretty much everybody, from Confucius to Plato, believes that governmental hierarchy evolved from familial hierarchy; rulers from George Washington to Ataturk are called “the father of his country”
  • There is no clear separation between hierarchy and abuse. The phenomenon of dominant/submissive behavior among primates closely parallels what humans would consider domestic abuse.

Abuse in Mammalian Context

A study of male vervet monkeys [1] gives an illustration of what I mean by abuse.

Serotonin levels closely track a monkey’s status in the dominance hierarchy. When a monkey is dominant, his serotonin is high, and is sustained at that high level by observing submissive displays from other monkeys.  The more serotonin a dominant monkey has in his system, the more affection and the less aggression he displays; you can see this experimentally by injecting him with a serotonin precursor. When a high status monkey is full of serotonin, he relaxes and becomes more tolerant towards subordinates[2]; the subordinates, feeling less harassed, offer him fewer submissive displays; this rapidly drops the dominant’s serotonin levels, leaving him more anxious and irritable; he then engages in more dominance displays; the submissive monkeys then display more submission, thereby raising the dominant’s serotonin level and starting all over again.

This cycle (known as regulation-dysregulation theory, or RDT) is basically the same as the cycle of abuse in humans, whose stages are rising tension (the dominant is low in serotonin), acute violence (dominance display), reconciliation/honeymoon (the dominant’s serotonin spikes after the subordinate submits), and calm (the dominant is high in serotonin and tolerant towards subordinates.)
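
The cyclical structure is easier to see written out as a toy dynamical system. Here is a minimal sketch in Python (my own illustration, with invented parameters, not anything fitted to the vervet data): submissive displays raise the dominant’s serotonin, and high serotonin, via tolerance, lets submission slacken.

    # Toy model of the regulation-dysregulation cycle described above.
    # All parameters are invented for illustration.
    def simulate(steps=20):
        serotonin = 0.2   # dominant's serotonin level; equilibrium is 1.0 here
        submission = 1.0  # rate of submissive displays from subordinates
        for t in range(steps):
            # Submissive displays raise the dominant's serotonin; it decays otherwise.
            serotonin = 0.7 * serotonin + 0.5 * submission
            # High serotonin (tolerance) lets subordinates relax and submit less;
            # low serotonin (irritability) provokes dominance displays and more submission.
            submission = max(0.0, submission + 0.4 * (1.0 - serotonin))
            print(f"t={t:2d}  serotonin={serotonin:4.2f}  submission={submission:4.2f}")

    simulate()

Run it and the two variables chase each other in damped cycles around equilibrium: the rising-tension, acute-display, honeymoon, calm loop in miniature.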

In each case, tolerance extends only as long as submissive behavior continues.  Anger, threats, and violence are the result of any slackening of submissive displays.  I consider this to be a working definition of both dominance and abuse: the abuser is easily slighted and considers any lèse-majesté to be grounds for an outburst.

Most conditions of oppression among humans follow this pattern.  Slaves would be harshly punished for “disrespecting” masters, subordinates must show “respect” to gangsters and warlords on pain of violence, despots require rituals of submission or tribute, etc.  I believe it to be an ancient and even pre-human pattern.

The prototypical opposite of freedom, I think, is slavery, imprisonment, or captivity.  Concepts like “rights” are more modern and less universal. But even ancient peoples would agree that to be subject to the arbitrary will of another, and not free to physically escape from him, is an unhappy state. These are more or less the conditions that cause complex PTSD (CPTSD) — kidnapping, imprisonment and institutionalization, concentration camps and POW camps, slavery, and domestic abuse — situations in which one is at another’s mercy for a prolonged period of time and unable to escape.

A captive subordinate must appease the abuser in order to avoid retaliation; this has a soul-warping effect. Symptoms of CPTSD include “a chronic and pervasive sense of helplessness, paralysis of initiative, shame, guilt, self-blame, a sense of defilement or stigma” and “attributing total power to the perpetrator, becoming preoccupied with the relationship to the perpetrator, including a preoccupation with revenge, idealization or paradoxical gratitude, seeking approval from the perpetrator, a sense of a special relationship with the perpetrator or acceptance of the perpetrator’s belief system or rationalizations.”  In other words, captives are at risk for developing something like Nietzsche’s “slave morality”, characterized by shame, submission, and appeasement towards the perpetrator.

Here’s John Darnielle talking about the thing:

“My stepfather wanted me to write Marxist poetry; if it didn’t serve the revolution, it wasn’t worthwhile.” I asked him what his mother thought, and he let out a sad laugh. “You have to understand the dynamic of the abused household. What you think doesn’t matter. Your thoughts are passing. They are positions you adopt to survive.”

The physical behaviors of shame (gaze aversion, shifty eyes, nervous smiles, downcast head, and slouched, forward-leaning postures)[3] are also common mammalian appeasement displays; subordinate monkeys and apes also have a “fear smile” and don’t meet the gaze of dominants.[4] It seems quite clear that the psychological problem of chronic shame as a result of abuse is a result of having to engage in prolonged appeasement behavior on pain of punishment.

A subordinate primate is not a healthy primate. Robert Sapolsky [5] has an overview article about how low-ranked primates are more stressed and more susceptible to disease in hierarchical species.

“When the hierarchy is stable in species where dominant individuals actively subjugate subordinates, it is the latter who are most socially stressed; this can particularly be the case in the most extreme example of a stable hierarchy, namely, one in which rank is hereditary. This reflects the high rates of physical and psychological harassment of subordinates, their relative lack of social control and predictability, their need to work harder to obtain food, and their lack of social outlets such as grooming or displacing aggression onto someone more subordinate.”

…The inability to physically avoid dominant individuals is associated with stress, and the ease of avoidance varies by ecosystem. The spatial constraints of a two-dimensional terrestrial habitat differ from those of a three-dimensional arboreal or aquatic setting, and living in an open grassland differs from living in a plain dense with bushes. As an extreme example, subordinate animals in captivity have many fewer means to evade dominant individuals than they would in a natural setting.

This coincides with the CPTSD model — social stress correlates with inability to escape.

The physiological results of social stress are cardiovascular and immune:

Prolonged stress adversely affects cardiovascular function, producing (i) hypertension and elevated heart rate; (ii) platelet aggregation and increased circulating levels of lipids and cholesterol, collectively promoting atherosclerotic plaque formation in injured blood vessels; (iii) decreased levels of protective high-density lipoprotein (HDL) cholesterol and/or elevated levels of endangering low-density lipoprotein (LDL) cholesterol; and (iv) vasoconstriction of damaged coronary arteries…In general, mild to moderate transient stressors enhance immunity, particularly the first phase of the immune response, namely innate immunity. Later phases of the stress response are immunosuppressive, returning immune function to baseline. Should the later phase be prolonged by chronic stress, immunosuppression can be severe enough to compromise immune activation by infectious challenges (47, 48). In contrast, a failure of the later phase can increase the risk of the immune overactivity that constitutes autoimmunity.

Autoimmune disorders and weakened disease resistance are characteristic of people with PTSD as well.

Being a captive abuse victim is bad for one’s physical and mental health.  While abuse is “natural” (it appears frequently in nature), it is bad for flourishing in a quite direct and unmistakable way.  Individuals are not, in general, better off under conditions of captivity and abuse.

This abuse/dominance/submission/CPTSD thing is basically about dysfunctions in the second circuit of Timothy Leary’s eight-circuit model.  It’s the part of the mind that forms intuitions about social power relations.  Every social interaction between humans has some dominance/submission content; this is normal and probably inevitable, given our mammalian heritage. But Leary’s model is somewhat developmental — to be stuck in the mindset of dominance/submission means that you cannot reach the “higher” functions, such as intellectual thought or more mature moral reasoning.  Prolonged abuse can make people so stuck in submission that they cannot think.

Morality-As-Submission vs. Morality-As-Pattern

Most primates have something like abuse, so I’d expect all human societies to have it. Patriarchal societies have a normative form of abuse: if the hierarchical family is established as standard, then husbands have certain rights of control and violence over wives, and parents have certain rights of control and violence over children.  In societies with land ownership and monarchs, there are also rights of control and violence of landowners over serfs and slaves, and of rulers over subjects.  Historically, higher-population agrarian societies (think Sumer or Neolithic China) had larger and firmer hierarchies than earlier hunter-gatherer and horticultural societies, and probably worse treatment of women.  As Sapolsky notes, stable and particularly inherited hierarchies put greater stress on subordinates. (More about that in a later post.)

To give a stereotypical picture, think of patriarchal agrarian society as Blue in the Spiral Dynamics paradigm.  (This is horoscopey and ahistorical but it gives good archetypes.)  Blue culture means grain cultivation, pyramids and ziggurats, god-kings, temple sacrifices, and the first codes of law.

Not all humans are descended from agrarian-patriarchal cultures, but almost all Europeans and Asians are.

When you have stability, high population, and accumulation of resources, as intensive agriculture allows, you begin to have laws and authorities in a much stronger sense than tribal elders.  Your kings can be richer; your monuments can last longer.  I believe that notions of the absolute and the eternal in morality or religion might develop alongside the ability to have physically permanent objects and lasting power.

And, so, I suspect that this is the origin of the belief that to do right means to obey the father/king, and the worship of supreme gods modeled after a father or king.

To say morality is obedience is not merely to say that it is moral to obey.  Rather, we’re talking about divine command theory.  Goodness is identified with the will of the dominant individual. Inside this headspace, you ask “but what would morality even be if it weren’t a rock to crush me or a chain to bind me?”  It’s fear and submission melded with a sense of the rightness and absolute legitimacy of the dominator.

The “Song of the Sea” is considered by modern Biblical scholars to be the chronologically oldest part of the Bible, dating from somewhere between the 13th and 5th centuries BC, and echoing praise songs to Mesopotamian gods and kings. God is here no abstract principle or sole creator; he is a “man of war” who defeats other peoples and their gods in battle.  He is to be worshiped not because he is good but because he is fearsome.

But philosophers, even in patriarchal societies, have often had some notion of a “good” which is less like a terrifying warlord and more like a natural law, a pattern in the universe, something to discern rather than someone to submit to.

The ancient Egyptians had ma’at and the Chinese had Heaven, as concepts of abstract justice which wicked earthly rulers could fall short of.  The ancient Greeks had logos, a faculty of reason or speech that allowed one to discern what was good.

Plato neatly disposes of divine command theory in the Euthyphro: if “good” is simply what the gods want, then what should one do if the gods disagree? Since in Greek mythology the gods plainly do disagree, the Good must be something that lies beyond the mere opinion of a powerful individual, human or divine.

As Ben Hoffman put it:

When morality is seen as rules society imposes on us to keep us in line, the superego or parent part is the internalized voice of moral admonition. Likewise, I suspect that in contemporary societies this often includes the internalized voice of the schoolteacher telling you how to do the assignment. This internalized voice of authority feels like an external force compelling you. People often feel tempted to rebel against their own superego or internalized parent.

By contrast, logos and sattva are not seen as internalized narratives – they are described as perceptive faculties. You see what’s right, by seeing the deep structure of reality. The same thing that lets you see the deep patterns in mathematics, lets you see the deep decision-theoretic symmetries underlying truly moral behavior.

This is why it matters so much that theologians such as Maimonides and Augustine were so insistent on the point that God has no body and anthropomorphic references in the Bible are metaphors, and why this point had to be repeated so often and seemed so difficult for their contemporaries to grasp. (Seriously, read The Guide for the Perplexed. It explains separately how each individual Biblical reference to a body part of God is a metaphor — it’s a truly incredible amount of repetition.)

If God has no body, this means that modern (roughly post-Roman-Empire) Jews and Christians worship something more like a principle of goodness than a warlord, even if God is frequently likened to a father or king.  It’s not “might makes right”, but “right makes right.”

The abuse-victim logic of morality-as-submission can have no concept that might might not make right.

But more “mature” ethical philosophies, even if they emerge from authoritarian societies — Christian, Jewish, Confucian, Classical Greek, to name a few that I’m familiar with — can be used as grounds to oppose tyranny and abuse, because they contain the concept of a pattern of justice that transcends the will of any particular man.

Once you can generalize, once you can see pattern, once you notice that humans disagree and kings can be toppled, you have the potential to escape the second-circuit, primate-level, dominant/submissive paradigm.  You can ask “what is right?” and not just “who’s on top?”

An Example of Morality-As-Submission: The Golden Calf

It is generally bad scholarship to read the literal text of the Bible as evidence for what contemporary Jews or Christians believe; that ignores thousands of years of interpretation.  But if you just look at the Bible without context, raw, you can get some kind of an unfiltered impression of the mindset of whoever wrote it — which is quite different from how moderns (religious or not) think, but which still influences us deeply.

So let’s look at Exodus 32–34.

The People of Israel, impatient with Moses taking so long on Mount Sinai, build a golden calf and worship it. Now God gets mad.

7 And the LORD spoke unto Moses: ‘Go, get thee down; for thy people, that thou broughtest up out of the land of Egypt, have dealt corruptly; 8 they have turned aside quickly out of the way which I commanded them; they have made them a molten calf, and have worshipped it, and have sacrificed unto it, and said: This is thy god, O Israel, which brought thee up out of the land of Egypt.’ 9 And the LORD said unto Moses: ‘I have seen this people, and, behold, it is a stiffnecked people.

“Stiff-necked”, meaning stubborn. Meaning “you just do as you damn well please.”  Meaning “you have a will, you choose to do things besides obey me, and that is just galling.”  This is abuser/authoritarian logic: the abuser feels entitled to obedience and especially submission. To be stiff-necked is not to bow the neck.

10 Now therefore let Me alone, that My wrath may wax hot against them, and that I may consume them; and I will make of thee a great nation.’ 11 And Moses besought the LORD his God, and said: ‘LORD, why doth Thy wrath wax hot against Thy people, that Thou hast brought forth out of the land of Egypt with great power and with a mighty hand? 12 Wherefore should the Egyptians speak, saying: For evil did He bring them forth, to slay them in the mountains, and to consume them from the face of the earth? Turn from Thy fierce wrath, and repent of this evil against Thy people. 13 Remember Abraham, Isaac, and Israel, Thy servants, to whom Thou didst swear by Thine own self, and saidst unto them: I will multiply your seed as the stars of heaven, and all this land that I have spoken of will I give unto your seed, and they shall inherit it for ever.’ 14 And the LORD repented of the evil which He said He would do unto His people.

Moses pleads with God to remember his promises and not kill everyone. He even calls the plan of genocide “evil”!  And God, who is here not an implacable force of justice but out-of-control angry, calms down in response to the pleading and moderates his behavior.

But then Moses comes down the mountain, and he gets angry, and he slaughters, not everyone, but 3000 men.

27 And he [Moses] said unto them: ‘Thus saith the LORD, the God of Israel: Put ye every man his sword upon his thigh, and go to and fro from gate to gate throughout the camp, and slay every man his brother, and every man his companion, and every man his neighbour.’ 28 And the sons of Levi did according to the word of Moses; and there fell of the people that day about three thousand men.

Notice how, if you’re at all familiar with abusive family dynamics, God is the primary abusive parent, and Moses is the less-abusive, appeasing parent, who tries to protect the children somewhat but still terrorizes them.

Now, God is going to make sure the Israelites know how grateful they should be for his mercy, and that they should beware lest he do anything worse:

1. And the LORD spoke unto Moses: ‘Depart, go up hence, thou and the people that thou hast brought up out of the land of Egypt, unto the land of which I swore unto Abraham, to Isaac, and to Jacob, saying: Unto thy seed will I give it– 2. and I will send an angel before thee; and I will drive out the Canaanite, the Amorite, and the Hittite, and the Perizzite, the Hivite, and the Jebusite– 3. unto a land flowing with milk and honey; for I will not go up in the midst of thee; for thou art a stiffnecked people; lest I consume thee in the way.’ 4. And when the people heard these evil tidings, they mourned; and no man did put on him his ornaments.  5 And the LORD said unto Moses: ‘Say unto the children of Israel: Ye are a stiffnecked people; if I go up into the midst of thee for one moment, I shall consume thee; therefore now put off thy ornaments from thee, that I may know what to do unto thee.’

Note the mourning and the refusal to put on ornaments. You have to show contrition, you can’t relax and make merry, as long as the parent is angry. It’s a submission behavior. The whole house has to be thrown into gloom until the parent says your punishment is over.

Now Moses goes into the Tent of Meeting to pray, very humbly, for God’s forgiveness of the people.  And here, in this context, is where you find the famous Thirteen Attributes of God’s Mercy.

6. And the LORD passed by before him, and proclaimed: ‘The LORD, the LORD, God, merciful and gracious, long-suffering, and abundant in goodness and truth;  7 keeping mercy unto the thousandth generation, forgiving iniquity and transgression and sin; and that will by no means clear the guilty; visiting the iniquity of the fathers upon the children, and upon the children’s children, unto the third and unto the fourth generation.’ 8. And Moses made haste, and bowed his head toward the earth, and worshipped. 9. And he said: ‘If now I have found grace in Thy sight, O Lord, let the Lord, I pray Thee, go in the midst of us; for it is a stiffnecked people; and pardon our iniquity and our sin, and take us for Thine inheritance.’

God is “long-suffering” because he doesn’t kill literally everyone, when he is begged not to.  This “mercy” is more like the “tolerance” that dominant primates display when they get “enough” appeasement behaviors from subordinates.  Of course, people have long taken this passage as an inspiration for real mercy and grace; but in context and without theological interpretation that is not what it looks like.

Now, there’s a long interval of the new tablets of the law being brought down, and instructions being given for the tabernacle and how to give sin-offerings. Eight days later, in Leviticus 10, God has explained how to give a sin-offering, and Aaron and his sons are actually going to do it, to make atonement for their sins…

…and they do it WRONG.

1. And Nadab and Abihu, the sons of Aaron, took each of them his censer, and put fire therein, and laid incense thereon, and offered strange fire before the LORD, which He had not commanded them. 2. And there came forth fire from before the LORD, and devoured them, and they died before the LORD. 3. Then Moses said unto Aaron: ‘This is it that the LORD spoke, saying: Through them that are nigh unto Me I will be sanctified, and before all the people I will be glorified.’ And Aaron held his peace. 4. And Moses called Mishael and Elzaphan, the sons of Uzziel the uncle of Aaron, and said unto them: ‘Draw near, carry your brethren from before the sanctuary out of the camp.’  5 So they drew near, and carried them in their tunics out of the camp, as Moses had said. 6 And Moses said unto Aaron, and unto Eleazar and unto Ithamar, his sons: ‘Let not the hair of your heads go loose, neither rend your clothes, that ye die not, and that He be not wroth with all the congregation; but let your brethren, the whole house of Israel, bewail the burning which the LORD hath kindled. And ye shall not go out from the door of the tent of meeting, lest ye die; for the anointing oil of the LORD is upon you.’ And they did according to the word of Moses.

Not only does the appeasement ritual of the sin-offering have to be done, it has to be done exactly right, and if you make an error, the world will explode. And note the form of the error — the priests take initiative, they light a fire that God didn’t specifically tell them to light.  “Did I tell you to light that?”  And now, since God is angry, nobody else is allowed to act upset about the punishment, lest they get in trouble too.

These are not abstract theological ideas that the authors got out of nowhere. These are things that happen in families.

Growing in Poisoned Soil

I don’t mean to make this an anti-religious rant, or imply that religious people systematically support domestic abuse and tyranny. It was, after all, the story of Exodus that inspired American slaves in their fight for freedom.

The point is that this pattern — abuser-logic and abuse-victim logic — is a recurrent feature in the moral intuitions of everyone in a culture with patriarchal roots.

Here we have punishment, not as a deterrent or as a natural consequence of wrong action, but as rage, the fury of an authority who didn’t get the proper “respect.”

Here we have appeasement of that rage interpreted as the virtue of “humility” or “atonement.”

Here we have an intuitive sense that even generic moral words like “should” or “ought” are blows; they are what a dominant individual forces upon a subordinate.

Look at Psalm 51.  This is a prayer of repentance; this is what David sings after he realizes that he was wrong to commit adultery and murder. Sensible things to repent of, no doubt. But the internal logic, though beautiful and emotionally resonant, is crazypants.

“Behold, I was brought forth in iniquity, and in sin did my mother conceive me.” (Wait, you didn’t do anything wrong when you were a fetus, we’re talking about what you did wrong just now.)

“Purge me with hyssop, and I shall be clean; wash me, and I shall be whiter than snow.”  (Yes, guilt does create a desire for cleansing; but you’re expecting God to do the washing?  Only an external force can make you clean?)

“Hide Thy face from my sins, and blot out all mine iniquities.”  (Um, I’m pretty sure your victim is still dead.)

“The sacrifices of God are a broken spirit; a broken and a contrite heart, O God, Thou wilt not despise.” (AAAAAAAAAAA.)

Even legitimate guilt for serious wrongdoing gets conflated with submission and “broken-spiritedness” and pleading for mercy and an intuition of innate taintedness.  This is how morality works when you process it through the second circuit, through the native mammalian intuitions around dominance/submission.

It’s natural, it’s human, it’s easy to empathize with — and it’s quite insane.

It’s also, I think, related to problems specific to women.

If women are traditionally subordinate in your society — and forty years of women’s lib is nowhere near enough to overcome thousands of years of tradition — then women will disproportionately suffer domestic abuse, and even those who don’t will still inherit the kinds of intuitions that captives always do.

A “good girl” is obedient, “innocent” (i.e. lacking in experience, especially sexual experience), and never takes initiative, because initiative can get you in trouble. A “good girl” internalizes that to be “good” is simply to submit and to appease and please.

How can you possibly eliminate those dysfunctions until you attack their roots?

Women have higher rates of depression and anxiety than men. Girl toddlers also have higher rates of shame in response to failure than boy toddlers [6].  Women also have a significantly lower salivary cortisol response to social stress than men.[7] Blunted cortisol response to stress is what you see in PTSD, chronic fatigue syndrome, and atypical depression, which are all more common in women than men; it occurs more in low-status individuals than in high-status ones.[8][9] The psychological and physiological problems most specific to women are also the illnesses associated with low social status and chronic shame.

If we have a society that runs on shame and appeasement, especially for women, then women will be hurt.  Everything we do and think today, including modern liberalism, is built on a base that includes granting legitimacy to abusive power.  I don’t mean this in the sense of “everything is tainted, you must see the world through mud-colored glasses”, but in the sense that this is where our inheritance comes from, these influences are still visible, this is the soil we grew from.

It’s not trivial to break away and create alternatives. People do.  Every concept of goodness-as-pattern or of universal justice is an alternative to abuse-logic, which is always personal and emotional.  But it’s hard to break away completely.

References 

[1] McGuire, Michael T., M. J. Raleigh, and C. Johnson. “Social dominance in adult male vervet monkeys: Behavior-biochemical relationships.” Social Science Information 22.2 (1983): 311–328.

[2] Gilbert, Paul, and Michael T. McGuire. “Shame, status, and social roles: Psychobiology and evolution.” (1998).

[3] Keltner, Dacher. “Signs of appeasement: Evidence for the distinct displays of embarrassment, amusement, and shame.” Journal of Personality and Social Psychology 68.3 (1995): 441.

[4] Leary, Mark R., and Robin M. Kowalski. Social Anxiety. Guilford Press, 1997.

[5] Sapolsky, Robert M. “The influence of social hierarchy on primate health.” Science 308.5722 (2005): 648–652.

[6] Lewis, Michael, Steven M. Alessandri, and Margaret W. Sullivan. “Differences in shame and pride as a function of children’s gender and task difficulty.” Child Development 63.3 (1992): 630–638.

[7] Kirschbaum, Clemens, Stefan Wüst, and Dirk Hellhammer. “Consistent sex differences in cortisol responses to psychological stress.” Psychosomatic Medicine 54.6 (1992): 648–657.

[8] Gruenewald, Tara L., Margaret E. Kemeny, and Najib Aziz. “Subjective social status moderates cortisol responses to social threat.” Brain, Behavior, and Immunity 20.4 (2006): 410–419.

[9] Miller and Tangney. “Threat, Social-Evaluative, and Self-Conscious Emotion.” 1994.

Gleanings from Double Crux on “The Craft is Not The Community”

Epistemic status: This is a bunch of semi-remembered rephrasings of a conversation.

At the CFAR alumni reunion, John Salvatier and I had a public double crux on my last post.

A double crux is a technique CFAR invented, which I think is much better than a debate. The goal is simply to pin down where exactly two people disagree. This can take a while. Even the best, most respectful debates are adversarial: it’s my opinion vs. yours, and we see which is stronger in an (ideally fair) contest. A double crux is collaborative: we’re just trying to find the exact point of contention, so that if we go on to have an actual debate we won’t be talking past each other.

John’s motivation for disagreeing with my post was that he didn’t think I should be devaluing the intellectual side of the “rationality community”. My post divided projects into community-building (mostly things like socializing and mutual aid) versus outward-facing (business, research, activism, etc.); John thought I was neglecting the importance of a community of people who support and take an interest in intellectual inquiry.

I agreed with him on that point — intellectual activity is important to me — but doubted that we had any intellectual community worth preserving.  I was skeptical that rationalist-led intellectual projects were making much progress, so I thought the reasonable thing to do was to start fresh.

John is actually working on an intellectual project of his own — he’s trying to explore what the building blocks of creative thinking are, and how creative thinking can be improved — and he thinks his work is productive and useful, so that seemed a good place to dig in deeper.

I mentioned that by a lot of metrics, his work doesn’t have a lot of output. He has done a lot of one-on-one conversations and informal experiments with people in the community, but there’s no writeup, and certainly no formal psychological research, papers, or collaboration with psychologists. How could an outsider possibly tell if there’s a real thing here?

John said that I might be over-valuing formality. He’s pretty confident that the “informal” phase of work — the part when you’re just playing with an idea, or planning out your strategy, before you sit down to execute — is actually the most important part, in the sense that it’s highest-leverage. After some discussion, I came to agree with him.

I’ve definitely had the experience that creative work is “bursty” — that most days you produce piles of junk, and some days you produce solid gold, whether it’s writing, math, or code. I’ve also heard this from other people, both friends and famous historical figures.  It also seems that when something’s going right about your “pre-work” cognitive processes — planning, imagining, even emotional attitudes — you do much better work at the formal, sit-down-and-produce-output stage.  Work goes hugely better when the “muse” is friendly.

John additionally believes that it’s possible to “train your muse” to help you work better, and said that learning to do this himself allowed him to contribute much better to open-source software projects (where he built a statistics library.)

He also pointed out that when it comes to dealing with the distant future, general-purpose and speculative cognitive processes will have to be more important than trained skills, because the future will contain unfamiliar situations that we haven’t trained for. People who excel at the sit-down-and-execute activities that help you succeed in your field aren’t necessarily going to be able to reason about the weirdness of a changing world.

(I agreed that the ability to “philosophize” well seems to be much rarer than the ability to execute well; I’ve seen many prominent computer scientists whose theories about general intelligence just don’t make sense.)

So the speculative, philosophical, imaginative stuff that comes before sitting down and executing is important for success, important for humanity, and maybe something we can learn to do better. John certainly thinks so, and wants the rationality community to be a sort of laboratory or nursery for these ideas.

It’s also true that formally executing on these ideas can be really hard, if you define “formally” strictly enough. Here’s Scott Alexander reflecting on the bureaucratic hell of trying to get a psychiatry study on human subjects approved by an IRB — when it only involved giving them a questionnaire!  If that’s what it takes to do academic experimental research on humans, I don’t want to claim that anybody who’s thinking about the human mind without publishing papers can be rounded down to “doing nothing.”

That still leaves us with the question of “how do I know — not an IRB, not the ‘general public’, but Sarah, your friendly acquaintance — that you’re making real progress?”  I’m still going to need to be shown some kind of results, if not peer-reviewed ones.  This is why I’m a fan of blogging and something in the neighborhood of “citizen science.”  If a programmer tests the speed of two different programs and writes up the results, code included, I believe them, and if I’m skeptical, I can try to duplicate their results. It’s in the spirit of the scientific method, even if it’s not part of the official edifice of Science(TM).
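
A minimal version of such a write-up might look like the sketch below (a hypothetical example of my own, not any particular person’s experiment): two implementations, the timing harness, and the printed numbers, published together so that a skeptic can re-run them.

    # Hypothetical citizen-science benchmark: compare two ways of summing
    # the first n integers, and publish the code alongside the results.
    import timeit

    def sum_loop(n=10_000):
        total = 0
        for i in range(n):
            total += i
        return total

    def sum_builtin(n=10_000):
        return sum(range(n))

    for fn in (sum_loop, sum_builtin):
        seconds = timeit.timeit(fn, number=1_000)
        print(f"{fn.__name__}: {seconds:.3f}s for 1,000 runs")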

So, John and I still have an unresolved disagreement about the general status of these “how to think real good” projects in the community.  He thinks they’re moving forward; I still haven’t seen evidence that convinces me.  This is our “double crux” — both of us agree (the “double” part) that it’s the key (“crux”) to our disagreement.

But I definitely agree with John that if there were promising ways to “think real good” being developed in our community, then it would be important to support and encourage that exploration.

One interesting thing that we had in common was that we both viewed “community” from a strongly individualist standpoint. John said he would evaluate someone as a potential collaborator on a project pretty much the same way whether they were a community member or not — track records for success, recommendations from friends he respects, and so on.  The “community” is useful because it’s a social network that sometimes floats cool people to his attention.  Deeper notions of tribe or belonging didn’t seem to apply, at least concerning his intellectual aims.  He had no interest in kicking people out for not following community standards, or trying to get everybody in the community to be a certain way; if a person considered themselves “part of the community” but John couldn’t see benefit from associating with that person, he just wouldn’t associate.  This is not everybody’s point of view — in fact, some people might say that John’s idea of a community is equivalent to not having a community at all.  So a lot of the things that seem to spark a lot of debate these days — community standards, community norms, etc — just didn’t show up in this double-crux at all, because neither of us really had strong intuitions about governance or collective issues.

Mostly, I came away with a lot of food for thought about the reflection vs. execution thing.  If there’s a spectrum between musing about the thing and doing the thing, I’m pretty far towards the “musing” side relative to the general population, so I’d generally assumed that I do too much musing and not enough executing.  “Head-in-the-clouds dreamer” and “impractical intellectual” and all that.  (Introspection falls into this category too; thinking too much about your own psyche is “navel-gazing”.)  But reflecting well seems to be incredibly high-reward relative to the time and effort spent, for compounding reasons. Strategizing so that you work on the right project, or putting attention into your mental health now so that you’re systematically more productive in the future, has a much bigger impact than just spending one more marginal hour on the daily slog.  Reflecting and strategizing gave my friend Satvik much more success at work.

It’s always felt a little presumptuous to me — like “who am I to think about what I’m doing? I’m supposed to keep my head down, keep slogging, and not ask questions!  Isn’t it terribly selfish to wonder what helps me do my best, rather than just doing my duty?”  But that’s a set of norms that gets applied to children, soldiers, and laborers (and maybe it shouldn’t be applied even to them), not to people like me. My peers expect that a person who does “knowledge work” for a living and writes essays will, of course, reflect on what she’s doing.

So maybe I ought to be going back and reading what reflective people write, taking it seriously this time around. “The unexamined life is not worth living.”  What if you literally meant that?  What if thinking about stuff was not a half-forbidden luxury but the most important thing about being human?

The Craft is Not The Community

Epistemic status: argumentative. I expect this to start a discussion, not end it.

“Company culture” is not, as I’ve learned, a list of slogans on a poster.  Culture consists of the empirical patterns of what’s rewarded and punished within the company. Do people win promotions and praise by hitting sales targets? By coming up with ideas? By playing nice?  These patterns reveal what the company actually values.

And, so, with community cultures.

It seems to me that the increasingly ill-named “Rationalist Community” in Berkeley has, in practice, a core value of “unconditional tolerance of weirdos.”  It is a haven for outcasts and a paradise for bohemians. It is a social community based on warm connections of mutual support and fun between people who don’t fit in with the broader society.

I think it’s good that such a haven exists. More than that, I want to live in one.

I think institutions like sharehouses and alloparenting and homeschooling are more practical and humane than typical American living arrangements; I want to raise children with far more freedom than traditional parenting allows; I believe in community support for the disabled and mentally ill and mutual aid for the destitute.  I think runaways and sexual minorities deserve a safe and welcoming place to go.  And the Berkeley community stands a reasonable chance of achieving those goals!  We’re far from perfect, and we obviously can’t extend to include everyone (esp. since the cost of living in the Bay is nontrivial), but I like our chances. I think we may actually, in the next ten years, succeed at building an accepting and nurturing community for our members.

We’ve built, over the years, a number of sharehouses, a serious plan for a baugruppe, preliminary plans for an unschooling center, and the beginnings of mutual aid organizations and dispute resolution mechanisms.  We’re actually doing this.  It takes time, but there’s visible progress on the ground.

I live on a street with my friends as neighbors. Hardly anybody in my generation gets to say that.

What we’re not doing well at, as a community, is external-facing projects.

And I think it’s time to take a hard look at that, without blame or judgment.

The thing about external-outcome-oriented projects is that they require standards. You have to be able to reject people for incompetence, and expect results from your colleagues.  I don’t think there’s any other way to achieve goals.

That means that an external-oriented project can’t actually serve all of a person’s emotional needs.  It can’t give you unconditional love. It can’t promise you a vibrant social scene. It can’t give you a place of refuge when your life goes to hell.  It can’t replace family or community.

As Robert Frost said, “Home is the place where, when you have to go there, they have to take you in.”

But Tesla Motors and MIT don’t have to take you in. And they wouldn’t work if they did.

Internally focused groups, whose goals are about the well-being of their own members, are intrinsically different. You have to care more about inclusion, consensus, and making the process itself rewarding and enjoyable for the participants. If you’re organizing parties for each other, making the social group gel well and making everyone feel welcome is not a side issue — it’s part of the main goal.  A Berkeley community organization that didn’t serve the people who currently live in Berkeley and meet their needs would no longer be an organization for our community; you can’t fire the community and get another.  The whole point is benefiting these specific people.

An externally-focused goal, by contrast, can and should be “no respecter of persons” — you have to focus on achieving good outcomes, regardless of who’s involved.

So far, when members of our community focus on external goals, I think they’ve done much better when they haven’t tried to marry those goals with making community institutions.

Some rationalists have created successful startups and many more have successful careers in the tech industry — but these are basically never “rationalist endeavors”, staffed exclusively by community members or focused on serving this community.  And they shouldn’t be. If you want to build a company, you hire the most competent people for the job, not necessarily your friends or neighbors. A company is oriented towards an external outcome, and so has to be objective and strategic about that goal. It’s by nature outward-facing, not inward-facing to the community.

My own outward-facing goal is to make an impact on treating disease.  Mainly I’m working towards that through working in drug development — at a company which is by no means a “rationalist community project.” It shouldn’t be! What we need are good biologists and engineers and data scientists, regardless of what in-jokes they tell or who they’re friends with.

In the long run, I hope to work on things (like anti-aging or tighter bench-to-bedside feedback loops) that are somewhat more controversial. But I don’t think that changes the calculus. You still want the most competent people you can get, who are also willing to get on board with your mission. Idealism and radicalism don’t negate the need for excellence, if you’re working on an external goal.

Some other people in the community have more purely intellectual projects that are closer to Eliezer Yudkowsky’s original goals: to research artificial intelligence; to develop tools for training Tetlock-style good judgment; to practice philosophical discourse.  But I still think these are ultimately outcome-focused, external projects.

Artificial intelligence research is science, and requires the strongest possible computer scientists and engineers. (And perhaps cognitive scientists and philosophers.) To their credit, I think most people working on AI are aware of the need for expertise and are trying to attract great talent, but I still think it needs to be said.

“Good judgment” or reducing cognitive biases is social science, and requires people with expertise in psychology, behavioral economics, decision theory, cognitive science, and the like. It might also benefit from collaboration with people who work in finance, who (according to Tetlock’s research) are more effective than average at avoiding cognitive biases, and have a long tradition of valuing strategy and quantitative thinking.

Even philosophical discourse, in my opinion, is ultimately external-outcome-focused. For all that it’s hard to measure success, the people who want to create better discourse norms do have a concern with quality, and ultimately consider this a broad issue affecting modern society, not exclusively a Berkeley-local issue.  Progress on improving discourse should produce results (in the form of writing or teaching) that can be shared with the wider world. It might be worth prioritizing good humanists, writers, teachers, and scholars who have a track record of building high-quality conversations.

None of these projects need to be community-focused!  In fact, I think it would be better if they freed themselves from the Berkeley community and from the particular quirks and prejudices of this group of people. It doesn’t benefit your ability to do AI research that you primarily draw your talent from a particular social group.  It also doesn’t straightforwardly benefit the social group that there’s a lot of overlap with AI research.  (Is your research going to make you better at babysitting? Or cooking? Or resolving roommate drama?)

Cross-pollination between the Berkeley community and outcome-oriented projects would still be good. After all, ambitious people make good company!  I don’t think that the Bay Area is going to stop being a business and academic hub any time soon, and it makes sense for there to be friendships and relationships between people who primarily focus on community and people who primarily focus on external projects.  (After all, that’s one traditional division of labor in a marriage.)

But I think it muddies the water tremendously when people conflate community-building with external-facing projects.

Does maintaining good social cohesion within the Berkeley community actually advance the art of human rationality? I’m skeptical, because rationality training empirically doesn’t improve our scores on reasoning questions.  [I seem to recall, though I can’t find the source, that community members also don’t score higher than other well-educated people on the Cognitive Reflection Test, a standard measure of cognitive bias.] [ETA: I remembered wrong! As of the 2012 LessWrong survey, LessWrongers scored significantly better on cognitive bias questions than the participants in the original papers.  So it’s still possible, though not obvious, that we’re in some sense a more-rational-than-average community.]  If we’re not actually more rational than you’d expect in the absence of a community, why should rationality-promoters necessarily focus on community-building within Berkeley? Social cohesion is good for people who live together, but it’s a stretch to say that it promotes the cause of critical thinking in general.

Does having fun discussions with friends advance the state of human discourse?  Does building interesting psychological models and trying self-help practices advance the state of psychology?  Again, it’s really easy to confuse that with highbrow forms of just-for-fun socializing. Which are good in themselves, because they are enjoyable and rewarding for us!  But it’s disingenuous to call that progress in a global and objective sense.

I consider charismatic social crazes to be essentially a form of entertainment. People enjoy getting swept up in the emotional thrill of a cult of personality or mass movement for pretty much the same reasons they enjoy falling in love, watching movies, or reading adventure stories. Thrills are personal (they only create pleasure for the recipient and don’t spill over much to the wider world) and temporary (you can’t stay thrilled or entertained by the same thing forever).  Interpersonal thrills, unlike works of art, are inherently ephemeral; they last only as long as the personal relationship does.  These factors place limits on how much value can be derived from charisma alone, if it doesn’t build more lasting outcomes.

That means personality cults and mass enthusiasms belong in the “community-building” bucket, not the “outward-facing project” bucket. Even from a community perspective, you might not think they’re a great idea, and that’s a separate discussion. But I’m primarily pushing back against the idea that they can be world-saving projects.  Something that only affects us and the insides of our heads, without leaving any lasting products or documents that can be shared with the world, is a purely internal affair.  Essentially, it’s just a glorified personal relationship.  And so it should be evaluated on the basis of whether it’s good for the people involved and the people they have personal relationships with. You look at it wearing your “community member” hat, not your “world-changing” hat.  Even if it’s nominally a nonprofit or a corporation, or associated with some ideology, if it doesn’t produce something for the world at large, it’s a community institution.

(An analogy is fandom debates. Sometimes these pose as political activism, but they are really arguments about fiction, by fans and for fans, with barely any impact on the non-fandom world. Fandom is a leisure activity, and so fandom debates are also a leisure activity.  Real activism, as practiced by professionals, is work; it’s not always fun, has standards for competence, and has tangible external goals that matter to people other than the activists themselves.)

I think distinguishing external-facing goals from community goals sidesteps the eternal debates over “what should the rationalist community be, and who should be in it?”

I think, in practice, the people who go to the same events in Berkeley, live together, parent together, and regularly communicate with each other, form a community. That community exists and deserves the love and attention of the people who value being part of it.  Not for any external reason, but, as they say in Red Dawn, “because we live here.”  We are people, our quality of life matters, our friendships matter, and putting effort into making our lives good is valuable to us.  We won’t choose the universal best way of life for all mankind, because that doesn’t exist; we’ll have the community norms and institutions that suit us, which is what having a local community means.

But there are individual people who are dissatisfied because that particular community, as it exists today, is not well-suited to accomplishing their external-facing goals. And I think that’s also a valid concern, and the natural solution is to divorce those goals from the purely communitarian ones. If you wonder “why doesn’t anybody around here care about my goal?” the natural thing to do is to focus on finding collaborators who do care about your goal — who may not be here!

If you’re frustrated that this isn’t a community based around excellence, I think you’ll be more likely to find what you’re looking for in institutions that have external goals and standards for membership. Some of those exist already, and some are worth creating.

A local, residential community isn’t really equipped to be a team of superstars.  Certainly a multigenerational community can’t be a team of superstars — you can’t just exclude someone’s kid if they don’t make the cut.

I don’t want to overstate this — Classical Athens was a town, and it had a remarkable track record of producing human achievement. But even there, we’re talking about a population of 300,000 people.  Most of them didn’t go down in history.  Most of them were the “populace” that Plato thought were not competent to rule.  90% of them weren’t even adult male citizens. I don’t know how you build a new Athens, but it’s important to remember that it’s going to contain a lot of farming and weaving along with the philosophy and poetry.

Small teams of excellent people, though, are pretty much the tried-and-true formula for getting external-facing things done, whether practical or theoretical.  And the usual evaluative tools of industry and academia are, I think, correct in outline: judge by track records, not by personal relationships; measure outcomes objectively; consider ideas that challenge your preconceptions; publish, or ship, your results.

I think more of us who have concrete external goals should be seeking these kinds of focused teams, and not relying on the residential community to provide them.

In Defense of Individualist Culture

Epistemic Status: Pretty much serious and endorsed.

College-educated Western adults in the contemporary world mostly live in what I’d call individualist environments.

The salient feature of an individualist environment is that nobody directly tries to make you do anything.

If you don’t want to go to class in college, nobody will nag you or yell at you to do so. You might fail the class, but this is implemented through a letter you get in the mail or on a registrar’s website.  It’s not a punishment, it’s just an impersonal consequence.  You can even decide that you’re okay with that consequence.

If you want to walk out of a talk in a conference designed for college-educated adults, you can do so. You will never need to ask permission to go to the bathroom. If you miss out on the lecture, well, that’s your loss.

If you slack off at work, in a typical office-job environment, you don’t get berated. And you don’t have people watching you constantly to see if you’re working. You can get bad performance reviews, you can get fired, but the actual bad news will usually be presented politely.  In the most autonomous workplaces, you can have a lot of control over when and how you work, and you’ll be judged by the results.

If you have a character flaw, or a behavior that bothers people, your friends might point it out to you respectfully, but if you don’t want to change, they won’t nag, cajole, or bully you about it. They’ll just either learn to accept you, or avoid you. There are extremely popular advice columns that try to teach this aspect of individualist culture: you can’t change anyone who doesn’t want to change, so once you’ve said your piece and they don’t listen, you can only choose to accept them or withdraw association.

The basic underlying assumption of an individualist environment or culture is that people do, in practice, make their own decisions. People believe that you basically can’t make people change their behavior (or, that techniques for making people change their behavior are coercive and thus unacceptable.)  In this model, you can judge people on the basis of their decisions — after all, those were choices they made — and you can decide they make lousy friends, employees, or students.  But you can’t, or shouldn’t, cause them to be different, beyond a polite word of advice here and there.

There are downsides to these individualist cultures or environments.  It’s easy to wind up jobless or friendless, and you don’t get a lot of help getting out of bad situations that you’re presumed to have brought upon yourself. If you have counterproductive habits, nobody will guide or train you into fixing them.

Captain Awkward’s advice column is least sympathetic to people who are burdens on others — the depressive boyfriend who needs constant emotional support and can’t get a job, the lonely single or heartbroken ex who just doesn’t appeal to his innamorata and wants a way to get the girl.  His suffering may be real, and she’ll acknowledge that, but she’ll insist firmly that his problems are not others’ job to fix.  If people don’t like you — tough! They have the right to leave.

People don’t wholly “make their own decisions”.  We are, to some degree, malleable, by culture and social context. The behaviorist or sociological view of the world would say that individualist cultures are gravely deficient because they don’t put any attention into setting up healthy defaults in environment or culture.  If you don’t have rules or expectations or traditions about food, or a health-optimized cafeteria, you “can” choose whatever you want, but in practice a lot of people will default to junk.  If you don’t have much in the way of enforcement of social expectations, in practice a lot of people will default to isolation or antisocial behavior. If you don’t craft an environment or uphold a culture that rewards diligence, in practice a lot of people will default to laziness.  “Leaving people alone”, says this argument, leaves them in a pretty bad place.  It may not even be best described as “leaving people alone” — it might be more like “ripping out the protections and traditions they started out with.”

Lou Keep, I think, is a pretty good exponent of this view, and summarizer of the classic writers who held it. David Chapman has praise for the “sane, optimistic, decent” societies that are living in a “choiceless mode” of tradition, where people are defined by their social role rather than individual choices.  Duncan Sabien is currently trying to create a (voluntary) intentional community designed around giving up autonomy in order to be trained/social-pressured into self-improvement and group cohesion.  There are people who actively want to be given external structure as an aid to self-mastery, and I think their desires should be taken seriously, if not necessarily at face value.

I see a lot of writers these days raising problems with modern individualist culture, and it may be an especially timely topic. The Internet is a novel superstimulus, and it changes more rapidly, and affords people more options, than ever before.  We need to think about the actual consequences of a world where many people are in practice being left alone to do what they want, and clearly not all the consequences are positive.

But I do want to suggest some considerations in favor of individualist culture — that often-derided “atomized modern world” that most of us live in.

We Aren’t Clay

It’s a common truism that we’re all products of our cultural environment. But I don’t think people have really put together the consequences of the research showing that it’s not that easy to change people through environmental cues.

  • Behavior is very heritable. Personality, intelligence, mental illness, and social attitudes are all well established as being quite heritable.  The list of the top ten most replicated findings in behavioral genetics starts with “all psychological traits show significant and substantial genetic influence”, which Eric Turkheimer has called the “First Law of behavioral genetics.”  A significant proportion of behavior is also explained by “nonshared environment”, which means it isn’t genetic and isn’t a function of the family you were raised in; it could include lots of things, from peers to experimental error to individual choice.
  • Brainwashing doesn’t work. Cult attrition rates are high, and “brainwashing” programs of POWs by the Chinese after the Korean War didn’t result in many defections.
  • There was a huge boom in the 1990s and 2000s in “priming” studies — cognitive-bias studies that showed that seemingly minor changes in environment affected people’s behavior.  A lot of these findings didn’t replicate. People don’t actually walk slower when primed with words about old people. People don’t actually make different moral judgments when primed with words or videos of cleanliness or disgusting bathrooms.  Being primed with images of money doesn’t make people more pro-capitalist.  Girls don’t do worse on math tests when primed with negative stereotypes. Daniel Kahneman himself, who publicized many of these priming studies in Thinking, Fast and Slow, wrote an open letter to priming researchers warning that they’d have to start replicating their findings or lose credibility.
  • Ego depletion failed to replicate as well; using willpower doesn’t make you too “tired” to use willpower later.
  • The Asch Conformity Experiment was nowhere near as extreme as casual readers generally think: the majority of people didn’t change their answers to wrong ones to conform with the crowd, only 5% of people always conformed, and 25% of people never conformed.
  • The Sapir-Whorf Hypothesis has generally been found to be false by modern linguists: the language one speaks does not determine one’s cognition. For instance, people who speak a language that uses a single word for “green” and “blue” can still visually distinguish the colors green and blue.

Scott Alexander said much of this before, in Devoodooifying Psychology.  It’s been popular for many years to try to demonstrate that social pressure or subliminal cues can make people do pretty much anything.  This seems to be mostly wrong.  The conclusion you might draw from the replication crisis along with the evidence from behavioral genetics is “People aren’t that easily malleable; instead, they behave according to their long-term underlying dispositions, which are heavily influenced by inheritance.”  People may respond to incentives and pressures (the Milgram experiment replicated, for instance), but not to trivial external pressures, and they can actually be quite resistant to pressure to wholly change their lives and values (becoming a cult member or a Communist.)

Those who study culture think that we’re all profoundly shaped by culture, and to some extent that may be true. But not as much or as easily as social scientists think.  The idea of mankind as arbitrarily malleable is an appealing one to marketers, governments, therapists, or anyone who hopes that it’s easy to shift people’s behavior.  But this doesn’t seem to be true.  It might be worth rehabilitating the notion that people pretty much do what they’re going to do.  We’re not just swaying in the breeze, waiting for a chance external influence to shift us. We’re a little more robust than that.

People Do Exist, Pretty Much

People try to complicate the notion of “person” — what is a person, really? Do individuals even exist?  I would argue that a lot of this is not as true as it sounds.

A lot of theorists suggest that people have internal psychological parts (Plato, Freud, Minsky, Ainslie) or are part of larger social wholes (Hegel, Heidegger, lots and lots of people I haven’t read).  But these, while suggestive, are metaphors and hypotheses. The basic, boring fact, usually too obvious to state, is that most of your behavior is proximately caused by your brain (except for reflexes, which are controlled by your spinal cord.)  Your behavior is mostly due to stuff inside your body; other people’s behavior is mostly due to stuff inside their bodies, not yours.  You do, in fact, have much more control over your own behavior than over others’.

“Person” is, in fact, a natural category; we see people walking around and we give them names and we have no trouble telling one person apart from another.

When Kevin Simler talks about “personhood” being socially constructed, he means a role, like “lady” or “gentleman.” The default assumptions that are made about people in a given context. This is a social phenomenon — of course it is, by design!  He’s not literally arguing that there is no such entity as Kevin Simler.

I’ve seen Buddhist arguments that there is no self, only passing mental states.  Derek Parfit has also argued that personal identity doesn’t exist.  I think that if you weaken the criterion of identity to statistical similarity, you can easily say that personal identity pretty much exists.  People pretty much resemble themselves much more than they resemble others. The evidence for the stability of personality across the lifespan suggests that people resemble themselves quite a bit, in fact — different timeslices of your life are not wholly unrelated.

Self-other boundaries can get weird in certain mental conditions: psychotics often believe that someone else is implanting thoughts inside their heads, people with DID have multiple personalities, and some kinds of autism involve a lot of suggestibility, imitation, and confusion about what it means to address another person.  So it’s empirically true that the sense of identity can get confused.

But that doesn’t mean that personal identity doesn’t usually work in the “normal” way, or that the normal way is an arbitrary convention. It makes sense to distinguish Alice from Bob by pointing to Alice’s body and Bob’s body.  It’s a distinction that has a lot of practical use.

If people do pretty much exist and have lasting personal characteristics, and are not all that malleable by small social or environmental influences, then modeling people as individual agents who want things isn’t all that unreasonable, even if it’s possible for people to have inconsistent preferences or be swayed by social pressure.

And cultural practices which acknowledge the reality that people exist — for example, giving people more responsibility for their own lives than they have over other people’s lives — therefore tend to be more realistic and attainable.


How Ya Gonna Keep Em Down On The Farm

Traditional cultures are hard to keep, in a modern world.  To be fair, pro-traditionalists generally know this.  But it’s worth pointing out that ignorance is inherently fragile.  As Lou Keep points out, beliefs that magic can make people immune to bullets can be beneficial, as they motivate people to pull together and fight bravely, and thus win more wars. But if people find out the magic doesn’t work, all that benefit gets lost.

Is it then worth protecting gri-gri believers from the truth?  Or protecting religious believers from hearing about atheism?  Really? 

The choiceless mode depends on not being seriously aware that there are options outside the traditional one.  Maybe you’ve heard of other religions, but they’re not live options for you. Your thoughts come from inside the tradition.

Once you’re aware that you can pick your favorite way of life, you’re a modern. Sorry. You’ve got options now.

Which means that you can’t possibly go back to a premodern mindset unless you are brutally repressive about information about the outside world, and usually not even then.  Thankfully, people still get out.

Whatever may be worth preserving or recreating about traditional cultures, it’s going to have to be aspects that don’t need to be maintained by forcible ignorance.  Otherwise it’ll have a horrible human cost and be ineffective.

Independence is Useful in a Chaotic World

Right now, anybody trying to build a communitarian alternative to modern life is in an underdog position.  If you take the Murray/Putnam thesis seriously — that Americans have less social cohesion now than they did in the mid-20th century, and that this has had various harms — then that’s the landscape we have to work with.

Now, that doesn’t mean that communitarian organizations aren’t worth building. I participate in a lot of them myself (group houses, alloparenting, community events, mutual aid, planning a homeschooling center and a baugruppe).  Some Christians are enthusiastic about a very different flavor of community participation and counterculture-building called the Benedict Option, and I’m hoping that will work out well for them.

But, going into such projects, you need to plan for the typical failure modes, and the first one is that people will flake a lot.  You’re dealing with moderns! They have options, and quitting is an option.

The first antidote to flaking that most people think of — building people up into a frenzy of unanimous enthusiasm so that it doesn’t occur to them to quit — will probably result in short-lived and harmful projects.

Techniques designed to enhance group cohesion at the expense of rational deliberation — call-and-response, internal jargon and rituals, cults of personality, suppression of dissent  — will feel satisfying to many who feel the call of the premodern, but aren’t actually that effective at retaining people in the long term.  Remember, brainwashing isn’t that strong.

And we live in a complicated, unstable world.  When things break, as they will, you’d like the people in your project to avoid breaking.  That points in the direction of  valuing independence. If people need a leader’s charisma to function, what are they going to do if something happens to the leader?

Rewarding Those Who Can Win Big

A traditionalist or authoritarian culture can help people by guarding against some kinds of failure (families and churches can provide a social safety net, rules and traditions can keep people from making mistakes that ruin their lives), but it also constrains the upside, preventing people from creating innovations that are better than anything within the culture.

An individualist culture can let a lot of people fall through the cracks, but it rewards people who thrive on autonomy. For every abandoned and desolate small town with shrinking economic opportunity, there were people who left that small town for the big city, people whose lives are much better for leaving.  And for every seemingly quaint religious tradition, there are horrible abuse scandals under the surface.  The freedom to get out is extremely important to those who aren’t well-served by a traditional society.

It’s not that everything’s fine in modernity. If people are getting hurt by the decline of traditional communities — and they are — then there’s a problem, and maybe that problem can be ameliorated.

What I’m saying is that there’s a certain kind of justice that says “at the very least, give the innocent and the able a chance to win or escape; don’t trade their well-being for that of people who can’t cope well with independence.”  If you can’t end child abuse, at least let minors run away from home. If you can’t give everybody a great education, at least give talented broke kids scholarships.  Don’t put a ceiling on anybody’s success.

Immigrants and kids who leave home by necessity (a lot of whom are LGBT and/or abused) seem to be rather overrepresented among people who make great creative contributions.  “Leaving home to seek your freedom and fortune” is kind of the quintessential story of modernity.  We teach our children songs about it.  Immigration and migration is where a lot of the global growth in wealth comes from.  It was my parents’ story — an immigrant who came to America and a small-town girl who moved to the city.  It’s also inherently a pattern that disrupts traditions and leaves small towns with shrinking populations and failing economies.

Modern, individualist cultures don’t have a floor — but they don’t have a ceiling either. And there are reasons for preferring not to allow ceilings. There’s the justice aspect I alluded to before — what is “goodness” but the ability to do valuable things, to flourish as a human? And if some people are able to do really well for themselves, isn’t limiting them in effect punishing the best people?

Now, this argument isn’t an exact fit for real life.  It’s certainly not the case that everything about modern society rewards “good guys” and punishes “bad guys”.

But it works as a formal statement. If the problem with choice is that some people make bad choices when not restricted by rules, then the problem with restricting choice is that some people can make better choices than those prescribed by the rules. The situations are symmetrical, except that in the free-choice scenario, the people who make bad choices lose, and in the restricted scenario, the people who make good choices lose.  Which one seems more fair?

There’s also the fact that in the very long run, only existence proofs matter.  Does humanity survive? Do we spread to the stars?  These questions are really about “do at least some humans survive?”, “do at least some humans develop such-and-such technology?”, etc.  That means allowing enough diversity or escape valves or freedom so that somebody can accomplish the goal.  You care a lot about not restricting ceilings.  Sure, most entrepreneurs aren’t going to be Elon Musk or anywhere close, but if the question is “does anybody get to survive/go to Mars/etc”, then what you care about is whether at least one person makes the relevant innovation work.  Playing to “keep the game going”, to make sure we actually have descendants in the far future, inherently means prioritizing best-case wins over average-case wins.
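
A toy simulation makes the point; this is a minimal sketch with made-up numbers, and the two distributions are arbitrary illustrative choices:

```python
# If the goal is "does at least one attempt succeed?", what matters is the
# best draw, not the average draw. Two strategies with the same mean:
import random

def best_of(n, strategy):
    return max(strategy() for _ in range(n))

safe = lambda: random.gauss(mu=50, sigma=5)    # rule-bound: tight spread
free = lambda: random.gauss(mu=50, sigma=30)   # free-choice: same mean, wide spread

random.seed(0)
print(best_of(10_000, safe))   # best outcome: roughly the high 60s
print(best_of(10_000, free))   # best outcome: roughly 150 or more
# The averages are identical; the high-variance strategy wins the best case.
```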

Upshots

I’m not arguing that it’s never a good idea to “make people do things.”  But I am arguing that there are reasons to be hesitant about it.

It’s hard to make people do what you want; you don’t actually have that much influence in the long term; people in their healthy state generally are correctly aware that they exist as distinct persons; surrendering judgment or censoring information is pretty fragile and unsustainable; and restricting people’s options cuts off the possibility of letting people seek or create especially good new things.

There are practical reasons why “leave people alone” norms became popular, despite the fact that humans are social animals and few of us are truly loners by temperament.

I think individualist cultures are too rarely explicitly defended, except with ideological buzzwords that don’t appeal to most people. I think that a lot of pejoratives get thrown around against individualism, and I’ve spent a lot of time getting spooked by the negative language and not actually investigating whether there are counterarguments.  And I think counterarguments do actually exist, and discussion should include them.


Regulatory Arbitrage for Medical Research: What I Know So Far

Epistemic status: pretty ignorant. I’m sharing now because I believe in transparency.

I’ve been interested in the potential of regulatory arbitrage (that is, relocating to less regulated polities) for medical research for a while. Getting drugs or devices FDA-approved is expensive and extremely slow.  What if you could speed it up by going abroad to do your research?

I talked to some people who work in the field, and so far this is my distillation of what I got out of those conversations.  It’s a very rough draft and I expect to learn more.

Q: Why don’t pharma companies already run trials in developing countries?

A: They do! A third of clinical trials run by US-based pharma companies are outside the US, and that number is rapidly growing — a more than 2000% increase over the past two decades. Labor costs in India, China, and Russia are much lower, and it’s easier to recruit participants in countries where a clinical trial may be the only chance people have to get access to the latest treatments.

But in order to sell to American markets, those overseas trials still have to be conducted to FDA standards (with correspondingly onerous reporting requirements.) Many countries, like China, are starting to harmonize their regulatory standards with the FDA.  It’s not the Wild West.

Q: Ok, but why not sell drugs to foreign countries and bypass the US entirely?

A: The US is by far the biggest pharmaceutical market. As of 2014, US sales made up about 38% of global pharmaceutical sales; the European market was about 31%, and is roughly as tightly regulated. The money in pharma comes from selling to the developed world, which has strict standards for demonstrating safety and efficacy.

Q: Makes sense. But why not run cheap, preliminary, unofficial trials just to confirm for yourself whether drugs work, before investing in bigger and more systematic FDA-compliant trials for the successful ones?

A: I don’t know for sure, but it seems like pharma companies are generally not very interested in choosing their drug portfolio based on the likely efficacy of early-stage drug candidates.  When I’ve tried to do research into how they decide which drug candidates to pursue through clinical trials, what I found was that there’s a lot of portfolio management: mathematical models, sometimes quite complex, based on discounted cash flow analysis.  A drug candidate is treated as a random variable which has some distribution over future returns, based on the market size and the average success rate of trials.
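
To make that concrete, here is a minimal sketch of the kind of risk-adjusted discounted-cash-flow calculation I mean. Every number and parameter is invented for illustration; no real company’s model looks exactly like this:

```python
# Toy "expected NPV" model of a drug candidate, treated as a random variable:
# trial costs are paid for certain, revenues arrive only in the success
# branch, and everything is discounted back to the present.

def expected_npv(p_success, peak_annual_sales, trial_cost_per_phase,
                 years_to_market=8, patent_years=10, discount_rate=0.10):
    # Costs: each phase's cost is paid whether or not the drug succeeds.
    costs = sum(cost / (1 + discount_rate) ** year
                for year, cost in enumerate(trial_cost_per_phase, start=1))
    # Revenues: flat peak sales from approval until patent expiry,
    # earned only with probability p_success.
    revenues = sum(peak_annual_sales / (1 + discount_rate) ** year
                   for year in range(years_to_market,
                                     years_to_market + patent_years))
    return p_success * revenues - costs

# A candidate with a 10% overall success rate, $500M peak sales, and
# phase costs of $20M/$50M/$150M comes out slightly negative, so it's cut:
print(expected_npv(0.10, 500e6, [20e6, 50e6, 150e6]))  # about -$15M
```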

What doesn’t seem to be involved in the decision-making process is analysis of which drug candidates are more likely to succeed in trials than others. Most drug candidates don’t work: 92% of preclinical drug candidates fail to be efficacious when tested in humans, and that attrition rate is only growing.  As clinical trials grow more expensive, failed trials are a serious and increasing drag on the pharma industry, but I’m not sure there’s interest in trying to cut those costs by choosing drug candidates more selectively.

On the few occasions when I’ve tried to pitch to large pharma companies the idea of trying to “pick winners” among early-stage drugs based on data analysis (of preclinical results, the past performance of the drug class, whatever), the idea was rejected.

Investors in biotech startups, of course, do try to pick winners among preclinical drug candidates; but an investor told me that, based on his experience, it wouldn’t be much easier to raise money if you had a successful but non-FDA-compliant preliminary human trial than if you had no human trials at all.

My impression is that (perhaps as a rational reaction to high rates of noise or fraud) decisionmakers in the industry aren’t very interested in making bets based on weak or preliminary evidence, and tend to round it down to no evidence at all.

Q: So are there any options left for trying to do medical research outside of an onerous regulatory environment?

A: Well, one option is legal exemptions. For example, the FDA’s Rare Disease Program can offer faster options for reviewing applications for a drug candidate that treats a life-threatening disease where no adequate treatment exists.

Another option is selling supplements, which do not need FDA approval. You need to make sure they’re safe, you can’t sell controlled substances, and you can’t claim that supplements treat any disease, but other than that, for better or worse, you can sell what you want.  One company, Elysium Health, is actually trying to develop serious anti-aging therapies and market them as supplements; Leonard Guarente, one of the pioneers of geroscience and the head of MIT’s aging lab, is the co-founder.

The problem with supplements, of course, is that you can’t sell them as treatments. Aging isn’t legally a disease, and the FDA is not approving anti-aging therapies, so Elysium’s model makes sense. But if you had a cure for cancer, you’d have a hard time selling it as a supplement without running afoul of the law.

There’s also medical tourism, which is a $10bn industry as of 2012, and expected to reach $32bn by 2019.  Most medical tourism is for conventional medical procedures, especially cosmetic surgery and dentistry, as customers seek cheaper options abroad.  Sometimes there are also experimental procedures, like stem cell therapies, though a lot of those are fraudulent and dangerous.  It might be possible to open a high-quality translational-research clinic in a developing country, and eventually collect enough successful results to advertise it globally as a medical tourism destination.  The key challenge, from what people in the field tell me, is to get the official blessing of the local government.

Q: Could you do it on a ship?

A: Maybe, but it would be hard.

Yes, technically international waters are not under any country’s jurisdiction.  But if a government really doesn’t want you doing your thing, they can still stop you. Pirate radio (unlicensed radio broadcasting from ships in international waters) was technically legal in the 1960s, when it was very popular in the UK, but by 1967 the legal loophole had been shut down.

Also, ships are in the water. If you compare a cruise ship to a building of equivalent square footage, the ship needs to be staffed with people with nautical expertise, and it needs more regular maintenance.  In most situations, I’d expect it to be much more expensive to run a ship clinic than a land clinic.

There’s also the sobering example of BlueSeed, which was to be a cruise ship where international entrepreneurs could live and work in international waters, without the need for a US visa. It was put “on hold” in 2013 due to lack of investor funding.  And, obviously, a “floating condo/office” is a much easier goal than a “floating clinic.”

Q: Would cryptocurrencies help?

A: Noooooo. No no no no no.

You’re probably thinking about black markets, which are risky in themselves; and anyway, cryptocurrencies do not help with black markets because they are not anonymous.

Bitcoin’s own website helpfully points out that Bitcoin is not anonymous.  It is incredibly not anonymous.  It is literally a public record of all your transactions.  Ross Ulbricht of Silk Road, tragically, didn’t understand this.

Q: So, can regulatory arbitrage work?

A: It’s definitely not trivial, but I haven’t ruled it out yet. The medical tourism model currently seems like the most promising method.

I think that transparency would be essential to any big win — yes, there’s lots of shady gray-market stuff out there, but even aside from ethical concerns, if you have to fly under the radar, it’s hard to grow big.  If you’re doing clinical research, it’s impossible to get anything done unless you’re transparent with the scientific community.  If you’re trying to push medical tourism towards the mainstream, you have to inspire trust in patients.  Controversy is inevitable, but if a model like this can work at all, the results would have to be good enough to speak for themselves.

Miscellany

  1. I have a Twitter feed. It’s just journal articles (and commentary on them), I don’t use it as a social network, but if you want to see what’s on my mind, check it out.  For instance, what are the implications if most polygenic traits are affected by nearly all genes?
  2. I quit cross-posting to LessWrong because the discussion didn’t seem that good and I didn’t have the energy to single-handedly try to shift the flow. That may be changing now that they’re setting up a new, more troll-proof website, now in private beta. I’ll see how it goes and link when it’s open to the public.
  3. I highly recommend Lapham’s Quarterly, a magazine that brings together excerpts from historical and contemporary writers on a common theme. It’s an easy way to get some perspective, since we live in a really ahistorical culture.
  4. Elizabeth of Aceso Under Glass is now trying to go pro with her writing and research:

    My passion is the things I do for this blog: research, modeling, writing.  So obviously a lot of my newfound free time will go here.  But I’d also like to look for paid opportunities to use those skills.  If you are or know of someone who needs writing or research like I do for this blog (deep scientific investigation, synthesizing difficult sources into something easy to read, effectiveness analysis, media reviews, all of these together), please reach out to me via elizabeth at this domain.   Have a thing you really want me to blog about?  Now’s a good time to ask.

If you like my lit reviews, and want to commission someone to research the answer to a question, go to Elizabeth. She’s excellent, and she actually has the time and opportunity to do freelance work, which I currently don’t.

Momentum, Reflectiveness, Peace

Epistemic Status: Personal

I’ve been writing a lot lately about the mental habits that make calm and reflection possible. This is because a lot of “rationality” seems to depend on dispositions — things like the propensity to question your first assumptions, seek new information, examine evidence in a fair or dispassionate manner, and so on.  It’s very difficult to be motivated towards reflective behavior if you’re so upset that the mental motion of “stop and think” is impossible for you.  Knowing about cognitive biases isn’t much use if you don’t want to do anything except your default reactions to stimuli.

Reflectiveness, I think, is simply the capacity to question, “Is this what I want to be doing?”  The opposite of reflectiveness is momentum: when you feel like “whatever I happen to be doing, I want to keep doing it, good and hard!”  Reflectiveness is “Hmm, could things be otherwise than they are?” Momentum is “Things shall be exactly as they are! Except more so!”

Social media feedback loops are an example of momentum. You happened to start fooling around on social media, so you want to continue.  Similarly, you notice that something is beginning to trend, so you want to jump on the trend and ride it higher.  This is momentum in the sense of the momentum term in a stochastic process.
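
For the non-mathematicians: a momentum term just blends each new signal into the running direction of travel, so the process keeps moving the way it was already moving. A minimal sketch, where the 0.9 weight is an arbitrary illustrative choice:

```python
# Momentum update: the new state is mostly the old velocity, only nudged
# by the latest signal. With beta near 1, history dominates the present.
def momentum_update(velocity, new_signal, beta=0.9):
    return beta * velocity + (1 - beta) * new_signal

v = 0.0
for signal in [1, 1, 1, -1]:
    v = momentum_update(v, signal)
print(v)  # still positive (~0.14): one contrary signal barely dents the trend
```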

I suspect that psychological reactance and momentum are linked. When you think, “whatever I’m doing, I don’t want to change, and if you suggest I change, I’ll only do it more!” there’s something of a momentum flavor.

“Do whatever is being done, but more so” is what Michele Reilly calls “pragmatism”:

Pragmatism creates a call for conformity, implicit pressure for agreement and unquestioning support for whatever is representative of power. Its philosophy is a submission to threats.  Intellectualism, as I am using the term, points directly away from those things.

Reflectiveness, then, is “consider what is not being done, what is not representative of power, what is not in agreement with the default.”  Consider deviations and alternatives and original approaches.  Consider whether the current direction of society might not be optimal. Consider whether what you’re doing might not be for the best.  Consider whether the last thing you read might not be correct. Consider whether to turn in a different direction.

This is the mental motion of “stop, think, ask a question.”

As I understand it, it is similar to sattva, the peaceful, aware state of mind.  Like air, it is mobile; it can change direction.  Like air, it is light; it feels mildly pleasant to be intellectually engaged.

But getting to reflectiveness is often scary and threatening. If you really want something at the moment, you have to let go long enough to think about “do I want to want this?” If you are doing something at the moment, you have to stop long enough to think about “do I want to do this?”  And if you had to change your behavior, or change an entire chunk of the world, that would be a lot of work.  The prospect of extra work, or of stepping back from the object of your present desire, is really stressful.

My current hack towards reflectiveness is to simply start with the stop.

Rest is the first thing. Sleep deprivation makes people more emotionally reactive and less reflective.  I found a day of focused rest really helpful: I deliberately spent all day sleeping whenever I wanted, eating as much as I wanted, quietly daydreaming or meditating without talking to anyone or consuming any media, and focusing on regaining a sense of wellness and satiety.

A related thing is cultivating a sort of contentment. “All is well, literally everything is fine, I don’t have to do anything except be.  Everything can be left in peace.”

I know that there are a lot of problems with contentment, if I were to present it as a totalizing philosophy. Lots of people are not fine. Many things are worth doing. Eternal apathy isn’t most people’s idea of a great life plan.

But I’m not thinking of contentment as the whole of one’s life or mind. I’m thinking of it as a base. There is a very low-level sense of “things are all right, I can rest and be nourished, I am welcome in the universe” that I think is probably important for living things.  And to cultivate that base, sometimes you have to stop doing things and rest your body and mind.  You don’t have to do anything right now. No obligations bind. You can rest in peace and freedom.

And out of that restful state, sometimes reflectiveness becomes more accessible. For instance, if you believe you don’t have an obligation to act on a particular idea you read about, you can begin to merely consider it, abstractly, hypothetically. With a certain airy gentleness.

(In a weird way, I think this may be akin to Kant’s notion of public reason. He says that in a state with a sovereign strong enough that one can be certain that mere intellectual discussion of reforms won’t lead to revolution, it becomes possible to actually achieve “enlightened” reforms, slowly and over time, whereas revolutions tend to merely replace one form of arbitrary power with another.  Similarly, if you can merely consider an idea intellectually, while temporarily promising yourself that you don’t have to do anything about it, then in the long run you might become more able to change your behavior on the basis of such reflections.)

Cultivating this sense of restful, contented peace made it more possible for me to engage with ideas without feeling pressured to agree with them.  If lots of alternatives are possible, but none are obligatory, then entertaining hypothetical concepts is a rather gossamer-light experience, like looking at a soap bubble or a rainbow.

It’s also easier to behave with gentleness and self-restraint towards other people, if you tap into that sense of eternal peace; people can put no duties upon you, they are simply fellow-creatures sharing the world with you, and you can separate from them if you like.

I’ll have to wait and see if this leads to more thorough abilities to consider alternatives and act on the basis of reflection, but it seems promising.

My current motto is “Turn — slowly.”  I can only adapt slowly, improve slowly, originate useful ideas slowly.  I still need stretches of rest and peace. A slow positive trajectory is still worth it. (And can be more productive in the long run. I get dramatically more work done after rest.)  Turning slowly towards truth seems to be the best way available.

Update on Sepsis: Donations Probably Unnecessary

Epistemic Status: Pretty Confident

So, remember how I was urging people to donate for a randomized controlled trial of a new treatment for sepsis?

I’ve been informed by some people who work with the Open Philanthropy Project, whose research into giving opportunities I really respect, that there are already foundations which are likely to fund an RCT for the treatment.  This means that donations from private individuals are no longer necessary.

(A quick rundown of the logic behind this: if you’re trying to give “optimally”, you want to pay attention to the marginal returns of your dollars.  If you give the first dollar to a great opportunity that nobody else will fund, your marginal impact is huge. If you give a dollar to the same great opportunity, but somebody else has already pledged $10M, then your dollar has become a lot less useful, because pretty much any goal has diminishing marginal returns on investment.  If your motivation for giving to charity is achieving a goal as cheaply as possible, you should move away from charities that are already adequately funded, and towards opportunities that are underfunded. This is a simple idea but it took me a surprisingly long time to understand!)
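
Here’s the same logic as a toy calculation. The logarithmic impact curve is my illustrative assumption, not anyone’s actual model of the sepsis trial:

```python
import math

# Suppose total impact grows with the log of total funding
# (i.e., diminishing marginal returns).
def impact(funding):
    return math.log(funding)

def marginal_impact(funding, dollars=1.0):
    return impact(funding + dollars) - impact(funding)

print(marginal_impact(1_000))        # your dollar as nearly the first: ~1e-3
print(marginal_impact(10_000_000))   # the same dollar after a $10M pledge: ~1e-7
# Four orders of magnitude less marginal good once the big funder shows up.
```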

If you already gave to Eastern Virginia Medical School for the sepsis trial, your money’s not refundable, but it’s still being dedicated to sepsis research.

General implications I’d draw from this:

  • This is another example, as GiveWell and OpenPhil have found many times, of the principle that finding good giving opportunities is hard. It’s hard for the same reason finding good investment opportunities is hard. If something is obviously great, there’s a good chance that professionals have already invested in it. If something is undervalued, it’s probably not obviously great (it’s at least likely to be controversial.)
  • This is a positive update on the success of the philanthropic community, especially in medicine.  Drug companies may not have an incentive to fund trials of cheap, unpatentable treatments, but perhaps foundations do.
  • Unfortunately for those of us on the awkward and scruffy side, this suggests that talking to rich people is a useful skill in finding out what’s actually going on in the world.


Kindness Against The Grain

Epistemic Status: Unformed Thoughts

I’ve heard from a number of secular-ish sources (Carse, Girard, Arendt) that the essential contribution of Christianity to human thought is the concept of forgiveness.  (Ribbonfarm also has a recent post on the topic of forgiveness.)

I have never been a Christian and haven’t even read all of the New Testament, so I’ll leave it to commenters to recommend Christian sources on the topic.

What I want to explore is the notion of kindness without a smooth incentive gradient.

Most human kindness is incentivized. We do things for others, and get things in return. Contracts and favors alike are reciprocal actions.  And this makes a lot of sense, because trade is sustainable. Systems of game-theoretic agents that do some variant of tit-for-tat exchange tend to thrive, compared to agents that are freeloaders or altruists. Freeloaders can only exploit so long until they destroy the system they’re exploiting, or suffer from the retribution of tit-for-tat players; pure altruists burn themselves out quickly.
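
You can see this in a minimal iterated prisoner’s dilemma, sketched below with the standard textbook payoffs; the strategies and round count are arbitrary illustrative choices:

```python
# Payoffs per round: mutual cooperation (3,3), mutual defection (1,1),
# and a defector exploits a cooperator (5,0).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_moves):  # cooperate first, then mirror the opponent
    return opponent_moves[-1] if opponent_moves else 'C'

def freeloader(opponent_moves):   # always defect
    return 'D'

def altruist(opponent_moves):     # always cooperate
    return 'C'

def play(a, b, rounds=100):
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        move_a, move_b = a(moves_b), b(moves_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        moves_a.append(move_a)
        moves_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))  # (300, 300): trade is sustainable
print(play(altruist, freeloader))      # (0, 500): the pure altruist is bled dry
print(play(tit_for_tat, freeloader))   # (99, 104): exploitation pays exactly once
```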

Sometimes kindness is reciprocated at the genetic rather than the personal level (see kin selection.)

Sometimes it’s reciprocated by long-term or indirect means — you can sometimes get social credit for being kind, even if the person you help can’t directly reciprocate. A reputation for generosity to allies and innocents makes you look strong and worth allying with, so you come out ahead in the long run.

And one of the ways we implement the incentives towards kindness in practice is through sympathy. When we see another’s suffering, we feel an urge to be kind to them, and a warm fuzzy reward if we help them.  That way, kindness is feasible along local emotional incentive gradients.

But, of course, sympathy itself is carefully optimized to make sure we only sympathize with those whom we’d come out ahead by helping. Sympathy is not merely a function of suffering. It is easier to sympathize with children than with adults, with the grateful than the ungrateful, with those who have experienced culturally acceptable “grounds for sympathy” (such as divorce, loss of a loved one, illness, job loss, crime victimization, car trouble, or fatigue, according to this sociological study).  We sympathize more with those whose suffering is perceived as unjust — though this may be something of a circular notion.

This leaves out certain forms of suffering.

  • The stranger, who is not part of your group, will receive less sympathy.  So will the outsider or social deviant.
  • The person with a permanent problem that can’t be easily fixed will eventually receive less sympathy, because he cannot be restored to happiness and put in a position to show gratitude or return favors.
  • The overly self-reliant person will receive less sympathy; if sympathy is like a “credit account”, the person who has never opened one will be offered less credit than one who maintains a modest balance. We require vulnerability and a show of weakness before our sympathy will turn on.
  • The angry or assertive person who does not show gratitude or deference will receive less sympathy.  Appeasement displays evoke sympathy and reconciliation.
  • The person whose suffering takes an illegible form will receive less sympathy.

To be a recipient of sympathy one must be both weak and strong; weak, to show one really has received a misfortune; strong, to show one can be a useful ally someday. Children are the perfect example, because they are small and vulnerable today, but can grow to be strong as adults.  The victims of temporary and easily reversible bad luck are in a similar position: vulnerable today, but soon to be restored to strength.  Permanently disadvantaged adults, people who may be poor/disabled/nonwhite/etc and have developed the self-reliance or resentment associated with coping with long-term deprivation that isn’t going away, are less easy to sympathize with.

Some of this has been shown experimentally; subjects in an experiment who viewed other subjects appearing to receive electric shocks were more unsympathetic when they were told the shocks would continue in a subsequent session, versus when they were told the shocks had ended, or when they were told that their choices could stop the shocks. Permanent suffering is less sympathetic than temporary or fixable suffering.

Sympathy provides an immediate emotional incentive to respond to suffering with kindness, and it’s pretty well calibrated to be “good game theory” — but it’s not perfect by any means.

Cooperation Without Sympathy

Imagine a space alien — a grotesque creature, one whose appearance makes you want to vomit — offers you a deal. Let’s say this alien is, like the creatures in Octavia Butler’s Xenogenesis trilogy, a “gene trader”, one who can splice DNA with its bodily organs, and has a drive towards genetic engineering analogous to what Earth animals experience as a sex drive.  If you have “sex” with the alien and produce part-alien babies, it will give you and your children access to the vastly advanced powers in its alien genes, in exchange for gratifying its biological urge and allowing it to benefit from your genes.

From an intuitive standpoint, this is grotesque. The alien is not sexy. You cannot feel compassion for its desires to trade genes with you. It feels violating, disgusting, unacceptable. You were never evolved to want to breed with aliens.

And yet the game theory is sound. Superpowers are a grand thing to have. Even sexiness exists as a way to incentivize you to have strong children — and your alien children will undoubtedly be strong.

It’s a game-theoretic win-win but not a sympathetic win-win. Other humans will not find your alien babies sympathetic, or your choice to cooperate with the aliens a pro-social one.

It’s a sort of betrayal against your fellow humans, in that you are breaking the local game of “sex is between humans” and unilaterally gaining superpowered alien babies; but it’s a choice that any human could make as easily as you, so you aren’t leaving others permanently worse off, or depleting a valuable commons. Since all humans would be better off with alien genes, it’s not really a “defection” if you take the lead in doing something that would be beneficial if done by everyone.

Butler is really good at expressing how a “peaceful win-win” — on paper, an obviously correct choice — can feel disgusting.  Sympathy incentives can’t get you to win-win cooperation, if the thing that the other person wants is not something that you can imagine wanting.

This is an example of incentives for cooperation being present but not smooth.  It is in your interest to “gene trade”, but you only know that intellectually; you cannot be guided to it naturally through sympathy.

In the same way, helping someone “unsympathetic” but valuable is a “good investment” that doesn’t feel like one.  You often hear about this in disability contexts. “All you have to do is give me a relatively cheap accommodation and suddenly I become way more productive! How is this not a good deal for you?”  Well, it may be a good deal, but it’s not a sympathetic deal, because people’s mental accounting doesn’t match reality. If they think the person “ought to be able to” get along without the accommodation, sympathy doesn’t prompt them to help. And if they lack a strong intuitive sense that people are plastic, and function differently in different environments, they don’t really believe, in their gut, that a blind person can be an expert programmer if given a screen reader, for instance.  Abstractly it’s a good deal; concretely, it isn’t guided smoothly by emotional gradients, and it requires an act of detached cognition.

In practice, you can guide a situation back to sympathy, and that’s usually the best way to get the trade done. Try to play up the sympathetic qualities of the trade partner; try to analogize the requested action to things that are considered moral duties in one’s social context.  Try to set up emotional guardrails: engineer the social environment so the deal can be done without abstract thought.

But this isn’t really feasible for a single individual to do.  If you’re alone, and nobody wants to help you because you’re not a “sympathetic character” (even though you would reciprocate), you can’t reshape social pressures to make yourself sympathetic all by yourself.  If we aren’t going to brutally destroy the lives of valuable people who don’t already have a posse, somebody is going to have to think, to go beyond gradient-following.

I think that to get the best results, thought is actually necessary.  By “thought” I mean the God’s-eye view, the long view, the ability to ask “where do I want to go?” and potentially have an answer that isn’t “whichever way I’m currently going.” But what emotional or psychological or behavioral scaffolding promotes thought?  We are, after all, made of meat.  Since sometimes humans do think, there must be a way to build thought out of meat.  I’m still trying to understand how that is done.

Forgiveness and the Very Long Term

Forgiveness, on a structural level, is choosing not to call in a debt. I’m entitled to compensation, according to the rules of whatever game I’m playing, but I don’t demand it.

Forgiveness is a local loss to the forgiver. If everyone forgave everything all the time, it wouldn’t be remotely sustainable.

But a little bit of forgiveness is useful, in exactly the same way that bankruptcy is useful.  Bankruptcy puts a floor under how far into debt you can sink, which allows loss-averse humans to be willing to take on debt at all, which means that more high-expected-value investments get made.

In the iterated prisoner’s dilemma, tit-for-tat with forgiveness outperforms plain tit-for-tat, at least when players sometimes make mistakes.
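
A minimal simulation sketches why, with illustrative numbers of my own choosing (the standard prisoner’s-dilemma payoffs, a 5% chance of any move misfiring, and a `forgiveness` knob): two plain tit-for-tat players who occasionally err get locked into retaliation spirals, while forgiving players recover to mutual cooperation.

```python
import random

# Standard prisoner's dilemma payoffs for (my_move, their_move):
# mutual cooperation pays 3, mutual defection 1, exploitation 5/0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(forgiveness, noise=0.05, rounds=10_000, seed=0):
    """Two identical tit-for-tat players; each forgives a defection
    with probability `forgiveness`, and each move misfires (flips)
    with probability `noise`. Returns avg payoff per player per round."""
    rng = random.Random(seed)
    a_last, b_last = "C", "C"
    total = 0
    for _ in range(rounds):
        # Copy the other player's last move, unless forgiving it.
        a = "C" if b_last == "C" or rng.random() < forgiveness else "D"
        b = "C" if a_last == "C" or rng.random() < forgiveness else "D"
        # Noise: sometimes a move comes out wrong.
        if rng.random() < noise:
            a = "D" if a == "C" else "C"
        if rng.random() < noise:
            b = "D" if b == "C" else "C"
        total += PAYOFF[(a, b)] + PAYOFF[(b, a)]
        a_last, b_last = a, b
    return total / (2 * rounds)

print("plain tit-for-tat:    ", play(forgiveness=0.0))
print("forgiving tit-for-tat:", play(forgiveness=0.1))
```

With the noise set to zero the two strategies tie, since neither ever defects first; forgiveness earns its keep precisely in a world where mistakes happen.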

You can also think of forgiveness as a function of time. If you expect that someone will be net positive to you in the long run, you can accept them costing you in the short run, and not demand payment now. In other words, you extend them cheap credit.  As your time horizon goes to infinity (or your discount rate goes to zero), it can become possible to not demand payment at all, to forgive the loan entirely.  If it doesn’t matter whether they pay you back tomorrow, or in a hundred years, or in a thousand, but you expect them to be able to pay someday, then you don’t really need the repayment at any time, and you can drop it.
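
In standard discounting notation (my gloss; the symbols aren’t from the original argument), a repayment of value V arriving at time t is worth, today,

$$\mathrm{PV}(t) = V e^{-rt}, \qquad \lim_{r \to 0} \mathrm{PV}(t) = V \ \text{ for every } t,$$

so at a zero discount rate the timing of repayment contributes nothing to the debt’s value, which is the formal version of “you don’t really need the repayment at any time.”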

This is sort of similar to the heuristic of “be tolerant and kind to all persons, you never know when they might be valuable.” The fairy tales and myths about being kind to strangers and old ladies, in case they’re gods in disguise. You don’t want to burn bridges with anybody, you don’t want to kick anybody wholly out of the game, if you expect that eventually (and eventually may be very long indeed, and perhaps not within your lifetime), this will pay off.

Tit-for-tat or reinforcement-learning or behaviorism — reward what you want to see, punish what you don’t — makes a lot of sense, except when you factor in time and death. If you punish someone so hard that they die before they have a chance to turn around and improve, you’ve lost them.

And, on a more abstract level: it can make sense to disincentivize the slightly worse thing in general (that’s how evolution works), but that leads to things like rare languages dying out. Yes, it’s perfectly rational to speak Spanish rather than Zapotec, and Zapotec-speakers need to make a living too, but my inner Finite and Infinite Games says “wouldn’t you like to preserve Zapotec from dying out altogether? Couldn’t it come in handy someday?”  Language preservation is an example of preserving a “loser” because, if the world went on forever, nothing would be permanently guaranteed to lose.

It’s like having a slightly noisy update mechanism. Mostly, you reinforce what works and penalize what doesn’t. But sometimes, or to a small degree, you forgive: you rescue someone or something that would ordinarily be penalized, in case you need it later. In gradient descent, a little stochasticity keeps you from getting stuck in local optima. In economics, a little bankruptcy, or the occasional jubilee, keeps you from getting stuck in stagnant, monopolistic conditions. You don’t ruthlessly weed out the “bad” all the time.
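
Here is that noisy update mechanism as a toy epsilon-greedy bandit, a minimal sketch with invented numbers (the two arms, their payouts, and the 5% forgiveness rate are all illustrative). An agent that purely reinforces what has worked so far stays stuck on the arm that made a good first impression; an agent that occasionally “forgives” the apparent loser discovers it was the better arm all along.

```python
import random

def pull(arm, rng):
    """'steady' always pays 1.0; 'spiky' pays 3.0 half the time and
    0.0 otherwise, so its true mean (1.5) is actually the higher one."""
    if arm == "steady":
        return 1.0
    return 3.0 if rng.random() < 0.5 else 0.0

def run(epsilon, pulls=10_000, seed=2):
    """Mostly exploit the arm with the best observed mean; with
    probability `epsilon`, pick an arm at random instead,
    occasionally 'forgiving' the arm with the bad track record."""
    rng = random.Random(seed)
    # Suppose first impressions went badly for 'spiky': it paid 0.
    totals = {"steady": 1.0, "spiky": 0.0}
    counts = {"steady": 1, "spiky": 1}
    reward = 0.0
    for _ in range(pulls):
        if rng.random() < epsilon:
            arm = rng.choice(["steady", "spiky"])
        else:
            arm = max(totals, key=lambda a: totals[a] / counts[a])
        r = pull(arm, rng)
        totals[arm] += r
        counts[arm] += 1
        reward += r
    return reward / pulls

print("never forgive (greedy):", round(run(epsilon=0.0), 2))
print("forgive 5% of the time:", round(run(epsilon=0.05), 2))
```

The greedy agent earns exactly 1.0 per pull forever; the forgiving one pays a small exploration tax and should end up near 1.5. This is also the logic of the next paragraph: throwing occasional resources at the “undeserving,” at some frequency, is cheap insurance against a mistaken first impression.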

Sometimes you throw some resources at someone who “doesn’t deserve them” just in case you’re wrong, or to get out of the nasty feedback loops where someone behaves badly in response to being treated badly.  If you unilaterally gave them some help, you might allow them to escape into a cooperative, reciprocal-benefit situation, which you’d actually like better!  Even if this didn’t work one particular time, doing it in general, at some frequency, might in expectation work out in your favor.

A sense of the very long term may also make sympathy easier, because in the very long term nothing is permanent and everything is eventually mutable. If permanent suffering is what makes people unsympathetic, then a sense of the very long term makes it possible to realize that under different circumstances that person might become fine, and thus their suffering is ultimately the “temporary kind” that can elicit sympathy.  “The stone that the builders rejected / has become the cornerstone” — well, if you wait long enough, that might actually happen. Things could change; the “loser’s” or “villain’s” status on the bottom is not eternal; so with a long-enough-term mindset it’s not actually appropriate to treat him as definitively a “loser” or a “villain.”

Forgiveness can be a lot easier to implement than “cooperation without sympathy”, which requires you to actually ascertain where win-wins are, with your mind. You can mindlessly add a little forgiveness to a system.  Machine-learning algorithms can do it.  Which may make it a useful tool in the process of “trying to build thought out of meat.”