Epistemic status: argumentative. I expect this to start a discussion, not end it.
“Company culture” is not, as I’ve learned, a list of slogans on a poster. Culture consists of the empirical patterns of what’s rewarded and punished within the company. Do people win promotions and praise by hitting sales targets? By coming up with ideas? By playing nice? These patterns reveal what the company actually values.
And so it is with community cultures.
It seems to me that the increasingly ill-named “Rationalist Community” in Berkeley has, in practice, a core value of “unconditional tolerance of weirdos.” It is a haven for outcasts and a paradise for bohemians. It is a social community based on warm connections of mutual support and fun between people who don’t fit in with the broader society.
I think it’s good that such a haven exists. More than that, I want to live in one.
I think institutions like sharehouses and alloparenting and homeschooling are more practical and humane than typical American living arrangements; I want to raise children with far more freedom than traditional parenting allows; I believe in community support for the disabled and mentally ill and mutual aid for the destitute. I think runaways and sexual minorities deserve a safe and welcoming place to go. And the Berkeley community stands a reasonable chance of achieving those goals! We’re far from perfect, and we obviously can’t extend to include everyone (esp. since the cost of living in the Bay is nontrivial), but I like our chances. I think we may actually, in the next ten years, succeed at building an accepting and nurturing community for our members.
We’ve built, over the years, a number of sharehouses, a serious plan for a baugruppe, preliminary plans for an unschooling center, and the beginnings of mutual aid organizations and dispute resolution mechanisms. We’re actually doing this. It takes time, but there’s visible progress on the ground.
I live on a street with my friends as neighbors. Hardly anybody in my generation gets to say that.
What we’re not doing well at, as a community, is external-facing projects.
And I think it’s time to take a hard look at that, without blame or judgment.
The thing about external-outcome-oriented projects is that they require standards. You have to be able to reject people for incompetence, and expect results from your colleagues. I don’t think there’s any other way to achieve goals.
That means that an external-oriented project can’t actually serve all of a person’s emotional needs. It can’t give you unconditional love. It can’t promise you a vibrant social scene. It can’t give you a place of refuge when your life goes to hell. It can’t replace family or community.
As Robert Frost said, “Home is the place where, when you have to go there, they have to take you in.”
But Tesla Motors and MIT don’t have to take you in. And they wouldn’t work if they did.
Internally focused groups, whose goals are about the well-being of their own members, are intrinsically different. You have to care more about inclusion, consensus, and making the process itself rewarding and enjoyable for the participants. If you’re organizing parties for each other, making the social group gel well and making everyone feel welcome is not a side issue — it’s part of the main goal. A Berkeley community organization that didn’t serve the people who currently live in Berkeley and meet their needs would no longer be an organization for our community; you can’t fire the community and get another. The whole point is benefiting these specific people.
An externally-focused goal, by contrast, can and should be “no respecter of persons” — you have to focus on achieving good outcomes, regardless of who’s involved.
So far, when members of our community focus on external goals, I think they’ve done much better when they haven’t tried to combine those goals with building community institutions.
Some rationalists have created successful startups and many more have successful careers in the tech industry — but these are basically never “rationalist endeavors”, staffed exclusively by community members or focused on serving this community. And they shouldn’t be. If you want to build a company, you hire the most competent people for the job, not necessarily your friends or neighbors. A company is oriented towards an external outcome, and so has to be objective and strategic about that goal. It’s by nature outward-facing, not inward-facing to the community.
My own outward-facing goal is to make an impact on treating disease. Mainly I’m working towards that through working in drug development — at a company which is by no means a “rationalist community project.” It shouldn’t be! What we need are good biologists and engineers and data scientists, regardless of what in-jokes they tell or who they’re friends with.
In the long run, I hope to work on things (like anti-aging or tighter bench-to-bedside feedback loops) that are somewhat more controversial. But I don’t think that changes the calculus. You still want the most competent people you can get, who are also willing to get on board with your mission. Idealism and radicalism don’t negate the need for excellence, if you’re working on an external goal.
Some other people in the community have more purely intellectual projects, closer to Eliezer Yudkowsky’s original goals: to research artificial intelligence; to develop tools for training Tetlock-style good judgment; to practice philosophical discourse. But I still think these are ultimately outcome-focused, external projects.
Artificial intelligence research is science, and requires the strongest possible computer scientists and engineers. (And perhaps cognitive scientists and philosophers.) To their credit, I think most people working on AI are aware of the need for expertise and are trying to attract great talent, but I still think it needs to be said.
“Good judgment” or reducing cognitive biases is social science, and requires people with expertise in psychology, behavioral economics, decision theory, cognitive science, and the like. It might also benefit from collaboration with people who work in finance, who (according to Tetlock’s research) are more effective than average at avoiding cognitive biases, and have a long tradition of valuing strategy and quantitative thinking.
Even philosophical discourse, in my opinion, is ultimately external-outcome-focused. For all that it’s hard to measure success, the people who want to create better discourse norms do have a concern with quality, and ultimately consider this a broad issue affecting modern society, not exclusively a Berkeley-local issue. Progress on improving discourse should produce results (in the form of writing or teaching) that can be shared with the wider world. It might be worth prioritizing good humanists, writers, teachers, and scholars who have a track record of building high-quality conversations.
None of these projects need to be community-focused! In fact, I think it would be better if they freed themselves from the Berkeley community and from the particular quirks and prejudices of this group of people. It doesn’t benefit your ability to do AI research that you primarily draw your talent from a particular social group. It also doesn’t straightforwardly benefit the social group that there’s a lot of overlap with AI research. (Is your research going to make you better at babysitting? Or cooking? Or resolving roommate drama?)
Cross-pollination between the Berkeley community and outcome-oriented projects would still be good. After all, ambitious people make good company! I don’t think that the Bay Area is going to stop being a business and academic hub any time soon, and it makes sense for there to be friendships and relationships between people who primarily focus on community and people who primarily focus on external projects. (After all, that’s one traditional division of labor in a marriage.)
But I think it muddies the water tremendously when people conflate community-building with external-facing projects.
Does maintaining good social cohesion within the Berkeley community actually advance the art of human rationality? I’m skeptical, because rationality training empirically doesn’t improve our scores on reasoning questions. [I seem to recall, though I can’t find the source, that community members also don’t score higher than other well-educated people on the Cognitive Reflection Test, a standard measure of cognitive bias.] [ETA: I remembered wrong! As of the 2012 LessWrong survey, LessWrongers scored significantly better on cognitive bias questions than the participants in the original papers. So it’s still possible, though not obvious, that we’re in some sense a more-rational-than-average community.] If we’re not actually more rational than you’d expect in the absence of a community, why should rationality-promoters necessarily focus on community-building within Berkeley? Social cohesion is good for people who live together, but it’s a stretch to say that it promotes the cause of critical thinking in general.
Does having fun discussions with friends advance the state of human discourse? Does building interesting psychological models and trying self-help practices advance the state of psychology? Again, it’s really easy to confuse that with highbrow forms of just-for-fun socializing. Which are good in themselves, because they are enjoyable and rewarding for us! But it’s disingenuous to call that progress in a global and objective sense.
I consider charismatic social crazes to be essentially a form of entertainment. People enjoy getting swept up in the emotional thrill of a cult of personality or mass movement for pretty much the same reasons they enjoy falling in love, watching movies, or reading adventure stories. Thrills are personal (they only create pleasure for the recipient and don’t spill over much to the wider world) and temporary (you can’t stay thrilled or entertained by the same thing forever). Interpersonal thrills, unlike works of art, are inherently ephemeral; they last only as long as the personal relationship does. These factors place limits on how much value can be derived from charisma alone, if it doesn’t build more lasting outcomes.
That means personality cults and mass enthusiasms belong in the “community-building” bucket, not the “outward-facing project” bucket. Even from a community perspective, you might not think they’re a great idea, and that’s a separate discussion. But I’m primarily pushing back against the idea that they can be world-saving projects. Something that only affects us and the insides of our heads, without leaving any lasting products or documents that can be shared with the world, is a purely internal affair. Essentially, it’s just a glorified personal relationship. And so it should be evaluated on the basis of whether it’s good for the people involved and the people they have personal relationships with. You look at it wearing your “community member” hat, not your “world-changing” hat. Even if it’s nominally a nonprofit or a corporation, or associated with some ideology, if it doesn’t produce something for the world at large, it’s a community institution.
(An analogy is fandom debates. Sometimes these pose as political activism, but they are really arguments about fiction, by fans and for fans, with barely any impact on the non-fandom world. Fandom is a leisure activity, and so fandom debates are also a leisure activity. Real activism, as practiced by professionals, is work; it’s not always fun, has standards for competence, and has tangible external goals that matter to people other than the activists themselves.)
I think distinguishing external-facing goals from community goals sidesteps the eternal debates over “what should the rationalist community be, and who should be in it?”
I think, in practice, the people who go to the same events in Berkeley, live together, parent together, and regularly communicate with each other, form a community. That community exists and deserves the love and attention of the people who value being part of it. Not for any external reason, but, as they say in Red Dawn, “because we live here.” We are people, our quality of life matters, our friendships matter, and putting effort into making our lives good is valuable to us. We won’t choose the universal best way of life for all mankind, because that doesn’t exist; we’ll have the community norms and institutions that suit us, which is what having a local community means.
But there are individual people who are dissatisfied because that particular community, as it exists today, is not well-suited to accomplishing their external-facing goals. And I think that’s also a valid concern, and the natural solution is to divorce those goals from the purely communitarian ones. If you wonder “why doesn’t anybody around here care about my goal?” the natural thing to do is to focus on finding collaborators who do care about your goal — who may not be here!
If you’re frustrated that this isn’t a community based around excellence, I think you’ll be more likely to find what you’re looking for in institutions that have external goals and standards for membership. Some of those exist already, and some are worth creating.
A local, residential community isn’t really equipped to be a team of superstars. Certainly a multigenerational community can’t be a team of superstars — you can’t just exclude someone’s kid if they don’t make the cut.
I don’t want to overstate this — Classical Athens was a town, and it had a remarkable track record of producing human achievement. But even there, we’re talking about a population of 300,000 people. Most of them didn’t go down in history. Most of them were the “populace” that Plato thought were not competent to rule. 90% of them weren’t even adult male citizens. I don’t know how you build a new Athens, but it’s important to remember that it’s going to contain a lot of farming and weaving along with the philosophy and poetry.
Small teams of excellent people, though, are pretty much the tried-and-true formula for getting external-facing things done, whether practical or theoretical. And the usual evaluative tools of industry and academia are, I think, correct in outline: judge by track records, not by personal relationships; measure outcomes objectively; consider ideas that challenge your preconceptions; publish, or ship, your results.
I think more of us who have concrete external goals should be seeking these kinds of focused teams, and not relying on the residential community to provide them.
(I’m the author of the linked report.)
“rationality training empirically doesn’t improve our scores on reasoning questions”
I disagree with this paraphrase of the results. That study (which was mainly looking at other things) was underpowered for detecting improvement in reasoning, or even for detecting reasoning errors in the first place. On 3 of the 4 biases, incoming participants were unbiased – within the (rather large) margin of error. On the 4th, alums did a bit better than incoming participants, but the difference was not statistically significant and the confidence interval was wide.
On the 2012 LW Census/Survey, rationalists were less susceptible to a few standard biases, both when comparing LW Survey respondents to previously published studies and when comparing more community-involved respondents with less community-involved respondents. (Though it’s hard to know if the effect is causal.)
Good points!
With respect to the first study, I think what you just stated indicates that CFAR participation did not measurably improve reasoning, though you make a good point that the study is underpowered and we don’t know for sure.
Thanks for the link to the 2012 LW survey — I hadn’t found that, and it does contradict my half-remembered claim that LWers aren’t better at unbiased reasoning than the controls.
On 3 measures, people were already unbiased (insofar as our measures could tell), so nothing could have improved their reasoning there; of course CFAR participation showed no improvement on those measures.
On 1 measure, people’s reasoning improved after CFAR participation by an amount that could easily have been noise. The observed result was an improvement from 34% accuracy to 43% accuracy, with a 95% confidence interval on the improvement of -9 to +26 percentage points. We weren’t expecting a large effect (the reasoning error being measured is one that was not covered directly at CFAR workshops), so this seems pretty close to being the maximally uninformative result.
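(For concreteness, here is a minimal sketch of the arithmetic behind an interval like that, using the standard normal approximation for a difference of two independent proportions. The group sizes below are my assumption, not figures from the study; I picked them so the computed interval roughly matches the one reported above.)

```python
import math

# Hypothetical reconstruction of the interval above. The reported
# result is 34% -> 43% accuracy with a 95% CI on the improvement of
# about -9 to +26 percentage points. The study's group sizes are NOT
# stated here, so n = 60 per group is an assumption chosen to roughly
# reproduce that interval.
p_before, p_after = 0.34, 0.43
n_before, n_after = 60, 60  # assumed, not from the study

diff = p_after - p_before
# Standard error of a difference of two independent proportions
# (normal approximation).
se = math.sqrt(p_before * (1 - p_before) / n_before
               + p_after * (1 - p_after) / n_after)
low, high = diff - 1.96 * se, diff + 1.96 * se
print(f"improvement: {diff:+.0%}, 95% CI: ({low:+.0%}, {high:+.0%})")
# prints: improvement: +9%, 95% CI: (-8%, +26%)
```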
“And the usual evaluative tools of industry and academia are, I think, correct in outline: judge by track records, not by personal relationships; ”
This seems backwards, as in, by far the most common form of hiring in industry is via personal relationships. I can’t find the study right now, and without remembering the methodology I am dubious, but I have a cached belief that more than 80% or 90% of jobs get filled via personal references without a job posting ever being created.
Here is the relevant article that I remembered:
https://www.linkedin.com/pulse/new-survey-reveals-85-all-jobs-filled-via-networking-lou-adler
I’m aware that most jobs are found by personal relationships, but I think this is due to a limitation on knowledge — you can be more confident in people you know personally, and interviews aren’t that effective in sorting job candidates. There’s evidence going the other way too: that work samples are better than interviews, that companies that are audited externally are more successful, and so on.
This reminds me of this old post, which gives useful nearby concepts and immediately prompted similar thoughts about the rationalist community when I first read it.
As far as I can tell, folk-values dramatically increase the probability of raising children with heroic values.
Considering this in more detail, it also seems highly plausible that folk-values are protocols intended to work from the bottom up, in a fractal world where even unreliable value-expression can be effective. This is usually the main strategy that matters in life, but it can’t do everything.
I basically agree with this post. But it still gives me a sense of disappointment. Ten* years ago I was pitched, “Join the movement, save the world!” Now I see, “Join the movement, be nice to your neighbors.” I was already nice to my neighbors, but the world wasn’t already being saved.
I am honestly happy that people have found a warm and supportive community. That’s an important thing in people’s lives. But in the old days rationality was going to “be” an external-facing movement. It was going to “raise the sanity waterline”: to broadly improve norms of discourse and reasoning and push for a world that had its sh!t together. But the good that is getting done seems, for the most part, to be done by people who have day jobs in outside organizations. And those jobs don’t seem very different overall from what we might see in a similar group of privileged, highly intelligent, socially conscious people with degrees from good schools who had never looked at rationality.
*Ten is picked for being a nice round number; actual year count may vary.
I think that’s sad, yes, but I also think that mass movements are *terrible* at doing things like “raising the sanity waterline”, in ways I didn’t appreciate years ago. Schools and writers seem to be the top candidates for who is good at educating the public (in a way that doesn’t rapidly degrade into teaching the public slogans). Eliezer himself, for instance, was a great educator and science-popularizer. But that doesn’t mean that everybody who *reads* his stuff is *also* a good teacher.
Mass movements are terrible, but hopefully rationalism isn’t aspiring to be a mass movement. Maybe a town-scale political movement associated with a global intellectual discourse among a city-state’s worth of people. Corporations are OBVIOUSLY even worse at raising the sanity waterline, and schools seem bad at it too these days.
I think that recently, and especially with the advent of EA, recruitment tactics have been modeled on mass movements, and so that’s what we’re looking at. I was sad while it was happening, and people told me “no, no, we need to grow the movement!” Now it’s happened, and I have to move on with my life.
…huh. I’m surprised much of anyone thought that was a good idea. I took the sudden growth largely to be a result of HPMOR’s unexpected popularity rather than any deliberate recruiting effort. Wondering how it was thought that the usual pitfalls would be avoided.
HPMOR was a deliberate plan by Eliezer to write as popular a fic as he could, to bring rationality to a wider audience. If that was a bad call, I think you have to blame Eliezer.
Oh, that’s true, I’d forgotten.
I think HPMOR brought a different crowd from EA, and was very definitely still not an attempt to build a mass movement. I think EA has provided extremely valuable data on why complete cultural separation from the mainstream is necessary and commend Leverage on their appropriate response to that information.
This post is really on point and expresses something I think is very important: Our local community should not try to be everything to everyone, should not try to be the best by some standard of excellence, and should not try to be the most world-changing group of people in existence. It’s for us.
I want to disagree somewhat with one bit of it:
“It doesn’t benefit your ability to do AI research that you primarily draw your talent from a particular social group.”
Speaking from literal experience with a group of rationalists doing AI research: It actually really, really does.
Psychological safety is a real thing. Being able to talk to your coworkers about your life, without worrying you’ll accidentally say some Wrongspeak, is a real thing. Trusting that they care about you as a person even beyond your work makes doing that work effectively much, much easier.
That said, not all people you can trust are rationalists. But if someone is part of this community, I can trust that I can predict their behavior reasonably well, and feel safer around them in many ways.
Hmm. I agree that outward-facing projects need high standards and need to be able to engage with the rest of the world on the world’s terms, and that any serious thing will end up needing to be capable of hiring people not in the community and rejecting people in the community.
But, as Quixote said, the reason I’m excited about *this* community is that it feels like it can in fact play a role in having a strong impact on the world. I’m driving across the country to be there right now because, despite all the drama I hear about, the fact remains that whenever I visit the Bay there’s a palpable buzz in the air: of *course* saving the world is the sort of thing you might do, or you might at least try to optimize your socially-rewarding activities so that they end up benefiting the world to some degree. That buzz was what changed my life, and I think having it matters a lot.
I do think this requires more self-awareness, and requires people to be able to reasonably say “I’m working on this thing that is totally not world saving but will make our home nicer” and not receive subtle vibes that they’re doing the wrong thing.
I guess it’d be more helpful if we had concrete examples where you think something different should be happening (although it seemed like you had avoided that sort of example, possibly for drama-avoidance reasons). Can you give at least a hypothetical sort of pattern that you see that should be different?
Many people came here because they *no longer believe in the possibility of changing the world via outward-facing projects as evaluated by traditional methods and existing authorities.* For those people, the rationality community is no longer the place for that work, though it may grow to raise actual rationalist children in the future. In general, I think those people are moving out of rationalism and AI interest into blockchain, and are also likely to be pulled out of the Bay Area in the near future.
I don’t think traditional authorities are necessary for outward-facing projects. Track records and standards of competence *are*, just by the nature of having a goal. (Including in blockchain, IMO!)
Like, I’m basically just emphasizing the value of doing your homework and being good at your job. I think *being good at stuff, in real life* is necessary for succeeding at that stuff. It’s almost inarguable!
What’s arguable is what “in real life” means. Very different people are good at, for instance, making commercial and open-source software.
I disagree with this very strongly.
I think Otium undersells why shared external goals and shared internal community get linked together so often (namely, the energy of a big goal can unite a group, and being part of a close-knit tribe can increase efficiency in working together and get people to put in more hours), but overall this is a good rallying cry to seriously consider your priorities in large groups and projects.
There are different kinds of community-building, and some of them have more aspects of external projects. At one end of the spectrum, people who happen to live in the same town or neighborhood form a natural community that serves their needs and desires. At the other end, there are clubs formed around specific activities. The “local community”-type doesn’t have many standards beyond living in the area, but the club has them even if they’re not explicit (“If you don’t like reading, don’t join our book club”), and they may even be standards for excellence (“Our chess club is for people with an Elo above X”). A community between the two has to reconcile fulfilling the needs of its existing members with continuing to maintain the goal of being whatever kind of community it is.
I see the rationalist community as somewhere in the middle of that spectrum. It’s not formally about anything like a book or chess club, and there aren’t any necessary or sufficient criteria for belonging, but central nodes in the thingspace cluster include at least general agreement with the Sequences (and other important LW posts), as well as possessing and using a certain conceptual toolbox. It’s also a community that’s tolerant of weird people and contains many of them, but I think to a large degree that’s a consequence of the above, not something independent. If you think about what you really want, take ideas seriously, know and viscerally understand common failure modes, etc, being weird is a likely consequence. And tolerance for weird people isn’t far behind, both because you want to be tolerated and to avoid unknown blind spots (several people willing to be unconventional and compare notes will find more opportunities than one would find alone).
Being that kind of community is a goal – it’s not world-saving, nor is it purely internally facing, because it’s not just about serving the needs and desires of community members qua community members.
I think that may have been true in 2012 or so, but I don’t think it’s true now.
Reposted from Facebook:
Hastily written, so sorry if this is unnuanced, but I think you are importantly wrong.
I read the article as saying something like, “the nominal purpose of the rationality community has been ‘save the world’, but it doesn’t seem very well optimized for that and maybe it never could have been, so we should instead make the purpose of the rationality community the people inside of it”. (I could be mistaken about what srconstantin intended but I think lots of other people will get this message).
My disagreement is that:
I disagree that the rationality community is not well optimized for ‘save the world’. I think it is vastly imperfect, but also correctly aimed at the problem along a dimension that nothing else on this planet is.
I think it makes sense that lots of people in the community will not perceive this, because the results are not high profile.
But it *is* working. The rationality community has been crucial to my work. My work is not very visible yet, but if you doubt that it’s crucial, come talk to me. The rationality community has also been crucial in the formation of the ‘Vassar crowd’ (Jessica, Michael, Ben, and many others), who I disagree with a lot, but who could not possibly be accused of being unserious in approaching the problem of ‘save the world’. The continued evolution of CFAR is similar. There are more things I haven’t mentioned, and no doubt some I don’t know about.
The purpose of the rationality community is to foster things like this. The ultimate purpose of the rationality community is not to be kind and good to the people in it. Though being kind and good to the people in it is extremely valuable.
The purposes of the people in the community will be their own varied purposes. Many of their goals are not that close to ‘save the world’. That is good and right and proper. They can still find value in the community and the community can still find value in them.
But you should not seek to change the ultimate purpose of the community to be something else. If you do, you will be smashing something important.
A thought experiment:
If you were part of a very pious community founded to give glory to god and you lost your belief in god, it would be good for you to try to convince people that god does not exist. But it would be bad to try to turn the church community into a non-church community, if there were people who believed in the original mission. That would be defecting in a pretty bad way. (Of course, creating a secular community to the side or something could still be good.)
Yeah, I think you & I have a substantive disagreement. We should talk sometime!
I agree!
It feels like the core claims driving this post are not actually present in the post. I agree with many object-level claims in the post, but they don’t add up to “and therefore external and internal things should be totally segregated.”
Forces in a loaded structure, such as a bridge, transmit through it until they find an element that can’t take them. Likewise, the externalized blind spots and dark sides of a community will shuffle around the community until they find an outlet in those least able to shuffle the externality off onto others.
Bit of a tangent: reading this a second time, the phrase “unconditional tolerance of weirdos” immediately jumps out at me and makes me think of the well-known line of argument that unconditional tolerance of weirdos has large downsides: a few people have substantial negative externalities, the community would serve the mostly-harmless majority better if it were willing to exclude those few people, etc. (“Well-known” as in the Five Geek Social Fallacies, plus a huge number of articles about the phenomenon of predatory abusers in tolerant communities.) I don’t notice the rationalist community losing large amounts of value to people who cause widespread negative externalities, but I could easily fail to notice if it were; so I wonder whether (a) we are, (b) we don’t actually have the kind of tolerance-as-core-value that causes that problem, or (c) we’re escaping it by some other means.
(Insofar as I’ve observed the community to perform poorly at dealing with Bad or potentially-Bad people or related problems, it feels like the problem is less a positive virtue of tolerance than a simple aversion (somewhat similar in its effects) to hard situations, hard actions, and being harsh for the long-run good. I don’t know that I have the information or perspective to make this perception very meaningful, though.)
I think it has the same downsides here that it does in other geek/alternative communities. I have the sense that we have less of a problem with abusers than many other subcultures, but I don’t know how much of that is to our credit (versus, say, we’re drawing from a pool of people who are more harmless than average.)
Thank you very much for writing this, it articulates a major concern I’ve had about the rationalist community recently and had trouble expressing clearly.
I think there is a related tension stemming from the fact that the rationalist community contains many people who were attracted to it by the promise (made implicitly and explicitly in both the Sequences and HPMOR, which were written specifically to promote the community) that it would turn them into superheroes who were generally Good At Things. After spending time in the community and observing that they still aren’t superheroes (as you note, CFAR doesn’t work), some of them double down in desperation on extreme and dubious ideas like forming a militaristic authoritarian group-house, some twist themselves further and further into denial, and some seek distraction. Short of miraculously inventing a way to make people superheroes, I don’t know what can be done about the tension arising from this community-shaping, misleading sales pitch, but attempting to separate the community from outward-facing work seems likely to run up against it or exacerbate it.
I would like to live in a baugruppe, too. I’m in Toronto but have been working on a plan to get myself to the Bay Area (quit current programming job; level up programming and interviewing skills; get hired by a Bay Area company that can grease the visa wheels). Are there any concrete pieces of advice for meeting this yummy-sounding rationalist community?
As a fascinated outsider on the periphery of the “rationalist community,” it strikes me that there are other organizations which share the property of being partly an effort to build and foster local community ties and partly an effort to do outward-facing projects with the goal of saving the world: They’re called Churches.
Now, I’m quite likely totally misunderstanding the dynamic here, but from my perspective the rationalist community tends to be notably terrible at evangelism. A lot of you talk about human beings in a way that even I, as a man with Asperger’s, find off-puttingly bizarre and mechanical. I tend to suspect that a lot of people are joining the rationalist community because the discourse norms of normal society are upsetting and off-putting to them, and they want to retreat from those worldly vices. In other words, they’re looking for a monastery, not a church.
How common this is I don’t know, but I do have at least one friend who feels this way.
Actually, this is one of the main sources of my extreme disquiet with the whole rationalist movement. I see, in parts of it, a rejection of the temporal world and its vices, but without a monk’s renunciation of temporal power. The acquisition of temporal power and money is actually quite an explicit goal. Barring extreme disruptions, programmers, and to a greater extent a handful of tech billionaires, are going to be a major driving factor in the direction of human civilization. Supposing I can’t afford to live in the Bay Area, supposing I feel too stupid to understand logic (especially with all the vocabulary and shorthand you guys use), supposing I don’t feel driven enough to re-orient myself away from my middling art skills towards programming… what then?
For me, the insularity of your community exacerbates those concerns, rather than what I assume is your goal of mitigating or eliminating them. The fact that you’re seeking to save the world becomes, not comforting, but frightening, because my individual concerns, fears and well-being are nothing compared to saving the world. This becomes particularly true if my concerns are emotional in nature, and/or difficult to articulate. This is then exacerbated by the fact that many of you have a facility for acquiring temporal power, while I work a minimum wage job as a doorman.
Here’s something I only noticed while re-reading this article: Your “outward-facing” projects aren’t distinguished from community projects by their interaction with communities outside the rationalist one; they’re distinguished in that they have empirically measurable goals other than improvement of the rationalist community. It is still entirely possible to imagine “outwardly facing” projects staffed entirely by community members, so long as they understand that the needs of the project are paramount.
Neighborliness, evangelical recruiting, or any other discussion of the ways the rationalist community should interact with the non-rationalist community are only present here very implicitly.
This sentence struck a chord; it describes a challenge faced daily by Alaska Native Corporations: “If you want to build a company, you hire the most competent people for the job, not necessarily your friends or neighbors. A company is oriented towards an external outcome, and so has to be objective and strategic about that goal. It’s by nature outward-facing, not inward-facing to the community.”
The 13 ANCSA corporations were set up to be for-profit; my understanding is that the basic idea was that they would enable financial success. They deal with the complicated process of working towards improved living for their tribes while retaining the historical cultural ideas and knowledge that make them distinct, and they are constantly debating among themselves which goal is more critical to their communities’ survival. Hence the conflict between Native hire and the corporations’ ability to compete in the business world today, since success has very different definitions in the two cultures.
I realize it is a different focus from Berkeley, but perhaps examining how these (very culturally and geographically diverse) corporations balance the sanctity of their tribal communities against the need for financial security to protect their people’s future might be interesting and even instructive.
Sounds a lot like Judaism. Rationalism and Judaism (as seen by some outsiders) are interchangeable in the following:
Members of rationalism are expected to spend as much time as they are able furthering the cause of rationalism, and should explicitly avoid doing things harmful to rationalism, but recognize that there are many tasks which non-rationalists are needed to perform.
When possible, being a fellow rationalist is a positive selection criterion for a prestigious role, but non-rationalists can be used if an appropriate rationalist is unavailable. This is OK, because the proceeds of such a project will always be directed to the advancement of rationalism as a cause.
Some rationalists are good at being rationalist, but not so great at participating in projects that divert resources from the outside world into the rationalist community. These people are to be encouraged to refine the practice of rationalism and merit support simply for that purpose.
Inspiration:
Scott Aaronson: https://www.scottaaronson.com/blog/?p=476
Scott Alexander: http://slatestarcodex.com/2017/05/26/the-atomic-bomb-considered-as-hungarian-high-school-science-fair-project/
Taking it a step further: by adopting obscure language (HPMOR), impenetrable rituals (effective altruism?), and exceptionally insular cultural practices (polygamy/-andry)… rationalists are preparing for a coming world of technology-crafted scarcity. Robots will replace the non-rationalists on projects, and resources not dominated by rationalists will be acquired through superior mental… whatever, I’m not finishing this thought.
Tell me though, how would a devout Mormon with one stay-at-home wife, a large brood, and a talent for mathematics do in the bay area?
Has anyone become a prominent figure in ‘rationalism’ without carrying on a sexual relationship with one or more other well known rationalists, including possibly a few who are in the ‘not great at external projects’ category?