Distinctions in Types of Thought

Epistemic status: speculative

For a while, I’ve had the intuition that current machine learning techniques, though powerful and useful, are simply not touching some of the functions of the human mind.  But before I can really get at how to justify that intuition, I would have to start clarifying what different kinds of thinking there are.  I’m going to be reinventing the wheel a bit here, not having read that much cognitive science, but I wanted to write down some of the distinctions that seem important, and try to see whether they overlap. A lot of this is inspired by Dreyfus’ Being-in-the-World. I’m also trying to think about the questions raised in the post “What are Intellect and Instinct?”

Effortful vs. Effortless

In English, we have different words for perceiving passively versus actively paying attention.  To see vs. to look, to hear vs. to listen, to touch vs. to feel.  To go looking for a sensation means exerting a sort of mental pressure; in other words, effort. William James, in his Principles of Psychology, said “Attention and effort, as we shall see later, are but two names for the same psychic fact.”  He says, in his famous introduction to attention, that

Every one knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Localization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatterbrained state which in French is called distraction, and Zerstreutheit in German.

Furthermore, James divides attention into two types:

Passive, reflex, non-voluntary, effortless; or Active and voluntary.

In other words, the mind is always selecting certain experiences or thoughts as more salient than others; but sometimes this is done in an automatic way, and sometimes it’s effortful/voluntary/active.  A fluent speaker of a language will automatically notice a grammatical error; a beginner will have to try effortfully to catch the error.

In the famous gorilla experiment, where subjects instructed to count passes in a basketball game failed to notice a gorilla on the court, “counting the passes” is paying effortful attention, while “noticing the gorilla” would be effortless or passive noticing.

Activity in “flow” (playing a musical piece by muscle memory, or spatially navigating one’s own house) is effortless; activities one learns for the first time are effortful.

Oliver Sacks’ case studies are full of stories that illustrate the importance of flow. People with motor disorders like Parkinson’s can often dance or walk in rhythm to music, even when ordinary walking is difficult; people with memory problems sometimes can still recite verse; people who cannot speak can sometimes still sing. “Fluent” activities can remain undamaged when similar but more “deliberative” activities are lost.

The author of intellectualizing.net thinks about this in the context of being an autistic parent of an autistic son:

Long ago somewhere I can’t remember, I read a discussion of knowing what vs. knowing how. The author’s thought experiment was about walking. Imagine walking with conscious planning, thinking consciously about each muscle and movement involved. Attempting to do this makes us terrible at walking.

When I find myself struggling with social or motor skills, this is the feeling. My impression of my son is the same. Rather than trying something, playing, experimenting he wants the system first. First organize and analyze it, then carefully and cautiously we might try it.

A simple example. There’s a curriculum for writing called Handwriting Without Tears. Despite teaching himself to read when barely 2, my son refused to even try to write. Then someone showed him this curriculum in which letters are broken down into three named categories according to how you write them; and then each letter has numbered strokes to be done in sequence. Suddenly my son was interested in writing. He approached it by first memorizing the whole Handwriting Without Tears system, and only then was he willing to try to write. I believe this is not how most 3-year-olds work, but this is how he works.

One simple study (“Children with autism do not overimitate”) had to do with children copying “unnecessary” or “silly” actions. Given a demonstration by an adult, autistic kids would edit out pointless steps in the demonstrated procedure. Think about what’s required to do this: the procedure has to be reconstructed from first principles to edit the silly out. The autistic kids didn’t take someone’s word for it, they wanted to start over.

The author and his son learn skills by effortful conscious planning that most people learn by “picking up” or “osmosis” or “flow.”

Most of the activity described by Heidegger’s Being and Time, and Dreyfus’ commentary Being-in-the-World, is effortless flow-state “skilled coping.”  Handling a familiar piece of equipment, like typing on a keyboard, is a prototypical example. You’re not thinking about how to do it except when you’re learning how for the first time, or if it breaks or becomes “disfluent” in some way.  If I’m interpreting him correctly, I think Dreyfus would say that neurotypical adults spend most of their time, minute-by-minute, in an effortless flow state, punctuated by occasions when they have to plan, try hard, or figure something out.

William James would agree that voluntary attention occupies a minority of our time:

There is no such thing as voluntary attention sustained for more than a few seconds at a time. What is called sustained voluntary attention is a repetition of successive efforts which bring back the topic to the mind.

(This echoes the standard advice in mindfulness meditation that you’re not aiming for the longest possible period of uninterrupted focus; you’re training the mental motion of returning focus from mind-wandering.)

Effortful attention can also be identified with the cognitive capacities that stimulants improve: reaction times shorten, and people distinguish and remember the stimuli in front of them better.

It’s important to note that not all focused attention is effortful attention. If you are playing a familiar piece on the piano, you’re in a flow state, but you’re still being “focused” in a sense; you’re noticing the music more than you’re noticing conversation in another room, you’re playing this piece rather than any other, you’re sitting uninterrupted at the piano rather than multitasking.  Effortless flow can be extremely selective and hyper-focused (like playing the piano), just as much as it can be diffuse, responsive, and easily interruptible (like navigating a crowded room).  It’s not the size of your window of salience that distinguishes flow from effortful attention, it’s the pressure that you apply to that window.

Psychologists often call effortful attention cognitive disfluency, and find that experiences of disfluency (such as a difficult-to-read font) improve syllogistic reasoning and reduce reliance on heuristics, while making people more likely to make abstract generalizations.  Disfluency improves results on measures of “careful thinking” like the Cognitive Reflection Test as well as on real-world high-school standardized tests, and also makes people less likely to confess embarrassing information on the internet. In other words, disfluency makes people “think before they act.”  Disfluency raises heart rate and blood pressure, just like exercise, and people report it as being difficult and reliably disprefer it to cognitive ease.  The psychology research seems consistent with there being such a thing as “thinking hard.”  Effortful attention occupies a minority of our time, but it’s prominent in the most specifically “intellectual” tasks, from solving formal problems on paper to making prudent personal decisions.

What does it mean, on a neurological or a computational level, to expend mental effort?  What, precisely, are we doing when we “try hard”?  I think it might be an open question.

Do the neural networks of today simulate an agent in a state of “effortless flow” or “effortful attention”, or both or neither?  My guess would be that deep neural nets and reinforcement learners are generally doing effortless flow, because they excel at the tasks that we generally do in a flow state (pattern recognition and motor learning.)

Explicit vs. Implicit

Dreyfus, as an opponent of the Representational Theory of Mind, believes that (most of) cognition is not only not based on a formal system, but not in principle formalizable.  He thinks you couldn’t possibly write down a theory or a set of rules that explain what you’re doing when you drive a car, even if you had arbitrary amounts of information about the brain and human behavior and arbitrary amounts of time to analyze them.

This distinction seems to include the distinctions of “declarative vs. procedural knowledge”, “know-what vs. know-how”, savoir vs. connaître.  We can often do, or recognize, things that we cannot explain.

I think this issue is related to the issue of interpretability in machine learning; the algorithm executes a behavior, but sometimes it seems difficult or impossible to explain what it’s doing in terms of a model that’s simpler than the whole algorithm itself.

The seminal 2001 article by Leo Breiman, “Statistical Modeling: The Two Cultures,” and Peter Norvig’s essay “On Chomsky and the Two Cultures of Statistical Learning” are about this issue.  The inverse square law of gravitation and an n-gram Markov model for predicting the next word in a sentence are both statistical models, in some sense; they allow you to predict the dependent variable given the independent variables. But the inverse square law is interpretable (it makes sense to humans) and explanatory (the variables in the model match up to distinct phenomena in reality, like masses and distances, and so the model is a relationship between things in the world.)

Modern machine learning models, like the n-gram predictor, have vast numbers of variables that don’t make sense to humans and don’t obviously correspond to things in the world. They perform well without being explanations.  Statisticians tend to prefer parametric models (which are interpretable and sometimes explanatory) while machine-learning experts use a lot of non-parametric models, which are complex and opaque but often have better empirical performance.  Critics of machine learning argue that a black-box model doesn’t bring understanding, and so is the province of engineering rather than science.  Defenders, like Norvig, flip open a random issue of Science and note that most of the articles are not discovering theories but noting observations and correlations. Machine learning is just another form of pattern recognition or “modeling the world”, which constitutes the bulk of scientific work today.
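
To make the contrast concrete, here’s a toy sketch of the two kinds of “statistical model” in Python. (This is my own construction, not from either essay, and all the numbers are made up.)

```python
# Both objects below "predict the dependent variable given the
# independent variables" -- but only one of them is an explanation.
from collections import Counter, defaultdict
import numpy as np

# Interpretable and explanatory: fit F = k / r^2, where the single
# parameter k corresponds to a distinct quantity in the world.
r = np.array([1.0, 2.0, 3.0, 4.0])
F = np.array([9.0, 2.25, 1.0, 0.5625])   # simulated force measurements
k = np.mean(F * r ** 2)                  # estimate the one parameter
print(f"F = {k:.1f} / r^2")              # a law a human can read

# Performant but opaque: a bigram model whose "parameters" are a large
# table of counts that don't match up to distinct things in the world.
words = "the cat sat on the mat and the cat ran".split()
counts = defaultdict(Counter)
for w1, w2 in zip(words, words[1:]):
    counts[w1][w2] += 1
print(counts["the"].most_common(1))      # predicts the next word after "the"
```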

These are heuristic descriptions; these essays don’t make explicit how to test whether a model is interpretable or not. I think it probably has something to do with model size; is the model reducible to one with fewer parameters, or not?  If you think about it that way, it’s obvious that “irreducibly complex” models, of arbitrary size, can exist in principle — you can just build simulated data sets that fit them and can’t be fit by anything simpler.
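
A minimal sketch of what that test might look like, on simulated data (all of it made up for illustration): fit models of increasing size and watch where held-out error stops improving.

```python
# A model is "reducible" if a smaller one predicts held-out data
# about as well as the big one does.
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(5 * x) + 0.1 * rng.normal(size=200)   # a "complex" ground truth
x_train, y_train, x_test, y_test = x[:100], y[:100], x[100:], y[100:]

for degree in (1, 3, 9, 15):
    coefs = P.polyfit(x_train, y_train, degree)
    mse = np.mean((P.polyval(x_test, coefs) - y_test) ** 2)
    print(degree, round(mse, 3))
# Held-out error keeps dropping until the polynomial is rich enough to
# capture sin(5x); below that size, the model can't be reduced.
```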

How much of human thought and behavior is “irreducible” in this way, resembling the huge black-box models of contemporary machine learning?  Plausibly a lot.  I’m convinced by the evidence that visual perception runs on something like convolutional neural nets, and I don’t expect there to be “simpler” underlying laws.  People accumulate a lot of data and feedback through life, much more than scientists ever do for an experiment, so they can “afford” to do as any good AI startup does, and eschew structured models for open-ended, non-insightful ones, compensating with an abundance of data.

Subject-Object vs. Relational

This is a concept in Dreyfus that I found fairly hard to pin down, but the distinction seems to be between operating upon the world and relating to the world. When you are dealing with raw material — say you are a potter with a piece of clay — you think of yourself as active and the clay as passive. You have a goal (say, making a pot) and the clay has certain properties; how you act to achieve your goal depends on the clay’s properties.

By contrast, if you’re interacting with a person or an animal, or even just an object with a UI, like a stand mixer, you’re relating to your environment.  The stand mixer “lets you do” a small number of things — you can change attachments or speeds, raise the bowl up and down, remove the bowl, fill it with food or empty it. You orient to these affordances. You do not, in the ordinary process of using a stand mixer, think about whether you could use it as a step-stool or a weapon or a painting tool. (Though you might if you are a child, or an engineer or an artist.) Ordinarily you relate in an almost social, almost animist, way to the stand mixer.   You use it as it “wants to be used”, or rather as its designer wants you to use it; you are “playing along” in some sense, being receptive to the external intentions you intuit.

And, of course, when we are relating to other people, we do much stranger and harder-to-describe things; we become different around them, we are no longer solitary agents pursuing purely internally-driven goals. There is such a thing as becoming “part of a group.”  There is the whole messy business of culture.

For the most part, I don’t think machine-learning models today are able to do either subject-object or relational thinking; the problems they’re solving are so simple that neither paradigm seems to apply.  “Learn how to work a stand mixer” or “Figure out how to make a pot out of clay” both seem beyond the reach of any artificial intelligence we have today.

Aware vs. Unaware

This is the difference between sight and blindsight.  It’s been shown that we can act on the basis of information that we don’t know we have. Some blind people are much better than chance at guessing where a visual stimulus is, even though they claim sincerely to be unable to see it.  Being primed by a cue makes blindsight more accurate — in other words, you can have attention without awareness.

Anosognosia is another window into awareness; it is the phenomenon when disabled people are not aware of their deficits (which may be motor, sensory, speech-related, or memory-related.) In unilateral neglect, for instance, a stroke victim might be unaware that she has a left side of her body; she won’t eat the left half of her plate, make up the left side of her face, etc.  Sensations may still be possible on the left side, but she won’t be aware of them.  Squirting cold water in the left ear can temporarily fix this, for unknown reasons.

Awareness doesn’t need to be explicit or declarative; we aren’t constantly putting our experience into words or formal systems as we go through ordinary waking life.  It also doesn’t need to be effortful attention; we’re still aware of the sights and sounds that enter our attention spontaneously.

Efference copy signals seem to provide a clue to what’s going on in awareness.  When we act (such as to move a limb), we produce an “efference copy” of what we expect our sensory experience to be, while simultaneously we receive the actual sensory feedback. “This process ultimately allows sensory reafferents from motor outputs to be recognized as self-generated and therefore not requiring further sensory or cognitive processing of the feedback they produce.”  This is what allows you to keep a ‘still’ picture of the world even though your eyes are constantly moving, and to tune out the sensations from your own movements and the sound of your own voice.
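
As a cartoon of the computational idea (my own toy version, with made-up numbers): copy the motor command into a forward model, predict the sensation it should produce, and subtract that prediction from what actually arrives.

```python
# Cartoon of the efference-copy idea; the forward model and all
# numbers are invented for illustration.
import numpy as np

def forward_model(motor_command):
    # stand-in for a learned prediction of self-generated sensation
    return 0.9 * motor_command

motor_command = np.array([1.0, 0.0])
efference_copy = forward_model(motor_command)   # issued alongside the command
actual_sensation = np.array([0.9, 0.4])         # self-generated + external

residual = actual_sensation - efference_copy    # what the copy can't explain
externally_caused = np.abs(residual) > 0.05     # only this needs more processing
print(residual, externally_caused)              # [0.  0.4] [False  True]
```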

Schizophrenics may be experiencing a dysfunction of this self-monitoring system; they have “delusions of passivity or thought insertion” (believing that their movements or thoughts are controlled from outside) or “delusions of grandeur or reference” (believing that they control things with their minds that they couldn’t possibly control, or that things in the outside world are “about” themselves when they aren’t.) They have a problem distinguishing self-caused from externally-caused stimuli.

We’re probably keeping track, somewhere in our minds, of things labeled as “me” and “not me” (my limbs are part of me, the table next to me is not), sensations that are self-caused and externally-caused, and maybe also experiences that we label as “ours” vs. not (we remember them, they feel like they happened to us, we can attest to them, we believe they were real rather than fantasies.)

It might be as simple as just making a parallel copy of information labeled “self,” as the efference-copy theory has it.  And (probably in a variety of complicated and as-yet-unknown ways), our brains treat things differently when they are tagged as “self” vs. “other.”

Maybe when experiences are tagged as “self” or labeled as memories, we are aware that they are happening to us.  Maybe we have a “Cartesian theater” somewhere in our brain, through which all experiences we’re aware of pass, while the unconscious experiences can still affect our behavior directly.  This is all speculation, though.

I’m pretty sure that current robots or ML systems don’t have any special distinction between experiences inside and outside of awareness, which means that for all practical purposes they’re always operating on blindsight.

Relationships and Corollaries

I think that, in order of the proportion of ordinary neurotypical adult life they take up, awareness  > effortful attention > explicit systematic thought. When you look out the window of a train, you are aware of what you see, but not using effortful attention or thinking systematically.  When you are mountain-climbing, you are using effortful attention, but not thinking systematically very much. When you are writing an essay or a proof, you are using effortful attention, and using systematic thought more, though perhaps not exclusively.

I think awareness, in humans, is necessary for effortful attention, and effortful attention is usually involved in systematic thought.  (For example, notice how concentration and cognitive disfluency improve the ability to generalize or follow reasoning principles.)  I don’t know whether those necessary conditions hold in principle, but they seem to hold in practice.

Which means that, since present-day machine-learners aren’t aware, there’s reason to doubt that they’re going to be much good at what we’d call reasoning.

I don’t think classic planning algorithms “can reason” either; they hard-code the procedures they follow, rather than generating those procedures from simpler percepts the way we do.  It seems like the same sort of misunderstanding as it would be to claim a camera can see.

(As I’ve said before, I don’t believe anything like “machines will never be able to think the way we do”, only that they’re not doing so now.)

The Weirdness of Thinking on Purpose

It’s popular these days to “debunk” the importance of the “intellect” side of “intellect vs. instinct” thinking.  To point out that we aren’t always rational (true), are rarely thinking effortfully or explicitly (also true), can’t usually reduce our cognitive processes to formal systems (also true), and can be deeply affected by subconscious or subliminal processes (probably true).

Frequently, this debunking comes with a side order of sneer, whether at the defunct “Enlightenment” or “authoritarian high-modernist” notion that everything in the mind can be systematized, or at the process of abstract/deliberate thought itself and the people who like it.  Jonathan Haidt’s lecture on “The Rationalist Delusion” is a good example of this kind of sneer.

The problem with the popular “debunking reason” frame is that it distracts us from noticing that the actual process of reasoning, as practiced by humans, is a phenomenon we don’t understand very well yet.  Sure, Descartes may have thought he had it all figured out, and he was wrong; but thinking still exists even after you have rejected naive rationalism, and it’s a mistake to assume it’s the “easy part” to understand. Deliberative thinking, I would guess, is the hard part; that’s why the cognitive processes we understand best and can simulate best are the more “primitive” ones like sensory perception or motor learning.

I think it’s probably better to think of those cognitive processes that distinguish humans from animals as weird and mysterious and special, as “higher-level” abilities, rather than irrelevant and vestigial “degenerate cases”, which is how Heidegger seems to see them.  Even if the “higher” cognitive functions occupy relatively little time in a typical day, they have outsize importance in making human life unique.

Two weirdly similar quotes:

“Three quick breaths triggered the responses: he fell into the floating awareness… focusing the consciousness… aortal dilation… avoiding the unfocused mechanism of consciousness… to be conscious by choice… blood enriched and swift-flooding the overload regions… one does not obtain food-safety freedom by instinct alone… animal consciousness does not extend beyond the given moment nor into the idea that its victims may become extinct… the animal destroys and does not produce… animal pleasures remain close to sensation levels and avoid the perceptual… the human requires a background grid through which to see his universe… focused consciousness by choice, this forms your grid… bodily integrity follows nerve-blood flow according to the deepest awareness of cell needs… all things/cells/beings are impermanent… strive for flow-permanence within…”

–Frank Herbert, Dune, 1965

 

“An animal’s consciousness functions automatically: an animal perceives what it is able to perceive and survives accordingly, no further than the perceptual level permits and no better. Man cannot survive on the perceptual level of his consciousness; his senses do not provide him with an automatic guidance, they do not give him the knowledge he needs, only the material of knowledge, which his mind has to integrate. Man is the only living species who has to perceive reality, which means: to be conscious — by choice.  But he shares with other species the penalty for unconsciousness: destruction. For an animal, the question of survival is primarily physical; for man, primarily epistemological.

“Man’s unique reward, however, is that while animals survive by adjusting themselves to their background, man survives by adjusting his background to himself. If a drought strikes them, animals perish — man builds irrigation canals; if a flood strikes them, animals perish — man builds dams; if a carnivorous pack attacks them animals perish — man writes the Constitution of the United States. But one does not obtain food, safety, or freedom — by instinct.”

–Ayn Rand, For the New Intellectual, 1963

(bold emphasis added, ellipses original).

“Conscious by choice” seems to be pointing at the phenomenon of effortful attention, while “the unfocused mechanism of consciousness” is more like awareness.  There seems to be some intuition here that effortful attention is related to the productive abilities of humanity, our ability to live in greater security and with greater thought for the future than animals do.  We don’t usually “think on purpose”, but when we do, it matters a lot.

We should be thinking of “being conscious by choice” more as a sort of weird Bene Gesserit witchcraft than as either the default state or as an irrelevant aberration. It is neither the whole of cognition, nor is it unimportant — it is a special power, and we don’t know how it works.

43 thoughts on “Distinctions in Types of Thought”

  1. > “Fluent” activities can remain undamaged when similar but more “deliberative” activities are lost.

    Nonononono. Reality doesn’t carve along these lines! Some of the examples use different parts of the brain. Whatever the layman’s impression, more than just attention and practice separate the activities.

    • I didn’t mean “if you bang your head, all fluent activities will survive and all deliberative activities won’t.” I mean that we see examples where the activities that survive seem to be the fluent ones. Also, even if they use different parts of the brain, maybe that’s because different parts of the brain are used for different kinds of knowledge?

      I know I should always assume, in neuroscience, that it’s probably more complicated than any one-sentence summary, but I think I was making allowances for that already.

  2. There’s a lot here, I’ll think about it more carefully later, but one thing that popped out:

    > There seems to be some intuition here that effortful attention is related to the productive abilities of humanity

    I think I have a very different association here. I tend to associate intuition and pattern-matching to the *good* bits! E.g. the bit where you finally see *why* a mathematical proof works.

    Whereas I associate effortful attention with the more tedious logical chain-of-reasoning side, and all it tends to produce on its own is the answer, which is nice but not enough without an understanding of why it works (which to me feels more like the ‘human’ bit.)

    I don’t know, I am annoyingly poor at explicit reasoning, and this does cause me problems, so a lot of this probably comes from a place of frustration. At the very least, it’s true that you often have to do some explicit reasoning before you get to the good bits. And probably a lot more is going on there.

    • You seem to be lumping “flashes of insight” in with “effortless flow-state”. I don’t think they’re the same. For one thing, inspiration generally comes in bursts, while flow-states can persist for a while (driving on a highway, playing the piano, etc.) Definitely, “flashes of insight” aren’t the same type of thought as “effortful attention” — insight feels easy, instant, and unforced. But they might be their own, unique category of thought. Still working out my ontology here.

      • I’m not sure I was lumping insight and flow state together so much as ignoring flow state and launching straight into my own pet topic of insight vs step-by-step reasoning 🙂

        I agree, though, that insight and flow state are not the same thing. There seem to be a bunch of these interesting dichotomies (exploration of a few here: https://mindlevelup.wordpress.com/2017/06/09/dichotomies/), they are not all the same thing, and I’m definitely doing too much lumping together at the moment. I’m also trying to mash pieces of the puzzle together a lot at the moment, and finding I don’t have enough pieces.

      • I agree they’re not the same, but I do note that they’re highly correlated. Flashes of insight are far more common in effortless flow-state; a given moment of flow-state is unlikely to involve such insights, but it is far more likely than would be a moment of purposeful thinking without flow. I have a System-1 intuition that it’s even more than that, and that the act of getting a flash of insight requires or encompasses entering a flow state in order to happen.

        (Disclaimer of course that I work in pretty weird ways and this might be highly non-typical, and/or I might be using terms in weird ways, etc)

  3. I agree in part with the distinction you make here:

    > “Conscious by choice” seems to be pointing at the phenomenon of effortful attention, while “the unfocused mechanism of consciousness” is more like awareness.

    What I would like to get your feedback on is in regards to “Intelligence Reconsidered” (http://mailchi.mp/ribbonfarm/intelligence-reconsidered?e=80da034a3e); specifically on the following:

    > ” Intelligence is construed as a tool or instrument, a means to ends, a capacity for problem-solving and surviving in increasingly complex environments and varied scenarios. Viewed from a functional intelligence perspective, thinking is something you have to do to achieve your ends; a behavior which should cease when those ends are achieved, and avoided entirely if they are achievable without thinking. But what if thinking is viewed as an autotelic behavior? Something you get to do, and would do for pleasure even if you didn’t have to?”

    And also, “thinking is something you can enjoy”.

    The “conscious by choice” and its link to effortful attention appears to me to be in conflict with thinking as “something you can enjoy” as autotelic behavior. The ability to think in an autotelic way, without the particular aim at problem solving, can sometimes be “unfocused”, like awareness, and allow “eureka moments” that come to mind without systematic forethought, yet still aid in problem solving.

    • Yeah, I was thinking about Venkat’s essay, and noticing that we carve up the world very differently.

      Can pure thinking happen in a state of flow? It feels like it can, to me; William James thinks it can’t, though. I’m not sure how “autotelic” thinking fits in here.

      I think Venkat is preoccupied with the fear of things getting boring (or meaningless, or full of existential ennui) in a way that I’m just not, at all. Maybe it’s because I’m good at finding the interesting stuff; maybe it’s because I have depression, so I’m so busy avoiding *pain* that boredom seems like a minor problem. But questions like “what’s the point of life?” always seem meaningless to me. Life is so full of points it might as well be a pincushion!

      • It’s also relevant that Venkat is a “thought leader.” He works in a milieu where interestingness is currency. It’s always surprised me when he thinks my ideas are valuable for being interesting — or when he’s quick to think truisms and formulaic stories are boring. Originality just isn’t that big a deal in my world. Originality is easy; it’s diligent execution that’s hard.

      • My experience is strongly that pure thinking can happen in a state of flow. I’d even go a step further, and say that my best pure thinking often happens in a state of flow where all the things I’ve previously thought about purposefully in non-flow states come together and I reach new insights. Then I go back and figure out what I was thinking; often after I exit a state of flow (especially competitive situations) I can rattle off a ton of detail and lots of explanations of why I did what I did, far more and better considered than I could have done at the time or beforehand. Often I come up with new ideas in the moment that way and do a ton of thinking about them. Then again, it’s possible that this is because I’m attached (and perhaps you are as well) to the idea that my power comes from thinking on purpose, as opposed to actually describing blindsight and just being able to analyze that blindsight after the fact. Maybe I create post hoc reasons and explicit reasoning that has nothing to do with what actually happened. Hard to say. I do know that without thinking, the biggest challenges I’ve faced and my solutions to them seem like they would be impossible.

        But to me, flow represents the state where I’ve got an intense problem I’m thinking about with all of my power, so of course I’d be able to do pure thinking! I mean, that’s the whole point, right? To be able to fully focus your attention consciously on problems and have it work at full power, including backup from all the unconscious systems too? Or perhaps I’m misunderstanding what “pure” means in this context?

        A question I have about Venkat is the extent to which he is actually worried about being bored versus worrying about being seen as boring, again because he’s a thought leader. I notice that I am no longer worried at all about being bored (as opposed to worrying about simply wasting time), but I still worry about being boring even though on another level I know that worry is silly.

  4. This was fantastic; I love it! Partly by reflecting on ML, you’ve drawn out implications of Heidegger/Dreyfus that are important, and that make their thinking more accessible to STEM folks, and that most people miss.

    I agree with you (and with Dreyfus) that a taxonomy of thinking is important and currently unavailable. The single thing I think Dreyfus got most wrong was to assimilate several different distinctions in that space. Especially, he identified conscious/unconscious with explicit/implicit, formal/informal, rational/circumspective, and so on. These all seem to me to be independent axes, although there are significant correlations—but *not* with conscious/unconscious, at least as I understand those terms! Flow-state practical activity (hammering) is entirely *conscious*, it’s just “unreflected” or something.

    Dreyfus was very interested in the late-80s / early-90s backprop work, because it seemed to him like modeling one side of his collapsed distinctions (“unconscious flow”). I talked with him about it several times. I said then—and I still think now—that it’s only metaphorically similar, and the analogy is superficial. The fact that backprop/DL doesn’t use rational methods doesn’t mean that it’s otherwise similar to what humans do when we aren’t using rational methods. The only reason people can get away with saying AlphaGo “uses intuition” is that we haven’t got any more-credible model of non-rational intelligence.

    All the way back in the 1980s, psychophysics and neurophysiology had explained quite a lot about specifically visual attention. Anne Treisman’s classic work on visual search differentiated between the bottom-up process of a thing-in-the-world *drawing* your attention effortlessly, vs. the effortful, top-down goal-directed *direction* of attention. It turned out to be pretty easy to make a believable AI model of this, which I did in my PhD work. Is the direction of attention in thinking only metaphorically similar to attention in vision, or does it use similar neural mechanisms? I don’t know if there’s any evidence about this (and would love to hear if someone else does know!).
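
    Schematically, the bottom-up/top-down combination can be cartooned like this (a toy sketch of my own for this comment, not the model from my thesis):

    ```python
    # Toy saliency sketch: bottom-up "pop-out" is a feature's contrast with
    # its surround; top-down goals reweight the feature channels.
    import numpy as np

    def attend(feature_maps, goal_weights):
        salience = sum(w * np.abs(f - f.mean())
                       for w, f in zip(goal_weights, feature_maps))
        return np.unravel_index(salience.argmax(), salience.shape)

    color = np.zeros((4, 4)); color[1, 1] = 1.0    # strong color singleton
    orient = np.zeros((4, 4)); orient[3, 2] = 0.8  # weaker orientation singleton

    print(attend([color, orient], [1.0, 1.0]))  # (1, 1): bottom-up, color draws attention
    print(attend([color, orient], [0.0, 1.0]))  # (3, 2): a goal redirects it, effortfully
    ```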

    I agree strongly with you that, although rational thinking is “derivative” and much less important in everyday life than the rationalist tradition imagines, it *is* critical to human progress. And, available models of it (logic, e.g.) are badly wrong and misleading, which is a big problem.

    I’m pursuing “metarationality” as a project of trying to understand rationality more accurately, in the light of insights from Heidegger/Dreyfus and others, so we can use it better.

  5. I think the emergence of ego and noticing the Self is a relatively late development. It probably requires language. The best evidence I’ve seen is this from Helen Keller (halfway down): https://entirelyuseless.wordpress.com/2017/10/06/the-self-and-disembodied-predictive-processing/. So with pre-language Homo Sapiens, or children, or probably all animals, there is likely no conscious labeling of Self. Of course, they still mostly act as-if, as that correlates with survival. This would imply that most organisms aren’t aware that they’re an embodied thing, but it seems quite likely they need the embodiment, with constant feedback between action and stimulus, in order to evolve the complex prediction machine that Blindsight-only mode is.

    I agree with you that the “higher-level” capacities are the “special sauce” that have enabled us to dominate the planet. With simpler organisms, say C. elegans, it seems plausible they’re operating on something like Blindsight-only. With great apes, it’s clear they’re doing something like effortful attention at least some of the time. They can’t “think” as well as humans, but they’re not operating purely off of instinct. Another relevant data point is how “uncompetitive” humans were before language/culture and how quickly (evolutionarily-speaking) we developed after that barrier was crossed.

    The sheer speed of transition from apes to skyscrapers puts some limits on how different our brains could be. To me, this implies that the “hard part”, in terms of engineering complexity, is the earlier Blindsight-only mode, followed by some minimal ability to apply effortful attention and planning, followed by high-level reasoning and language.
    As David points out, ANN-based ML doesn’t seem to do the Blindsight-only thing the way brains do, but it is behaving similarly in many ways. If that continues to scale to the point where we can simulate simple organisms (C. elegans would be impressive, let alone small animals), that would suggest to me that most of the problem has actually been solved.

    • Yep, you’re not the only one to notice that the fast evolutionary timeline from hominids to modern humans implies that “higher” intelligence is relatively easy, and I should be taking account of that.

    • If apes are doing a blindsight-only thing, which I doubt. I expect that well before we’re talking about primates, and maybe by turtles, we’re doing more than blindsight, but apes passed a critical benchmark in communication and manipulation to create a new sort of information-processing collective noosphere. Worms to turtles isn’t that much shorter than turtles to humans. Worms to monkeys is one order of magnitude longer than monkeys to humans. Speed of selection shouldn’t be in units of time though.

  6. > Frequently, this debunking comes with a side order of sneer, whether at the defunct “Enlightenment” or “authoritarian high-modernist” notion that everything in the mind can be systematized, or at the process of abstract/deliberate thought itself and the people who like it. Jonathan Haidt’s lecture on “The Rationalist Delusion” is a good example of this kind of sneer.

    Deliberate thought is very expensive, slow and error prone compared to ‘flow,’ so it seems appropriate to delineate a hierarchy where deliberate thought depends on a foundation of ‘flow’ and not the reverse. Here’s a proposition – the purpose of deliberate thought is to make itself obsolete, to encode something that can be used in flow-state. The future is ever-decreasing, rather than increasing, return on deliberate thought and information processing ability generally. The present is characterized by high but greatly overestimated returns on deliberate thought and information processing ability. Peter Thiel’s sneering at improvements and focus on ‘the world of bits rather than the world of atoms’ is right on target despite being technically nonsensical.

    As for deliberate rational thought ABOUT the world of bits, which is what I am doing, this line of reasoning says the appropriate response is probably a sneer on a sneer. A sneeneer. Unfortunately it’s exactly the sort of thing I find intrinsically valuable. I’m not sure how to forge ahead in the world of atoms. Maybe I’ll buy a nice big hammer and start hitting things with it. Work on gruffness.

    I think you’re on target and onto something important with your points about the likely prevalence of black-box fudge factor systems and the existence and importance of the distinctions in modes of thought you outline. I wonder how you avoid being extremely bearish on the prospects of AI in light of this. This would mean that advances will likely be slow and steady, with no ‘bootstrapping’ accelerationist stuff likely.

    Economically, the implication could be that if performance at a given task isn’t really close to good enough, it won’t be good enough for a long while. No sudden onset ‘humans need not apply’ attack of the robots.

    • I *am* vaguely bearish on AI! I’ve wound up with a rather boringly conventional perspective: AI is easy enough to be worth developing esp. in a few high-value contexts like autonomous vehicles, but not so easy that we’re likely to all get killed very soon.

      • I’d kind of like if people taboo “soon” in cases like this and state explicit timelines. (i.e. not likely to be killed in the next 20 years? Next century?) I think ambiguity about this has made it harder to progress the overall conversation.

  7. Excellent Sarah! You’re definitely showing strong promise as a powerful ‘super-clicker’ Jedi! 🙂

    My view is that to understand reflection and concept-formation, essentially, we’re looking for a method of knowledge representation that can be applied across *all* levels of abstraction – how do you model minds modelling reality?

    This was a question tackled by two of history’s greats – Immanuel Kant and Charles Peirce. Kant was the first philosopher to really hit on the right general idea – he searched for the fundamental a-priori categories that he postulated underlay all thought.
    https://en.wikipedia.org/wiki/Category_(Kant)
    Although he had the right idea, his ‘categories’ weren’t what we’re looking for.

    But then came the second great mind – Charles Peirce! It’s in the writings of Peirce that you really start to see ideas that are beginning to anticipate the modern field of knowledge representation. He too searched for the fundamental ‘a-priori categories’ of thought, and although he’s still wide of the mark, he comes closer than Kant. He makes important contributions to ontology, and there are even ideas in his writings that seem to vaguely anticipate Yudkowsky’s principle of ultra-finite recursion! – he talks about the importance of ‘thirdness’:
    https://en.wikipedia.org/wiki/Categories_(Peirce)

    So who will complete this historical search for the fundamental categories of thought?
    For now, I’ll just state the punch-lines for my best current ideas: I think you’re looking for a logical structure that splits into 27 levels of abstraction, and these in fact will yield the long-sought fundamental a-priori categories of thought. You can generate them by splitting reality into 3 levels of abstraction along 3 different dimensions:

    1st axis of abstraction:
    Abstract (not localized in space and time), Concrete (localized in time *and* space), Quasi-abstract (localized in time but *not* space)

    2nd axis of abstraction:
    Cosmological origin (generated by fundamental laws), Biological origin (generated by evolution – blind evolution), Teleological origin (generated by intelligent design – comes from intelligent minds)

    3rd axis of abstraction:
    Structural level (intrinsic properties-things in themselves- objects) , Network level (relational properties – relationships between pairings of objects), Global level (global properties imposing global limitations or conditions on how objects can interact).

    So 3 dimensions of abstraction and 3 sets of properties for each yields 3^3 = 27 a-priori categories of thought!

    Once you have precise mathematical ways to define the a-priori categories of thought I’ve intuitively implied above, I think the solutions to the puzzles of knowledge representation and conscious reflection will fall out quite naturally.

    Briefly, I think each a-priori category needs to be specified not as a static thing, but as a periodic *process* I call a *time-flow*. By the term *time-flow* I just mean any repeating (periodic) physical process – for instance two dance partners repeating the same dance-steps over and over is a *time-flow*.

    Then you combine the individual *time-flows* to get what I call *temporal fractals* – many repeating periodic processes that coordinate such that the temporal pattern is same across multiple time-scales- for example – a whole hall of dance partners not only repeating dance steps locally, but also moving around the hall in a coordinated way to repeat the same patterns on a larger scale.

    And I think that this is, in short, the secret sauce that generates consciousness and reflection! 😉

      • What I’m getting at is that the basic building blocks of reflective thought could be identified by constructing an upper ontology:
        https://en.wikipedia.org/wiki/Upper_ontology
        “an upper ontology (also known as a top-level ontology or foundation ontology) is an ontology (in the sense used in information science) which consists of very general terms (such as “object”, “property”, “relation”) that are common across all domains.”
        It’s the rationalist program of looking for a ‘universal language’ (Leibniz), that could serve as ‘the stuff (language) of thought’ (Pinker).
        For an interesting modern attempt see the website of John Sowa:
        http://jfsowa.com/ontology/index.htm
        As an example, Sowa identifies ontological primitives such as ‘Occurrents’ (events, processes) and ‘Continuants’ (static snapshots).
        The idea is that we identify the building blocks of reflective thought with ontological primitives such as these.
        I’m saying that the ideas of Kant and Peirce were precursors to the correct way to carry out this program, and then I hint at my own scheme (which has 27 ontological primitives).
        Stephen Wolfram is basically working on the same thing, attempting to carry out Leibniz’s quest for a universal ‘language of thought’:
        http://blog.stephenwolfram.com/2016/10/computational-law-symbolic-discourse-and-the-ai-constitution/

  8. > I’m convinced by the evidence that visual perception runs on something like convolutional neural nets

    This seems plausible to me, but if so, where are the analogously spooky-but-surprisingly-robust adversarial examples for human image recognition?

    • It’s easy to see how you can learn textures using convolutions, and I think it’s highly plausible that the brain does that in a way pretty similar to the way nets do. I don’t see how you can do a good job with 3D shape recognition using current DL techniques (because the image data depends radically on object orientation and flexion). I think that the current DL image classifiers rely much more heavily on texture and color info than humans, and much less on shape. The pattern of errors they make, and the adversarial image perturbation results, are consistent with this. The adversarial perturbations basically add highly-textured, sparkly-colored noise, which screws up convolutional methods and doesn’t bother people because it doesn’t alter shapes.
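
      For concreteness, here’s a sketch of the standard fast-gradient-sign construction (assuming some pretrained differentiable `model`, an input image tensor `x`, and its integer class `label`, none of which are defined here):

      ```python
      # Sketch of FGSM: the perturbation is per-pixel sign noise -- high-
      # frequency "texture" that leaves shapes intact.
      import torch
      import torch.nn.functional as F

      def fgsm(model, x, label, eps=0.01):
          x = x.clone().requires_grad_(True)
          loss = F.cross_entropy(model(x), label)
          loss.backward()
          return (x + eps * x.grad.sign()).detach()  # nearly invisible to humans
      ```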

      Adversarial examples for human image recognition are exactly pareidolia: https://en.wikipedia.org/wiki/Pareidolia

      These are images that fool us because of shape coincidences, whereas current DL classifiers are fooled by texture coincidences.

      • Weirdly enough, “shape” vs. “texture/color” is a major distinction people keep drawing in Western art. If you take an art history course, the Italian Renaissance is going to look like a story of “texture/color” (Medieval & Byzantine art) being supplanted by “shape/form” (Masaccio & the development of perspective). You also see famous contrasts within an era: Titian being the color guy to Michelangelo’s shape guy, Delacroix the color guy to Ingres’ shape guy, Matisse the color guy to Picasso’s shape guy. There seems to be a trend (which always annoyed me) where the color/texture art gets read as “lower” (as merely “decorative” or “illustrative”) than the form/shape art.

      • As of the early 90s, there was good neurophysiological evidence for these being computed by separate “channels” in the visual system that didn’t interact much. I don’t know what the current state of evidence is.

  9. I was curious about Herbert lifting those passages from Rand. Google books suggests that it is true, with no earlier author. Google web suggests that you are the first to notice.

    For completeness, the dates you give for both works are for 2nd editions, both 2 years after the original, which already contain the relevant phrases. Rand’s book is based on a 1960 lecture and first appeared in print in 1961. Dune was a “fix-up” of two serialized works. The passage appears in the first installment, in 12/1963.

  10. >”There is no such thing as voluntary attention sustained for more than a few seconds at a time.”

    This strikes my intuition as the same sort of sneer a deluded intellectual makes when he thinks he’s successfully proven everyone but him is stupid. Like Dennett in “Consciousness Explained [Away]”. Perhaps in context there’s more nuance to the claim, but I doubt it is substantially different from a morbidly obese man claiming it’s impossible for any human to exercise more than 30 minutes a day. Granted, only a handful of marathon runners actually manage to exercise from waking up to lying down, and furthermore the majority of people don’t exercise at all, but more than enough have trained themselves to work for hours a day on a regular basis. I’m inclined to think focused cognition is similar: if you build and reinforce a habit of constantly cogitating, then it’s likely to become a more comfortable state than relaxing into “flow”.

  11. Are you familiar with Ericsson’s work on deliberate practice (either one of the early papers (http://www.unoeste.br/lhabsim/arquivos/Deliberate%20Practice%20and%20the%20Acquisition%20and%20Maintenance%20of%20Expert%20Performance%20in%20Medicine%20and%20Related%20Domains.pdf) or his more recent popular book, Peak (https://www.amazon.com/dp/B011H56MKS/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1))? In the framing you describe, Ericsson’s view can be summarized as: real improvement beyond effortlessly achievable levels requires non-flow-inducing, effortfully aware practice. Unfortunately, Ericsson’s research tends to cite examples from classical music, sports, and chess, all areas where mastering a fixed set of rules matters more than defining the right ruleset. It’s possible this makes it less applicable to research-y work like solving an open math problem.
