Friday, April 25, 2008

Neural Antecedents of Decision: Some Phenomenological Skepticism

Web-happy philo-types are by now familiar with the recent study on “Unconscious determinants of free decisions in the human brain” by Soon et al., published in Nature Neuroscience. The study, which expands on the famous experiments performed by Benjamin Libet, purportedly demonstrates a seven-second gap between the onset of neural activity involved in making a choice and the subject’s awareness of the choice. The details are discussed, among other places, at Not Exactly Rocket Science, Mixing Memory, NeuroLogica, and Conscious Entities; there are some interesting comments also at Alexander Pruss's blog. Essentially, participants were asked to press one of two buttons, and to take note of the letter showing on a screen in front of them at the instant they first became aware of having made a decision (Libet’s original experiments asked subjects to remember the position of a hand on a clock); all the while, fMRI scans were recording their brain activity. The comparison, then, is between two heterogeneous sorts of things: neural events, and conscious awareness.

One problem I have with these experiments is that they seem to assume a temporally thin notion of consciousness. Neural processes take time, which is why you can measure how long they go on before an action is carried out. They are temporally thick processes. But conscious awareness is apparently assumed to be instantaneous: we know the exact moment we become aware of something. This is a temporally thin notion: there is no gap in time between the instant we become aware of something and the instant we become aware that we are aware of it or, as in the set-up of the experiment, between the instant we make a choice and the instant we become aware of having made the choice. One obvious reply is to deny that there is any problem here, given a basic assumption that “becoming aware” and “becoming conscious” are synonymous. Surely there is an instant when I am conscious of my decision, say, or conscious of the position of the hand on a clock face (or the letter on a screen). And there is no real sense in which we can speak of something like being aware of such things without also being conscious of them.

But this sounds dubious. What these experiments measure, after all, is not simply when the subjects become aware of a decision. Instead, they measure when the subjects become conscious of that awareness. This is a reflexive process. But is there any reason to think that this reflexive process itself takes up no time?

To complicate matters, there are two reflexive processes going on. On the one hand, the subjects must become conscious of making a decision. On the other, they must become conscious of the letter on the screen at the instant that they become conscious of making a decision. This sounds like a fairly complex process to me, though maybe I am wrong. In any case, though, the process has got to take at least some time to perform. (That there is likely also some temporal gap between seeing the letter on the screen and registering that one has seen it complicates this even further.) What I am suggesting, in other words, is that there might be two temporal gaps that the experiments do not address sufficiently. First, there might be a gap between becoming conscious of a decision and becoming conscious of that consciousness. Second, there might be a gap between becoming conscious of that consciousness and associating this second-order consciousness with the awareness of a particular letter on a screen.

I worry about this, particularly, because the reflexive process is not involved in normal decision making. I either continue typing, or I stop to scratch my nose. But I do this without the second-order consciousness. I cannot, looking back at my action, pinpoint the exact instant when I decided to scratch my nose; normally, I cannot upon reflection even establish that I ever made such a decision, but this fact does not undermine my experience of having made the decision nevertheless. (This suggests, to me, that we may be better off not treating decisions, choices, or volitions as if they were events, and instead recognizing them as interpretative abstractions.) So what I am questioning here is the idea that being aware of making a decision—in the normal way in which we experience making decisions in everyday situations—is really connected to the sort of consciousness of deciding that these experiments look at. What they are looking at is the process by means of which we thematize our decisions in consciousness; but this is neither something we normally do, nor is it something that seems central to our awareness of ourselves as deciding.

To top it off, I wonder to what extent the results of these experiments are even transferable to our everyday decisions. The subjects are specifically asked to pay attention to their decision and note the instant when they become conscious of it. But this is not something we normally do. Try it. Right now. Decide to scratch your nose, and then scratch it. When I do this, it feels weird: there is a doubling effect going on, as if I am performing the same action twice. In Searle’s terminology, the decision to scratch my nose is a prior intention, while the mental process involved in the actual nose-scratching is the intention in action. The prior intention in such simple actions is completely redundant. So if you are specifically looking for it, this seems just to distort what it is you normally do when you make decisions. When we ask people to locate the temporal instant at which they make a decision, we are asking them to do something extremely unusual, and the experimental data obtained from such exercises seems unlikely to be telling us very much about normal human decision making; it might be telling us not what is going on in the brain when we make decisions, but what is going on in the brain when we try to catch ourselves making decisions, which is going to be a very different and very slippery task.

I am not just trying to be skeptical. What I am curious about is just the claim the scientists performing experiments like these seem to be making, i.e., the claim that we can scientifically study the relation between neural processes and consciousness or, at least, that we can do this given current technologies without either distorting or oversimplifying the precise thing we are studying. I am perfectly sure that we can study the neural processes. But it is not at all clear that we have the tools for scientifically studying consciousness. Why, then, should we think that scientifically studying the relation (particularly the temporal relation) between the two is a currently plausible proposal? These experiments are certainly interesting for all sorts of reasons; I am uncertain that the light they claim to shed on conscious choice is one of them.


  1. Maybe the problem is in treating 'decision', 'consciousness', 'self' as entities in a there/not there binary sense. If we think of that stuff as a dynamic relationship with situations, something more like zen 'flow' becomes possible to imagine.

    When a serve comes in hard, I swing and hit it, 'I' am only dimly 'aware' of 'deciding' what to do. Practice, habit, disposition, positioning, skill all have their moments in that event. The more I try to grasp consciously what I'm doing, the longer it takes for the feedback systems to sync up and the more likely that I'll flub the shot.

  2. Hi Carl,

    I don't know much about zen flow, but I think the basic thought you're getting at is similar to what, say, Dreyfus & co. have been driving at. Namely: that expertise in action involves more or less automatic, reactive behavior that cannot be reduced to explicit thought processes (or rules, or concepts). A few points:

    The tennis player can't explicitly think about his swing; if he does, he'll mess it up. But there is a difference between swinging at the ball in the course of a game, and swinging at a ball that, out of nowhere, flies out at you as you walk past a dark alley. The latter action seems to involve no more than a reflex; the former involves a stronger notion of decision, though the action itself cannot reasonably be reduced to a series of decisions constituting the entire action of taking a swing.

    But these experiments don't involve that sort of decision--they call on subjects to deliberately press a button, and they have a more or less indeterminate period of time in which to decide which button to press and when to press it. So I'm not sure you can analyze the consciousness of deciding in these experiments in the same way you might analyze the consciousness of deciding involved in a game calling for quick responses.

    Your point about binary there/not there is well taken. What I am suggesting here is just that "decision" might have different senses (e.g., completely reflexive, deliberate but reactive, deliberate and thematized). And I'll grant that decisions, etc, do involve situations--the activity involved in choosing a button to press isn't limited to the brain, or some dualistic correlate of the brain--the buttons and the hand pressing them seem to be part of the situation, and thus necessary elements of anything that could, in this case, reasonably be called a decision.

  3. Yes, agreed. I do think it all points to dynamic interactions of decisioning systems and situations. Not to get all reductive but there's a brain component to this. You point to an activity in which the lizard brain is doing the work (the ball from the alley). The lizard no doubt considers the experimental button for a moment, then hands it off to the front of the brain for a more deliberate decision. And in situations where there's no urgency (or in retrospect), the full-frontal process of recursive reflection may be enabled. So again, I'm fully agreeing that "'decision' might have different senses." Very interesting, thank you.

  4. Glad you enjoyed.

    Quite a bit hangs on what you mean by "decisioning systems" and "situations." There is a fully reductive way of talking about these things: we can, for example, describe the neural mechanisms involved in deciding anything, and the physical aspects of the situations in which we find ourselves, and then talk about the physical interactions between the situations, the brain, and the body. I don't think that's what you are suggesting.

    I think there is certainly a lot of interesting stuff to work out by looking at the relations between different neural systems. But if this neural research is to give us meaningful data about our decisions, then we first have to figure out what decisions are in a non-reductive way. So we have to first figure out how to distinguish different kinds of decisions (reflexive, reactive, deliberate, etc) phenomenologically, and then contrive situations likely to produce decisions of each sort, and only then can we meaningfully study the neural mechanisms involved in decision.

  5. You're right, that's not what I meant, but we'll need that data. It just won't tell us what to make of it by itself. And when I get my tumor I'd like them to know how to cut it out and take as little of the median 'me' with it as they can.

    For a more robust understanding, starting with phenomenology would be one way to do it, and a good one. There's a tendency to circularize the phenomenology there, of course, where the brain research becomes simply another fancy way to describe what's been subjectively reported.

    Another way would be to fire up the fMRI and track activity locations and rates through a series of situations, then make behaviorist inferences about the sorts of decisions involved. You'd still need some sort of hypothesized decision-type schema, though, which might lead back to phenomenology; or to the whole history of epistemology where there's plenty of testable speculation to work with. I suppose I'm disputing that phenomenology is the only way to get at "meaningful study," or rather the study of meaning (since that's what we'll need to distinguish kinds of decision), while granting that it's a dang good one.

  6. You may be right, though I tend toward the view that, ultimately, you're going to need to connect first-personal access with the neural correlates. Neuroscience can go a long way without getting there, of course (though my point was that in the present study the difficulty is that there is an appeal to first-personal testimony without a theory that accounts for such testimony). But third-personal observation can only get you so far. We can observe the consequences of thought (actions), and its neural correlates, but not thought itself. If we drop phenomenology, we end up with something like "acting as one would expect a jealous person to act just is what it means to be jealous." And this formula seems odd because it eliminates the very term it is trying to define.

  7. Agreed. Thanks again for this.