Deliberation, Alternatives, and Thought Experiments
Since I've been reading some free-will stuff for a while now, the prevalence of thought experiments there has been bugging me. Here I want to take on one such experiment. Frankfurt examples have received a great deal of attention in philosophy of action in the continuing controversy over the question of whether moral responsibility requires having alternative possibilities. Dana Nelkin modifies these examples in order to make a different, though related, point: that in order to deliberate about a course of action, we need not believe that we are capable of doing otherwise. The example she sets up is as follows:
Imagine that you know that a brilliant scientist has the ability to fiddle with your brain in a way that causes you to act as she wishes you to. You know that she wants you to vote for Gore over Bush in the upcoming presidential race, and that if you do not decide to vote as she wishes, she will cause you to vote that way. So, for instance, you know that if you were to prepare to vote for Bush or otherwise fail to decide to vote for Gore, the brilliant scientist would cause you to vote for Gore. It seems to me that you could still evaluate the reasons for voting for each candidate and decide to vote for Gore on the basis of those reasons. (1)
This seems reasonable: even if I believe that I do not have a choice, ultimately, about which way I will end up voting, I can still make a responsible decision to take the only path open to me. So what, if not a belief that alternatives are genuinely open for me, is needed for deliberation? To approach this question, Nelkin constructs another thought experiment: it is virtually identical to the first one, except that now I believe that the brilliant scientist will force me to vote for Gore regardless of what I decide to do: even if I were to choose to vote for Gore, she would ensure that I do so by fiddling with my brain anyway. In this revised case, Nelkin admits, perhaps no real deliberation is possible at all. But if deliberation is possible in the original case but not in the revised case, the difference cannot be one of whether or not I believe there to be alternatives to my action, since I lack alternatives in both situations. Rather, the belief we need to have in order to deliberate is the belief that the deliberation can be causally efficacious. In the first case, then, deliberation makes sense because my decision to vote for Gore can cause me to do so, whereas in the second case it cannot.
But, despite an immediate intuitive appeal, do we need to accept the conclusions drawn from the first voting case, or does the thought experiment manipulate us into assenting to the conclusion? Let’s construct another case:
I need to go meet my friend, but my legs are tired from dancing all night. Walking sounds very unappealing. On the other hand, my arms feel just fine, and so I consider flying to meet my friend instead of walking. After some thought, I decide to walk after all (perhaps because flying takes up too much energy, and I cannot afford the amount of food I would need to replenish it; also, I fear the ecological consequences of spreading my deodorant throughout the lower atmosphere).

It seems to me highly implausible that anything like deliberation can take place here. Someone who genuinely deliberates about whether to walk or fly across town has to be a nut. What, however, distinguishes this flying case from the original voting case? In both situations I believe that there is only one thing I can do, but I am still able to decide to do it based on deliberation. It is only that, for some reason, in the flying case it does not seem like I can genuinely deliberate, whereas in the voting case it does. I think the difference lies precisely in our ability, while considering the thought experiment, to place ourselves in the position of the agent involved. I can easily imagine myself in the flying example and immediately see that deliberation in this situation would make no sense, since I already believe that I cannot fly. But to put myself into the voting example, I need to somehow fully convince myself that, really, a scientist can fiddle with my brain and force me to vote a certain way, and that this really will happen. The problem for Nelkin’s argument, I think, is this: if I were to fully convince myself of that, then the intuition that deliberation is possible in this case would vanish. Her reading of this thought experiment seems to remain plausible largely because it is notoriously difficult to genuinely convince oneself of a counterfactual.
Nelkin addresses a possible objection to her view: that individuals who undertake deliberation while believing that they have no alternative possibilities are really manifesting contradictory beliefs. She insists that the burden of proof must then fall not on her, but on the critic who wants to assign contradictory beliefs to otherwise rational agents. But I believe that the burden of proof still rests on her to show that genuine deliberation is possible without a belief in alternative possibilities. The plausibility of her intuition that one can deliberate within the voting case, it seems to me, rests precisely on our attempt to take up contradictory beliefs: both the belief that evil scientists do not manipulate our brains, and the counterfactual belief that in this one case they might. Since we, standing outside the actual voting case and only attempting to project ourselves into it, cannot make ourselves take up these contradictory beliefs, deliberation within the thought experiment seems possible to us only because we have failed, without noticing, to accept the setup of the experiment.
(1) Dana K. Nelkin, “The Sense of Freedom,” in Campbell, O’Rourke, and Shier, eds., Freedom and Determinism (MIT Press, 2004)