Imagine that you know that a brilliant scientist has the ability to fiddle with your brain in a way that causes you to act as she wishes you to. You know that she wants you to vote for Gore over Bush in the upcoming presidential race, and that if you do not decide to vote as she wishes, she will cause you to vote that way. So, for instance, you know that if you were to prepare to vote for Bush or otherwise fail to decide to vote for Gore, the brilliant scientist would cause you to vote for Gore. It seems to me that you could still evaluate the reasons for voting for each candidate and decide to vote for Gore on the basis of those reasons. (1)
But despite its immediate intuitive appeal, must we accept the conclusion Nelkin draws from the voting case, or does the thought experiment manipulate us into assenting to it? Let’s construct another case:
I need to go meet my friend, but my legs are tired from dancing all night. Walking sounds very unappealing. On the other hand, my arms feel just fine, and so I consider flying to meet my friend instead of walking. After some thought, I decide to walk after all (perhaps because flying takes up too much energy, and I cannot afford the amount of food I would need to replenish it; also, I fear the ecological consequences of spreading my deodorant throughout the lower atmosphere).

It seems to me highly implausible that anything like deliberation can take place here. Someone who genuinely deliberates about whether to walk or fly across town has to be a nut. What, then, distinguishes this flying case from the original voting case? In both situations I believe that there is only one thing I can do, and yet I am supposed to be able to decide to do it on the basis of deliberation. Still, for some reason, genuine deliberation seems impossible in the flying case but available in the voting case. The difference, I think, lies precisely in our ability, while considering the thought experiment, to place ourselves in the position of the agent involved. I can easily imagine myself in the flying example and immediately see that deliberation there would make no sense, since I already believe that I cannot fly. But to put myself into the voting example, I need somehow to fully convince myself that a scientist really can fiddle with my brain and force me to vote a certain way, and that she really will. The problem for Nelkin’s argument, I think, is this: if I were to fully convince myself of that, the intuition that deliberation is possible in this case would vanish. Her reading of the thought experiment remains plausible largely because it is notoriously difficult to genuinely convince oneself of a counterfactual.
Nelkin addresses a possible objection to her view: that individuals who deliberate while believing that they have no alternative possibilities are thereby manifesting contradictory beliefs. She insists that the burden of proof falls not on her, but on the critic who wants to assign contradictory beliefs to otherwise rational agents. But I believe the burden of proof still rests on her to show that genuine deliberation is possible without a belief in alternative possibilities. The plausibility of her intuition that one can deliberate in the voting case rests, it seems to me, precisely on our attempt to take up contradictory beliefs: the belief that evil scientists do not manipulate our brains, and the counterfactual belief that in this one case they might. Since we, standing outside the actual voting case and only attempting to project ourselves into it, cannot make ourselves hold these contradictory beliefs, deliberation within the thought experiment seems possible to us only because we have failed, without noticing, to accept the setup of the experiment.
(1) Dana K. Nelkin, “The Sense of Freedom,” in Campbell, O’Rourke, and Shier, eds., Freedom and Determinism (MIT Press, 2004).