Cheating, Belief in Determinism, and Rationality
Those who know me might know that I am actually fairly obsessed with cheating—rules against cheating and lying tend to control my behavior. But there is nothing obviously moral about this: I may avoid cheating just because I am uncomfortable with it, or worried that I’ll get caught, or concerned that if I get caught, I won’t be able to account for my behavior. That is: I find it easier not to cheat. And since I’ve internalized this rule, I tend to get angry when other people cheat. In other words, and pretty obviously, one’s reasons for not cheating are not necessarily moral ones; indeed, they are probably not moral, at least in any strong sense. (Some consequentialists, of course, might not buy this distinction between moral and non-moral rejections of cheating; if so, that would just show that they don’t quite get morality.)
Significantly, also, “cheating” is not necessarily immoral. As with most other types of behavior, its morality or immorality is contextual. In most situations—like those involving classroom work—cheating generally seems to be unethical. But what about an experimental setting? You are in a setting—as the students in this experiment were—where you can quite easily cheat on a test and walk away with some extra cash. So there is—as is often the case—an immediately obvious benefit to cheating, and virtually no risk of negative consequences even if you get caught. From the standpoint of instrumental rationality, then, you should cheat.
But are there moral considerations that militate against the instrumental ones? Are you somehow violating justice? Or causing someone harm, by cheating? Perhaps: you are walking away with money that those who didn’t cheat also didn’t get, so you are outearning them by dishonest means and you are also, perhaps, slightly ripping off the experimenters, whose money you are taking. But the most significant consideration, it might seem, is that you are messing up the experiment, since you’ve been led to think that your honest performance, not your cheating, is what’s necessary for the experiment’s success. And this is where questions arise:
First, would you really be messing up the experiment by cheating? The experimenter has said she must leave the room, and you are to score yourself. If you know anything about psychology experiments, you should then assume that this experiment is already fatally flawed, and its data will be useless in any case. Second, if you know a little more about psychology experiments, you might suspect that such blatant opportunities for cheating are somehow factored into the experiment. If you suspect this, it seems to cancel out your moral objections to cheating; and since you know that cheating will get you more money, your instrumental rationality gives you an overriding reason to cheat. In other words: cheating might well be the most rational course of action in this experiment, and most rational not just in the sense of instrumentally more rational, but instrumentally + normatively more rational, since the normative considerations have essentially been cancelled out. In this case, if you refuse to cheat, you are not actually sticking to moral principles. You are sticking to habits that you mistakenly take to be moral—mistakenly, because in this context there is nothing clearly moral about them.
Of course I could be wrong on this; maybe cheating in this case really is immoral. But it is at least not obviously so. And if you stick to your (normally) moral principles even in situations where they are not moral, then you are not acting morally. You are just acting irrationally. And this, then, is the upshot: belief in determinism might simply make people more rational.