Friday, July 4, 2008

Responsibility Attribution in Milgram Experiments

Mind Hacks mentions an NYT article on some recent semi-replications of the Milgram experiments, which, as most of you no doubt know, involved a researcher telling a participant to administer shocks of higher and higher voltage to another participant (who was not really getting shocks), in order to see how many people would obey even to the point of killing the other person. I am curious about a point mentioned here: in exit interviews, the experimenters “found that those who stopped generally believed themselves to be responsible for the shocks, whereas those who kept going tended to hold the experimenter accountable.” Aside from the general interest this should (I hope) evoke in x-phi circles, I am wondering whether this finding—if accurate—confirms or falsifies one of my pet theories about the experiments.

It has always seemed to me that the experiment wasn’t really about authority at all—or at least not in the sense intended. The point was to see whether ordinary people would obey an authority figure who instructed them to do something obviously wrong. And the idea—at least the way it is usually presented—is that the experiments showed that people do, for the most part, follow authority over their own moral convictions—as well as their moral sentiments (since responding to a person shouting in pain is, I’m fairly certain, not fundamentally a matter of conviction or belief, but of hardwired altruism; in any case, I am guessing, and I’d be interested in seeing work on this if there is some, that in high-stress situations like this, our higher-order moral reasoning functions don’t play much of a direct role). Well, yes, the Milgram experiments do show this. But I don’t know if they show just what they are often taken to show, i.e., that most people are sheep. Now some people certainly are sheep—the recent cases of calls to fast food restaurants do seem to suggest that there are people who really will do whatever a police officer tells them, just because the person giving the order is a police officer. But maybe even in these cases—and I suspect in many of the Milgram cases—another motive is in play.

We are all stuck in societies that require us to follow rules. Not murdering innocent people, not causing massive and unnecessary harm to others, and not doing to people what they don’t want done to them (at least in the absence of overriding reasons) are central to our social codes. In one form or another (though with many limits on which people are to be protected by these rules), these norms have existed—and probably must exist—in pretty much every human society. It’s also true that many people, not just as teenagers, enjoy breaking rules (psychoanalysis can of course provide quite a bit on this: the killing of the father, jouissance, etc.; and speaking of which, I haven’t seen much serious psychoanalytic work on Milgram for some reason; suggestions, anyone?). I have a feeling, too, that many very moral people—genuinely virtuous people, who generally do the right thing for the sake of doing the right thing—have a deep need to act immorally (a need that is forced on them by morality itself, rather than by some resentment against morality; I’ll post on this later). Violating little rules—shoplifting, for example—creates small pleasures. Violating big rules, on the other hand, is in some way character-transforming.

Most people who generally act morally (not genuinely virtuous agents, but ordinary people) probably do so in part because of some combination of habit and fear: you avoid violating norms because you’re afraid of the consequences, and you get into the habit of sticking to those norms. So most of us wouldn’t seriously consider killing someone—not just in the sense that we’d think it is a bad idea, but we wouldn’t even entertain the thought—at least in part because we don’t want to go to jail, and in part because—having already learned that we don’t want to go to jail—we are accustomed to not taking murderous thoughts seriously. These factors probably motivate most of us more than we’d like to acknowledge, and likely play a stronger role in moral motivation than feelings of respect for the moral law or for the other or for the dignity of persons. Knowing that one can get away with violating norms breaks the habit of following them, making the possibility of violating them a live one. (And of course being put in a situation where one might kill someone already breaks the habit, since we don’t really have habits of not killing people, because situations in which that is a real possibility don’t come up that often.) So if an official researcher, in an experiment, tells you to keep shocking someone, suddenly you might realize: here’s my chance!

If this is what is going on in at least some of the Milgram cases—and I’ve always suspected that it was at some level going on in most of them—then the test subjects who go through to the highest voltage level are not sheep. They are, from a certain perspective, the opposite of sheep. Their desire to hurt and to kill is what drives them; the researcher merely provides a convenient excuse. If they’re not sheep, one might ask, why would they need an excuse, why not just follow their dark heart’s desire on their own? Partly because they are normally sheep—in that they don’t consider acting on the desire a live option—and in the experiment they break out of that sheep-hood. And partly because committing murder is instrumentally irrational—you might get caught (watching crime dramas like Law and Order has more or less convinced me that I could never pull off a real crime and avoid getting caught; though I think Crime and Punishment convinced me of that on other grounds years ago). Generally following instrumental rationality isn’t essentially sheep-like (though it might be sheep-like in some circumstances).

Most of this has probably occurred—in much more concise and less annoyingly preachy ways—to people who have thought about the Milgram experiments. The question now is whether the new findings—that is, the interview reports mentioned above—seem to falsify this thesis. This depends, I think, on whether we take responsibility attribution as primitive. If we do, then the thesis falls apart. If responsibility attribution is primitive, i.e., if people’s reports of their feeling of responsibility are not based on deeper psychological functions, then the interviews confirm that people are sheep. Those who follow through take the researcher, rather than themselves, to be responsible, and this suggests that they were not doing something they really wanted to do; rather, they did something they did not want to do because the researcher told them to. But we probably shouldn’t take these responsibility attributions as primitive. In a way, if the non-sheep thesis is right, this is exactly what we’d expect. If the subjects are acting out their own desire, but one that their usual norm-following forbids them to act out, then they can only act it out by attributing their action to the researcher. This is the whole point: you can get away with murder in this case, but only provided you are not the one committing it. I don’t mean, of course, that the people who attribute responsibility to the researcher are lying; rather, one explanation for their attribution is that they’ve given the researcher the power to let them fulfill their desires; they then interpret the researcher as the agent, since otherwise they could not act.

4 comments:

  1. involving as it does the general idea of murderous desires activated from their usual condition of latency by the opportunity afforded by the pretext of circumstances, your theory strongly recalls Freud (cf. "Thoughts for the Times on War and Death", SE 14: 273-300); and so I wonder how the elaboration of your theory would avoid one of the problems that seems to dog a characteristically Freudian explanation of behavior: namely, in what sense have the high-voltage experimental subjects who sincerely attribute responsibility to the researcher *given* the latter the power to let them fulfill their desire(s)?

    I can imagine perhaps taking up Davidson's suggestions (developed in the last four essays of his collection "Problems of Rationality") of porous and semi-overlapping compartments of the mind, but I still wonder whether this "lateral"-psychological approach would address the explanatory problems the "depth"-psychological approach is designed to resolve. There seems to be a problem in how to make sense of the dynamic aspect of the desire without recourse to homuncular explanation.

    Also, I'm intrigued by your suggestion that morality itself can force on genuinely virtuous people the need to act immorally. This also is evocative of Freud, especially the final two chapters of CIVILIZATION AND ITS DISCONTENTS, and I look forward to seeing how close the affinity is between your views and his.

  2. Thanks for the comment. These are interesting suggestions. I confess, though, that as my knowledge of clinical psychology is very limited, I am loath to engage in armchair speculation about how the mechanisms might work. But I'm curious what you think the difficulty is.

    The researchers, at least, seem to assume that it is possible for the test subjects to sincerely attribute responsibility to the researcher. Granting this possibility, I am not sure what further problems my suggestion presents. The test subjects' action of continuing to press the shock button seems to fit the standard conditions for responsible action: they are adults capable of making decisions, the movement of pressing the button is under their physical control, and they ostensibly possess the reasons-responsive mechanisms appropriate to blocking their actions. In other words, it seems reasonable to say that the test subjects are, on most existing accounts of action, free to stop and thus responsible for not stopping. And yet they are able to deny this responsibility and offload it onto the researcher.

    How people do this might be fairly mysterious (though such self-deceptive phenomena are by no means rare). But if we accept that people can do this, we've already accepted that people can give someone else the power to subvert their agency. If so, what is the further problem posed by the suggestion that they are giving the researcher power to let them fulfill deep desires?

    The difference, I suppose, lies in which sort of motive the test subject acts on: either a motive of obedience to authority, or a motive of fulfilling a desire for cruelty. Do you mean that the problem with the second kind of motive is that the test subject is not explicitly aware of it? That doesn't seem to me to be especially problematic, since we act on motives we are not explicitly aware of--and motives we are unwilling to admit to ourselves--all the time.

    About the suggestion that morality itself provides a motive to evil: I'll try to work this out and post on it soon. But I suspect you'll be disappointed, as my approach is not very Freudian at all.

    What puzzles me is how to regard a person as responsible for behavior if either (1) the person herself sincerely attributes responsibility for it to another person or (2) a sufficiently informed observer attributes responsibility for it to a particular motive of which she is unaware. An account seems to be required that establishes some kind of identification of the person with the behavior that the person herself can endorse, and I don’t see how the circumstances of the experiment allow for such an account. In other words, I don’t see any point at which the offloading of responsibility onto the experimenter (or the setting in motion of the process leading up to such offloading) can itself be attributed to the subject as an agent.

    Perhaps it simply comes down to whether the subject can be persuaded in retrospect to identify with the behavior.

  4. Ah, I see. That's interesting. You seem to be more interested in the question of whether the subject can be held responsible for offloading the responsibility than in the question of whether the subject is responsible for the action itself. Though I get the sense that you are addressing both issues.

    Well, I do think it is possible for people to have and act on motives of which they are not explicitly aware, and I don't think being unaware of one's motives is intrinsically responsibility-undermining. As for the issue of endorsement, my thought is that it isn't all that important for the issue of responsibility (see Nomy Arpaly's excellent work on this issue).

    Here's an example: take a man who, at every opportunity, will cheat and steal but who at the same time believes himself to be a very moral person and refuses to endorse his cheating and stealing behavior. It seems reasonable to hold him responsible for this behavior anyway (of course there are complicating degrees here, for instance if he is genuinely incapable of stopping himself). Then there is the further question: should we also hold him responsible for failing to recognize himself as a thieving cheat? This is a tougher issue. I suspect there is a level at which questions about agents' responsibility for their own self-conception simply stop making sense. But that point lies pretty far down. If you (1) pressed the shock button repeatedly as instructed and (2) cannot accept responsibility, then I would think you acted wrongly in pressing the button and are continuing to act wrongly in denying responsibility--insofar as having a correct self-conception is a prerequisite for virtuous action, accepting a correct self-conception seems to be part and parcel of virtue.

    I like your last point. As I indicated in an earlier post, I am interested in developing a conception of responsibility that is largely first-personal and teleological.
