Showing posts with label action. Show all posts

Saturday, March 26, 2011

Thompson’s “Naïve Action Theory”: Some Questions

Michael Thompson's "Naïve Action Theory", an article reprinted as Part II of his Life and Action, hasn't gotten a lot of attention. This is unfortunate, because he bills his account as an alternative to standard accounts of action theory, and those who have paid attention to this work do tend to insist that it is novel. (With the exception of Elijah Millgram, who focuses less on what Thompson takes to be his break with accepted action theory and views it instead as continuous with Humean causal theories—in fact, Millgram tends to treat Thompson's account as a paradigm of what action theory today comes to.) But what I have not seen is an account of just what Thompson's theory entails and—more importantly—how it can function as an alternative to the sort of action theory descended from Davidson.

Thompson contrasts his account with what he calls the "sophisticated" view, namely the view that actions are primarily explicable in terms of their ends or the agent's desires or pro-attitudes in favor of those aims. Instead, Thompson argues that there is—for lack of a better word—a more primordial sort of action explanation, which he calls "naïve" action explanation. On the naïve view, an action is explained by reference to an action of which it is a part. To take Thompson's most intuitive example: "Why are you breaking that egg?" "Because I am making an omelet." Here a wider action—making an omelet—explains the narrower action of breaking the egg. (I will use the terms "wider" and "narrower" for convenience; a "narrower" action on this usage will always be a "smaller" action which is a part, or constituent, of the "wider" action.) Thus, we seem to have a radically new account of action: we explain actions not by reference to something outside agency, but by reference to other actions.

Thompson's motto, so to speak, is given by what I take to be his definition of intentional action: "X's doing A is an intentional action (proper) under that description just in case the agent can be said, truly, to have done something else because he or she was doing A." (112)

Now, here are my major questions. Some of them are answered by Thompson, though in ways that, frankly, don't make much sense to me. Some he seems to avoid addressing. But I take these to require clear answers if Thompson's alternative to action theory is to work as an alternative at all.

(1) Thompson argues that, on his view, not only can we dispense with reference to "wantings" in providing action explanations (90), but we should also alter our account of what wantings are: he urges "a complete break with the apparently uncontroversial idea that they are properly called states." (92) I am all for this; I doubt that there are such things as "mental states." But I am unclear on what Thompson's alternative is. And I take it as a basic point that his alternative—as well as his entire account overall—will, in order to be a workable theory, have to be separated from his attempt to derive the account from a grammatical examination of aspect, or from the demand "that the linguistic appearances ought to be saved." (90) One can derive whatever theory one wants from an examination of grammar; but for that theory to be interesting—at least to me—it needs to have something going for it other than that it explains or fits our ordinary grammatical usage. (At least until I see a convincing argument for the view that metaphysics corresponds perfectly to the way we speak about it.)

(2) It is true that we often rationalize actions by saying what action they are a part of. But we also rationalize actions by giving their goal. So I might say "I am going to Chicago because I am going to Evanston," or I might say "I'm going to Chicago to visit my friend." The first, I think, is fully plausible by Thompson's lights, especially given that I have to take a flight to Chicago as part of my overall trip to Evanston; thus, the overall trip from NY to Evanston, say, would include a trip from NY to Chicago as a part of the action. But the second case seems different. It doesn't appeal to wanting—at least not explicitly—but it also doesn't appeal to a wider action. My friend is in Chicago, presumably; and my trip to Chicago is not, I think, most naturally taken as part of the action of visiting my friend. Rather, visiting my friend is what I will do after I complete the action of going to Chicago; it is a separate action that occurs after the first one. Here is another one: "I am going to the hospital because my throat hurts." "My throat hurts" isn't an action at all, and so doesn't rationalize any narrower action. Both the Chicago and the hospital examples are, I think, quite naturally explained by a Davidsonian account or, say, a Korsgaardian one. Davidson: I want to see my friend and I believe that going to Chicago is a way of doing so. I want my throat to stop hurting, and I believe going to the hospital is a way of making it stop. Korsgaard: I am going to Chicago for the sake of seeing my friend; I am going to the hospital for the sake of making my throat stop hurting (where a reason is a description of the action, e.g., "doing act A for the sake of goal G," such that giving the reason shows why the action as a whole appears to the agent to be a good thing to do). These explanations seem to me more natural, more naïve, than Thompson's would be in such cases. How is his account supposed to explain this?
(This is important, since if Thompson is explicitly offering an alternative to the standard views, his alternative needs to give a compelling reason to buy rationalization by actions over rationalization by ends or wanting.)

(3) I am puzzled by the claim that an action just is something that rationalizes sub-actions or narrower actions. I can't wrap my mind around how that is an action theory at all, and this is my central concern. Thompson sets up his account as if he is giving an alternative to the current theories. But to be an alternative, it has to either explain all the same things that the standard theories explain, or it has to explain why those things are not in need of explanation. But there seem to be two things missing when we look at either the narrowest or the widest actions.

(A) What does happen at the narrowest level? I suppose eventually the actions get so small that nobody would bother asking for an explanation of them; this would suggest that there is no ontology of actions: actions are just whatever we need our theory to pick out, and we don't need our theory to pick out the tiniest units. So Thompson's claim that we can explain what happens at the lowest end of the spectrum through some theory of vagueness seems to be missing the mark altogether: he seems to think that he is giving an ontology of action; but his theory doesn't fit an ontology at all. (A related point is raised by Millgram in his Hard Truths, where he argues that Thompson's account—like all action theory—is ultimately a pragmatic one; it explains actions by explaining what we normally need our language to explain, but it leaves out "atomic actions"—such as blinking, or reading a stop sign in one glance—which don't seem to have component parts at all, because these are not actions we normally need to explain. Though I suspect Thompson can reply to this criticism by asking whether blinking and reading at a glance really are intentional actions; if they are, there is more to them than just the "atomic" component.)

(B) What happens at the wider end of the spectrum? I am breaking eggs to make an omelet. But what if someone asks me why I am making an omelet? I can't explain that by reference to a wider action, and Thompson doesn't claim that we do—his claim isn't, after all, that all actions can be explained by wider actions; only that all actions can explain narrower actions. (One suggestion here might be that wider actions are ultimately explained by reference to an agent's life; I like this suggestion and think it is probably right, but I'd like to see Thompson work it out, if this is what he has in mind.) But this means that every action is either explained by reference to a wider action, or it is explained by reference to something else—something that isn't an action. And the something else once again seems to call for a more traditional kind of action explanation, whether Humean or Kantian. I am not making an omelet, after all, because I am engaged in an action of feeding myself (in delicious ways). Here the contest seems to be between a Davidsonian pro-attitude in favor of ending my tinges of hunger and a McDowellian "conception of how to live" ("Virtue and Reason" (68-69) in Mind, Value, & Reality). Again, perhaps Thompson is rejecting the Davidsonian view in favor of the McDowellian one; but then this needs to be clearly stated, and the McDowellian position—which is hardly clear or naïve—is going to require a lot more clarification before it starts to make sense. This may be what Thompson is doing in Part 3; the question is why this is still a naïve action theory if, ultimately, action is not explained by action but by something further and more primitive.


Thursday, February 10, 2011

Deciding What to Do and Deciding What One Has Reason to Do

R. Jay Wallace: "The task of practical deliberation, after all, is the task of determining what one has reason to do." A page later, however, he refers to "the deliberative standpoint we adopt when deciding what to do."

Now, would you say that these are equivalent?

The second claim, that in deliberation we are deciding what to do, seems true by definition: that's just what deliberation is. But the first claim, that deliberation involves determining what we have reason to do, strikes me as obviously false. For one thing, I don't think we ever deliberate about things we do not already take to be reasons; if I don't think I have a reason to fail a student, then I will not deliberate about whether to fail the student. (Though there is another possibility, which may be somewhat Davidsonian: if I have a desire to fail my student, I thereby have a reason to do so. This view isn't so popular any more, and rightly so, I think; the mere fact that I desire to do something does not give me a reason. Davidson may have held it to be a reason, but not a strong one; but Frankfurt, Bratman, Korsgaard, and others place stricter requirements on reasons, such that to be a reason a desire must be endorsed, involve or at least not contradict a volitional necessity, etc.)

But more to the point: (A) "Should I do X or Y?" is just not the same question as (B) "Do I have reason to do X or Y?" Nor is it the same as what I take to be a more reasonable interpretation of B: (C) "Do I have more reason to do X or Y?" Aside from the objection mentioned above, it seems clear that deliberation is not merely about what I have reason to do; in any case, if the aim of deliberation is to decide what to do, then certainly deliberation must involve choosing among reasons. Thus, I will drop B and stick to the question of whether A or C may differ. (Of course on some interpretation of "reason", B and C mean the same thing: if I decide that, all things considered, it would be better to spend my last three dollars on ice cream than on a subway ticket, then I have a reason to spend it on ice cream. But talking in this way makes it a bit difficult to explain what it is that might rationally incline me in favor of the other course of action if not a reason, or a consideration in favor of it.)

It seems to me that they may differ. I can answer A without considering C at all; on reflection, I might recognize that I decided to do X—through deliberation—without taking myself to have a reason to do X rather than Y. A lot of people—as diverse as Davidson and Korsgaard—dispute this claim. They think that if I decide, through deliberation, to do X, I must normally have more reason to do X (if I decide that I have more reason to do Y but then do X, the situation is no longer normal, but akratic). And I think there is a sense in which this is right: if we reconstruct my deliberation, we can describe me as deciding that I had more reason to do X, and that explains why I did X; or we describe me as having more reason to do Y, and this explains why doing X was akratic. But the fact that I can describe a situation in reconstruction in a certain way does not mean that that is what actually goes on in the situation; the description is a machinery I bring in to make sense of what I did.

So it seems like "deciding what one has (more) reason to do" and "deciding what to do" come apart: we may settle the questions in isolation from each other; neither question necessarily implies an answer to the other. Moreover, only the latter seems to be the question normally at issue in practical deliberation. It may well be that in some cases of practical deliberation (call it rationalistic deliberation) we do ask "what we have reason to do" or "what we have most reason to do." But this is a very different kind of deliberation: it is, for one, deliberation that does not resolve the question of what to do without some further process, one that either involves further deliberation ("should I do what I have most reason to do?") or a choice ("I will do what I have most reason to do")—a point Wallace discusses extensively in "Normativity, Commitment, and Instrumental Reason." At the same time, it is possible to redescribe standard deliberation in terms of "deciding what one has (more) reason to do." But that I can describe my actual deliberation in terms of a rationalistic model of deliberation does not show that my actual deliberation just is rationalistic deliberation, any more than describing relations between bodies in terms of gravity need imply that there is indeed a mysterious sui generis, mathematically constituted force governing their respective motions.


Thursday, June 4, 2009

Korsgaard, Reasons, and an Internalist Problem

Korsgaard's view of reasons is an interesting one. She formulates it explicitly as attempting to fix the problems of the two dominant views, namely, the view that reasons are psychological states of the agent and the view that reasons are facts, or the good-making properties of some action or state of affairs. In place of both of these views, Korsgaard wants to defend what she takes to be an intermediate view, one that incorporates the idea that agents must take something as a reason into the constitution of reasons themselves. Agents, on her view, must be active with regard to reasons. But I worry that her view leans too far in the direction of the psychological states account.

Korsgaard's view is basically that a reason is a consideration in favor of doing something. The consideration is provided by a proper combination of both the end and what one is to do in order to achieve that end. (In her terminology, what one does is an act, and the action as a whole involves an-act-for-the-sake-of-an-end.) In asking for a reason, then, we are asking for a description of the proposed (or performed) action such that both the act and the end are specified in such a way as to make the action as a whole appear worth performing to the agent.

As she writes in "Acting for a Reason" (printed in The Constitution of Agency):
If Aristotle and Kant are right about actions being done for their own sakes, then it seems as if every action is done for the same reason, namely because the agent thinks it's worth doing for its own sake. This obviously isn't what we are asking for when we ask for the reason why someone did something, because the answer is always the same: he thought it was worth doing. What may be worth asking for is an explication of the action, a complete description of it, which will show us why he thought it was worth doing. (221)
And later:
Aristotle and Kant's view, therefore, correctly identifies the kind of item that can serve as a reason for action: the maxim or logos of an action, which expresses the agent's endorsement of the appropriateness of doing a certain act for the sake of a certain end. (226)
Now I wonder if Korsgaard has any means at all of accommodating any sort of externalist view of reasons. Reasons are, on her account, entirely up to the agent: a reason gives a description of the action such that it makes the action appear worth doing to the agent (or, to put it another way, such that the agent is motivated to perform that action). But on her account, as far as I can tell, there are simply no grounds for saying something like this: "John has a reason to push that button, even though he doesn't know it." That is, on her account—from what I can tell—a consideration can only be a reason if it is taken as such by an agent.

I suppose there is a way of fixing this. One could say that a reason is either a consideration that motivates A (or makes the action appear worth doing to A), or it is a consideration that would motivate A, were he fully aware of the relevant facts. Similarly, one would have to add: Even when A takes something to be a reason for him, it may still not be a reason. For example, John might believe that pressing the button will launch a bomb, and so take himself to have a reason not to press it. But in fact pressing the button will stop the bomb from being launched, so what he takes to be a reason isn't a reason at all. But I suspect Korsgaard does not want to go in this direction: this is why she refers, in the second quote above, to "the kind of item that can serve as a reason for action." If I am reading this correctly, then, the fact that pressing the button will stop the bomb from being launched will enter into the "kind of item that can serve as a reason" for John, but it is not a reason for John. And that seems wrong, for if John were aware of the button's function, he would recognize it as a reason, and this suggests that it is a reason for him, albeit one he does not have access to.

Her account, then, seems to be far more internalist than the one proposed by Williams ("Internal and External Reasons"). Williams, after all, recognizes that something is a reason for an agent so long as there is a path to it from the agent's subjective motivational set. But Korsgaard seems to reject this requirement: unless something is taken as a reason, it doesn't seem to be a reason at all.

In other words, I think Korsgaard's account as given is false: just because an agent takes something to be a reason does not make it a reason at all (and the fact that he fails to take something as a reason does not mean that it is not). What makes it a reason is that he could take it to be a reason, were he fully informed. (Like Setiya, then, Korsgaard portrays reasons as supervening on the agent's mental states, but she doesn't even add the proviso that none of his beliefs may be false, the way Setiya does.)


Thursday, May 21, 2009

Hornsby's Paper: Section 2

I work through section 2 of Jennifer Hornsby's paper 'Knowledge, Belief and Reasons for Acting'. Here I remain fairly sympathetic to what she takes to be the connection between reasons, actions, beliefs, and knowledge. I conclude by summing up some of the problems we encountered in section 1 and indicating where future work should go to fix these problems and thereby be able to defend all the things that I'm sympathetic to in section 2.

MAIN TEXT: SECTION 2
As readers may recall, Hornsby requires not only that we give an account of the objective and the subjective sense in which someone can be provided with a reason for acting; we also need to show how to connect these accounts. That is the topic of section 2 of her paper. To bring home such a story, Hornsby starts out from the disjunctive principle (D) (which she claims is a principle in action theory analogous to McDowell’s (1982) disjunctivism in the philosophy of perception, a view she has discussed in further detail elsewhere (see ‘A Disjunctive Conception of Acting for Reasons’)):

(D) If A F-d because A believed that P, then EITHER A F-d because A knew that P (and (thus) A F-d because P) OR A F-d because A merely believed that P.

The first thing to notice about (D) is that it is a conditional and thus is consistent with a failure of the antecedent. For instance, it might be possible to act from knowing P while failing to believe P, if knowledge does not entail belief (e.g. the unconfident student). In other words, (D) does not claim that one couldn’t act on knowledge without at the same time acting on belief. Is it a problem that (D) fails to account for cases where an agent acts from knowledge without belief? Well, that depends on one’s views about the relationship between knowledge and belief. Hornsby, however, is quite willing to admit that (D) does not cover such cases, saying that it was never supposed to: “(D) is designed to bring a wide range of cases of acting from knowledge under the head of acting from belief. And there is no need to deal with every possible case of acting from knowledge in order to do this.” That is, of course, a legitimate move, since one is always allowed to restrict one’s own explanatory ambitions. She thereby risks losing something that would be worth lumping under the same general account or principle; but it might be that no such phenomenon is at hand here. Alternatively, one could contend that (D) takes care of all cases of acting from knowledge, since intuitions concerning the unconfident student’s lack of belief arguably vary greatly and, at any rate, whatever state the agent is in when acting from knowledge is cognitively complex or belief-like enough to count as believing (Williamson 2000, p. 42). I won’t pursue the issue any further here.

Another thing worth noting about (D) is that its first disjunct is a conjunction: it says that A acts from knowing that P AND that A acts because of P. The reason is that the equation (E) from section 1 governs cases of acting from knowledge: when one acts from knowledge, one also acts from the objective reason that P, given the equation between the two. In fact, knowing P is the only way one could act for the objective reason that P, according to Hornsby. (For problems with (E), I refer you to the previous blog post on section 1 of this paper.)

An advantage of (D) is that it can accommodate cases like the following: A, who is neurotic, turns off the light and shuts the door. He now knows that the light is off and the door is shut. Still, his belief that the light might still be on torments him so much that he reopens the door in order to turn the light off. Such a case can be relegated to the second horn of the disjunction: the neurotic acts from a mere belief, despite the fact that this belief conflicts with what he knows.

The connection between knowledge and belief that (D) relies on—and what it tries to keep track of—is the sense in which “knowledge sets the standard of appropriateness for belief” (Williamson 2000, p. 47). As Hornsby notes, the above cases (the unconfident student and the neurotic man) point to this appropriateness of believing only what one knows by displaying their agents as somewhat less rational than is optimal. Mere believing is, in Williamson’s words, “a kind of botched knowing” (2000, p. 47). To act on mere belief in the absence of knowledge, or in the face of it, could therefore be looked upon as a kind of botched rationality, an idea that Stanley, Hawthorne and Williamson explore in several places. After all, one who acts without knowledge, like our aforementioned skater, fails to act for the objective reasons there are—this holds, as we saw above, even when the skater skates at the edge of the pond on the Gettierized but true belief that the ice is thin in the middle—whereas the neurotic man has no objective reason for reopening the door and checking the light: on the contrary! Finally, there is a sense in which the unconfident student should behave as if he believed his answers; after all, he knows them and is thereby licensed by standards of appropriateness to believe them. The second horn of (D) therefore takes care of any number of cases where belief is found responsible for an act, either in the absence of knowledge or where beliefs are held and acted upon in the face of what one knows.

Let that suffice as a commentary on the advantages we get from holding (D), and let us turn to whether (D) also suffices to display the connection between the objective and the subjective sense in which one acts for reasons. Hornsby underlines the important role of beliefs in explaining actions. It is crucial that we attribute a belief to the neurotic to explain why he reopens the door, and there are plenty of cases where agents act on the basis of mere appearances and false beliefs that could never be explained by applying only the objective sense in which one acts for reasons. These roles—acting in the absence and in the face of knowledge—crucially rely on some fallible, non-factive state like belief; so the extent to which reason-giving explanations or rationalizations are out to explain such everyday behaviour is the extent to which beliefs are needed in action theory. Some might object that this falls outside the scope of reason-giving explanations, and they may argue as follows: that someone acts because she believes P is no more the agent’s reason to F than the fact that a bridge collapses because it had a structural flaw is the bridge’s reason to collapse. Believing P is a mere psychological state, they may go on to argue, that may or may not cause the agent to act, whereas the agent’s reasons—the reasons she had for F-ing—are something different.

Properly speaking, this is obviously wrong in a great range of cases: someone may come to F for the objective reason that she believes P, i.e. where she knows that she believes P. For instance, if A is asked “do you believe that Schopenhauer was the greatest heir to Kant?”, the reason for acting—say, by nodding or exclaiming “yes!”—is precisely that one knows in this case what one believes about the matter. This belief might be false—which it probably is in our case—but the fact that needs to be known here is just that the agent believes the thing in question. In this sense beliefs sometimes do operate as objective reasons, as facts to be acted upon by knowing them.

Bracketing such cases, we read that Hornsby agrees with the critics: in ordinary cases an agent’s believing that P is not the reason she has for F-ing. What she goes on to say is that when we ask someone for their reason to F, they typically reply with P rather than saying they believed that P (except in cases where they retract their earlier evaluation due to being challenged and safeguard their answer by saying “I acted on my belief that P”). Since ordinary agents know what their reasons are, she suggests that we take their answer at face value. Their reason for F-ing is, in the ordinary case, P as opposed to Bp. Thus, it is the contents of one’s beliefs—those beliefs that are applied in reason-giving explanation of action—that give the reasons the agent had. But having a reason is not the same as there being a reason. The latter requires an objective reason to exist in order to be true, whereas the former says something about what the agent takes to be her reasons for acting. What is crucial for understanding agency is, as Hornsby puts it, that it “is a matter of seeing what reasons they had.” That is in line with Davidson’s earlier contention that rationalizations lead us “to see something the agent saw, or thought he saw, in his action.” (1963, p. 3) [My emphasis] Thus, to understand agents we also need to focus on what they treat as if they were objective reasons. One way to know what reasons agents have is by knowing what they believe. The point here reinforces something that Williamson thinks about the relationship between knowledge and belief, namely that “to believe p is to treat p as if one knew p” (2000, p. 46). In other words, believing something is a way of populating one’s cognitive landscape with something—a thought or a proposition—that one is disposed to treat as a fact, or as a reason to act, because what is believed is treated as if it were known.

Hornsby’s take-home message is therefore that we can understand the role of beliefs in reason-giving explanations because, as she says, “the thought that p plays the role that the fact that p plays for someone who acts because they know that p”. In this sense, we actually invert the scheme, since we seem to get a better understanding of what it is to act from beliefs by understanding how an agent acts from knowledge, thereby showing how beliefs are treated as if their contents were known facts. In the same vein, Williamson thought that he could illuminate the nature of belief in an account of epistemology via the nature of knowledge and the appropriateness relation, which says that belief aims at knowledge (2000, p. 47). So, pace the belief-desire proponents—who erroneously think that beliefs and desires can explain the whole truth about agency, when in fact they fail to account for the objective sense in which one acts for reasons—it seems as if we can only understand what it is to act from beliefs once we understand what it is to act from knowledge. According to Hornsby, then, the belief-account is wrong not because it generates falsehoods but because it fails to tell the whole truth about reason-explanation.

So where are we? Well, it seems as if all that is said and done in this section depends on the truth of the following two claims: (1) that knowledge is sometimes necessary to explain how someone could act for the objective reason that P; and (2) that there is no general way to distinguish between world-involving mental states and internal mental states. We saw that Hornsby fails to establish (1). Moreover, her principle (E) for how knowledge is involved in acting for objective reasons ran into problems of its own. Yet I think we can establish (1) by other arguments, probably drawing on lottery-type considerations, where we show how the existence of a lottery proposition—basically, a proposition that cannot be known despite immensely probable evidence favouring its truth, closing in on, but never reaching, probability 1—precludes the agent's acting for this objective reason. As for (2), I think we need to establish it in order to prevent the proponents of a belief-desire account from coming back and saying that the objective kinds of reason-giving explanations fall outside the scope of psychology. Again, I can only refer to Williamson’s and Gibbons’s work on these topics, but I do think that this claim is worth pursuing. In addition, I think the kind of view that Hornsby is championing here would be better placed if it could also provide details of how we can understand the causal relevance of knowing. Basically, what I’m asking for is a demonstration of how knowledge, as a causally potent mental property, better fits the explanatory goals of reason-giving explanations. In other words, I think, pace the internalist belief-desire proponents, that knowledge is operative in action. Allow this, and we may be on our way towards a naturalistic conception of action, one that allows for externalist or world-involving mental states in psychological explanations.

Final words: as Hornsby notes at the end of her paper, plenty of philosophers thinking about action and mind take it for granted that world-involving states—like knowledge—do not belong in psychological explanations or in rationalizations of actions. In another paper ('Agency and Actions') Hornsby quotes Strawson's old saying that it takes a really great philosopher to make a really great mistake (1974). Internalist reason-giving explanations seem to me to be such a great mistake. Or, as Hornsby goes on to say, "I can't help thinking that, these days, it takes a really great number of philosophers to contrive in the persistence of a really great mistake." With this paper, at least, Hornsby has positioned herself firmly on the right side of that divide.

Continue Reading...

Wednesday, May 20, 2009

Review of 'Knowledge, Belief and Reasons for Acting': Jennifer Hornsby

Here I assess the first section of Hornsby's paper, where she tries to support the claim that knowledge is necessary for objective reasons to occur as reasons in a reason-giving explanation of the agent's activity. In the end I argue that her argument fails to establish this and that her formulation of the principle governing acting for objective reasons must be revised. Yet I remain sympathetic to her suggestion and think that arguments can be supplied to support knowledge's essential role in reason-giving explanations, although I leave the formulation of such a principle to future work.

SHORT INTRODUCTION: KNOWLEDGE IN AGENCY
In this paper Hornsby tries to find grounds for thinking that an agent’s possession of knowledge is presupposed by the agent’s acting for reasons, and thus for the claim that acting for reasons does not come into play unless the agent has knowledge. The tendency to explore knowledge’s role in action theory is one she shares with a number of other philosophers. Jason Stanley, in a paper co-authored with Tim Williamson, suggests that knowing that p is a reason for F-ing is a necessary condition for rationally F-ing, whereas Stanley together with John Hawthorne takes the idea a step further, exploring and defending the claim that knowing that p is a reason to F is both necessary and sufficient for rationally F-ing. On the other hand, we have people like John Gibbons, who thinks that intentional action without knowledge is impossible and thus that knowledge is presupposed in some form or another whenever one says of some agent that she intentionally F-d. Hornsby seems to be on roughly the same track as Gibbons, since she too is exploring the metaphysical foundations of action rather than merely asking, like Stanley, Hawthorne and Williamson, about the norms or ethical principles that govern rational conduct. Important as the latter question is, Hornsby sees herself as going beyond the normative question to pose questions about the metaphysical constitution of actions.

Hornsby seems to start out by picking up a clue from Williamson’s suggestion (2000, p. 62) that knowledge must sometimes figure in the best explanation of why some agent F-d. According to him, attributions of knowledge may be a better predictor of someone’s actions by lending more probability to a certain course of conduct. The intuitive example is the rational burglar who risks a lot by searching the whole building for a valuable diamond. The only way to understand why a burglar would take such a risk is, according to Williamson, by attributing to her the knowledge that the diamond is in the building. Otherwise it would be hard to explain why she apparently disregards evidence to the contrary—i.e. as time goes by and her search remains unsuccessful—on pain of diminishing her rationality (declaring her to be just plain stubborn, insensitive to evidence, etc.). Another example would be the case where someone, say your mother-in-law, comes to your door, rings the doorbell, and you consider whether or not to open it. The outcome of your deliberation should depend on whether you have reason to think that she knows that you’re home or merely has a true belief to this effect. In other words, the predictive outcome—i.e. whether your mother-in-law becomes insulted or just disappointed—depends on the presence or absence of knowledge. The relevant generalization needed to explain these cases—which, by the way, is how Williamson likes to think about the notion of causality—thus seems to depend on knowledge in certain intuitive cases of human conduct. In this sense knowledge is causally and psychologically relevant to human conduct.

However, if such cases go through, they suffice to show that there are at least some cases where attributing knowledge to the agent provides the best explanation of his or her behaviour. Thus knowledge is sometimes needed in psychological explanation, since in those cases restricting the discipline from attributing cognitive states like knowledge to the agent would severely decrease its explanatory power. Beliefs won’t suffice to rationalize or to provide a reason-giving explanation here, so, pace Stephen Stich and psychology’s restriction to autonomous behavioural descriptions (i.e. descriptions of ways of acting such that if you would act in a certain way in a given setting, so would a replica that shares all your current, internal, physical properties (Stich, Folk Psychology, p. 167)), we should think that what knowledge adds to belief is psychologically relevant. (Of course, one could reply that such limits to psychological explanation might be just what we should expect, since the discipline is, after all, not an attempt to explain everything. Agreed; however, I think Gibbons’ paper on this topic provides plenty of cases that one would like to explain psychologically and yet cannot without attributing knowledge to the agent. Thus we should reject Stich’s restriction to autonomous behavioural descriptions. The point is that since which psychological states there are is at least partly determined by what one needs in order to explain human behaviour, it seems arbitrary to exclude knowledge as psychologically irrelevant unless one can provide a principled distinction between the states that are psychologically relevant and those that are not; again, the failure of such attempts is discussed in both Gibbons (2001) and Williamson (2000, chapters 2 and 3).)

MAIN TEXT: SECTION 1
What Hornsby wants to do is take these ideas a bit further and actually put knowledge into the constitution of a specific kind of agency, namely the agency that goes by the label acting for reasons; or, as she puts it, “until it is allowed that our knowing things explain our acting, our acting for reasons is not in view”. In order to get there we need a couple of preliminary distinctions. An intuitive and much-discussed distinction in reason-explanations is between the subjective sense in which something is a reason for F-ing and the objective sense in which something can be a reason. The latter is, according to Hornsby, a matter of fact that obtains regardless of whether the agent is considering that reason as a reason for F-ing. The former is another kind of fact: namely, the fact that the agent considers some p (whether true, false or plain stupid) to be a reason for F-ing. The following schemas cash out this distinction:

OBJ: A reason for A to F was that p: p
SUBJ: A had a reason to F: she believed that p: Bp

The distinction can be appreciated with an example: suppose A skates along the edge of a pond and clearly avoids skating in the middle of it. Suppose further that the ice is too thin for skating in the middle of the pond. Now, according to Hornsby, that fact is a reason for A to skate on the edge of the pond and to avoid the middle of it. For short, call that fact P. A may now be acting for a reason in the objective sense, thus skating on the edge of the pond for the objective reason that P. On the other hand, there is another way in which A may have a reason for skating at the edge of the pond, namely the subjective sense in which she believes the ice in the middle of the pond to be thin. What’s more, her belief that the ice is thin might also be the fact that causes her to skate on the edge. Hornsby’s point is that both are reasons for the agent to skate at the edge of the pond, given that she desires or wants to remain safe and dry. When we explain human actions we need both, and we need to show how they are related. Hornsby’s claim in this paper is that in order to achieve both ends—i.e. explaining human conduct and showing how those different action explanations are related—we must credit agents with knowledge. Let’s see if she can establish this claim.

The reason why we need subjective explanations may seem obvious to some, especially to the Humeans in action theory, like Donald Davidson, who thinks that a reason figuring in a reason-giving explanation or rationalization must “lead us to see something the agent saw” (Davidson 1963, p. 3); but it might be worth rehearsing its intuitive appeal. Hornsby resurrects Bernard Williams’ example in which someone makes a mixture of petrol and tonic because he wants to drink gin and tonic and believes the petrol to be gin (despite the smell...). In such a case there was no reason for making this particular mixture. After all, what the agent wanted was something of a totally different kind; what’s more, the mixture could be quite toxic and dangerous to the agent’s health (no suggestion here that gin and tonic is particularly healthy either...). But none of this means that A had no reason to make the mixture, since A still had her reasons for making the mixture as she did. The distinction we need here is between two existential claims: the claim that there is no reason for A to F is not identical to the claim that A had no reason to F. Or in symbols:

¬∃p (p is a reason for A to F) ≠ ¬∃q (q is A’s (or her) reason for F-ing)

In other cases one’s belief, although true, might be rather silly and not even remotely connected to what one wants to achieve by acting. Hornsby calls these cases benighted agency. For instance, if I truly believe that my 30th birthday is 10.02.2009 and take that to be a reason for me to make a first bid of 30.100.220,09 $ for a flat in Queens, there is a sense in which I am clearly benighted in my activity. Such examples are supposed to show the need for the subjective sense in which something is a reason for F-ing: these reasons may be true, false, silly or just plain stupid, yet they are the reasons for which A, as a matter of fact, is F-ing, since the agent takes them as reasons for F-ing and acts on them as such. What happens in those cases is that the agent acts on her belief that P is a reason for F-ing; and then, from A’s perspective, there would be an objective reason to F if P were true.

To account for the objective sense in which something is a reason for A to F we need only point to the fact that P is a reason for A to F, i.e. OBJ. However, to make that reason figure in a rationalization or reason-giving explanation of A’s activity it is not enough simply to cite this objective fact. After all, there might be a perfectly good reason for me not to be writing this blog post at this moment; but that does not make it a good reason to cite, as it stands, in a rationalization of why I am struggling to write it. Would a belief do the trick? That is to say, would it capture the objective sense in which something figures as a reason in a rationalization of one’s activity if the agent truly believed that P was a reason for her to F and acted on this belief? According to Hornsby, the answer is clearly no: the true belief that P is a reason to F does not add up to all we want from the rationalization. The true belief could be the result of a mere happy conjecture or a lucky happenstance, in which case Hornsby thinks that “inasmuch as the skater’s belief could have been false, the skater’s believing what she did can hardly provide her with the reason that there was for her to keep to the edge.” For instance, if A was told that the ice was thin by an otherwise reliable friend who, for whatever reason, was out to trick her out of skating at the centre of the pond, it will be true that the ice was thin (unbeknownst to the friend), and A will have and act on this true belief. Yet there is a sense in which the friend’s attempted trickery ruins the way in which we expect the agent to be connected to her reasons for acting; it takes something more than a Gettierized, justified, true belief that P is a reason to F to be provided with that reason in the objective sense.

Of course, any interesting or empirical belief “could have been false”, so I suppose we should read Hornsby’s suggestion charitably as saying that a mere belief “could easily have been false”. By that I mean that one’s belief, although true, could have been true as a matter of epistemic luck—as shown by the Gettierized case—and that Hornsby thinks this presence of epistemic luck suffices to block the agent from being provided with the reason there was for her to skate at the edge of the pond. If that reading is correct, we can begin to appreciate the intuitive connection between agency and epistemology/knowledge: we could say that something goes missing in the possibly Gettierized scenario, and that “what one needs for one’s true belief to provide one with a reason for skating on the edge of the pond is that the belief be not only true but also epistemically reliable (i.e. holding true in all of one’s epistemic alternatives)”. Here the reliability relation could be defined via an ordinary accessibility relation in modal logic: a function from the world one is in (@) to the possible worlds one, for all one knows, could be in (i.e. the set of worlds consistent with all one’s evidence in @). P is then the reason for A’s F-ing (i.e. the reason because of which A F-s) only if P is (a) true; (b) believed; and (c) reliably based. In the Gettier case condition (c) fails, and we will have to say that A kept to the edge of the pond not because the ice was thin but because he believed (correctly) that the ice was thin. So his true belief does not provide him with an objective reason for acting because it fails to be reliably based; adding the true belief to the objective reason merely gives you another subjective sense in which P is a reason to F.
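The three conditions can be given a toy Kripke-style rendering. What follows is my own illustrative sketch, not anything from Hornsby's paper: worlds are labels, P's truth-set is a set of worlds, and "reliably based" is read as P's holding in every world consistent with the agent's evidence.

```python
# A minimal sketch of the three conditions on P being the reason for A's F-ing.
# All names and the model itself are my own illustration.

def provides_objective_reason(p, actual, accessible, believes_p):
    """p: set of worlds where P is true; actual: the actual world @;
    accessible: worlds consistent with the agent's evidence in @."""
    is_true = actual in p            # (a) P is true
    believed = believes_p            # (b) the agent believes P
    reliable = accessible <= p       # (c) P holds in all epistemic alternatives
    return is_true and believed and reliable

# Worlds: w0 = actual world (ice thin), w1 = ice thin, w2 = ice NOT thin.
thin_ice = {"w0", "w1"}

# Normal case: the agent's evidence rules out w2, so (a)-(c) all hold.
assert provides_objective_reason(thin_ice, "w0", {"w0", "w1"}, True)

# Gettier-style case: the trickster's testimony leaves w2 epistemically open,
# so the true belief is not reliably based and condition (c) fails.
assert not provides_objective_reason(thin_ice, "w0", {"w0", "w1", "w2"}, True)
```

On this rendering the skater's Gettierized belief is true at the actual world and believed, yet fails exactly where the blog post says it should: at the reliability clause.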

Now if A were to know that the ice was thin and act on her knowledge, she would satisfy the reliability condition—after all, knowledge requires reliability—and there would then be no obvious ground for denying that the fact that the ice is thin provides A with an objective reason for skating on the edge. So the presence of knowledge is enough to provide A with an objective reason for which she acted. But Hornsby makes the further claim that knowledge is also necessary, i.e. that a condition on F-ing for the reason that p is that one knows that p. Her Gettier case obviously does not establish that knowledge is necessary; reliability, for all that has been said so far, could possibly be supplied by other means. Yet knowledge is a plausible candidate, and one that frequently occurs in normative and rational evaluations of people’s activities (for evidence, see Stanley and Williamson; Stanley and Hawthorne). We might also think that the necessity requirement could be established via lottery considerations, i.e. cases where the justification required for acting with an objective reason is pressed increasingly towards probability 1 (= knowledge); but I won’t go into this now. Suffice it to say that more is needed—and can probably be provided—to support Hornsby’s main claim that: “We act for reasons in virtue of our having knowledge of relevant facts. As agents, we rely upon our often being, so to speak, the conduits of facts.”

We should note, in passing, that Hornsby’s suggestion does not preclude p’s being a reason for A to F even when A is unaware of it, fails to know it, or merely believes it. The point is rather that as soon as he acts, his F-ing can only count as F-ing for the objective reason that p if A also knows that p. Failing that, he would merely be F-ing for his subjective reason in accordance with his objective reason, i.e. acting on the correct belief that p was a reason for F-ing. So the fact that p is a reason for A to F can only occur in an explanation of A’s actual F-ing if A knows that p is a reason for A to F. In this sense, Hornsby’s suggestion is completely on a par with Davidson’s requirement that a reason can only figure in a rationalization of someone’s behaviour if it “leads us to see something the agent saw” (1963, p. 3); explaining that A knows that P is a reason for A to F just is a way of coming to understand something the agent saw. Davidson also lists knowledge as one of the cognitive attitudes that can be combined with a pro-attitude (desires, wants, etc.) to yield the primary reason that rationalizes A’s intentional behaviour.

Anyway, Hornsby goes on to suggest that (E) captures what she takes her Gettier case to establish:

(E) Where ‘x F-d because p’ gives a reason-explanation (x F-d because p iff x F-d because x knew that p).

I see a problem with (E): it appears to be a version of the KK principle, and thus leads to the absurd consequence that follows when one applies an S4 model to the accessibility relation governing the epistemic operator. That is to say, Hornsby’s suggestion can easily be shown to require much more reliability and knowledge than first assumed. Here’s why:

Read F(x, p) as ‘x F-d because p’
Read K(x,p) as ‘x knew that p’

Then, on my reading of (E) as an instance of what is troublesome in the KK principle, it would follow from F(x, p) that F(x, K(x,p)), then F(x, K(x,K(x,p))), and so on. In short, whenever one acts for the objective reason that p, one would have to have not only Kp but KKp, KKKp, etc. The reason this is a problem is that, according to Hornsby, the presence of knowledge adds reliability and thus restricts one’s epistemic possibilities: the space of epistemic possibilities shrinks with every iteration of knowledge. Thus, the extent to which (E) iterates knowledge requirements is also the extent to which a higher epistemic standard is required whenever one acts for the objective reason that P. My allegation is therefore that (E) commits one to an impossibly strict epistemological standard for acting for objective reasons.
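To see mechanically how iterating K raises the bar, here is a small Kripke-model check. This is my own toy illustration, not anything in Hornsby's paper, and I use a reflexive but non-transitive accessibility relation (in a fully transitive S4 model Kp already entails KKp, so the shrinking effect shows up most clearly below S4): the set of worlds at which Kp, KKp, KKKp hold can strictly shrink at each step.

```python
# Toy Kripke model: K(S) is the set of worlds at which the agent knows the
# proposition whose truth-set is S, i.e. worlds whose accessible worlds all lie in S.

worlds = {0, 1, 2, 3}
R = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}, 3: {3}}   # reflexive, not transitive

def K(S):
    """Worlds where every R-accessible world is in S."""
    return {w for w in worlds if R[w] <= S}

p = {0, 1, 2}          # truth-set of P
print(K(p))            # {0, 1}: worlds where the agent knows P
print(K(K(p)))         # {0}: knowing that one knows P is harder
print(K(K(K(p))))      # set(): no world survives the third iteration
```

Each application of K demands that P (or the previous knowledge claim) hold throughout one's epistemic alternatives, so the truth-set can only stay the same or contract; in this little model it contracts to nothing by the third iteration, which is the structure of the objection above.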

The best way to deal with this objection would be to point to the scope of the principle, since its application is supposed to be guarded by a qualification: it applies only where ‘x F-d because p’ gives a reason-explanation. One could hope that this qualification would be enough to block the reiteration that makes (E) an instance of the fallacious KK principle. However, (E) is easily turned into an instance of the KK principle by Hornsby’s own words, since she regards both the left-hand side and the right-hand side of the equivalence as reason-explanations. Thus, whenever ‘A F-d because p’ is a reason-explanation, it follows from the equivalence in (E) that ‘A F-d because A knew that p’ is a reason-explanation too. Given that, nothing stops us from reapplying (E) to ‘A F-d because A knew that p’, since (E) is a universal principle meant to apply whenever something of the form ‘x F-d because p’ gives a reason-explanation: and ‘A F-d because A knew that p’ has that form, since ‘A knew that p’ is a fact too. On the reapplication, the variable ‘p’ in the schema is simply replaced by ‘A knew that p’; putting that into (E) gives back ‘A F-d because A knew that A knew that p’. This process can be repeated as many times as you like, yielding an infinite number of iterations of knowledge as a general requirement on objective reason-explanations. In other words, a reason-explanation of the form ‘x F-d because p’ requires not only that one knew p but also that one knew that one knew p, knew that one knew that one knew p, and so on ad infinitum. Again, the trouble is that each addition of knowledge adds further reliability, thereby restricting the epistemic possibilities: in the limit, one would need to know so much that all epistemic possibilities are ruled out, i.e. one would need to know exactly which world one resides in in order to act because that P. And no one, except an omniscient being, knows that much, which is an intolerable consequence of Hornsby’s suggestion. At least, so it seems to me.

A possible solution would be to insist on two senses of ‘because’ here: we could read ‘A F-d because P’ as the sense in which something is objectively a reason for F-ing without being the ‘because’ that figures in a reason-giving explanation, and think of ‘A F-d because* she knew P was a reason for her to F’ as the ‘because’ that does figure in a reason-giving explanation. But that runs counter to what we seek to explain, namely what is needed for the objective sense of being a reason to F to figure in the reason-giving explanation of F-ing. As far as I can see, Hornsby is in real trouble here, and I see no easy way out. (Note: replacing the biconditional with a conditional won’t help either, since the consequences hinge on only one direction of the biconditional.)

Bracket this problem and I’m in sympathy with Hornsby’s general idea that, as she says, adding knowledge to the soup is an elegant way in which something can be a reason for F-ing while at the same time accounting for the agent’s motivation. Knowledge is factive, so Kp entails p; knowledge is also arguably a cognitive or mental state, and thus can figure as a causal factor in the explanation of one’s acts. We still need a story about the subjective sense in which something is a reason for F-ing; or rather, we need to connect such a story to the one about knowledge. That is Hornsby’s topic in section 2 of the paper, to which I will return in a forthcoming blog post.

REFERENCES:
Davidson, D. 1963, ‘Actions, Reasons, and Causes’
Gibbons, J. 2001, ‘Knowledge in Action’
Hornsby, J. 2007?, ‘Knowledge, Belief and Reasons for Acting’
Williamson, T. 2000, Knowledge and Its Limits

Continue Reading...

Monday, March 16, 2009

Intentionality and the Object of Moral Perception: Ricoeur's Challenge

Ricoeur tantalizingly challenges the Husserlian (and common sense) notion that the intentional object remains the same throughout various intentional acts. Consider, for example, the following: “that person with the heavy bags needs a seat” vs. “that person is standing with heavy bags.” On the common view, the intentional object, “the person standing with heavy bags,” is the same in both cases. This view, that the intentional object is given an identity through an act of understanding, is central to standard accounts of moral perception and is an important point for philosophy of mind and agency.

To work out the common view, let me take a version of the standard account from Angela Smith, who takes moral perception to be a case of “seeing under an aspect” (I do not mean to imply that this is Smith's own view; she suggests that it may be mistaken in the paragraph that follows):
A morally insensitive person may, in a literal perceptual sense, “see” exactly the same thing as a morally sensitive person—for example, that a person is standing on a crowded subway with two very full grocery bags. What differs is that the morally sensitive person sees this person as uncomfortable and in need of a place to sit down, while the morally insensitive person does not. (1, 259)

This point is taken to be independent of the further point about moral perception that a morally sensitive agent is more likely to notice features of her surroundings that call for a moral response (perceptual salience). Here, the issue is rather how, or under what aspect, the morally sensitive or insensitive person perceives a situation provided that both have already noticed it. And this view—that intentional objects are somehow basic particular units of meaning that, already constituted, can enter into various intentional acts—has some obvious support: If, for example, I am to want to have chicken soup for dinner, then “having chicken soup for dinner” or something of the sort must have a meaning independently of my particular act of wanting it; after all, the very same object must be able to play a role in my epistemic judgments, or else I would never know how to satisfy my desires.

This is the sort of view Ricoeur has in mind. He calls on us to consider the following infinitive proposition: “I am to go on a trip.” This grammatical form

Is a neutral signification which could be incorporated in acts of different quality. It will occur some day that “I shall go on a trip”: here the meaning is at the same time called and held in suspension by its hypothetical modifier. In a decision the meaning is inserted into a positing of existence which is not stated but is affirmed as depending on me… (2, 43)

So what is the common meaning in these intentional acts? Ricoeur rejects the idea that the common meaning is given by a founding act of understanding, which allows it to enter into other intentional acts such as willing, hoping, predicting, etc. Nor “is it a primitive judgment of existence modified afterwards as a wish or a decision” (44).

Ricoeur's own view is that,
this meaning is distinguished only by abstraction from the concrete act of stating, wishing, ordering, or deciding… This proposition is not a judgment about that which I state, hope, command, or will, but a convergent product of abstraction, formed in the context of a reflection on acts and their objects (43-44)
Thus, the intentional object of a wish is not identical to the intentional object of the understanding; the identity of the two objects is not primary, but is established through a later act of abstraction. Similarly, the perception of a person standing with heavy bags will not be identical to the perception of a person with heavy bags in need of a seat: these intentional acts have a different quality, and are filled by different objects. (Ricoeur makes a similar point in “Methods and Tasks of a Phenomenology of the Will,” published in (3), though in similarly vague terms and also without any clear analysis of the implications. If anyone is familiar with further sources, please let me know.)

One way of bringing this out is by going back to the distinction I mentioned above, between seeing something under an aspect and noticing it at all. We can, of course, make this distinction in abstraction, but it is not at all clear that we can draw any fine line. For one thing, to take the example Smith uses, it seems a fact about the situation that the person with heavy bags needs a seat. So the morally sensitive observer is not adding something of his own to the situation; rather, he is simply seeing the situation for what it is. That the person with heavy bags needs a seat is part and parcel of the perceived situation, and it is a feature of the situation that the morally insensitive person simply does not notice. Similarly, an even less sensitive person might fail to notice that the bags are heavy, or might fail to notice a person standing with them at all. “Seeing under an aspect” is easily distinguishable from perceptual salience only if we assume that the “aspect” under which a perception might be seen is something added by the agent’s subjective attitudes, in opposition to what is objectively there to be perceived. But if we accept a moral realist picture, the “aspect” is really there, to be noticed by any sensitive observer in the way that the person with bags is really there.

So why does this matter? For one, if Ricoeur is right, we have to reexamine the standard classification of cognitive and conative acts in terms of directions of fit. For another, it suggests that valuation is integral to perception rather than projected on it, perhaps as some secondary quality. Of course the account would—to pose any serious challenge—still require a serious work-up of how a secondary act of abstraction, through which sameness of meaning is determined, could serve to unite our various judgments (say, judgments about what we want and judgments about how to get it; or judgments about moral responsibility and judgments about moral desirability). In any case, I suspect there is a way to pull off such an analysis by working out exactly how second-order acts govern first-order acts.

PS. What looks like a blue jay just pooped on my copy of Smith's paper. A spirited philosophical debate at last!


References:

(1) Angela Smith, “Responsibility for Attitudes,” Ethics 115 (January 2005): 236-271

(2) Paul Ricoeur, Freedom and Nature. Chicago: Northwestern University Press, 1966.

(3) Paul Ricoeur, Husserl: An Analysis of His Phenomenology. Chicago: Northwestern, 2007.

Continue Reading...

Tuesday, July 22, 2008

Agency, Endorsement, and Identity: A Case for Phenomenological Intervention

I often try—usually unsuccessfully—to push the idea that philosophy of action would benefit from a serious interaction with phenomenology. I tried to give an account of this to a well-known philosopher a few days ago but, partly because I was being overly exuberant and at the same time not entirely coherent, I got the distinct impression that he thought I was an idiot. Here I want to sketch out one place where I believe action theory needs phenomenology: on the issues of endorsement and identity. I am going to argue that a phenomenological account is needed to bring out the ways in which our agency is both creative and passive in such a way that acting on motives we do not rationally endorse may yet strengthen or at least express our agency.

In the latest incarnation of his theory of identification, Frankfurt has argued that agents can be said to identify with a first-order desire when they both have a second-order volition to act on that desire and are satisfied with that second-order volition. This account has been widely accepted, but Frankfurt’s conception of what the satisfaction comes to has come under constant fire. Frankfurt conceives of the satisfaction as something like a lack of motivation on the part of the agent to revise the second-order volition, and he admits that there are many acceptable reasons why agents might be so disinclined toward revision: for example, they might simply be bored with the self-reexamination involved; or, they might even be manipulated into the satisfaction. Most (as I understand it, Velleman, Bratman, and Ekstrom, among others) are not satisfied with this view of satisfaction. The reasons vary, but the core problem is that satisfaction alone, thus conceived, does not seem to be a sufficiently agential process.

But I think Frankfurt is on to something important: he is rejecting a standard view of agency. On this sort of standard view, agency is an active process through and through; this activity, in fact, is what differentiates agency from (supposedly) passive processes, such as perception and belief formation, or even coming to have motives (as opposed to endorsing them). And this view seems to me slightly mistaken. Its basis is an idea, expressed for example by Korsgaard, along the following lines: When we encounter a motive (such as a desire), we cannot just act on it. Because we are self-conscious, we are detached from our motives, so that we can take them or leave them. That is, we can endorse or reject them. From a first-person practical perspective, in which we must decide what reasons to act on, we are under the necessity of choosing among our motives rather than simply following whichever motives might pop up. As an account of practical reasoning in abstraction, something like this is probably right: when faced with evaluative judgments provided to us by desires, we can either use those judgments as premises in our reasoning to reach a conclusion, or we can override them with other judgments. But is this account correct of our actual deliberative processes or our typical decision-making?

I think it leaves out a rather salient feature of our phenomenology, and this is the point at which phenomenological accounts are needed as correctives to overly rationalized or intellectualized views of agency. The feature is this: We sometimes find ourselves saddled with motives that we would not endorse, on deliberation, as good motives to act on. We might, in fact, reject them on every possible ground, from their negative consequences in our means-end reasoning to their apparent undermining of our pursuits of the things we care about. But these motives might nevertheless come with, one might say, built-in endorsement. They appear to us as agency-defining for us, individually, as the persons we are. I might, for example, believe that all sorts of things are worth sacrificing some of my pride for. And if I deliberate seriously on the question, I might in the end decide that in some cases I ought to bite the bullet and overcome my pride. But faced with a concrete situation, I find that pride-based motives appear with a certain agential authority that I have not given them through any deliberation. While I may override these motives, either through impulsive action or through further deliberation about the benefits of doing so, I find that these deliberations smack to me of rationalization.

There is a problem in cases like these. The mechanisms of practical deliberation normally taken to be agency-bestowing appear here as the exact opposite: from the first-person practical perspective, I am distanced from my deliberation, so that while I endorse all the premises in the deliberation, I still cannot help treating the process as a rationalization, undermining the agency-laden motives of pride. I might have every (good) reason to swallow my pride here, and yet I find that every such reason undermines my sense of my own identity and my own agency. The deliberate, rationally endorsed course of action comes up against a practical identity that I do not in any obvious sense endorse, but that I experience as somehow self-endorsing. (Think of John Proctor in Arthur Miller’s The Crucible bellowing, when asked why he will not sign his false confession, “Because it is my name!”)

The obvious existence of cases of this kind, I think, lends Frankfurt’s account much of its credibility. But I am not at all convinced that a view like Korsgaard’s can accommodate such cases. On her view, after all, such self-endorsing motives are necessarily agent-undermining, since they seem to involve something “acting on me or in me”. I agree that, thus described, self-endorsing motives are agent-undermining. But we can redescribe them as follows: We are complex organisms with complex mental economies. Some items in these economies are central and largely irrevisable; perhaps we could revise them with a massive amount of work, but for the most part they are likely to serve as the cornerstones in all our processes of deliberation and endorsement in such a way that attempting to revise those items would itself be a self-undermining process, a sort of conflict of the will with itself. It is precisely because those items are central to our deliberative processes and our self-conception, and not because we actively endorse them upon deliberation, that they appear to us as agency-laden rather than agency-undermining. If they appear to undermine our agency, it is because they get in the way of other things we want or care about. In other words, they seem to undermine our agency only if we think of agency as entirely unfettered by non-deliberative motives, and if we cannot accept the idea that we discover some norms within ourselves rather than coming to them through deliberation.

My claim, in other words, is that we do not fully create our wills; we also discover ourselves to have de facto irrevisable wills. Because they are de facto irrevisable but in principle revisable, the phenomenology here comes into conflict with theories of agency that reject any passive component to agency, i.e., any component that we do not actively endorse and that we cannot reject without great harm to our practical identities. And this is also a case where, I think, the phenomenology has the upper hand: faced with self-endorsing agency-laden motives, I may well be aware that I could, in principle, withhold my endorsement of them; but this thought is only an abstraction, born of a self-deceptive view of the agent as a mind fully in control of itself. But at the same time, the motives we endorse and the self-endorsing motives we encounter usually work together more or less harmoniously. Agency is, one might say, a composite of what we are and what we make of ourselves. Phenomenology is in a unique position to study the functioning of this composite. Granted, it cannot exclude the problems of exhaustion-satisfaction or manipulation-satisfaction. But this shows only that phenomenology is not sufficient for an account of agency; not that it is not central to working out such an account.


Monday, July 21, 2008

Normativity and the Causal Theory of Action; Some Concerns About Causalism

Given the recent concerns by David Velleman and others about the “chilling effects” of blogging about conferences, I am a bit hesitant to say too much about the conference on “Normativity and the Causal Theory of Action.” But it was a superb and very interesting gathering, so I’d like to at least offer a few reflections. I don’t think I have anything to say that could even potentially be construed as negative, and if anyone from the conference objects, I would be happy to take down any of the points. In any case, this was a really impressive group of people, and many thanks go to Markus Schlosser, Bryony Pierce, and Finn Spicer for organizing it. The faculty and post-graduate students from Bristol, other UK universities, and a few visitors from the Continent provided a spectacular stream of comments that largely had me in awe. Bristol, I should add, is a gorgeous city, with winding streets criss-crossing at different levels in three dimensions; more than enough for a serious flâneur. And it isn’t every day that I get to stay in a dorm room within a 1740s Palladian villa.

Interestingly, the conference did not really attain its initial goal, originally stated as being to bring together critics and supporters of the causal theory of action (CTA). It did not attain this for the simple reason that all five speakers accepted CTA, at least in some minimal form. Though some of us were a bit critical, no one argued against such theories wholesale; the papers focused on attacking specific versions or formulations of CTA, or on raising problems that causal theorists have yet to resolve, rather than on rejecting CTA altogether. Thus, Lynne Rudder Baker defended CTA, but struck a blow against any version on which actions are caused by neural events, providing a quite brilliant argument to the effect that action-causing mental states are constituted by, but irreducible to, their neural substrates. Matthias Haase questioned the extent to which CTA can account for rule following. Maria Alvarez provided a strong account of reasons as facts, attacking causalists for speaking of reasons as the causes of actions, as if reasons were reducible to mental states. And I (on a charitable reading of my paper) argued that CTA is only part of the story of action explanation; the other part has to be hashed out through agent-constituting narrative accounts, which specify exactly what it is, within the agent’s psychic economy, that rationalizes each action.

But while all of us were open to endorsing at least some version of CTA, Michael Bratman was the conference’s major defender of causalism. Having never seen him in action before, I must say that his reputation is well earned. He has the ability to get to the philosophical core of every paper, and thus his comments sometimes proved especially devastating. In his own account, he raised three features central to human agency (planning, identification, and rational guidance), argued that these are fully compatible with CTA, and insisted that we need CTA for two major reasons: (1) If we accept that the actions of non-human animals are causally produced by features of their psyche, we need CTA in order to retain continuity between those animals and ourselves. (2) We need CTA to meet the Davidsonian challenge of distinguishing between acting with a reason and acting for that reason—that is, we need a way of specifying the connection between an action and the motives for which the action was actually performed, as opposed to the motives the agent simply happened to have at the time of action, but did not act on. Bratman added to this the consideration that, if we are to be able to speak of acting for reason R, and acting for a different reason while thinking (perhaps through simple error, or self-deception) that we are acting for R, we need an account of what the right connection—acting for R—comes to; CTA gives us this.

Ultimately, though I find these considerations important, I am not fully convinced. Let us formulate two sorts of objections to CTA. Objections of the first sort argue that planning, identification, and rational guidance are incompatible with a causal account. Those objections, I think, are amply answered by Bratman, Velleman, Mele, Bishop, and others. But now take objections of the second sort, which might go like this: the features central to human agency are, e.g., planning, identification, and rational guidance. These features may well be compatible with a causal account of action. But these concepts themselves are not causal ones. Thus, the objection might go, although giving a complete account of agency need not rule out CTA, it need not appeal to it either, since the features central to agency are explicable apart from any reference to causal relations. I think we can answer the first of Bratman’s points: we can grant that there is no radical break between human and animal kinds of agency by accepting CTA as running in the background of any metaphysical account of action. But certainly we need not foreground CTA, especially since we can explicate the features central to human agency without constantly returning to the continuities between human and animal agential powers.

Thus it is really the second point—Davidson’s original one—that seems to require CTA. It rests on the question of whether we can give a coherent account of what it is to act for a reason (as opposed to merely acting with a reason) without appealing to causality. I’ll give here a brief, and all too incomplete, suggestion: we had better be able to give such an account, since CTA requires it. The reason is this: saying that X caused Y isn’t very meaningful unless we can give a further account of what that causal relation consists of. For example, we might say that the solubility of salt (together with the salt's being placed in water) causes it to dissolve in water, but this claim conveys little information apart from a detailed account of the dissociation of NaCl into its constituent ions in H2O. That account cannot, in turn, appeal to any causal claim, since it is supposed to make the causal claim meaningful in the first place. Similarly, if we are to explain sentences of the type “Agent S’s action A was caused by motives X, Y, Z”, we need to lay out what causation by these motives consists in, and we need to do this in non-causal terms. If (in Mele’s example) Al mowed his lawn in the morning because this was a convenient time to mow his lawn and not because he wanted to get back at his neighbor for waking him up early last week, then we need an account of the relation between his belief that this is a convenient time and his mowing the lawn, and an account of how this is different from the relation between his wanting to get back at his neighbor and his mowing the lawn. And these accounts seem to require talk of rationalization and the agent’s psychological economy that is not in turn dependent on any causal talk.


Tuesday, July 15, 2008

Off to the Continent, Plus Action Theory Abstract

I might be taking a blogging hiatus for a bit, as I'm leaving for Europe and don't know how good or regular my internet access will be over the next month and a half. It should be fun. Three philosophy events planned:
Bristol: Normativity and the Causal Theory of Action (One day conference)
Cologne: Meaning and Its Place in Nature (Workshop with Ruth Millikan)
Krakow: European Congress of Analytic Philosophy (Really long conference)

I'll try to post about these if I get a chance. All should have exciting things to offer. For now, I'll throw out my abstract for the Krakow Congress, which is a revised version of the Bristol paper, which in itself has grown more sophisticated and, hopefully, a heck of a lot clearer than this original (that, and I managed to squeeze Lacan into the newer versions):

The First Person, Mental Holism, and the Causal Theory of Action

Following Davidson, the dominant view in the philosophy of action takes actions to be caused by the mental states that rationalize (i.e., provide reasons for) those actions. But objections to the view remain on the grounds that the causal theory seems to be out of sync with some aspect of our first-person perspective. On the one hand, it is argued, we do not actually experience our mental states as causing our actions. On the other hand, the relation between our motives and actions seems to be a normative rather than a causal one. I will argue that there is no real problem: there is, ultimately, no good reason to reject the causal picture on first-personal grounds. On the other hand, once we work out the real force of the appeal to the first person, the picture of causality we are left with is immensely uninteresting.

What gives rise to the first-personal criticisms of the causal theory is a difficulty about the mental itself: mental states seem to depend for their existence on being apprehended, or potentially apprehended, by consciousness. Mental states depend for their very existence qua mental on the way in which they can be apprehended. If this is right, then it follows that accounts of mental states and their relations must be developed from a first-personal perspective. But from a first-person perspective, we do not seem to experience our motives as causing our actions. The standpoint from which we can even talk about motives, then, excludes causal relations. But this argument is flawed: we can, sometimes, recognize motives so overpowering that they bring about courses of action. Nor can we exclude the possibility that some motives operate in the back of all our deliberations. A stronger argument appeals to the normative role of deliberation: we cannot take a motive as a cause in deciding what to do; rather, we must endorse or reject the motive, which implies that it cannot act causally on us. But this is also problematic. We can engage in rational deliberation that merely helps to clarify which desire is strongest, so that it can act as a cause; or we might engage in post-hoc justification of a decision that has already been causally established. The causal view is not excluded by this account.

I argue that appeals to the first person are really getting at something else: the thesis of mental holism. The thesis claims that the identity of a mental state depends on its relations to other mental states, including past ones, and to the entire framework of the mental. But these accounts typically leave out the important role of the future: if our normative frameworks can change, so then can our estimation of the identity of past mental states, including the motives that caused our actions. But since mental states derive their identity from these frameworks, it follows that the identity of any mental state can, at any future time, be revised. The revised account of that mental state’s identity will be just as true as my estimation of the motive causing my action was at the time I acted. Because the identity of our motives is always open to revision, it may seem like no particular motive is responsible for causing our actions, and this gives rise to the objections to the causal theory. On the account I offer, however, we can maintain that our actions are indeed caused by mental states, but there is no fixed fact of the matter about which particular states they are. If the first-personal arguments fail to show that our actions are uncaused, they make that thesis much less interesting. The cause of an action is only an empty placeholder for explanations that we can construct and reconstruct indefinitely.


Tuesday, July 8, 2008

Davidson on Pro-Attitudes: Evaluative and Dispositional

As is well known, Davidson sees actions as events caused by reasons, which themselves involve a combination of a belief and a pro-attitude. Davidson, and many others, often use “desire” in place of “pro-attitude”, though we do best to keep in mind that this use of desire is meant to be very broad. These pro-attitudes are supposed to be mental states, capable of both playing a causal role (when re-described appropriately as physical states) and a rationalizing role (that involves making the action they cause intelligible, both to observers and the agent). I’ve started to get concerned, though, about whether Davidson’s accounts of what pro-attitudes are can be sufficient.

We can start with Davidson’s earliest account of pro-attitudes, given in “Actions, Reasons, and Causes”. Here, under the category of pro-attitudes

are to be included desires, wantings, urges, promptings, and a great variety of moral views, aesthetic principles, economic prejudices, social conventions, and public and private goals and values in so far as these can be interpreted as attitudes of an agent directed toward actions of a certain kind. The word ‘attitude’ does yeoman service here, for it must cover not only permanent character traits that show themselves in a lifetime of behaviour, like love of children or a taste for loud company, but also the most passing fancy that prompts a unique action, like a sudden desire to touch a woman’s elbow (A&E, 4)
There is already, I think, something a bit off here. Keep in mind that the point of introducing pro-attitudes as essential to the action-producing causal sequence is that they are supposed to rationalize an action, i.e., to make it intelligible. But the introduction of passing fancies into the mix seems to undermine that. There is no doubt that at least some actions do arise as the result of passing fancies. But it is not clear how such fancies can help make actions intelligible: that is, the action may well be intelligible in light of the fancy, but the fancy itself is not intelligible. If the entire point of introducing pro-attitudes is to ensure that it is possible to make sense of actions, then pro-attitudes that seemingly spring out of nowhere don’t help; if we’ve identified the proximal cause of an event but not the cause of that cause, we haven’t really given a causal explanation of the event at all; we’ve only pushed that explanation back a step, resting satisfied with having done very little. The passing fancy does not, by itself, serve to make sense of the action: something else is needed to make the fancy itself meaningful. This suggests to me a tension between two ways in which pro-attitudes play explanatory roles: their explanatory role as making an action rational, and their explanatory role as making the action intelligible. For Davidson, these are the same thing—intelligibility implies rationality—but the above considerations suggest to me that the connection is not so direct.

Let’s move on to an account of what pro-attitudes must be like if they are to have a role in making an action rational. For a mental state to rationalize something is for that state to be capable of playing a role as a premise in a practical syllogism. The state’s contribution to rationalizing an action, then, is provided by its content, which Davidson characterizes in “Intending” as follows:

The agent’s pro attitude is perhaps a desire or a want; let us suppose he wants to improve the taste of the stew. But what is the corresponding premise?... we do not want a description of his desire, but an expression of it in a form in which he might use it to arrive at an action. The natural expression of his desire is, it seems to me, evaluative in form; for example, ‘It is desirable to improve the taste of the stew,’ or, ‘I ought to improve the taste of the stew.’ We may suppose different pro attitudes are expressed with other evaluative words in place of ‘desirable’. (A&E, 86)

Davidson is completely right, I think, in pointing out that the propositional content of a desire cannot be something like “to improve the taste of the stew”, or “the taste of the stew is improved.” That content could play no role in a practical syllogism, because it is missing the crucial element needed at this point in the practical syllogism, namely, the evaluation.

Davidson sometimes suggests that the propositional content of mental states is all that we need to know about them in order to understand how they function as mental states. For example, in “Problems in the Explanation of Action,” he tells us that,

beliefs, desires, intentions, and intentional actions must, as we have seen, be identified by their semantic contents in reason-explanations. The semantic contents of attitudes and beliefs determine their relations to one another and to the world in ways that meet at least rough standards of consistency and correctness. (PR, 114)
Now this is partly right: to understand an agent’s action, we need to know why she thought the action worth performing, and the evaluative claim tells us this. But we should also note that the content so given does not exhaust what there is to be said about the mental state. This is fairly obvious from the fact that the judgment is carried out in evaluative terms such as “it is desirable” or “I ought to”, etc. Such evaluative terms as “ought” and “desirable”, if we are to make sense of them and not simply of their place in a practical syllogism, require a further account in terms of the non-propositional features of states such as desire or obligation. So an account of pro-attitudes in terms of evaluations is not exhaustive; something needs to be added. But what?

Davidson seems to suggest an answer to this while explaining why we need reference to pro-attitudes in action explanation:

To deny the need for a pro-attitude in the etiology of action is to lose an important explanatory aid. If a person is constituted in such a way that if he believes that by acting in a certain way he will crush a snail he has a tendency to act in that way, then in this respect he differs from most other people, and this difference will help explain why he acts as he does. The special fact about how he is constituted is one of his causal powers, a disposition to act under specified conditions in specific ways. Such a disposition is what I mean by a pro-attitude. (PR, 108)
On this account, pro-attitudes are dispositions to behave in a certain way. But this account is clearly insufficient to explain what pro-attitudes or mental states in general are, since dispositions to behave in certain ways might not be mental, intentional, or propositional at all. So Davidson’s claim that a disposition to act in specific ways under specified conditions is “what I mean by a pro-attitude” is confusing: such dispositions are not at all what Davidson means by a pro-attitude, unless those dispositions include an evaluative content. Moreover, there are—as Davidson recognizes—plenty of pro-attitudes that never issue in action at all, and so there are pro-attitudes that do not yield themselves to a dispositional account except counterfactually. Understanding pro-attitudes as dispositions helps us to make actions intelligible, but not necessarily rational, in the way that understanding the molecular structure of salt and water makes the solubility of salt intelligible without rationalizing it.

The difficulty, then, is that Davidson seems to have several different accounts of pro-attitudes, none of which is sufficient to explain what pro-attitudes are. What does seem necessary for a pro-attitude to play the role Davidson needs it to play is its evaluative content. But this evaluative content requires a further, non-evaluative element in order to explain its evaluative meaning; at the same time, we need a further element in order to explain how pro-attitudes as such can be meaningful for an agent, a point brought out by the “passing fancy” cases above. This further element, in turn, cannot be provided by the dispositional account, since that account only amounts to an account of pro-attitudes if we already assume the evaluative account. The evaluative account explains the action as rational, but it does not explain why the agent performs the action, only how her performing it might make sense in rational terms; it neither makes sense of the motivating force of particular pro-attitudes nor explains how the attitudes themselves are meaningful for the agent. The force of the evaluation remains unclear, and that force cannot be supplied by the dispositional account, which, again, presupposes the evaluative account; otherwise it would not be an account of pro-attitudes at all. My suggestion, then, is that both of Davidson’s accounts of pro-attitudes require a phenomenological account as a foundation in order to make sense of the motivational and meaning-bestowing power of pro-attitudes.


References:
A&E=Donald Davidson, Essays on Actions and Events. Clarendon, 1980.
PR=Donald Davidson, Problems of Rationality. Clarendon, 2004.


Friday, April 25, 2008

Neural Antecedents of Decision: Some Phenomenological Skepticism

Web-happy philo-types are by now familiar with the recent study on “Unconscious determinants of free decisions in the human brain” by Soon et al., published in Nature Neuroscience. The study, which expands on the famous experiments performed by Benjamin Libet, purportedly demonstrates a seven-second gap between the onset of neural activity involved in making a choice and the subject’s awareness of the choice. The details are discussed, among other places, at Not Exactly Rocket Science, Mixing Memory, NeuroLogica, and Conscious Entities; some interesting comments also at Alexander Pruss's blog. Essentially, participants were asked to press one of two buttons, and to take note of the letter showing on a screen in front of them at the instant they first became aware of having made a decision (Libet’s original experiments asked subjects to remember the position of a hand on a clock); all the while, fMRI scans were recording their brain activity. The comparison, then, is between two heterogeneous sorts of things: neural events, and conscious awareness.

One problem I have with these experiments is that they seem to assume a temporally thin notion of consciousness. Neural processes take time, which is why you can measure how long they go on before an action is carried out. They are temporally thick processes. But conscious awareness is apparently assumed to be instantaneous: we know the exact moment we become aware of something. This is a temporally thin notion: there is no gap in time between the instant we become aware of something and the instant we become aware that we are aware of it or, as in the set-up of the experiment, between the instant we make a choice and the instant we become aware of having made the choice. One obvious reply is to deny that there is any problem here, given a basic assumption that “becoming aware” and “becoming conscious” are synonymous. Surely there is an instant when I am conscious of my decision, say, or conscious of the position of the hand on a clock face (or the letter on a screen). And there is no real sense in which we can speak of something like being aware of such things without also being conscious of them.

But this sounds dubious. What these experiments measure, after all, is not simply when the subjects become aware of a decision. Instead, they measure when the subjects become conscious of that awareness. This is a reflexive process. But is there reason to think that the reflexiveness itself does not take up time?

To complicate matters, there are two reflexive processes going on. On the one hand, the subjects must become conscious of making a decision. On the other, they must become conscious of the letter on the screen at the instant that they become conscious of making a decision. This sounds like a fairly complex process to me, though maybe I am wrong. In any case, though, the process has got to take at least some time to perform. (That there is likely also some temporal gap between seeing the letter on the screen and registering that one has seen it complicates this even further.) What I am suggesting, in other words, is that there might be two temporal gaps that the experiments do not address sufficiently. First, there might be a gap between becoming conscious of a decision and becoming conscious of that consciousness. Second, there might be a gap between becoming conscious of that consciousness and associating this second-order consciousness with the awareness of a particular letter on a screen.

I worry about this, particularly, because the reflexive process is not involved in normal decision making. I either continue typing, or I stop to scratch my nose. But I do this without the second-order consciousness. I cannot, looking back at my action, pinpoint the exact instant when I decided to scratch my nose; normally, I cannot upon reflection even establish that I ever made such a decision, but this fact does not undermine my experience of having made the decision nevertheless. (This suggests, to me, that we may be better off not treating decisions, choices, or volitions as if they were events, and instead recognize them as interpretative abstractions.) So what I am questioning here is the idea that being aware of making a decision—in the normal way in which we experience making decisions in everyday situations—is really connected to the sort of consciousness of deciding that these experiments look at. What they are looking at is the process by means of which we thematize our decisions in consciousness; but this is neither something we normally do, nor is it something that seems central to our awareness of ourselves as deciding.

To top it off, I wonder to what extent the results of these experiments are even transferable to our everyday decisions. The subjects are specifically asked to pay attention to their decision and note the instant when they become conscious of it. But this is not something we normally do. Try it. Right now. Decide to scratch your nose, and then scratch it. When I do this, it feels weird: there is a doubling effect going on, as if I am performing the same action twice. In Searle’s terminology, the decision to scratch my nose is a prior intention, while the mental process involved in the actual nose-scratching is the intention in action. The prior intention in such simple actions is completely redundant. So if you are specifically looking for it, this seems to just distort what it is you normally do when you make decisions. When we ask people to locate the temporal instant at which they make a decision, we are asking them to do something extremely unusual, and the experimental data obtained from such exercises seems unlikely to be telling us very much about normal human decision making; it might be telling us not what is going on in the brain when we make decisions, but what is going on in the brain when we try to catch ourselves making decisions, which is going to be a very different and very slippery task.

I am not just trying to be skeptical. What I am curious about is just the claim the scientists performing experiments like these seem to be making, i.e., the claim that we can scientifically study the relation between neural processes and consciousness or, at least, that we can do this given current technologies without either distorting or oversimplifying the precise thing we are studying. I am perfectly sure that we can study the neural processes. But it is not at all clear that we have the tools for scientifically studying consciousness. Why, then, should we think scientifically studying the relation, particularly the temporal relation, between the two a currently plausible proposal? These experiments are certainly interesting for all sorts of reasons; I am uncertain that the light they claim to shed on conscious choice is one of them.
