Tuesday, March 25, 2008

Actions, Consequences, and the Phenomenology of Responsibility

There is an all too common view according to which we can only be responsible for actions that we have chosen in some sense freely, or autonomously. If the consequences of our actions matter, it is because we were, or should have been, aware of those consequences. On this view, the consequences of our actions make a difference to our responsibility only insofar as we can or could have foreseen those consequences. This is the picture, then, on which free will or autonomy has primacy; responsibility, in turn, is grounded in it.* I want to argue for a different picture, one according to which the phenomenology of responsibility is intrinsically such that the autonomy or freedom of the agent with respect to an action depends in large part on her future estimation of the rightness or wrongness of its consequences. A deliberative decision—at the time it is made—is largely arbitrary; if it were not, deliberation would have been unnecessary. It ceases to look arbitrary only from the standpoint of one’s future self.

There are doubtless many questions to raise with regard to the condition that we must know or be able to know the consequences of our actions in advance. And there are questions, too, about the related issue of our conative attitudes—how deep must our desires go, how “wholeheartedly” need we accept them, to be held responsible for the execution of action on their basis? Something about this sort of approach certainly seems right. What I want to suggest is that what is right about it, however, is not that there is some unified entity, or even some coherent set of volitional attitudes and beliefs that constitutes the agency necessary for responsibility. That is, I want to question both the common libertarian premise about free will and the common compatibilist one. The libertarian often claims that to be responsible, an agent must directly (or indirectly) and indeterministically choose to perform an action. The compatibilist typically looks for something less demanding, namely, that the action be caused by agency-constituting features, such as rational deliberation, endorsement, cohering attitudes, or desires with which we are fully satisfied.

To say that the compatibilist approach is “less demanding”, of course, is not entirely right. It is only less demanding in the sense that it does not require us to rely on the idea that (1) some events can be caused indeterministically, (2) the indeterminism in physical processes somehow corresponds to an indeterminism in our deliberations, and (3) it is primarily the deliberation, and not the indeterministic physical causation, that gives rise to the choice of action. This picture is likely too complicated to be true; moreover, it relies on assumptions about the physical world that are, given our current state of knowledge, probably false. In any case, the compatibilist has a further argument: that indeterministic decisions are necessarily arbitrary in a way that undermines agential responsibility.

To avoid this last problem, compatibilists typically try to fortify our choices with the aforementioned agency-constituting features. But this, too, ends up being a bit too demanding. I think psychology is slowly revealing how unlikely it is that our actions are caused by such coherent dispositions; but here I want to pose a phenomenological challenge. It requires noticing that questions of responsibility for an action—and so the accompanying questions of whether the agent was free or autonomous in performing it—come up only in contexts (sometimes, but not always, moral) where the rightness or wrongness of the action is at issue.

Here is an example: I am taking an exam. I am facing a particularly tough problem, where two answers strike me as possibly right, but I am not sure which one is right and don’t have the time to work it out further. I pick one on a hunch. If all goes well, and the answer is right, I may never think about it at all; or, in thinking about it, I might believe that due to my understanding of the material, my hunches tend to be good ones. But what if I end up getting the problem wrong? I will then blame myself for my stupidity, and for making a bad choice where I should have known better. Although my choice was, for all intents and purposes, arbitrary, whether I praise myself or feel guilt over the bad decision I made depends on the outcome. I would suggest that, in more or less complicated ways, the vast majority of our decisions are of exactly this kind.

A more complex example: I am largely set on graduate school. But the thought that it would be nice to have a decent life—one that has some redeeming features other than the occasional published paper—nags at me; law school becomes a competing option. We can sketch out such a conflict in any number of ways. Perhaps I have different beliefs about what would constitute a good life and different desires regarding my future. If I cannot adjudicate between these sets of beliefs and desires, then my resulting decision is—take your pick—as autonomous as it could be (given the circumstances, i.e., a lack of wholehearted identification with one of the options) and at the same time entirely arbitrary: it could have gone either way. To make this more of a challenge to compatibilist approaches: Let us say that I am wholehearted about going to graduate school; something nags at me, though, urging me to apply to law schools as well. This nagging feeling may appear to me as entirely external—as an affect that I do not identify with—but I humor it and apply anyway. If, in the end, I do not get into any decent graduate schools, but do get into a great law school, it is pretty easy to decide what to do.

So what of this situation? On what grounds can we reasonably say that the decision to go to law school was not autonomous? You will perhaps have guessed that I think there are no such grounds. Imagine that, ten years later, I am miserable with the legal world and wish I could go back and remake my decision to get into it in the first place. Why did I apply to those law schools? Why didn’t I do what my heart told me to do? Why didn’t I stick with the attitudes that defined my “real self”? On the other hand, imagine that, although I would have preferred graduate school, I quickly get into the swing of law school—I enjoy having definite problems to solve, clear rules to work with, easily demarcated rules of competition, and the high-paid lifestyle that results. And here I might be glad that I acted autonomously, followed my real self, and went to law school instead of indulging in a post-adolescent intellectualist fantasy of academia, however central that fantasy once seemed to who I am. In evaluating my responsibility for my choice, then, I now look at it not from the perspective of a conflicted self making a largely arbitrary decision, but from the standpoint of my current situation, one in which my satisfaction or dissatisfaction with my choice—its rightness or wrongness—has shaped my perception of what my real, autonomous self should have done.

I suggest, then, that the currently dominant structural theories of autonomy are largely false. There is no real self that is responsible for its decisions. The self responsible is, rather, the future self, the self that is a product of situations brought about as consequences of the initial decision. We cannot eliminate arbitrariness from our choices; we can make them autonomous only by looking back on them. One might, of course, object in the following way: this is a merely phenomenological critique. How we see or experience ourselves has, in the end, nothing to do with whether or not our choices are really the products of some coherent real self. To make this objection work, however, one would need to produce a notion of something like a real self, a coherence of our preferences, or identification with a desire that does not rest on an agent’s self-apprehension. It would be a notion of a choosing self that is understood independently of how this self experiences itself. And that such a picture could be coherent is dubious.


* Strawsonian accounts that base responsibility on our reactive attitudes are not immune from these considerations. Those reactive attitudes depend on our estimation of the agent’s intentions in performing the action for which we judge her. So even if we think that judgments of responsibility derive primarily from the framework of reactive attitudes we have toward others, these judgments still get their appropriateness from considerations about the agent's psychological profile.


Thursday, March 20, 2008

Are We Post-Moral?

For a while now I've been claiming to anyone who cares to listen that we live in a more or less post-moral age. I don't really have a developed argument, and in fact, I'm not even exactly sure what a 'post-moral age' means, but here's a brief attempt to explain.

I don't mean anything too profound by 'post-moral.' For instance, I don't think that moral obligations have floated away, or that they have ceased to matter. For all I know, that may indeed be the case, but it's irrelevant to the notion I'm getting at. I also don't mean to suggest that the moral world has been supplanted by something newer, fresher, or more 'authentic.' Again, this might be the case, but it doesn't matter for my purposes. Finally, I am also not denying that many of us (although a decreasing number, I suspect) ask ourselves moral questions from time to time, perhaps even daily. Should I cheat on my taxes? Is it okay to distort my colleague's record in order to gain an advantage over her? Is it wrong for me to skip my friend's wedding for a vacation in Vegas? As I intend the term, it is perfectly consistent to recognize that people ask themselves questions like these and still maintain that our world is post-moral.

I think I can best describe what I mean by 'post-moral' by pointing out that few of us (post-moral persons) take our moral character or standing as something important and overriding in our lives. Becoming a moral person is not a task that most people, when it is suggested to them, recognize as their own. Maybe I'm wrong about this. Maybe I'm projecting my own shoddy self-understanding onto the world around me--but I don't think so. It seems to me rather that morality, to the extent that it is effective in the average person's life at all, operates as a constraint (however constructive) on the pursuit of goals. For example, Jones hopes to become a successful researcher, but believes that, morally, it is wrong to achieve this end by plagiarizing another colleague's work. It's not that Jones is just afraid that he might not get away with it; get away with it or not, he does not believe that it is right to plagiarize for the sake of self-advancement and to the injury of another person. But the point is that it never occurs to Jones that it is important to act this way because it is important to be a good person. Jones really has no interest in being a good person. He wants to be a successful researcher, but he recognizes that there are certain things one ought not do in the pursuit of that goal.

Now, I also realize that there are ends or projects that many of us work towards achieving, and to which some of us even dedicate our lives. Ending world hunger, helping displaced refugees, working to promote human and civil rights--these are all, in some sense, moral ends that lend purpose to many people's lives, and that many of us, to some extent at least, identify with. But again, it is completely compatible to work for such ends, even strenuously and with dedication, without concern for whether or not, in so doing, one is or is becoming a good person.

Alasdair MacIntyre, we know, was all the rage for a while because he argued that some important moral knowledge--the importance of character, virtue, and habit--had been forgotten sometime on or about May, 1641. I'm not making this argument. I don't think that the desire to be a good person commits one to a virtue theory of ethics. Kant is no virtue theorist, yet he certainly recognized that we ought, morally, to try to be good persons. I'm just saying that while many of us worry about doing the right thing, few of us worry about being the right sort of person, and that this makes us 'post-moral.'



Wednesday, March 12, 2008

Is There an Ethics of Belief?

Recently I’ve been coming across some thinkers promoting an ethics of belief. Initially the idea sounded a little strange to me: ethics concerns actions—intentions to act, dispositions to act, consequences of action—and beliefs matter ethically only insofar as they are relevant or involved in action, right? As I said, this was my initial reaction, and after some reflection, I still hold to it, although with an important caveat. So let me say what I think this notion gets right, and what it misses.

Take for example Colin McGinn’s claim that his atheism results in part from an ethics of belief. I’ll let him say why in his own words:
“It is often forgotten that atheism of the kind shared by Jonathan [Miller] and me (and Dawkins and Hitchens et al) has an ethical motive. Or rather two ethical motives: one is ethical repugnance at the cruelty, tyranny and oppression of organized religion over the course of human history; the other concerns the ethics of rational belief—how we are obliged to form our beliefs about the world. The first motive is familiar and needs no commentary from me. The second is less widely appreciated, but for some of us it is crucial to the whole discussion. We believe, as an ethical principle, that beliefs about what reality contains should always be formed on the basis of evidence or rational argument—so that “faith” is inherently an unethical way to form your beliefs.”
I will focus on the latter principle. McGinn asserts as an ethical principle of belief (formation and retention) that:
“beliefs about what reality contains should always be formed on the basis of evidence or rational argument”
Let’s call this principle ‘MG’. I see some problems with MG as stated. For one thing, it is too vacuous for serious universal application: most theists, after all, claim that their belief is informed by serious evidence and argument, and thus, in an empty way, their beliefs satisfy MG. Another problem concerns the concepts of ‘evidence’ and ‘rational argument.’ What are we to make of these? By ‘rational argument’ are we committed to making as many of our beliefs as possible consistent with one another? If so, MG is surely too strong. Is it irrational, and therefore unethical, for someone to be a dualist? I take it as a truth of reason that dualism, as stated by Descartes, for instance, simply cannot be correct. Was Descartes—worse than wrong—immoral? By MG, it would seem so. A similar problem emerges with the notion of evidence. I’m not sure exactly what McGinn means by this term, but presumably he means something like verification, demonstration, or proof. But many of our beliefs, especially the most interesting and profound ones, never make it to the stage where evidence in that sense applies. I, for instance, don’t think that a belief in God is a false belief—it is a confused belief. Most of the new atheists with whom McGinn aligns himself spend far too much time going straight to arguments about why the belief is false, and skip over the really hard work of unpacking the confusions involved in the concept of God itself. Let me get personal for a moment and say why I think one argument for the existence of a necessary being is very hard to get around—it is one of Aquinas’ five proofs.
From nothing, nothing is caused. Something must always have existed if anything exists. This universe exists (assumed). Thus, something must always have existed, i.e., something exists necessarily.
Now, of course this is only a proof that something must necessarily exist, not that a personal God with intentions and an interest in human affairs exists. But so what? It is a theological claim, and even though I don’t believe it, I have a hard time saying why. Am I unethical? Not for this reason, I think.

We should also consider that for many of our beliefs we simply lack any decisive evidence or demonstration either way. Is it consistent with MG to form a belief on such matters anyway? For instance, should I believe that quantum mechanics is a correct description of reality? After all, we have yet to resolve the puzzle of wave-particle duality, and we still haven’t unified gravity with the other fundamental forces. More importantly, we have no really good idea how this might be possible. Is it wrong to have beliefs on these matters, then? Perhaps McGinn would suggest that we append a rider to MG, such that in cases where decisive evidence for or against the truth of a belief is lacking, one ought to withhold judgment. Let’s call this MG’. Well, ok, but how do we apply MG’ to cases like the axiom of choice? Doing set theory requires that one take a stance on the axiom of choice--a 'leap of faith,' as it were--without any rational justification. Is this an example of an intrinsic conflict of goods in the domain of belief? Maybe, but that conclusion seems way overwrought.

Finally, what are we to do about counterfactual beliefs? Maybe there are some counterfactuals that we could rule out decisively, but surely not all of them. So what about someone’s belief that, had Napoleon not been defeated, Germany would have democratized much earlier than it did? It’s surely fun to consider counterfactuals like this, and we can consider them knowledgeably, but of course we have no decisive way of determining their truth value. Again, is it immoral to believe a claim like this?

The fact is that most of our beliefs have no obvious actionable consequences, and because of this, I have a hard time thinking of them as ethical at all; this, in turn, is why I think that my initial reaction is basically the right one. As of now, I can think of no morally sensitive consequence of either the belief that the universe is bounded or the belief that it is infinite, and thus neither belief has any moral import. Now, we might suggest one further reworking of MG, to something like the following (MG’’):
Beliefs about what reality contains should always be formed on the basis of evidence or rational argument WHEN those beliefs potentially lead to morally sensitive consequences.
But this is just to say, again, and redundantly, that it is the actions or consequences that matter, not the belief per se. Suppose, for instance, that someone has a completely vacuous belief in a God—believing that some God exists, but one who takes no interest whatsoever in human affairs and has prescribed no rules or norms. Such a belief is basically irrelevant to how one behaves and lives. It’s difficult for me to understand how it could be moral or immoral at all--how believing it could count as any kind of 'sin.'

But let me conclude with my caveat. Despite all that I’ve said above, I do believe in a basically Kantian project of enlightenment, and this requires that I submit my moral and political beliefs to public, rational scrutiny and that, when the public argument is persuasive, I change my mind. So perhaps we could amend MG one final, less problematic way (MG’’’):
Beliefs about what reality contains should always be formed on the basis of evidence or rational argument WHEN those beliefs are morally or politically salient.
I have no problem with MG’’’, but I’m not sure that it says anything except that, insofar as beliefs are relevant to moral action or consequences, we have an obligation to ensure as best we can that those beliefs are, morally, right. But that’s obvious. I think. And again, this means that there is no such thing as a distinct ethics of belief; there is just ethics.


Monday, March 10, 2008

Kant's Moral Psychology (III): Endorsement, Determinism, and Motives

In this series of posts I have been defending Kant’s moral psychology against Leiter and Knobe’s attack. In particular, I have been contending that their target is not Kant at all: they are attacking the position of a naturalistic Kantian, but Kant himself was certainly no naturalist. In this post I want to wrap up the discussion by looking at three features important to Kant’s moral psychology: his view of determinism, the endorsement principle, and the role of the passions (or sensible motives) in moral action.

Let me review the main elements of L&K’s argument as it pertains to Kant. They first present findings that suggest a massive role for heredity in determining our moral behavior (p. 18), so already there is a big chunk of moral behavior that is not touched by our explicitly adopted moral principles. L&K then go on to cite studies that suggest in no uncertain terms that, first of all, the correlation between people’s beliefs and their behavior is fairly low. Second, in cases of correlation, it looks like the behavior usually determines the beliefs, and not the other way around (incidentally: I do think this is a fairly large problem for Velleman; I doubt it is a problem for Kant). And finally they take their parting shot: even though there may be a small number of people who act on the basis of consciously chosen principles, not all will be moral; conversely, there may be agents who look very moral, but don’t act on explicitly chosen beliefs. That is, there “may well be agents whose conduct otherwise manifests respect for the dignity and autonomy of other persons and comports with the categorical imperative,” but, if these agents do not act specifically on consciously chosen principles, they “lack the kind of motivation (e.g., respect for the moral law) Kant himself thought morally significant.” Consequently, a Kantian is “likely to have to treat as immoral a lot of apparently moral individuals because of the largely unrealistic demands of Kant’s moral psychology” (33). This is a tangled web of poorly understood Kant.

And there is just a bit of distortion designed to make Nietzsche come out on top. Take, for example, the following: “Nietzsche puts forward the view that a person’s traits are determined, to a great extent, by factors (type-facts) that are fixed at birth” (15). (L&K slip immediately into showing how studies of heredity show Nietzsche to be right. But did Nietzsche talk about heredity and modern genetics? Or did he simply say that character is determined at birth?) If what Nietzsche says is just that our moral characters are determined at birth, then one might strangely enough discover that this notion is taken directly from Kant. Let us recall that Kant was a staunch determinist. Unlike contemporary determinists, Kant did not simply believe in the truth of determinism because the world just happens to look determined. Rather, Kant insisted that the world must look determined if there is to be a world at all: we can only perceive an event as the effect of some cause. So it is no wonder that Kant is heavily committed—and on stronger theoretical grounds than Nietzsche had—to the idea that our empirical character is determined by factors “that are fixed at birth.” So much for using heredity to make Nietzsche look more plausible than Kant.

What about the role of conscious beliefs? I have already indicated in the last post that Kant simply did not hold the view that the only moral action is action caused by consciously chosen principles—the principles involved (maxims) are rational, not psychological entities. But there is another confusion here to disentangle, namely, the confusion between that thesis and the idea (which, in keeping with contemporary convention, we might call “the endorsement thesis”) expressed in a clip from Paul Katsafanas that Rob Sica quoted to me in the original discussion at Leiter’s blog:

Contemporary philosophers often endorse a claim that has its origins in Locke and Kant: self-conscious agents are capable of reflecting on and thereby achieving a distance from their motives; therefore, these motives do not determine what the agent will do. Nietzsche’s drive psychology shows that the inference in the preceding sentence is illegitimate. The drive psychology articulates a way in which motives can determine the agent’s action by influencing the course of the agent’s reflective deliberations. An agent who reflects on a motive and decides whether to act on it may, all the while, be surreptitiously guided by the very motive upon which he is reflecting.

I don’t disagree with anything here (and I am very much looking forward to reading this paper). What I want to note, however, is that Kant would—in my view—agree with all of it (he probably would not have agreed with the part about Nietzsche, but mainly because he hadn’t read Nietzsche, the slacker!). Even though L&K seem to conflate the two issues, there is a difference between the idea that agents’ actions are determined by principles (which, as I’ve suggested, was not for Kant an empirical claim at all), and the very empirical claim that agents can achieve a distance from their motives and are thus not determined by them.

How are these two claims different? Well, simply put, the bit of empirical psychology that Kant employs in the second is just obvious: he does not say that we know our sensible motives don’t determine our actions, but only that they don’t directly determine our actions. And that seems hard to argue with. When your nose itches, you want to scratch it. But if you stop to think about it, you might, for whatever reason, decide not to scratch. Under some circumstances—for example, when you need both of your hands on the wheel in order to keep your car from spinning off the road into a deep ravine—you would even be very likely to resist the urge to scratch your nose if you thought about it at all. In other words, the thesis is just this: if I am aware of a motive, then I have a certain distance from it, and this means that the motive does not directly determine me to action. Of course some motives might determine us to action no matter how much we want to resist them; but insofar as we are thinking about what to do, the motives will do so surreptitiously, as Paul correctly notes. That is, our motives do not directly determine our actions, even though they might determine them indirectly. And certainly no research L&K cite speaks against this.

Actually, in keeping with the emphasis on determinism, it is worth remembering that Kant is explicit on the issue that only sensible motives can cause our actions. If the motives that caused our actions were not sensible ones, then actions would appear to us as uncaused. But then, for Kant, we could never experience them at all. There is a further issue here: are the motives that cause our actions moral ones? According to Kant, even at the level of empirical psychology we can never know our motives because we do not have perfect introspection. We can strive to be as moral as possible, and to always act on moral motives, but in the end we can never know whether in fact we have done so. Many think this is a failing in Kant; but it strikes me as a strength of his moral theory, one that too many contemporary moral philosophers overlook: agents who know that they are moral are likely to get complacent; agents who are striving to be moral but do not know whether they have ever succeeded are forced to retain moral humility.

But the issue of whether someone’s motivation is a properly moral one is thornier than this in Kant, and far thornier than L&K suggest. They use the example of a teacher who both cares about his students and believes that he ought to care about his students. And they take it as a given that, if the teacher’s inclination to care about his students causes his belief that he ought to (rather than vice versa), this is evidence that he is not a moral agent by Kant’s standards. But this is pure rubbish. What empirical test would L&K propose to determine whether the teacher’s inclination to care about his students is not itself the effect of the moral law working within him? At the empirical level, you would be insane to tell the teacher that he is an immoral agent. He may well be immoral if his motive is not a moral one, but that is not something you could possibly know. (Of course if it turned out that the teacher cares about his students a little too much, then you could be pretty sure that his motive isn’t driven by pure respect for the moral law. But in that case you probably wouldn’t be too likely to mistake the teacher for a moral agent in the first place.)

Of course there is another sense in which we could tell an apparently moral agent that he is actually immoral: if what we mean is that all human beings are immoral or, as the Christian doctrine goes, in Adam all have sinned. Something like this is indeed at work in Kant, for he does proclaim that (“due to the largely unrealistic demands of his moral psychology”) the human race is evil by nature. But nobody should go from that doctrine to going around telling people who hid Jews from the Nazis that, despite appearances, they are still evil. If this is what L&K are worried about, then their real worry is about the idea of a moral ideal that’s actually an ideal. That is: they’re really just seconding Bernard Williams’s rejection of the institution of morality. But that’s another argument, both for another time, and also not about the data on moral psychology.

Empirical moral psychology, in other words, is not the place at which one is likely to find effective tools against Kant or, at least, against Kant the philosopher as opposed to Kant the misinterpreted punching bag. It is also not the place where Nietzsche will likely come out on top without some editorial tweaking. Naturalists may be rightly (in their eyes) suspicious of Kant. But so long as there is any discipline of philosophy that is not fully reducible to psychology and physics, perhaps that had best be the domain where honest philosophers engage Kant’s ideas. Provided, that is, that honesty is a goal.


Saturday, March 8, 2008

Are You Preposthuman? If So, Are You a Simulant?

There are some people out there, many of them frighteningly intelligent, who look forward to the day when we humans are cared for by super-intelligent machines that we ourselves have created. These people are called transhumanists. They even have institutes--plural.

The really interesting philosophical question that this movement poses is, could there be anything more intelligent than a human being? I believe that the answer is no, but I’ll save that argument for later. Instead, I want to address a more quirky and fun speculation, put forward by Nick Bostrom, that we are all, more probably than not, simulants—that is, simulations run by some posthuman creature on a super-duper computer. In fact, it turns out that so long as you believe it likely that a posthuman civilization will develop someday, and also believe it likely that such a civilization will have an interest in simulating its ancestors, then you would be very irrational if you did not believe that you yourself were a simulant.

This ‘simulation argument’ can be stated quite simply. To begin with, the super-duper computer would not have to simulate the entire past universe, but only whatever is required to reproduce human experience to the degree that the simulated experience is indistinguishable from actual world experience. (We need to grant an extreme brain-internalism.) Bostrom claims that we can even estimate the computing power (in terms of the total number of operations) that would be required to do this: roughly 10^33 - 10^36. Secondly, Bostrom suggests that, even with current nanotechnological designs, a planetary-mass computer could complete 10^42 operations per second. Thus, our posthuman descendants should have the computing power necessary to simulate (prepost)human experience. (In case you think that this is all just whacky, check out this story.)
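To put those two numbers side by side (a back-of-the-envelope gloss of my own on the figures just quoted, written out in Python):

    ops_per_history = 1e36   # upper end of the quoted estimate of total operations for one full simulation
    ops_per_second = 1e42    # the quoted figure for a planetary-mass computer
    print(ops_per_second / ops_per_history)   # roughly 1e6: about a million simulated human histories per second

In other words, even at the high end of the estimate, a single such computer could churn through on the order of a million complete human histories every second of its runtime.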

Now, if we assume that this is correct, then at least one of the following three propositions must be true:
a) No human civilization is likely to make it to a posthuman stage.
b) The fraction of posthuman civilizations that will have any interest in running ancestor simulations (us-simulations) is very small.
c) We are almost certainly simulants.
Bostrom even provides a cute little formula for determining an exact probability that we are simulants. Let ‘fp’ be the fraction of human civilizations that make it to a posthuman stage. Let ‘N’ be the average number of ancestor-simulations run by a posthuman civilization. Let ‘fI’ be the fraction of posthuman civilizations interested in running ancestor simulations, let ‘NI’ be the average number of simulations run by the interested civilizations (so that N = fI NI), and let ‘H’ be the average number of humans that have lived in a civilization before it reaches a posthuman stage. The probability that you are a simulant is then the expected number of simulated humans over that same number plus H. Thus:

fsim = (fp fI NI H) / ((fp fI NI H) + H), which, cancelling H, gives fsim = (fp fI NI) / ((fp fI NI) + 1)

Given our assumption that simulant and actual human experiences are indistinguishable from the inside, the value of fsim is exactly the credence you should give to the proposition that you are a simulant.
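For readers who like to see the bookkeeping, here is a minimal sketch of the calculation in Python (the function and variable names are mine, not Bostrom's); note that H drops out of the final ratio:

    def fraction_simulated(fp, fI, NI):
        # fp: fraction of human civilizations that reach a posthuman stage
        # fI: fraction of posthuman civilizations that run ancestor simulations
        # NI: average number of ancestor simulations run by the interested ones
        # Each simulated history contains (on average) H humans, as does the one
        # unsimulated history, so H cancels in simulated / (simulated + unsimulated).
        x = fp * fI * NI
        return x / (x + 1)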

At the end of his paper, Bostrom suggests that we split our credence evenly among (a), (b) and (c) above. I don’t know why he says this. I can’t imagine why, only at the end of an article trying to prove that we are all most likely posthuman SIM creations, he suddenly wants to sound reasonable. Here are the probabilities I would assign:

My guess is that it’s at least a 50/50 chance that some human civilization sometime will make it to the posthuman stage, so I would assign fp a probability of .5. Secondly, I assume it quite likely that any civilization that did make it that far would want to run ancestor simulations, so I’d assign fI a probability of .75. I’m also guessing that, at the time when our posthumans create their super duper ancestor-simulation machine, there will be around 20 billion posthumans and that they will want to run ancestor simulations for half of themselves, which puts NI at 10 billion. Finally, I’d give ‘H’ a value of around, oh, 9 billion. With these values, my fsim is .99999999973. If you were to ask me whether I think I am a simulant, I should respond that I am 99.999999973% certain that I am.
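Running the same arithmetic with those guesses (my own sanity check, not Bostrom's calculation) reproduces the figure, and makes it plain that the value of H never enters into it:

    fp, fI, NI = 0.5, 0.75, 10e9    # the guesses above; NI = 10 billion simulations
    x = fp * fI * NI                # 3.75e9 expected simulated histories per unsimulated one
    print(x / (x + 1))              # 0.9999999997333333, i.e. about 99.999999973%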

So, am I 99.999999973% certain that I am a simulant? Not at all. For one, I don’t believe in the extreme brain-internalism of the sort Bostrom presupposes, and so I don’t think that, given whatever computing power you like, human experience will ever be simulatable without just reproducing the world itself. But I think that this poses an interesting quandary for those who are committed brain internalists, insofar as, following Bostrom’s argument, they really should believe that they are simulants. Second, for reasons similar to those Putnam expressed in ‘Brains in a Vat,’ at a basic level the very proposal doesn’t make sense—or, at least, it makes no more sense than a statement like ‘there might be golden rivers in heaven.’ Sure, I can imagine a vaguely pleasant place, somewhat like the Catskills, with rivers that flowed gold, but really, I have no idea what heaven is like nor whether it is terraformed—which is just to say that since I have no idea, really, what would even count as verifying my statement, I have no idea what I mean by it. The same could be said of ‘What would it be like to get sucked through a black hole?’ and, so I presume, of ‘What is the likelihood that I am really a simulation?’
