Saturday, March 8, 2008

Are You Preposthuman? If So, Are You a Simulant?

There are some people out there, many of them frighteningly intelligent, who look forward to the day when we humans are cared for by super-intelligent machines that we ourselves have created. These people are called transhumanists. They even have institutes--plural.

The really interesting philosophical question that this movement poses is, could there be anything more intelligent than a human being? I believe that the answer is no, but I’ll save that argument for later. Instead, I want to address a more quirky and fun speculation, put forward by Nick Bostrom, that we are all, more probably than not, simulants—that is, simulations run by some posthuman creature on a super-duper computer. In fact, it turns out that so long as you believe it likely that a posthuman civilization will develop someday, and also believe it likely that such a civilization will have an interest in simulating its ancestors, then you would be very irrational if you did not believe that you yourself were a simulant.

This ‘simulation argument’ can be stated quite simply. To begin with, the super-duper computer would not have to simulate the entire past universe, but only whatever is required to reproduce human experience to the degree that the simulated experience is indistinguishable from actual world experience. (We need to grant an extreme brain-internalism). Bostrom claims that we can even estimate the sort of computing power (in terms of operations per second) that would be required to do this: roughly 10^33 - 10^36. Secondly, Bostrom suggests that, even with current nanotechnological designs, a planetary-mass computer could complete 10^42 operations per second. Thus, our posthuman descendants should have the computing power necessary to simulate (prepost)human experience. (In case you think that this is all just whacky, check out this story.)
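Just to put those exponents side by side (the figures are Bostrom's estimates as quoted above, nothing of my own), the headroom looks like this:

```python
# Bostrom's estimates, as quoted above (not my own figures):
needed_low, needed_high = 1e33, 1e36   # ops/sec to reproduce human experience
available = 1e42                        # ops/sec for a planetary-mass computer

# Even at the high estimate, one such computer has room for roughly a million
# simultaneous human-history simulations; at the low estimate, roughly a billion.
print(available / needed_high)  # ~1e6
print(available / needed_low)   # ~1e9
```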

Now, if we assume that this is correct, then at least one of the following three propositions must be true:
a) No human civilization is likely to make it to a posthuman stage.
b) The fraction of posthuman civilizations that will have any interest in running ancestor simulations (us-simulations) is very small.
c) We are almost certainly simulants.
Bostrom even provides a cute little formula for determining an exact probability that we are simulants. Let ‘fp’ be the fraction of human civilizations that make it to a posthuman stage, ‘fI’ the fraction of posthuman civilizations interested in running ancestor simulations, ‘NI’ the average number of simulations run by an interested civilization, and ‘H’ the average number of humans that have lived in a civilization before it reaches a posthuman stage. (The average number of ancestor simulations run by a posthuman civilization overall is then fI × NI.) The probability that you are a simulant is just the fraction of simulated humans among all humans: the expected number of simulated humans divided by that same number plus H. Thus:

fsim = (fp × fI × NI × H) / ((fp × fI × NI × H) + H), which simplifies to fsim = (fp × fI × NI) / ((fp × fI × NI) + 1)

Given our assumption that simulant and actual human experiences are indistinguishable from the inside, the value of fsim is exactly the credence you should give to the proposition that you are a simulant.
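If it helps to see the formula as something you can actually run, here is a minimal sketch in Python (nothing official, just the simplified formula above with the post's variable names):

```python
def f_sim(fp, fI, NI):
    """Credence that you are a simulant, per the simplified formula above.

    fp -- fraction of human civilizations that reach a posthuman stage
    fI -- fraction of posthuman civilizations interested in ancestor simulations
    NI -- average number of ancestor simulations run by an interested civilization
    (H cancels out of the simplified form, so it does not appear here.)
    """
    expected_sims = fp * fI * NI
    return expected_sims / (expected_sims + 1)
```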

At the end of his paper, Bostrom suggests that we split our credence roughly evenly among (a), (b) and (c) above. I don’t know why he says this. I can’t imagine why he waits until the very end of an article trying to prove that we are all most likely posthuman SIM creations to suddenly sound reasonable. Here are the probabilities I would assign:

My guess is that it’s at least a 50/50 chance that some human civilization sometime will make it to the ‘posthuman’ stage, so I would assign fp a probability of .5. Second, I assume it quite likely that any civilization that did make it that far would want to run ancestor simulations, so I’d assign fI a probability of .75. Third, I’m just guessing that, at the time when our posthumans create their super-duper ancestor simulation machine, there will be around 20 billion posthumans and that they will want to run ancestor simulations for half of themselves, so NI comes to 10 billion. Finally, I’d give ‘H’ a value of around, oh, 9 billion. With these values, my fsim is .99999999973. If you were to ask me, Do you wonder if you are a simulant?, I should respond that I am 99.999999973% certain that I am.
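Running my guesses through the little function above gives the same figure (again, just checking the arithmetic):

```python
# My guesses from the paragraph above: fp = .5, fI = .75, NI = 10 billion.
print(f_sim(0.5, 0.75, 10e9))  # ~0.99999999973
```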

So, am I 99.999999973% certain that I am a simulant? Not at all. For one, I don’t believe in the extreme brain-internalism of the sort Bostrom presupposes, and so I don’t think that, given whatever computing power you like, human experience will ever be simulatable without just reproducing the world itself. But I think that this poses an interesting quandary for those who are committed brain internalists, insofar as, following Bostrom’s argument, they really should believe that they are simulants. Second, for reasons similar to the ones Putnam expressed in ‘Brains in a Vat,’ at a basic level the very proposal doesn’t make sense—or, at least, it makes no more sense than a statement like ‘there might be golden rivers in heaven.’ Sure, I can imagine a vaguely pleasant place, somewhat like the Catskills, with rivers that flowed gold, but really, I have no idea what heaven is like nor whether it is terraformed—which is just to say that since I have no idea, really, what would even count as verifying my statement, I have no idea what I mean by it. The same could be said of ‘What would it be like to get sucked through a black hole?’ and, so I presume, of ‘What is the likelihood that I am really a simulation?’

2 comments:

  1. I like your point in the last paragraph; hope you'll work it out more. The problem with the brain-internalism is that Bostrom takes a relatively (not entirely) uncontroversial thesis (substratum independence) and uses it in the sort of pseudo-Cartesian manner that a good deal of philosophy, cog sci, and neuroscience have been undermining: the idea that the substratum of consciousness is the brain. Even if the core of consciousness is computation, you can't compute anything without input, which means you have to simulate the input as well as the computational processes if you're going to have anything like consciousness (isn't this, ultimately, an overlooked thought behind Cartesian dualism?).

    But then again, I also wonder why one might--short of sci fi fantasy--assume that our descendants will want to run ancestor simulations. Sure, we do things like try to build computers that can simulate the functions of an insect or rodent brain. But the point of such projects is to try to recreate something we don't understand in order to understand it better. Presumably, if you can run ancestor simulations that are convincing to the simulants, then you've already got consciousness figured out pretty well. What's the point of running the simulation? What other motive could there be?

    And this brings me to the biggest worry Bostrom's reasoning raises: your posthuman descendants are watching you on the toilet (uh... among other things).

  2. Per the first point, yea, and maybe I'll say something about this in a later post: one of the points Putnam makes in his essay, which I allude to at the end, is that concepts, and by extension perhaps (all?) other mental episodes, are abilities of sorts, not states or occurrences. Simulating an ability is something quite different from simulating a state or occurrence. It's an interesting question whether abilities can be simulated at all, in the way that states or events can.

    As to the interest: well then yea, I guess we have different 'priors,' as the people in the literature are prone to say. Why do I think that our descendants would run simulations? Maybe to figure out exactly what happened, to be able to picture it, see it like a movie, etc....one interesting thing Bostrom doesn't address is whether we, being simulants, are still in control of ourselves...I could see one making a nice case for epiphenomenalism (it seems like we control our fates, but really, it's whoever is playing us right now......[and I wonder if that postperson is amused that I am coming to this realization...])...

    Finally: unfortunately, my toilet antics are probably the least of my worries.
