The Simulation Argument Take Zar
The Simulation Argument (SA) always seemed pretty cool, but I didn’t realise how sound it is, and quite what a conundrum it poses, until discussing it with a friend (after seeing his erroneous rebuttal). The SA bears a strong resemblance to a very common trope in SF and fantasy: if there are realistic virtual realities, then what is so real about “reality”? (:o I could list so many fiction stories about it, anime too~)
The first confusion to clear up is that the SA is not actually an argument that we are currently most likely in a simulation. The SA is actually a series of conditions with corresponding conclusions. The conclusion that we are likely in a simulation only follows in the case that all the conditions hold.
So if you don’t agree that a condition is plausible or likely, don’t fret, the SA still has a recommended conclusion for you! (Note, the conclusions are copied from Bostrom’s paper.)
So, let’s see those conditions.
(a) Civilizations are not that likely to go extinct before reaching a post-human stage of development.
(b) Post-human civilizations are not that likely to go extinct either.
If you think civilizations of intelligent species are generally likely to go extinct, that is, you disagree with (a) or (b), then you have conclusion (1):
(1) the human species is very likely to go extinct before reaching a “posthuman” stage
Not only are we not likely to be in a simulation, we’re not even likely to survive to post-humanity 🙁
Accepting (a) and (b),
(c) Humans can be uploaded into machines of some sort or another (mind-uploading).
(d) Internally consistent simulations capable of supporting mind-uploads of human-level intelligences (or beyond) can possibly be made at moderate enough cost.
(e) Post-human civilizations will at least have some members interested in running said simulations.
If you reject any of these, then conclusion (2) is the simulation argument’s advice for you. That is, if you think mind-uploading is not possible, or any such simulations will be too costly or not possible to make at such scale or granularity, or that no one in post-human civilizations will have any interest in running simulations (isolated virtual realities) with intelligent beings in them, or any combination of the above, then:
(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof)
In rejecting (c), (d), or (e), you are advised to think that you are probably not in a simulation. (Odd that something called the “simulation argument” has a conclusion that one is likely not in a simulation, isn’t it?)
You could still be in a simulation, but then the Flying Spaghetti Monster could be making love to your dog as you read this. The intriguing ‘evidence’ for likely being in a simulation comes when you accept these conditions. (And if you think we can’t tell in our current stage of technological development, then just put judgment on hold until you have good estimates for the conditions.)
Condition (e) is hard to say much about because it’s hard to predict what post-human civilizations and beings will be like. Even harder if you don’t focus only on humans. And the looser your conditions for (d) are, the harder it gets :o.
Before moving on to conclusion (3), I will discuss condition (d) further.
In his paper, Bostrom refers to “ancestor simulations.” I don’t see why we have to do ancestor simulations in particular. I think we can just have simulations of intelligent beings. Part of the reasoning for conclusion (3) relies on a numbers argument, but this can be done for intelligent beings instead of just “humans.”
An important question here is what kind of simulation is “good enough.” Do the simulations we make have to simulate all known physics perfectly? Can they only simulate the intelligent beings and only work out the physics for what is specifically observed? Or do they have to simulate everything, observed or not? The simulations may have to be able to support human civilizations over generations, but do those humans have to be able to make intelligent-being simulations, inside their simulation, that are just as good? It’s not entirely clear what a simulation has to be capable of to satisfy condition (d).
** The reader should get into a thought-experiment frame of mind now **
However, one illuminating point here is that an isolated virtual reality (a simulation) may not have to be that good to be unnoticed by intelligent inhabitants. There just have to be no signs of something external to the simulation.
Suppose that you, a human being, somehow grew up as a character in an MMO (that, for simplicity’s sake, banned all discussion of ‘the real world’). To you, as a human IRL (in real life), so many things about the MMO world seem ‘obviously fake’, but to the MMO you, that’s just the way things are. You don’t know anything else. To RL (real life) you, the sun just rises. And to MMO you, monsters just spawn out of nowhere on a regular basis. Half the people are philosophical zombies (NPCs), and the other half appear and vanish on their whims. You can only interact with the world through a select number of movements (commands). This would seem really strange and limited to you in RL, but MMO you knows nothing else, and that’s the reality he can inductively learn about.
Should there be ‘artifacts’ that demonstrate MMO you is in a limited reality? Well, many things that RL you would take as such artifacts, MMO you just considers a ‘matter of fact.’ So even in a very crude simulated world, there may not be any direct internal signs that it is simulated. It can be internally consistent (coherent, convincing, etc).
Of course, in a normal MMO, people talk about RL a lot. That would be a strange sign: the non-NPCs very consistently talk about a whole other reality. Maybe you’d think it’s another dimension or something, until they start explaining things to you :p. So, clearly, the existence of signs that one is in a simulation or not is more subtle than how ‘realistic’ it is (using ‘realistic’ to mean “like our universe”). On the other hand, maybe “internal consistency” is a better definition of “realistic” to use here ;-).
And now for the thought experiment that sheds light on the heart of the Simulation Argument’s conundrum, the one behind conclusion (3).
Bob is a baby who was uploaded into a fairly realistic isolated virtual reality that is, of course, internally consistent. This virtual reality has a whole planet with intellectually sophisticated civilizations in it. People are allowed to move into this VR from RL, but they can’t leave afterwards. And their memories are altered to preserve internal consistency. So VR Bob grows up pleasantly, never questioning his ‘reality.’
RL Bob wasn’t killed because he was uploaded or anything. RL Bob grows up pleasantly as well. However, RL Bob can watch what VR Bob does. He finds it amusing to watch his alter-ego. Sometimes he wishes it were an interactive VR so that he could talk with VR Bob though. RL Bob knows that VR Bob is living in a simulation.
RL Bob gets into philosophy and notices something strange, a quirky conundrum. RL Bob knows that VR Bob is living in a simulation, but VR Bob doesn’t know that. Even stranger, RL Bob sees no reason for VR Bob to believe that he is in a simulation. Sure, VR Bob knows it to be a possibility, but then a magical donkey could have conjured up VR Bob’s world too. VR Bob has no particular reason to single out the possibility that he’s in a simulation from all those other spurious possibilities. It provides little explanatory value, and what would the “direct experience of being in a simulation” even be? Well, it’s just being taken up with your simulated world, which doesn’t amount to much. Yet, despite the lack of any evidence for VR Bob that he’s in a simulation, RL Bob knows that VR Bob is in a simulation. It’s a truth that VR Bob has no evidence for, and thus no reason to believe.
RL Bob wants VR Bob to realize that he’s just living in a simulation, but he knows that’s probably not going to happen. RL Bob doesn’t just go and believe things without evidence, so why would his alter-ego?
“Hey, Bob, if VR Bob is in a simulation and has no evidence for it, couldn’t we also be in a simulation without evidence for it? How does your situation differ from VR Bob’s? Sure, this universe seems fully self-sufficient to us, but VR Bob’s simulation does to him too.”
“That’s true. I would never know if I am in a simulation or not, just as VR Bob will never know. I suppose I should accept that VR Bob is fully immersed in his simulation….”
“But, Bob, you know something VR Bob doesn’t! You know that the situation is possible. You know that a person can be in a simulation and not know it, and that it will be just as ‘realistic’ as reality.”
“Oh, do you think VR Bob will realize he’s in a simulation when they develop virtual reality in there?”
Bob’s conundrum lies at the heart of the Simulation Argument’s conclusion (3). There’s no ‘direct evidence’, but given condition (d), we are in Bob’s situation: we know that it’s possible for an intelligent being like us to be in a simulation and not know it.
The next step of reasoning to reach conclusion (3) deals with the relative numbers of intelligent beings who are in simulations and those who are not. This is also where the cost comes in. Let’s go back to RL Bob’s world.
VR Bob’s simulated world is just one among many. Okay, maybe not that many, and not all VRs are isolated from RL either. However, there are twice as many people living in isolated VR universes as there are in RL. VRs run on renewable energy, and people like living in concrete worlds rather than just being uploads on the net (or robots clogging up the streets). Making the VRs isolated is seen as a strange choice by some, but that’s the way it is in RL Bob’s world.
So, take a list of all people in RL and isolated VRs, and take a random name from the list. What’s the probability that this person is in a simulation and doesn’t know it? 2/3.
Now, what if each of these isolated VRs develops its own isolated VRs, so that from its perspective, 2/3 of its people are in isolated VRs? The perspective from these first-step VRs is then the same as the perspective at step zero, RL Bob’s universe: if they pick someone randomly, there’s a 2/3 chance that that person is in a simulation and doesn’t know it. And in RL Bob’s universe, the chance that a random person is in a simulation and doesn’t know it is now even higher!
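To make that ‘even higher’ concrete, here’s a minimal toy calculation (in Python; the 2:1 ratio is just the made-up number from the thought experiment, nothing more) of how the simulated fraction grows as each isolated VR spins up its own isolated VRs:

```python
# Toy model: every universe (real or simulated) hosts isolated VRs whose
# combined population is twice its own "native" population. The 2:1 ratio
# is just the made-up number from the Bob thought experiment.

def simulated_fraction(levels, ratio=2.0, base_population=1.0):
    """Fraction of all people who live inside some simulation,
    given `levels` of nesting and `ratio` simulated-per-native people."""
    populations = [base_population]                  # level 0: RL Bob's universe
    for _ in range(levels):
        populations.append(populations[-1] * ratio)  # each level simulates the next
    total = sum(populations)
    return (total - populations[0]) / total          # everyone except level 0

for levels in (1, 2, 3, 10):
    print(levels, round(simulated_fraction(levels), 4))
# 1 level  -> 2/3   ~ 0.6667
# 2 levels -> 6/7   ~ 0.8571
# 3 levels -> 14/15 ~ 0.9333, creeping toward 1 as layers are added
```

Nothing hangs on the exact ratio: any world where simulated people outnumber native ones already puts the fraction past 1/2, and nesting only pushes it further.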
As those in RL Bob’s universe look at the state of matters in their isolated VRs (with their own isolated VRs!), they will have to ask the question, “What distinguishes us from those in these isolated VRs?” We have direct observations of human beings living in VRs unawares; there are no obvious internal signs for them that they are simulated, yet we know that they are. There are no obvious signs that we are simulated either, so: are we?
RL Bob’s universe may be the ‘first’ universe, but internal consistency can’t be taken as a sign of that. Those in VR Bob’s universe will also think theirs is the ‘first’ universe (rightly so?). And those in VR VR Bob’s universe will think theirs the ‘first’ too.
RL’s inhabitants stand in the same relation to the isolated VRs’ inhabitants as those inhabitants stand to the inhabitants of the isolated VRs nested inside their own.
RL Bob realizes that the only way for VR Bob to correctly conclude that he’s in a simulation is to look at the isolated VRs in VR Bob’s world and realize that their situation is similar to his own. VR Bob has to realize that, given there are more people in his world in isolated VRs than not, the probability that a random person is simulated is higher than not. VR Bob has to put these realizations together to conclude that he’s likely in a simulation, since there’s nothing, with respect to being simulated, to set him apart from a random person (nobody in a coherent isolated VR has direct evidence of being simulated, even though they are).
But then, wait: there’s also nothing stopping RL Bob from applying the same reasoning to himself. The only way for VR Bob to correctly surmise that he’s in a simulation is reasoning that, applied consistently, requires RL Bob to surmise that he too is likely in a simulation.
That is the last conclusion, for when you accept conditions (a) – (e):
(3) we are almost certainly living in a computer simulation.
Basically, Bob has to realize that there are so many situations identical to his own in which the person is in a simulation: his situation is indistinguishable from that of someone inside a simulation, and there are many more simulations than top-level ‘real’ worlds, so chances are he’s in a simulation. If Bob has evidence of enough simulations and layers of simulation, then there are ‘so many more’ that it verges on almost certainty.
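For what it’s worth, Bostrom’s paper packages this numbers argument into a single fraction. Here’s a small Python rendering of it; the formula is his, but the particular parameter values below are mine, chosen only to show how quickly the fraction approaches 1:

```python
# Bostrom's fraction of human-type observers who live in simulations:
#   f_sim = (f_P * N * H) / (f_P * N * H + H) = (f_P * N) / (f_P * N + 1)
# f_P: fraction of human-level civilizations that reach a posthuman stage
# N:   average number of ancestor-simulations a posthuman civilization runs
# H:   average number of individuals per civilization (it cancels out)

def f_sim(f_p, n_sims):
    return (f_p * n_sims) / (f_p * n_sims + 1)

# Illustrative, made-up values: even a small f_P with lots of simulations
# makes simulated observers the overwhelming majority.
print(f_sim(0.01, 1_000))       # ~0.909
print(f_sim(0.5, 1_000_000))    # ~0.999998
```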
Where does this leave us, now?
If you think we’ll end up in a situation at all like Bob’s, then you’re advised to think we’re also in a simulation. One thing to note is that there don’t actually have to be that many simulations for conclusion (3), especially if you’re willing to replace “almost certainly” with “most likely.” This means that even if simulations are necessarily simpler than the worlds they run in, enough intelligent beings could still be run in simulations to satisfy (3) before you bottom out with simulations too crude to create good-enough new ones.
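As a quick illustration of the ‘bottoming out’ point (again with purely hypothetical numbers): even if each layer of simulations can only host a shrinking multiple of the layer above it, simulated people can still end up being the large majority before the stack runs out.

```python
# Hypothetical shrinking ratios: each layer of simulations hosts a smaller
# multiple of the layer running it, until simulations become too crude to
# host any more (the stack "bottoms out" after four layers here).
ratios = [3.0, 2.0, 1.5, 0.5]    # made-up numbers, purely for illustration

populations = [1.0]              # the top-level 'real' world
for r in ratios:
    populations.append(populations[-1] * r)

total = sum(populations)
print(round((total - populations[0]) / total, 3))   # 0.957: most people are simulated
```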
Now, we are not in a position like Bob’s. We don’t have strong evidence of the possibility of “good enough” simulations. We’re not even entirely sure what “good enough” simulations are. Therefore we don’t even know which conditions of the Simulation Argument will hold; we just have its reasoning. If you think some set of the conditions is likely, then you should think the corresponding conclusion is likely.
Do you think mind uploading is possible? How good do you think the simulations we’ll be able to make could possibly be? Do you think we’ll even care about making such simulations? C’mon, especially nearly completely isolated simulations instead of interactive virtual realities?
Should we assign probabilities to the conclusions? Or shall we assign them to the conditions and try to estimate the conclusions based on that? The Simulation Argument itself does not tell us how we should assign probabilities or which position we should take; however, the original paper does argue, at least, that good-enough simulations should be possible.
The author of the original paper states in an FAQ that he assigns about 25% to conclusion (3), that we’re currently in a simulation. His view is that if we don’t have any evidence, but we can split the possibilities into 3 groups, then we should assign each an equal probability. In the paper he says that, in our ignorance, “it seems sensible to apportion one’s credence roughly evenly between (1), (2), and (3).” But in the FAQ, his view seems to be that he is not quite in a state of complete ignorance, and his subjective opinion is slightly off from equiprobable (although roughly the same). (Of course, as he notes, your assigned credences for the conclusions are not part of the Simulation Argument itself.)
My view? shrug You can assign equiprobability if that’s how you want to deal with a lack of evidence. I don’t see much meaning in doing so, frankly. Why bother assigning a probability value with near zero confidence? You’re likely to confuse people who think you have more confidence behind your statement of equiprobability. If your intuition is that post-humans won’t be that interested in isolated simulations, then you may assign less than 1/3 odds to (3), but, still, this is done with rather low confidence as one can’t make confident predictions about vastly superior intelligences. Same problem: you’re likely to confuse others into believing you have more confidence than you do. So, I personally wouldn’t bother with assigning probabilities in ignorance.
The Simulation Argument is almost like a test of reasoning. If you are too conservative with your means of reasoning and demand direct evidence, then, in the position of VR Bob, you will come to the wrong conclusion. This is somewhat paradoxical: if you deny the reasoning of the Simulation Argument, then you will reach the wrong conclusion. Ahh, that just means that you have to argue that the correct conclusion cannot be reached in VR Bob’s position, and that any reasoning that leads to the correct conclusion also leads to even more incorrect conclusions. Poor VR Bob.
(P.S., is there a relation between what Bostrom calls the “bland indifference principle” and the axiom of choice?)