Robin Hanson defends his enthusiasm for silicon immortality against an incredulous Bryan Caplan, whose “I refute it thus” one-ups Dr. Johnson’s stone-kicking with an imagined shotgun blast to the brainpan. I’m sympathetic to Hanson’s response, and I think Caplan’s position is mostly voodoo in philosophy drag, but let’s be clear that there are a couple different things going on here when we ask about the transformations under which I should consider myself to have “survived.”
The first question is whether it’s somehow uniquely rational to identify your “self” with a particular unique physical brain and body. To dramatize it as Bryan does, cribbing from my old prof Derek Parfit: Suppose that via some kind of Star Trek replication or some combination of cloning, highly advanced brain scanning, and neuron-etching nanotech, scientists create a precise physical duplicate of you. Just as your duplicate is waking up—so let’s be clear, there are now two extremely similar but clearly distinct loci of conscious experience in the room—you’re told (ever so sorry) that as an unfortunate side-effect of the process, your original body (you’re assured you are the original) is about to die. Should you be alarmed, or should you consider your copy’s survival, in effect, a means by which you survive?
The gut intuition Bryan wants to work with—the crucial “common sense” move—is that, by stipulation, there are, after all, two of you who now have separate experiences, emotions, physical sensations, etc., and who could each survive and go on to live perfectly good (and very different) lives. And you could certainly lament that you won’t both get that chance. But I think it’s a serious mistake to imagine that this settles the questions about what we have, unfortunately, chosen to call “personal identity,” a property which even in more ordinary circumstances bears little resemblance to its logical homonym. There is ample reason to think that a single brain and body can, and perhaps routinely does, support multiple simultaneous streams of conscious experience, and as Robin points out, it’s not as though “your” physical body is composed of the same matter it was a decade ago.
In reality, our ordinary way of talking about this leads to a serious mistake that Robin implicitly points out: We imagine that there’s some deep, independent, and binary natural fact of the matter about whether “personal identity” is preserved—whether Julian(t1) is “the same person” as Julian(t2)—and then a separate normative question of how we feel about that fact. Moreover, we’re tempted to say that in a sci-fi hypothetical like Bryan’s, we can be sure identity is not preserved, because logical identity (whose constraints we selectively import) is by definition inconsistent with there being two, with different properties, at the same time. And this is just a mistake. The properties in virtue of which we say that I am “the same person” I was yesterday reflect no unitary natural fact; we assert identity as a shorthand that serves a set of pragmatic and moral purposes. Whether it’s true depends intrinsically on the concerns and purposes of the user. A chemist and a geologist will mean quite different things when they ask, pointing at a lake, “is that the same body of water we noted a decade ago?” The answer may be “yes” in one sense and “no” in another, because what they mean by “same” is implicitly indexed to their different concerns and purposes. Bryan’s flip reply—that one could thereby achieve immortality by “deciding” to identify with something permanent—misses the point: That there may be no independent fact of the matter about identity does not entail there are no facts about what’s worth caring about. The whole motive for arguing against his material-continuity standard is precisely that he has seized upon a criterion of intertemporal personal identity that does not really matter very much.
The problem is that none of this gets us all the way to the conclusion that immortality as a digital upload ought to count as a form of survival, provided we agree that one of the things worth caring about, when we contemplate our own future survival, is the specific subjective character of certain experiences. So, for instance, it may be important to my attitude toward a candidate future-self that, when he reminisces about summers at Camp Watonka looking up at the meteor showers, or the morning sunlight on a lover’s face, what he recalls is qualitatively similar to the original experience, not merely some Ramsified input/output functional equivalent. It’s highly plausible that my carbon copy has this property. Given our current state of knowledge, it’s highly speculative that my silicon doppelganger would: Its internal life could be radically different while yielding the same responses to stimuli, and perhaps it could even be a Chalmersian “zombie” with no subjective internal life at all. Here Bryan’s objection that this would not be “me” but only a “simulation” has more bite. On the other hand, while I doubt it is irrational to care about this, I’m not sure it’s obligatory either, so while I doubt I’d count Zombie Julian as a good-enough instance of me, I’m not sure Robin is making a mistake—though I suspect he may be—if he’s indifferent to whether he’s succeeded by a Zombie Robin.
Update: Bryan has a response to Robin that gets at the last point I make above (and elaborate on in the comments), but which also goes awry in a couple of ways. First, the point is at most a reason for agnosticism about the shared mental properties of different physical realizations of a brainlike system; it certainly doesn’t provide any real support for a dualist theory of mind, and doesn’t really cut either way. A dualist could easily, and perhaps more easily than a physicalist, subscribe to the notion that the very same sort of mind could be anchored to any number of physically very different neural substrates. Second, the Bayesian move he tries to make seems misguided. Astrophysicists should not adjust their priors if population growth in the developing world increases the prevalence of folk cosmologies. If you want to apply Bayesian logic at all here, it should be by considering that people tend to find dualism intuitively highly appealing as a default, but that people with advanced philosophy training who spend large amounts of time thinking about this sort of thing have a much lower opinion of dualism on average.