Don’t Go Lawnmower Man Just Yet…

November 30th, 2009 · 24 Comments

Robin Hanson defends his enthusiasm for silicon immortality against an incredulous Bryan Caplan, whose “I refute it thus” one-ups Dr. Johnson’s stone-kicking with an imagined shotgun blast to the brainpan. I’m sympathetic to Hanson’s response, and I think Caplan’s position is mostly voodoo in philosophy drag, but let’s be clear that there are a couple different things going on here when we ask about the transformations under which I should consider myself to have “survived.”

The first question is whether it’s somehow uniquely rational to identify your “self” with a particular unique physical brain and body. To dramatize it as Bryan does, cribbing from my old prof Derek Parfit: Suppose that via some kind of Star Trek replication or some combination of cloning, highly advanced brain scanning, and neuron-etching nanotech, scientists create a precise physical duplicate of you. Just as your duplicate is waking up—so let’s be clear, there are now two extremely similar but clearly distinct loci of conscious experience in the room—you’re told (ever so sorry) that as an unfortunate side-effect of the process, your original body (you’re assured you are the original) is about to die.  Should you be alarmed, or should you consider your copy’s survival, in effect, a means by which you survive?

The gut intuition Bryan wants to work with—the crucial “common sense” move—is that, by stipulation, there are, after all, two of you who now have separate experiences, emotions, physical sensations, etc., and who could each survive and go on to live perfectly good (and very different) lives.  And you could certainly lament that you won’t both get that chance.  But I think it’s a serious mistake to imagine that this settles the questions about what we have, unfortunately, chosen to call “personal identity,” a property which even in more ordinary circumstances bears little resemblance to its logical homonym. There is ample reason to think that a single brain and body can, and perhaps routinely does, support multiple simultaneous streams of conscious experience, and as Robin points out, it’s not as though “your” physical body is composed of the same matter it was a decade ago.

In reality, our ordinary way of talking about this leads to a serious mistake that Robin implicitly points out: We imagine that there’s some deep, independent, and binary natural fact of the matter about whether “personal identity” is preserved—whether Julian(t1) is “the same person” as Julian(t2)—and then a separate normative question of how we feel about that fact.  Moreover, we’re tempted to say that in a sci-fi hypothetical like Bryan’s, we can be sure identity is not preserved, because logical identity (whose constraints we selectively import) is by definition inconsistent with there being two, with different properties, at the same time. And this is just a mistake. The properties in virtue of which we say that I am “the same person” I was yesterday reflect no unitary natural fact; we assert identity as a shorthand that serves a set of pragmatic and moral purposes. Whether it’s true depends intrinsically on the concerns and purposes of the user. A chemist and a geologist will mean quite different things when they ask, pointing at a lake, “is that the same body of water we noted a decade ago?” The answer may be “yes” in one sense and “no” in another, because what they mean by “same” is implicitly indexed to their different concerns and purposes. Bryan’s flip reply—that one could thereby achieve immortality by “deciding” to identify with something permanent—misses the point: That there may be no independent fact of the matter about identity does not entail there are no facts about what’s worth caring about. The whole motive for arguing against his material-continuity standard is precisely that he has seized upon a criterion of intertemporal personal identity that does not really matter very much.

The problem is that none of this gets us all the way to the conclusion that immortality as a digital upload ought to count as a form of survival, provided we agree that one of the things worth caring about, when we contemplate our own future survival, is the specific subjective character of certain experiences. So, for instance, it may be important to my attitude toward a candidate future-self that, when he reminisces about summer at Camp Watonka looking up at the meteor showers or the morning sunlight on a lover’s face, what he recalls is qualitatively similar to the original experience, not merely some Ramsified input/output functional equivalent. It’s highly plausible that my carbon copy has this property. Given our current state of knowledge, it’s highly speculative that my silicon doppelganger would: Its internal life could be radically different while yielding the same responses to stimuli, and perhaps it could even be a Chalmersian “zombie” with no subjective internal life at all. Here Bryan’s objection that this would not be “me” but only a “simulation” has more bite.  On the other hand, while I doubt it is irrational to care about this, I’m not sure it’s obligatory either, so while I doubt I’d count Zombie Julian as a good-enough instance of me, I’m not sure Robin is making a mistake—though I suspect he may be—if he’s indifferent to whether he’s succeeded by a Zombie Robin.

Update: Bryan has a response to Robin that gets at the last point I make above (and elaborate on in the comments), but which also goes awry in a couple ways. First, this is at most a reason for agnosticism about the shared mental properties of different physical realizations of a brainlike system, and it certainly doesn’t provide any real support for a dualist theory of mind.  It doesn’t really cut either way.  A dualist could easily, and perhaps more easily than a physicalist, subscribe to the notion that the very same sort of mind could be anchored to any number of physically very different neural substrates. Second, the Bayesian move he tries to make seems misguided. Astrophysicists should not adjust their priors if population growth in the developing world increases the prevalence of folk cosmologies.  If you want to apply Bayesian logic at all here, it should be by considering that people tend to find dualism intuitively highly appealing as a default, but that people with advanced philosophy training who spend large amounts of time thinking about this sort of thing have a much lower opinion of dualism on average.
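
To put that Bayesian point in concrete terms, here is a toy credence calculation (all numbers invented purely for illustration): evidence that is equally likely whether or not a hypothesis is true leaves your credence unchanged, while evidence that is more likely when the hypothesis is false should lower it.

    # Toy Bayes update for a binary hypothesis H given evidence E.
    # All probabilities below are invented purely for illustration.
    def posterior(prior, p_e_given_h, p_e_given_not_h):
        joint_h = prior * p_e_given_h
        joint_not_h = (1 - prior) * p_e_given_not_h
        return joint_h / (joint_h + joint_not_h)

    # Folk popularity of dualism: stipulated to be just as likely whether
    # dualism is true or false, so the likelihood ratio is 1 and the
    # posterior doesn't budge.
    print(posterior(0.5, 0.9, 0.9))  # -> 0.5

    # Expert skepticism: stipulated to be more likely if dualism is false,
    # so this evidence counts, and credence should fall.
    print(posterior(0.5, 0.2, 0.6))  # -> 0.25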

Tags: General Philosophy

24 responses so far ↓

  • 1 JustinOpinion // Nov 30, 2009 at 2:46 pm

    Insightful post, as usual. However, I’m not so sure about this statement:

    “Given our current state of knowledge, it’s highly speculative that my silicon doppelganger would: Its internal life could be radically different while yielding the same responses to stimuli…”

    I suspect (and perhaps a mathematician can prove) that if a black box perfectly replicates a certain I/O behavior, then it necessarily has internal states isomorphic to the internal states of another box with identical I/O (say, a brain). Of course the enumerated “internal states” are sensitive to how one analyzes the system. If you analyze a silicon device in terms of transistor switching and inter-transistor communication, it will bear little resemblance to what we call consciousness. But of course, if you analyze a brain’s internal states in terms of synapse charges and chemical potentials, you will also see no trace of self-awareness. In both cases, one would need to analyze the internal state at a less fine-grained level, where objects like “memories” and “current thoughts” would begin to emerge.

    And of course there are different ways one can coarse-grain a computer/brain… some of which will paint a picture of conscious decision, others of mechanical action, and others of indecipherable complexity.

    To use an analogy from linear algebra, one can select from an infinite number of basis sets to describe a particular system. Some choices make a given computation simpler, and may even seem more ‘natural’. But if there is a basis wherein the I/O behavior of the brain can sensibly be interpreted as “consciousness” then I don’t see how there could not be a basis that provides the same interpretation for another chunk of matter that has the same I/O behavior.
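
    To make the basis analogy concrete, here is a tiny numpy sketch (my own toy example): the same linear map is described by two different matrices in two different bases, yet it is one and the same system.

        import numpy as np

        A = np.array([[2.0, 1.0],
                      [0.0, 3.0]])    # a linear map, written in the standard basis
        P = np.array([[1.0, 1.0],
                      [0.0, 1.0]])    # columns of P form a different basis

        A_prime = np.linalg.inv(P) @ A @ P    # the same map, written in the new basis

        v = np.array([1.0, 2.0])              # a vector in standard coordinates
        v_prime = np.linalg.inv(P) @ v        # the same vector in the new basis

        # Either description yields the same underlying behavior:
        assert np.allclose(P @ (A_prime @ v_prime), A @ v)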

  • 2 JustinOpinion // Nov 30, 2009 at 2:59 pm

    As an addendum to my comment: I suppose that a person could have internal thoughts or memories that they never externalize or act upon, such that a replica could be indistinguishable to other people, yet lack certain internal drives/thoughts/memories.

    However:
    1. It’s difficult to see how internal thoughts/memories that have absolutely no effect (not even a subtle or subconscious effect) are critical. Our secret thoughts and memories necessarily shape our eventual choices and actions (unless they are truly trivial).
    2. Assuming one has the capacity to correctly simulate/model/replicate a person’s I/O behavior, there is no reason that one cannot also model their internal secret thoughts/memories. (In fact, it seems to just amount to a redefinition of where one draws the I/O boundary…)

  • 3 Julian Sanchez // Nov 30, 2009 at 3:31 pm

    Well, in some very literal sense, of course, the silicon version would not have EXACTLY “the same” response to stimuli. (Viz: Pouring a certain quantity of LSD into the works.) The problem is knowing which differences make a difference. Insofar as we don’t understand, really, why there’s “something it’s like from the inside” when our brains process information, or whether that subjective quality actually plays a necessary functional role, there’s no way to know whether it’s PART of the necessary internal isomorphism.

    That is, we can stipulate that my silicon copy IS internally isomorphic in the sense of having parallel functional states, and is NOT internally isomorphic in the sense that it’s not, you know, squishy and highly sensitive to LSD. The assumption that the first sort of isomorphism is sufficient for a THIRD isomorphism, in terms of the character of subjective experience, is exactly that: An assumption. The question is not whether you can *model* the internal secret thoughts, the question is whether the model is a thought. As Searle observes, the best computer model of a hurricane isn’t going to uproot any (real) trees. We have no real way of knowing how or whether the subjective character of an information process depends on the specific physical, causal properties of its substrate.

  • 4 I, Me, Mine, I Me, Mine, I Me, Mine « Around The Sphere // Nov 30, 2009 at 3:34 pm

    [...] Julian Sanchez: I’m sympathetic to Hanson’s response, and I think Caplan’s position is mostly voodoo in philosophy drag, but let’s be clear that there are a couple different things going on here when we ask about the transformations under which I should consider myself to have “survived.” [...]

  • 5 Alex Knapp // Nov 30, 2009 at 3:51 pm

    I would argue that, on the basis of our current understanding, what we consider to be the “self” is far too tied into our existence as animals to make a silicon copy feasible. Look what happens when people are sense-deprived or even people-deprived: they go crazy. A silicon copy self that at all resembles the human self is probably impossible without creating an entire virtual world that behaves virtually identically to the one we live in, complete with virtual bodies that simulate virtual senses. And I would continue by arguing that a simulation that complex is probably beyond our technological scope for a long, long, long time. If ever.

  • 6 The Definition of “Is” «  Modeled Behavior // Nov 30, 2009 at 3:52 pm

    [...] in Science, Society | by Karl Smith Bryan Caplan and Robin Hanson debate philosophy of the mind. Julian Sanchez weighs [...]

  • 7 Alex Knapp // Nov 30, 2009 at 3:57 pm

    A related point on the technology:

    http://scienceblogs.com/cortex/2009/11/reverse-engineering.php

  • 8 Julian Sanchez // Nov 30, 2009 at 4:05 pm

    Just to follow up by updating a classic thought experiment slightly: Imagine that somehow, we have assembled a vast library containing DVD recordings of my life from a branching infinity of alternate histories in parallel universes. Now meet my simulator. It keeps track of the branch it’s currently on, and every time it gets an input from the outside world, it queries a database and cues up the DVD from the universe where that world’s version of me got precisely the same input and had precisely the same experience. And then it plays back the archive footage of my reaction as a hologram. Obviously, the system required to process the inputs in order to locate the correct footage from that vast database would be impossibly complex. But what I want to suggest is that there’s no reason whatever to expect that such a system, if it could exist, would have to be conscious—or at least, conscious in a way that matched the original.
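
    In rough code terms (a sketch with invented names, just to fix the architecture), the simulator’s entire “cognition” is a table lookup:

        # A sketch of the simulator (names invented for illustration). The only
        # "processing" is indexing the accumulated input history into the archive
        # and playing back whatever footage is stored there.
        class DVDSimulator:
            def __init__(self, archive):
                self.archive = archive    # maps input histories to recorded footage
                self.history = ()

            def respond(self, stimulus):
                self.history += (stimulus,)
                return self.archive[self.history]    # pure lookup, no deliberation

        # A vanishingly small stand-in for the branching infinity of recordings:
        sim = DVDSimulator({
            ("sad news",): "tearful reaction",
            ("sad news", "a kind word"): "grateful smile",
        })
        print(sim.respond("sad news"))      # -> tearful reaction
        print(sim.respond("a kind word"))   # -> grateful smile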

    Alternatively, suppose that by sheer insane luck, my simulator is just loaded up with a single prerecorded hologram that JUST HAPPENS to contain precisely the same responses I would really give to the actual inputs it receives. If anything went even slightly different, it would become obvious that it’s just a recording playing out, with no sensitivity to the external world. But there’s only one actual set of events occurring around it, and so as it happens, it contains all and only the responses my rather more interactive brain would give. It’s almost infinitely improbable, but not logically or metaphysically impossible. The objection that it does not track me functionally in counterfactual situations, moreover, doesn’t really strike me as getting to the crux of the matter: Whether a system IS conscious or not is fundamentally a question about what’s going on in this actual world; it doesn’t seem like it should turn on what might happen in a counterfactual.

  • 9 JustinOpinion // Nov 30, 2009 at 6:08 pm

    I agree that it is an assumption that isomorphic internal states imply isomorphic subjective experience (though I suspect that this is true, since what else are our thoughts/experiences other than the subjective perception of those states…).

    Your DVD thought-experiment is intriguing. Although a single DVD would not have internal states that are isomorphic to the thoughts/experiences of the original Julian, I would argue that the totality of all the DVDs (and whatever apparatus monitors the universe and selects which to run) does indeed contain your thoughts and subjective experience. (Actually it seems to contain the thoughts and experiences of many, many Julians…)

    In other words, picking out the particular DVD loaded in the machine at any moment and saying “This DVD is not conscious!” is like looking at a person’s body-minus-brain at any particular moment and saying “This mound of flesh is not conscious!” You’ve purposefully excluded a necessary component of the thinking machinery.

    Whether or not the consciousness of the full machine “matched the original” I cannot say (except that, again, I’m at a loss to say that consciousness is anything other than that).

    As for the “lucky simulator”, yes I’ll grant it as a counter-example of a system that exhibits similar I/O without having the same internal states. One can always posit a random fluctuation in the universe that produces a hypothetical result. The erosion on a mountain-face could just happen to write out a deep mathematical theorem or beautiful poem without any internal thoughts going into it. But at best this changes my argument from being mathematically exact to being just ‘correct in most situations’ (99.9999% of the time, or whatever).

  • 10 Julian Sanchez // Nov 30, 2009 at 6:35 pm

    “what else are our thoughts/experiences other than the subjective perception of those states”

    I think this is a mistake—it makes it sound as though there’s some kind of homunculus in there observing all this unfold. Our experiences *just are* the states in their subjective aspect. But again, without more knowledge, we have no reason at all to think that the subjective aspect necessarily tracks the functional role of the state across realizations. What you’d need for that is an account of the causal role the subjective aspect plays—we don’t have that.

    As for the DVD system (this is my version of a number of classic examples—the Chinese Room and Blockhead being the most famous) I guess I’m just struck by the profound disconnect in our intuitions about the case. You’re suggesting that when the simulator hears sad news, locates the DVD in the archive that contains the recording of my reaction to that piece of news, and plays my tearful reaction back… there is someone who feels sad? Bear in mind that the search mechanism here bears *no resemblance whatever* at the microlevel to what is happening in my brain when I feel sad, above and beyond the silicon/carbon distinction.

  • 11 JustinOpinion // Nov 30, 2009 at 7:14 pm

    I agree there is no homunculus. Our subjective experiences *just are* the internal states (irrespective of what is used to build/define the states, I submit).

    Yes, I am making the counter-intuitive claim that the DVD playback system has within it a consciousness that feels sad (or whatever). The mechanics of the sensors, search and playback mechanism may be drastically different from the biology of a brain’s information processing, but both systems contain (some) isomorphic internal states. Again, it is a bit of an assumption on my part to say that the subjective experiences that those internal states *are* will be truly *the same*… but if the (important) internal states are truly identical, and subjective experience simply *is* the internal state, then the experiences are identical.

    Part of my outlook is affected by physics thinking. It’s currently understood that time is “just” a dimension. The universe, then, is perhaps better thought of as a 4D construct that simply “is” rather than a 3D construct that changes in time. Thought of in this way, the existence of our subjective experience doesn’t seem to depend crucially on time at all. In other words your fantastically-exhaustive DVD archive would contain the consciousnesses and subjective experiences even if they were not actually played-back; in much the same way that our 4D universe apparently contains conscious sub-structures even though it is not being “played-back”, it simply exists. All that matters is the information content, not the encoding. This, by the way, is a (poor) variant of an argument that Max Tegmark makes as part of his analysis of fundamental physics. See, e.g., footnote 16/page 26 of:
    http://arxiv.org/abs/gr-qc/9704009

  • 12 RickRussellTX // Nov 30, 2009 at 8:13 pm

    “You’re suggesting that when the simulator hears sad news, locates the DVD in the archive that contains the recording of my reaction to that piece of news, and plays my tearful reaction back… there is someone who feels sad?”

    I think you summed it up right there. The thing that “feels sad” is not the DVD, but the DVD player. It’s the thing that took the input, comprehended the emotional content and chose to play the “sad response” version rather than the filed-almost-in-the-same-spot “happy response” version.

    Example:

    “When your father passed away, he wanted you to have his Army medals.”

    There is a universe of Julians that could have quite different reactions (sadness, happiness, surprise, pride) to that statement — even Julians with identical external histories. If the system produces correct output based on particular input, I’m not sure that we can say, with any certainty, that the system did not “feel sad” in a way every bit as meaningful as the squishy water-sack on which it was based.

    Unless you have some way of defining the qualia of “sad” that allows it to exist for you, but not for a machine.

    Honestly, I’ve never understood the whole qualia debate — I have yet to find a claim of qualia that is falsifiable.

  • 13 RickRussellTX // Nov 30, 2009 at 8:50 pm

    Actually, now that I ponder further, it’s the existence of different responses to the same news that suggests the existence of a meaningful inner life (i.e., qualia).

    If the DVD player looks through its archives and finds precisely one response to an input by that particular Julian, then inner life (or qualia), if they exist, are meaningless and undetectable. They have no objective reality; there is no sad Julian or proud Julian, just “this Julian who expresses sadness”.

    On the other hand, if the DVD player has a selection of responses among Julians with the exact same history, then inner life (or qualia) may be a determining factor. The sad Julian produces the sad response; the prideful Julian produces the proud response.

    Of course, even if qualia do exist, they may not be indicative of independent thought (aka “free will”), in the sense that our behavior may be selected randomly from a universe of possible responses based on unquantifiable noise or even intrinsic randomness in the brain chemistry. That is, we may feel, but we may not have will.

  • 14 Julian Sanchez // Dec 1, 2009 at 2:26 am

    Oh, I’ve never taken free will very seriously.

    As for the intuition gap, I suppose it’s possible that, biological similarity notwithstanding, we actually do just have wildly different internal experiences, with the difference only detectable in our reactions to weird hypotheticals like this.

  • 15 The Elephant in the Room » Postmodern Conservative | A First Things Blog // Dec 1, 2009 at 6:38 am

    [...] There I was, quietly chuckling over Bryan Caplan and Robin Hanson’s back and forth (and forth) on the reasonableness of cryonics, when somebody decided to bring Derek Parfit into things. [...]

  • 16 sam // Dec 1, 2009 at 8:36 am

    There’s a nice treatment of the issues raised here in John Scalzi’s “Old Man’s War”. See Chapter 5.

  • 17 Living Forever in a Computer « Rortybomb // Dec 1, 2009 at 9:57 am

    [...] and Adam at Modeled Behavior respond, and so does Julian Sanchez: Suppose that via some kind of Star Trek replication or some combination of cloning, highly [...]

  • 18 Glen // Dec 1, 2009 at 6:26 pm

    I think the following video sums things up nicely:

    http://www.youtube.com/watch?v=pdxucpPq6Lc

  • 19 Winter’s Haven » Who Knows? // Dec 3, 2009 at 2:56 am

    [...] in silicon, and personal identity. I suggest skimming the original two posts and then reading Julian Sanchez’s take on the question — he approaches the issue with more caution and more philosophical expertise than either [...]

  • 20 Weekend Readings - Ross Douthat Blog - NYTimes.com // Dec 4, 2009 at 8:05 pm

    [...] (See also Bryan Caplan versus Robin Hanson versus Caplan again, with interventions from Julian Sanchez and Will Wilson, on cryonics and personal [...]

  • 22 Neel Krishnaswami // Dec 6, 2009 at 6:34 am

    I suspect (and perhaps a mathematician can prove) that if a black box perfectly replicates a certain I/O behavior, then it necessarily has internal states isomorphic to the internal states of another box with identical I/O (say, a brain).

    In fact, this is not true. Systems that have identical I/O behavior are only equivalent up to bisimilarity (i.e., mathematician talk for “having the same I/O behavior”), not up to isomorphism.

    If you know a little about regular expressions, recall that the minimal DFA recognizing a regular expression doesn’t have to be isomorphic to an arbitrary DFA recognizing that expression (in fact, that’s why there’s any point to minimizing them).
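
    A minimal Python sketch of the point (my example): two DFAs can have identical I/O behavior, accepting exactly the same language, while having different numbers of states, so no isomorphism between them is possible.

        # Two DFAs with identical I/O behavior but different state counts,
        # hence bisimilar (language-equivalent) yet not isomorphic.
        def make_dfa(start, accepting, delta):
            def accepts(s):
                state = start
                for ch in s:
                    state = delta[(state, ch)]
                return state in accepting
            return accepts

        # Minimal DFA for "even number of a's": two states.
        small = make_dfa(0, {0}, {(0, "a"): 1, (1, "a"): 0})

        # A redundant DFA for the same language: four states, counting mod 4
        # and accepting on 0 and 2.
        big = make_dfa(0, {0, 2},
                       {(0, "a"): 1, (1, "a"): 2, (2, "a"): 3, (3, "a"): 0})

        # Same verdict on every input, yet two states versus four:
        assert all(small("a" * n) == big("a" * n) for n in range(50))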

  • 23 Cryogenics – Stonesoup Readers Say – Not Really Crazy « stone soup // Jan 22, 2010 at 1:09 pm

    [...] of life, it might just be the creation of another person who is very very similar to me.  While the preservation of a version of oneself might be someone’s preference, even if they can’t experience it, that doesn’t particularly appeal to me; children [...]

