
Abortion III

August 8th, 2002

Eve has a good response-to-my-response-to-her-response, to which Sara has, for the moment, deferred. Lest this become AbortionBlog, I’ll try to keep my remarks succinct. Before diving into the point-by-point, though, let me just clarify what I meant when I said that the link between human biology and value depended on that biology typically giving rise to beings with our sorts of mental features, because it seems to have been misinterpreted. I did not mean that if a large percentage of humans were (for example) born without brains, it would be permissible to kill all humans. What I meant was that in Sara’s argument, biology really just acted as a proxy for those other capacities. Put it this way: imagine all and only blue foods were poisonous. You might establish a rule or taboo: “don’t eat blue foods.” The rule would be useful just because of that link, but the blueness in itself would not be a reason not to eat a certain food; the associated poisonousness would. If the link ceased to hold, then you might ditch the rule, and try to find some other test for poisonous foods. It wouldn’t mean you started indiscriminately eating all blue foods, some of which are still poisonous. I hope that’s a bit clearer. Anyway.

If mental activity is what’s definitive of “humanity” in the moral sense, are we more and less human depending on what we’re doing, whether we’re asleep, etc.? In some sense, maybe (the sense in which we might say someone with a richer set of experiences has “lived life more fully”), but if by “human” we specifically mean “deserving of moral respect,” then certainly not. The fact that rights originate from our having achieved a certain minimum level of mental function (and note that this is a threshold phenomenon; I’m not arguing that having ever more capacities beyond that minimum means having proportionately more rights) doesn’t require that those rights be tied to the continuation of that level of functioning at each moment of our existence. Recall the “placeholder” argument: rights persist across variations in the realization of those capacities for the same reason it remains wrong to kill the cryogenically frozen person, namely that the unconscious (or “less-conscious”) body is a link between past and future conscious selves. Split the question into two senses, and I think it ceases to seem problematic for my view. Are the capacities that account for our moral worth less fully realized when we are unconscious, or just in a daze? Yes. Does our moral worth vary correspondingly? No. Does our moral worth vary with the maximum potential for (or past instantiation of) the realization of those capacities? Not above the minimum threshold, because the difference between a very self-consciously reflective being which represents values and goals to itself [insert other features I’ve cited, etc.] and one which does all those things to a lesser extent is one of degree, not kind. That’s not to say that animals and other nonhuman beings deserve no moral respect, but my view is that our obligations to them are qualitatively very different.

Eve seems to think some of my previous hypotheticals related to these points were question-begging. But they really need not be. Assume my body is destroyed, but a super-detailed MRI scan of my brain was preserved. The scan is not itself conscious at all, nor is it human; I am not, pace Eve, saying that human consciousness is necessarily realizable without a brain-like physical base. But it would be possible to grow some brain matter in a vat, totally lacking a neural pattern, and then impose the scanned pattern on it. Whether or not you think my mind could be reproduced on silicon, I think it’s plausible to say that this duplicate brain, physically precisely similar to my own, would be “me” in every important respect. Here again, we’re admitting that the scan data is not human at all. But because it’s the sole “placeholder” between its original and the reproduced me it could create, I’d hold that destroying the scan is morally indistinguishable from killing me in my sleep. If Eve would still call this a “dissociation of mind and body,” I guess I’ll plead guilty, but then I’m not sure what follows from the charge. It’s like saying architects who draw blueprints “dissociate” form from matter: of course a blueprint isn’t a building, but it will let me recreate that building in the central respects.

So, what’s so important about being a “placeholder” for a previously existing human mind, and not just a potential future one? Well, I stand by my original idea here: that when there’s a prior mind, we can identify a set of values, life plans, volitions, etc. that we show a lack of concern for when we destroy a brain scan or frozen body. To the question “who have we harmed?” we have an answer: the person who had hoped to reawaken in that (or a new) body, who had goals which would be achieved if he did. I want to make a note about “identity,” because on reflection I’m not sure what it means for Eve or Sara. For me, “identity” is just, as I’ve said earlier, a shorthand for a cluster of other things like links of memory and disposition and character. Eve apparently thinks there’s a “further fact.” But if it’s not a soul (which would be inconsistent, at least, with Sara’s project of developing a secular argument), what is it? Of course, Eve has done things she doesn’t remember, and it really was “still her” that did them, in that the actions arose from a set of character dispositions sufficiently overlapping with the ones she currently holds. But I can’t even conceive what more she could mean by “identity” if it’s not this. Suppose that through nanotechnology her brain (the very same matter) were utterly reconfigured to match the pattern of my neural pathways, and mine were rearranged to match hers, with all attendant memories and personality quirks manufactured in the process. I say of that case: the person who would then inhabit Eve’s body would be me, and the person who would inhabit my body would be Eve. If that’s wrong, why? If it’s right, aren’t we granting that our “identity” is what I’ve claimed: a shorthand for a large set of interconnected mental traits? But perhaps all this is unnecessary distraction: the question of whether the fetus “is me” only matters if the “is me” relation, whatever it is, transmits moral status. (Or some moral status: I could make a choice tomorrow to commit an evil act that would reduce that status, but that’s obviously not what we’re concerned with here.) Identity in the sense I’m using it will do that on my view, because it’s tied to the traits I consider morally important. But if Eve means something different by identity, let’s take her use for the sake of argument. What is it about identity in her sense, the sense in which the fetus “is me,” that entails that if Julian (now) deserves moral respect, so does anything standing in the “is me” relation to Julian (now)?

Infanticide: Radley joked at work the other day that he wished more folks like me were making the public pro-choice case, because millions would recoil in horror. He’s probably right. Still, a visceral reaction isn’t, in itself, a sufficient basis for an ethics, or Danielle Steel novels would be immoral too. For both social and (probably more powerful) evolutionary reasons, we would be programmed to be distressed by the killing of human infants whether it were wrong or not, so our distress can’t be taken as a reliable guide without some further support. I’ve advanced a background reason why I think most human beings are valuable, whereas most plants are not: viz., certain qualities of mind. If you (or Eve) consider what kind of moral respect a non-human alien or robot with those same qualities would be due (if we could somehow know about their inner lives), you’ll probably find that you also hold those things important. I’ll presume that Eve’s on the same page vis-à-vis conscious and rational aliens, and thinks it would be wrong to kill or harm them and so forth. So we can take the value of those things as a point of agreement. But Eve must further believe that, though perhaps sufficient, they’re not necessary for presumptive moral worth. But then I want to know what the X-factor is. In other words, I know that Eve has a “basic conviction that it’s wrong to kill infants,” but not why. It is not, by stipulation, those mental characteristics, since Eve indicates she grants moral worth even before those first brainwaves. But if not that, what?

Well hell, so much for succinct…
