Moral Intuition Poll

March 23rd, 2005 · 60 Comments

I’m having an argument with Will Wilkinson in the course of which a little thought experiment came up. I’m curious what others think. Take the following scenario:

Suppose an impenetrable black cube lands on your lawn. Simultaneously, you receive a message from God telling you that the cube is conscious, and contemplating the nature of the universe. You are 100% confident that this is true. However, the cube does not move, it does not pulsate, it’s not even shiny. There is no possibility of communicating or otherwise interacting with it. It is, however, made of a beautiful exotic material, and you could make a fortune by breaking it up into bricks for designer masonry projects. Do you really care about the revelation from God?

Now, in real life, the answer is “yes, because I’d take it as evidence I’d lost my mind.” But, for the purposes of the thought experiment, my reaction is: “Hell yes; this is a creature with rights like any person’s.” I don’t think this is some kind of aberrant response, a product of brain fever brought on by excessive Kantian theorizing. But how about everyone else?

Addendum: So, two things. First, don’t read too much into the “message from God” bit. That’s just how philosophers brush aside thorny epistemic questions that, while potentially interesting in themselves, obscure the issue they’re trying to get at. It just means “don’t worry about how you know this for sure, you just do, because it’s interesting to figure out what follows if we assume you are sure.”

The other thing: while the particular debate was in some indirect ways inspired by the Schiavo case, this isn’t actually “about” Schiavo or abortion or anything else. (The issues in this and the Schiavo case in particular are entirely distinct; your conclusion about the cube wouldn’t necessarily have any obvious connection to your position on the Schiavo business.)

Tags: Moral Philosophy

60 responses so far ↓

  • 1 DeadHorseBeater // Mar 27, 2005 at 4:32 pm

    I would think that moral status depends on where a being falls on the no-mind (stone), conscious-mind (cow), sentient-mind (human) scale. And of course, for all we know, there are further stages beyond mere human sentience.

    But moral status should also be dependent on a being having preferences. A being with no preferences is indifferent to what happens to it. It places no more value on being alive than dead, well-fed or starving. To torture a true ultra-Spock to death for our pleasure would be no wrong to him, though it might damage our character.
    Replicants seem to display preferences, even as we are told they have none. This evidence for their having preferences creates cognitive dissonance. If we were highly certain that this evidence was overpowered by other evidence, we would be justified in doing to them what we wished.
    We have no information as to whether the Block has preferences. And if it does have preferences, we do not know what they are. If destroying a sentient being against its wishes is a grave wrong, and destroying a sentient being without preferences or in accordance with its wishes is little or no wrong, we have a kind of Pascal’s bet, so we shouldn’t destroy the block.
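
    To sketch that bet in rough expected-value terms (illustrative notation of mine, not anything given in the scenario): let p be the probability that the Block has preferences and wants to persist, H the magnitude of the wrong done by destroying such a being, and g the gain from the designer bricks. Then destroying it loses in expectation whenever

    g − p·H < 0, i.e. p > g/H,

    so if H is grave enough, even a tiny p makes smashing the Block a losing bet.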

    My thought experiment:
    Reverse the positions. The Block has no way of interacting with us, short of destroying us. But it knows to a moral certainty those same facts that we know about it. The scattered limbs of human corpses yield it some aesthetic pleasure. Should it destroy us?

    Me, I’m convinced Terri Schiavo is a barely-conscious non-sentient who has no preferences of her own (too little brain function). So I’m an anti-tuber. (A hater of potatoes and yams.)

  • 2 Gil // Mar 27, 2005 at 5:37 pm

    Wow.

    The more I think about Will’s position, and the others who agree with him, the more I realize just how different the moral intuitions of libertarians can be.

    I think Will’s criterion of eliciting “a certain kind of reaction or regard” from people is just weird. Kind of a Turing test for rights. This just strikes me as a poor and arbitrary choice.

    I realize that Will stipulates that it has to have this regard from a mature and well-developed human being, but to me it seems that this simulated consciousness just makes it a sophisticated Tickle-Me Elmo. It’s interesting, and engaging, and is valuable to those who like to engage with it, but it doesn’t seem to me to be rights-bearing at all. I would have no problem turning them off and on, or allowing them to be bought and sold.

    Would Will say that Star Trek aliens who were so physically different from us that they couldn’t elicit the sort of human reaction that he describes, but had humanlike consciousness and desire to live, lack the necessary conditions for being rights-bearing?

    And if only Horton could hear the Whos, would the rest of the world be right (assuming that they believed him) to deny them rights merely because they couldn’t interact with them?

  • 3 Jadagul // Mar 27, 2005 at 7:36 pm

    Patrick: it’s not really supposed to be a brute utility calculation; it’s the potential for meaningful interaction, or the ability to meaningfully interact. So if I meet Coma Man, he still has rights; I may not be able to interact with him now, but there’s the potential for that interaction in the future. Same with the passed-out junkie; if he’s going to recover, I may call the police to have him dragged off my lawn, but he still has rights. But if he’s screwed himself up so much that he’ll never be responsive again (no idea if that’s possible, but hypothetically…) then he doesn’t have rights.

    The point about the mold is that we know (God told us) that it’s completely harmless. It doesn’t affect us in any way, except perhaps for the aesthetic displeasure of having green stuff on my shower walls. Sorry I didn’t make that clear in my last example. But since we can’t interact with it, I have no problem fumigating anyway, since I don’t want green shower walls.

    I agree with you, on the other hand, that we shouldn’t cause anything unnecessary pain, given a proper definition of unnecessary. Pointless, random cruelty to animals is bad not mainly because of what it does to the animals, but because the idea of getting pleasure from causing pain is disgusting. It’s not wrong because hurting animals is wrong, but because sadism is wrong on its face.

    DeadHorseBeater: your example about the cube is almost exactly why I hold the position I do. For that matter, since we can’t communicate with it, how do we know it’s not planning to destroy us? If there’s no way of communicating, or if there’s no possibility of mutually beneficial interaction, there’s just no point. It would be like treating my desk as if it had rights.

  • 4 bagofmice // Mar 28, 2005 at 3:03 am

    The question is self-contradictory.

    If you define intelligence as the ability to process information, then you need some evidence that the cube is processing information in order to declare it intelligent.

    However, if by definition (via god) you know the cube is intelligent, then that must mean that evidence that the cube is processing information exists, which is of course ruled out by the definition of the cube in the question.

    Therefore, the question contains two mutually contradictory definitions and asks for a resolution, which you cannot provide, because the question is fundamentally flawed.

    There is no internally consistent answer.
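
    Schematically (my notation, just to make the structure explicit), the premises are

    I(c) → E(c),  I(c),  ¬E(c),

    where I(c) is “the cube is intelligent” and E(c) is “evidence exists that the cube is processing information.” The first two entail E(c), which contradicts the third; drop any one premise and the puzzle dissolves.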

  • 5 Chris S. // Mar 28, 2005 at 11:01 am

    I have a question for the people who feel that the potential for interaction is a necessary criterion for according rights to the Cube (or a mold or whatever). Suppose instead that we’re talking about a human being in a little village on the other side of the globe. Suppose you are told by some Wondrous Infallible Machine that, if you do not want to do so, you will never interact with this person and that you will never interact with anyone who has interacted with this person (and so on, excluding all indirect interactions). (You can’t object that the idea of such a Wondrous Infallible Machine is unrealistic, since the original thought experiment involved a WIM, too.)

    Why would you have any compunction about killing that person, if you could gain something by doing so? If you say that there is something about the *potential* for interaction that is decisive, even though you are 100% sure that no interaction (direct or indirect) will in fact ever take place, then what is it about such a *potential* that could make the difference, if you know that the potential will never be realized? (I’m not saying that there is no good answer; I’m just curious to know what it might be.)

    Note that you can’t say that the mere fact of interaction with some other intelligent creatures is what matters unless you admit that you would accord rights to a Cube if you discovered that it was interacting with other Cubes, even if it is not interacting with you. And I don’t think you would want to grant that.

  • 6 Nathan T. Freeman // Mar 29, 2005 at 10:12 am

    “However, if by definition (via god) you know the cube is intelligent, then that must mean that evidence that the cube is processing information exists, which is of course ruled out by the definition of the cube in the question.”

    The communication with God *is* the evidence. Consider God to be the proxy communicator here.

    Of course, he’s apparently such an ass that he’s decided not to proxy anything except “yup, it’s still thinking.”

    Really, the scenario is useless. The presumption that you can identify consciousness through this mysterious unknown mechanism is what makes the hypothetical pointless. (And I wouldn’t often dare to say that anything Julian has to say is useless.)

    Will’s case of the Replicant is really more of a Turing test scenario, and says “it’s intelligence because we choose to call it that.”

    I honestly can’t figure out what makes either scenario particularly relevant to any kind of human moral decision.

  • 7 Chris S. // Mar 30, 2005 at 2:40 am

    To follow up on Nathan’s point, I think it may be the artificiality and unrealistic quality of the set-up that fools us into drawing certain faulty conclusions. E.g. some of us conclude that the potential for interaction is necessary for the entity to bear rights. Maybe we just can’t get rid of our gut feeling that the only way we could *actually* know if it were conscious would be if the potential for interaction existed. In other words, at a gut level (though not explicitly), we reject the starting premise (the consciousness-detecting machine) and then that leads us to all sorts of intellectual confusions.

  • 8 adam s // Mar 30, 2005 at 10:14 pm

    I don’t think consciousness is a prerequisite for rights. We each as individuals get to decide what has rights and what does not. And those rights are largely based on our own individual morals. Will sees value (imparts his morals) in the block as little floor tiles. Maybe I see value in the block as a whole, sitting in some river acting as a shoreline buffer. Maybe the next guy sees value in the block as a holy artifact and decides not to smash it, but worship it instead. Consciousness alone should not stop you from doing anything to the rock.

    I like to hassle my vegan friends about their silly beliefs. I spent six months myself as a vegetarian, and it didn’t work for me. There are lots of arguments and reasons not to eat meat.

    In one exchange, I argued that plants can feel pain too.

    I was inspired by this Tool lyric:

    “These are the cries of the carrots, the cries of the carrots! You see, Reverend Maynard, tomorrow is harvest day and to them it is the holocaust.”

    Please, I pleaded, think of the carrots.

    They replied: well, carrots don’t have central nervous systems; they can’t sense what you’re doing to them. They aren’t sentient.

    Then I came across this article:

    http://www.csmonitor.com/2005/0303/p01s03-usgn.html

    See, plants can defend themselves. They have neurotransmitters. They can sense danger and react accordingly. They are still not sentient.

    So the carrots might feel pain. They might sense the impending doom of harvest. Maybe that leads to a new morality for vegans: another being (sentient or not) should not suffer at the hands of a human being. If plants can be shown to feel (scientifically and all), or react to predation, then maybe vegans should think about all those little soybeans they consume. Maybe you shouldn’t kick the head off that dandelion sticking out of the sidewalk crack come May.

    Consciousness is not a prerequisite for having rights.

    Maybe you should carve up the block and serve it to some vegan friends.

    Or perhaps I am misusing/misunderstanding vegan morality? (Or maybe even this post.)

  • 9 Jadagul // Mar 31, 2005 at 4:02 am

    Fruitarians! I love those guys; they’re fun.

    Sorry it took me so long to get back to y’all. Schoolwork just seemed to pile up this week, for some reason.

    Chris: I come at the interaction question almost from the opposite side. Consider a world which is truly zero-sum, where the mercantilist assumption is true. There are only a few sources of value, finite in amount. You can only benefit by taking these sources of value from others; that is, mutual cooperation is impossible, because one person’s gain implies an equal loss by another person (I can’t figure out how this could actually be true, but suppose it is). I contend that in such a world there can be no rights. Affording rights to others is a suicide pact; it makes me incapable of acquiring values. Even worse, it leaves potential threats around, because those other people can seize my values and leave me value-less. We have something worse than a Hobbesian society: not only can no one trust anyone else, but even limited cooperation and peace are impossible.
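
    (To put the mercantilist assumption in symbols, with notation of my own: for any two people A and B, every outcome satisfies u_A + u_B = C for some fixed total C, so any gain to one is exactly matched by a loss to the other, and no exchange can leave both parties better off.)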

    Now suppose the world isn’t zero-sum, but I still can’t cooperate with other people. Suppose even that there’s no way to gain at all from their efforts; I can only use what I’ve gotten myself. This works out sort of like Rousseau’s state of nature: everyone wanders around, but no one harasses anyone else because there’s no point. In this world I probably have no desire to attack other people, but I also have no reason to give them special treatment. Rights only arise when I have reason to interact with other people peaceably.

    As for the button-that-keeps-me-from-interacting, I think you shouldn’t ever press that button, for the same reasons: because cooperation is good and beneficial. The only reason to press the button is that you expect him to be a net negative, probably because he’s already violated rights and committed crimes, and it’s a mechanism of self-defense.

  • 10 adam s // Mar 31, 2005 at 8:47 am

    I missed something, I think. Rights are something we all can assume. The degree to which we resist suppression of those rights is the degree to which we are free. The Replicant is not aware of this because it is not conscious of self, and therefore not conscious of its rights. The Replicant cannot say: hey, what about my rights?

    Conversely, the block can be sitting there, perfectly aware of itself, loving life, but if it cannot resist an assault on its rights (well, really it can, but we have tools for that), then rights aren’t assumed to be inherent.

    The degree to which we resist (suppression of those assumed rights) is the degree to which we are free.

    Free the Replicant, do what you will with the block.