
Moral Intuition Poll

March 23rd, 2005 · 60 Comments

I’m having an argument with Will Wilkinson in the course of which a little thought experiment came up. I’m curious what others think. Take the following scenario:

Suppose an impenetrable black cube lands on your lawn. Simultaneously, you receive a message from God telling you that the cube is conscious, and contemplating the nature of the universe. You are 100% confident that this is true. However, the cube does not move, it does not pulsate, it’s not even shiny. There is no possibility of communicating or otherwise interacting with it. It is, however, a beautiful exotic material and you could make a fortune by breaking it up into bricks for designer masonry projects. Do you really care about the revelation from God?

Now, in real life, the answer is “yes, because I’d take it as evidence I’d lost my mind.” But, for the purposes of the thought experiment, my reaction is: “Hell yes; this is a creature with rights like any person’s.” I don’t think this is some kind of aberrant response, a product of brain fever brought on by excessive Kantian theorizing. But how about everyone else?

Addendum: So, two things. First, don’t read too much into the “message from God” bit. That’s just how philosophers brush aside thorny epistemic questions that, while potentially interesting in themselves, obscure the issue they’re trying to get at. It just means “don’t worry about how you know this for sure, you just do, because it’s interesting to figure out what follows if we assume you are sure.”

The other thing: While actually the particular debate was in some indirect ways inspired by the Schiavo thing, this isn’t actually “about” Schiavo or abortion or anything else. (The issues in this and the Schiavo case in particular are pretty utterly distinct; your conclusion about the cube wouldn’t necessarily have any obvious connection to your position on the Schiavo business.)

Tags: Moral Philosophy



60 responses so far ↓

  • 1 jordan // Mar 23, 2005 at 11:50 am

    Point of Clarification: do you KNOW that breaking the block down into designer masonry projects harms the consciousness?
    (otherwise one might say that breaking it down is actually helping it reproduce, in an amoebic sort of way)

  • 2 Julian Sanchez // Mar 23, 2005 at 11:59 am

    There’s no such thing as a Point of Clarification, goddamnit. But yes, the point is whether you can destroy the consciousness.

  • 3 Marie // Mar 23, 2005 at 1:35 pm

    My intuition: no question; the block should not be harmed.

  • 4 fling93 // Mar 23, 2005 at 2:42 pm

    Do we know that there’s no possibility of it interacting with anything else (e.g. other cubes on other random lawns)?

    I guess I’d still refrain from harming it either way.

    And the fact that it can’t communicate with you might actually be a Godsend, cuz it might turn out to be like that really annoying Wisdom Cube that appeared on “Aqua Teen Hunger Force.”

  • 5 Glen Whitman // Mar 23, 2005 at 4:36 pm

    By any chance did your argument emerge from a discussion about the Schiavo case?

    My intuition’s the same as yours: if it’s conscious and thinking, it has rights. However, I can see the alternative intuition, which is that somehow thinking’s not *quite* enough. Maybe you should have to interact with the world in some way.

    Indeed, this may be part of what’s implausible about the story. Thinking is driven in part by the need to interact with the world, and vice versa. In the absence of real problems imposed by scarcity and other constraints of the world, thinking would be utterly unnecessary; we might ask why the thinking function evolved at all. But this black cube has none of those problems.

  • 6 Glen Whitman // Mar 23, 2005 at 4:36 pm

    Hey, did I just claim that necessity is the mother of cognition?

  • 7 Alex Knapp // Mar 23, 2005 at 6:07 pm

    How, exactly, does one go about breaking an impenetrable block?

    At any rate, I’d say that there isn’t enough information to answer this question. For one thing, what level of consciousness is the block at? A snake’s level? A snake is conscious, and in its own dim way is probably contemplating the universe. But I have zero moral qualms about killing a snake and using its meat for stew and its skin for boots.

    For another, the fact that God is giving me this message raises the question of whether the block has a soul. And if it does, how does its contemplation of the universe in the afterlife differ from its contemplation in this earthly life? If it doesn’t differ, then what does it matter if I destroy it?

    Not having this and other information, though, my inclination is that it’s probably wrong to destroy the block. But that wouldn’t stop me from finding some way of making a buck off it–maybe by displaying it as artwork or something without destroying it.

  • 8 Daniel // Mar 23, 2005 at 6:37 pm

    Considering the cube is silent and unmoving I think its value to the world is pretty limited. Frankly, in that situation I’d probably be so surprised by that communication from God (and proof, therefore, of his existence) that I’d forget all about the damn cube and feel compelled to go to church regularly or something. I realize that wasn’t really the point. I’d probably keep the cube intact but only because I care about the revelation from God, as in “God told me something and I will commemorate it by keeping this block on my lawn,” not really about the essence of the cube itself, which is essentially the equivalent of Terri Schiavo or Sunny von Bulow. This is not to say that such people should be broken up into bricks, but fundamentally such people are unable to take part in the very essence of human interaction. So it’s conscious and contemplating the nature of the universe. But the cube won’t be able to do anything with this contemplation. Effective contemplation of the universe is not just an introspective process; it’s an active search, it requires effort and, well, movement and speech. One would be justified (though not entirely so) in breaking up the cube.

  • 9 Nathan T. Freeman // Mar 24, 2005 at 2:04 am

    Alex and Daniel are leaning in the right direction here — the key to the scenario is the message from God. The fact that you’re receiving this message with what can only be described as metaphysical certainty moves it entirely out of the realm of any relevant human experience. You might as well ask “what would you do if you received a message from God that it was 100% certain you should enjoy green eggs and ham?” Would your response center on green eggs and ham, or on the fact that you now have a certain belief in a supreme being and that he cares enough to communicate with you?

    The question about the cube ends up falling into a religious discussion. What is the moral agency of the cube? Does it worship God? Can it journey to the afterlife? If so, can it contemplate the universe there? Is the Christian premise of absolution accurate, and therefore you can simply ask for forgiveness from a priest after you’ve made all your money? Who cares about communicating with the cube — all the important questions are tied up in communicating with God!

    Anyway, my point is that if this is a thought experiment about the Schiavo case, (which I know almost nothing about because it ain’t exactly news over here in Africa) then you have to remember that the opinion of a doctor on the cognitive activity of a person is NOT a message from God. I work with them every day — doctors are frequently wrong.

  • 10 Julian Sanchez // Mar 24, 2005 at 6:25 am

    The particular thought experiment was prompted by the Schiavo case, but this actually ISN’T an argument about the Schiavo case directly; we were having it before she got the 24/7 coverage.

    And c’mon, the “message from God” thing is not the point; it’s just philosopherese for “don’t worry about how you know this, you just do.” Considerations about souls and afterlives are beside the point. Ditch the “god” language and pretend someone’s built a 99.999% foolproof consciousness detecting machine (though not one, to preserve the conditions of the experiment, that allows communication of the CONTENT of consciousness in either direction).

  • 11 Alex Knapp // Mar 24, 2005 at 10:37 am

    Ditch the God language, and the question over the LEVEL of consciousness remains. If all we know is that it is conscious, maybe erring on the side of preserving the block is in order, but destroying it wouldn’t necessarily be wrong, absent knowledge that this is a human-level or higher consciousness.

  • 12 Will Wilkinson // Mar 24, 2005 at 10:42 am

    I think the “message from God” actually rigs the thought experiment to your advantage, Julian. I obscure my own point by stipulating it in that way. My point is largely epistemological. But it is epistemological in an anti-Cartesian way.

    Let me back up… The argument, folks, is whether consciousness, the technicolor inner theater of experience, is relevant to a being’s status as a rights holder. My position is that consciousness is neither necessary nor sufficient. Julian, as far as I can tell, thinks consciousness (of a certain quality at least) is both necessary and sufficient.

    OK. So, The Cube. I want to say that a being’s rights-status is contingent on its ability to elicit a certain kind of reaction or regard from a mature, well-developed human being. Human consciousness is in most cases sufficient to cause behavior that elicits the right sort of regard. Indeed, it is that distinctive form of reaction that is our best evidence for the existence of consciousness. But I want to say that the fact of consciousness is orthogonal to the main issue.

    If you are a Blade Runner-style replicant, and I cannot tell the difference between you and a human, and, say, I fall in love with you, the fact that you have no inner life is just irrelevant. And it’s not just that I don’t know that you don’t. Suppose Tyrell himself tells me that you are a replicant, and that you have no qualitative, subjective, first-person experience, because the Tyrell Corporation can’t solve the “hard problem”? Then, I think, I experience a kind of extreme dissonance. My BODY responds to you as if you are conscious, and this is totally involuntary. The abstract knowledge that you are not conscious just can’t penetrate or override my natural reaction.

    Imagine: You, the replicant, are crying, telling me you ARE conscious, and that you do not know why this horrible man would lie, and that I should trust you, because you love me. And, I love you too, right? RIGHT?!

    I also have no reason whatsoever to believe that Tyrell would lie. So in a “Cartesian” way, I know you are a replicant. I say it doesn’t matter. I will embrace you, stroke your hair, and tell you that I believe you. And I actually will believe you, concluding, perhaps mistakenly, that Tyrell doesn’t really understand what it means to be conscious.

    The point is, it doesn’t really matter if you are conscious or not, only that a normal person reacts to you in the right sort of way. So, consciousness: not necessary.

    The Block was meant to show that consciousness is not sufficient. The thing that matters about consciousness is that it enables a certain distinctively valuable kind of interaction. Your intuition that you should protect The Block because it is conscious is spillover from your extremely well-supported and integrated generalization that conscious beings afford a certain kind of distinctively valuable interaction. But, ex hypothesi, there is no possibility for such interaction. My contention is that once you isolate the fact of consciousness from the possibility of interaction (which is admittedly difficult to do, since they do in fact tend to go together), you’ll come to conclude that the mere fact of consciousness is irrelevant to moral/rights status.

    Free the Replicants! Smash The Block!

  • 13 Kat // Mar 24, 2005 at 11:26 am

    I’m with Will on this one. But if it *were* shiny, man oh man would I think differently!

  • 14 R. Daneel Olivaw // Mar 24, 2005 at 1:31 pm

    I too am with Will on this, but then again, maybe I’ll just throw a femur bone up in the air and call it a day.

  • 15 Kriston Capps // Mar 24, 2005 at 2:05 pm

    Should anyone need to see our poor, sad block before deciding whether to smash it to bits, Tony Smith’s “Die” (1962) probably offers a sufficient resemblance:

    http://www.courses.rochester.edu/seiberling/AAH128/IMAGES/IMG074.JPG

    Who would smash such a fine-looking cube! Anyway, that was terribly OT.

  • 16 PG // Mar 24, 2005 at 2:44 pm

    Don’t you get tired of these thinly-veiled abortion metaphors? Why don’t you just tell the nice people about your end table built of fetus hands?

  • 17 fyodor // Mar 24, 2005 at 2:47 pm

    Is the black cube in as much pain and misery as Terri Schiavo is likely in if she actually possesses consciousness?

    Okay, I don’t know for sure that Schiavo would be just as glad to die if she’s conscious. But I think it’s very likely. And that, as well as what people who knew her and have assumed responsibility for her say about her, is all we can know.

  • 18 Julian Sanchez // Mar 24, 2005 at 2:47 pm

    Believe it or not, Pete, this one’s got nothing to do with abortion. Though the end table IS lovely.

  • 19 Will Wilkinson // Mar 24, 2005 at 3:09 pm

    If I had a ceramics studio, I would make such an end table. Wouldn’t it be creepy-cool . . . a bunch of tiny fingers supporting the glass table-top?

  • 20 Will Wilkinson // Mar 24, 2005 at 3:15 pm

    Kriston, I REALLY want to smash Tony Smith cubes!

  • 21 Marie // Mar 24, 2005 at 10:57 pm

    I would keep the block, and since it was here to collect information, I would talk to the pretty block day and night. I would talk, and talk, and talk, and I am just pretty sure, if this block is truly sentient, it would do one of two things: commit suicide or beg God to euthanize me.

  • 22 harlan // Mar 25, 2005 at 1:07 am

    Since you’ve clarified your thought experiment thusly (“…the ‘message from God’ thing is not the point; it’s just philosopherese for ‘don’t worry about how you know this, you just do’”), the answer is simple:

    A. Sell it like a head of Wagyu beef

    or,

    B. Of necessity, you must dedicate your life to harming NO creature, since all creatures have some level of consciousness; you MUST oppose all war, commerce which harms other creatures, the death penalty, abortion, etc…

    You must only live a simple life, eating only the minimal amount of plants–which you’ve grown–that is necessary to sustain life. In a word, you must become like a Buddhist monk.

    These are really your only options if you are to take the question seriously. Either consciousness is an absolute good in itself or else you adopt the attitude that most of us have: that value only exists for people like ourselves. It is the latter which allows us to support war, for example, which is the objectification of “the other”.

    Harlan.

  • 23 Timothy Waligore // Mar 25, 2005 at 6:53 am

    “There is no possibility of communicating or otherwise interacting with it.”

    I think that destroying it is a pretty big form of interaction, no?

    “I don’t think this is some kind of abberant response, a product of brain fever brought on by excessive Kantian theorizing.”

    I wonder if there’s not *enough* Kantian theorizing in this thread. Where’s the discussion of practical reasoning and Kant’s later political works?

  • 24 Peter M. Jaworski // Mar 25, 2005 at 2:19 pm

    What is the motivation for respecting the cube’s “rights”? It seems to me like there isn’t any. So chop it up.

    If we’re contractarian about this, the fact that the cube cannot harm us, deal with us, or do anything with us–besides just ‘sit there’–means that we can probably do just what we’d like with it. Consider the fact that we chop up cows to our heart’s content and only a semi-small (but growing) fraction of us gives a cow’s ass. The cube is like a cow.

    We can be Humean/Smithian about it and decide it on the basis of whether or not we can sympathize with the thing. But we don’t say that it has “rights,” we just don’t harm it because, in a way, it sort of harms us (we worry the cube is harmed and don’t like feeling that way).

    Rights are a deal we make with others. If we can’t make deals with things, then they don’t have “rights.” We can’t make a deal with the cube, so the cube has no rights. I don’t think any other sort of conditional makes sense (like: ~If~ conscious ~then~ has rights. Why not, ~if~ hair ~then~ rights? Or whatever?).

    A caveat: Consider all the rest of us who somehow feel like consciousness is sufficient for “rights.” We can respect the cube, and not cut it up, ~not~ for the cube’s sake, but for the sake of all those other people who care about the cube. This, I take it, is why we don’t cut up dogs and cats in North America. We don’t care about the dogs and cats, we care about the relationship people have with cats and dogs, and we respect them when we don’t chop the animals up.

  • 25 Peter M. Jaworski // Mar 25, 2005 at 2:29 pm

    Wait, wait. This reminds me of that ‘Deep thought’ by Jack Handey:

    “If trees could scream, would we be so cavalier about cutting them down? We might, if they screamed all the time, for no good reason.”

  • 26 ajmac // Mar 25, 2005 at 2:51 pm

    First visit to your blog. You are all to be commended for a civil discussion. And the Jack Handey quote.

    My perspective is that of an orthodox Christian and so, of course, my assumptions are quite different from those implicit in the sentiments expressed above. From the Christian perspective, neither sentience nor the ability to interact is relevant. Rather, intrinsic value, on which the right to life rests, is derived from creation in the image of the Creator.

    If the Christian worldview is to be believed, then we are all endowed by our Creator with reflections of His character. Most notable among these is the ability to exercise will. Of course, unlike the God of the Bible, who exercises His will over all eternity, we have power to exercise our will only in the present and only over certain physical variables that are within our limited control.

    So, thinking does not make us human. Rather, willing — loving, hating, creating, destroying, welcoming, shunning — is the sine qua non of personhood. Infants have wills. Senile people have wills. The replicant in the Blade Runner hypothesis above obviously has a will. So, each is a person.

    The block is not.

    That’s the orthodox Christian perspective, for what it’s worth.

  • 27 Julian Sanchez // Mar 25, 2005 at 3:01 pm

    Well, while the Christian perspective isn’t particularly relevant to the debate Will and I were having, I’m not sure the analysis above makes sense. It’s not obvious to me, for instance, that Will’s replicant “wills” (or loves or hates) any more than a toaster wills the toast to pop up. If behavior without an inner life can count as “willed”, then anything any inanimate object does is also “willed”, which can’t be right.

    I’d be inclined to say that the block, on the other hand, certainly can will (we’re assuming its mind is at least approximately humanlike). It can, for instance, decide to focus on one topic of contemplation or another, or to concentrate more or less intensely, perhaps to stop itself from being distracted by whatever distracting thoughts blocks have. Surely a Christian of all people would concede that important instances of willing can be purely internal acts.

  • 28 ajmac // Mar 25, 2005 at 3:06 pm

    I don’t see why the Christian perspective is any less relevant than your own.

    I agree, given the additional information you just provided. If the replicant cannot will (I am not sure what “inner life” means) and the block is able to make decisions, then I would certainly concur that the block is a person and the replicant is not, no matter their physical forms.

  • 29 Julian Sanchez // Mar 25, 2005 at 3:15 pm

    I just mean that it’s not relevant in that our specific argument takes for granted that the Christian perspective is wrong, so it’s a separate issue. Analogize two evolutionary biologists debating a fine point of theory. A creationist can interject that the whole framework is misconceived, but whether that’s right or not, it doesn’t really contribute to the resolution of the debate they’re having.

    By “inner life” I just mean there’s some subjective character to the thing’s experience. The idea behind the Replicant (as we’re using the term here) is that it can move and smile and make speech-noises, but it’s essentially like a character on a movie screen: It’s not doing any feeling or understanding or what have you above and beyond the behavior.

  • 30 ajmac // Mar 25, 2005 at 3:20 pm

    My apologies. I missed that a certain set of assumptions was presumed. Again, my first visit here. I will in the future try harder to scope out the parameters of the debate before jumping in.

    Anyway, it’s always refreshing to read some measured discussion of important topics from people with whom I disagree. I am sure to visit again.

  • 31 The Accipiter // Mar 25, 2005 at 3:37 pm

    The most important aspect of the hypothesis is that God spoke to me. That would drive the rest of my decisions, I think. Because I’m also 100% certain the cube is conscious and contemplating the nature of the universe, I’d probably sit in my lawn for a few days mesmerized by the synergistic effect of those two amazing facts.

    Putting that aside to an extent, I’d leave the rock alone. Because I know it’s conscious and contemplating the universe, which includes my lawn, my house, my family, and me, I’d let it contemplate. Who doesn’t like being contemplated? Plus, I like to contemplate free from others’ ideas.

    Who am I to disturb it? There’s no indication that it’s bothered by me, right? I can mow around it, I imagine. And, because it’s so exotic and beautiful–even to people who don’t know of its spiritual or somehow animate nature–I’d be proud to have it.

    That’s not to say it wouldn’t drive me absolutely nuts because I’d think about it and what it was doing and why God spoke to me every waking minute.

    The idea of making a buck off it is ridiculous.

    But I think drilling in ANWR is ridiculous too.

    If this is a Schiavo thing, it’s lacking in analogical power, but I’ll leave it at that.

    Thanks for the challenge.

  • 32 Matthew Yglesias // Mar 25, 2005 at 3:58 pm

    Will’s comment on the Replicants is exactly right. Smash the block!

  • 33 wbaude@crescatsententia.org // Mar 25, 2005 at 4:40 pm

    Limited agreement with Will and Matt here. I don’t find the thing’s consciousness sufficient to get full rights-holding status. But I’m not sure I’d smash it either, at least not right now.

    It arouses some sort of sympathy in me nonetheless, in much the same way a dolphin does, so the unspecified “fortune” (especially when combined with the hassle of breaking, marketing, income tax reporting, etc.) probably wouldn’t be enough to motivate me to do anything about it in the short run other than put it on my bookshelf.

    That said, if I were ever really hard up for money, then I might consider smashing it apart and selling it, but I’m not sure where that threshold would be. Probably about the same threshold of poverty at which I’d be willing to eat my pet dog.

  • 34 Glen Whitman // Mar 25, 2005 at 7:43 pm

    Will, you’re such a good writer that I just want to agree with everything you say. But something’s amiss with your reasoning. In the case of the Replicant, you’re willing to appeal to human sympathy, a gut “feeling” that this being is lovable and worthwhile and fun to be around. You decline to dissect that intuition. Okay. But when it comes to The Cube, and we (or at least some of us) have an intuition that its consciousness imbues it with moral standing, you’re willing to dissect the intuition and take it apart. You conclude that this intuition results merely from the high correlation between consciousness and all those nice human qualities (“valuable interaction,” as you say) that are also displayed by the Replicant. You further conclude that the valuable interaction is the meat of the matter, while the consciousness is just window dressing. Couldn’t we have parsed it the other way? Maybe the consciousness is the meat of the matter, and valuable interaction the window dressing. Or maybe consciousness and valuable interaction are separately necessary and jointly sufficient. Or maybe one or the other is sufficient, but neither one is necessary.

  • 35 proletarian jackass // Mar 25, 2005 at 11:46 pm

    You all are an abomination to the English language. No wonder no one pays attention to philosophers. I call bullshit on this entire discussion.

  • 36 J. Goard // Mar 26, 2005 at 2:04 am

    This thought experiment seems isomorphic to those where you can press a button to cause harm to another person (on the authority of a system designer), without any feedback at all to you. Perhaps the former is somewhat purer, in that a) no third party witnesses any pain or other negative change (incl. cessation) of the target’s consciousness, and b) counterfactual worlds in which *you* could witness such change are not near enough to the actual world. But the latter can be and has been tested.

    My judgment: no way I’d knowingly harm or kill the cube for a comparably trivial benefit. For what it’s worth, I have a lot of trouble putting stock in ethical reasoning that isn’t fundamentally consequentialist, although I readily acknowledge the apparently insuperable problem of defining a reasonable calculus. (BTW, my biggest point of outrage in the Schiavo case is the — to my mind — wholly absurd amount of stock our society puts in the active/passive distinction, such that slowly starving someone passively is acceptable, but actively administering a lethal sedative is not.)

  • 37 Luka Yovetich // Mar 26, 2005 at 4:34 am

    My intuition is that the cube has rights. Another intuition I have is that I might not have best reason to respect those rights.

    What do you think Julian (or anybody)? If some being has rights, does that mean that all people or all normal people always have best reason to respect those rights?

    And if you conclude that all (normal) people don’t always have best reason to respect the rights of others, then what seems more important to you, respecting rights or acting on what we have best reason to do?

    Just curious.

  • 38 Chris S. // Mar 26, 2005 at 8:24 am

    Given the unnatural set-up (a 99.999% foolproof “consciousness” detecting machine), the debate, although interesting, tends to slip and slide unless one starts out with a very clear definition of what is meant by “consciousness”. In other words, the machine tells us with great certainty that the cube has some quality “smdg”. We want to know what rights having smdg entails. Well, we’d better begin by defining exactly what the machine means by smdg! Is it just the ability to contemplate the universe (which itself should be defined better)? Is it “a technicolor inner theater of experience” (what’s it like to be a bat)? Is it the ability to interact in a particular way (what way?)? It won’t do to leave “consciousness” as a weasel word.

    Defining “consciousness” better in that way would improve the debate, but it would still be removed from arguably a more important problem. In real life, we don’t have a “consciousness” detecting machine. All we have are the entity’s behaviors (including communications, brain waves, chemical changes, etc.). Given that we can reach conclusions about “consciousness” (or some other relevant “smdg”) only by way of observing these behaviors, let’s not make the metaphysical leap too quickly and forget the evidence. The question then becomes: What do we need to observe about an entity (including communicating with it) to convince us that it should be entitled to certain rights? That might point us in a better direction.

    If we focus on our experience of observing and interacting with the entity, that gets us to ajmac’s comments (for which he/she has no need to apologize), setting aside the theological aspects. Maybe thinking, although necessary, is hardly sufficient. As pointed out by Will, what matters more may be certain attributes associated with consciousness but not consciousness per se.

    If we do some thought experiments, I think we will see that, when we speak of “consciousness”, we tacitly bring in a huge amount of baggage based on the kinds of conscious creatures we know — i.e. ourselves. For example, suppose we can communicate with the Cube and we ask it whether it cares if we break it up. It replies (and our magic machine assures us that it is responding “honestly”!): “You are right that I am conscious: not only do I contemplate the universe, but I contemplate myself contemplating the universe. Nevertheless, your question has no meaning for me. I have no emotions; I feel no pain; I have no instinct of self-preservation; my reason for being is solely to serve those who created me. I do not ‘care’ about anything.” Well, now, wouldn’t that influence our decision about whether or not to break it up? So is it only “consciousness” that matters, or in addition some other qualities that we associate with “personhood”?

  • 39 ads // Mar 26, 2005 at 9:45 am

    The answer is: no, the cube is not a rights holder. But, if you were absolutely sure of its consciousness, you’d still want to respect its “rights,” so as not to descend down that slippery slope of killing life.

    Although, in reality, the opposite case, of something that acts like it has consciousness but that the message from God reveals to be just a robot, is probably a stronger case for respecting rights, even if rights don’t exist.

  • 40 asdfds@hotmail.com // Mar 26, 2005 at 9:49 am

    Anyone know where I can pick up one of those hot replicants?

  • 41 Medium Lobster // Mar 26, 2005 at 10:44 am

    I think Will’s slightly off on the replicants and very wrong on the block.

    I love the replicant despite its lack of consciousness because it can interact with me in a way that a conscious person could. But what I’m responding to isn’t merely the replicant’s physical interaction with me, but its simulation of consciousness, which compels me to treat it as if it were conscious. Hence, the replicant scenario does not demonstrate that consciousness is not necessary, nor that interaction is sufficient or necessary, but that the appearance of consciousness is sufficient.

    In the block scenario, the block may not appear conscious, but this doesn’t matter because I know that it’s conscious. I don’t know what the block is thinking, nor can it interact with me or communicate with me in order to convey this, but I do know – thanks to my handy divine revelation – that it is thinking. Given my certainty that the block is thinking, I would find the prospect of destroying another sentient being morally repugnant, and spare the block. Hence, consciousness is sufficient.

    Let me add a third scenario: I’m sitting by the bed of a comatose man. I’m perfectly aware that no conscious activity is going on inside his head, and in fact, he’s been lying in this bed for the last decade. However, this is not any ordinary comatose man – it’s Coma Man, the famous superhero, who has the miraculous power to go into a comatose state for a period of ten years every other ten years and awake like clockwork. In fact, one hour from now he’ll be getting up again. But for the moment, there’s nothing in his head at all – no experience and no consciousness, and certainly nothing to interact with or simulate consciousness. Is it morally justified to switch off Coma Man’s life support and harvest his organs, or does he have human rights?

  • 42 Chris S. // Mar 26, 2005 at 11:47 am

    Medium Lobster’s “block scenario” nicely illustrates one point I was making. He presumes that thinking implies being sentient. One definition I just found on the Web is this: “A sentient being has intelligence, is aware of its own existence, and aware that there is a ‘big picture’.” Another definition refers to being aware of sense impressions. These seem to go beyond mere *thinking* — at least to me. So unless one defines clearly what is meant by the Cube’s “consciousness” (given that we can’t observe any behavior resulting from that consciousness), the discussion will just keep slipping and sliding all over the place. Of course, intellectual slipping and sliding can be lots of fun!

  • 43 Medium Lobster // Mar 26, 2005 at 12:19 pm

    Definitions of “consciousness” could fill up a dozen threads, but within the context of these thought experiments it’s less important to pin down its precise definition than to describe what consciousness means in relation to moral intuition and rights.

    In the block scenario, consciousness is sufficiently distinct from not-consciousness that it would compel me to spare the Contemplating Block whereas I don’t feel any compulsion to preserve a non-conscious block of matter – say, a block of hardened concrete. I don’t need a precise definition of consciousness to be able to say that the experience of consciousness is different from the experience of being an ordinary rock; similarly, I don’t need that precise definition to say that the experience of the Contemplating Block is different from that of some object which lacks consciousness (and thus has no experience). The argument here is over how significant that difference is, not how to graph it.

  • 44 Chris S. // Mar 26, 2005 at 3:25 pm

    There are two extremes. First, there's an inanimate block of matter. It's hard to imagine having any compunction about chopping that up (unless you're a pan-psychic of some sort). At the other extreme, there is a cube that we know (with near certainty) has a conscious, thinking inner life very much like our own, complete with beliefs, preferences, hopes, desires, moods, emotions, cares, humor – the works. I think most of us would agree that it would be wrong to smash that one up, even if it doesn't look at all like a living thing and even if we can't communicate with it at all. To me, the interesting question is where and how we draw the line between these two extremes. For example, what if the Cube has an inner conscious life like ours, except that it has absolutely no feelings, emotions, moods, or preferences (an ultra Spock)? Does that make any difference? What if we are told that, whatever it is, it evolved naturally, or alternatively that it was created by another intelligent being for instrumental purposes? Would that make a difference? These are the questions that are interesting to me, and they require defining with some precision what attributes the Cube has or does not have.

  • 45 Patrick // Mar 26, 2005 at 8:56 pm

    My first thought was also that I would be too baffled by the proof that there is a God to pay much attention to the cube (supposing I wasn't already accustomed to divine communication).

    Next I would begin to think that if God told me this cube is conscious, maybe she is testing me somehow, and it might be wise not to break it up into bricks. After all, if God exists, she probably is somewhat stronger than I am, and it wouldn't hurt to avoid messing with her.

    If I could be sure that I would not be punished if I broke the cube, that would be slightly more interesting. But I'm pretty sure I wouldn't break it up anyway, if I thought it was conscious. It wouldn't bother me on the lawn, and though I wouldn't make a fortune, the situation wouldn't change from what it was before the cube landed on the lawn. And an impenetrable cube might be difficult to break up into bricks, anyway.

    And even though the cube is not shiny, it is, as you said, a beautiful exotic material. Maybe I would like to keep it as a decoration on my lawn. Or maybe I would even remove the lawn and build a Japanese garden around the cube instead.

  • 46 Gil // Mar 27, 2005 at 1:35 am

    I’m with Julian and against Will. In a comment, Julian said that we assume that its mind is approximately humanlike. That’s good enough for me.

    If I somehow slipped into a parallel universe where I couldn't interact with the world, and the people there could only sense a static cube-like image of me and my human-like consciousness (and had the ability to kill me), I would certainly hope that they would respect my rights, even if I wasn't useful to them.

    I think that humanlike consciousness is inherently valuable, and should be respected (absent significant costs imposed on others). It can still consider and solve problems, create and appreciate the creativity of others, etc.

    Leave it alone!

  • 47 Jadagul // Mar 27, 2005 at 5:49 am

    Well, if I were trapped and had no way at all of interacting with the world, I think I would go insane and try to commit suicide, except I couldn’t, because I couldn’t interact with the world to destroy myself. But that’s because I’m a hyperactive neurotic who can’t sit still for more than ten seconds.

    As for the block…I think I’ve come to the conclusion that others’ rights come from our ability to meaningfully (and perhaps gainfully?) interact with them. Thus, if I have no way of interacting with the cube as an acting mind, I don’t treat it as a rights-holder (for that matter, if it can’t interact with the world around it, I think that prevents it from being an acting mind–in the sense that it has desires and acts to try to see them realized).

    On the other hand, I think I do give Will’s replica rights, because even if it doesn’t technically have consciousness, it can act (or pseudo-act; even if it’s not conscious, it has desires that it attempts to see realized) and it and I can cooperate (assuming I possess something it could potentially want. If it’s totally self-sufficient and has no desires either for material goods or for emotional connection, and thus there’s no way for it to cooperate with me, things get stickier).

  • 48 Patrick // Mar 27, 2005 at 6:37 am

    Why would they get stickier? Do I have the right to destroy somebody else just because s/he doesn’t have a way to cooperate with me? Does the value of another individual depend on his/her utility to me?

    I might also wish to die, if I couldn’t move or interact with other people, but maybe the cube is different and totally satisfied in contemplating the nature of the universe.

  • 49 Jadagul // Mar 27, 2005 at 7:34 am

    Patrick:
    The thing about wanting to die wasn't meant to be terribly serious. As for the other question…basically, yes. I don't think rights are some sort of magical result of “consciousness”; I think they result from our ability to meaningfully and gainfully interact–though that's a bit vague.

    An example: suppose we discovered that smallpox viruses are conscious (I have no clue how, but they are. Just trust me. They keep their brains stored in an alternate dimension, or something). Even suppose we can communicate with them. But once we communicate with them, all they say to us is something like “Screw you, we don’t want to compromise. We’re just going to keep on infecting you.” We have no way of cooperating with the viruses; we can’t have any mutually beneficial interaction; so we don’t give them rights.

    Pushing it further, suppose we discover some form of mold. It’s conscious, but it only uses this consciousness to contemplate the fundamental beauty of being shower mold. It generates its own energy and survives in most conditions; it has no other needs, so there’s nothing we could give it. It doesn’t do anything, so there’s nothing it could give us. Even if it’s not directly harming me, I think I’d be okay with Lysol-ing it out of my shower. The fact that it’s conscious doesn’t give it rights; the potential for interaction does.

    In the example of Will’s construct, even if the construct in some sense isn’t “conscious,” I’m obviously able to have meaningful interactions with it. It qualifies as a rights-bearer. When it turns itself off and no longer is capable of interaction, it becomes scrap.

  • 50 Patrick // Mar 27, 2005 at 8:27 am

    OK, even if smallpox viruses were conscious, they would also threaten my life. It would be me or them, so I would have the right to defend myself.

    As for the mold, if it was conscious, I guess it depends on whether it would threaten me somehow or not. I know I’m allergic to some things, maybe also to molds.

    But you are right that there are also differences between levels of consciousness. Animals are probably conscious on some level as well, but I’m not a vegan. I guess that means I also accept that certain conscious creatures are destroyed to benefit others, if they are on a lower level of consciousness. (However, as animals have some level of consciousness, I think we shouldn’t cause them unnecessary pain.) I assumed that if the cube was contemplating the nature of the universe, it would be at least on the same level of consciousness as we are.

    Therefore, as I believe that I must accord to others (and by “others” I mean creatures on the same level of consciousness as we are or higher) the same rights I enjoy myself, I think I can’t destroy it just because it doesn’t benefit me, without agreeing that somebody who doesn’t get any benefit from me has the right to destroy me.

    But I assume that the cube in question wouldn’t threaten me in any way, so I don’t think that the mere fact that it doesn’t benefit me in any way would give me a right to destroy it. Otherwise I could just as well go out into the streets and kill strangers who don’t agree to serve me somehow, or who at least don’t speak any language I know.

    As the lawn owner I would have the right to remove the cube from my lawn without destroying it, if it disturbed me, but I don’t think I would have the right to destroy it as long as it didn’t threaten me.

    If a junkie passed out on your lawn and you couldn’t communicate with him, would that give you the right to kill him?

  • 51 DeadHorseBeater // Mar 27, 2005 at 4:32 pm

    I would think that moral status depends on where a being falls on the no-mind (stone), conscious-mind (cow), sentient-mind (human) scale. And of course, for all we know, there are further stages beyond mere human sentience.

    But moral status should also be dependent on a being having preferences. A being with no preferences is indifferent to what happens to it. It places no more value on being alive than dead, well-fed or starving. To torture a true ultra-Spock to death for our pleasure would be no wrong to him, though it might damage our character.
    Replicants seem to display preferences, even as we are told they have none. This evidence for their having preferences creates cognitive dissonance. If we were highly certain that this evidence was overpowered by other evidence, we would be justified in doing to them what we wished.
    We have no information as to whether the Block has preferences. And if it does have preferences, we do not know what they are. If destroying a sentient being against its wishes is a grave wrong, and destroying a sentient being without preferences or in accordance with its wishes is little or no wrong, we have a kind of Pascal’s bet, so we shouldn’t destroy the block.

    My thought experiment:
    Reverse the positions. The Block has no way of interacting with us, short of destroying us. But it knows to a moral certainty those same facts that we know about it. The scattered limbs of human corpses yield it some aesthetic pleasure. Should it destroy us?

    Me, I’m convinced Terry Schiavo is a barely-conscious non-sentient who has no preferences of her own (too little brain function). So I’m an anti-tuber. (A hater of potatoes and yams)

  • 52 Gil // Mar 27, 2005 at 5:37 pm

    Wow.

    The more I think about Will’s position, and the others who agree with him, the more I realize just how different the moral intuitions of libertarians can be.

    I think Will’s criterion of eliciting “a certain kind of reaction or regard” from people is just weird. Kind of a Turing test for rights. This just strikes me as a poor and arbitrary choice.

    I realize that Will stipulates that it has to have this regard from a mature and well-developed human being, but to me it seems that this simulated consciousness just makes it a sophisticated Tickle-Me Elmo. It’s interesting and engaging, and is valuable to those who like to engage with it, but it doesn’t seem to me to be rights-bearing at all. I would have no problem turning them off and on, or allowing them to be bought and sold.

    Would Will say that Star Trek aliens who were so physically different from us that they couldn’t elicit the sort of human reaction that he describes, but had humanlike consciousness and desire to live, lack the necessary conditions for being rights-bearing?

    And if only Horton could hear the Whos, would the rest of the world be right (assuming that they believed him) to deny them rights merely because they couldn’t interact with them?

  • 53 Jadagul // Mar 27, 2005 at 7:36 pm

    Patrick: it’s not really supposed to be a brute utility calculation; it’s the potential for meaningful interaction, or the ability to meaningfully interact. So if I meet Coma Man, he still has rights; I may not be able to interact with him now, but there’s the potential for that interaction in the future. Same with the passed-out junkie; if he’s going to recover, I may call the police to have him dragged off my lawn, but he still has rights. But if he’s screwed himself up so much that he’ll never be responsive again (no idea if that’s possible, but hypothetically…) then he doesn’t have rights.

    The point about the mold is that we know (God told us) that it’s completely harmless. It doesn’t affect us in any way, except perhaps for the aesthetic displeasure of having green stuff on my shower walls. Sorry I didn’t make that clear in my last example. But since we can’t interact with it, I have no problem fumigating anyway, since I don’t want green shower walls.

    I agree with you, on the other hand, that we shouldn’t cause anything unnecessary pain, given a proper definition of unnecessary. Pointless, random cruelty to animals is bad not mainly because of what it does to the animals, but because the idea of getting pleasure from causing pain is disgusting. It’s not wrong because hurting animals is wrong, but because sadism is wrong on its face.

    DeadHorseBeater: your example about the cube is almost exactly why I hold the position I do. For that matter, since we can’t communicate with it, how do we know it’s not planning to destroy us? If there’s no way of communicating, or if there’s no possibility of mutually beneficial interaction, there’s just no point. It would be like treating my desk as if it had rights.

  • 54 bagofmice // Mar 28, 2005 at 3:03 am

    The question is self-contradictory.

    If you define intelligence as the ability to process information, then you need some evidence that the cube is processing information in order to declare it intelligent.

    However, if by definition (via god) you know the cube is intelligent, then that must mean that evidence that the cube is processing information exists, which is of course ruled out by the definition of the cube in the question.

    Therefore, the question contains two self-contradictory definitions and asks for a resolution, which you cannot provide, as the question is fundamentally flawed.

    There is no internally consistent answer.

  • 55 Chris S. // Mar 28, 2005 at 11:01 am

    I have a question for the people who feel that the potential for interaction is a necessary criterion for according rights to the Cube (or a mold or whatever). Suppose instead that we’re talking about a human being in a little village on the other side of the globe. Suppose you are told by some Wondrous Infallible Machine that, if you do not want to, you will never interact with this person and that you will never interact with anyone who has interacted with this person (and so on, excluding all indirect interactions). (You can’t object that the idea of such a Wondrous Infallible Machine is unrealistic, since the original thought experiment involved a WIM, too.)

    Why would you have any compunction about killing that person – if you could gain something by doing so? If you say that there is something about the *potential* for interaction that is decisive, even though you are 100% sure that no interaction (direct or indirect) will in fact ever take place, then what is it about such a *potential* that could make the difference, if you know that the potential will never be realized? (I’m not saying that there is no good answer; I’m just curious to know what it might be.)

    Note that you can’t say that the mere fact of interaction with some other intelligent creatures is what matters unless you admit that you would accord rights to a Cube if you discovered that it was interacting with other Cubes, even if it is not interacting with you. And I don’t think you would want to grant that.

  • 56 Nathan T. Freeman // Mar 29, 2005 at 10:12 am

    “However, if by definition (via god) you know the cube is intelligent, then that must mean that evidence that the cube is processing information exists, which is of course ruled out by the definition of the cube in the question.”

    The communication with God *is* the evidence. Consider God to be the proxy communicator here.

    Of course, he’s apparently such an ass that he’s decided not to proxy anything except “yup, it’s still thinking.”

    Really, the scenario is useless. The presumption that you can identify consciousness through this mysterious unknown mechanism is what makes the hypothetical pointless. (And I wouldn’t often dare to say that anything Julian has to say is useless.)

    Will’s case of the Replicant is really more of a Turing test scenario, and says “it’s intelligence because we choose to call it that.”

    I honestly can’t figure out what makes either scenario particularly relevant to any kind of human moral decision.

  • 57 Chris S. // Mar 30, 2005 at 2:40 am

    To follow up on Nathan’s point, I think it may be the artificiality and unrealistic quality of the set-up that fools us into drawing certain faulty conclusions. E.g. some of us conclude that the potential for interaction is necessary for the entity to bear rights. Maybe we just can’t get rid of our gut feeling that the only way we could *actually* know if it were conscious would be if the potential for interaction existed. In other words, at a gut level (though not explicitly), we reject the starting premise (the consciousness-detecting machine) and then that leads us to all sorts of intellectual confusions.

  • 58 adam s // Mar 30, 2005 at 10:14 pm

    I don’t think consciousness is a prerequisite for rights. We each as individuals get to decide what has rights and what does not. And those rights are largely based on our own individual morals. Will sees value (imparts his morals) in the block as little tile floor pieces. Maybe I see value in the block as a whole sitting in some river acting as a shoreline buffer. Maybe the next guy sees value in the block as a holy artifact and decides not to smash it, but worship it instead. Consciousness alone should not stop you from doing anything to the rock.

    I like to hassle my vegan friends about their silly beliefs. I spent 6 mos. myself as a vegetarian and it didn’t work for me. There are lots of arguments and reasons not to eat meat.

    In one exchange, I argued that plants can feel pain too.

    I was inspired by this Tool lyric:

    “These are the cries of the carrots, the cries of the carrots! You see, Reverend Maynard, tomorrow is harvest day and to them it is the holocaust.”

    Please, I pleaded, think of the carrots.

    They replied: well, carrots don’t have central nervous systems; they can’t sense what you’re doing to them. They aren’t sentient.

    Then I came across this article:

    http://www.csmonitor.com/2005/0303/p01s03-usgn.html

    See, plants can defend themselves. They have neurotransmitters. They can sense danger and react accordingly. They are still not sentient.

    So the carrots might feel pain. They might sense the impending doom of harvest. Maybe that leads to a new morality for vegans: another being (sentient or not) should not suffer at the hands of a human being. If plants can be shown to feel (scientifically and all), or react to predation, then maybe vegans should think about all those little soybeans they consume. Maybe you shouldn’t kick the head off that dandelion sticking out of the sidewalk crack come May.

    Consciousness is not a prerequisite for having rights.

    Maybe you should carve up the block and serve it to some vegan friends.

    Or perhaps I am mis-using/understanding vegan morality? (Or maybe even this post.)

  • 59 Jadagul // Mar 31, 2005 at 4:02 am

    Fruitarians! I love those guys; they’re fun.

    Sorry it took me so long to get back to y’all. Schoolwork just seemed to pile up this week, for some reason.

    Chris: I come at the interaction question almost from the opposite side. Consider a world which is truly zero-sum–the mercantilist assumption is true. There are only a few sources of value, finite in amount. You can only benefit by taking these sources of value from others; that is, mutual cooperation is impossible because one person’s gain implies an equal loss by another person (I can’t figure out how this could actually be true, but suppose it is). I contend that in such a world there can be no rights. Affording rights to others is a suicide pact; it makes me incapable of acquiring values. Even worse, it leaves potential threats around, because those other people can seize my values and leave me value-less. We have something worse than a Hobbesian society: not only can no one trust anyone else, but even limited cooperation and peace are impossible.

    Now suppose the world isn’t zero-sum, but I still can’t cooperate with other people. Suppose even that there’s no way to gain at all from their efforts–I can only use what I’ve gotten myself. This works out sort of like Rousseau’s state of nature–everyone wanders around, but no one harasses anyone else because there’s no point. In this world I probably have no desire to attack other people, but I also have no reason to give them special treatment. Rights only arise when I have reason to interact with other people peaceably.

    As for the button-that-keeps-me-from-interacting, I think you shouldn’t ever press that button for the same reasons–because cooperation is good and beneficial. The only reason to press the button is that you expect him to be a net negative–probably because he’s already violated rights and committed crimes, and it’s a mechanism of self-defense.

  • 60 adam s // Mar 31, 2005 at 8:47 am

    I missed something, I think. Rights are something we all can assume. The degree to which we resist suppression of those rights is the degree to which we are free. The Replicant is not aware of this because it is not conscious of self; therefore it is not conscious of its rights. The Replicant cannot say: hey, what about my rights?

    Conversely, the block can be sitting there, perfectly aware of itself, loving life, but if it cannot resist an assault (well really it can, but we have tools for that) on its rights, then rights aren’t assumed inherent.

    The degree to which we resist (suppression of those assumed rights) is the degree to which we are free.

    Free the Replicant, do what you will with the block.