
When Is a Side Constraint Not a Side Constraint?

October 2nd, 2006 · 1 Comment

One more thought from the Dworkin panel mentioned in the post below. Apparently, in Dworkin’s new book, he lays out a case, grounded in human dignity, for a strong presumption against the use of torture—so strong that it should be thought of as a kind of “trump,” obviating any kind of cost-benefit calculation on a case-by-case basis. He does, however, consider an extreme version of the classic “ticking time bomb” scenario, where there’s a nuclear device set to blow in Manhattan, we’re sure we’ve got the guy who planted it, and we’re sure that torture, and only torture, will get him to divulge its location, averting millions of deaths. And Dworkin acknowledges that in that type of very extreme situation, using torture might well be the right thing to do, but that these are not the kinds of situations we’re dealing with in the current torture debate, and so the dignity-based proscription on torture should stand.

Well, political philosopher Bill Galston (late of the University of Maryland, now at Brookings) was there, and pressed the point a bit, saying (I’m paraphrasing here): Well, you said you didn’t want to apply these sorts of consequentialist balancing tests, in effect defending a side-constraint sort of view, but here you are doing just that sort of balancing, so that’s also a component of your view.

Dworkin just sort of acknowledged this, and when I asked Galston about the question later, he said it wasn’t supposed to entail any further point like: So doesn’t even the part of the view that looks deontological really have a concealed weighted balancing or maximization structure after all? But I do want to press this a bit, because it’s a familiar sort of argument, both in the specific context of debates about torture, and in arguments about rights-based ethical views in general.

The moves in this particular game should look familiar. One person asserts that people have a certain right, and that this right acts as a kind of trump, placing limits on what we can do even in pursuit of producing better net outcomes. Someone else says, “Really?” and goes on to lay out some scenario where violating that constraint somehow produces an enormously better net outcome, often in the form of averting some monstrous harm, like a nuclear explosion in Manhattan. And if the first guy is sane, he backs off and acknowledges that, after all, the side-constraint can’t really be absolute in some fiat justitia ruat caelum sense.

But this turns out to be a problematic concession. An absolute side-constraint may generate some absurd and repugnant entailments (e.g. that you have to let millions die rather than sully your hands by inflicting pain on someone), but it’s at least internally coherent if you’re hewing to a view that makes morality first-personal and grounded in individual dignity, in the sense that what matters crucially is what you do as opposed to “what happens in the world,” and such that the focus is on how you treat each particular person and show respect for them as separate individuals. The anti-aggregative aspect of this sort of view is, as usual, well and concisely put by Robert Nozick:

There is no social entity with a good that undergoes some sacrifice for its own good. There are only individual people, different individual people, with their own individual lives. Using one of these people for the benefit of others, uses him and benefits the others. Nothing more.

The problem comes when you want to tack on a consequentialist escape hatch to deal with these “extreme situations.” Because someone can reasonably ask, first, where the line is supposed to be. If you torture to save a million, how about 900,000? How about 100,000, or 100? And wherever you put the line, it becomes puzzling what’s going on at that inflection point. The intuitive way to interpret it is that if it’s only when you get to 1,000 lives saved that you’re going to use torture, the weight you’ve put on not-torturing is equivalent to 1,000 lives, and past that point the scales tip in favor of torturing. And if that’s what’s going on, then it’s easy to reinterpret what had seemed to be a deontic, side-constraints view as really being consequentialist after all, with a huge thumb on the scale for certain kinds of consequences, like torture. But once you’re viewing it that way, it turns out you’ve got a third-personal and aggregative perspective after all, and it becomes very mysterious indeed why the harm you inflict on a single person—and maybe quite a bad person—is supposed to have this incredible weight, more than the worse harm that will otherwise befall many hundreds of other people. Rather, it starts to look as though the only fair and rational thing to do is to weight everyone equally, so that ceteris paribus you use torture any time the expected value of doing so is positive, with the pain inflicted balanced against the suffering averted, adjusted by the probability that torture will actually avert it. If you want to go all rule utilitarian, there are various considerations you could bring to bear to get a slightly stronger presumption against torture, but probably not that strong a one. And this is why torture apologists are forever trotting out the old ticking time bomb, which on its face might seem like a red herring given its dissimilarity to the actual scenarios we’re facing: It gets the foot in the door, after which (as the old joke has it) we’re just haggling over the price.
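To make the bookkeeping in that reinterpretation explicit (a rough sketch, and the notation is mine, not Dworkin’s or Galston’s), the threshold reading amounts to something like:

\[
\text{torture is permissible} \iff p \cdot N > W
\]

where \(p\) is the probability that torture actually averts the harm, \(N\) is the number of lives thereby saved, and \(W\) is the fixed weight attached to not-torturing (the equivalent of 1,000 lives, in the example above). The straight expected-value view is then just the limiting case where \(W\) shrinks to the disvalue of the pain actually inflicted on the one person.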

So the trick for those of us who want to assert that, in ordinary cases, the reasons not to torture run deeper than an accounting exercise is: How to resist the repugnant conclusion in these extreme cases without making the aggregative logic we’re prepared to apply there the rule (or basis for the rule) rather than the exception? One way might be to ditch, at least for certain kinds of questions, the bathroom scale model of moral deliberation, where reasons or considerations are “weighed against” and “outweigh” each other. For plenty of cases, both moral and prudential, this is a perfectly apt metaphor. If I’m trying to (nonmorally) decide whether to spend a quiet night at home reading or go out carousing with friends, while these are certainly very different qualitative kinds of “enjoyment,” my decision is probably mostly going to involve constructing some rough common unit of “how much fun I’ll have” in each case, balanced by how I’m apt to feel in the morning if I’ve got to wake up early, and seeing which option gets the higher score. But this is not how we make different kinds of major life decisions—or at least not the whole story about how we make them. Because those sorts of decisions will sometimes involve, as it were, choosing the units themselves, or the “scales” in which the reasons we’ll later act on are to be weighed.

Now, it might still be the case, as in our imagined ticking-nuke scenario, that which scale you employ will have something to do with how the results come out on each scale. But it doesn’t have to follow that one or the other is the “real” or “fundamental” mode of evaluation, and that we’re setting the “weight” of committing torture equal to that of saving X number of innocents. That doesn’t settle the question, in any detail, of how exactly we’re deciding which scale to use—my intuition is that there isn’t necessarily going to be any systematic decision procedure, some kind of meta-scale. But it is a way of looking at the problem that recognizes a plurality of incommensurable values, and may do something to soften the charge of inconsistency such recognition provokes.

Tags: Moral Philosophy

1 response so far ↓

  • 1 Jadagul // Oct 2, 2006 at 3:20 pm

    Always liked the Nozick quote; back when I debated in high school I could recite it from memory.

    For the ticking-time-bomb scenario, I think Ayn Rand, actually, was close to the right answer. If you try to use any system for something it wasn’t designed for, it’s going to fall apart. If you apply Newtonian mechanics to subatomic high-energy interactions, you’ll get the wrong answer. If you try to build a house with the same carpentry techniques you use for a bookshelf, it’ll fall apart. There are enough conditions tacked on to the front of the time-bomb hypothetical to make it almost totally irrelevant to real life, which is where we need morality to operate.

    A large part of the reason for side-constraints—especially constraints on what we can do to bad people—comes from our lack of knowledge: we don’t know that we have the right guy, we don’t know that torturing him will help, we don’t know that deciding to torture him won’t lead us to torture in less acceptable circumstances, we don’t know that the threat is actually valid. If you abstract away all uncertainty you can take more drastic actions, because you’re more sure they’ll work. But that doesn’t help us.