No, no, I’m not referring to Kant’s dictum that all rational beings are obligated to have anal sex, but to this post by Brian Weatherson at Crooked Timber. He’s toying with a version of consequentialism that he believes evades some familiar intuitive objections to utilitarianism:
It’s a form of consequentialism, so in general it says the better actions are those that make for better worlds. (I fudge the question of whether we should maximise actual goodness in the world, or expected goodness according to our actual beliefs, or expected goodness according to rational beliefs given our evidence. I lean towards the last, but it’s a tricky question.) What’s distinctive is how we say which worlds are better: w1 is better than w2 iff behind the veil of ignorance we’d prefer to be in w1 to w2.
What I like about the theory is that it avoids so many of the standard counterexamples to consequentialism. We would prefer to live in a world where a doctor doesn’t kill a patient to harvest her organs, even if that means we’re at risk of being one of the people who are not saved. Or I think we would prefer that, I could be wrong. But I think our intuition that the doctor’s action is wrong is only as strong as our preference for not being in that world.
Weatherson ends up shying away from this theory on the basis of a version of the “moral saints” objection. But it strikes me that there are at least a couple of more basic problems with this approach.
First, I don’t see how one’s entitled to the pervasive “we” here. The appeal of Rawls’s veil of ignorance is that the constraints of the veil embed sufficient impartiality in the theory that you can generate moral principles from the non-moral, self-interested rationality of the parties behind the veil. The parties in the Original Position are characterized in a sufficiently stripped-down way (a problem of its own, but leave that…) that you’re guaranteed to get moral convergence. Letting in moral intuitions behind the veil seems to defeat the purpose here.
Weatherson sort of glosses this by saying that “our intuition that the doctor’s action is wrong is only as strong as our preference for not being in that world.” But why do we prefer not to be in that world? If it’s because, say, we think people who knew they were in such a world would be subject to constant low-grade background fear of sacrificing, then haven’t we just found another way of saying that, taken in context, allowing sacrifices and organ redistribution would be straightforwardly non-utility-maximizing? If so, I don’t see how this ends up being any different from one garden-variety utilitarian response, one that, for my money, misses the heart of the objection. If, on the other hand, our squeamishness is genuinely moral, the veil is redundant. Of course you can generate egalitarian conclusions if you specify that the parties behind the veil are all convinced socialists. But that rather begs the question, and assumes away the real-world moral disagreement that leads us to turn to theory to try to settle questions in the first place.
A similar but distinct point here is one Nozick made somewhere, I think: You can typically jigger up a rule that’s teleological in form and yields a deontological result if you’re willing to count certain aspects of conduct as part of the consequence you’re evaluating. But this is just a way of putting deontology in consequentialist drag. Why bother? Even then, there are some aspects of side-constraints reasoning that are hard to capture in this form. Say that in an attempt to dodge these sorts of objections, you specify that part of what counts as a bad consequence is the use of other persons as “mere means.” As long as you’re still in a maximizing framework, that implies that you ought to sacrifice one innocent person if you could thereby prevent two other people from sacrificing two distinct innocent people for organ-redistribution.
Surely this misses the point. Pace Weatherson, I think we get squeamish about these examples because of a sense that morality has things to say about who we are and what we do, not merely about the desirability of what happens (even when who other people are and what they do are included in an account of “what happens”).