I was having an argument with Will about whether the conflict between utilitarianism and most people’s intuitions is a decisive objection to the theory, and since he’s now blogged about it, I figure I may as well carry it over here. He writes:
[Utilitarianism] is inconsistent with our “sense of justice” and thus utilitarian principles will not gain our willing compliance, and will therefore fail to establish a stable social order. The utilitarian can retort that motivational dispositions are a constraint that utilitarianism must take into account. But then it seems that the principles of utility basically end up mirroring the principles that underlie actual human motivation, which will be doing all the work. At which point it seems otiose to say that what we’re trying to do with policy is maximize happiness, when it would just be more accurate to say that we’re trying to come up with principles people take themselves to have a reason to endorse, where those reasons are only sometimes reasons of utility. The fact that the dopaminergic system or whatever lights up whenever we do whatever we do has nothing interesting to do with what we take to be valuable, or what we should be shooting for socially.
Now, I think conflicts between intuition and what utilitarianism seems to require can be useful spotlights on what might be deeper problems with the theory (e.g. the famous examples about carving up a healthy indigent for organs). But this is a different sort of argument, pitched at the policy level. The problem here is that, even if particular individuals can evidently be convinced of the correctness of utilitarian conclusions, those conclusions cut sufficiently against the grain of most people’s mores that a policy embodying pure utility maximization will end up being self-defeating, since people won’t go along.
That doesn’t really strike me as an objection at all, though; it’s one more datum to take into account when deciding which policy will actually be utility-maximizing, as Will suggests a utilitarian will reply. Look at it this way: If our moral dispositions were totally fixed, there would be little point to moral theorizing of any sort: people would just do what they’re gonna do. If, on the other hand, people’s moral responses are at least somewhat malleable, as they self-evidently are (large numbers of people become convinced that they ought not eat meat; intuitions vary cross-culturally and within the same [“same”?] culture over time), then the utilitarian has a theory with genuine content and a reasonable enough rejoinder. She can say: “Fine, people can’t act as pure utilitarians all the time. Assuming, though, that their dispositions to comply, over the long run, are at least somewhat underdetermined by prior conditioning and hardwiring, we seek to get them to adopt the combination of rules, from the meta-set of rule sets they can comply with, that maximizes utility. In other words, if we can bring people around to finding either of two rules just—perhaps by emphasizing different features activating different intuitions—then we pick the one that maximizes happiness.”

I’m still not a utilitarian, for various other reasons, but I don’t think this objection holds up very well. Even if all we’re doing is rendering our intuitions coherent and consistent in some formal way (the utilitarian will say this is what she’s doing), the process of working out the kinks and contradictions and conflicts will involve at least some deviations from our gut-level reactions. This is no objection unless we’re prepared to conclude that the only valid moral reasoning amounts to a catalog of reflexes.