Happiness Will Only Happen When It Can

June 8th, 2005 · 7 Comments

I was having an argument with Will Wilkinson about whether the conflict between utilitarianism and most people's intuitions is a decisive objection to the theory, and since he's now blogged about it, I figure I may as well carry it over here. Will writes:

[Utilitarianism] is inconsistent with our “sense of justice” and thus utilitarian principles will not gain our willing compliance, and will therefore fail to establish a stable social order. The utilitarian can retort that motivational dispositions are a constraint that utilitarianism must take into account. But then it seems that the principles of utility basically end up mirroring the principles that underlie actual human motivation, which will be doing all the work. At which point it seems otiose to say that what we’re trying to do with policy is maximize happiness, when it would just be more accurate to say that we’re trying to come up with principles people take themselves to have a reason to endorse, where those reasons are only sometimes reasons of utility. The fact that the dopaminergic system or whatever lights up whenever we do whatever we do has nothing interesting to do with what we take to be valuable, or what we should be shooting for socially.

Now, I think conflicts between intuition and what utilitarianism seems to require us to do can be useful as spotlights on what might be deeper problems with the theory (e.g., the famous example of carving up a healthy indigent for organs). But this is a different sort of argument, pitched at the policy level. The problem here is that, even if particular individuals can evidently be convinced of the correctness of utilitarian conclusions, those conclusions cut sufficiently against the grain of most people's mores that a policy embedding pure utility maximization will end up being self-defeating, since people won't go along.

That doesn’t really strike me as an objection at all, though; it’s one more datum to take into account when deciding what policy will actually be maximizing, as Will suggests a utilitarian will reply. Look at it this way: if our moral dispositions were totally fixed, there would be little point to moral theorizing of any sort; people would just do what they’re gonna do. If, on the other hand, people’s moral responses are at least somewhat malleable, as they self-evidently are (large numbers of people become convinced that they ought not eat meat; intuitions vary cross-culturally and within the same [“same”?] culture over time), then the utilitarian has a theory with genuine content and a reasonable enough rejoinder. She can say: “Fine, people can’t act as pure utilitarians all the time. Assuming, though, that their dispositions to comply, over the long run, are at least somewhat underdetermined by prior conditioning and hardwiring, we seek to get them to adopt the combination of rules, from the meta-set of rule sets they can comply with, that maximizes utility. In other words, if we can bring people around to finding either of two rules just (perhaps by emphasizing different features that activate different intuitions), then we pick the one that maximizes happiness.”

I’m still not a utilitarian, for various other reasons, but I don’t think this objection holds up too well. Even if all we’re doing is rendering our intuitions coherent and consistent in some formal way (the utilitarian will say this is what she’s doing), the process of working out the kinks and contradictions and conflicts will involve at least some deviations from our gut-level reactions. This is no objection unless we’re prepared to conclude that the only valid moral reasoning amounts to a catalog of reflexes.

Tags: Moral Philosophy

7 responses so far ↓

  • 1 Will Wilkinson // Jun 9, 2005 at 9:53 am

    What I keep wanting to hear is why I ought to grant that utilitarianism is even a plausible candidate for an acceptable moral theory. Utilitarians seem oddly attached to this very strange theory, and seem to think that step one is: assume utilitarianism. Step two is: show that utilitarianism is not defeated by objections. But this is just so wrong. It is such a bizarre theory (maximize the quantity of a certain pattern of brain activation?!) that we’re going to need some kind of super-duper argument to show that opponents of utilitarianism have the burden of proof. Instead, given the fact that utilitarianism fits the pre-theoretical conception of morality so horrendously, utilitarians are under a massive burden to show why it ought even to be considered at all. It would be more honest for the utilitarian to say that morality is corrupt, and that it ought to be replaced by utilitarianism, not that utilitarianism is a specification of the content of morality. I don’t know how one would make this argument persuasively, either, but it would at least not do so much violence to the common-sense content of morality.

  • 2 Will Wilkinson // Jun 9, 2005 at 9:59 am

    Additionally, if there are multiple sets of compliance-compatible rules, I vote for the one that is most likely to generate interesting literature. Assuming that most people are pretty happy under any set of rules compatible with a good sense of justice, I’d much rather top it off with good art than with a little more happiness.

  • 3 Will Wilkinson // Jun 9, 2005 at 10:05 am

    “That doesn’t really strike me as an objection at all though; it’s one more datum to take into account when deciding what policy will actually be maximizing, as Will suggests a utilitarian will reply.”

    Again, why accept at the outset that “maximizing” is what we ought to be thinking about when we’re making policy choices? There is no reason to do this.

  • 4 Grant Gould // Jun 9, 2005 at 10:31 am

    Someone needs to take up the profession of moral theory engineering.

    For instance, if I start building clones and sticking wires into their brains that cause them to experience “utility” when I do, and I can do this enough, my utility becomes the moral imperative of every utilitarian on the planet and I have enslaved an entire philosophical school to my personal whim! There’s a ton of money to be made this way.

    There are good engineering solutions to all moral theories, I’m sure.

  • 5 Julian Sanchez // Jun 9, 2005 at 10:35 am

    First, fair enough, but this seems like a distinct argument. (This one is about where the burden lies given the theory’s initial intuitive implausibility; the original one I took to be about whether intuitional barriers to *compliance* constitute a defeating objection.) And second, I think you’re underestimating the extent to which people’s intuitions do tend to be utilitarian. It wouldn’t have become the dominant moral view for such a long period if large numbers of smart people had found it crazy on its face. And most of the early utilitarian arguments I recall were of the form: “Isn’t this really what you already believe? And to the extent that your intuitions seem anti-utilitarian, here are a bunch of reasons why the apparent deviation can be accounted for by a deeper utilitarian principle.” I think it’s wrong, but in many ways quite appealing and seductive.

  • 6 Will Wilkinson // Jun 9, 2005 at 11:50 am

    Grant, great idea! And that’s a great thought experiment. Suppose GG has a billion clones in a vast lunar warehouse wired to feel everything he feels. He demands a blow job. Do you have a reason to comply with his fickle but massively utility-conducive wishes?

    No.

  • 7 Blar // Jun 14, 2005 at 6:40 am

    Richard [1] (following R.M. Hare) makes an argument similar to Julian’s about the importance of distinguishing between the critical level, which addresses the criteria of goodness/rightness, and the intuitive or practical level, which deals with how people actually think about what to do. Utilitarianism is the former kind of theory, but Will seems to think that the term “morality” should be reserved for the latter kind of theory. That is not in keeping with standard usage. Will also seems to think that the “proper” response when these two levels don’t match up is to consider the intuitive level corrupt and to try to replace it with the critical-level theory (though his own response is to abandon that critical-level theory). As Richard makes clear, trying to implement the critical-level theory at the intuitive level would lead to lots of trouble (by the criteria of the critical theory itself). A much more sensible approach is to accept people for what they are and to seek incremental improvements in the direction that the critical theory points, taking advantage of people’s actual psychological makeup (including the ability to learn and develop new habits) and of the institutions that our society has.

    A thorough argument in favor of utilitarianism is beyond the scope of a blog comment, but I’ll at least point to arguments that go beyond “assume utilitarianism.” Arguments in favor of maximizing are often based on ideas about what morality is and what properties it should have (think fairness/agent-neutrality/”each to count for one and none for more than one”). A basic argument behind identifying happiness as the thing-to-maximize is that 1) everything that is good is good for someone and 2) the only things that are good for someone are that person’s experiences (although some utilitarians, like Richard, think that the thing-to-maximize is not happiness but something else, like desire-fulfillment). You can argue with these claims, but to denigrate happiness as “a certain pattern of brain activation” is just as silly as denigrating writing as certain patterns of ink or pixels (or denigrating anything else as mere patterns of matter).

    [1] http://pixnaps.blogspot.com/2005/06/indirect-utilitarianism.html