Happiness Redux

July 23rd, 2005 · 2 Comments

So, that parentalism article from a little while back provoked a variety of responses.

Brad DeLong, who regards it as a kind of self-evident proof of the insanity of non-utilitarian views, just quotes a long stretch of the piece and reports that his “head explodes” when he reaches my claim that it may sometimes be better to make your own choices in life, even when you might end up happier if others chose for you. Since I’m still not wholly sure whether DeLong is using “happiness” in the colloquial sense (“happiness” is a kind of subjective feeling) or a much broader technical one (“happiness” is the state of the world in which your informed preferences are satisfied), and since in any event I don’t know how to refute an exploding head, I’ll just link to Will Wilkinson’s reply.

Battle Panda wanted an account of why I thought experiential, mental-state happiness wasn’t all that mattered, and when I replied that I’d been influenced by Nozick’s Experience Machine and (in a more roundabout way) Derek Parfit’s writing about rationality (nutshell: if it’s reasonable to care about even your own future self’s mental welfare, you’re projecting a value beyond your own immediate satisfaction, at which point it’s not clear why it must be the uniquely rational one), Panda initially objected that the EM is just an intuition pump that relies on some sort of Luddite instinct.

On reflection, Panda acknowledges in the comments that this isn’t really what Nozick was trying to get at, though it does hint at a point worth making: While the EM is often compared to the Matrix, in many ways the Matrix (or at least one not run by malevolent AIs) is less objectionable than the EM. Nozick’s point, after all, is not that mediated experience is per se bad, or even that it matters a great deal whether the world we encounter is made of matter or computer-generated (a point this excellent David Chalmers paper makes). For in the Matrix as we have it in the movies, you can still, say, really write a novel and really be a certain kind of person, as opposed to having a certain set of pleasant experiences of doing things. You can still shape a life for yourself. And, as important, your relationships with other people can be genuine: The Matrix avatar of a person saying she loves you is connected to a real mind that actually has the feeling, not some mere empty simulacrum. The Matrix might still be unappealing to, say, a scientist who has made it his life’s goal to contribute to our understanding of the deep structure of the universe (assuming telemetry hooked to the outside world can’t provide this from within the Matrix—though, of course, it’s not clear why it couldn’t, except that in a sense you wouldn’t then be living entirely “in” the Matrix so much as encountering reality through a different kind of filter, which scientists already do). In short, then, many of the considerations which weigh against plugging into the EM don’t appear to count against the Matrix.

Ethical Werewolf offers a different kind of response:

When designing public policy, our goal isn’t to give people what it’d be rational for them to pursue. It’s to set things up in the way that maximizes overall well-being.

That’s a possibility I consider in this rather long post, where I suggest that perhaps the God Who Loves You might choose to provide you with happiness-inducing false beliefs—in short, to promote your mental-state welfare rather than even your informed, considered, and rational preferences.

Now, one strategy here might have been to suggest that, insofar as we care about things other than mental-state welfare, those preferences are irrational. I think that’s a slightly more plausible way of going about it, actually, though in the end one that has to fall back on mere table-pounding. We do, after all, recognize plenty of cases in which the right thing to do might be to restrict someone’s irrational or ill-informed choice for their own benefit: Stopping the lovesick teenager from hanging himself in a fit of despair, or staying the hand of someone about to drink a glass of Drano on the false belief that it’s ginger ale. (I think Dworkin or someone calls that the “thanks, I needed that” principle.)

But the Werewolf wants to say what I cite T.M. Scanlon suggesting in my post linked above: That the question of what it’s rational for me to pursue as an individual and what promotes my well-being—and therefore what a just society might pursue on my behalf—are potentially distinct questions with distinct answers. The God Who Loves You might try to do for you that which—rationally and with full information—you would not choose for yourself.

There’s a related claim in this paper’s (I think somewhat muddled) discussion of the Experience Machine:

Stated this way, it is clear that the Machine argument confuses two different claims: supporting mental-state Welfarism is not the same as claiming that everybody would (or should) desire to experience certain mental states. Welfarism refers to an external criterion allowing to discern what provides utility (i.e., experiencing certain mental states), while the latter refers to the content of preferences, which, of course, does not (generally) consist of having a certain mental state.

Both the Werewolf and the paper linked above seem to think that what Nozick has demonstrated is ultimately not that important—indeed, that it’s too obvious even to be worth demonstrating that our RI (rational, informed) desires might not coincide with what promotes our subjectively best mental states. So now let me turn to why I think this is a serious objection to mental-state welfarist versions of utilitarianism, and why the God Who Loves You argument is faulty.

It’s worth noting, first of all, that the early utilitarians certainly did appear to think that it was an important part of their argument that subjective pleasure is the only thing we ought to RI desire. Bentham famously called pleasure and pain the two “sovereign masters” under whose thrall we live. In chapter four of his Utilitarianism, J.S. Mill argues (in the course of an argument now, I think, widely considered fallacious):

No reason can be given why the general happiness is desirable, except that each person, so far as he believes it to be attainable, desires his own happiness. This, however, being a fact, we have not only all the proof which the case admits of, but all which it is possible to require, that happiness is a good: that each person’s happiness is a good to that person, and the general happiness, therefore, a good to the aggregate of all persons. Happiness has made out its title as one of the ends of conduct, and consequently one of the criteria of morality.

Mill goes on to acknowledge that, of course, people have all sorts of other desires—to be virtuous, for example—but that these are ultimately (to the extent that they are RI desires) constitutive of and intelligible under the rubric of “happiness.” To this notion, I think, the EM provides a powerful rebuttal.

But does it matter? Must a mental-state welfarist utilitarian hold, with Mill, that the goodness of these mental states depends on our wanting them? Again, I think a few potent considerations weigh against making happiness divorced from RI desire the highest good for the purposes of developing a theory of justice.

The Werewolf sums up his position as, in effect, the notion that even if a person would RI reject the EM, the God Who Loves Him would surreptitiously plug him into it anyway. But once we’ve divorced mental-state welfare from RI preference, I think we’re forced to go further still—not just to the EM, but to the Bliss Machine, Nozick’s prelude to considering the EM, which just leaves us spending our existence in a kind of orgasmic, blissed-out stupor.

Now, Mill certainly rejected this kind of simple hedonistic reading of utilitarianism, and that rejection is bound up with his rather problematic idea of competent judges. The idea, recall, is that if certain people are “competently” acquainted with both (say) the ecstasy of Ecstasy (or heroin or whatever) and the pleasures of, say, artistic creation or the appreciation of opera, and those people uniformly prefer the latter sort, then the latter pleasures are to be regarded as qualitatively better. Leave aside, for a moment, the fact that it’s highly doubtful that such uniformity is really to be had. What’s important to stress here is that Mill, and by extension those who follow him, have already imported preference into their understanding of what promotes subjective utility. And I don’t find it plausible that the best account of the preference for, say, opera appreciation over orgasmic bliss is really that there’s just more subjective pleasure, though of a qualitatively different sort, involved in the former. Rather, what’s involved in that preference is a kind of value judgment about the relative worthiness of two modes of life. Insofar as someone in a sufficiently blissful stupor had no opportunity to reflect that, say, they were squandering their capacities by not reading Proust or listening to Wagner, I think the mental-state welfarist is hard pressed to seriously aver that this person is experiencing less subjective happiness, from a God Who Loves You perspective, than someone who has the enjoyment of realizing their capacities. If that follows—if Werewolf’s argument commits him not just to plugging the unwitting citizen into the EM, but into the Bliss Machine—I think that is a powerful (indeed, decisive) strike against it, at least intuitively.

But is there something more rigorous or systematic to say against Werewolf’s God Who Loves You welfarism? I think that there is, and it’s bound up with an important distinction between two motivations for utilitarianism Will Kymlicka considers in his excellent Contemporary Political Philosophy. On one account, call it teleological utilitarianism, pleasant mental states are the ultimate intrinsic good, and human minds are just convenient loci of these desiderata. On this view, if there could somehow be free-floating orgasms disconnected from any human personality, we’d want to generate as many of them as possible. On the second account, call it equal-concern utilitarianism, utilitarianism is one way of fleshing out the core intuition Kymlicka believes undergirds every plausible contemporary moral theory: To wit, people matter and (in some sense) matter equally. Kymlicka’s position—and I think this is quite compelling—is that this principle is constitutive of “the moral point of view,” and that the differences between all the various moral theories can be put down to different ways of cashing out what it is about us that “matters” and (what will likely be related) in what sense we are to be treated equally. So, for instance, we can ask: Are we to get equal opportunities, in the sense of “a fair chance in life,” equal resources, equal subjective outcomes, or an equal set of rights that provide a framework within which we pursue our own conception of the good? Which answer seems best will be bound up with our answer to the question of what it is about us that morally matters.

Now, with that distinction in mind, I think we can say something a bit deeper about what might be wrong with Ethical Werewolf’s God Who Loves You welfarism. For even most paternalistic arguments—whatever else might be said against them—rely implicitly on some premise about what you would want or choose if you were thinking clearly, free of certain sorts of irrational bias, fully informed, and so on. This sort of paternalist may deny you what you do want, but only (or so they claim) in the name of what you should or would want under some set of more optimal conditions. But Werewolf grants that the preferences that the God Who Loves You overrides to maximize your mental-state welfare might very well be RI desires. And at this point, what we have is no longer recognizably a theory that answers the question: How do we treat people—that is, conscious and desiring personalities with projects and ends, as opposed to mere loci of intrinsically-good happy-feelings—equally? How do we give equal consideration to their interests? Contra Mill’s account, we choose for them in accordance with a principle that we would (or, anyway, could) rationally reject as a standard of choice for ourselves.

Now, on Kymlicka’s account, a teleological (as opposed to equal-concern) version of utilitarianism doesn’t properly count as a moral theory at all; it’s something more like an aesthetic theory about which states of the world are nicer. That’s probably a bit strong. But I think we can say at least this. First, a theory that doesn’t put front-and-center our status as rational choosing beings capable of developing life plans, thick identities, and values—not merely our status as loci of pain and pleasure—misses the core of what makes us morally valuable in the first instance. Second, the big appeal of a Mill-style version of utilitarianism was that it helped us get around the fact-value gap by reference to facts-about-what-we-value. The goodness of happiness was, in a sense, derivative on the baptismal power of valuing minds that regard that happiness as a good. Sever that link, that grounding in our own valuation of happiness as a desideratum (which is admittedly conceptually difficult, since happiness is probably one of the things actual people do all value), and it’s no longer obvious why mental-state welfare is a better candidate for the summum bonum than any number of other things you might name. This is a version of utilitarianism that has kicked out its means of support, but still wants to stay hanging in midair, like Wile E. Coyote crossing a chasm…until he looks down.

Tags: Moral Philosophy

2 responses so far

  • 1 Will Wilkinson // Jul 25, 2005 at 4:20 pm

    I’d never seen the Carl Dennis poem. So good.

  • 2 Luke Thomas // Jul 25, 2005 at 6:11 pm

    Julian–

    Great post. I have one exceptionally minor (perhaps even perfunctory) point and one question.

    “For in the Matrix as we have it in the movies, you can still, say, really write a novel and really be a certain kind of person, as opposed to having a certain set of pleasant experiences of doing things.”

    I think this is mostly true, but perhaps not entirely correct. I forget the gentleman’s name, but the “Judas” character who had the steak dinner with Agent Smith in the first Matrix movie specifically requested that he a) remember nothing about being freed by Morpheus or about his life in the “real world” and b) be plugged back into the Matrix as someone important or famous in some respect (I forget his specific request). The point is that in either the real world or the Matrix one could “write” a book, but the Matrix appears in many ways to adopt the EM’s more attractive features: taking a person’s interests and writing them onto a tabula rasa computer-generated reality, with no knowledge of the EM or of the self outside the “false” reality. All I am suggesting is that the Matrix might be more like the EM than you indicated in the posting, but again, this is pretty trivial.

    My question is about this statement:
    “First, a theory that doesn’t put front-and-center our status as rational choosing beings capable of developing life plans, thick identities, and values—not merely our status as loci of pain and pleasure—misses the core of what makes us morally valuable in the first instance.”

    I tend to agree, but I’m curious about this statement you made in January 2004 when reviewing Owen Flanagan’s book (confession: I have not read the book):

    “Saying, as Flanagan does, that ethics should concern itself with ‘human flourishing’ does not tell us who should flourish and to what degree. Should we be especially concerned with the worst off in society, or is it only the sum total of benefits and burdens we face, rather than their distribution across persons, that matters? How do we weigh different kinds of flourishing? Perhaps Flanagan is right that there will be no one ‘correct’ way to answer these questions, but this means that they may ultimately have to be decided by nothing more exalted than the relative power of those who disagree.”

    To what extent, if any, does the notion of radical free will Flanagan appears to reject bear down on adopting moral theories that underscore, accept and promote our unique individual agency? In other words, what is the point of adopting a moral theory that “put[s] front-and-center our status as rational choosing beings” if such a construction of that type of self is incoherent to begin with? Does Flanagan’s repudiation of the “manifest image” and promotion of the nebulously-defined “human flourishing” allow us to NOT put priorities on the aforementioned moral theories you believe to be integral?

    If I am missing something, please feel free to correct my thinking. Thanks.