
The Starry Heavens Above and the Moral Law Within (the Dorsolateral Prefrontal Cortex)

August 11th, 2006 · 6 Comments

I’ve been reading with great interest the flurry of work produced in the last five or six years by a burgeoning group of “Social Intuitionist” scientists and philosophers who are using cutting-edge findings from neuroscience and innovative fMRI experimental techniques to probe the neural bases of human moral reasoning. One of the most interesting of these is a young-turk Harvard assistant prof named Joshua Greene, whom I’d swear I first read about years ago in a profile in the Village Voice or the New York Times Magazine, though I can’t seem to track it down now. Greene’s recent work has centered on a series of experiments in which he hooks subjects up to brain scanners to see what’s going on in their heads as they puzzle through hoary Ethics 101 thought experiments like the trolley problem (in its many variants). In particular, he focuses on the two most famous versions of the problem. First, if a trolley is heading down a track on a path to kill five people, should you flip a switch that will place it on an alternate track, where it will kill only one? Second, if a trolley is again headed for those five people (with no alternate track), should you push a fat man into the path of the trolley, killing him but derailing the car and again saving the five? (Most people say “yes” in the first case and “no” in the second, and there’s a huge amount of debate, involving many more examples with many more trolleys, over whether this is a defensible combination of positions.)

The short version of Greene’s conclusion is that people tend to make deontological judgments quickly, using emotional parts of the brain. People who come to utilitarian conclusions (like the minority who think you should sacrifice the fat man to save the other five) take longer to answer, and engage the “cognitive” or calculating parts of the brain. All well and good so far as it goes.

Yet in some of his writings, Greene suggests that this weighs against deontological ethics, indicating that deontology is just a kind of rationalization of unreconstructed emotional prejudices, seeking (as Nietzsche said of Kant) “to prove, in a way that would dumbfound the common man, that the common man was right.” He recognizes, of course, that affect is involved at some level either way (a psychopath will not see any particular point to saving the five either), but in the cases where people give the classic deontological response, it seems like unreconstructed affect.

A couple of methodological quibbles and one broader objection. First, Greene seems to want to define “utilitarianism” and “deontology” in terms of the reasoning practices of ordinary people rather than the doctrines of philosophers, which seems not entirely fair. Often policy professionals and intellectuals will have perfectly well-reasoned arguments for views that most people hold for ill-considered reasons, and I doubt that ought to be held against them. Maggie Gallagher is wrong about gay marriage, but the reason she’s wrong isn’t that many people who oppose gay marriage are just plain homophobes; it’s that she’s almost certainly mistaken about the empirical cultural effects of recognizing gay unions.

Second, one of the examples Greene uses to illustrate the contrast between utilitarian and deontic thinking is the “crying baby” case: You are in hiding from Nazi soldiers with several other people, including your own infant, who begins crying. If you stifle his cries, you will smother and kill him, but if you do not, the soldiers will find you and kill you all, including the infant. Now, I have deontological sympathies, and would probably give the “deontological” answer in most “kill one to save several” cases, but this one is a no-brainer. There are many varieties of deontology, obviously, but if you tend to hold deontic views on “separateness of persons” grounds, the objection to aggregation is an objection to trading off different people’s welfare, and no such tradeoff is at issue when the kid is going to die either way.

The broader objection is that this mode of analysis focuses on the moment of decision without looking at the background moral reasoning that gave rise to the rules people are acting on, in a way that will more or less necessarily make utilitarians seem more “cognitive.” Consider blackjack. Let’s say I’ve spent many hours working out the probabilities associated with different hands and distilled a series of rules for play, eventually internalizing them to the point that they’re almost second nature. I see a pair of kings and instantly, reflexively wave my hand to signal “stick.” Or I see the dealer showing a six, feel an immediate rush of glee (he’s going to bust!), and do the same, even if I’m holding (say) a 14. Now, self-evidently, if you stuck me under a brain scanner at the table, you’d see something very different from what you’d see if I were attempting to calculate the probabilities on the fly. But this tells us nothing interesting about how “emotional” or “impulsive” the rules underlying my play are: in fact, they were the product of a highly deliberate and rational thought process.
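
To make the structure of the analogy explicit, here’s a minimal sketch in Python. The thresholds below are simplified stand-ins I’ve made up, not a real basic-strategy chart; the point is only that all the deliberate calculation happens once, up front, and what a scanner would catch at the table is nothing but an instant lookup.

    # A toy model of internalized blackjack strategy. The rules are
    # invented stand-ins, not real basic strategy.

    def build_strategy_table():
        """The 'many hours' of background reasoning, done once, offline."""
        table = {}
        for my_total in range(4, 22):
            for dealer_up in range(2, 12):  # 11 stands in for an ace
                # Toy rule distilled from (imagined) probability work:
                # stick on 17+, or on 12+ against a weak dealer upcard.
                if my_total >= 17 or (my_total >= 12 and dealer_up <= 6):
                    table[(my_total, dealer_up)] = "stick"
                else:
                    table[(my_total, dealer_up)] = "hit"
        return table

    STRATEGY = build_strategy_table()

    def play(my_total, dealer_up):
        """At the table: no calculation at all, just a reflexive lookup."""
        return STRATEGY[(my_total, dealer_up)]

    print(play(14, 6))  # "stick" -- instant, though the rule was rationally derived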

You can probably see where I’m going here: the structure of consequentialist thought means that it’s always going to require a kind of calculation, however quick and easy, in weighing costs and benefits in each particular case. Deontic thinking, however elaborate the background reasoning supporting it, is more apt to be reducible to simple rules where the numbers on the different sides don’t count. So what looks like an intriguing result about deontic or consequentialist “intuitions” broadly speaking may only reflect a structural difference in the application of the different principles yielded by those intuitions. Greene is clearly super sharp, so I wouldn’t be surprised if he deals with this kind of objection in a paper I haven’t gotten to yet, but it seems like a confounding factor you’d need to address.
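
The asymmetry can be sketched in the same toy style. The actions and welfare numbers here are invented for illustration; both procedures may rest on elaborate background reasoning, but only one of them has to do arithmetic at decision time.

    # Illustrative only: invented actions and welfare numbers.

    def consequentialist_choice(options):
        """options maps each action to its welfare effects on each person.
        A per-case calculation is always required, however quick."""
        return max(options, key=lambda action: sum(options[action]))

    def deontic_choice(options):
        """A precomputed side-constraint: the numbers on each side don't count."""
        for action in options:
            if not action.startswith("kill_"):  # toy stand-in for a deontic rule
                return action
        return None

    footbridge = {
        "kill_one_by_pushing": [-1, +1, +1, +1, +1, +1],  # fat man dies, five live
        "do_nothing":          [+1, -1, -1, -1, -1, -1],  # fat man lives, five die
    }
    print(consequentialist_choice(footbridge))  # weighs the numbers every time
    print(deontic_choice(footbridge))           # matches a rule; no arithmetic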

Tags: Moral Philosophy

6 responses so far ↓

  • 1 Will Wilkinson // Aug 11, 2006 at 4:39 pm

    Julian, I met Greene at an AEI thing, and had many of the same thoughts, which I shared with him, and I don’t remember what he said. (I wasn’t impressed.)

    Kant, I’m sure, wouldn’t be impressed with brainscans of heteronomous motivation. Of course, the mystery for people looking for Kantian normative bindingness is how autonomous motivation would look any different. The mystery for utilitarians is to find computations involving their peculiar philosophic conception of utility. Neither thing is happening.

    This stuff is relevant in only an ought-implies-can sort of way. My sense is that the brainscan stuff refutes hard deontology and hard utilitarianism. The motivational and cognitive requirements of either theory are impossible for us to satisfy, and therefore not obligatory.

    I think actual, effective, non-utopian morality (the sort of thing that actually does enable peaceful cooperative social order) involves culturally relative, but psychologically strict-seeming, norms that engender a sense of categorical deontic restriction and obligation. But different contexts elicit different norms. Paradoxes arise when we can’t resolve the context. Sometimes no norm is elicited at all. In the absence of a default deontic-seeming norm that internalizes and automates the calculation, on-the-spot prudential or consequentialist calculation may be called for.

    My take reading the Greene, Haidt, etc., stuff is that the fight between Kant and Bentham is sort of irrelevant. The laurels go to a Darwinized version of Hume and Smith.

  • 2 Will Wilkinson // Aug 11, 2006 at 4:43 pm

    By the way, Stephen Stich and friends are doing the best naturalized moral psychology…

    http://www.rci.rutgers.edu/~stich/Publications/publications2.htm

  • 3 James // Aug 11, 2006 at 6:15 pm

    A related issue here is how rapid judgements also involve rapid perceptions, and how pre-existing assumptions, prejudices, training, etc. affect those perceptions. The classic case is the policeman who shoots the suspect because he thought there was a gun in his hand, but it turns out to be a wallet, or some other harmless object that might not even look much like a gun. The perception of the gun is based on other perceptions and instant judgements that are often incorrect.

    Training can help someone make the snap judgement better, and it’s probable that “better” in this case means “make it so that it would conform to the decision that would be made if more time were available for both perception and reflection.”

    I’ve heard a quote, “You hope that people will rise to the occasion, but usually they go to the level of their training.”

    The trolley case can vary a lot, depending on whether you know any of the people involved, or, for example, if you are prejudiced against fat men. In real life situations, most people are simply going to do nothing.

  • 4 Glen // Aug 11, 2006 at 7:14 pm

    “The structure of consequentialist thought means that it’s always going to require a kind of calculation, however quick and easy, in weighing costs and benefits in each particular case.”

    I think that’s probably true only of direct forms of consequentialism (like act utilitarianism). Indirect consequentialism can recommend rules that people internalize in much the same way you internalize your blackjack strategy. Some of the rules may even be internalized genetically. Perhaps the “gut reaction” responses that Greene documents actually reflect a kind of proto-utilitarian calculus based on more typical situations (as opposed to constructed thought experiments). Where does this lead? I suspect something like Will’s response above, in which he awards the laurels to Darwin-cum-Hume-and-Smith.

  • 5 Jadagul // Aug 13, 2006 at 1:09 am

    I agree with Glen and Will, I think. I probably would have to self-describe as a utilitarian, or at least a consequentialist: I believe that moral rules are just a codification of “what we need to do to live together without killing each other.” If you look at the temporal chunk of the Ten Commandments, it’s pretty good advice both prudentially and utilitarian-ly.

    But my particular brand of consequentialism says “Things go better if we follow a set of general rules than if we try to evaluate each situation on its own merits.” So I believe in a pretty strong form of rights because it makes stuff turn out better, because I’m a consequentialist. Which brings us back to Glen’s point: you compute general rules beforehand, and then don’t have to do too much analysis when you come to a specific set of circumstances. It might look like a gut reaction, or pure deontology, but it isn’t really.

    It occurs to me as I write this that Rand made a similar point: your emotions are often an expression of judgments that you’ve made intellectually and then internalized.

  • 6 Anonymous // Aug 23, 2006 at 10:28 am

    > He recognizes, of course, that affect is involved at some level either way (a psychopath will not see any particular point to saving the five either), but in the cases where people give the classic deontological response, it seems like unreconstructed affect.

    I think many advocates of reason-based ethics would say that they’re not counting on psychopaths being persuaded to change their behavior based on abstract moral theories, but that you need reason to prove that what the psychopath is doing is wrong. The point of ethics is not simply to persuade others how to act but also to have a code of how to act oneself.

    A recent UCLA PhD project had ideologues and nonideologues thinking about politics in an MRI scanner, and found that they did use different parts of the brain, similar to this experiment.