I’ve been reading with great interest the flurry of work produced in the last five or six years by a burgeoning group of “Social Intuitionist” scientists and philosophers who are using cutting-edge findings from neuroscience and innovative fMRI experimental techniques to probe the neural bases of human moral reasoning. One of the most interesting of these is a young-turk Harvard assistant prof named Joshua Greene, who I’d swear I first read about years ago in a profile in the Village Voice or the New York Times Magazine, though I can’t seem to track it down now. Greene’s recent work has centered on a series of experiments in which he hooks subjects up to brain scanners to see what’s going on in their heads as they puzzle through hoary Ethics 101 thought experiments like the trolley problem (in its many variants). In particular, he focuses on the two most famous versions of the problem. First, if a trolley is heading down a track on a path to kill five people, should you flip a switch that will place it on an alternate track, where it will kill only one? Second, if a trolley is again headed for those five people (with no alternate track), should you push a fat man in the way of the trolley, killing him, but derailing the car and again saving the five? (Most people say “yes” in the first case and “no” in the second, and there’s a huge amount of debate, involving many more examples with many more trolleys, as to whether this is a defensible combination of positions.)
The short version of Greene’s conclusion is that people tend to make deontological judgments quickly, using emotional parts of the brain. People who come to utilitarian conclusions (like the minority who think you should sacrifice the fat man to save the other five) take longer to answer, and engage the “cognitive” or calculating parts of the brain. All well and good so far as it goes.
Yet in some of his writings, Greene suggests that this weighs against deontological ethics, indicating that deontology is just a kind of rationalization of unreconstructed emotional prejudices, seeking (as Nietzsche said of Kant) “to prove, in a way that would dumbfound the common man, that the common man was right.” He recognizes, of course, that affect is involved at some level either way—a psychopath will not see any particular point to saving the five either—but in the cases where people give the classic deontological response, it seems like unreconstructed affect.
A couple of methodological quibbles and one broader objection. First, Greene seems to want to define “utilitarianism” and “deontology” in terms of the reasoning practices of ordinary people rather than the doctrines of philosophers, which seems not entirely fair. Often policy professionals and intellectuals will have perfectly well-reasoned arguments for views that most people hold for ill-considered reasons, and I doubt that ought to be held against them. Maggie Gallagher is wrong about gay marriage, but the reason she’s wrong isn’t that many people who oppose gay marriage are just plain homophobes; it’s that she’s almost certainly mistaken about the empirical cultural effects of recognizing gay unions. Second, one of the examples Greene uses to illustrate the contrast between utilitarian and deontic thinking is the “crying baby” case: You are in hiding from Nazi soldiers with several other people, including your own infant, who begins crying. If you stifle his cries, you will smother and kill him, but if you do not, the soldiers will find you and kill you all, including the infant. Now, I have deontological sympathies, and would probably generally give the “deontological” answer in “kill one to save several” cases, but this one is a no-brainer. There are many varieties of deontology, obviously, but if you tend to hold deontic views on “separateness of persons” grounds, the objection to aggregation is going to implicate the tradeoff between different people’s welfare, which isn’t at issue when the kid’s going to die either way.
The broader objection is that this mode of analysis focuses on the moment of decision without looking at the background moral reasoning that gave rise to the rules people are acting on, in a way that will more-or-less necessarily make utilitarians seem more “cognitive.” Consider blackjack. Let’s say I’ve spent many hours working out the probabilities associated with different hands and distilled a series of rules for play, eventually internalizing them to the point that they’re almost second-nature. I see a pair of kings and instantly, reflexively wave my hand to signal “stick.” Or I see the dealer showing a six and feel an immediate rush of glee—he’s going to bust!—and do the same, even if I’m holding (say) a 14. Now, self-evidently, if you stuck me under a brain scanner at the table, you’d see something very different than if I were attempting to calculate the probabilities on the fly. But this tells us nothing interesting about how “emotional” or “impulsive” the rules underlying my play are: In fact, they were the product of a highly deliberate and rational thought process.
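The blackjack analogy can be made concrete with a toy sketch: two decision procedures that return identical answers, one a bare table lookup (the internalized rule) and one a rough calculation performed fresh each hand. The rules and probabilities here are deliberately simplified illustrations, not real basic strategy.

```python
# Hypothetical sketch: two routes to the same blackjack decision.
# The rule table and the bust-risk model are toy simplifications.

# (a) Rules distilled in advance: consulting them is instant and "reflexive".
RULES = {
    # (player_total, dealer_upcard) -> action, worked out offline
    (20, 6): "stick",   # pair of kings vs. a six
    (14, 6): "stick",   # stand on a stiff hand when the dealer shows a bust card
    (14, 10): "hit",
}

def play_by_rule(player_total, dealer_upcard):
    """Reflexive play: look up the internalized rule, no calculation."""
    return RULES[(player_total, dealer_upcard)]

# (b) Calculating on the fly: a crude expected-risk comparison each time.
def play_by_calculation(player_total, dealer_upcard):
    """Deliberate play: weigh our bust risk against the dealer's."""
    # Chance the next card busts us (toy model: values 2-11 equally likely).
    bust_cards = max(0, 11 - (21 - player_total))
    p_bust = bust_cards / 10
    # Rough heuristic: a dealer showing 2-6 busts often, so standing is safer.
    dealer_likely_busts = dealer_upcard <= 6
    if dealer_likely_busts or p_bust > 0.5:
        return "stick"
    return "hit"

# Both routes agree on the hands from the post, even though one is a
# reflex and the other a computation.
print(play_by_rule(14, 6), play_by_calculation(14, 6))  # stick stick
```

The point of the sketch: a brain scanner watching procedure (a) would see something very different from procedure (b), yet the rule table in (a) was itself the product of the same deliberate reasoning that (b) performs at the table.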
You can probably see where I’m going here: The structure of consequentialist thought means that it’s always going to require a kind of calculation, however quick and easy, in weighing costs and benefits in each particular case. Deontic thinking, however elaborate the background reasoning supporting it, is more apt to be reducible to simple rules where the numbers on different sides don’t count. So what looks like an intriguing result about deontic or consequentialist “intuitions” broadly speaking may only reflect a structural difference in the application of the different principles yielded by those intuitions. Greene is clearly super sharp, and so I wouldn’t be surprised if he deals with this kind of objection in a paper I haven’t gotten to yet, but it seems like a confounding factor you’d need to address.