First, if you have any interest in taking a Moral Sense Test run by the Cognitive Evolution Laboratory at Harvard (and who doesn’t?), go click through and do it now before I deflower your pristine intuitions with my throbbing analysis.
Ok, the test basically presents a series of short scenarios in which one person has accidentally injured another—which is to say, A performs an action that causes B to be injured, though A neither intends nor foresees this result of the action. The reader is then asked how large a fine, if any, ought to be levied on A as punishment, with the understanding that this will be paid to the government and go into the general fund.
Dan Drezner took the test and seems to have answered more or less as I did, and had the same reaction. I’m assuming that his relatively low average fine was, like mine, the result of assessing no fine at all in many of the cases, and fines of a few hundred dollars in a couple of cases. And like him, I was astonished to see that the average for all test-takers was many tens of thousands of dollars. Meanwhile, Mike Munger and (a rather calmer) Will Wilkinson are appalled that we would specify any fine at all, since all the scenarios describe essentially private torts that should be remedied (if remedy is called for) by a payment to the injured individual. (I shelved this concern on the assumption that the test designers had just chosen an exceedingly clumsy way of getting at the question of how culpable or blameworthy we should consider the actor in each case.) So, a few thoughts.
First, since all we get is an average, I’m going to guess that this isn’t representative of the median person’s fine, but rather a number skewed artificially high by a few outliers who entered hundred-million-dollar fines, either as a goof or because they have very weird moral views. Second, I’d be willing to bet that many people were not paying especially close attention to the initial framing, and were indeed making judgments about the actor’s total liability, including compensation for the victim’s suffering or medical costs, rather than the appropriate amount of a specifically punitive additional fine. Since one of the cases involves a broken leg, and another a possibly severe burn, those costs might well be substantial. The test designers could have spared readers some confusion by explicitly clarifying that this would be a fine tacked on after any compensation owed the victim was settled.
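The skew suspected here is easy to demonstrate: a handful of enormous responses can drag a mean far above the typical answer while the median stays put. A minimal sketch in Python — the respondent counts and fine amounts below are invented for illustration, not the actual survey data:

```python
# Toy illustration (invented numbers): most respondents assess no fine,
# a few assess a few hundred dollars, and two enter $100 million as a goof.
fines = [0] * 9_900 + [300] * 98 + [100_000_000] * 2  # 10,000 "respondents"

mean = sum(fines) / len(fines)
median = sorted(fines)[len(fines) // 2]

print(f"mean:   ${mean:,.2f}")   # lands around $20,000
print(f"median: ${median:,}")    # $0 — the typical respondent's answer
```

Two outliers among ten thousand respondents are enough to push the reported average into the tens of thousands even though the median fine is zero, which is why an average alone tells us little about the typical test-taker.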
Which brings me to some of my more general concerns about the current vogue for experimental philosophy. I’m all for it in principle, especially given how often philosophers appeal to “our shared intuitions” without bothering to check how widely they are shared outside the faculty lounge. But theorists for whom this kind of empiricism is a thrilling novelty often seem disposed to draw sweeping inferences from thin data that’s susceptible of many interpretations. I’ve written about my similar concerns regarding the generally very exciting work of neuroethicist Joshua Greene. But this particular survey seems egregiously poorly designed, and not just because it invites the kind of confusion about punitive and compensatory liability I mention above.
The researchers have my sympathy to a degree, because they’ve got to contend with the fact that people without philosophy training sometimes seem almost perversely resistant to ethical thought experiments whose point is to zero in on a very specific kind of moral reaction, which often requires accepting some implausible assumptions. I once made the mistake of offering up just such a thought experiment with the preface that “God has told you” that such-and-such is the case. Philosophers will occasionally deploy that turn of phrase as a kind of rough shorthand for: “Look, in the real world it might be almost impossible to be absolutely certain about X, Y, and Z, but we want to fiat that away so we can inquire into what our moral reaction would be if there weren’t any question about the facts of the case.” Many commenters were far more interested in the theological implications of this earth-shattering encounter with divinity than in the ethical quandary, and gave answers that hinged on this dramatic confirmation of God’s existence.
Still, if that sort of thing can be frustrating, it’s because the whole rationale for these setups is—obviously, one hopes—to accept some admittedly unrealistic assumptions in hopes of zeroing in on a few clearly-defined moral “variables.” To answer these on the grounds that “in the real world” X wouldn’t hold or confounding feature Y might possibly obtain is usually to miss the point. But the scenarios on the Moral Sense Test are, in a way, the antithesis of the conventional moral thought experiment. Each one is a tangle of considerations, and they simultaneously vary along a welter of morally relevant dimensions.
They vary in the relations between the parties, who range from perfect strangers to doctor and patient to siblings. (Relevant in the latter case because one might imagine a man would not necessarily want his brother punished for the accidental infliction of a mild injury, even a negligent one, and because one supposes the special guilt of having harmed a loved one would lessen the need for an additional deterrent.)
They vary in the reasonableness of the actor’s belief that his action would cause no harm—sometimes because we think anyone ought to know a welding torch will make a metal object dangerously hot, sometimes because the actor is a professional we expect to have special expertise.
They vary in the severity of the injuries, which range from mild bruising to a broken leg and burned hands.
They vary in the victim’s contributory responsibility for the harm. In the welding case just mentioned, another student is (without apparent objection) voluntarily holding the metal object as the torch is applied. In another, a commuter bolts across a subway car into another passenger’s outstretched legs.
They vary in the possibility of assigning culpability to third parties. Where was the instructor in this welding class? Why weren’t the students using, I don’t know, gloves or something? Who set up this construction site where large, heavy objects go rolling into passersby if someone isn’t paying attention?
They vary in the commercial nature of the activity from which the injury arose: One case involves a dentist who presumably pockets a tidy fee each time he performs the procedure he botched. That would influence the size of the fine needed to provide an effective deterrent, though it also adds the potential confounding factor of serious reputational effects, which accrue even in the absence of a fine. Similarly, they vary in the implied economic class of the actor, which would change the real punitive impact of a given fine.
They vary in the likelihood that the actor will be faced with similar circumstances in the future—again, the dentist in one case, a construction worker in another. And so on, and so on.
Again, the problem isn’t the variation as such—the whole point, one supposes, is to see how our judgments change as each of these factors varies. The problem is that there are many of these mutually interacting variations within each scenario. Maybe the folks at the Cognitive Evolution Laboratory have some elaborate mathematical tools for isolating all these factors, but I’d expect it to be nearly impossible to figure out what was generating a particular judgment (and therefore the average judgment) in any of these setups. If this is what contemporary experimental philosophy looks like, we’ve got a long way to go before it’s capable of yielding useful insights.
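The identification worry can be made concrete. If each scenario toggles several morally relevant factors at once, recovering each factor’s contribution from a handful of scenario-level judgments is an underdetermined problem: with more factors than (independent) scenarios, infinitely many weightings fit the observed fines equally well. A toy sketch in Python with NumPy — the scenario encodings and fine amounts are invented purely for illustration:

```python
import numpy as np

# Hypothetical encoding: rows are scenarios, columns are morally relevant
# factors (say: injury severity, victim fault, actor expertise, relationship).
# Each scenario varies several factors at once, as in the Moral Sense Test.
X = np.array([
    [1.0, 0.0, 1.0, 1.0],   # scenario 1: severity + expertise + relationship
    [0.0, 1.0, 1.0, 0.0],   # scenario 2: victim fault + expertise
    [1.0, 1.0, 0.0, 1.0],   # scenario 3: severity + victim fault + relationship
])
fines = np.array([500.0, 0.0, 2000.0])  # invented average judgments

# Least squares returns *a* weight vector that fits the data exactly,
# but the design matrix is rank-deficient (rank 3 < 4 factors), so the
# null space is non-trivial and the per-factor weights are not identified.
w, residuals, rank, _ = np.linalg.lstsq(X, fines, rcond=None)
print("rank:", rank, "vs", X.shape[1], "factors")
```

Since the matrix has fewer independent scenarios than factors, any weight vector plus a null-space component reproduces the same fines, which is the formal version of the complaint that you cannot tell what is generating any particular judgment.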