Will Wilkinson suggests, in a long and interesting post on the scientific debate over the existence of an innate moral capacity, that the absence of such an inborn faculty would tend to bolster the case for moral relativism, while its existence would cut in the other direction. Adam Ozimek at Modeled Behavior follows up:
I have to say I find this evidence against moral nativism pretty convincing, just as I find it convincing evidence against moral objectivism. Another reason I’m skeptical, at least of moral objectivism, is the following thought experiment. Say you had an unlimited amount of money and time to persuade the chief of some Amazonian tribe of an objective scientific claim that the best evidence suggests is true. Say, that the earth revolves around the sun, or some other basic scientific claim. You can conduct scientific experiments, sit with him in the library going over the literature, and argue with him for 1,000 years. No matter what the starting point of his knowledge and beliefs, you should eventually be able to convince him of the truth as best as the evidence indicates; after all, it’s demonstrable knowledge.
He contrasts this with moral beliefs—and in particular, moral beliefs involving gods, which seems like an especially inapt example to pick. After all, for any sufficiently determinate description of “God” or “gods,” either there exists some entity corresponding to the description, or there doesn’t. It is an “objective” question, in the sense that there has to be some fact of the matter, quite independently of what we can prove or learn about it. And yet, of course, there is not exactly universal agreement on the topic—despite rather more energy (alas) having been devoted to such questions than to scientific inquiry. Conversely, whether the truth of a particular scientific claim is “demonstrable” is often a pretty contingent matter, depending on the state of technology, and on the forms of “proof” or “demonstration” your interlocutor is culturally conditioned to accept as valid. You expect me to trust your “telescope,” an obvious product of demon magic?
It might be objected that morality at the most abstract or fundamental level is not empirical in the same way, and therefore ought to command universal assent. But even in the realm of pure reason, there’s a world of difference between what is “demonstrable” in principle and what can actually be demonstrated to anyone. There are purely deductive truths of advanced theoretical mathematics that require years of grueling study to grasp—and I mean, just to understand what is being claimed, never mind proving it—and are probably just plain beyond the cognitive capacity of many (most?) people with any amount of study. We assume they’re not talking nonsense because the rest of us can at least understand the elementary components from which the advanced theory is built, and because it does appear to enable them to build stuff that mostly works. It would be pretty unfortunate if morality were like this—if there were correct moral principles whose unassailable proofs were simply too complex to be understood by or persuasive to most people—but I don’t see how the possibility can be ruled out.
One might complain that insofar as “morality” is supposed to motivate action and coordinate the behavior of groups, no system too abstruse to actually provide reasons to most members of a community should qualify. I’m sympathetic to the point in practice, but unless the idea is to just stipulate relativism by building it into the definition of “morality,” it’s still consistent with the claim that the principles would be (objectively) binding on a community capable of formulating and understanding them. In any event, I suspect the proportion of the population who have personally reasoned through the underlying logic of the norms they follow is probably not much greater than the proportion who understand how their iPhones work.
Finally, as with religion, it’s worth making the obvious point that institutions and people that hold power in any particular culture have a pretty potent incentive not to be persuaded of any moral claim that would undermine that power or destabilize the status quo. What ought to really be astonishing is just how much consensus we do see emerging over the long run, even when powerful interests are strongly motivated to reject (and encourage others to reject) an argument that ultimately wins out.
More generally, and returning to Will’s post, I don’t think the plausibility of relativism is affected all that strongly by whether we think there’s a “native moral capacity,” whatever that amounts to. One ought to be careful here: There’s clearly some sense in which morality is “relative” to our intrinsic capacity to be morally motivated. We don’t say wild animals act “wrongly” when they harm people, because we don’t think they’re moral agents—which is to say, they’re biologically incapable of regulating their actions on the basis of reasons and norms (though they might be susceptible to simple training). But I think that’s probably about the level at which this is relevant. For the reasons suggested above, the fact of cultural disagreement about norms just doesn’t seem hugely significant. In a sense, to articulate a moral principle is precisely to acknowledge that we are not biologically constrained from acting otherwise. A species wired so as to be neurologically incapable of consciously lying would have no occasion to develop a concept of “honesty” or regard it as a virtue. Obviously there has to be some hook in the way our brains are structured that provides some kind of general basis for moral reasoning, but I wouldn’t give too much weight to how much of the more specific content turns out to be hardwired as opposed to learned.
The explanation for any specific hardwired moral intuition would always be a function of its contribution to our inclusive fitness under mostly hunter-gatherer conditions. Again, we don’t bother talking about “moral principles” unless we’re quite capable of not acting in accordance with them. Given that, having a biological account of widespread moral intuitions could easily make relativism seem more appealing. “It’s just another kind of hardwired cognitive bias; to the extent I’m capable of training myself to overcome that bias, why should I care what was conducive to survival and reproduction a million years ago? Conditions are very different now, and in any event, I have lots of priorities other than inclusive fitness.” The biological account would then serve a kind of Mannheimian “unmasking” function: It would explain why certain rules and mores seem very intuitively plausible or appealing, but in a way that saps the intuition of its reason-giving force. Learning about the biological basis of optical illusions doesn’t make us any less prone to experience them, after all, but it does lead us to regard them as illusions rather than trust them.
None of this, of course, counts as a refutation of relativism—but it is a reason to suspect the question isn’t really amenable to being decided at this level of abstraction. Human beliefs on all sorts of topics have varied wildly across time and cultures. Sometimes the beliefs are all “locally” correct, sometimes everyone’s confused, and sometimes one or a few groups have got it (more approximately) right. The test is ultimately to look at the arguments. To the extent it’s worth staking out a position in general terms, I tend to think the “relative or objective” question is actually poorly formed, as I hint above: The interesting question is “relative to what, and in what ways?”
Supposing there’s a genuinely universal moral principle prohibiting “fraud” or “cheating,” you need a lot more detail about mores and expectations in a particular culture to know what it actually requires in practice. And even the “universal” principle is necessarily dependent on a whole slew of very general facts about the sort of beings we are—capable of certain forms of reasoning and representation, with interests capable of being promoted or harmed, and so on. (Stick insects engage in a kind of “deception,” but it has no moral dimensions.) Debating the “objectivity” of “morality” in general just doesn’t seem that fruitful. You need to get into the weeds and see whether a particular moral claim is supported by reasons that should have force for everyone—and if it seems they should, look at what actually accounts for the disagreement. There’s no bird’s-eye-view shortcut that’s going to save us the trouble of actually looking at the specific arguments.
Addendum: I see one trackback link describing this post as a good defense of relativism, which isn’t quite what I thought I was writing… but maybe these things are relative. In all seriousness, though, this is one of the problems I was pointing up above: People use “relativism” as shorthand for a whole array of very different claims, some quite strong, others so weak as to seem indisputable.