Neil the Ethical Werewolf has a post up at Ezra Klein’s joint defending the idea of objective morality from Matt Yglesias, who had two quick posts on the topic last week. I’ll agree with Matt at least this far: I very much doubt the majority of the folks who fling the term “relativist” around as an epithet could come up with anything like a coherent explanation of why moral realism is correct and relativism wrong that would stand up to five minutes of scrutiny from a half-bright philosophy undergrad.
Anyway, a couple quick points. I actually think most of what Matt says is more or less on point. But I’m also not sure it entails a relativist position. That is, I think many of the things Matt claims could be true, and there could still be an objective morality.
Relativism would follow from what Matt says if defending an “objective” morality meant something like finding in nature somehow the Platonic form of the good, or some particle human beings emit to generate a moral-rights field. If, in other words, moral facts were supposed to be like other sorts of natural facts—made true by their correspondence to some thing out there in the world. Now, there are a lot of excellent reasons (which I won’t bother rehearsing here) for thinking that this is not going to be a terribly promising project, and probably doesn’t even make sense.
But that approach is so unpromising, it ought to hint pretty strongly that expecting moral facts to be “out there in the world” like leptons and kittens are is just the wrong way of looking at whether moral claims could be “objectively true.” It’s not just that there don’t happen to be any Platonic forms, it’s that moral claims couldn’t be made true that way even if there were. Now, maybe there’s no way such claims can be true. Maybe the strongest thing we can actually mean when we say “X is wrong” is something like “X is frowned on by me, and maybe also a lot of other people around here like me,” and that’s all there is to say. A lot of people think they’re making a stronger or broader claim, but insofar as they do, they’re just confused. But to come to that conclusion, you ought at least to run through some different ways you can cash out the stronger claim and see if those work out. Without getting too much into the Platonist/formalist debates, it is not a good answer to the claim that some mathematical theorem is really true that “five” and “square root” (or whatever) are not natural objects (even though, of course, particular groups of five things might be). And while you certainly could test mathematical claims by counting up lots of objects, induction might not be the best way to establish their truth or falsehood. Different sorts of claims, to borrow Nozick’s phrase from Invariances, have different sorts of truth properties.
Another point: “Objective” and “relative” are complicated, multidimensional terms, and not necessarily opposed in all their senses. The size and speed of some celestial object are going to be relative to the observer’s frame of reference. But that is, itself, an objective truth of relativity theory. And it certainly doesn’t mean that the size and velocity of that object are just whatever my cultural group says they are. So if moral claims turned out to be relative in some way to human beings or rational minds, that kind of relativism maybe shouldn’t bother us so much. If what we care about is moral disagreement, it ought to be enough that what “rightness” and “wrongness” turn out to be relative to isn’t just “whether some particular person or his culture think X is right or wrong.”
Now, as Neil points out, a lot of moral disagreement does just seem to turn on ordinary factual disagreements. Allah (or Jehovah or whomever) either exists or he doesn’t. It might be highly unlikely that we’re ever going to convince people who strongly think He does that they’re mistaken (or that they’ll do the same for us). But what all of us can agree on is that this, at least, is an ordinary kind of fact that’s objectively true or false. That kind of disagreement might be a huge practical obstacle, but it’s not a theoretical problem. Some people think they have reason to wage jihad, because Allah has commanded them to. They are wrong, not for any very interesting moral or metaphysical reason, but just because there is no such fellow as Allah.
So what if we’ve got some people who agree about the facts of the matter? Well, most ethicists seem disposed to regard moral claims as claims about reasons. And we know these kinds of claims, in general, can be objectively true or false, and we can be wrong about them. I think I have a reason (I’m thirsty) to drink that glass of clear liquid, which I believe to be water. Since it’s Drano, I’m mistaken: I have a strong reason not to (I don’t wish to die painfully).
Those kinds of reasons are objective in one sense—whether I have them or not doesn’t just depend on whether I think I do—and relative in another: they depend on whether I want certain things, as well as on facts about the world. And if that’s all the sorts of reasons there are, it might seem as though we’re still in bad shape vis-à-vis “objective morality,” because even if I’ve got all and only factually true beliefs, maybe whether some action of mine will lead to your painful death just isn’t in the set of things I care about (nor do any of the other things I do care about entail that I ought to care about this).
There are two strategies someone who wants to defend objective morality could shoot for. The first is to argue that, just as someone could be wrong about their derivative, lower-level reasons, we can also be wrong about our top-level or “ultimate” reasons. There might be ultimate ends or concerns we have reason to adopt, whether or not, in fact, we do. Derek Parfit takes a shot at this with an example I think I mentioned in a post from a few years ago, “Future Tuesday Indifference.” He imagines a person who says he just doesn’t care about any pain he experiences on future Tuesdays, and so will do nothing, bear no present cost, to prevent the most incredible agony to himself next Tuesday. He has no very queer factual or metaphysical beliefs: He recognizes that pain on Tuesday will be terrible, and that he will hate it and want it very much to stop, but he doesn’t care now. Now, extending Parfit, we might imagine someone who just doesn’t care about being in pain. And imagine he has no special beliefs about the nobility or dignity of being indifferent to pain or anything like that. It feels as terrible to him as to anyone else, but he just doesn’t care. If someone like that might just be wrong—if he has, perhaps not a decisive reason, but at least some kind of reason to care about whether he will be in pain, even if in point of fact he cares not at all—then we’d be off to at least a start with there being the right sort of objective reasons that might allow us to get to objective morality.
The other strategy is a bit more humble. What we might try to say instead is that someone might have all and only true factual beliefs, and not care at all about (say) whether his actions injure other people, or at least people outside some select group. But if he accepts some very thin moral premise like “as my ends and goals and concerns provide reasons for me, so (at least to some extent) do the ends and goals and concerns of other people,” then we can actually get some more definite set of rules or principles out of that, and make arguments about why one way of expressing concern or respect for others makes more sense than another.
Never mind for the moment whether we could actually get that. If we could, would it be enough to count as an objective morality? I think it probably would. What it wouldn’t allow us to say is that every rational being has some objective reason to be moral. But there being an objective reason to be moral is not the same thing as there being an objective morality. For the former to be true, we’d have to show there’s an objective reason to have a certain set of concerns, even if you don’t. It is not, to my mind, a very big problem for “objective morality” if we can’t get that. If there are people who lack any such concern, we might say they’re just outside the moral game, so to speak. They’re like wild bears or giant anvils falling from the sky: It’s just a category error to discuss what they do in terms of “right” or “wrong”; the best we can do is to kill them or get out of their way. In other words, there might be just one objective morality, but sentient, rational beings with no reason to adhere to it. That seems fine. It should not be a requirement of a moral theory that it provide reasons for anvils and bears to adhere to it.
To be able to say there’s an objective morality, it seems like it would be enough if we could say something like this: If you are prepared to be responsive to certain kinds of reasons or concerns, if you’re willing to accept some kind of “thin” premise like “other people matter—and, in some sense, equally” that is constitutive of the moral point of view, then through some combination of eliminating false factual beliefs and reflection on the structure of human valuation, it would follow that you have some more determinate set of reasons to act (or refrain from acting) in certain ways.
As I say, I don’t know if we can actually get that. We might need more controversial premises about what kind of responsiveness to other people’s reasons was required before we got anything sufficiently determinate, and there might be many sets of such additional premises, with no external way of choosing between them, such that no particular set had a uniquely compelling claim to constitute the moral point of view. Would there then be no objective morality? Maybe not. For there might be some things that followed from any of the plausible sets of additional premises needed to get to sufficiently determinate principles of conduct. That area of overlap, then, would be the domain of objective morality.
One very very last point. Here’s something Neil says about what he thinks is a good way to arrive at true moral beliefs:
How do I separate the true moral beliefs from the false ones? I first try to determine what sorts of processes of belief-formation are generally reliable, considering many examples where morality isn’t at stake. Then I look at all the ways that people form their beliefs about which states of affairs are good, and which actions are right. I throw out the beliefs that are generated by unreliable processes. Particularly, I throw out the beliefs formed by having some emotionally-driven attitude towards a state of affairs, and thus coming to believe that there’s some objective goodness or badness out there in that state of affairs. All that’s left is the goodness of pleasure and the badness of displeasure, which can be discovered without any emotions standing between us and our pleasure or displeasure.
I think this is a mistake, insofar as it doesn’t take account of the different truth properties of normative propositions and propositions about natural facts. If I want to know whether some ordinary proposition P about the world is true, then whether I want it to be true, hope it is true, am afraid it might be true, my dog would die if it were true, and so on, have no bearing on whether, in fact, it is true. Any beliefs I form about P by a process in which those factors play a role should, at the very least, be considered highly suspect. On the other hand, if the question is whether some state of affairs is good or bad, then my (and other people’s) emotional attitudes toward that state of affairs obtaining might very well be a relevant factor.
In a related vein, I think he’s wrong to try to isolate pleasure and displeasure, as some sort of uniquely objective carriers of value, from our responses to those sensations. (And here, I suppose, I tip my hand re: the Parfit example above.) This is actually tricky, since our attitudes about pleasure and pain are so tightly linked to the sensations that they’re hard to disentangle. But here’s an example I vaguely remember from philosophy of mind. There’s a certain kind of neurological condition, extremely painful, whose sufferers experience attacks during which they report feeling as though they’re being torn at by sharp knives. And there is a certain kind of neurosurgery by which they can be treated, wherein they still report feeling the same sensation, but they don’t mind it anymore. It no longer bothers them. One way to respond to this is to say that this shows it’s not actually “pain” and “pleasure” themselves that are good or bad, but how we feel about (that is, our emotional reaction to) these sensations. Or you could say that what they’re feeling in these cases is not actually pain at all—that a sensation’s being pain involves certain kinds of responses, just as a certain emotion’s being “anger” involves a certain kind of cognitive content in addition to a set of sensations and physiological responses. Either way, though, you don’t get the kind of division Neil’s shooting for.