The New York Times sums up some of the interesting work being done on the neurological bases of morality, in particular the finding that we seem to be hardwired to enjoy altruism. The piece is a little less precise than I might have liked: It should come as no surprise at all that there's some specific mechanism of reward in the brain that cashes out the old notion that it feels good to do good. But if the mechanism itself is hardwired, its specific triggers (the group of people I'm going to get satisfaction from helping) equally clearly are not, because we're also hardwired to draw moral in-groups and out-groups, typically in culturally determined ways.
Inevitably, there’s a paragraph on how this research is provoking some misguided handwringing:
The research enterprise has been viewed with interest by philosophers and theologians, but already some worry that it raises troubling questions. Reducing morality and immorality to brain chemistry — rather than free will — might diminish the importance of personal responsibility. Even more important, some wonder whether the very idea of morality is somehow degraded if it turns out to be just another evolutionary tool that nature uses to help species survive and propagate.
In one sense, this particular horse has already left the barn. For anything we do or think, there's going to be some corresponding brain activity, so we already know that all human experience and action is (bracketing, for the moment, quibbles about how precisely to interpret this term) "reducible" to brain chemistry. All that's changing is that we're gaining understanding of the specifics of the mechanisms involved. Similarly, all our cognitive capacities are byproducts of an evolutionary process that designed our brains as tools for survival and reproduction. And we know this in a general way whether or not we get into a taxonomy of the specific adaptations and the reasons for their selection.
On the other hand, it would be a mistake to imagine that this research shows that morality is just some sort of blind reflex, like a sneeze. (Compare Dan Dennett's idea that when we get distressed by the notion that humans lack radical free will, it's really because we're concerned with being "sphexish" rather than with being determined per se.) We're born with a pretty crude and simple set of biologically given needs: food, love, sex, warmth, security. From these building blocks, we generate varied and complex personal and collective goals, as well as life plans to fulfill them. Knowing something about the hardwired foundation will doubtless tell you something about the structures built atop it, but there's a lot it won't tell you as well. It's interesting if, say, aesthetic appreciation of opera lights up the same part of the brain as an orgasm, but there's rather a lot left to say about why Wagner is good, and only some of it will amount to an explication of the physiology of hearing.
In any event, why should morality be degraded by having its roots in our evolved brain structures? Degraded relative to what? Suppose, after all, there really were Platonic forms of the Good out there somewhere. Why would that leave things better for morality? Because an evolved, embodied morality is more contingent? But we’re in that contingency. Because it would be more “transcendent”? But we make our moral decisions from behind our own eyes.