Matt McIntosh over at Catallarchy serves up a question that I think is supposed to count in favor of psychological egoism:
Now say some clever scientist, considering him a danger to society, knocks out our sociopathic subject and plants a small device in his head — call it the Artificial Conscience™ — that will cause him intense pain every time he thinks about harming someone. In order to avoid crippling pain, our subject learns to be a much less abusive person. Note that his motives in reforming himself are entirely self-interested. (Note also that this could be inverted so that rather than feeling pain when he considers abusing people, he gets a shot of endorphins from behaving benevolently.)
Now: in what way is a normal conscience disanalogous from an artificial one?
In fact, this is a decent way of illustrating—if it’s not already obvious—why psychological egoism is wrong. And it may be related to what I suspect goes amiss in neuroethicist Josh Greene’s inferences from fMRI results to broad conclusions about deontological vs. consequentialist reasoning. In both cases, you have an automatic-seeming moral reaction that can lead you to some awfully confused conclusions about either people’s moral reasoning or their moral motivation if you detach that reaction from the networks of belief and deliberation that produced it. So, for instance, if I care about other people, don’t wish to harm them, and so on, I will naturally develop an aversive reaction to the realization that I have acted badly toward them. But to conclude from this that good behavior is primarily motivated by a desire to avoid pain, precisely as if I were a sociopath implanted with some Clockwork Orange device, is to get things very badly backwards.

I’m reminded of a social science experiment I read about a while back comparing different kinds of maternal reactions to crying infants. Most mothers had an empathic reaction probably best described as “concern”, motivating them to try to alleviate whatever was causing the infant to cry; others registered annoyance and distress—motivating them to want to leave. You could, of course, argue that in the former case it’s just that the subjects have more developed moral imaginations, such that they’d continue to feel guilt about the crying infant after they’d left if they didn’t do anything, but you end up needing to add an awful lot of epicycles to make this work.
Anyway, the most obvious distinction is that the sociopath would presumably remove the implant if he could, while most of us would not want to anaesthetise the agenbite of inwit, even if this were somehow possible. (In a sense, of course, it probably is: People do sometimes decide they’re so empathic that they’re letting themselves be taken advantage of and try, presumably with occasional success, to dampen those reactions.) Ordinary guilt and conscience emphatically aren’t some kind of alien psychic burden we’d as soon do without: They’re parts of a complex emotional ecology deeply embedded in our systems of evaluation. The kernel of truth in the egoistic analysis is that we would not need these things if it weren’t true that a direct feeling of personal distress can be a more effective short-term motivator than more abstract concern for others, especially where there’s an immediate and palpable gain to be had by acting badly. But they aren’t phenomena that can be invoked to explain away an illusory altruism; rather, they’re what we need altruism to explain.