I’ve written about this at some length before, but I want to quickly repeat the point because I keep seeing reader submissions to Andrew Sullivan’s running colloquy on free will that make a point to the effect that, whatever the metaphysical truth of the matter, we all have this “illusion of free will,” and ordinary people all act “as if” they have it. I think this is a confusion.
Suppose I look at a spoon in a glass of water, and note that while it appears to be bent, this is only an optical illusion. What do I mean? I mean that I have a concept of what kind of visual experience I’d have if the spoon were actually bent, and that my perception corresponds to this, though I know that in this instance my perception is caused by the refractive properties of water rather than by anything unusual about the shape of the spoon. Some similar account works for illusions more generally: I have an idea of what it would look like if the magician really sawed his assistant in half, and he manages to approximate my concept without actually doing it. How would this work in the case of (radically contracausal) free will? If we don’t have it, after all, this is presumably not a recent development. If we haven’t got the genuine article, then we have no point of reference for what it would subjectively be like (if anything) to have it.
I think people who talk like this usually mean a few related things. For one, ordinary decisionmaking does not seem like other kinds of “unfree” action—doing something by reflex or under hypnosis, say. Also, we’re no more directly aware of the neural underpinnings of our decisionmaking than we are of, say, our sensory processing. And of course, we aren’t aware of what the results of our deliberation will be in advance—otherwise, why deliberate?—so those results are necessarily “open ended” in that sense. But to call these things an “illusion of free will” just seems like a mistake. It is as if someone had told me for the first time about subatomic theory, and I mused that I nevertheless have this illusion of a solid desk chair, when after all, it is really these clouds of quarks and whatnot. And this would be silly: The parameters of “solid” and “desk chair” are given by ordinary life, and within those bounds the chair is exactly as solid as it ever was. A theory about the microstructure of the chair could not be in conflict with, or prove “illusory,” my ordinary perceptions, because they were not perceptions of the microstructure in the first place.
The same goes for claims that we “act as if” we are free, or “cannot help talking as though” we were free. What does that mean? That when I decide what to have for breakfast, I must simultaneously represent to myself a metaphysical theory about the nature of consciousness and human agency? Clearly not. Is it that our language is riddled with talk like “if I were to do so-and-so,” in which it’s implicit that multiple courses of action are live options? But as one of Andrew’s commenters points out, we use “if” talk constantly in cases where there’s epistemic uncertainty about things everybody thinks are determined by ordinary causal processes: “If it rains tomorrow, we’ll have to cancel the picnic.” And again, if we don’t have contracausal free will, then we talk and act exactly “as if” we are beings without that strange property, since evidently this is just how such beings act. What is really meant by this is that Western thinkers have ginned up a bunch of metaphysical theories that implicate terms like “if” and “ought” and “choice,” and then import the theoretical baggage back into our ordinary use of the terms. But our use predates our theorizing. “Choice” and “if” are not like, say, “phlogiston”—derived from a theory and rendered conversationally obsolete when we discover that the theory is mistaken and the term extensionless. When we try to explain what we “mean” by the word “could,” that explanation or definition is really a further act of interpretation of our linguistic practice. We sometimes misleadingly say our ordinary way of talking about choice “assumes” or “presupposes” some theory implicitly, but we cannot literally believe that toddlers must have consciously formulated a position on the nature of human choice before they are able to begin using such terms. (Nobody, I hope, will suggest instead that we should believe the toddler subconsciously holds the theory, whatever that would look like.)
There’s nothing contradictory or incongruous about continuing the practice after you’ve decided a particular interpretation of it is wrong.
You’ve probably heard this old joke: Guy goes to the psychiatrist and says his brother’s gone mad, and believes he’s a chicken. Psychiatrist asks why they haven’t already brought him in for treatment, to which the guy replies: “Well, we could use the eggs.” There’s actually a neat ambiguity here. You can read the joke as being that the man hasn’t drawn an obvious implication from what he knows—that his brother is not really a chicken, and so he shouldn’t expect any eggs. He says he doesn’t believe his brother is a chicken, but he continues to talk “as if” he does believe it. But there’s another reading: The guy who comes to supposedly report on his “brother’s” condition turns out to be the crazy one, under the delusion that this chicken is his brother. His talk about eggs is just fine, but it’s gotten tangled with some weird ideas about kinship ties. (And of course, there’s the surreal reading: The human brother’s delusion is so powerful he really has started laying eggs.) With free will, we’re in the second reading of the joke. Our ground-level talk about choices and counterfactuals is fine; it just seems problematic because we think free will is our chicken brother. OK, that doesn’t quite work, but you get the drift.
We can say something similar about the folks who weigh in with dire concerns about what the rejection of free will means for moral judgment. Our particular intuitions about morality’s content may benefit from theoretical reflection, but it’s just backwards to suggest that the “wrong” answer to a metaphysical question about agency or the nature of the mind could somehow require us to throw out the whole language of value and meaning. And while it would require a long post of its own to really cash this out, I think it’s a good sign that something’s wrong with your value theory if it does depend in this crucial, systemic way on the answer we give here.
The theoretical error may have practical costs as we learn more and more about human genetics and neurology. Jonah Lehrer notes several studies in which subjects prompted to think about a view of persons as “biological machines” were more disposed to cheat on a mock test. You can say: Aha! The corrosive effect of materialism! Or you can view it as an unfortunate side effect of an excess of transcendentalism in our ethical thinking: People confronted with facts that throw the contracausal theory into doubt come to confusedly think moral precepts are undermined. I think elsewhere I’ve called this theorizing “in the shadow of God,” but we can be less dramatic about it and just say we’re in the first version of the chicken joke. Why not correct this mistaken metaphysical theory? Well, we could use the ethics…