Economist Bryan Caplan has a recent blog post making a rather weird argument in favor of free will:
If you take a closer look at BG research, though, you’ll notice something interesting. Virtually every BG study partitions variance into three sources: genes, shared family environment, and non-shared environment. Typical estimates are something like 40-50% for genes, 0-10% for shared family environment, and 50% for non-shared environment.
And what exactly is non-shared environment? Everything other than genes and family environment!
Why do I bring this all up? Well, suppose human beings had real, honest-to-goodness free will. If it made a difference for behavior, where would it show itself? In the BG framework, it would be filed under “non-shared environment.”
OK, now let’s get Bayesian. If you could fully account for a person’s choices using genetics and measurable environmental variables, you’d count it as a confirmation of determinism, right? Well, if you buy this argument, you also have to buy its mirror image: The harder it is to account for a person’s choices using genetics and measurable environmental variables, the stronger the case for free will.
From this perspective, the large empirical estimates of the importance of non-shared environment are noteworthy. Identical twins raised together are still, in many ways, very different. The believer in free will can simply say, “The good twin and the evil twin just made different choices.” The determinist, in contrast, can only ask for a blank check: “One day, we’ll figure out the hidden forces that caused them to be so different. Until then, bear with us.”
This is strange for a couple of related reasons. First, the case against metaphysically robust free will is conceptual, not empirical. That is, the argument has never been that we’re doing such a good job of accounting for the choice process on mechanistic grounds that the hypothesis of free will has been falsified. The argument is that the idea of a process that is simultaneously “free” in the sense of being undetermined by the brain’s previous physical states and a “choice” in the sense of being an action of the agent, as opposed to a merely random occurrence, is incoherent. Now, that said, I’ll allow that if close neurological analysis revealed the brain operating in radically unpredictable ways that differed fundamentally from other physical systems, we’d have to entertain the possibility that something weird is going on that doesn’t fit well into our conceptual schemas. But everyone understands that the brain is so dizzyingly complex, so sensitive to minor perturbations in initial conditions, that a detailed prediction of someone’s thoughts and behavior is never going to be practical. It is impossible to predict the exact second next Tuesday when the sun will emerge from behind a cloud over my house, and it will probably never be possible. Nobody thinks this should make us more prone to believe that the weather has free will.
In a short essay linked from that post, Caplan lays out a series of further considerations in favor of free will, none of which fare much better. There are four core arguments laid out there, but the first and third can be combined into the claim that we simply know by introspection that we have free will, and that, furthermore, this direct experience of free will is so fundamental and self-validating that any argument purporting to undermine the claim that we have it must rest on premises less certain than our own introspective evidence.
There are many problems with this line of argument, some more obvious than others. First, it conflates the ineradicability of a perceptual frame with its persuasiveness as an abstract belief. So, for instance, the fact that John Searle finds he can’t help but think of himself as having free will in his day-to-day life is not in any real conflict with his theoretical conviction that there’s no such thing. I cannot help but perceive the sun as setting while I remain still, even though I know full well that the earth is actually rotating on its axis. I infer from this that our experience sometimes fails to accurately present the world as it is. Note also that this argument is in rather poor company, being derived from G.E. Moore’s “here is one hand” proof of an external world. Imagine a world not too far off where we routinely enter highly realistic virtual worlds, then ask how incontestable someone’s conviction that “this is a (real, external) hand” would be. All of which is just to say that it’s easy to overestimate how foundational or incontestable some particular belief grounded in perception really is.
The deeper issue, which I think these considerations hint at, is that I just doubt it’s the case that anyone really has introspective evidence for their own free will. Borrow a page from Wittgenstein: Suppose someone had doubts about Copernican astronomy, noting that the sun clearly seems to be setting in the sky. And how would it seem if you were standing on a rotating earth? Since, of course, we are standing on a rotating earth, we know that it would seem exactly the same: This is how a rotating earth seems. The error committed by the person who says that it “seems” as though the sun is setting is to imagine that our perception is self-interpreting, that our brute observations translate into beliefs without the aid of some model that gives sense to them. By the same token, we can ask what it would feel like to be a sentient, information-processing being that didn’t have free will. What about this sort of existence would we experience differently? Or, alternatively, when you introspectively observe some choice you make, what part of the process constitutes “freedom”? What we actually experience, I want to suggest, is the absence of awareness of all the causal mechanisms underlying our decision process. But this is no sort of evidence at all. Neither am I aware of the underlying microprocesses that give rise to my visual or tactile experience. And speaking for myself, I generally find that if I really try to focus my awareness on the moment of decision, the actual choice presents itself as given. Certainly, contra Caplan, I do not normally feel as though I “decide” what to believe, but rather that the force of reasons or the balance of considerations compel a certain conclusion. One can, I suppose, steadfastly refuse to believe something in the face of evidence or logic, but that’s normally because there’s some other sort of reason, conscious or unconscious, motivating you to do so.
(viz.: If I accepted the argument against free will, I would become depressed, and having the correct metaphysical view here is considerably less important than my happiness.) Similarly, suppose I want to choose between writing another blog post when this one is done or grabbing a book and sitting out in the sunshine for a bit. I weigh the diverse satisfactions of each activity, consider my mood, have thoughts like “there’s plenty of time to blog when the sun’s down,” and then… what? Both the considerations weighing in favor of each option and the decision ultimately seem to be things that present themselves to me. I did not consciously decide which considerations would seem salient, or when I was done weighing and would come to a conclusion—or at any rate, if I did, I did not decide to decide to decide: The regress ends somewhere. Nowhere in this process do I find anything incompatible with the supposition that my thoughts and choices are determined by ordinary causal processes that I cannot observe. That I cannot observe, in fact, precisely because they are constitutive of my thoughts and choices. A map cannot depict a territory containing itself at one-to-one scale. As a small empirical aside, it’s worth noting that when people are given subliminal direction to take a certain action, and then proceed to do so, they will construct all sorts of elaborate stories about why they chose to behave that way, without being aware of the actual underlying cause—the subliminal instruction.
Next, an argument that a surprising number of people seem to find convincing, what we might call the epistemic argument for free will. The idea goes like this: If our thoughts and choices are determined (or some mix of random and determined), then it’s preordained (or arbitrary) that we form the beliefs we have, including the belief that there’s no free will. But this (supposedly) should undermine the belief, since our conclusion is determined by the physical facts about our brains rather than the truth of the proposition. This is odd, first, because when we think about a system like a computer, our confidence in its calculations is a function of our belief that it performs its operations in a consistent, reliable, and truth-preserving way. Or, turning it around, ask: What, exactly, is “free choice” supposed to be adding here that would make us feel better about the status of our reasoning? Presented with an argument, on top of whatever mechanistic logical churning I put it through, suppose I can freely choose to accept or reject it, whatever you think that means. On what conceivable grounds should this increase my confidence that my reasoning process will leave me with true beliefs?
The worry here, of course, is that we could simply be so constituted that certain conclusions seem unassailably true to us, though they are false. And, indeed, we could. But any argument that purported to eliminate this possibility would prove far too much. We are, indeed, stuck with a particular set of cognitive machinery. We can tweak it and improve our methods of belief formation, but we can’t ultimately escape the fact that we have a certain sort of mind—and this would be true even if we were disembodied souls rather than supervenient on physical brains. We can test our belief formation system for coherence by letting one component operate on or correct another, but we can’t actually step outside the system to validate it wholesale from some outside perspective. It is not some kind of devastating rejoinder to point out that this conclusion, too, is the contingent product of a particular type of cognitive system, and therefore lacking transcendent validation. That’s what we’ve got.
The implicit counterclaim here is that if we’ve got free will, then however our brains happen to be constructed, we can know that our conclusions are really transcendently true, rather than merely being reported as true by the specific cognitive apparatus we happen to have. And I just don’t see why or how that’s supposed to follow. My best guess is that, somehow, even if our brains were deterministically chugging their way toward a false conclusion, our free wills would intercede in the process and just directly apprehend the truth of things. Or something. I’m genuinely mystified by how this is supposed to work, or why metaphysically strong free will is supposed to avoid this problem. Maybe someone who finds it compelling can clue me in.
Finally, the most easily dispatched argument of all:
Fourth, try the following thought experiment. Our brilliant neurophysiologists come up with an equation that they claim will predict all of our behavior. The equation is so good that it even incorporates our reaction to the equation, our reaction to knowing that it incorporates our reaction, and so on indefinitely. Suppose that the equation says that the next thing that you will do is raise your arm. Do you seriously believe that you couldn’t falsify this prediction by failing to raise your arm? But if you can falsify any prediction about your arm, and if the prediction is derived perfectly from a comprehensive knowledge of your body’s constituent micro-particles, then your mind must be free.
My programming skills have atrophied pretty badly, but I can write a simple C++ program that asks the user “Will my next output be zero or one?” If the user answers “zero,” it spits back “one.” If the user answers “one,” it spits back “zero.” Now, having looked at the source code and having all the information I could want about the chipset of the machine the program runs on, I can easily predict what this very simple program will do in response to my inputs. What I cannot do is accurately report my prediction to the program and have it come true. By Caplan’s logic, this demonstrates that my feeble coding skills are nevertheless sufficient to create an artificial intelligence possessed of free will. By more pedestrian logic, it should demonstrate that there’s something wrong with Caplan’s thought experiment.
Which brings us to why I expend so much time on this. In an ordinary, day-to-day sense, we all know what we mean when we talk about people “making choices,” and for the most part the metaphysical dressing on that concept doesn’t make a great deal of difference. But some people seem to be convinced—I’d claim because they haven’t thought carefully enough about what it really means—that it would be just terrible if we turned out not to have this fuzzy property of “free will.” So they start going to great lengths to mount a defense of the concept, which ends up requiring them to employ a lot of sloppy methods of reasoning. The result, perhaps ironically, is to poison the cognitive system in ways that make it less likely, in a variety of contexts, that they will be able to sort truth from falsehood.