John Searle makes a game attempt to give an account—what Nozick would call a “philosophical explanation”—of how there could possibly be free will, of what it would have to look like if there were, in spite of all the familiar problems with the concept. He admits, frankly enough, that it is a loose and sketchy account, and that there is not much good reason to prefer it to the view that we merely seem to have strongly contracausal powers of choice. The one argument he does tentatively offer, though, is so manifestly bad that either he is not thinking clearly or I am not getting something important.
He proposes the argument that it is unlike evolution to confer upon us complex and extremely resource-expensive capacities that serve no function. Which is true, but has little enough to do with free will, which he appears to conflate with two other distinct properties: consciousness and rational calculation. It is clearly resource-expensive to have a brain capable of planning, making inferences, abstracting, formulating general principles, and so on. But it is not very mysterious what the evolutionary value of such capacities might be—especially in an arms race against your fellow primates once everyone has passed a certain cognitive threshold.
Now, there is an interesting evolutionary point to be made here about consciousness. You can say: Look, if the functional benefit all comes from the “plumbing,” as Searle has it—the neurons firing away to plot the best spear trajectory toward that mammoth, in strict accordance with physical laws—then you might well ask why that has to be conscious. Couldn’t the brain do all that calculating without there being something it’s like, subjectively, to perceive, experience, and think about the world? And so you might think that if evolution has given us minds that are conscious on top of all this, then consciousness can’t merely be an epiphenomenon of the plumbing—it has to make a causal difference that yields some selective advantage. And here I’ll just say… we don’t know. Consciousness could well be a spandrel. That is to say, it may just be that when you have a sufficiently complex information-processing system made of the particular kind of physical stuff our brains are composed of, the processes involved will have some kind of subjective character. If conscious mental activity just is brain activity, however, and not some kind of strange excretion from it, then the two have precisely the same causal properties, and it’s just a confusion to describe consciousness as “epiphenomenal.” (Aside: Maybe “causal properties” is the wrong way to describe it. The usual picture is that event A has properties in virtue of which it causes event B—but as Hume noted, the “causes” relationship between them is a kind of black box; all we actually see is the succession. There would be a neat sort of symmetry if consciousness were in the black box.) Whatever the case may be there, we just have no reason to think it “cost” evolution anything to add sentience as some kind of bonus feature on top of the capacities for planning, strategy, and so on. If a brainlike system with these capacities—able to merge input from many sense modalities and abstract from them for various purposes—is necessarily conscious, for reasons we don’t fully understand, then the cost of consciousness is just the cost of the capacities. Or to put it another way: The alternative picture is that evolutionary selection pressure might have produced these very strategic zombies—like vastly more complex insects, say, all stimulus-response with nobody home—but then some mutation won out that added this further feature, consciousness, to the system, because it yielded some additional improvement. And that just doesn’t sound quite right, does it?
Whatever the case with that is, it’s the only place where there seems to me to be any kind of genuine puzzle. Because once you get through the calculating function and the conscious subjective aspect, what question is left about freedom? Maybe Searle’s thought is along these lines: It wouldn’t be terribly parsimonious for evolution to go about imbuing us with this illusion of free will—as he notes in response to a questioner, there’s a circularity to the sort of obvious stories you might try to tell about the advantage of such an illusion—so it must be that it’s authentic. But of course, the optical illusions to which we’re subject aren’t like extra little Easter eggs evolution had to program in; they’re side effects of a certain kind of perceptual system that it was too costly to get rid of. And indeed, the reason you’d end up with this kind of illusion seems much more obvious than in the perceptual case. As Searle himself notes, the subjective sense of freedom is a “gappy feeling” between our premises and our decisions. But of course, if you’ve got an information-processing system that’s conscious, the part where it’s working toward the conclusion will seem “gappy” or “open” because it hasn’t gotten there yet. To feel like the conclusion or output of your mental process was compelled or inevitable, you’d have to be conscious of its result before the termination of the very process that makes you conscious of its result. So there’s just no reason to suppose that the “gappy feeling” is something extraneous evolution had to pay some extra price to inject.
So much for that. The second thought is that at the end of the Q&A, during which all of the questioners were clearly very smart people thinking carefully about what they’d heard, a woman got up and asked a question (the details aren’t important) about whether Searle’s objection to compatibilism didn’t apply just as strongly to his own idea that a macro-level brain system might inherit the indeterminacy of the micro-level quantum processes that constitute it, without inheriting their randomness. What’s interesting is that I know I instantly thought to myself: “Aha, a serious question!” And Searle instantly came back with: “That’s just a question a professional philosopher would ask!” (Perhaps because one just had.) At which point the questioner, somewhat sheepishly, confessed to having studied philosophy.
The thing that interests me is that without even having thought about the question in enough detail to have a sense of whether it was really any good—before, in other words, I’d had time to recall precisely what he’d said about compatibilism and how it related to his argument about possible differences in micro- and macro-properties—there was something about the shape of the question that hinted at formal training, apparently to both of us. It wasn’t, I think, that she used any jargon or was adapting some hoary argument that would be familiar to a philosophy student. It wasn’t the presence of anything you really learn explicitly at all; it was just a very philosopher-y strategy of probing the view for flaws.
The thing is, I don’t know that it was really a better question substantively than some of the others that were posed. There was just something about the thought pattern it betrayed that said “member of the club.” As a sort of heuristic, that’s probably not a bad predictor of whether the question will turn out to be a smart one, but it’s clearly not itself a feature that makes the question smart. (That depends on how easy the reply turns out to be in the instance: A “dumb question” is one you really should’ve been able to answer for yourself with a few moments’ thought. An interesting second-order signifier of intelligence: the ability to quickly search the space of potential questions for the ones that require significant processing. Presumably that means discarding questions whose answers aren’t immediately obvious, but for which a set of answer-generating strategies is.) It makes me wonder how often we collapse these levels—that is, confuse “That was a smart question” with “That question was produced by a reliable strategy for generating smart questions.” The best questions, it seems likely, won’t actually be so immediately recognizable in this way, because they’re the tough ones that can’t easily be reached by familiar question-generation procedures.