With this in mind, here’s another possibility for what happens after we create fantastically advanced computing capabilities that are thoroughly merged with human consciousness: we discover, in a way that’s truly convincing, that free will doesn’t exist. And so we give up. Within a few decades, the human race chooses to put itself out of existence, because there’s really no point to its continued survival, and our biological urge toward self-preservation, honed over millennia by evolution, no longer controls our merged biological/machine selves.
Now, I think Lindsay’s right to reply that if we (well, many of us) have managed to hang on to a belief in free will in the face of what are already some fairly potent arguments against it, there’s not much reason to think we won’t simply continue doing so. In any case, Drum’s own analysis hints at a sort of evolutionary answer: if any of us decide to program our unfree selves to be compelled either to go on believing in free will or not to be bothered that we lack it, then that group will become the core of future, similarly programmed populations as the others die off.
But the more obvious response to me is: Why should we think it will really make that much of a difference? Free will, it seems, is a little like God in this respect: Believers often seem to think it’s so centrally important that without it, life would necessarily be meaningless. Yet those who don’t believe seem to get on just fine: Life is not, after all, sapped of its meaning. Most atheists manage to care about being good people without the threat of divine sanction—indeed, it may even come to seem as though trying to treat people well because we fear hell, rather than because we see the value of others’ happiness or dignity in itself, rather misses the point. And we manage to take seriously our own choices, to see them as (ideally) flowing from who we are and what we value, even if we realize that these things are not themselves “open” in some very deep metaphysical way, all the way down. (Compare Nozick’s comments on desert here.)
Now, what might well bother us is to see ourselves as what Daniel Dennett calls “sphexish”: determined in some crude or stupid way. But this is already part of the process of most people’s self-development anyway: we realize that we habitually fall into self-defeating patterns of behavior, or that we’re “fighting the last war” in our relationships, or that we’re otherwise following simplistic behavioral scripts instead of really thinking about what we want to do and be. If we think that process is usually healthy and good, then insofar as technology lays bare our own scriptedness, it seems less likely to lead us into despair than to open up new possibilities for greater complexity and autonomy.