What Would a Determinist Choose to Do?

May 27th, 2008 · 10 Comments

In the course of making a somewhat different point, Tyler Cowen writes:

It’s funny how Bryan thinks he can cite my actions as evidence against the correct belief.  That’s absurd; for instance I also don’t act as if determinism is true, but citing that doesn’t settle the matter.

That seemed reasonable at first pass, but then I did a bit of a double take: What would it mean, exactly, to “act as if determinism is true”? What are we imagining that people would do differently if they were convinced of determinism? I’m not even sure this is the sort of belief one could coherently import into practical reasoning with any effect: “My actions are predetermined; what should I do about it?” I may regard my next five decisions as preordained, but that doesn’t spare me the trouble of having to make them, which means treating them (counterfactually) as “open,” in some sense, during my deterministic process of deliberation—it means invoking an “if”. What would the alternative even look like?

Maybe we’re imagining that I shouldn’t blame people for their bad actions or credit them for their good ones, since they were fated to act that way? (But then can I be blamed for blaming or praising them inappropriately?) I’ve never seen why this is supposed to be a real worry: If the premise is that radically free will is necessary for the assignment of moral responsibility, and it turns out we don’t have radically free will, the obvious inference to make is… to reject the premise. More generally, I can’t think of any line of reasoning that moves from the truth of determinism to some important conclusion about how we ought to act differently without either collapsing into incoherence or relying on a premise far more controversial than the conclusion it’s meant to support.

Tags: General Philosophy

10 responses so far

  • 1 Loren Michael // May 27, 2008 at 11:45 am

    I’m not sure if this is appropriate, but I’ve been having a discussion about pretty much exactly what you’re inquiring about for the past week or so at the Penny Arcade message boards.

    http://forums.penny-arcade.com/showthread.php?t=58251

    I’m wantonly plagiarizing from Robert Wright in there, and if you haven’t read The Moral Animal, I suggest you do so.

  • 2 asg // May 27, 2008 at 11:58 am

    Good lord, what a mess that thread is!

  • 3 Julian Sanchez // May 27, 2008 at 12:06 pm

    Bob is a good science popularizer, but I wouldn’t lean on him super heavily as a philosopher.

  • 4 Loren Michael // May 27, 2008 at 12:14 pm

    Of course, it’s a public forum!

    Anyways, regarding “what does a determinist do?”…

    All my concerns revolve around perceptions toward wrongdoers.

    Punishing a criminal is like punishing a robot for its own malfunctions, but punishing such a robot for its own programming can be an effective preventative measure for future malfunctions. So, it’s a tragic necessity.

    As such, not a lot changes in terms of actual practices, but I think understanding and compassion are increased as blameworthy entities become essentially nonexistent and everyone becomes a victim.

  • 5 Loren Michael // May 27, 2008 at 12:18 pm

    I’m in China, and as such my ability to conveniently reference stuff is severely hampered.

    I happened to take a few of my favorite books with me though, and Bob happened to write a lot about this particular subject. Determinism is a pretty simple concept, and he devotes a couple of chapters to it, which I would suggest is more than enough to reference for most discussions on the matter.

  • 6 asg // May 27, 2008 at 1:34 pm

    It is worth mentioning that not all naturalists reject retributivism in favor of deterrence or other views that understand punishment as purely instrumental. See, e.g., Michael S. Moore’s essay on retributivism, or his book Placing Blame.

  • 7 dan // May 27, 2008 at 6:07 pm

    I find this subject absolutely fascinating, which is why I spent a long time considering PhD programs in philosophy. But ultimately I came to the same conclusion you did, which is why I went to law school.

  • 8 Gil // May 27, 2008 at 11:50 pm

    I think that many people may assume that behaving as if determinism is true would be to behave irresponsibly, as if there were no consequences one need be concerned with, because we can’t choose to affect the consequences we’ll face.

    Of course, as you indicate, that’s wrong.

  • 9 Julian Elson // May 29, 2008 at 2:34 am

    Okay. First, there’s the question of “if not determinism, then what?”

    One commonly cited alternative might be called stochasticism: instead of a deterministic process, our minds are governed by stochastic processes. As Hume pointed out, this is, if anything, worse for moral responsibility than determinism.

    The other is what might pejoratively be called “weird spooky metaphysical free will.” This is, I suppose, what most people have in mind — something neither stochastic nor deterministic, and possibly separate from normal chains of cause and effect. It’s not clear what this view actually entails, but people seem to draw intuitions about moral responsibility from it.

    Since I don’t really know how to deal with the weird spooky metaphysical free will view, I’ll just say I think there might be differences in how I might act if I believed the human mind was stochastic vs. deterministic — but the differences are pretty subtle and abstruse.

    For instance, if I were a stock market analyst, and I were responsible for creating models used by an investment firm, I might approach models a different way if I believed that the human mind was stochastic vs. deterministic. In reality, it probably wouldn’t make a difference, because even if I believed that, at a deep-down, low-enough level, human minds were deterministic, I’d still end up using probability-based models with a sort of Bayesian view of “what will actually happen is preordained, but I don’t know what, so I’ll assign probabilities based on what I view as most likely.”

    I’m reminded of the Foundation series, which I just finished. (I started it about seven years ago or something, but for some reason couldn’t find Foundation and Earth, the last book, until recently.) I was about to write that if human behavior is stochastic, that has implications for modelling it, but then I thought of how in the Foundation series human behavior is stochastic, yet as numbers increase, collective behavior becomes extraordinarily deterministic, to the point where the future can be predicted centuries in advance, so deterministic models of history work in spite of humans who aren’t deterministic. Okay, it’s sci-fi, and psychohistory is about as realistic as the faster-than-light hyperdrives, but maybe even the view that social models can’t be too deterministic without determinism is wrong, and it actually has no implications at all. I don’t know.

  • 10 Neil the Ethical Werewolf // Jun 1, 2008 at 8:38 pm

    If the premise is that radically free will is necessary for the assignment of moral responsibility, and it turns out we don’t have radically free will, the obvious inference to make is… to reject the premise.

    Does it help if we weaken the consequences of determinism a little bit? So, rather than not being able to hold others morally responsible, there’s some narrowly defined emotion of condemnation that becomes irrational. You can still evaluate people negatively for their actions — regard them with some sort of disdain or loathing, perhaps. But there’s some particular thing you can’t do.

    Lately I’ve been thinking that getting a handle on the compatibilism / incompatibilism issue will require us to finely differentiate the various ways we hold people responsible and the reactive emotions we feel. Compatibilists will be right about most of these responses, I think, but maybe incompatibilists will be right somewhere.