An article in Scientific American introduces an interesting game theoretic puzzle, the Traveler's Dilemma, and proceeds to spin some truly bizarre political conclusions from it. First, the puzzle. Since the effect of the little accompanying story, to judge by the comments at Digg, is primarily to induce people who don't understand game theory to say lots of profoundly inane things, I'm going to skip it and just sum up the numbers.
You and another player each simultaneously pick a number between 2 and 100 inclusive. The person who picked the lower number (L) gets the payoff (L + 2). The other player gets the payoff (L – 2). If they pick the same number, that's the payoff for both. The "dilemma" is that the unique pure-strategy Nash equilibrium for the game is (2, 2), because undercutting your opponent's pick by one always pays, and iterating that reasoning turns this into a death spiral. Intuitively, it seems like there ought to be a mixed strategy equilibrium, but it's been years since I had formal game theory training, and I'm not inclined to do much math at the beach, so I'm going to assume there isn't or the SA piece would've mentioned it.
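The unraveling is easy to make concrete. Here's a minimal sketch: the payoff function just restates the rules above, and iterated best responses walk the claim all the way down from 100 to 2.

```python
def payoff(mine, theirs):
    """Traveler's Dilemma payoff for the player claiming `mine`."""
    low = min(mine, theirs)
    if mine < theirs:
        return low + 2   # lower claim earns a 2-unit bonus
    if mine > theirs:
        return low - 2   # higher claim pays a 2-unit penalty
    return low           # tie: both get the common claim

def best_response(theirs):
    """Best claim against an opponent known to claim `theirs`."""
    return max(range(2, 101), key=lambda mine: payoff(mine, theirs))

# Iterated best responses unravel from 100 all the way down to 2.
claim = 100
while best_response(claim) != claim:
    claim = best_response(claim)
print(claim)  # 2 -- the only claim that is a best response to itself
```

Against any claim above 2, undercutting by exactly one is the best reply (you collect the bonus on a barely lower number), so the loop steps 100, 99, 98, … until it hits 2, where it sticks.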
Now, as it turns out, most players—even trained game theorists—don’t follow the zipline of logic to its mutually self-defeating conclusion, and instead pick a number in the mid-90s. This just seems to suggest that there are any number of formal conceptions of “rationality” that, depending on the situation, have some claim to model “rationality” in the normative sense. This is at least one way to read Newcomb’s paradox: As highlighting a tension between a Bayesian decision process and one that proceeds by the elimination of dominated strategies. Just at a first pass, it seems like you could use standard game theory to model the choice between approaches or meta-strategies to the Traveler’s Dilemma—standard backwards induction on the one hand, a fuzzy “pick some high number” strategy on the other:
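As a rough sketch of that meta-game (stipulating, purely for illustration, that the fuzzy strategy means drawing uniformly from 95 through 100, a range of my own invention, since nothing in the article pins one down), the expected payoffs can be computed directly:

```python
from fractions import Fraction

def payoff(mine, theirs):
    """Traveler's Dilemma payoff for the player claiming `mine`."""
    low = min(mine, theirs)
    if mine == theirs:
        return low
    return low + 2 if mine < theirs else low - 2

# Meta-strategies: "spiral" = the backwards-induction claim of 2;
# "fuzzy" = draw uniformly from 95..100 (an arbitrary stipulation).
strategies = {"spiral": [2], "fuzzy": list(range(95, 101))}

def expected(mine_strategy, theirs_strategy):
    """Row player's expected payoff, uniform over each strategy's support."""
    pairs = [(a, b) for a in mine_strategy for b in theirs_strategy]
    return Fraction(sum(payoff(a, b) for a, b in pairs), len(pairs))

for row, rs in strategies.items():
    for col, cs in strategies.items():
        print(f"{row} vs {col}: {float(expected(rs, cs)):.2f}")
```

On that stipulation the 2×2 meta-game comes out as a coordination game: spiral against spiral pays 2, spiral against fuzzy pays 4, fuzzy against spiral pays 0, and fuzzy against fuzzy pays about 96.5. Both (spiral, spiral) and (fuzzy, fuzzy) are equilibria of the meta-game, with the fuzzy one paying far better, so neither meta-strategy dominates the other.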
Of course, maybe this just reproduces the original problem when it comes to specifying a range for the fuzzy strategy, but again, disinclined to do the math at the beach.
Anyway, all that’s neither here nor there. What’s really bizarre is that the author, Cornell economist Kaushik Basu, seems to take this as an explosion of the “libertarian presumption” that individuals allowed to freely and rationally pursue their interests will generate good collective outcomes. First, as Jason Kuznicki notes, there’s a hell of a lot of air between this sort of jiggered up formal example and real-world situations in which people have all sorts of ways of ensuring collaboration. The Prisoner’s Dilemma, after all, is also an example of this sort, but it’s well understood that in iterated games over time, much more common in real life, people can arrive at collaborative solutions. More to the point, what Basu finds is precisely that while formal decision theory predicts this sort of perverse outcome, actual people reason in ways that avoid that trap even in the one-shot game. Now, I’m not especially persuaded that findings from behavioral economics demonstrating systematic cognitive biases away from rational optima establish a strong case for un-libertarian interventions. I do, however, understand why one might think that constitutes a case for such intervention. But I’m damned if I can see how deviations from the predictions of formal decision theory that lead people to collectively superior outcomes are supposed to constitute such a case. If anything, this seems to dovetail with Vernon Smith’s experiments showing that actual economic actors manage to find efficient outcomes even in the absence of conditions the formal models regard as prerequisites.