Via Brad Plumer, I see Cato’s Jerry Taylor is riled at responses to an open-letter ad the Institute published, in which a group of scientists signed off on a statement questioning the strength of the case for catastrophic climate change. I’m broadly sympathetic with his irritation at the proportion of ad hominem attacks in debates like these, but I’m not sure I agree with his bottom line in context:
An argument’s merit has nothing to do with the motives of the arguer, the credentials of the arguer, or the popularity of the argument. Full stop. No exceptions.
As a matter of logic, of course, that’s true, but I’m not sure it’s to the point when the proximate cause of the tussle is a single-page ad consisting mostly of signatures followed by credentials. We’re accustomed to calling the “argument from authority” a fallacy, but in fact, that’s what the vast majority of us have to go on most of the time. Provided you ensure that the authority’s expertise actually applies to the field in question, it’s as good a strategy as any.
Obviously, when it comes to an argument between trained scientific specialists, they ought to ignore the consensus and deal directly with the argument on its merits. But most of us are not actually in any position to deal with the arguments on the merits. Which, of course, is why the ad itself makes only a gesture in the direction of an argument and then proceeds to the long list of names.
Sometimes the arguments are such that the specialists can develop and summarize them to the point that an intelligent layman can evaluate them. But often—and I feel pretty sure here—that’s just not the case. Give me a topic I know fairly intimately, and I can often make a convincing case for absolute horseshit. Convincing, at any rate, to an ordinary educated person with only passing acquaintance with the topic. A specialist would surely see through it, but in an argument between us, the lay observer wouldn’t necessarily be able to tell which of us really had the better case on the basis of the arguments alone—at least not without putting in the time to become something of a specialist himself. Actually, I have a possible advantage here as a peddler of horseshit: I need only worry about what sounds plausible. If my opponent is trying to explain what’s true, he may be constrained to introduce concepts that take a while to explain and are hard to follow, trying the patience (and perhaps wounding the ego) of the audience.
Come to think of it, there’s a certain class of rhetoric I’m going to call the “one-way hash” argument. Most modern public-key cryptosystems in wide use are based on a certain mathematical asymmetry: You can multiply a couple of large prime numbers much (much, much, much, much) more quickly than you can factor the product back into primes. A one-way hash is a kind of “fingerprint” for messages built on a similar asymmetry, though not on factoring itself: It’s really easy to run the algorithm in one direction, but prohibitively hard and time-consuming to undo. Certain bad arguments work the same way—skim online debates between biologists and earnest ID aficionados armed with talking points if you want a few examples: The talking point on one side is just complex enough that it’s both intelligible—even somewhat intuitive—to the layman and sounds as though it might qualify as some kind of insight. (If it seems too obvious, perhaps paradoxically, we’ll tend to assume everyone on the other side has already thought of it and has some good reason to reject it.) The rebuttal, by contrast, may require explaining a whole series of preliminary concepts before it’s really possible to explain why the talking point is wrong. So the setup is “snappy, intuitively appealing argument without obvious problems” vs. “rebuttal I probably don’t have time to read, let alone analyze closely.”
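For the programmers in the audience, the asymmetry the analogy rests on is easy to see directly. This is a minimal sketch with deliberately tiny primes—real cryptosystems use numbers hundreds of digits long, where the “undo” step becomes infeasible rather than merely slow:

```python
import hashlib

def multiply(p, q):
    # The "easy" direction: one multiplication.
    return p * q

def factor(n):
    # The "hard" direction: naive trial division, whose work grows
    # with the smallest prime factor of n.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

p, q = 104729, 104723          # two (small, illustrative) primes
n = multiply(p, q)             # instant
print(factor(n))               # roughly a hundred thousand divisions to undo

# A one-way hash has the same shape: computing a message's digest
# is cheap, but recovering a message from its digest is not.
digest = hashlib.sha256(b"snappy talking point").hexdigest()
print(digest[:16])
```

Scale the primes up and the gap between the two directions becomes astronomical—which is the whole point of the metaphor: cheap to assert, expensive to unwind.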
If we don’t sometimes defer to the expert consensus, we’ll systematically tend to go wrong in the face of one-way-hash arguments, at least outside our own necessarily limited domains of knowledge. Indeed, in such cases, trying to evaluate the arguments on their merits will tend to lead to an erroneous conclusion more often than simply trying to gauge the credibility of the various disputants. The problem, of course, is gauging your own competence level well enough to know when to assess arguments and when to assess arguers. Thanks to the perverse phenomenon psychologists have dubbed the Dunning-Kruger effect, those who are least competent tend to have the most wildly inflated estimates of their own knowledge and competence. They don’t know enough to know that they don’t know, as it were.
That said, a side point about this from Ryan Avent:
That is to say, confronted by a problem demanding solutions inimical to libertarian beliefs, libertarians were faced with the choice of reneging on their beliefs or turning their back on science. Tellingly, they chose the latter. One might think that’s a rather drastic decision, given the role scientific endeavors have played in delivering the material prosperity so dear to the hearts of the libertarian world, and one would be right.
At a higher level of abstraction, a purist libertarian position is arguably quite radically green. That is, assuming the harm from climate change is demonstrable, every affected property owner gets a veto over “aggression” by carbon emitters, at least on a strict Rothbardian type of view. That’s rather too strict for my taste, but I just mean to point out that the conflict with “libertarian beliefs” here isn’t really at the level of principle or theory. If climate change is actually going to be profoundly harmful, then it’s precisely the sort of problem libertarian principles say the state ought to be trying to solve.
Addendum: Perhaps I should’ve made this clearer, but I don’t mean to deny that as a sociological matter, a general aversion to regulatory solutions has left libertarians too prone to hope the problem can be wished away as long as someone out there with a Ph.D. remains unconvinced. But it also seems as though, at least in recent years, they’ve grown out of that a bit and we’re starting to see more useful debates about exactly how bad it’s going to be, exactly how much cost is worth bearing to ameliorate or delay it to such and such an extent, and so on.
Addendum II: In the comments Pithlord lives up to his moniker:
Most fallacies aren’t really fallacies when you reinterpret them as Bayesian reasons to give an idea more credence rather than iron-clad syllogisms. Without the “argument from authority” and the “ad hominem fallacy”, you would either never get lunch or you’d give all your money to Nigerian spammers.
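Pithlord’s point can be made concrete with a toy Bayesian update—all the probabilities below are made-up numbers for illustration, not estimates about any real dispute:

```python
def posterior(prior, p_endorse_if_true, p_endorse_if_false):
    """P(claim is true | an expert endorses it), via Bayes' rule."""
    joint_true = prior * p_endorse_if_true
    joint_false = (1 - prior) * p_endorse_if_false
    return joint_true / (joint_true + joint_false)

# Start agnostic (prior 0.5), and suppose experts in a field endorse
# true claims far more often than false ones. The endorsement isn't
# proof, but it's legitimate evidence that should shift credence.
p = posterior(0.5, 0.9, 0.2)
print(round(p, 3))  # well above the 0.5 prior
```

On this reading, “arguing from authority” isn’t a deductive proof that the claim is true—it’s just evidence, and often the best evidence a layman has.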