An interesting point from Ezra, summarizing Brad Plumer:
Brad Plumer has a very nice post on why the Congressional Budget Office has, historically, predicted that pollution regulations would cost much more than they actually did. When Congress went to create a cap-and-trade plan for sulfur dioxide in the early ’90s, the CBO figured that permits would sell for $750 a ton. By 1997, they were $100. And that’s not an isolated example.
The basic story, as Plumer explains, is that “markets almost always tend to be smarter than forecasters, and adjust in ways that no one expected (and, as such, are hard to build into the models).” Technological innovation tends to emerge much more rapidly than CBO’s predictions admit. And there’s a reason for this. The CBO plays by “all-else-being-equal” rules. Asked to evaluate cap-and-trade, they essentially ask, all else being equal, what will this do to the economy?
Ezra and Brad are mostly concerned with why this makes the CBO unreliable on environmental regulations. But I think there's a larger point here that libertarians and free-market types ought to be interested in. If we think markets are generally going to be pretty good at adapting to changing circumstances in unpredictable ways, we need to ask whether they'll also be robust enough to deal reasonably well with regulation. Apropos of the example above, folks like Julian Simon long argued that dire environmentalist predictions of resource depletion were wrong because ingenious humans responding to price signals would find substitutes and more efficient uses for scarce goods. That argument has held up pretty well; but if it's right, the same should be true when oil gets more expensive because of a carbon tax, say. The flip side is that if you want to claim that some awful cataclysm—like the recent financial implosion—can be laid at the feet of government, it's worth asking whether that level of instability and sensitivity to tampering isn't itself a problem. A theoretically optimal system that blows up at the slightest perturbation is not a great real-world solution.
So when is regulation just one more changing circumstance we should expect the market to adapt to pretty well, and when should we expect it to cause serious problems? Two obvious kinds of cases where you might expect trouble come to mind. The first is when the market actually does adapt “correctly” to a regulation that's sending the wrong signal. Price controls are the clearest example here: The shortages that result from price caps aren't a market failure; they're the right response to a false measure of demand. In this case, it's the market adaptation that's the problem—just as many disease symptoms are really the result of the immune system's reaction. The other obvious sort of case is a regulation that targets the underlying adaptive mechanism itself—a rule that limits the flow of information, or that bars whole classes of transactions or contracts.
These are not, I realize, especially novel observations, but a lot of public debate seems to treat “regulation” and “deregulation” as fairly homogeneous. If the case for markets consists in their generally being smarter than we are, we should infer that, in most cases, narrow regulations will end up being less burdensome than one would expect, because the market will adapt in ways that can't be predicted in advance. By the same token, regulations that target the adaptive mechanisms will be costlier than they seem, because the loss takes the form of adjustments and innovations that never happen. If that's right, though, it might be a reason to worry that cost-benefit analysis of regulation—Cass Sunstein's new gig at OIRA—will be systematically skewed: overstating the costs of the first kind of rule while understating the costs of the second. My first instinct was “well, surely they've thought of this and try to compensate in some way,” but if the estimates are based on CBO-style “other things equal” scoring, maybe not.