Some of the author’s suggestions seem not particularly useful. For instance:
To say that a human life is “priceless” does not necessarily mean that it is worth more than any amount of money. It may just mean that money is the wrong yardstick to use when our decisions involve the loss of life.
Sounds pretty, but the problem is that money is the yardstick used when we’re considering the burden imposed by various regulations. Keeping our air purer or a particular place safer is costly. We don’t, and shouldn’t, ignore that when deciding how much safety we want. So at the end of the day, we are asking a question about dollar values: How much are we willing to spend on that marginal life saved?
Still, it does seem like a straight cost-benefit analysis elides some important distinctions. Consider highway safety vs. air purity. Everyone has to breathe the air, whether they want to or not. People who get into their cars, by contrast, presumably and knowingly assume some level of risk. It seems appropriate to weigh that in: to be willing to bear a greater cost to spare those who have no choice about whether to subject themselves to a risk.
Some other interesting points: We have a “discount rate” on future lives, analogous to that of money. But the discounting of money is an individual cognitive bias; it’s not at all clear that it’s appropriate to discount future lives in the same way, except to the extent that consequences far in the future are always less certain and therefore a somewhat less sound basis for forming current policy. Perhaps most interesting, though, is the closing sentence:
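The money analogy can be made concrete. A minimal sketch of the standard exponential-discounting formula, with the rate and horizon as illustrative assumptions (not figures from the text), shows why the choice of rate matters so much for lives far in the future:

```python
# Exponential discounting: the present value of a benefit B realized
# t years from now, at annual discount rate r, is B / (1 + r)**t.
def present_value(benefit, rate, years):
    return benefit / (1 + rate) ** years

# Illustrative assumption: a 3% annual rate applied to 100 lives saved
# a century from now.
pv = present_value(100, 0.03, 100)
print(round(pv, 1))  # ~5.2 "present lives" -- discounting nearly erases them
```

Applied to money, this is uncontroversial; applied to lives, the same arithmetic implies that 100 deaths in 2125 count for about as much as 5 deaths today, which is exactly the move the author questions.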
Even the most ardent cost-benefit analyst would spend more money to rescue a single actual child than to save 10 “statistical lives.”
As a psychological observation, that seems plausible on its face, although I wonder: monetary values for a human life range from about $3–6 million. Do we spend $6 million to treat mortal but curable illnesses? Do countries with socialized healthcare? But let’s assume it’s true. Is it rational? Those ten “statistical lives” are usually projections, but imagine in a particular case that you’re all but certain some regulation would save about that many (maybe 8 or 9, maybe 11 or 12, but about that). Does it really make sense to say you should spend more than your upper bound for that regulation to save a single known person? Emotionally, it seems more urgent when you’ve got a specific person in front of you, but it’s not clear why that’s particularly rational or moral.

Imagine you’re choosing between two societies, each of which will tax you the same amount for public safety purposes. The first seeks to maximize statistical lives saved per dollar; the second focuses on saving particular, known victims. Is there any reason you’d rather live in the second, from the perspective of maximizing your benefit in a quasi-Veil of Ignorance sense? It’s hard to see one. Prospectively, they’re all “statistical lives”. And ex post, it’s always particular individuals who live or die.
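The arithmetic behind that comparison can be sketched directly. The $6 million figure is the upper end of the range cited above; the probability distribution over lives saved (8 to 12, centered on 10) is an illustrative assumption of mine, not the author's:

```python
# Upper end of the per-life valuation range cited in the text.
vsl = 6_000_000

# Assumed spread of lives the regulation would save (weights in percent,
# so the expected value comes out exact).
outcomes = {8: 10, 9: 20, 10: 40, 11: 20, 12: 10}
expected_lives = sum(lives * pct for lives, pct in outcomes.items()) / 100
expected_benefit = expected_lives * vsl

print(expected_lives)    # 10.0
print(expected_benefit)  # 60000000.0
```

On these numbers, spending more than $6 million on one identified child while declining the regulation at any price below $60 million values the identified life more than ten times a statistical one, which is the tension the closing sentence points at.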