Writing at The American Prospect a few weeks back, Patrick Caldwell expressed puzzlement at the view, seemingly widespread on the right, that the hegemonic forces of secularism are somehow forcing believers out of the public square:
When I first read Santorum’s comments though, I was mostly struck by how off base his statement is from the actual reality of our political class. People who lack a specific faith are the ones typically closed out from government service. Out of 535 members of Congress, California Rep. Pete Stark is the only self-avowed atheist. For as much as Republicans opine about the secularist goals of Obama’s presidency, he has stocked his cabinet with Catholics and other gentiles. The highest court of the land has six Catholics and three Jews.
A Gallup poll last December had 15 percent of Americans list their religious preference as none, atheist, or agnostic, though another Gallup poll from earlier in the year found that 7 percent claim to have no belief in God. By either measure, Americans lacking allegiance to an organized religion are vastly underrepresented among public officials.
It’s worth saying, first, that Santorum’s complaint is not so much about religious people being somehow hounded from public office, but about the secularism of mainstream political discourse. Which is just to say that we generally expect political actors in a pluralistic country to offer justifications for their preferred policies that do not hinge on one’s sharing a particular interpretation of a particular sacred text. Santorum thinks it should be perfectly sufficient to say: “It should be illegal because the Bible condemns it,” and he’s irritated that even believers mostly feel obligated to focus on religiously neutral “public reasons” that could be accepted by people who don’t acknowledge the authority of (that reading of) the Christian Bible. He’s not empirically wrong about this (and a good thing!), he just has a repugnant, medieval vision of how things ought to be.
That aside, though, I suspect “self-avowed” is a key qualifier in the passage quoted above. Whatever they check off on their census forms, the political class in D.C. have always struck me as pretty secular. Maybe they’re just quiet about their faith—praying in private, regularly attending worship services on the weekend without making much fuss about it. And I certainly wouldn’t claim that people I happen to know socially are anything like a representative sample of “the D.C. political class.” Still, if you asked me to guess what percentage of the under-40 political professionals in this town—hill staffers, pundits, journalists, wonks, and activists—are agnostic or atheist in their private beliefs, I’d hazard a number much higher than 15 percent. If you expand that definition to encompass what I’d call “operational atheists”—people who might tell a pollster they’re whatever faith they grew up in, and might “believe” in some vague abstract sense, but whose nominal religion plays no discernible role in their thinking or everyday life—you’re probably well over 50 percent.
Of course, there are obvious reasons for Congress to be unrepresentative. Given the widespread popular prejudice against atheists, they’re probably disproportionately likely to self-select into think tanks and magazines and various other supporting roles. And I wouldn’t be surprised if some smart, ambitious young people with political aspirations either consciously or subconsciously made a pragmatic decision, maybe at some point in college, that there was no real benefit in subjecting this particular corner of their belief systems to special scrutiny. Most of us, after all, hold thousands of beliefs with little better warrant than “I’m pretty sure I read that somewhere”—so it would be easy enough for a would-be politico to conclude there’s no sense rocking this particular epistemic boat.
But it’s still very, very hard for me to believe that there’s really only one atheist in the United States Congress. Not everyone who concludes, in an hour of quiet reflection, that religious doctrines are probably false feels compelled to shout it from the rooftops as loudly as Christopher Hitchens or Richard Dawkins. Lots of them are even perfectly happy to go through the motions on appropriate occasions, for the sake of family (presumably not everyone who converts at marriage has a genuine theological epiphany) or because they enjoy the sense of community, or even just because the ceremonial trappings have grown familiar and comfortable. People fake it—so routinely that a Google search for “coming out atheist” brings up an incredible deluge of stories and discussions about people making the decision to leave the closet after years of going along to get along… or not. YouTube is packed with similar testimonials. Historically even intellectuals felt obliged to play along: David Hume (to pick one famous example from a rich pool) halfheartedly professed to be persuaded by the “Argument from Design”—yet in his “Dialogues Concerning Natural Religion” he gives all the most devastating arguments to the skeptic who demolishes that argument. It strains credulity to think there aren’t at least a few—and maybe more than a few—comparable closet cases in a profession where success depends on convincing this cycle’s electorate that you’re deeply committed to whatever it is they believe… even if it’s the opposite of what the last electorate believed.
It’s something of a cliché at this point to talk about the “paranoid style” of conservative politics—and the seeming migration of that paranoia from the fringe to the mainstream. But maybe in part it has roots in a perfectly common real-life experience that must, to believers, seem a bit like something out of Invasion of the Body Snatchers: The bright young child everyone was so proud to ship off to a prestigious university comes back over break subtly different somehow… dutifully says grace at supper, but seems (for reasons you can’t quite nail down—maybe just that hint of a smirk?) to be humoring the ritual. For Americans who (mistakenly) take faith to be a sort of minimum prerequisite for moral conduct, this has to seem like the ultimate form of deception: Lying about even the general possibility of being honest. What had been understood as a kind of polite dissimulation—yes, of course your newborn is the most beautiful baby in the history of babies—starts to look downright insidious.
Previously faith could more or less be taken for granted—maybe the candidate makes a passing reference to the church they regularly attend—and that’s all there is to it, really, because of course everyone’s a believer of one stripe or another. Increasingly, that isn’t so: there are actually quite a lot of unbelievers, many of them effectively operating in stealth mode. This was probably always the case, but outside the academy and a few urban enclaves, nobody was terribly vocal about it—you certainly didn’t have anything like a visible public “movement.” Suddenly, if you’re someone who thinks of faith as a minimal prerequisite for decency, what was previously tacitly understood has to be signaled with extra vigor.
A comparison with gay rights may be apt here: Go back a few decades and the idea is so marginal that nobody really thinks of it as a political issue. (Note that in some of the most virulently homophobic societies, you also see a lot more casual physical affection between men than would be normal in the U.S., possibly because homosexuality is so beyond the pale that nobody worries about sending the wrong signals.) Roll forward another decade or two and it’ll be so normalized that nobody can quite understand why there was ever a fuss about it: Every city has plenty of nice gay families, and everyone can see they’re not fundamentally different from the nice straight family next door. You get “culture wars” in the middle: When a phenomenon is prevalent enough to seem threatening, but not yet (visibly) prevalent enough that it becomes obvious it’s not actually a threat at all.
I’ve always found the more aggressive, proselytizing sort of atheism a bit distasteful: Do we really need a People-who-don’t-play-chess club or a non-basketball-team? As a writer or pundit or whatever I am, it’s no surprise that I’ll occasionally bring up this aspect of my worldview, but since most of us don’t think our fellow citizens have souls that need saving, shouldn’t the modal atheist just go on quietly not-believing, and hope polite circumspection on these issues catches on? Maybe, though, there’s a case for being a little more vocal—for coming out secular—at this particular historical moment, in the interest of hastening the journey across the valley between invisibility and normalcy.
For a while, I’d just hoped that keeping the kitchen clean and food tightly sealed away would encourage the mouse to move on, but this stratagem backfired: It grew increasingly bold, even venturing tentatively toward the living room occasionally. Finally we broke down and got a trap and… the mouse vanished. Trap unsprung, bait untaken, but suddenly no sign of our furry little friend. So now I have the following scene playing out in my head.
APT. KITCHEN, AFTERNOON: The mouse skitters out into the center of the kitchen floor, props up on hind paws, sniffing the air and looking about for signs of humans.
MOUSE: Hey, you guys? You guys, I just got the “Game of Thrones” Season 1 box set, I was thinking maybe we could order a four-cheese pizza, the tall hairy one could go out for a six of Magic Hat…
Something catches the corner of his eye; the mouse looks over his shoulder and spots the trap for the first time.
MOUSE: Oh. Oh wow. You guys?
Mouse pads over to the trap, sniffing contemptuously at the smoked gouda, shakes its head slightly, more hurt and disappointed—and at a professional level, a little insulted by the crude latch mechanism—than truly angry.
MOUSE: It’s like that, huh? Geez, I know I’ve been… but I really thought we had a kind of… [here the mouse see-saws its forepaws back and forth in the universal gesture for we’re-on-the-same-page-here-right?] Yeah, I guess not. Ok, well. Whatever you guys.
With an over-shoulder glance at the gouda bait—which he totally could extract safely, but doesn’t even want to now—mouse slowly pads back toward the kitchen cabinets, produces a tiny bindle (tied around a splinter) and squeezes through a gap in the woodwork out into the backyard, accompanied by a canned “AWWW”. The CLOSING THEME FROM “THE INCREDIBLE HULK” TV SERIES plays as we FADE TO BLACK.
On the upside… no more mouse.
I think it’s fairly clear, at this point, that the initial police investigation into the killing of Trayvon Martin was shamefully slipshod, and that George Zimmerman’s shaky story needs to be heard and evaluated by a jury, not accepted on faith by sympathetic law enforcement. But I’ve also been mulling the facts that have been made public and trying to figure out if there’s any plausible scenario that makes it genuinely “tragic”—not in the modern sense of “really sad” but in line with the classic Greek formula, where the protagonists come to grief, not as a result of outright wickedness, but of flaws in basically decent characters. Is there a way for this to be a story without villains? Having just written that Zimmerman’s actions seemed unreasonable under any believable set of circumstances, I figured I should at least try to come up with a counterexample. Here’s one possibility that seemed plausible—though with luck, an actual jury will soon get to determine what actually happened:
We start with Zimmerman spotting Martin on the way back from the convenience store and concluding that he seems “suspicious” in light of recent burglaries. Maybe it’s just because he’s an unfamiliar black teenager and Zimmerman is applying a racist stereotype, or maybe there’s something else that Zimmerman misunderstands—the boy is walking slowly to prolong the trip home while he talks to his girlfriend, which on a rainy night, Zimmerman perceives as someone “casing” houses. Failing to understand how creepy his own slow-motion monitoring of the teen from his SUV seems, Zimmerman takes it as a further confirmation of his suspicions when Martin breaks into a run.
Stupidly disregarding the 911 dispatcher’s advice, Zimmerman gets out of his car to ensure he can point the boy out to police when they arrive. After all, he thinks, if this is the guy who’s been breaking into local houses, it would be crazy to let him slip away to strike again, and he only intends to get a closer look and maybe ask a few questions. This would be dumb, but not inherently criminal. He either approaches Martin, or Martin himself finally decides to confront this determined stranger to demand an explanation for why he’s being followed. Zimmerman, in turn, demands to know “what he’s doing in the neighborhood,” meaning “explain what I regard as suspicious behavior.” Martin seems visibly edgy—as you would be with a creepy stranger tailing you!—and maybe Zimmerman simultaneously exposes the holstered firearm, hoping it will deter the teen from trying anything.
But Martin isn’t connecting the question with the recent spate of break-ins, which he has no reason to know about: What he hears is a threat from an armed and menacing stranger who has been stalking him from his vehicle, and now sounds angry to see a black teenager in “his” neighborhood. And when Zimmerman exposes the gun, Martin reasonably concludes that he’s about to become the victim of a hate crime. He could run—but he won’t outrun a bullet, and risks being shot in the back. It seems like his only chance is to disable and disarm this nut before he can draw the weapon. It’s a risky gambit, but in another few seconds, Zimmerman will have time to draw the gun and fire, so Martin doesn’t see any other good options. Making a split-second decision, the football player goes for the tackle, thinking he can get the gun away and hold this creep for police.
Zimmerman, meanwhile, has no idea what Martin is thinking. All he knows is he’s on the ground taking hits from someone who now appears to be going for his gun. He shouts for help but doesn’t see anyone coming, and doesn’t realize the teen had regarded him as an imminent threat. He assumes Martin’s making a grab for the gun in order to use it against him. Panicked, flat on his back, and seeing no alternative, Zimmerman fires.
This is, obviously, complete speculation, but as far as I can tell, it’s consistent with the public facts—and with the general principle that fear and stupidity are more common than malice. If it’s accurate, both parties would have honestly believed they were acting in self defense. And, incidentally, the “Stand Your Ground” law wouldn’t appear to be relevant, because neither of them would have regarded retreat as a viable option at the time they reasonably believed themselves to be threatened. Again, I think it clearly ought to fall to a jury to figure out whether this is what happened—or at least a believable possibility, once all the evidence is on the table. But I figured it was worth throwing out this scenario as a reminder that we should insist on justice for Trayvon Martin without insisting that we’re certain in advance what that means. We need a real investigation and a trial—not a particular verdict.
Update: Just in case it wasn’t clear enough, I’m not saying I think this is what happened, or even especially likely compared with other alternatives. I’m just wary of repeating the error of the Sanford police: Seizing on the first version of events that fits the initially available facts, and then locking yourself into that as the only possibility. It is, as a rule, a good idea to generate alternative hypotheses even when you think your first one is probably correct.
Most of the commentary on the Trayvon Martin case has focused on the growing mountain of evidence suggesting that shooter George Zimmerman, far from acting in “self defense,” was the instigator of the confrontation between the two late last month. But I keep coming back to a slightly different question: Are we really supposed to believe that this armed 28-year-old man was afraid for his life in a confrontation with a barely-17 boy who he outweighed by 100 pounds, at 7 o’clock in the evening, with many homes full of people a few dozen yards away? Because even if we ignored all the evidence and assumed Trayvon Martin had been the aggressor, Zimmerman’s decision to shoot his supposed attacker would only be protected by Florida’s now-notorious “Stand Your Ground” law if he had a reasonable belief that it was “necessary to do so to prevent death or great bodily harm to himself … or another.”
Some personal context here: About a year ago, walking home pretty late at night, I was assaulted—an attempted mugging—by not one but three teens about Trayvon Martin’s age. I wasn’t armed, and I certainly didn’t have Zimmerman’s 100-pound advantage over any of them. I am not anyone’s idea of a badass by any stretch: An unrepentant wonk in mediocre physical shape who had never been in a real fight in his adult life. Nor am I possessed of any extraordinary physical courage—ask my girlfriend how long it took me to work up the nerve to do a cliff jump if you want a chuckle sometime. But for all that, there was no time during the whole encounter when I feared for my life. I did, to be sure, fear a somewhat worse ass-kicking than I ended up having to endure, and was lucky to end up with only a few cuts and bruises. (I had the good fortune of a neighbor who, hearing a cry for help, opened the door to intervene, rather than making a phone call and waiting—and I’m a little surprised not to see anyone remarking that there seems to be a bit of a Kitty Genovese aspect to the Martin tragedy.) But nothing about the circumstances suggested these (seemingly unarmed) kids were prepared to kill me over a little cash and a cell phone, and while I certainly felt entitled to fight back (as I did), it would never in a million years have occurred to me that lethal force was in order.
You always want to give the benefit of the doubt to a victim, who you can hardly expect to engage in perfectly cool rational deliberation in the face of an assault, but surely there are circumstances where the use of some force in self-defense is justified, but lethal force isn’t, at least as a first alternative. Even when someone “stands their ground,” studies suggest that when a gun is drawn for defensive purposes, an attacker will almost always break off an assault after a threat or a warning shot. So supposing we actually believed Zimmerman’s unbelievable story, could it have been remotely reasonable for him to think lethal force was necessary to defend himself from imminent death or grave bodily harm? He had no hope of holding the boy off for a few minutes until someone else arrived? No “I’ve got a gun” or “I’ll shoot” against an unarmed opponent? Maybe there’s some story he could tell at trial that would at least get you to reasonable doubt, but I don’t see why a jury would be forbidden from concluding that Zimmerman’s response was so wildly disproportionate to the threat that no reasonable person could regard it as necessary, even if they believe Martin threw the first punch. Not to be flip about it, but fistfights happen all the time—and I’ve got to assume that killing the guy who started it would not be a reasonable or justifiable resolution to the large majority of them.
Just back from a much-needed vacation, a quick pass at some stuff around the web relevant to the last few posts from before I left.
First, my post on the Cato/Koch fight stressed my substantive concerns about how it appears, on the basis of what I regard as pretty compelling circumstantial evidence, the Kochs intend to reorient Cato as a more partisan and political institution. But Gene Healy makes an important point: Even if you ignore all that evidence and assume, implausibly, that they don’t mean to change anything about how Cato is run, the mere fact of their control would have a damaging effect on Cato’s credibility, not just with hardcore lefties who think the Kochs are in league with Satan (and already regard Cato as non-credible), but any reasonable person.
Some folks are going to believe that Cato must just be shilling for corporate donors no matter what, and for them a direct Koch takeover probably won’t make much difference. But the reality is that Cato’s donor base—overwhelmingly individuals, not corporations—is large and broad enough that we can and do write what we think without being concerned about whether it will cause any particular entity to pull funding. I’m told this occasionally happens: A company likes the study we published one year and starts contributing, then stops a year or two later when they realize we’re publishing studies at odds with their interests. Probably this causes the occasional gray hair up in the development department, but as a rule the scholars don’t even hear about it, which is as it should be. You don’t even have to take my word on this: Cato leadership and the Kochs rather publicly did fall out back in the 90s, the Kochs did pull funding, and life went on. I don’t know the details, but manifestly they weren’t prepared to do whatever it took to keep a big donor happy.
But there’s a world of difference between some corporation or businessperson being one funder among many and literally owning the institution. Even if there were never the least hint of pressure to bring scholarship in line with the corporate interests or partisan political activities of the owners, it would not be reasonable to expect people outside Cato to take this on faith. An ordinary person, with no particular animosity toward the Kochs, should be skeptical about whether a think tank that’s a wholly owned subsidiary of a U.S. petrochemical firm, and whose owners routinely take sides in electoral contests, can really produce scholarship that pursues truth in complete disregard of those facts. The fact of independence is important, but so is the plausibility of the public claim of independence to readers who can’t easily check behind the scenes.
Second, Corey Robin argues that a brief aside in my post unwittingly undermines the libertarian worldview by tacitly acknowledging that being unemployed is a hardship, and even more so if you’ve got children to support or mortgage payments to make—facts so obvious that I didn’t think anyone, of any political orientation, seriously doubted them. Much too quick here, though, is Robin’s leap to the inference that this makes employment relationships inherently coercive. This, of course, turns on the thorny question of what counts as “coercive” in a morally significant way, which it probably doesn’t make sense to try and explore here at the length it demands. In passing, though, I doubt it’s helpful to define “coercion” so broadly that it applies to any offer of a conditional benefit involving serious tradeoffs, and in particular when the downsides to refusal depend so heavily on personal choices the employer has no real control over. Jessica Flanigan has some helpful remarks on this point, but again, it’s not really an apt topic for a quick blog post, and it’s a little odd that Robin effectively assumes that his sense of “coercion” is unproblematic, and it’s just that most libertarians labor under the strange false belief that it’s awesome to lose your job.
Additionally, I think Robin reads me as objecting to the constriction of professional autonomy as a kind of personal imposition or obstacle to my flourishing as a human being. Certainly I enjoy the freedom to write what I think about what interests me, but I was really making an argument about the conditions of doing a particular kind of job well and credibly. I could make a parallel argument about the norms governing journalists or attorneys, one that would stress the importance of not feeling free to always just express your own view. In any event, my point was not that I’m personally entitled to be paid a salary for writing what I please because it’s an important condition of my personal fulfillment, but that certain kinds of scholarly autonomy are necessary for building institutional trust and credibility.
Finally, Ross Douthat complained that “Straussian social conservatism” is a misnomer given that conservative writers make the kind of argument I describe in plain sight all the time. True enough—and I pointed to a few examples in an update to the original post. But for the most part, we’re talking about forums in which conservative intellectuals make the argument for an audience of other conservative intellectuals. I’m prepared to be proven wrong, but I’d wager you find it much less frequently, at least in explicit form, in political speeches or talk radio jeremiads, because it’s at the very least an awkward fit with the populist tone of so much modern conservatism.
I’m a strong believer in free speech: With a few narrow and well-defined exceptions, I think people have a moral and legal right to voice their opinions, however misguided their views and however offensive their mode of expression. I also think (to pick an example from the headlines) that it’s grotesque, sexist, and idiotic to exercise that right by, say, verbally abusing a college student for her views on mandatory contraception coverage. When Rush Limbaugh exercises his rights in that way, I think he deserves to be roundly condemned. Also, I sure as hell wouldn’t want to work for him. Nobody who actually understands the principle of “free speech” thinks that this is somehow “ironic” or hypocritical.
Yet when it comes to the ongoing Koch/Cato conflict, there’s a bafflingly widespread round of herp-derpery rippling through blogs on the left and the right, wherein people imagine it’s clever to point out the supposed irony of libertarian scholars failing to enthusiastically embrace a couple billionaires’ putative property rights over the institution. This is just strange.
I don’t know anything about Kansas corporate law, so I have no idea whether, in fact, the Kochs are legally entitled to exercise majority control over Cato now. I’ve heard some persuasive-sounding arguments that their legal case is flawed, but who knows. Still, purely for the sake of argument, suppose they are. If that’s the case, I’m not arguing that Congress should intervene somehow. I’m arguing that exercising those rights as they seemingly intend to is a bad idea; that their direct control would, in itself, be damaging to Cato’s credibility; and that I’m not interested in working for the Republican talking-point factory that all evidence suggests they envision. Like rain on your wedding day and other infamous Alanisisms, that’s kind of crappy, but not “ironic” in any recognizable sense. I realize progressives think libertarianism is just code for uncritical worship of rich people, but as that’s not actually the case, the only irony here is that people think they’re scoring some kind of gotcha point when they’re actually exposing the silliness of their own caricature.
As you may have already heard, the Brothers Koch have mounted a campaign to take control of the Cato Institute, where I hang my hat. Now, I have a pretty fantastic day job, which basically consists of reading, writing, and speaking about whatever I find interesting in any given week. And I don’t generally subscribe to the popular caricature of the Kochs as supervillains. For a lot of progressives, the Kochs now serve the same function as the Liberal Media does for conservatives: The shadowy elite cabal whose pernicious influence explains why your own common sense views aren’t universally embraced, as they otherwise would be by all right-thinking Americans. Obviously, I don’t buy that, and in any event, of all the ways wealthy people use money to influence politics, openly sponsoring ideological advocacy seems by far the least pernicious. So if this were ultimately just about an ego contest between the pretty-rich guy (Cato President Ed Crane) and the insanely rich guy (megabillionaire Charles Koch), I’d be content to keep my head down and scribble away without too much regard for what the nameplate on the top-floor corner office reads. Nothing personal, Ed.
Unfortunately, it’s fairly clear already that rather more than that is afoot. As my colleague Jerry Taylor lays out over at Volokh Conspiracy, after years of benign neglect, the Kochs have suddenly decided to use their existing shares in the Institute to attempt to pack the board with loyalists, several of whom are straight-up GOP operatives. To give you an idea: They apparently nominated neocon blogger John “Hindrocket” Hinderaker of Power Line. There’s every indication that they (and their proxies on the board) think Cato would be more useful if it were integrated more tightly into the Koch portfolio of advocacy groups—Americans for Prosperity, etc.—for which it could serve as a source of intellectual ammunition in the ongoing struggle to defeat Barack Obama and the Democratic Party. Indeed, they’ve said as much, more or less verbatim, to the chair of Cato’s board. I don’t think it’s the end of democracy if people want to throw money at that cause, but I doubt Cato’s the right place to do it, and I know it’s not what I signed up for.
At a purely practical level, I write a lot about civil liberties issues where I’m often in agreement with Democrats and progressives. In my time here, I’ve invited Sen. Ron Wyden in to speak about government location tracking, been invited to testify on the Patriot Act by Democrats on the House Judiciary Committee, and written pieces for venues like The Nation and The American Prospect. That sort of thing gets a lot harder if we’re perceived as an overtly partisan shop.
More importantly, I can’t imagine being able to do what I do unless I’m confident my work is being judged on the quality of the arguments it makes, not its political utility—or even, ultimately, ideological purity. Obviously Cato has an institutional viewpoint, and I wouldn’t have been hired in the first place if my views on the topics I write about weren’t pretty reliably libertarian. But when it comes down to specific issues and controversies, nobody tells me what to write. If my honest appraisal of the evidence on a particular question leads me to a conclusion that’s not “helpful” in the current media cycle’s partisan squabble, or that differs from either the “official” libertarian line, or from the views of my colleagues, I can write it without worrying that I’ll be summoned to the top floor to explain why I’m “off message.” That’s the essential difference between an analyst and an activist: I can promise readers that what appears under my name—whether I get it right or wrong—represents my sincere best effort to figure out what would be good policy, not an attempt to supply a political actor with a talking point. If I couldn’t make that promise, I’d have no right to expect people to take my work seriously.
As I said, I’m in no great hurry to leave a job I enjoy a lot—so I’m glad this will probably take a while to play out either way. But since I’m relatively young, and unencumbered by responsibility for a mortgage or kids, I figure I may as well say up front that if the Kochs win this one, I will. I’m not flattering myself that they’ll especially care; I’d just be saving their appointee the trouble of canning me down the road. But I suspect I wouldn’t be the only one looking for the door under the administration they seem to be envisioning, and my hope is that saying this publicly now might encourage someone in the Koch empire to reconsider whether they can win this particular prize without damaging it.
So, first, apologies to my colleagues for springing this on them in a blog post: I wanted it to be clear that I’m not doing this on the encouragement of Cato’s current leadership, and the easiest way to ensure that was not to tell them.
Second: I hereby tender my pre-resignation from Cato, effective if and when the Kochs take command. I’ll be sad to go, if it comes to that, but sadder to see a proud institution lose its autonomy.
February 27th, 2012
Political philosopher Kevin Vallier has an interesting but, I think, ultimately confused post over at Bleeding Heart Libertarians suggesting that consequentialism and value subjectivism—two views frequently held by economists—are in tension. To summarize briefly, he argues:
- Value subjectivism posits that states of affairs have value if and only if people subjectively value those states of affairs.
- Reasons for action are provided by these subjective values.
- Therefore value subjectivism entails that reasons for action are (all) agent-relative.
- Consequentialism gives to all moral actors the same overriding aim: Maximize overall good (however defined).
- Therefore consequentialism entails at least one (overriding) agent-neutral reason for action.
- Therefore value subjectivism and consequentialism cannot both be correct.
There’s a simple mistake here, and a slightly more subtle one. The simple mistake is that economic theories of practical rationality are supposed to be descriptive and predictive: They give an account of our explanatory or motivating reasons. The form of value subjectivism involved here does not entail any deep metaphysical commitments about the nature of value: It requires only the near-tautological premise that if you want to explain how people behave, the only relevant “reasons” are the ones people take themselves to have. This is a familiar distinction that has nothing to do with a gap between “self interested” and moral reasons. If I smash my laptop in a moment of frustration, it is perfectly coherent to say, in a descriptive and explanatory mode, that my reason for acting as I did was provided by my desire to vent my annoyance, and also that I had no good normative reason to act as I did, given my considered goals and preferences. Whatever objective values or reasons for action there may be, they are only motivationally effective to the extent that the values are internalized subjectively, and the reasons for action apprehended. Again, this should not be a controversial metaphysical thesis; it’s a tautology. What you have reason to do depends (in part) on the facts; what is rational for you to do depends on your beliefs. Subjective values are the only ones with explanatory relevance; it doesn’t follow that they’re the only ones with normative relevance.
The slightly more subtle mistake is conflating a view about the structure of motivation with a substantive view about what makes states of affairs objectively valuable—and in particular, with the view that subjective mental states are uniquely capable of making states of affairs objectively good or bad. Everyone has a subjective reason to defect in a one-shot Prisoner’s Dilemma; everyone’s defecting makes the outcome worse, even though subjective levels of satisfaction are the only inputs into that outcome. Despoiling the environment for short-term benefit can make the outcome worse by creating huge amounts of subjective misery, even if no current agents care about the effects of their actions on future generations.
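The Prisoner’s Dilemma point above (each player has a subjective reason to defect, yet universal defection leaves everyone worse off) can be checked with a small sketch. The payoff numbers here are standard textbook values, not anything from the post:

```python
# Illustrative one-shot Prisoner's Dilemma with conventional payoffs.
# Each player's subjective payoff is higher for defecting no matter what
# the other player does, yet mutual defection leaves both players worse
# off than mutual cooperation would have.
PAYOFFS = {  # (my_move, their_move) -> my_payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_move):
    """Return the move maximizing my payoff, holding the other's move fixed."""
    return max("CD", key=lambda my: PAYOFFS[(my, their_move)])

# Defection dominates: it is the best response to either move...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet the dominant-strategy outcome (D, D) pays each player less
# than mutual cooperation (C, C) would: 1 < 3.
assert PAYOFFS[("D", "D")] < PAYOFFS[("C", "C")]
```

The point is structural, not numerical: any payoff matrix with the same ordering of outcomes produces the same gap between what each agent subjectively has most reason to do and what makes the aggregate outcome best.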
A sufficiently selfish population may have no motivating reason, based on their subjective pattern of concerns, to avert a future ecological catastrophe—and in this case, economic theory can predict that they will fail to do so. They will act morally badly, on one plausible view, because they fail to align their subjective preferences with the objective disvalue of avoidable suffering. That objective disvalue may be wholly dependent on subjective mental states: If our descendants turn out to prefer postapocalyptic hellholes, everything’s copacetic. You can coherently believe that there are objective (normative) reasons for action, but that only subjective mental states can be ultimate sources of such reasons.
I have some more general doubts about the usefulness of making the agent-relative/agent-neutral distinction so central—the real action, as it were, being in the specification of what counts as a reason—but that’s probably a knot to untangle some other time.
Update: Maybe the most concise way to put this is that Vallier isn’t really making an argument against consequentialism, but against externalism about reasons generally. Consequentialism is not unique in suggesting that, for instance, you should not murder an annoying rival when you are certain to get away with it, even if you have a strong present desire to do so. That it may be economically rational to act on this desire just reminds us that theories of economic rationality are not, and are not supposed to be, moral theories.
The philosopher Leo Strauss was perhaps best known for the view that great philosophical works—especially those produced in times when persecution for heretical views was commonplace—often concealed an “esoteric” message, intended only for an elite of truly “philosophical” readers, that was different from, and often quite at odds with, the surface meaning of the text. A somewhat cruder version of that view has often been associated with neoconservatives—many of whom count themselves admirers of Strauss. This was perhaps most clearly expressed by the late neocon grandee Irving Kristol in an interview on the subject of religious belief:
There are different kinds of truths for different kinds of people. There are truths appropriate for children; truths that are appropriate for students; truths that are appropriate for educated adults; and truths that are appropriate for highly educated adults, and the notion that there should be one set of truths available to everyone is a modern democratic fallacy.
Increasingly, I find myself thinking that it’s not really neoconservatives but social conservatives who are more prone to this view. Years ago, I wrote that the conservative case against gay marriage only really makes sense if seen through this lens. The reason their public arguments against marital equality so often seem incoherent is that the true rationale rests on the social desirability of a myth or “noble lie” that would cease to be effective if it were exposed by making the argument publicly.
Something similar is at work, I suspect, in some recent arguments over contraception. Many commentators have expressed understandable bafflement, for instance, at Rick Santorum’s attempt to explain what he means when he alludes to the “dangers of contraception”:
What I was talking about is we have a society — Charles Murray just wrote a book about this and it’s on the front page of “The New York Times” two days ago, which is the increasing number of children being born out of wedlock in America, teens who are sexually active.
What we’re seeing is a problem in our culture with respect to children being raised by children, children being raised out of wedlock, and the impact on society economically, the impact on society with respect to drug use and all — a host of other things when children have children.
On its face, this is nonsensical: How can contraception, of all things, be responsible for an increase in out-of-wedlock births?
The clue here is the reference to Murray’s book Coming Apart—which, if not exactly a work of high sociological rigor, is interesting and a quick enough read to be worth a look. Murray’s core topic is what he believes to be a growing cultural gap between the educated elite and the rest of America. While the elite profess the most socially liberal attitudes, Murray argues that they hew to “conservative” norms in some crucial respects—such as generally bearing children in the context of stable marriages. Yet as Phoebe Maltz observes, this is hardly a thoroughgoing conservatism: Sure, the educated elite eventually settle down to marry and have children, but that’s routinely preceded by a decade or more of extramarital sexual activity enabled by contraception.
Here is one way to make sense of Santorum’s remarks. Elite sexual norms have transformed over the past half century, as a result of a combination of factors: contraceptive technology, improved educational sorting that increases the frequency of romantic partnerships between highly educated professionals, and an economy that rewards longer periods of education and correspondingly delayed marriage and reproduction. Under the new norm, there’s no expectation of marriage or childbearing before one’s late 20s or early 30s, but also no expectation that people will abstain from sexual activity or romantic cohabitation until that point. There is no stigma against premarital sex, engaged in strictly for the enjoyment and emotional satisfaction of the participants. For the cognitive elite—who are generally reasonably good at impulse control and long-term planning, have the social and economic resources to provide a buffer against youthful misjudgments, and have powerful incentives to avoid derailing their ambitious career plans—this works out just fine.
Here is where the “esoteric” or Straussian social conservative argument comes into play: For those outside the cognitive elite (they would argue) this does not work out fine. Lacking both the incentives and the resources of the elite, the erosion of the stigma against premarital sex among the “lower classes” yields increased premarital childbearing, locking both generations into poverty and dependency. This outcome can only be avoided (the Straussian social conservative might argue) if the “lower orders” do not adopt the sexual norms that work perfectly well for the cognitive elite.
But nobody likes to be told they’re simply not capable of enjoying the same freedoms as the elite: The only publicly acceptable norm for a democratic polity is the rule that sex outside the confines of traditional marriage is “just wrong” or somehow “immoral.” This norm may not make sense, but since most people do not think deeply about the underlying ethical rationale for local norms, it will be widely accepted so long as it appears to be widely accepted—which is to say, so long as the cultural elite at least continue to give it lip service, whatever their private behavior. Contraception (in tandem with those other changes) makes it possible for the elite to visibly reject that norm, enjoying successful and happy lives in which a long sexually active young adulthood precedes eventual marriage and reproduction. The argument that non-elites are incapable of successfully living according to the new elite norms is publicly unacceptable in a democratic society, and so those norms become widely accepted, with damaging results.
I am not, I should stress, endorsing this argument. But it is, at least, an intelligible argument. It is the only remotely coherent way I can see to make sense of Santorum’s purported link between the prevalence of contraception and rising non-marital births. It’s just not an argument social conservative elites can make openly—certainly not if they wish to retain any pretensions of populism. The best justification of “traditional values” is ultimately pragmatic and utilitarian, and their acceptance as general rules depends crucially on the idea that most people are not good at making ad hoc judgments by applying a pragmatic and utilitarian standard directly. But they’re only effective if this pragmatic foundation is not laid bare: If people simply accept the traditional rules as “what everyone knows to be right.”
This is, I suspect, why so many social conservative positions seem not just misguided but downright baffling and mysterious to others. It’s not that they lack any intelligible justification, it’s that social conservative elites (believe that they) cannot openly or publicly advance that justification without undermining their own ends.
Addendum: As a commenter reminds me, this argument is occasionally stated explicitly, as it was in a famous “No Guardrails” editorial in the Wall Street Journal in the early 1990s, which Radley Balko references in this 2003 Fox News article on… Rick Santorum! It’s also the basic idea underlying Gertrude Himmelfarb’s 1995 book One Nation, Two Cultures (Himmelfarb being, as it happens, the wife of Irving Kristol). But again, it’s rare for the argument to be made quite so explicitly in these terms, because it implicitly concedes that it’s not inherently immoral for the elite to deviate from conservative values, except insofar as doing so openly sets a bad example for non-elites who can’t handle that level of personal freedom.
It occurs to me that some of the confusion I mentioned in the previous post has to do with a certain ambiguity around the terms “responsible” and “responsibility.” In addition to the simple causal sense of “responsible,” which is obviously linked to the others, we use it to mean “properly subject to moral praise and blame” but also something roughly like “prudent, exercising foresight and impulse control, mindful of long-term consequences.” The latter has no necessary connection to the former—a totally amoral robot could be programmed to behave “responsibly” in this sense—except insofar as praise and blame are popular mechanisms for inculcating habits of prudence in humans. (Their usefulness for this purpose need not have anything to do with whether people actually “deserve” the praise or blame in some morally deep sense: The repugnance of “victim blaming” is no guarantee that it doesn’t confer some social benefit.) It’s actually hard to briefly describe the relevant propensities in terms that don’t have some thick normative connotation. There’s not really a colloquial language for calling people imprudent or work-averse that doesn’t sound like moral criticism, and history suggests that any scrupulously neutral term coined for the purpose will just take on all the baggage of the old one fairly quickly. Still, if we could manage to adopt a more anthropologically detached way of talking about behavior, norms, and their social consequences, it might make conservatives and liberals less prone to talking past each other.
All this reminds me, incidentally, of a panel I had the great pleasure of sitting on with the social psychologist Philip Zimbardo, of “Stanford Prison Experiment” fame. He was, naturally, speaking about the “power of the situation”—how social context can make decent people behave appallingly. He noted that he, too, fell victim to it: Absorbed in his role as the detached experimenter, he went on enthusiastically gathering data as young men broke down under the psychological torment they were inflicting upon each other. He stopped only when his new girlfriend (and future wife) yanked him out of the situation by reacting with horror at what he was allowing to go on under his nose. Yet the content of this new situation, as I pointed out, was a bit of old-fashioned individualistic moralizing: “You are responsible for allowing this to go on, and if that is the sort of person you are, I don’t want to be romantically involved with you!” From an external or scientific perspective, we can accurately say that his behavior was determined more by social context than by any immutable character traits or any act of radical free will. But the context that generates morally desirable behavior will often be one that, from the internal perspective of practical reason, enjoins people to think of themselves as fundamentally responsible (and subject to praise and blame) for the consequences of their actions.