
Political Metastasis

March 30th, 2012 · 35 Comments

Browsing a conservative news site the other day, I was struck by the sheer oddness of that familiar genre of political commentary that treats liberals and conservatives, not just as groups of people with systematic disagreements on policy questions, but as something like distinct subspecies of humanity. The piece that triggered this was something along the lines of “Five Reasons Liberals Are Awful People,” and it had almost nothing to do with any concrete policy question, or ultimately even the broad-brush contours of liberal political thought: It was a string of assertions about broad types of character flaws purportedly shared by liberals, of which their policy views were only a symptom. The same day, I chanced across a piece by Chris Mooney—based on his new book The Republican Brain—making a similar sort of argument from the other side by drawing on recent social science. Then just yesterday, my friend Conor Friedersdorf tweeted a request for good summaries of the liberal view of the right to privacy, and I was again struck by how odd it sounded: Scholars have advanced a whole array of views on the question, and while liberals and conservatives would certainly tend to find different ones more congenial, it seemed like an unhelpful way to map the terrain or illuminate the key points on which various thinkers diverge.

Without denying that political and policy differences are likely to track deeper differences in temperament—differences that shape our preferences and behavior across many domains—it’s worth recalling that the binary nature of our political discourse, featuring two main parties with corresponding ideologies, is a highly contingent feature of our electoral rules. As libertarians never tire of pointing out, there is no particularly compelling philosophical reason that one’s views on abortion, foreign military intervention, environmental regulation, tax policy, and criminal justice should cluster in the particular pattern we find among Republican and Democratic partisans. So we ought to be awfully skeptical about the (growing?) tendency to treat this binary divide as reflecting some essential fact about human nature, or as providing a frame within which to understand all intellectual or cultural life.

Cracking open Will Kymlicka’s excellent Contemporary Political Philosophy: An Introduction, I find he actually makes this point right at the outset: “Our traditional picture of the political landscape views political principles as falling somewhere on a single line, stretching from left to right… [and] it is often thought that the best way to understand or describe someone’s political principles is to try to locate them somewhere on that line.” But of course, as anyone who has taken a course in political philosophy can tell you, that’s not what the main divisions look like at all: The syllabus will not contain a section on “liberal political philosophy” or “conservative political philosophy.” More likely, you’ll see a section on the various flavors of utilitarianism (act vs. rule, aggregate vs. average), maybe Kantian and Lockean rights theories and their progeny, communitarianism, contractualism of at least the Rawlsian variety—with Gauthier and Buchanan thrown in if the professor is feeling ecumenical. Again, you may be slightly more likely to find conservatives or liberals gravitating to one view or another, but thinkers with very different practical political commitments may be quite close at the theoretical level, and vice versa. Friedrich Hayek famously declared himself to be in almost complete agreement with the egalitarian John Rawls when it came to the normative fundamentals.

In legal theory, interpretive schools of thought fit somewhat better into “conservative” and “liberal” compartments, but there are plenty of exceptions: Yale’s Jack Balkin, for instance, is a vocal proponent of progressive originalism. More importantly, while people undoubtedly do sometimes choose an interpretive theory by working backwards from the policy preferences they’d like to justify, this categorization tends to obscure the underlying arguments for each approach, and is in any event highly contingent on the controversies that happen to be politically salient at any given time.

It starts to seem, as Albert Camus once put it, that we’ve made the mind into an armed camp—in which not only politicians and legislative proposals, but moral philosophies, artworks, even scientific theories, have to wear the insignia of one or the other army. This obviously oversimplifies—a taxonomy with two categories is not particularly rich—but also obscures the internal fault lines within each domain in a way that’s guaranteed to undermine our understanding. We’re at the point where people are morally certain about the empirical facts of what happened between Trayvon Martin and George Zimmerman on the basis of their general political worldviews. This isn’t exactly surprising—we are tribal creatures who like master narratives—but it feels as though it’s gotten more pronounced recently, and it’s almost certainly making us all stupider.

Addendum: On a related note, Kevin Drum notes an obvious problem for Chris Mooney’s thesis: Basic temperaments are supposed to be universal, but many of the political phenomena Mooney identifies as functions of those temperaments are pretty unique to American conservatives. Their European counterparts, for instance, don’t tend to exhibit the same hostility to the results of mainstream climate research or evolutionary biology. Even if people with different personality types tend to gravitate toward one local tribe or another, there’s obviously an enormous amount of contextual variation in what that will actually amount to.

Aren’t There Photos of George Zimmerman’s Supposed Injuries?

March 29th, 2012 · 31 Comments

The latest development in the Trayvon Martin case is the leak of police surveillance footage showing a not-conspicuously-injured George Zimmerman being ushered into the Sanford police station on the night of the shooting, calling into question the account that puts Zimmerman on the receiving end of a brutal pummeling that made him fear for his life. Now we’ve got people squabbling over fuzzy tape trying to determine whether some blob on the back of his head is a wound or a shadow, how much he might have been cleaned up by medics at the scene, and on, and on…

This all seems unnecessary. When I was jumped about a year ago, the police who came to the scene took close-up photos of every visible injury—all, mercifully, quite minor—presumably so they could prove battery if they ended up catching the kids. This seems to be pretty standard procedure, and it’s unfathomable that they wouldn’t do the same in a case where those injuries are the main physical evidence backing a claim of self defense in the shooting of an unarmed teenager. I am not intimately familiar with Florida’s records laws, but it would also be pretty standard to have privacy exemptions barring the release of potentially sensitive photographs, such as those showing bodily injuries of identifiable crime victims. But in this case it would seem to be in Zimmerman’s interest to waive that protection if the photographs actually show serious injury.

At the very least, it seems as though someone should ask the obvious question: Did police take close-up photos of whatever injuries Zimmerman sustained on the night of the shooting? If they did not, it would be incredibly suspicious. Assuming there are photos, even if they can’t be released to the public, has a state medical examiner or forensic scientist at least independently reviewed them to see whether they suggest a beating of such severity that a reasonable person would think lethal force was a necessary response? If not, that sounds like an obvious first step that might go a ways toward clarifying what really happened.

Trayvon Martin and the Moral Clarity Hypothesis

March 27th, 2012 · 16 Comments

Sanford police are pushing back in the face of public criticism, saying that witnesses have corroborated George Zimmerman’s account of his fatal encounter with Trayvon Martin. Given how many salient facts about the case seem to have been missed in the initial investigation—Zimmerman’s history of arrests for violence, the failure to test the admitted shooter for drugs or alcohol at the scene, the account given by Martin’s girlfriend of their cell conversation during the minutes leading up to the confrontation—that’s no reason to back off calls for a more thorough independent investigation. But it does reinforce my worry that when facts are incomplete, we tend to gravitate toward (and even insist upon) the least ambiguous narrative template available, ideally featuring one completely reprehensible villain and one completely innocent victim.

The most obviously repellent version of this has come in the form of attempts to shoehorn Martin—you know, the unarmed dead teenager—into the crude stereotype of a young thug. Any admission that maybe young black men are routinely subject to unfair “profiling,” that maybe racism exists as something other than a charge to unfairly hurl at white conservatives, would be a victory for “the left”—so it cannot be allowed! Hence we’re treated to reports that Martin used a lot of vulgarity on Twitter or maybe smoked pot, as though these were capital crimes, or even slightly unusual activities for a 17-year-old.

Much more understandably, considering who was shot dead in this encounter, there seems to be a tacit supposition that if Zimmerman was a racial-profiling jerk who mistook himself for Batman, and if he approached Martin (against the advice of a 911 dispatcher) to question him about his “suspicious” presence in the neighborhood for no better reason than his age and race, then it must also be Zimmerman who turned the confrontation violent. And maybe it was. But it also seems entirely possible that Martin—whether from fear of assault or anger and frustration at being treated like a criminal just for Walking While Black—really did strike first, and a panicked Zimmerman (head injury perhaps clouding his already poor judgment) genuinely thought he was defending himself when he fired.

That wouldn’t in any way mean that Martin “deserved” to get shot, or that Zimmerman wasn’t also seriously in the wrong, or that the use of lethal force was a justifiable form of self-defense against an unarmed assailant under the circumstances. It would just mean the situation was complicated, and that it’s possible for both parties to have been partly in the wrong even if one person was more culpable—or even a generally worse human being. But a whole lot of people seem impossibly confident that things must have played out one way or the other, as though it were a matter of moral principle rather than (possibly unknowable) fact.

A commenter on an earlier post suggested that one factor in people’s varied reactions here may be what psychologists call the Just World Hypothesis. Briefly: People want to believe the world is basically fair, and that good people don’t suffer for no reason, so they strive mightily to rationalize that suffering as somehow earned or justified. That (along with a generous helping of plain old racism) may work for the folks who are at such pains to lay all the blame on the slain Martin. But a slightly different story might account for the general impulse to insist that, wherever the blame falls, it falls wholly—all or nothing.

Probably there’s already some other name for the phenomenon I’m about to describe, but I’ll call it the Moral Clarity Hypothesis. This allows that the world is not always perfectly just, but still maintains a strict moral order by insisting on perfect injustice as the sole alternative. Life may not be fair, but at least it isn’t arbitrary. The moral ledger is always balanced: For every good person that suffers, some bad person is culpable in direct proportion to that suffering; for every unjustified harm, there is a corresponding wrong. The result is a kind of compensatory absolution—a way of sparing the unfairly injured the added insult of ascribing responsibility, however small a share. You see something like this, I think, across a number of domains: Either the poor deserve their lot because they’re feckless and lazy or it’s an injustice inflicted by malign plutocrats.

As a corollary, every story has exactly one moral: If the most important lesson to learn from the killing of Trayvon Martin is that racial stereotyping—by citizens and perhaps also police—remains a pervasive problem with catastrophic consequences for young black men, then anything that complicates that picture is a rationalization, a distraction, an attempt to make excuses and blame the victim. Or: If the picture actually is more complicated in any way, we can breathe easy knowing that racism was abolished in 1964, there’s no need to question a criminal justice system that incarcerates black men at rates that would make Stalin blush, and any indications to the contrary can be put down to a leftist plot to score political points.

Maybe it turns out that this case really is that simple: That Zimmerman launched an unprovoked assault on Martin, and bears complete, unqualified responsibility for all that ensued. Even at this late date, I hope a more thorough and independent investigation can provide everyone concerned with more certainty about what actually happened. But the facts of this particular case aren’t stakes in some allegorical battle. They won’t confirm or refute any overarching point about Race in America—only the details of one horrifying night in February in one Florida town. Which means we don’t have to insist that reality become a cartoon to validate our moral commitments.

And May the Demographic Odds Be Ever in Your Favor. Or Not.

March 26th, 2012 · 17 Comments

Over the weekend, a depressing number of supposed Hunger Games fans expressed attitudes ranging from surprise to undisguised racist hostility at the discovery that black actors had been cast to play the characters Rue and Thresh in the movie. As more attentive fans were quick to point out, these reactions were not only ugly but obtuse: The characters are pretty clearly described in the book as having dark brown skin, and it’s strongly intimated that the agrarian District 11 from which they hail overlaps with the contemporary American South. (True, the author doesn’t describe them as “African American” because… what’s an American? We’re in Panem, remember?)

The book doesn’t dwell on this, though, and a reader skimming along at a fast clip could be forgiven for missing the two quick references. The deeper stupidity here is the assumption that the default race of any character is Caucasian when it’s not stated explicitly, and that casting a person of color in this case would represent some kind of deviation from the book’s implicit characterization. This would be wrongheaded for an adaptation of a book set in the present, but at least quasi-understandable: The social realities of people of color in contemporary America are different in a variety of ways, enough so that we do generally expect authors to make at least passing reference to a major character’s minority status.

It makes no sense at all, however, in a dystopian sci-fi novel (implicitly) set two or three centuries in the future. First, we have no real idea what the racial dynamics of Panem are like, so there’s no particular reason to think Suzanne Collins would need to make note of it if Katniss were of (say) Korean or Chicana descent. Second, and maybe more to the point, non-Hispanic whites are already projected to constitute less than half of the U.S. population in 2050, long before the earliest possible date for the events of the book. (Incidentally, reactions to the unfolding Trayvon Martin story reveal a surprising number of Americans struggling with the notion that the adjectives “white” and “Hispanic” might apply to the very same person. Kindly refer to the photo atop this blog if you’re among them.) Unless the war—and possibly other apocalyptic events—that precede the events of the book had some kind of wildly skewed demographic effects, you’d think our default expectation would be that a randomly chosen character of unspecified race won’t match the basic Anglo phenotype. If anything, then, the filmmakers probably should have gone a good deal heavier on the non-white actors—though I shudder to think how vile the Twitter reaction would have been if they had.

Undercover Atheists?

March 26th, 2012 · 18 Comments

Writing at The American Prospect a few weeks back, Patrick Caldwell expressed puzzlement at the view, seemingly widespread on the right, that the hegemonic forces of secularism are somehow forcing believers out of the public square:

When I first read Santorum’s comments though, I was mostly struck by how off base his statement is from the actual reality of our political class. People who lack a specific faith are the ones typically closed out from government service. Out of 538 members of Congress, California Rep. Pete Stark is the only self-avowed atheist. For as much as Republicans opine about the secularist goals of Obama’s presidency, he has stocked his cabinet with Catholics and other gentiles. The highest court of the land has six Catholics and three Jews.

A Gallup poll last December had 15 percent of Americans list their religious preference as none, atheist, or agnostic, though another Gallup poll from earlier in the year found that 7 percent claim to have no belief in God. By either measure, Americans lacking allegiance to an organized religion are vastly underrepresented among public officials.

It’s worth saying, first, that Santorum’s complaint is not so much about religious people being somehow hounded from public office as about the secularism of mainstream political discourse. Which is just to say that we generally expect political actors in a pluralistic country to offer justifications for their preferred policies that do not hinge on one’s sharing a particular interpretation of a particular sacred text. Santorum thinks it should be perfectly sufficient to say: “It should be illegal because the Bible condemns it,” and he’s irritated that even believers mostly feel obligated to focus on religiously neutral “public reasons” that could be accepted by people who don’t acknowledge the authority of (that reading of) the Christian Bible. He’s not empirically wrong about this (and a good thing!); he just has a repugnant, medieval vision of how things ought to be.

That aside, though, I suspect “self-avowed” is a key qualifier in the passage quoted above. Whatever they check off on their census forms, the political class in D.C. has always struck me as pretty secular. Maybe they’re just quiet about their faith—praying in private, regularly attending worship services on the weekend without making much fuss about it. And I certainly wouldn’t claim that people I happen to know socially are anything like a representative sample of “the D.C. political class.” Still, if you asked me to guess what percentage of the under-40 political professionals in this town—Hill staffers, pundits, journalists, wonks, and activists—are agnostic or atheist in their private beliefs, I’d hazard a number much higher than 15 percent. If you expand that definition to encompass what I’d call “operational atheists”—people who might tell a pollster they’re whatever faith they grew up in, and might “believe” in some vague abstract sense, but whose nominal religion plays no discernible role in their thinking or everyday life—you’re probably well over 50 percent.

Of course, there are obvious reasons for Congress to be unrepresentative. Given the widespread popular prejudice against atheists, nonbelievers are probably disproportionately likely to self-select into think tanks and magazines and various other supporting roles. And I wouldn’t be surprised if some smart, ambitious young people with political aspirations either consciously or subconsciously made a pragmatic decision, maybe at some point in college, that there was no real benefit in subjecting this particular corner of their belief systems to special scrutiny. Most of us, after all, hold thousands of beliefs with little better warrant than “I’m pretty sure I read that somewhere”—so it would be easy enough for a would-be politico to conclude there’s no sense rocking this particular epistemic boat.

But it’s still very, very hard for me to believe that there’s really only one atheist in the United States Congress. Not everyone who concludes, in an hour of quiet reflection, that religious doctrines are probably false feels compelled to shout it from the rooftops as loudly as Christopher Hitchens or Richard Dawkins. Lots of them are even perfectly happy to go through the motions at appropriate occasions, for the sake of family (presumably not everyone who converts at marriage has a genuine theological epiphany), or because they enjoy the sense of community, or even just because the ceremonial trappings have grown familiar and comfortable. People fake it—so routinely that a Google search for “coming out atheist” brings up an incredible deluge of stories and discussions about people making the decision to leave the closet after years of going along to get along… or not. YouTube is packed with similar testimonials. Historically, even intellectuals felt obliged to play along: David Hume (to pick one famous example from a rich pool) halfheartedly professed to be persuaded by the “Argument from Design”—then gave all the most devastating arguments in his “Dialogues Concerning Natural Religion” to the skeptic who demolishes that argument. It strains credulity to think there aren’t at least a few—and maybe more than a few—comparable closet cases in a profession where success depends on convincing this cycle’s electorate that you’re deeply committed to whatever it is they believe… even if it’s the opposite of what the last electorate believed.

It’s something of a cliché at this point to talk about the “paranoid style” of conservative politics—and the seeming migration of that paranoia from the fringe to the mainstream. But maybe in part it has roots in a perfectly common real-life experience that must, to believers, seem a bit like something out of Invasion of the Body Snatchers: The bright young child everyone was so proud to ship off to a prestigious university comes back over break subtly different somehow… dutifully says grace at supper, but seems (for reasons you can’t quite nail down—maybe just that hint of a smirk?) to be humoring the ritual. For Americans who (mistakenly) take faith to be a sort of minimum prerequisite for moral conduct, this has to seem like the ultimate form of deception: Lying about even the general possibility of being honest. What had been understood as a kind of polite dissimulation—yes, of course your newborn is the most beautiful baby in the history of babies—starts to look downright insidious.

Previously, faith could more or less be taken for granted—maybe the candidate makes a passing reference to the church they regularly attend—and that’s all there is to it, really, because of course everyone’s a believer of one stripe or another. Increasingly, it’s clear that isn’t so—that there are actually quite a lot of unbelievers, many of them effectively operating in stealth mode. This was probably always the case, but outside the academy and a few urban enclaves, nobody was terribly vocal about it—you certainly didn’t have anything like a visible public “movement.” Suddenly, if you’re someone who thinks of faith as a minimal prerequisite for decency, what was previously tacitly understood has to be signaled with extra vigor.

A comparison with gay rights may be apt here: Go back a few decades and the idea is so marginal that nobody really thinks of it as a political issue. (Note that in some of the most virulently homophobic societies, you also see a lot more physical affection between men than would be normal in the U.S., possibly because homosexuality is so beyond the pale that nobody worries about sending the wrong signals.) Roll forward another decade or two and it’ll be so normalized that nobody can quite understand why there was ever a fuss about it: Every city has plenty of nice gay families, and everyone can see they’re not fundamentally different from the nice straight family next door. You get “culture wars” in the middle: When a phenomenon is prevalent enough to seem threatening, but not yet (visibly) prevalent enough that it becomes obvious it’s not actually a threat at all.

I’ve always found the more aggressive, proselytizing sort of atheism a bit distasteful: Do we really need a People-who-don’t-play-chess club or a non-basketball-team? As a writer or pundit or whatever I am, it’s no surprise that I’ll occasionally bring up this aspect of my worldview, but since most of us don’t think our fellow citizens have souls that need saving, shouldn’t the modal atheist just go on quietly not-believing, and hope polite circumspection on these issues catches on? Maybe, though, there’s a case for being a little more vocal—for coming out secular—at this particular historical moment, in the interest of hastening the journey across the valley between invisibility and normalcy.

Mouse Psychodrama

March 26th, 2012 · 6 Comments

For a while, I’d just hoped that keeping the kitchen clean and food tightly sealed away would encourage the mouse to move on, but this stratagem backfired: It grew increasingly bold, even occasionally venturing tentatively toward the living room. Finally we broke down and got a trap and… the mouse vanished. Trap unsprung, bait untaken, but suddenly no sign of our furry little friend. So now I have the following scene playing out in my head.

APT. KITCHEN, AFTERNOON: The mouse skitters out into the center of the kitchen floor, props up on hind paws, sniffing the air and looking about for signs of humans.

MOUSE: Hey, you guys? You guys, I just got the “Game of Thrones” Season 1 box set, I was thinking maybe we could order a four-cheese pizza, the tall hairy one could go out for a six of Magic Hat…

Something catches the corner of his eye; the mouse looks over his shoulder and spots the trap for the first time.

MOUSE: Oh. Oh wow. You guys?

Mouse pads over to the trap, sniffs contemptuously at the smoked gouda, and shakes his head slightly, more hurt and disappointed—and, at a professional level, a little insulted by the crude latch mechanism—than truly angry.

MOUSE: It’s like that, huh? Geez, I know I’ve been… but I really thought we had a kind of… [here the mouse see-saws his forepaws back and forth in the universal gesture for we’re-on-the-same-page-here-right?] Yeah, I guess not. Ok, well. Whatever, you guys.

With an over-shoulder glance at the gouda bait—which he totally could extract safely, but doesn’t even want to now—the mouse slowly pads back toward the kitchen cabinets, produces a tiny bindle (tied around a splinter), and squeezes through a gap in the woodwork out into the backyard, accompanied by a canned “AWWW.” The CLOSING THEME FROM “THE INCREDIBLE HULK” TV SERIES plays as we FADE TO BLACK.

On the upside… no more mouse.

Tragic Scenarios

March 22nd, 2012 · 98 Comments

I think it’s fairly clear, at this point, that the initial police investigation into the killing of Trayvon Martin was shamefully slipshod, and that George Zimmerman’s shaky story needs to be heard and evaluated by a jury, not accepted on faith by sympathetic law enforcement. But I’ve also been mulling the facts that have been made public and trying to figure out whether there’s any plausible scenario that makes it genuinely “tragic”—not in the modern sense of “really sad” but in line with the classic Greek formula, where the protagonists come to grief not as a result of outright wickedness, but of flaws in basically decent characters. Is there a way for this to be a story without villains? Having just written that Zimmerman’s actions seemed unreasonable under any believable set of circumstances, I figured I should at least try to come up with a counterexample. Here’s one possibility that seemed plausible—though with luck, an actual jury will soon get to determine what really happened:

We start with Zimmerman spotting Martin on the way back from the convenience store and concluding that he seems “suspicious” in light of recent burglaries. Maybe it’s just because he’s an unfamiliar black teenager and Zimmerman is applying a racist stereotype, or maybe there’s something else that Zimmerman misunderstands—the boy is walking slowly to prolong the trip home while he talks to his girlfriend, which, on a rainy night, Zimmerman perceives as someone “casing” houses. Failing to understand how creepy his own slow-motion monitoring of the teen from his SUV seems, Zimmerman takes it as further confirmation of his suspicions when Martin breaks into a run.

Stupidly disregarding the 911 dispatcher’s advice, Zimmerman gets out of his car to ensure he can point the boy out to police when they arrive. After all, he thinks, if this is the guy who’s been breaking into local houses, it would be crazy to let him slip away to strike again, and he only intends to get a closer look and maybe ask a few questions. This would be dumb, but not inherently criminal. He either approaches Martin, or Martin himself finally decides to confront this determined stranger to demand an explanation for why he’s being followed. Zimmerman, in turn, demands to know “what he’s doing in the neighborhood,” meaning “explain what I regard as suspicious behavior.” Martin seems visibly edgy—as you would be with a creepy stranger tailing you!—and maybe Zimmerman simultaneously exposes the holstered firearm, hoping it will deter the teen from trying anything.

But Martin isn’t connecting the question with the recent spate of break-ins, which he has no reason to know about: What he hears is a threat from an armed and menacing stranger who has been stalking him from his vehicle, and now sounds angry to see a black teenager in “his” neighborhood. And when Zimmerman exposes the gun, Martin reasonably concludes that he’s about to become the victim of a hate crime. He could run—but he won’t outrun a bullet, and risks being shot in the back. It seems like his only chance is to disable and disarm this nut before he can draw the weapon. It’s a risky gambit, but in another few seconds, Zimmerman will have time to draw the gun and fire, so Martin doesn’t see any other good options. Making a split-second decision, the football player goes for the tackle, thinking he can get the gun away and hold this creep for police.

Zimmerman, meanwhile, has no idea what Martin is thinking. All he knows is he’s on the ground taking hits from someone who now appears to be going for his gun. He shouts for help but doesn’t see anyone coming, and doesn’t realize the teen had regarded him as an imminent threat. He assumes Martin’s making a grab for the gun in order to use it against him. Panicked, flat on his back, and seeing no alternative, Zimmerman fires.

This is, obviously, complete speculation, but as far as I can tell, it’s consistent with the public facts—and with the general principle that fear and stupidity are more common than malice. If it’s accurate, both parties would have honestly believed they were acting in self defense. And, incidentally, the “Stand Your Ground” law wouldn’t appear to be relevant, because neither of them would have regarded retreat as a viable option at the time they reasonably believed themselves to be threatened. Again, I think it clearly ought to fall to a jury to figure out whether this is what happened—or is at least a believable possibility—once all the evidence is on the table. But I figured it was worth throwing out this scenario as a reminder that we should insist on justice for Trayvon Martin without insisting that we’re certain in advance what that means. We need a real investigation and a trial—not a particular verdict.

Update: Just in case it wasn’t clear enough, I’m not saying I think this is what happened, or even that it’s especially likely compared with other alternatives. I’m just wary of repeating the error of the Sanford police: Seizing on the first version of events that fits the initially available facts, and then locking yourself into that as the only possibility. It is, as a rule, a good idea to generate alternative hypotheses even when you think your first one is probably correct.

Fearing for Your Life

March 21st, 2012 · 141 Comments

Most of the commentary on the Trayvon Martin case has focused on the growing mountain of evidence suggesting that shooter George Zimmerman, far from acting in “self defense,” was the instigator of the confrontation between the two late last month. But I keep coming back to a slightly different question: Are we really supposed to believe that this armed 28-year-old man was afraid for his life in a confrontation with a boy of barely 17 whom he outweighed by 100 pounds, at 7 o’clock in the evening, with many homes full of people a few dozen yards away? Because even if we ignored all the evidence and assumed Trayvon Martin had been the aggressor, Zimmerman’s decision to shoot his supposed attacker would only be protected by Florida’s now-notorious “Stand Your Ground” law if he had a reasonable belief that it was “necessary to do so to prevent death or great bodily harm to himself … or another.”

Some personal context here: About a year ago, walking home pretty late at night, I was assaulted—an attempted mugging—by not one but three teens about Trayvon Martin’s age. I wasn’t armed, and I certainly didn’t have Zimmerman’s 100-pound advantage over any of them. I am not anyone’s idea of a badass by any stretch: An unrepentant wonk in mediocre physical shape who had never been in a real fight in his adult life. Nor am I possessed of any extraordinary physical courage—ask my girlfriend how long it took me to work up the nerve to do a cliff jump if you want a chuckle sometime. But for all that, there was no time during the whole encounter when I feared for my life. I did, to be sure, fear a somewhat worse ass-kicking than I ended up having to endure, and was lucky to escape with only a few cuts and bruises. (I had the good fortune of a neighbor who, hearing my cries, opened the door to see if I needed help, rather than making a phone call and waiting—and I’m a little surprised not to see anyone remarking that there seems to be a bit of a Kitty Genovese aspect to the Martin tragedy.) But nothing about the circumstances suggested these (seemingly unarmed) kids were prepared to kill me over a little cash and a cell phone, and while I certainly felt entitled to fight back (as I did), it would never in a million years have occurred to me that lethal force was in order.

You always want to give the benefit of the doubt to a victim, who can hardly be expected to engage in perfectly cool rational deliberation in the face of an assault. But surely there are circumstances where the use of some force in self-defense is justified, yet lethal force isn’t, at least as a first resort. Even when someone “stands their ground,” studies suggest that a gun drawn for defensive purposes almost always ends an assault after a threat or a warning shot. So supposing we actually believed Zimmerman’s unbelievable story, could it have been remotely reasonable for him to think lethal force was necessary to defend himself from imminent death or grave bodily harm? He had no hope of holding the boy off for a few minutes until someone else arrived? No “I’ve got a gun” or “I’ll shoot” against an unarmed opponent? Maybe there’s some story he could tell at trial that would at least get you to reasonable doubt, but I don’t see why a jury would be forbidden from concluding that Zimmerman’s response was so wildly disproportionate to the threat that no reasonable person could regard it as necessary, even if they believe Martin threw the first punch. Not to be flip about it, but fistfights happen all the time—and I’ve got to assume that killing the guy who started it would not be a reasonable or justifiable resolution to the large majority of them.

Vacation Response Roundup

March 19th, 2012 · 6 Comments

Just back from a much-needed vacation, I’m taking a quick pass at some stuff around the web relevant to the last few posts from before I left.

First, my post on the Cato/Koch fight stressed my substantive concerns about how it appears, on the basis of what I regard as pretty compelling circumstantial evidence, that the Kochs intend to reorient Cato as a more partisan and political institution. But Gene Healy makes an important point: Even if you ignore all that evidence and assume, implausibly, that they don’t mean to change anything about how Cato is run, the mere fact of their control would have a damaging effect on Cato’s credibility, not just with hardcore lefties who think the Kochs are in league with Satan (and already regard Cato as non-credible), but with any reasonable person.

Some folks are going to believe that Cato must just be shilling for corporate donors no matter what, and for them a direct Koch takeover probably won’t make much difference. But the reality is that Cato’s donor base—overwhelmingly individuals, not corporations—is large and broad enough that we can and do write what we think without being concerned about whether it will cause any particular entity to pull funding. I’m told this occasionally happens: A company likes a study we published one year and starts contributing, then stops a year or two later when they realize we’re publishing studies at odds with their interests. Probably this causes the occasional gray hair up in the development department, but as a rule the scholars don’t even hear about it, which is as it should be. You don’t even have to take my word on this: Cato leadership and the Kochs did rather publicly fall out back in the ’90s, the Kochs did pull funding, and life went on. I don’t know the details, but manifestly Cato’s leadership wasn’t prepared to do whatever it took to keep a big donor happy.

But there’s a world of difference between some corporation or businessperson being one funder among many and literally owning the institution. Even if there were never the least hint of pressure to bring scholarship in line with the corporate interests or partisan political activities of the owners, it would not be reasonable to expect people outside Cato to take this on faith. An ordinary person, with no particular animosity toward the Kochs, should be skeptical about whether a think tank that’s a wholly owned subsidiary of a U.S. petrochemical firm, and whose owners routinely take sides in electoral contests, can really produce scholarship that pursues truth in complete disregard of those facts. The fact of independence is important, but so is the plausibility of the public claim of independence to readers who can’t easily check behind the scenes.

Second, Corey Robin argues that a brief aside in my post unwittingly undermines the libertarian worldview by tacitly acknowledging that being unemployed is a hardship, and even more so if you’ve got children to support or mortgage payments to make—facts so obvious that I didn’t think anyone, of any political orientation, seriously doubted them. Much too quick here, though, is Robin’s leap to the inference that this makes employment relationships inherently coercive. This, of course, turns on the thorny question of what counts as “coercive” in a morally significant way, which it probably doesn’t make sense to try to explore here at the length it demands. In passing, though, I doubt it’s helpful to define “coercion” so broadly that it applies to any offer of a conditional benefit involving serious tradeoffs, particularly when the downsides to refusal depend so heavily on personal choices the employer has no real control over. Jessica Flannigan has some helpful remarks on this point, but again, it’s not really an apt topic for a quick blog post, and it’s a little odd that Robin effectively assumes his sense of “coercion” is unproblematic, and that most libertarians simply labor under the strange false belief that it’s awesome to lose your job.

Additionally, I think Robin reads me as objecting to the constriction of professional autonomy as a kind of personal imposition or obstacle to my flourishing as a human being. Certainly I enjoy the freedom to write what I think about what interests me, but I was really making an argument about the conditions of doing a particular kind of job well and credibly. I could make a parallel argument about the norms governing journalists or attorneys that would stress the importance of not always feeling free to simply express your own view. In any event, my point was not that I’m personally entitled to be paid a salary for writing what I please because it’s an important condition of my personal fulfillment, but that certain kinds of scholarly autonomy are necessary for building institutional trust and credibility.

Finally, Ross Douthat complained that “Straussian social conservatism” is a misnomer given that conservative writers make the kind of argument I describe in plain sight all the time. True enough—and I pointed to a few examples in an update to the original post. But for the most part, we’re talking about forums in which conservative intellectuals make the argument for an audience of other conservative intellectuals. I’m prepared to be proven wrong, but I’d wager you find it much less frequently, at least in explicit form, in political speeches or talk radio jeremiads, because it’s at the very least an awkward fit with the populist tone of so much modern conservatism.

Like Rain on Your Wedding Day

March 6th, 2012 · 48 Comments

I’m a strong believer in free speech: With a few narrow and well-defined exceptions, I think people have a moral and legal right to voice their opinions, however misguided their views and however offensive their mode of expression. I also think (to pick an example from the headlines) that it’s grotesque, sexist, and idiotic to exercise that right by, say, verbally abusing a college student for her views on mandatory contraception coverage. When Rush Limbaugh exercises his rights in that way, I think he deserves to be roundly condemned. Also, I sure as hell wouldn’t want to work for him. Nobody who actually understands the principle of “free speech” thinks that this is somehow “ironic” or hypocritical.

Yet when it comes to the ongoing Koch/Cato conflict, there’s a bafflingly widespread round of herp-derpery rippling through blogs on the left and the right, wherein people imagine it’s clever to point out the supposed irony of libertarian scholars failing to enthusiastically embrace a couple billionaires’ putative property rights over the institution. This is just strange.

I don’t know anything about Kansas corporate law, so I have no idea whether, in fact, the Kochs are legally entitled to exercise majority control over Cato now. I’ve heard some persuasive-sounding arguments that their legal case is flawed, but who knows. Still, purely for the sake of argument, suppose they are. If that’s the case, I’m not arguing that Congress should intervene somehow. I’m arguing that exercising those rights as they seemingly intend to is a bad idea; that their direct control would, in itself, be damaging to Cato’s credibility; and that I’m not interested in working for the Republican talking-point factory that all evidence suggests they envision. Like rain on your wedding day and other infamous Alanisisms, that’s kind of crappy, but not “ironic” in any recognizable sense. I realize progressives think libertarianism is just code for uncritical worship of rich people, but as that’s not actually the case, the only irony here is that people think they’re scoring some kind of gotcha point when they’re actually exposing the silliness of their own caricature.
