Julian Sanchez

Cato and the Kochs: A Presignation Letter

March 5th, 2012 · 102 Comments

As you may have already heard, the Brothers Koch have mounted a campaign to take control of the Cato Institute, where I hang my hat. Now, I have a pretty fantastic day job, which basically consists of reading, writing, and speaking about whatever I find interesting in any given week. And I don’t generally subscribe to the popular caricature of the Kochs as supervillains.  For a lot of progressives, the Kochs now serve the same function as the Liberal Media does for conservatives: The shadowy elite cabal whose pernicious influence explains why your own common sense views aren’t universally embraced, as they otherwise would be by all right-thinking Americans. Obviously, I don’t buy that, and in any event, of all the ways wealthy people use money to influence politics, openly sponsoring ideological advocacy seems by far the least pernicious. So if this were ultimately just about an ego contest between the pretty-rich  guy (Cato President Ed Crane) and the insanely rich guy (megabillionaire Charles Koch), I’d be content to keep my head down and scribble away without too much regard for what the nameplate on the top-floor corner office reads. Nothing personal, Ed.

Unfortunately, it’s fairly clear already that rather more than that is afoot. As my colleague Jerry Taylor lays out over at the Volokh Conspiracy, after years of benign neglect, the Kochs have suddenly decided to use their existing shares in the Institute to attempt to pack the board with loyalists, several of whom are straight-up GOP operatives. To give you an idea: They apparently nominated neocon blogger John “Hindrocket” Hinderaker of Power Line. There’s every indication that they (and their proxies on the board) think Cato would be more useful if it were integrated more tightly into the Koch portfolio of advocacy groups—Americans for Prosperity, etc.—for which it could serve as a source of intellectual ammunition in the ongoing struggle to defeat Barack Obama and the Democratic Party. Indeed, they’ve said as much, more or less verbatim, to the chair of Cato’s board. I don’t think it’s the end of democracy if people want to throw money at that cause, but I doubt Cato’s the right place to do it, and I know it’s not what I signed up for.

At a purely practical level, I write a lot about civil liberties issues where I’m often in agreement with Democrats and progressives. In my time here, I’ve invited Sen. Ron Wyden in to speak about government location tracking, been invited to testify on the Patriot Act by Democrats on the House Judiciary Committee, and written pieces for venues like The Nation and The American Prospect. That sort of thing gets a lot harder if we’re perceived as an overtly partisan shop.

More importantly, I can’t imagine being able to do what I do unless I’m confident my work is being judged on the quality of the arguments it makes, not its political utility—or even, ultimately, its ideological purity. Obviously Cato has an institutional viewpoint, and I wouldn’t have been hired in the first place if my views on the topics I write about weren’t pretty reliably libertarian. But when it comes down to specific issues and controversies, nobody tells me what to write. If my honest appraisal of the evidence on a particular question leads me to a conclusion that’s not “helpful” in the current media cycle’s partisan squabble, or that differs from either the “official” libertarian line or the views of my colleagues, I can write it without worrying that I’ll be summoned to the top floor to explain why I’m “off message.” That’s the essential difference between an analyst and an activist: I can promise readers that what appears under my name—whether I get it right or wrong—represents my sincere best effort to figure out what would be good policy, not an attempt to supply a political actor with a talking point. If I couldn’t make that promise, I’d have no right to expect people to take my work seriously.

As I said, I’m in no great hurry to leave a job I enjoy a lot—so I’m glad this will probably take a while to play out either way.  But since I’m relatively young, and unencumbered by responsibility for a mortgage or kids, I figure I may as well say up front that if the Kochs win this one, I will. I’m not flattering myself that they’ll especially care; I’d just be saving their appointee the trouble of canning me down the road. But I suspect I wouldn’t be the only one looking for the door under the administration they seem to be envisioning, and my hope is that saying this publicly now might encourage someone in the Koch empire to reconsider whether they can win this particular prize without damaging it.

So, first, apologies to my colleagues for springing this on them in a blog post: I wanted it to be clear that I’m not doing this on the encouragement of Cato’s current leadership, and the easiest way to ensure that was not to tell them.

Second: I hereby tender my pre-resignation from Cato, effective if and when the Kochs take command.  I’ll be sad to go, if it comes to that, but sadder to see a proud institution lose its autonomy.

Are Economic Value Subjectivism and Consequentialism Inconsistent?

February 27th, 2012 · 2 Comments

Political philosopher Kevin Vallier has an interesting but, I think, ultimately confused post over at Bleeding Heart Libertarians suggesting that consequentialism and value subjectivism—two views frequently held by economists—are in tension. To summarize briefly, he argues:

  1. Value subjectivism posits that states of affairs have value if and only if people subjectively value those states of affairs.
  2. Reasons for action are provided by these subjective values.
  3. Therefore value subjectivism entails that reasons for action are (all) agent-relative.
  4. Consequentialism gives to all moral actors the same overriding aim: Maximize overall good (however defined).
  5. Therefore consequentialism entails at least one (overriding) agent-neutral reason for action.
  6. Therefore value subjectivism and consequentialism cannot both be correct.

There’s a simple mistake here, and a slightly more subtle one. The simple mistake is that economic theories of practical rationality are supposed to be descriptive and predictive: They give an account of our explanatory or motivating reasons. The form of value subjectivism involved here does not entail any deep metaphysical commitments about the nature of value: It requires only the near-tautological premise that if you want to explain how people behave, the only relevant “reasons” are the ones people take themselves to have. This is a familiar distinction that has nothing to do with a gap between “self-interested” and moral reasons. If I smash my laptop in a moment of frustration, it is perfectly coherent to say, in a descriptive and explanatory mode, that my reason for acting as I did was provided by my desire to vent my annoyance, and also that I had no good normative reason to act as I did, given my considered goals and preferences. Whatever objective values or reasons for action there may be, they are only motivationally effective to the extent that the values are internalized subjectively, and the reasons for action apprehended. Again, this should not be a controversial metaphysical thesis; it’s a tautology. What you have reason to do depends (in part) on the facts; what is rational for you to do depends on your beliefs. Subjective values are the only ones with explanatory relevance; it doesn’t follow that they’re the only ones with normative relevance.

The slightly more subtle mistake is conflating a view about the structure of motivation with a substantive view about what makes states of affairs objectively valuable—and in particular, with the view that subjective mental states are uniquely capable of making states of affairs objectively good or bad. Everyone has a subjective reason to defect in a one-shot Prisoner’s Dilemma; everyone’s defecting makes the outcome worse, even though subjective levels of satisfaction are the only inputs into that outcome. Despoiling the environment for short-term benefit can make the outcome worse by creating huge amounts of subjective misery, even if no current agents care about the effects of their actions on future generations.
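To make the Prisoner’s Dilemma point concrete, here is a minimal sketch with the standard textbook payoffs (the numbers are illustrative assumptions of mine, not anything from Vallier’s post): defecting is each player’s subjectively best move whatever the other does, yet mutual defection leaves both with less of the very satisfaction the payoffs measure.

    # A minimal sketch of a standard one-shot Prisoner's Dilemma (illustrative
    # payoffs assumed by me; higher numbers = more subjective satisfaction).
    PAYOFFS = {
        # (my_move, your_move): (my_payoff, your_payoff)
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }

    def best_response(their_move):
        """The move that maximizes my own payoff, holding the other's fixed."""
        return max(["cooperate", "defect"],
                   key=lambda mine: PAYOFFS[(mine, their_move)][0])

    # Defection is each player's best response no matter what the other does...
    assert best_response("cooperate") == "defect"
    assert best_response("defect") == "defect"

    # ...yet mutual defection is worse for both than mutual cooperation.
    assert PAYOFFS[("defect", "defect")][0] < PAYOFFS[("cooperate", "cooperate")][0]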

A sufficiently selfish population may have no motivating reason, based on their subjective pattern of concerns, to avert a future ecological catastrophe—and in this case, economic theory can predict that they will fail to do so. They will act morally badly, on one plausible view, because they fail to align their subjective preferences with the objective disvalue of avoidable suffering. That objective disvalue may be wholly dependent on subjective mental states: If our descendants turn out to prefer postapocalyptic hellholes, everything’s copacetic. You can coherently believe that there are objective (normative) reasons for action, but that only subjective mental states can be ultimate sources of such reasons.

I have some more general doubts about the usefulness of making the agent-relative/agent-neutral distinction so central—the real action, as it were, being in the specification of what counts as a reason—but that’s probably a knot to untangle some other time.

Update: Maybe the most concise way to put this is that Vallier isn’t really making an argument against consequentialism, but against externalism about reasons generally. Consequentialism is not unique in suggesting that, for instance, you should not murder an annoying rival when you are certain to get away with it, even if you have a strong present desire to do so.  That it may be economically rational to act on this desire just reminds us that theories of economic rationality are not, and are not supposed to be, moral theories.

Straussian Social Conservatism and the “Dangers of Contraception”

February 24th, 2012 · 43 Comments

The philosopher Leo Strauss was perhaps best known for the view that great philosophical works—especially those produced in times when persecution for heretical views was commonplace—often concealed an “esoteric” message, intended only for an elite of truly “philosophical” readers, that was different from, and often quite at odds with, the surface meaning of the text.  A somewhat cruder version of that view has often been associated with neoconservatives—many of whom count themselves admirers of Strauss. This was perhaps most clearly expressed by the late neocon grand don Irving Kristol in an interview on the subject of religious belief:

There are different kinds of truths for different kinds of people. There are truths appropriate for children; truths that are appropriate for students; truths that are appropriate for educated adults; and truths that are appropriate for highly educated adults, and the notion that there should be one set of truths available to everyone is a modern democratic fallacy.

Increasingly, I find myself thinking that it’s not really neoconservatives but social conservatives who are more prone to this view. Years ago, I wrote that the conservative case against gay marriage only really makes sense if seen through this lens. The reason their public arguments against marital equality so often seem incoherent is that the true rationale rests on the social desirability of a myth or “noble lie” that would cease to be effective if it were exposed by making the argument publicly.

Something similar is at work, I suspect, in some recent arguments over contraception.  Many commentators have expressed understandable bafflement, for instance, at Rick Santorum’s attempt to explain what he means when he alludes to the “dangers of contraception”:

What I was talking about is we have a society — Charles Murray just wrote a book about this and it’s on the front page of “The New York Times” two days ago, which is the increasing number of children being born out of wedlock in America, teens who are sexually active.

What we’re seeing is a problem in our culture with respect to children being raised by children, children being raised out of wedlock, and the impact on society economically, the impact on society with respect to drug use and all — a host of other things when children have children.

On its face, this is nonsensical: How can contraception, of all things, be responsible for an increase in out-of-wedlock births?

The clue here is the reference to Murray’s book Coming Apart—which, if not exactly a work of high sociological rigor, is interesting and a quick enough read to be worth a look. Murray’s core topic is what he believes to be a growing cultural gap between the educated elite and the rest of America. While the elite profess the most socially liberal attitudes, Murray argues that they hew to “conservative” norms in some crucial respects—such as generally bearing children in the context of stable marriages. Yet as Phoebe Maltz observes, this is hardly a thoroughgoing conservatism: Sure, the educated elite eventually settle down to marry and have children, but that’s routinely preceded by a decade or more of extramarital sexual activity enabled by contraception.

Here is one way to make sense of Santorum’s remarks. Elite sexual norms have transformed over the past half century, as a result of a combination of factors: contraceptive technology, improved educational sorting that increases the frequency of romantic partnerships between highly educated professionals, and an economy that rewards longer periods of education and correspondingly delayed marriage and reproduction. Under the new norm, there’s no expectation of marriage or childbearing before one’s late 20s or early 30s, but also no expectation that people will abstain from sexual activity or romantic cohabitation until that point. There is no stigma against premarital sex, engaged in strictly for the enjoyment and emotional satisfaction of the participants. For the cognitive elite—who are generally reasonably good at impulse control and long-term planning, have the social and economic resources to provide a buffer against youthful misjudgments, and have powerful incentives to avoid derailing their ambitious career plans—this works out just fine.

Here is where the “esoteric” or Straussian social conservative argument comes into play: For those outside the cognitive elite (they would argue) this does not work out fine. Because the “lower classes” lack both the incentives and the resources of the elite, the erosion of the stigma against premarital sex among them yields increased premarital childbearing, locking both generations into poverty and dependency. This outcome can only be avoided (the Straussian social conservative might argue) if the “lower orders” do not adopt the sexual norms that work perfectly well for the cognitive elite.

But nobody likes to be told they’re simply not capable of enjoying the same freedoms as the elite: The only publicly acceptable norm for a democratic polity is the rule that sex outside the confines of traditional marriage is “just wrong” or somehow “immoral.” This norm may not make sense, but since most people do not think deeply about the underlying ethical rationale for local norms, it will be widely accepted so long as it appears to be widely accepted—which is to say, so long as the cultural elite at least continue to give it lip service, whatever their private behavior. Contraception (in tandem with those other changes) makes it possible for the elite to visibly reject that norm, enjoying successful and happy lives in which a long sexually active young-adulthood precedes eventual marriage and reproduction. The argument that non-elites are incapable of successfully living according to the new elite norms is publicly unacceptable in a democratic society, and so those norms become widely accepted, with damaging results.

I am not, I should stress, endorsing this argument. But it is, at least, an intelligible argument. It is the only remotely coherent way I can see to make sense of Santorum’s purported link between the prevalence of contraception and rising non-marital births. It’s just not an argument social conservative elites can make openly—certainly not if they wish to retain any pretensions of populism. The best justification of “traditional values” is ultimately pragmatic and utilitarian, and their acceptance as general rules depends crucially on the idea that most people are not good at making ad hoc judgments by applying a pragmatic and utilitarian standard directly. But they’re only effective if this pragmatic foundation is not laid bare: If people simply accept the traditional rules as “what everyone knows to be right.”

This is, I suspect, why so many social conservative positions seem not just misguided but downright baffling and mysterious to others. It’s not that they lack any intelligible justification, it’s that social conservative elites (believe that they) cannot openly or publicly  advance that justification without undermining their own ends.

Addendum: As a commenter reminds me, this argument is occasionally stated explicitly, as it was in a famous “No Guardrails” editorial in the Wall Street Journal in the early 1990s, which Radley Balko references in this 2003 Fox News article on… Rick Santorum! It’s also the basic idea underlying Gertrude Himmelfarb’s 1999 book One Nation, Two Cultures (Himmelfarb being, as it happens, the wife of Irving Kristol). But again, it’s rare for the argument to be made quite so explicitly in these terms, because it implicitly concedes that it’s not inherently immoral for the elite to deviate from conservative values, except insofar as doing so openly sets a bad example for non-elites who can’t handle that level of personal freedom.

An Afterthought on “Responsibility”

February 24th, 2012 · No Comments

It occurs to me that some of the confusion I mentioned in the previous post has to do with a certain ambiguity around the terms “responsible” and “responsibility.” In addition to the simple causal sense of “responsible,” which is obviously linked to the others, we use it to mean “properly subject to moral praise and blame” but also something roughly like “prudent, exercising foresight and impulse control, mindful of long-term consequences.” The latter has no necessary connection to the former—a totally amoral robot could be programmed to behave “responsibly” in this sense—except insofar as praise and blame are popular mechanisms for inculcating habits of prudence in humans. (Their usefulness for this purpose need not have anything to do with whether people actually “deserve” the praise or blame in some morally deep sense: The repugnance of “victim blaming” is no guarantee that it doesn’t confer some social benefit.) It’s actually hard to briefly describe the relevant propensities in terms that don’t have some thick normative connotation. There’s not really a colloquial language for calling people imprudent or work-averse that doesn’t sound like moral criticism, and history suggests that any scrupulously neutral term coined for the purpose will just take on all the baggage of the old one fairly quickly. Still, if we could manage to adopt a more anthropologically detached way of talking about behavior, norms, and their social consequences, it might make conservatives and liberals less prone to talking past each other.

All this reminds me, incidentally, of a panel I had the great pleasure of sitting on with the social psychologist Philip Zimbardo, of “Stanford Prison Experiment” fame. He was, naturally, speaking about the “power of the situation”—how social context can make decent people behave appallingly. He noted that he, too, fell victim to it: Absorbed in his role as the detached experimenter, he went on enthusiastically gathering data as young men broke down under the psychological torment they were inflicting upon each other. He stopped only when his new girlfriend (and future wife) yanked him out of the situation by reacting with horror at what he was allowing to go on under his nose. Yet the content of this new situation, as I pointed out, was a bit of old-fashioned individualistic moralizing: “You are responsible for allowing this to go on, and if that is the sort of person you are, I don’t want to be romantically involved with you!” From an external or scientific perspective, we can accurately say that his behavior was more determined by social context than any immutable character traits or some act of radical free will. But the context that generates morally desirable behavior will often be one that, from the internal perspective of practical reason, enjoins people to think of themselves as fundamentally responsible (and subject to praise and blame) for the consequences of their actions.

Worst. Sale. Ever.

February 19th, 2012 · 7 Comments

Spotted by my eagle-eyed girlfriend this weekend at Whole Foods:

Culture and Responsibility

February 19th, 2012 · 14 Comments

Ross Douthat and David Brooks both hit the same point in recent columns on Charles Murray’s new book Coming Apart. Here’s Brooks:

[Murray's] left-wing critics in the blogosphere have reverted to crude 1970s economic determinism: It’s all the fault of lost jobs. People who talk about behavior are blaming the victim. Anybody who talks about social norms is really saying that the poor are lazy.

And Douthat:

Murray’s critics accuse him of essentially blaming the victim: the social breakdown he described may be real enough, they allow, but it’s an inevitable consequence of an economic system that Republicans have rigged to benefit the rich. In the liberal view, there’s nothing wrong with America’s working class that can’t be solved by taxing the wealthy and using the revenue to weave a stronger safety net.

Putting aside the question of who’s right, I think it’s accurate that folks who identify with the left tend to be skeptical of cultural or behavioral explanations for social problems, typically regarding them as a form of victim blaming. And given that conservatives often seem particularly concerned with cultivating a “culture of personal responsibility,” it’s not particularly surprising that such explanations are interpreted as entailing the view that the badly-off are, well, “personally responsible” for their plight. But this is, I think, a mistake.

When you’re talking about variations in outcomes within a socioeconomic group—about why some individuals seem to have done exceptionally better or worse than others similarly situated—it may make sense to talk about individual choice and responsibility. But when you’re talking about population level trends, this won’t do: If you want to know why an entire group of people are choosing to behave in a certain way, and especially why they’re collectively behaving conspicuously differently than they used to, the explanation is necessarily going to be something exogenous to any individual’s idiosyncratic choice. Whatever you think of 21st century American bourgeois mores, there aren’t many reports of Aztec royalty or !Kung bushmen spontaneously adopting them; people read from the locally available scripts. At the group level, “choice” isn’t an adequate explanation for anything, because it’s the pattern of choices that stands in need of explanation.

One hypothesis may be that certain macro-level changes are partially explained by a cultural shift away from norms of strong individual responsibility. But this is not at all the same as saying that we’re individually responsible for the cultural norms we absorb. Sometimes we arguably are—as when we opt to seek out and join a particular subculture—but more often we’re not. I didn’t decide to grow up speaking English. If you and your peers acquired cultural norms conducive to health, prosperity, stable families, law-abidingness, and so on, odds are you had the good fortune to grow up in an environment where those norms were both espoused and modeled—and you can scarcely claim responsibility for that. It is at least possible, however, that an important component of this salutary body of norms involves talking and acting as though individuals are responsible for how they fare on these various dimensions, whether or not this is true.  It is false that economic success is a sign of predestined salvation, but if you believe Max Weber, it was economically advantageous for Calvinists to hold this false belief.

Fear of “victim blaming,” then, is not a particularly good reason to steer clear of cultural or behavioral accounts of social pathology. As far as any particular individual is concerned, cultural context is as much an immutable given as economic context. But there’s another motive that Douthat and Brooks both hint at: If you assume culture is not wholly determined by the economic and political facts, then there’s not a whole lot politics can do to remedy social problems rooted in culture, without resorting to some pretty grossly illiberal interventions. Invoking culture, then, can look like an excuse for not doing anything.  And here, again, one can imagine the collectively advantageous belief being the false one.  That is, it may be that serious social problems are primarily a function of culture, and therefore resistant to political or economic remedy, but that we will not be politically motivated to get the modest improvement achievable by economic means unless we falsely believe that economics are paramount and culture insignificant or epiphenomenal. I’m not convinced this is true, mind you, but it’s a possibility with a neat symmetry.

Online Anarchy

February 14th, 2012 · 7 Comments

At regular intervals—too short for it to even be amusing anymore—we now hear that debates over Internet regulation would be more productive if only people would get it through their thick skulls that the Internet is not some special free-for-all zone. There’s no reason it can or should remain magically exempt from the rules that apply everywhere else (we are reminded) and it is absurd and mysterious that some people (we are assured) believe otherwise.

This is a fair point. But what about all these hippy-dippy Real World anarchists who think meatspace can remain immune to the rules any well-managed virtual community understands to be essential? How is it, for instance, that citizens are physically capable of injuring each other, regardless of whether they’ve opted in to player-versus-player? And what fool designed it so that my image is visible to all other users in the same city, even if we aren’t friends? You’ve even apparently got to jump through a bunch of hoops to get something called a “restraining order” just to implement a simple user block!

What will actually make debates over Internet regulation more productive is universal recognition that the first paragraph is exactly as dumb as the second. (Possibly more so, since the second at least hints at some interesting possibilities.) You cannot implement an analogy. The rules that you’d want to apply if you could make it so just by wishing are not always the rules it is wise or feasible to attempt to actually put in place, once you’ve factored in the probable efficacy of that attempt and its unintended side-effects. Both of these, alas, are determined by annoyingly stubborn “facts” about the nature of the technological context in which you want to enforce the rules.

Not everyone understands the intricate technical details of how packet-switched digital networks function, and not everyone needs to. But if you truly don’t comprehend that “closing down an illegal shop” is not actually the same as—and in every possible way a pretty awful metaphor for—”getting thousands of ISPs to implement DNS filtering at the domain level,” you should quietly recuse yourself from Internet policy debates until you do understand the difference. And if you find yourself suggesting that Google “helped overthrow the head of an entire country in a weekend,” and therefore must simply lack willpower when they say they can’t automatically screen out trademark and copyright violations, perhaps you should think twice about sitting on committees that vote on Internet legislation.  I say this, incidentally, as a pure policy wonk who hasn’t done anything remotely resembling “network administration” since I was a teenager running a dial-up BBS out of my bedroom. You can pick up enough to follow along at home without going to MIT.
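For what it’s worth, the technical gap here isn’t hard to see even without a networking background. Here’s a rough sketch, in Python, of why domain-level DNS filtering by ISPs is such a leaky “closure”; it assumes the third-party dnspython library, and uses example.com and Google’s 8.8.8.8 resolver purely as stand-ins: a client that simply asks a resolver its ISP doesn’t control gets the “blocked” domain’s address anyway, with no shop shut down and nothing seized.

    # Rough sketch (assumes the third-party dnspython package, version 2.x).
    # SOPA-style DNS filtering means your ISP's resolver refuses to answer for
    # a "blocked" domain. Nothing stops a client from asking someone else.
    import dns.resolver

    BLOCKED_DOMAIN = "example.com"      # stand-in for a hypothetical blocked site

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["8.8.8.8"]  # any resolver outside the filtering ISP

    answer = resolver.resolve(BLOCKED_DOMAIN, "A")
    print([record.address for record in answer])  # the site's IPs, filter or no

The point isn’t that filtering accomplishes literally nothing; it’s that the “closing down an illegal shop” metaphor wildly overstates what it does.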

Most World of Warcraft players at least understand that it’s only the code of the game that enables adolescent fantasies of magical omnipotence to be played out without consequence—and that the laws of physics are unlikely to be so obliging. They understand that their problems can’t be banished just by reading a few words from an enchanted parchment—even if we feel that’s how the world ought to work. I live in perpetual hope that legislators will someday grasp this point as well, and realize that not all resistance to regulation is born of a mean-spirited hatred of magic.

Nothing Ever Ends.

February 14th, 2012 · 10 Comments

When I heard that DC Comics was planning a series of prequels to Watchmen, my first reaction was the one seemingly shared by most fans of the seminal graphic novel: “For the love of God, why?” Satirists have had plenty of fun contemplating the schlocky derivatives that might be spun off Alan Moore’s masterpiece for the sake of a quick buck, but it had gone unsullied so long that many of us had begun to imagine that taste might have scored a rare victory over avarice this time around.

My second thought also appears to have been fairly common: Moore is probably the last person in any position to wax righteous on this point. With the conspicuous exception of V for Vendetta (and arguably the historical fiction From Hell) his most famous works have involved re-imagining characters created by others. Moore cut his teeth on Mick Anglo’s Marvelman (AKA Miracleman), and was introduced to American readers by his brilliant run writing Len Wein’s Swamp Thing. His Lost Girls cast beloved heroines of children’s literature in what Moore himself characterizes as “pornography,” while The League of Extraordinary Gentlemen mashes up classic pulp adventure stories to give us a 19th century Avengers, with Mr. Hyde, Captain Nemo, and Allan Quatermain filling in for the Hulk, Tony Stark, and Captain America. Watchmen itself was originally supposed to star a roster of Charlton comics characters that D.C. had acquired, including Steve Ditko creations Captain Atom and The Question (ultimately reimagined as Doctor Manhattan and Rorschach).

My third thought, however, was that looking at why he didn’t end up using those original Charlton characters—and instead gave us their vastly more interesting close cousins—might shed some light on why fans are so dismayed in this case by the quintessential comics ritual of passing an iconic character to a new generation of writers. The problem was that it quickly became clear to the honchos at DC that Watchmen would break their new toys. It would have ended with (spoiler alert) Thunderbolt as a mass murderer, The Question as a bloody smear in the snow, Blue Beetle as an aging husband, and Captain Atom as an autistic spacegod. These were not promising starting points for ongoing use of the characters.

The classic comics solution would have been to simply set the events of Watchmen in an alternate continuity or parallel timeline, leaving the mainstream versions of the characters untouched, but at the time, D.C. had just launched its Crisis on Infinite Earths, which was to climax with the compression (since undone) of D.C.’s increasingly cluttered multiverse into a single, more manageable universe. Even had that not been the case, though, it seems inevitable that either (1) the power of Watchmen would have been diluted by relegating it to a mere “What If” story about characters who continued to have “ordinary” superhero adventures in main continuity, or (2) more likely, the power of the Watchmen narrative, and the characters’ actions and attitudes within it, would have overwhelmed any later attempt to portray them differently.

This ultimately brings us to an important distinction between standard, self-contained fiction and what I’m inclined to call “Extended Universe” narratives. By “Extended Universe” fiction, I mean the growing body of fictional continuities that are understood to provide a unified context—though just how unified varies from case to case—for stories told across many years or decades by many authors working in many different media. The two major comic book universes—D.C.’s and Marvel’s—are obvious examples, as are the various tangled continuities of Star Wars, Star Trek, Doctor Who/Torchwood, and perhaps Joss Whedon’s “Buffyverse,” among many others. The larger background story is often referred to as a “mythos,” which seems appropriate enough, because one of the earliest forms of human narrative—myth—embodies just this kind of collaborative, cross-generational storytelling.

What these have in common is that they’re united—however unstably—by the idea of a single “official” or “canonical” continuity that isn’t necessarily tied directly to a single creator. If you look, by way of contrast, to the Sherlock Holmes stories, everyone agrees that the “canon” consists of the original novels and short stories written by Sir Arthur Conan Doyle. If Alan Moore or Nicholas Meyer or Michael Chabon or Guy Ritchie later decide to tell new stories involving Doyle’s characters, we understand these as new and independent tales, which don’t affect what “really” happened to Holmes in the original story.  By contrast, if later writers decide, with the blessing of the official keepers of continuity, that James T. Kirk’s middle name is “Tiberius,” that Spider-Man’s parents were secret agents, or that Green Lantern rings are vulnerable to the color yellow because of a demonic impurity, then that is the “real” version of events and always was—even if the original authors of those characters hadn’t envisioned any such thing. This is why Star Wars fans reacted with such annoyance at George Lucas’ decision to explain The Force as a scientific rather than a spiritual phenomenon, generated by microscopic “midichlorians”: Because Lucas can issue an “authoritative” decree that changes how the events of the beloved original trilogy are to be interpreted and experienced.

Some such considerations appear to have transformed Moore’s own attitude toward film adaptations of his work. His initial reaction was professed indifference to movie versions: The comic is the comic, and stands on its own, untouched by however good or bad a movie might be. His shift from indifference to hostility may reflect a recognition that this isn’t really the case, at least when the movie version is perceived as The Movie Version—a unique, official cinematic rendering, as opposed to simply one of many possible motion picture interpretations. If it’s The Movie, then however Doctor Manhattan’s voice sounds there (for instance) is what the character “really” sounds like in many people’s minds.

This might all seem a little silly: Obviously none of this stuff “really” happened, and so individual fans are free to disregard the events of the Star Wars prequels if they so desire. For that matter, they’re free to hit “pause” fifteen minutes from the end of Jedi and imagine (or film) their own ending where Luke and Vader do join forces to overthrow the Emperor and rule the galaxy as father and son.  But fiction becomes more credible and immersive when we abandon a measure of control—imbuing it with the same obstinate independence as physical reality.  And that means, in part, surrendering to local conventions about what counts as a “true” event within the extended narrative, as opposed to (say) fanfic or authorized “alternate” versions. (The lines, of course, are blurry. Batman: The Animated Series took place outside mainstream D.C. continuity, but the character of Harley Quinn and the vastly-more-interesting revised origin of Mr. Freeze were both rapidly imported from the former into the latter.) And all this, I think, brings us back to what  bugs people about prequels to Watchmen.

Whatever uncertainty there may have originally been for the audience, Alan Moore told a superbly self-contained story with exactly the right amount of narrative negative space. Few authors have Moore’s finely tuned sense of the unique powers of comics as “sequential art”—frozen images where the “action” is provided by what readers supply to fill that razor-thin, infinite gap between frames. But this is true in some form or another whatever the medium, as every good storyteller knows. It’s what renders the sequence excerpted above—maybe my favorite in the whole of Watchmen, though competition’s stiff—so powerful. What if Jon had stayed to explain his remark? What if we’d continued to a tedious illustration of exactly what he meant, and how inevitably Veidt’s hard-won peace would decay? It would be inane; the scene works because all that is left implicit.

Like its own central chapter “Fearful Symmetry,” this is a narrative in exquisite, delicate balance. And why is it constructed that way? Because Moore didn’t use the Charlton characters. Because the Charlton characters had to be good for further adventures, but the protagonists of Watchmen (who are not, nota bene Dan DiDio, called “The Watchmen”) could be single-use. And everything about the reader’s experience of the book turns on this being the case—on Watchmen not being, as many superhero comics are, a chapter in a series that continues indefinitely. That “Nothing ends, Adrian” gets its power because we understand that Manhattan’s denial actually is the end of our story—that there isn’t any “Stay tuned for issue 13, where we find out just what Jon meant!”

In other stories, the reverse is true: Batman defeats the Joker—again, and we knew he would—but never for good. He’ll escape from Arkham and the battle will start over, because the only final victory of order over chaos is the heat death of perfect, entropic equilibrium. Like a Wagner motif, it doesn’t have its full meaning in isolation, but only as one permutation in a series. Pick your favorite Batman/Joker story, and suppose it really ends there: Joker dies (for good!), Batman maybe beats up on ordinary thugs for a couple years before a silver-haired Bruce Wayne retires or finally succeeds in joining his parents. However masterfully written that last story is, it would be a cheat and an anticlimax. To say (as we accurately do) that their conflict is iconic is just to say that it transcends the specific: It’s always still happening now, whenever you’re reading. Nothing ever ends.

The real trouble, then, isn’t so much that nobody should ever do to Alan Moore characters what he did to Arthur Conan Doyle and Bram Stoker and Jules Verne’s characters. The trouble is that Moore told a self-contained story in a corporate and cultural context where the Extended Universe narrative is the norm, and where readers are primed to treat continuations of a story, if they have the proper institutional imprimatur, as “real” for that fictional context. Nobody reads The League of Extraordinary Gentlemen back into 20,000 Leagues Under the Sea in any comparable way.

The irony here is that you actually could have sequels and prequels and parallel versions of Watchmen without damaging the reader’s experience of the original.  It’s just that D.C. Comics can’t be the one to do it, because the narrative worlds would be too close to keep them from colliding—a recipe, as every D.C. reader knows, for a Crisis.

 

More Fresh Scoops from the New York Times

February 11th, 2012 · 5 Comments

“Traveling Light in a Time of Digital Thievery,” Nicole Perlroth, New York Times, 2/10/12:

When Kenneth G. Lieberthal, a China expert at the Brookings Institution, travels to that country, he follows a routine that seems straight from a spy film.

He leaves his cellphone and laptop at home and instead brings “loaner” devices, which he erases before he leaves the United States and wipes clean the minute he returns. In China, he disables Bluetooth and Wi-Fi, never lets his phone out of his sight and, in meetings, not only turns off his phone but also removes the battery, for fear his microphone could be turned on remotely.

“In China, business travelers take extreme precautions to avoid cyber-espionage,” Ellen Nakashima & William Wan, Washington Post, 9/26/11:

“I’ve been told that if you use an iPhone or BlackBerry, everything on it — contacts, calendar, e-mails — can be downloaded in a second. All it takes is someone sitting near you on a subway waiting for you to turn it on, and they’ve got it,” said Kenneth Lieberthal, a former senior White House official for Asia who is at the Brookings Institution. [...]

Travelers there often tote disposable cellphones and loaner laptops stripped of sensitive data. Some U.S. officials take no electronic gear. [...] Another common tactic is to remove batteries from cellphones, which makes digital tracking more difficult and prevents microphones from being activated remotely.

Times, 2/10/12:

Hackers’ preferred modus operandi, security experts say, is to break into employees’ portable devices and leapfrog into employers’ networks — stealing secrets while leaving nary a trace. [...]

“If a company has significant intellectual property that the Chinese and Russians are interested in, and you go over there with mobile devices, your devices will get penetrated,” said Joel F. Brenner, formerly the top counterintelligence official in the office of the director of national intelligence.

Post, 9/26/11:

On the eve of the 2008 Beijing Olympics, Joel Brenner, then the U.S. national counterintelligence executive, first issued government safety guidance to overseas travelers, with such tips as: “If you can do without the device, don’t take it.” Though no country was named, “it was really directed at countries like China and Russia,” Brenner said in a recent interview. [...]

“What’s at stake is not only the security of your current communications, but the security of your secrets back home,” said Brenner, who advises clients on data security at the law firm Cooley LLP. “That’s the real danger.”

This is actually a good deal less egregious than some of the Times’ other recent rewrites, and in any event, I don’t think there’s anything fundamentally wrong with looking at a competitor’s article and saying: “Hey, we should really do our own version of this”—especially when, as in this case, they’ve updated it with some interesting new information. But is it really so hard to just link back from the online version so the reader can see what you’re building on? It’s kinda standard practice in 2012—which makes not doing it seem like a conscious, petty refusal to admit that occasionally someone else gets to an interesting story first. The Post did a strong story, and it’s great that Times readers get the benefit of an updated version—just be adults and acknowledge that’s what you’re giving them.

On the Enforcement Fantasy

January 25th, 2012 · 14 Comments

This is probably the least interesting (because it should be so self-evident) and yet most important paragraph in a must-read Cory Doctorow essay:

In short, [proponents of more aggressive copyright enforcement] made unrealistic demands on reality and reality did not oblige them. Copying only got easier following the passage of these laws—copying will only ever get easier. Right now is as hard as copying will get. Your grandchildren will turn to you and say “Tell me again, Grandpa, about when it was hard to copy things in 2012, when you couldn’t get a drive the size of your fingernail that could hold every song ever recorded, every movie ever made, every word ever spoken, every picture ever taken, everything, and transfer it in such a short period of time you didn’t even notice it was doing it.”

I’ve found myself stressing this to reporters who call to ask about what we should do “instead” of SOPA and PIPA, because the framing of this entire debate remains mindblowingly shortsighted. In five years, regardless of anything Congress does now, the current round of garment rending over “rogue websites” is going to seem as comically quaint and irrelevant as old jeremiads against the libidinous excesses of jazz dancing and Elvis lyrics. The big, dumb, obvious technological fact that an awful lot of smart people seem reluctant to grok is this: Copying and sharing information is vastly cheaper and easier than it has ever been at any time in human history. It is also vastly more difficult and expensive than it will ever be again.

This weekend I finally upgraded to the latest version of Mac OS X, Lion. Owing to some weird decisions by Apple, I had to install it from physical media: An 8 gigabyte “thumb drive,” which is really rather misnamed, because it’s actually about the size of two wooden matchsticks. The year I was born, that amount of data storage space—without any software—would have cost more than the office building you’d need to house it. The year I got my driver’s license, it would’ve cost about as much as a good used car. In 2012, 8 gigs of storage is the kind of thing you give away as freebie conference schwag—like a logo-embossed pen, except a good deal smaller. If I decide to use it for something else now that I’m done installing the OS, it will easily accommodate about 4 feature-length films in high definition. IBM recently announced a breakthrough in storage technology that could increase current capacity by a factor of 100 within a few years, which means instead of just carting half our music libraries around with us in our pockets, we’ll have entire music libraries, and high-def video libraries to boot.

One of the features I noticed they’d added in Lion is AirDrop, which establishes an ad hoc peer-to-peer Wi-Fi connection with other nearby Apple devices. This isn’t particularly useful for my desktop, since anyone who’s actually in my apartment is probably already on my home Wi-Fi network, but one can imagine it being awfully handy for mobile devices. “What am I listening to? Hang on, I’ll beam it over.” For transfers outside physical proximity, the next-generation wireless data standard recently approved by the International Telecommunication Union maxes out at about a gigabit per second. In practical terms, that means about a minute to transmit an uncompressed music CD (and much shorter for, say, MP3s at the bitrate you get from iTunes) or 90 seconds for a high-definition TV episode.
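The arithmetic behind those figures is easy to reproduce. Here’s a sketch under assumptions of mine: a roughly 700 MB uncompressed audio CD, a roughly 1.2 GB compressed HD episode, and an effective throughput of about 100 Mbit/s, or a tenth of the headline gigabit rate, since real-world wireless rarely hits its theoretical peak.

    # Back-of-the-envelope check of the transfer times above. Assumptions are
    # mine: ~100 Mbit/s effective throughput (about a tenth of the 1 Gbit/s
    # headline rate), a ~700 MB uncompressed CD, a ~1.2 GB compressed HD episode.
    EFFECTIVE_MBPS = 100  # megabits per second

    def transfer_seconds(size_megabytes, mbps=EFFECTIVE_MBPS):
        """Seconds to move a payload of the given size at the given rate."""
        return size_megabytes * 8 / mbps

    print(transfer_seconds(700))   # uncompressed CD: ~56 s, i.e. "about a minute"
    print(transfer_seconds(1200))  # HD TV episode:   ~96 s, i.e. "about 90 seconds"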

Existing online social networks, with near universal adoption in many social circles, already provide a trust infrastructure for limited sharing that will make these kinds of transfers almost impossible to police—or even reliably detect. In a world where every teenager in the country is carrying a pocket-sized server, and encrypted wireless VPN relays can run out of palm-sized cubes, an enforcement strategy based on raiding data centers is just going to look cute. Legislators who think “the Internet” means “the Web,” who are too fixated on the problems some lobbyist is complaining about right now to think two steps ahead, are in for a rude awakening. They’re in the grip of the enforcement fantasy: The misapprehension that technology is going to stay still long enough for traditional, targeted law enforcement approaches to effectively limit the scope and scale of copying.

That’s not to say that nothing can be done to avert a near-future world of largely unregulated and unregulable copying and sharing. If we were willing to implement  a comprehensive scheme of innovation-stifling technology mandates and pervasive surveillance so absolute as to  make the People’s Republic of China look like Burning Man, it could at least be delayed. But I assume that the United States is not yet prepared to completely betray its basic principles to safeguard the profitability of Friends reruns.

If we’re not willing to be China, though, then all these discussions about “what we’re going to do” about piracy are just the wonky equivalent of fanboy debates about whether Wolverine would beat Batman in a fight, for all the bearing they have on reality. What are we going to do that makes a long-term difference? Nothing. Anyone who wants to copy stuff without paying for it can do so easily, and it only gets easier and faster from here. Finding this morally outrageous or throwing a tantrum about the deep unfairness of it all won’t make it less true, though the tantrum might break a vase or two.

A slightly more Zen approach would be to “accept the things you cannot change,” as the coffee mug has it, and take the opportunity to step back and reevaluate. We have a legal structure for incentivizing creativity that makes copying and public performance the key points of regulatory intervention. There isn’t some deep moral reason that it’s these points and not others. There are lots of other ways to enjoy creative works without paying the creator, after all: Borrowing a copy, buying used, watching at a friend’s house, DVRing the broadcast and skipping all the commercials, incessantly singing (to yourself or with a friend) that catchy tune you overheard in the cab. Nobody tries to claim those are “stealing,” mainly because we’ve decided not to try to regulate those activities.

We decided to regulate copying instead, because copying was a lot easier and cheaper to regulate when we wrote the copyright statutes. Copying a book or record on a mass scale, unlike lending or singing in the shower, was not the kind of thing an ordinary person had the necessary equipment for—and the equipment tended to be bulky enough that you could usually track it down without having to pry into a lot of homes (and bathrooms). But the thing we decided to regulate because it was rare and expensive is now as cheap and ubiquitous as all the other stuff we didn’t regulate because it was cheap and ubiquitous. The good news is, most people are still glad to pay for the content they really like, if it’s provided in a convenient form and at a reasonable price, even when they can (or did!) easily copy it free. But maybe that’s not enough, and there are other points of regulatory intervention that will help creators internalize enough of the value of their output to make the investment worthwhile. That’s an actually productive subject of inquiry, but it’s not one anybody’s putting much effort into as long as they remain in the grips of the enforcement fantasy.
