The philosopher Leo Strauss was perhaps best known for the view that great philosophical works—especially those produced in times when persecution for heretical views was commonplace—often concealed an “esoteric” message, intended only for an elite of truly “philosophical” readers, that was different from, and often quite at odds with, the surface meaning of the text. A somewhat cruder version of that view has often been associated with neoconservatives—many of whom count themselves admirers of Strauss. This was perhaps most clearly expressed by the late neocon grandee Irving Kristol in an interview on the subject of religious belief:
There are different kinds of truths for different kinds of people. There are truths appropriate for children; truths that are appropriate for students; truths that are appropriate for educated adults; and truths that are appropriate for highly educated adults, and the notion that there should be one set of truths available to everyone is a modern democratic fallacy.
Increasingly, I find myself thinking that it’s not really neoconservatives but social conservatives who are more prone to this view. Years ago, I wrote that the conservative case against gay marriage only really makes sense if seen through this lens. The reason their public arguments against marital equality so often seem incoherent is that the true rationale rests on the social desirability of a myth or “noble lie” that would cease to be effective if it were exposed by making the argument publicly.
Something similar is at work, I suspect, in some recent arguments over contraception. Many commentators have expressed understandable bafflement, for instance, at Rick Santorum’s attempt to explain what he means when he alludes to the “dangers of contraception”:
What I was talking about is we have a society — Charles Murray just wrote a book about this and it’s on the front page of “The New York Times” two days ago, which is the increasing number of children being born out of wedlock in America, teens who are sexually active.
What we’re seeing is a problem in our culture with respect to children being raised by children, children being raised out of wedlock, and the impact on society economically, the impact on society with respect to drug use and all — a host of other things when children have children.
On its face, this is nonsensical: How can contraception, of all things, be responsible for an increase in out-of-wedlock births?
The clue here is the reference to Murray’s book Coming Apart—which, if not exactly a work of high sociological rigor, is interesting and a quick enough read to be worth a look. Murray’s core topic is what he believes to be a growing cultural gap between the educated elite and the rest of America. While the elite profess the most socially liberal attitudes, Murray argues that they hew to “conservative” norms in some crucial respects—such as generally bearing children in the context of stable marriages. Yet as Phoebe Maltz observes, this is hardly a thoroughgoing conservatism: Sure, the educated elite eventually settle down to marry and have children, but that’s routinely preceded by a decade or more of extramarital sexual activity enabled by contraception.
Here is one way to make sense of Santorum’s remarks. Elite sexual norms have transformed over the past half century, as a result of a combination of factors: contraceptive technology; improved educational sorting that increases the frequency of romantic partnerships between highly educated professionals; and an economy that rewards longer periods of education and correspondingly delayed marriage and reproduction. Under the new norm, there’s no expectation of marriage or childbearing before one’s late 20s or early 30s, but also no expectation that people will abstain from sexual activity or romantic cohabitation until that point. There is no stigma against premarital sex, engaged in strictly for the enjoyment and emotional satisfaction of the participants. For the cognitive elite—who are generally reasonably good at impulse control and long-term planning, have the social and economic resources to provide a buffer against youthful misjudgments, and have powerful incentives to avoid derailing their ambitious career plans—this works out just fine.
Here is where the “esoteric” or Straussian social conservative argument comes into play: For those outside the cognitive elite (they would argue) this does not work out fine. Lacking both the incentives and the resources of the elite, the erosion of the stigma against premarital sex among the “lower classes” yields increased premarital childbearing, locking both generations into poverty and dependency. This outcome can only be avoided (the Straussian social conservative might argue) if the “lower orders” do not adopt the sexual norms that work perfectly well for the cognitive elite.
But nobody likes to be told they’re simply not capable of enjoying the same freedoms as the elite: The only publicly acceptable norm for a democratic polity is the rule that sex outside the confines of traditional marriage is “just wrong” or somehow “immoral.” This norm may not make sense, but since most people do not think deeply about the underlying ethical rationale for local norms, it will be widely accepted so long as it appears to be widely accepted—which is to say, so long as the cultural elite at least continue to give it lip service, whatever their private behavior. Contraception (in tandem with those other changes) makes it possible for the elite to visibly reject that norm, enjoying successful and happy lives in which a long sexually active young-adulthood precedes eventual marriage and reproduction. The argument that non-elites are incapable of successfully living according to the new elite norms is publicly unacceptable in a democratic society, and so those norms become widely accepted, with damaging results.
I am not, I should stress, endorsing this argument. But it is, at least, an intelligible argument. It is the only remotely coherent way I can see to make sense of Santorum’s purported link between the prevalence of contraception and rising non-marital births. It’s just not an argument social conservative elites can make openly—certainly not if they wish to retain any pretensions of populism. The best justification of “traditional values” is ultimately pragmatic and utilitarian, and their acceptance as general rules depends crucially on the idea that most people are not good at making ad hoc judgments by applying a pragmatic and utilitarian standard directly. But they’re only effective if this pragmatic foundation is not laid bare: If people simply accept the traditional rules as “what everyone knows to be right.”
This is, I suspect, why so many social conservative positions seem not just misguided but downright baffling and mysterious to others. It’s not that they lack any intelligible justification; it’s that social conservative elites (believe that they) cannot openly or publicly advance that justification without undermining their own ends.
Addendum: As a commenter reminds me, this argument is occasionally stated explicitly, as it was in a famous “No Guardrails” editorial in the Wall Street Journal in the early 1990s, which Radley Balko references in this 2003 Fox News article on… Rick Santorum! It’s also the basic idea underlying Gertrude Himmelfarb’s 1999 book One Nation, Two Cultures (Himmelfarb being, as it happens, the wife of Irving Kristol). But again, it’s rare for the argument to be made quite so explicitly in these terms, because it implicitly concedes that it’s not inherently immoral for the elite to deviate from conservative values, except insofar as doing so openly sets a bad example for non-elites who can’t handle that level of personal freedom.
It occurs to me that some of the confusion I mentioned in the previous post has to do with a certain ambiguity around the terms “responsible” and “responsibility.” In addition to the simple causal sense of “responsible,” which is obviously linked to the others, we use it to mean “properly subject to moral praise and blame” but also something roughly like “prudent, exercising foresight and impulse control, mindful of long-term consequences.” The latter has no necessary connection to the former—a totally amoral robot could be programmed to behave “responsibly” in this sense—except insofar as praise and blame are popular mechanisms for inculcating habits of prudence in humans. (Their usefulness for this purpose need not have anything to do with whether people actually “deserve” the praise or blame in some morally deep sense: The repugnance of “victim blaming” is no guarantee that it doesn’t confer some social benefit.) It’s actually hard to briefly describe the relevant propensities in terms that don’t have some thick normative connotation. There’s not really a colloquial language for calling people imprudent or work-averse that doesn’t sound like moral criticism, and history suggests that any scrupulously neutral term coined for the purpose will just take on all the baggage of the old one fairly quickly. Still, if we could manage to adopt a more anthropologically detached way of talking about behavior, norms, and their social consequences, it might make conservatives and liberals less prone to talking past each other.
All this reminds me, incidentally, of a panel I had the great pleasure of sitting on with the social psychologist Philip Zimbardo, of “Stanford Prison Experiment” fame. He was, naturally, speaking about the “power of the situation”—how social context can make decent people behave appallingly. He noted that he, too, fell victim to it: Absorbed in his role as the detached experimenter, he went on enthusiastically gathering data as young men broke down under the psychological torment they were inflicting upon each other. He stopped only when his new girlfriend (and future wife) yanked him out of the situation by reacting with horror at what he was allowing to go on under his nose. Yet the content of this new situation, as I pointed out, was a bit of old-fashioned individualistic moralizing: “You are responsible for allowing this to go on, and if that is the sort of person you are, I don’t want to be romantically involved with you!” From an external or scientific perspective, we can accurately say that his behavior was more determined by social context than by any immutable character traits or act of radical free will. But the context that generates morally desirable behavior will often be one that, from the internal perspective of practical reason, enjoins people to think of themselves as fundamentally responsible (and subject to praise and blame) for the consequences of their actions.
February 19th, 2012
Spotted by my eagle-eyed girlfriend this weekend at Whole Foods:
Ross Douthat and David Brooks both hit the same point in recent columns on Charles Murray’s new book Coming Apart. Here’s Brooks:
[Murray's] left-wing critics in the blogosphere have reverted to crude 1970s economic determinism: It’s all the fault of lost jobs. People who talk about behavior are blaming the victim. Anybody who talks about social norms is really saying that the poor are lazy.
Murray’s critics accuse him of essentially blaming the victim: the social breakdown he described may be real enough, they allow, but it’s an inevitable consequence of an economic system that Republicans have rigged to benefit the rich. In the liberal view, there’s nothing wrong with America’s working class that can’t be solved by taxing the wealthy and using the revenue to weave a stronger safety net.
Putting aside the question of who’s right, I think it’s accurate that folks who identify with the left tend to be skeptical of cultural or behavioral explanations for social problems, typically regarding them as a form of victim blaming. And given that conservatives often seem particularly concerned with cultivating a “culture of personal responsibility,” it’s hardly surprising that such explanations are interpreted as entailing the view that the badly-off are, well, “personally responsible” for their plight. But this is, I think, a mistake.
When you’re talking about variations in outcomes within a socioeconomic group—about why some individuals seem to have done exceptionally better or worse than others similarly situated—it may make sense to talk about individual choice and responsibility. But when you’re talking about population level trends, this won’t do: If you want to know why an entire group of people are choosing to behave in a certain way, and especially why they’re collectively behaving conspicuously differently than they used to, the explanation is necessarily going to be something exogenous to any individual’s idiosyncratic choice. Whatever you think of 21st century American bourgeois mores, there aren’t many reports of Aztec royalty or !Kung bushmen spontaneously adopting them; people read from the locally available scripts. At the group level, “choice” isn’t an adequate explanation for anything, because it’s the pattern of choices that stands in need of explanation.
One hypothesis may be that certain macro-level changes are partially explained by a cultural shift away from norms of strong individual responsibility. But this is not at all the same as saying that we’re individually responsible for the cultural norms we absorb. Sometimes we arguably are—as when we opt to seek out and join a particular subculture—but more often we’re not. I didn’t decide to grow up speaking English. If you and your peers acquired cultural norms conducive to health, prosperity, stable families, law-abidingness, and so on, odds are you had the good fortune to grow up in an environment where those norms were both espoused and modeled—and you can scarcely claim responsibility for that. It is at least possible, however, that an important component of this salutary body of norms involves talking and acting as though individuals are responsible for how they fare on these various dimensions, whether or not this is true. It is false that economic success is a sign of predestined salvation, but if you believe Max Weber, it was economically advantageous for Calvinists to hold this false belief.
Fear of “victim blaming,” then, is not a particularly good reason to steer clear of cultural or behavioral accounts of social pathology. As far as any particular individual is concerned, cultural context is as much an immutable given as economic context. But there’s another motive that Douthat and Brooks both hint at: If you assume culture is not wholly determined by the economic and political facts, then there’s not a whole lot politics can do to remedy social problems rooted in culture, without resorting to some pretty grossly illiberal interventions. Invoking culture, then, can look like an excuse for not doing anything. And here, again, one can imagine the collectively advantageous belief being the false one. That is, it may be that serious social problems are primarily a function of culture, and therefore resistant to political or economic remedy, but that we will not be politically motivated to get the modest improvement achievable by economic means unless we falsely believe that economics are paramount and culture insignificant or epiphenomenal. I’m not convinced this is true, mind you, but it’s a possibility with a neat symmetry.
February 14th, 2012
At regular intervals—too short for it to even be amusing anymore—we now hear that debates over Internet regulation would be more productive if only people would get it through their thick skulls that the Internet is not some special free-for-all zone. There’s no reason it can or should remain magically exempt from the rules that apply everywhere else (we are reminded) and it is absurd and mysterious that some people (we are assured) believe otherwise.
This is a fair point. But what about all these hippy-dippy Real World anarchists who think meatspace can remain immune to the rules any well-managed virtual community understands to be essential? How is it, for instance, that citizens are physically capable of injuring each other, regardless of whether they’ve opted in to player-versus-player? And what fool designed it so that my image is visible to all other users in the same city, even if we aren’t friends? You’ve even apparently got to jump through a bunch of hoops to get something called a “restraining order” just to implement a simple user block!
What will actually make debates over Internet regulation more productive is universal recognition that the first paragraph is exactly as dumb as the second. (Possibly more so, since the second at least hints at some interesting possibilities.) You cannot implement an analogy. The rules that you’d want to apply if you could make it so just by wishing are not always the rules it is wise or feasible to attempt to actually put in place, once you’ve factored in the probable efficacy of that attempt and its unintended side-effects. Both of these, alas, are determined by annoyingly stubborn “facts” about the nature of the technological context in which you want to enforce the rules.
Not everyone understands the intricate technical details of how packet-switched digital networks function, and not everyone needs to. But if you truly don’t comprehend that “closing down an illegal shop” is not actually the same as—and in every possible way a pretty awful metaphor for—”getting thousands of ISPs to implement DNS filtering at the domain level,” you should quietly recuse yourself from Internet policy debates until you do understand the difference. And if you find yourself suggesting that Google “helped overthrow the head of an entire country in a weekend,” and therefore must simply lack willpower when they say they can’t automatically screen out trademark and copyright violations, perhaps you should think twice about sitting on committees that vote on Internet legislation. I say this, incidentally, as a pure policy wonk who hasn’t done anything remotely resembling “network administration” since I was a teenager running a dial-up BBS out of my bedroom. You can pick up enough to follow along at home without going to MIT.
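To make the gap between the metaphor and the mechanism concrete, here is a minimal, purely illustrative Python sketch (the domain, IP address, and data are all made up for the example) of what domain-level DNS filtering actually does: it removes a name-to-address listing from a particular phone book. It does not, and cannot, touch the content at that address, which remains reachable via the raw IP or any unfiltered resolver.

```python
# Toy model of domain-level DNS filtering. All records are hypothetical;
# this simulates resolution with dictionaries rather than real DNS queries.

AUTHORITATIVE_DNS = {"example-infringer.net": "203.0.113.7"}  # name -> IP
CONTENT_AT_IP = {"203.0.113.7": "<html>the 'rogue' site, still online</html>"}

def filtered_resolve(domain, blocklist):
    """A filtering ISP's resolver: refuses to resolve blocked names."""
    if domain in blocklist:
        return None  # behaves like NXDOMAIN: the name just doesn't resolve here
    return AUTHORITATIVE_DNS.get(domain)

blocklist = {"example-infringer.net"}

# Through the filtering resolver, the name fails to resolve...
assert filtered_resolve("example-infringer.net", blocklist) is None

# ...but the content itself was never touched. Anyone who knows the IP,
# or who asks an unfiltered resolver, gets straight through.
ip = AUTHORITATIVE_DNS["example-infringer.net"]
assert CONTENT_AT_IP[ip] == "<html>the 'rogue' site, still online</html>"
```

The point of the sketch is just that "closing the shop" and "delisting the shop from one directory" are different operations with different side-effects, which is why the metaphor misleads.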
Most World of Warcraft players at least understand that it’s only the code of the game that enables adolescent fantasies of magical omnipotence to be played out without consequence—and that the laws of physics are unlikely to be so obliging. They understand that their problems can’t be banished just by reading a few words from an enchanted parchment—even if we feel that’s how the world ought to work. I live in perpetual hope that legislators will someday grasp this point as well, and realize that not all resistance to regulation is born of a mean-spirited hatred of magic.
February 14th, 2012
When I heard that DC Comics was planning a series of prequels to Watchmen, my first reaction was the one seemingly shared by most fans of the seminal graphic novel: “For the love of God, why?” Satirists have had plenty of fun contemplating the schlocky derivatives that might be spun off Alan Moore’s masterpiece for the sake of a quick buck, but it had gone unsullied so long that many of us had begun to imagine that taste might have scored a rare victory over avarice this time around.
My second thought also appears to have been fairly common: Moore is probably the last person in any position to wax righteous on this point. With the conspicuous exception of V for Vendetta (and arguably the historical fiction From Hell) his most famous works have involved re-imagining characters created by others. Moore cut his teeth on Mick Anglo’s Marvelman (AKA Miracleman), and was introduced to American readers by his brilliant run writing Len Wein’s Swamp Thing. His Lost Girls cast beloved heroines of children’s literature in what Moore himself characterizes as “pornography,” while The League of Extraordinary Gentlemen mashes up classic pulp adventure stories to give us a 19th century Avengers, with Mr. Hyde, Captain Nemo, and Allan Quatermain filling in for the Hulk, Tony Stark, and Captain America. Watchmen itself was originally supposed to star a roster of Charlton comics characters that D.C. had acquired, including Steve Ditko creations Captain Atom and The Question (ultimately reimagined as Doctor Manhattan and Rorschach).
My third thought, however, was that looking at why he didn’t end up using those original Charlton characters—and instead gave us their vastly more interesting close cousins—might shed some light on why fans are so dismayed in this case by the quintessential comics ritual of passing an iconic character to a new generation of writers. The problem was that it quickly became clear to the honchos at DC that Watchmen would break their new toys. It would have ended with (spoiler alert) Thunderbolt as a mass murderer, The Question a bloody smear in the snow, Blue Beetle as an aging husband, and Captain Atom an autistic spacegod. These were not promising starting points for ongoing use of the characters.
The classic comics solution would have been to simply set the events of Watchmen in an alternate continuity or parallel timeline, leaving the mainstream versions of the characters untouched, but at the time, D.C. had just launched its Crisis on Infinite Earths, which was to climax with the compression (since undone) of D.C.’s increasingly cluttered multiverse into a single, more manageable universe. Even had that not been the case, though, it seems inevitable that either (1) the power of Watchmen would have been diluted by relegating it to a mere “What If” story about characters who continued to have “ordinary” superhero adventures in main continuity, or (2) more likely, the power of the Watchmen narrative, and the characters’ actions and attitudes within it, would have overwhelmed any later attempt to portray them differently.
This ultimately brings us to an important distinction between standard, self-contained fiction and what I’m inclined to call “Extended Universe” narratives. By “Extended Universe” fiction, I mean the growing body of fictional continuities that are understood to provide a unified context—though just how unified varies from case to case—for stories told across many years or decades by many authors working in many different media. The two major comic book universes—D.C.’s and Marvel’s—are obvious examples, as are the various tangled continuities of Star Wars, Star Trek, Doctor Who/Torchwood, and perhaps Joss Whedon’s “Buffyverse,” among many others. The larger background story is often referred to as a “mythos,” which seems appropriate enough, because one of the earliest forms of human narrative—myth—embodies just this kind of collaborative, cross-generational storytelling.
What these have in common is that they’re united—however unstably—by the idea of a single “official” or “canonical” continuity that isn’t necessarily tied directly to a single creator. If you look, by way of contrast, to the Sherlock Holmes stories, everyone agrees that the “canon” consists of the original novels and short stories written by Sir Arthur Conan Doyle. If Alan Moore or Nicholas Meyer or Michael Chabon or Guy Ritchie later decide to tell new stories involving Doyle’s characters, we understand these as new and independent tales, which don’t affect what “really” happened to Holmes in the original story. By contrast, if later writers decide, with the blessing of the official keepers of continuity, that James T. Kirk’s middle name is “Tiberius,” that Spider-Man’s parents were secret agents, or that Green Lantern rings are vulnerable to the color yellow because of a demonic impurity, then that is the “real” version of events and always was—even if the original authors of those characters hadn’t envisioned any such thing. This is why Star Wars fans reacted with such annoyance at George Lucas’ decision to explain The Force as a scientific rather than a spiritual phenomenon, generated by microscopic “midichlorians”: Because Lucas can issue an “authoritative” decree that changes how the events of the beloved original trilogy are to be interpreted and experienced.
Some such considerations appear to have transformed Moore’s own attitude toward film adaptations of his work. His initial reaction was professed indifference to movie versions: The comic is the comic, and stands on its own, untouched by however good or bad a movie might be. His shift from indifference to hostility may reflect a recognition that this isn’t really the case, at least when the movie version is perceived as The Movie Version—a unique, official cinematic rendering, as opposed to simply one of many possible motion picture interpretations. If it’s The Movie, then however Doctor Manhattan’s voice sounds there (for instance) is what the character “really” sounds like in many people’s minds.
This might all seem a little silly: Obviously none of this stuff “really” happened, and so individual fans are free to disregard the events of the Star Wars prequels if they so desire. For that matter, they’re free to hit “pause” fifteen minutes from the end of Jedi and imagine (or film) their own ending where Luke and Vader do join forces to overthrow the Emperor and rule the galaxy as father and son. But fiction becomes more credible and immersive when we abandon a measure of control—imbuing it with the same obstinate independence as physical reality. And that means, in part, surrendering to local conventions about what counts as a “true” event within the extended narrative, as opposed to (say) fanfic or authorized “alternate” versions. (The lines, of course, are blurry. Batman: The Animated Series took place outside mainstream D.C. continuity, but the character of Harley Quinn and the vastly-more-interesting revised origin of Mr. Freeze were both rapidly imported from the former into the latter.) And all this, I think, brings us back to what bugs people about prequels to Watchmen.
Whatever uncertainty there may have originally been for the audience, Alan Moore told a superbly self-contained story with exactly the right amount of narrative negative space. Few authors have had as finely tuned a sense as Moore of the unique powers of comics as “sequential art”—frozen images where the “action” is provided by what readers supply to fill that razor-thin, infinite gap between frames. But this is true in some form or another whatever the medium, as every good storyteller knows. It’s what renders the sequence excerpted above—maybe my favorite in the whole of Watchmen, though competition’s stiff—so powerful. What if Jon had stayed to explain his remark? What if we’d continued to a tedious illustration of exactly what he meant, and how inevitably Veidt’s hard-won peace would decay? It would be inane; the scene works because all that is left implicit.
Like its own central chapter “Fearful Symmetry,” this is a narrative in exquisite, delicate balance. And why is it constructed that way? Because Moore didn’t use the Charlton characters. Because the Charlton characters had to be good for further adventures, but the protagonists of Watchmen (who are not, nota bene Dan DiDio, called “The Watchmen”) could be single-use. And everything about the reader’s experience of the book turns on this being the case—on Watchmen not being, as many superhero comics are, a chapter in a series that continues indefinitely. That “Nothing ends, Adrian” gets its power because we understand that Manhattan’s denial actually is the end of our story—that there isn’t any “Stay tuned for issue 13, where we find out just what Jon meant!”
In other stories, the reverse is true: Batman defeats the Joker—again, and we knew he would—but never for good. He’ll escape from Arkham and the battle will start over, because the only final victory of order over chaos is the heat death of perfect, entropic equilibrium. Like a Wagner motif, it doesn’t have its full meaning in isolation, but only as one permutation in a series. Pick your favorite Batman/Joker story, and suppose it really ends there: Joker dies (for good!), Batman maybe beats up on ordinary thugs for a couple years before a silver-haired Bruce Wayne retires or finally succeeds in joining his parents. However masterfully written that last story is, it would be a cheat and an anticlimax. To say (as we accurately do) that their conflict is iconic is just to say that it transcends the specific: It’s always still happening now, whenever you’re reading. Nothing ever ends.
The real trouble, then, isn’t so much that nobody should ever do to Alan Moore characters what he did to Arthur Conan Doyle and Bram Stoker and Jules Verne’s characters. The trouble is that Moore told a self-contained story in a corporate and cultural context where the Extended Universe narrative is the norm, and where readers are primed to treat continuations of a story, if they have the proper institutional imprimatur, as “real” for that fictional context. Nobody reads The League of Extraordinary Gentlemen back into 20,000 Leagues Under the Sea in any comparable way.
The irony here is that you actually could have sequels and prequels and parallel versions of Watchmen without damaging the reader’s experience of the original. It’s just that D.C. Comics can’t be the one to do it, because the narrative worlds would be too close to keep them from colliding—a recipe, as every D.C. reader knows, for a Crisis.
February 11th, 2012
“Traveling Light in a Time of Digital Thievery,” Nicole Perlroth, New York Times 2/10/12:
When Kenneth G. Lieberthal, a China expert at the Brookings Institution, travels to that country, he follows a routine that seems straight from a spy film.
He leaves his cellphone and laptop at home and instead brings “loaner” devices, which he erases before he leaves the United States and wipes clean the minute he returns. In China, he disables Bluetooth and Wi-Fi, never lets his phone out of his sight and, in meetings, not only turns off his phone but also removes the battery, for fear his microphone could be turned on remotely.
“In China, business travelers take extreme precautions to avoid cyber-espionage,” Ellen Nakashima & William Wan, Washington Post, 9/26/11:
“I’ve been told that if you use an iPhone or BlackBerry, everything on it — contacts, calendar, e-mails — can be downloaded in a second. All it takes is someone sitting near you on a subway waiting for you to turn it on, and they’ve got it,” said Kenneth Lieberthal, a former senior White House official for Asia who is at the Brookings Institution. [...]
Travelers there often tote disposable cellphones and loaner laptops stripped of sensitive data. Some U.S. officials take no electronic gear. [...] Another common tactic is to remove batteries from cellphones, which makes digital tracking more difficult and prevents microphones from being activated remotely.
Hackers’ preferred modus operandi, security experts say, is to break into employees’ portable devices and leapfrog into employers’ networks — stealing secrets while leaving nary a trace. [...]
“If a company has significant intellectual property that the Chinese and Russians are interested in, and you go over there with mobile devices, your devices will get penetrated,” said Joel F. Brenner, formerly the top counterintelligence official in the office of the director of national intelligence.
On the eve of the 2008 Beijing Olympics, Joel Brenner, then the U.S. national counterintelligence executive, first issued government safety guidance to overseas travelers, with such tips as: “If you can do without the device, don’t take it.” Though no country was named, “it was really directed at countries like China and Russia,” Brenner said in a recent interview. [...]
“What’s at stake is not only the security of your current communications, but the security of your secrets back home,” said Brenner, who advises clients on data security at the law firm Cooley LLP. “That’s the real danger.”
This is actually a good deal less egregious than some of the Times’ other recent rewrites, and in any event, I don’t think there’s anything fundamentally wrong with looking at a competitor’s article and saying: “Hey, we should really do our own version of this”—especially when, as in this case, they’ve updated it with some interesting new information. But is it really so hard to just link back from the online version so the reader can see what you’re building on? It’s kinda standard practice in 2012—which makes not doing it seem like a conscious, petty refusal to admit that occasionally someone else gets to an interesting story first. The Post did a strong story, and it’s great that Times readers get the benefit of an updated version—just be adults and acknowledge that’s what you’re giving them.
This is probably the least interesting (because it should be so self-evident) and yet most important paragraph in a must-read Cory Doctorow essay:
In short, [proponents of more aggressive copyright enforcement] made unrealistic demands on reality and reality did not oblige them. Copying only got easier following the passage of these laws—copying will only ever get easier. Right now is as hard as copying will get. Your grandchildren will turn to you and say “Tell me again, Grandpa, about when it was hard to copy things in 2012, when you couldn’t get a drive the size of your fingernail that could hold every song ever recorded, every movie ever made, every word ever spoken, every picture ever taken, everything, and transfer it in such a short period of time you didn’t even notice it was doing it.”
I’ve found myself stressing this to reporters who call to ask about what we should do “instead” of SOPA and PIPA, because the framing of this entire debate remains mindblowingly shortsighted. In five years, regardless of anything Congress does now, the current round of garment-rending over “rogue websites” is going to seem as comically quaint and irrelevant as old jeremiads against the libidinous excesses of jazz dancing and Elvis lyrics. The big, dumb, obvious technological fact that an awful lot of smart people seem reluctant to grok is this: Copying and sharing information is vastly cheaper and easier than it has ever been at any time in human history. It is also vastly more difficult and expensive than it will ever be again.
This weekend I finally upgraded to the latest version of Mac OS X, Lion. Owing to some weird decisions by Apple, I had to install it from physical media: An 8 gigabyte “thumb drive,” which is really rather misnamed, because it’s actually about the size of two wooden matchsticks. The year I was born, that amount of data storage space—without any software—would have cost more than the office building you’d need to house it. The year I got my driver’s license, it would’ve cost about as much as a good used car. In 2012, 8 gigs of storage is the kind of thing you give away as freebie conference schwag—like a logo-embossed pen, except a good deal smaller. If I decide to use it for something else now that I’m done installing the OS, it will easily accommodate about 4 feature-length films in high definition. IBM recently announced a breakthrough in storage technology that could increase current capacity by a factor of 100 within a few years, which means instead of just carting half our music libraries around with us in our pockets, we’ll have entire music libraries, and high-def video libraries to boot.
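For the curious, the back-of-the-envelope numbers above are easy to check. This is just a sketch of the arithmetic: the ~2 GB-per-film figure is my own assumption for a compressed HD feature, not anything from Apple or IBM.

```python
# Back-of-the-envelope storage arithmetic (illustrative assumptions only).
drive_gb = 8                    # the Lion installer thumb drive
hd_film_gb = 2                  # assumed size of one compressed HD feature film
films = drive_gb // hd_film_gb  # how many films fit on the drive
print(films)                    # 4, matching the "about 4 films" estimate

# IBM's claimed 100x density improvement, applied to the same form factor:
future_gb = drive_gb * 100
print(future_gb)                # 800 GB in a thumb-drive-sized device
```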
One of the features I noticed they’d added in Lion is Airdrop, which establishes an ad hoc peer-to-peer WiFi connection with other nearby Apple devices. This isn’t particularly useful for my desktop, since anyone who’s actually in my apartment is probably already on my home WiFi network, but one can imagine it being awfully handy for mobile devices. “What am I listening to? Hang on, I’ll beam it over.” For transfers outside physical proximity, the next-generation wireless data standard recently approved by the International Telecommunication Union maxes out at about a gigabit per second. In practical terms, that means about a minute to transmit an uncompressed music CD (and much shorter for, say, MP3s at the bitrate you get from iTunes) or 90 seconds for a high-definition TV episode.
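Those timings imply real-world throughput well below the 1 gigabit theoretical peak. Here’s a quick sanity check of the arithmetic, assuming ~700 MB for an uncompressed audio CD, ~1.1 GB for an HD TV episode, and a practical throughput of roughly 100 Mbit/s—all of which are my illustrative assumptions, not figures from the ITU standard:

```python
# Transfer-time sanity check for the timings above. Sizes and the
# ~100 Mbit/s practical throughput are illustrative assumptions.
def transfer_seconds(size_mb, mbit_per_s):
    """Seconds to move size_mb megabytes at mbit_per_s megabits/second."""
    return size_mb * 8 / mbit_per_s  # 8 bits per byte

cd_secs = transfer_seconds(700, 100)        # uncompressed audio CD
episode_secs = transfer_seconds(1100, 100)  # assumed HD episode size
print(round(cd_secs))       # 56 -> "about a minute"
print(round(episode_secs))  # 88 -> roughly 90 seconds
```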
Existing online social networks, with near universal adoption in many social circles, already provide a trust infrastructure for limited sharing that will make these kinds of transfers almost impossible to police—or even reliably detect. In a world where every teenager in the country is carrying a pocket-sized server, and encrypted wireless VPN relays can run out of palm-sized cubes, an enforcement strategy based on raiding data centers is just going to look cute. Legislators who think “the Internet” means “the Web,” who are too fixated on the problems some lobbyist is complaining about right now to think two steps ahead, are in for a rude awakening. They’re in the grip of the enforcement fantasy: The misapprehension that technology is going to stay still long enough for traditional, targeted law enforcement approaches to effectively limit the scope and scale of copying.
That’s not to say that nothing can be done to avert a near-future world of largely unregulated and unregulable copying and sharing. If we were willing to implement a comprehensive scheme of innovation-stifling technology mandates and pervasive surveillance so absolute as to make the People’s Republic of China look like Burning Man, it could at least be delayed. But I assume that the United States is not yet prepared to completely betray its basic principles to safeguard the profitability of Friends reruns.
If we’re not willing to be China, though, then all these discussions about “what we’re going to do” about piracy are just the wonky equivalent of fanboy debates about whether Wolverine would beat Batman in a fight, for all the bearing they have on reality. What are we going to do that makes a long-term difference? Nothing. Anyone who wants to copy stuff without paying for it can do so easily, and it only gets easier and faster from here. Finding this morally outrageous or throwing a tantrum about the deep unfairness of it all won’t make it less true, though the tantrum might break a vase or two.
A slightly more Zen approach would be to “accept the things you cannot change,” as the coffee mug has it, and take the opportunity to step back and reevaluate. We have a legal structure for incentivizing creativity that makes copying and public performance the key points of regulatory intervention. There isn’t some deep moral reason that it’s these points and not others. There are lots of other ways to enjoy creative works without paying the creator, after all: Borrowing a copy, buying used, watching at a friend’s house, DVRing the broadcast and skipping all the commercials, incessantly singing (to yourself or with a friend) that catchy tune you overheard in the cab. Nobody tries to claim those are “stealing,” mainly because we’ve decided not to try to regulate those activities.
We decided to regulate copying instead, because copying was a lot easier and cheaper to regulate when we wrote the copyright statutes. Copying a book or record on a mass scale, unlike lending or singing in the shower, was not the kind of thing an ordinary person had the necessary equipment for—and the equipment tended to be bulky enough that you could usually track it down without having to pry into a lot of homes (and bathrooms). But the thing we decided to regulate because it was rare and expensive is now as cheap and ubiquitous as all the other stuff we didn’t regulate because it was cheap and ubiquitous. The good news is, most people are still glad to pay for the content they really like, if it’s provided in a convenient form and at a reasonable price, even when they can (or did!) easily copy it free. But maybe that’s not enough, and there are other points of regulatory intervention that will help creators internalize enough of the value of their output to make the investment worthwhile. That’s an actually productive subject of inquiry, but it’s not one anybody’s putting much effort into as long as they remain in the grips of the enforcement fantasy.
January 24th, 2012
I want to dilate slightly on a point I made in a recent post at the Cato blog thinking about the Megaupload takedown and online innovation:
The last innovation is always safe. That’s why it’s easy to claim concrete examples of the harm regulation might do are hyperbolic fearmongering: Nobody’s going to shut down YouTube or Twitter now, because we’ve already seen the incredible value creation they enable, even if they also make it a bit easier to infringe copyrights. And anyway, the success stories eventually get big enough to afford their own fancy lawyers. It’s the next platform that we risk strangling in the cradle, because every new medium starts out recapitulating old media content before it becomes truly generative. Early radio is full of people reading newspapers and books out loud. Early TV and film looks like what you get when someone points a camera at a stage play.
An important thing to understand about innovation that involves computing technology, and in particular innovation in Web platforms, is that it’s user-driven to a much greater, much faster extent than we’re accustomed to. It is, of course, impossible to predict how a major new technology like the car or the cell phone will change society over the long term, but you have a pretty good idea what the technology is for at the time it’s introduced. I doubt Ford anticipated drive-in theaters, but he knew that the Model T was a machine for getting you from point A to point B.
Online innovation isn’t really like that. The most important new platforms of the Internet era have essentially opened up a space and seen what thousands or millions of users communicating with each other want to do with it. Think of the early phases of Twitter’s popularity, when there were plenty of pundits scoffing at this fad for letting people know what you had for breakfast—which at first was not a wholly inaccurate picture. Nobody anticipated that it would play a pivotal role in global anti-authoritarian revolutions, or that CNN would soon be mining it as a kind of realtime Zeitgeist thermometer during debates and primaries.
On the Internet, you don’t know what a new technology is for until you see what people do with it. Biologists have a saying: “Evolution is smarter than you are.” They mean that even a fundamentally blind process of trial and error, iterated over millions of years, will come up with better solutions to adaptive problems than even the cleverest primate product of that process. Smart tech entrepreneurs know how to take advantage of the corollary: The Internet is smarter than you are. Online innovation is a collaborative process, and the most interesting uses the users find for your platform won’t necessarily be the ones you intended. Indeed, that’s the guiding idea behind the “end to end principle” that has made the meta-platform of the Internet itself so incredibly generative. The pipes are, more or less, dumb: They know how to pass packets. Everything actually interesting about the Internet arises from what people do on top of that dumb infrastructure, and it works because the pipes are basically open. A coder with a new idea doesn’t have to convince someone to enable the functionality they’re interested in implementing—a feed of 140-character messages? come on, who needs that?—they just try it, and the ideas that provide value spread like wildfire.
One corollary of this is that the initial phase of deployment for a new platform is typically going to be the “what I had for breakfast” phase—the experimental phase when people are trying out a lot of different things and figuring out what the platform is good for. And very often, the first thing it will be good for is copyright infringement.
Consider YouTube for a minute. Seven years in, we see a dizzying array of creative purposes YouTube is good for. There’s professional quality programming produced specifically for YouTube, fantastically creative amateur mashups, lip dub videos, aspiring Biebers singing cover songs, amateur talk shows and game reviews and diaries, cute kittens and playful toddlers, kids showing off their light-saber skills, and a surprising number of people just unpacking the loot from their latest trip to the mall. We have an online video culture—a meme culture, if that’s not too redundant—that provides a thousand models for the kind of thing someone with a webcam and consumer video editing software might want to do with the platform… along with the realization that you, too, could even create a whole new genre.
In 2005—on YouTube, day one—none of that existed. We didn’t even have a mental model of what mass amateur video creation would look like, never mind the actual content to populate the platform. What did exist was a lot of copyrighted music and video sitting around on people’s hard drives. Needless to say, it would not have been feasible on day one for the platform to effectively pre-filter every user upload and determine which videos infringed someone’s copyright. A filter algorithm wouldn’t be able to tell the difference, up front, between Fair Use and simple infringement, even if it had amassed in advance an enormous and constantly updated database of content samples to compare uploads against. (And of course, when the platform isn’t limited to a specific type of content—like video or music—a filter can’t do much of anything with a compressed or encrypted file.) The only way for the platform to launch at all, in anything remotely like the form we know it, was to launch open.
I don’t clearly recall what all was on YouTube in those early days, but I’d wager the proportion of copyright infringing material was a good deal higher than it is today, because on day one, that’s most of what there was to upload. For those who figured out how to download or capture the underlying video files, though, those clips would provide the raw material for a lot of the creative, transformative fair use output that is now virtually synonymous with the platform.
We can even see something analogous in older industries. Japanese companies started by making cheap knock-offs of American technology, then learned enough to start innovating themselves. One rather telling passage in a 2010 GAO report on counterfeiting and piracy suggests that the most important effect of “counterfeiting” is actually knowledge transfer: The thing that enables the copiers to rapidly transform themselves into legitimate competitors.
We can boil all this down to a few basic principles:
(1) In their early phases, new and open platforms for user-contributed content will tend to look a lot like tools for copyright infringement, because copying naturally precedes creation.
(2) Generative platforms need to “launch open” in order to fully harness the creative and innovative capabilities of a distributed user base.
(3) This creates a tradeoff between copyright enforcement and generativity. A regulatory regime that is more protective of one interest will tend to yield more “collateral damage” to the other.
(4) The collateral damage to copyright interests will always—and especially initially—be more visible than the damage to generativity. Copying that does happen is apparent, but there’s no way to detect user-driven innovation and creation that doesn’t happen. If you knew what to expect in advance, it wouldn’t be innovation.
(5) The value of existing intellectual property, its production mechanisms, and its distribution channels, is already apparent. The owners of those assets can readily identify themselves and each other, making it relatively easy to organize to advance their interests, and point to visible harms inflicted by open platforms. Beneficiaries of future generativity are less easy to organize because they don’t know who they are yet.
(6) This creates institutional and political bias in favor of protecting IP at the expense of generativity. The bias itself is undetectable because its primary manifestation is a relative absence of change.
(7) Attempts to strike a calculated balance between these interests on the basis of what we currently know about the uses of technology will tend to get it wrong. Hence SOPA, PIPA, PRO-IP, ACTA, and all the rest.
With Ron Paul’s now-infamous newsletters once again making headlines, I mulled whether I ought to revisit the issue, but ultimately decided that there wasn’t much to add to the long piece Dave Weigel and I wrote for Reason back in 2008, especially since I’d already elaborated in a couple blog postscripts written shortly after that article appeared.
Apparently, The New York Times agreed. On Monday, they ran a piece that amounts to a couple paragraphs of “fresh tops” aimed at trying to make the piece current, followed by a very light, very lazy rewrite of our article. It cites exactly the same essays and materials we did, takes for granted the identity of Paul’s chief ghostwriter and newsletter editor (which our article spent a fair amount of space publicly establishing for the first time), and even interviews exactly the same sources on the same subjects. (I’ll buy that any reporter would have phoned Ed Crane up; I’ll eat my left shoe if the authors had the first idea who Carol Moore or Mike Holmes were before they read our piece.)
Please don’t take my word for it, though: Compare for yourself. This isn’t a “follow-up” story. It’s a sloppy paraphrase whose authors expended the bare minimum effort of getting our sources to repeat quotes anew so they could use our material without citing the original source. Or very nearly: A few sentences from the very end, they acknowledge one tidbit was “first reported in Reason,” which is a rather brazen implicit deception, given that the same is true of almost everything else in the article. The sad thing is, if they’d been willing to open with a candid reference and link, they could’ve saved the time spent revisiting ground we covered and actually contributed something to the story.
Unlike Dave Weigel, I’m no longer a journalist, so I actually don’t care about being credited for long-ago reporting on a topic I had no intention of ever returning to. What I do care about is de facto deception of the audience by lazy journalists eager to pass off their regurgitation as reporting—which seems to be rather a habit at the Times. I imagine they get away with it because their scribes are normally lifting from people who aspire to work there one day—but since, again, I’m no longer a journalist, I don’t feel any particular qualms about pointing it out.
I still probably wouldn’t have bothered with a post just to lob a brickbat at some lazy journalism, but in this case it’s actually germane to the substance of the story. The implication, after all, is that even though the newsletters were a focus of national attention four years ago, Paul’s fellow travelers were content to gloss over this ugly history—quietly complicit in this pandering to racism—until the bold bloodhounds at the Times sniffed out the scoop. It looks rather different if the Times is just rehashing the highlights of what a libertarian magazine explored in greater detail years ago.
As an ex-journalist myself, I get that it seldom makes sense to waste valuable column inches stroking the ego of every hack whose work you looked at before tackling a topic. The hack’s mom may care, but the average reader certainly doesn’t. When it’s an isolated factoid or a quote, I say lift and godspeed. When you’re doing little more than recapitulating an earlier article wholesale, however, and when it is actually directly relevant to the story that this topic has been exhaustively investigated and discussed within the very movement you’re writing about… well, in those cases, if you don’t have any professional scruples, at least have a little fucking shame.
Addendum: Just to clarify, I’m not annoyed about our reporting being “stolen”—you can’t “steal” public domain facts—or looking to get some kind of acknowledgement by name, which would be of no particular professional value to me at this point (and probably generate unwanted interview requests on a topic I’m happy to be done with). I’m annoyed that what I’d thought was a decent piece of writing and reporting got the equivalent of a rewrite by a stoned high school student adapting a review essay for an overdue book report. So readers got this mangled account—including an incredibly confused idea of what the fault lines in contemporary libertarianism are about, assuming anyone cares about these internecine pissing contests—rather than a simple link to a more thorough treatment. While I appreciate the supportive comments, nobody should really be offended on my behalf here. Be offended that people who subscribe to the Paper of Record aren’t getting the quality of coverage they’re paying for because a couple of indolent hacks are too desperate to give the appearance of being real reporters to provide a reference and do original work.