February 14th, 2012 · 6 Comments
At regular intervals—too short for it to even be amusing anymore—we now hear that debates over Internet regulation would be more productive if only people would get it through their thick skulls that the Internet is not some special free-for-all zone. There’s no reason it can or should remain magically exempt from the rules that apply everywhere else (we are reminded) and it is absurd and mysterious that some people (we are assured) believe otherwise.
This is a fair point. But what about all these hippy-dippy Real World anarchists who think meatspace can remain immune to the rules any well-managed virtual community understands to be essential? How is it, for instance, that citizens are physically capable of injuring each other, regardless of whether they’ve opted in to player-versus-player? And what fool designed it so that my image is visible to all other users in the same city, even if we aren’t friends? You’ve even apparently got to jump through a bunch of hoops to get something called a “restraining order” just to implement a simple user block!
What will actually make debates over Internet regulation more productive is universal recognition that the first paragraph is exactly as dumb as the second. (Possibly more so, since the second at least hints at some interesting possibilities.) You cannot implement an analogy. The rules that you’d want to apply if you could make it so just by wishing are not always the rules it is wise or feasible to attempt to actually put in place, once you’ve factored in the probable efficacy of that attempt and its unintended side-effects. Both of these, alas, are determined by annoyingly stubborn “facts” about the nature of the technological context in which you want to enforce the rules.
Not everyone understands the intricate technical details of how packet-switched digital networks function, and not everyone needs to. But if you truly don’t comprehend that “closing down an illegal shop” is not actually the same as—and in every possible way a pretty awful metaphor for—“getting thousands of ISPs to implement DNS filtering at the domain level,” you should quietly recuse yourself from Internet policy debates until you do understand the difference. And if you find yourself suggesting that Google “helped overthrow the head of an entire country in a weekend,” and therefore must simply lack willpower when they say they can’t automatically screen out trademark and copyright violations, perhaps you should think twice about sitting on committees that vote on Internet legislation. I say this, incidentally, as a pure policy wonk who hasn’t done anything remotely resembling “network administration” since I was a teenager running a dial-up BBS out of my bedroom. You can pick up enough to follow along at home without going to MIT.
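For those following along at home, a minimal sketch of what domain-level DNS filtering amounts to may help. Everything in it (the domain, the blocklist) is a hypothetical stand-in, not the contents of any real order:

```python
# A toy resolver check: domain-level "filtering" just means the resolver
# declines to answer for names on a blocklist. Domain and list are
# hypothetical stand-ins.
import socket

BLOCKED_DOMAINS = {"rogue-site.example"}  # imagine a court-ordered list here

def resolve(hostname: str):
    """Return an IP address for hostname, or None if the blocklist covers it."""
    if hostname.rstrip(".").lower() in BLOCKED_DOMAINS:
        return None  # a non-answer; the site and its content are untouched
    return socket.gethostbyname(hostname)
```

Note what the sketch makes plain: nothing gets “closed down.” The content still sits at its IP address, and a user who switches to an unfiltered resolver, or just types the numeric address, walks right past the block. It’s less a padlocked shop than a phone book with a page torn out.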
Most World of Warcraft players at least understand that it’s only the code of the game that enables adolescent fantasies of magical omnipotence to be played out without consequence—and that the laws of physics are unlikely to be so obliging. They understand that their problems can’t be banished just by reading a few words from an enchanted parchment—even if we feel that’s how the world ought to work. I live in perpetual hope that legislators will someday grasp this point as well, and realize that not all resistance to regulation is born of a mean-spirited hatred of magic.
February 14th, 2012 · 7 Comments
When I heard that DC Comics was planning a series of prequels to Watchmen, my first reaction was the one seemingly shared by most fans of the seminal graphic novel: “For the love of God, why?” Satirists have had plenty of fun contemplating the schlocky derivatives that might be spun off Alan Moore’s masterpiece for the sake of a quick buck, but it had gone unsullied so long that many of us had begun to imagine that taste might have scored a rare victory over avarice this time around.
My second thought also appears to have been fairly common: Moore is probably the last person in any position to wax righteous on this point. With the conspicuous exception of V for Vendetta (and arguably the historical fiction From Hell) his most famous works have involved re-imagining characters created by others. Moore cut his teeth on Mick Anglo’s Marvelman (AKA Miracleman), and was introduced to American readers by his brilliant run writing Len Wein’s Swamp Thing. His Lost Girls cast beloved heroines of children’s literature in what Moore himself characterizes as “pornography,” while The League of Extraordinary Gentlemen mashes up classic pulp adventure stories to give us a 19th century Avengers, with Mr. Hyde, Captain Nemo, and Allan Quatermain filling in for the Hulk, Tony Stark, and Captain America. Watchmen itself was originally supposed to star a roster of Charlton comics characters that D.C. had acquired, including Steve Ditko creations Captain Atom and The Question (ultimately reimagined as Doctor Manhattan and Rorschach).
My third thought, however, was that looking at why he didn’t end up using those original Charlton characters—and instead gave us their vastly more interesting close cousins—might shed some light on why fans are so dismayed in this case by the quintessential comics ritual of passing an iconic character to a new generation of writers. The problem was that it quickly became clear to the honchos at DC that Watchmen would break their new toys. It would have ended with (spoiler alert) Thunderbolt as a mass murderer, The Question a bloody smear in the snow, Blue Beetle as an aging husband, and Captain Atom an autistic spacegod. These were not promising starting points for ongoing use of the characters.
The classic comics solution would have been to simply set the events of Watchmen in an alternate continuity or parallel timeline, leaving the mainstream versions of the characters untouched, but at the time, D.C. had just launched its Crisis on Infinite Earths, which was to climax with the compression (since undone) of D.C.’s increasingly cluttered multiverse into a single, more manageable universe. Even had that not been the case, though, it seems inevitable that either (1) the power of Watchmen would have been diluted by relegating it to a mere “What If” story about characters who continued to have “ordinary” superhero adventures in main continuity, or (2) more likely, the power of the Watchmen narrative, and the characters’ actions and attitudes within it, would have overwhelmed any later attempt to portray them differently.
This ultimately brings us to an important distinction between standard, self-contained fiction and what I’m inclined to call “Extended Universe” narratives. By “Extended Universe” fiction, I mean the growing body of fictional continuities that are understood to provide a unified context—though just how unified varies from case to case—for stories told across many years or decades by many authors working in many different media. The two major comic book universes—D.C.’s and Marvel’s—are obvious examples, as are the various tangled continuities of Star Wars, Star Trek, Doctor Who/Torchwood, and perhaps Joss Whedon’s “Buffyverse,” among many others. The larger background story is often referred to as a “mythos,” which seems appropriate enough, because one of the earliest forms of human narrative—myth—embodies just this kind of collaborative, cross-generational storytelling.
What these have in common is that they’re united—however unstably—by the idea of a single “official” or “canonical” continuity that isn’t necessarily tied directly to a single creator. If you look, by way of contrast, to the Sherlock Holmes stories, everyone agrees that the “canon” consists of the original novels and short stories written by Sir Arthur Conan Doyle. If Alan Moore or Nicholas Meyer or Michael Chabon or Guy Ritchie later decides to tell new stories involving Doyle’s characters, we understand these as new and independent tales, which don’t affect what “really” happened to Holmes in the original story. By contrast, if later writers decide, with the blessing of the official keepers of continuity, that James T. Kirk’s middle name is “Tiberius,” that Spider-Man’s parents were secret agents, or that Green Lantern rings are vulnerable to the color yellow because of a demonic impurity, then that is the “real” version of events and always was—even if the original authors of those characters hadn’t envisioned any such thing. This is why Star Wars fans reacted with such annoyance at George Lucas’ decision to explain The Force as a scientific rather than a spiritual phenomenon, generated by microscopic “midichlorians”: Because Lucas can issue an “authoritative” decree that changes how the events of the beloved original trilogy are to be interpreted and experienced.
Some such considerations appear to have transformed Moore’s own attitude toward film adaptations of his work. His initial reaction was professed indifference to movie versions: The comic is the comic, and stands on its own, untouched by however good or bad a movie might be. His shift from indifference to hostility may reflect a recognition that this isn’t really the case, at least when the movie version is perceived as The Movie Version—a unique, official cinematic rendering, as opposed to simply one of many possible motion picture interpretations. If it’s The Movie, then however Doctor Manhattan’s voice sounds there (for instance) is what the character “really” sounds like in many people’s minds.
This might all seem a little silly: Obviously none of this stuff “really” happened, and so individual fans are free to disregard the events of the Star Wars prequels if they so desire. For that matter, they’re free to hit “pause” fifteen minutes from the end of Jedi and imagine (or film) their own ending where Luke and Vader do join forces to overthrow the Emperor and rule the galaxy as father and son. But fiction becomes more credible and immersive when we abandon a measure of control—imbuing it with the same obstinate independence as physical reality. And that means, in part, surrendering to local conventions about what counts as a “true” event within the extended narrative, as opposed to (say) fanfic or authorized “alternate” versions. (The lines, of course, are blurry. Batman: The Animated Series took place outside mainstream D.C. continuity, but the character of Harley Quinn and the vastly-more-interesting revised origin of Mr. Freeze were both rapidly imported from the former into the latter.) And all this, I think, brings us back to what bugs people about prequels to Watchmen.
Whatever uncertainty there may have originally been for the audience, Alan Moore told a superbly self-contained story with exactly the right amount of narrative negative space. Few authors have Moore’s finely tuned sense of the unique powers of comics as “sequential art”—frozen images where the “action” is provided by what readers supply to fill that razor-thin, infinite gap between frames. But this is true in some form or another whatever the medium, as every good storyteller knows. It’s what renders the sequence excerpted above—maybe my favorite in the whole of Watchmen, though competition’s stiff—so powerful. What if Jon had stayed to explain his remark? What if we’d continued to a tedious illustration of exactly what he meant, and how inevitably Veidt’s hard-won peace would decay? It would be inane; the scene works because all that is left implicit.
Like its own central chapter “Fearful Symmetry,” this is a narrative in exquisite, delicate balance. And why is it constructed that way? Because Moore didn’t use the Charlton characters. Because the Charlton characters had to be good for further adventures, but the protagonists of Watchmen (who are not, nota bene Dan DiDio, called “The Watchmen”) could be single-use. And everything about the reader’s experience of the book turns on this being the case—on Watchmen not being, as many superhero comics are, a chapter in a series that continues indefinitely. That “Nothing ends, Adrian” gets its power because we understand that Manhattan’s denial actually is the end of our story—that there isn’t any “Stay tuned for issue 13, where we find out just what Jon meant!”
In other stories, the reverse is true: Batman defeats the Joker—again, and we knew he would—but never for good. He’ll escape from Arkham and the battle will start over, because the only final victory of order over chaos is the heat death of perfect, entropic equilibrium. Like a Wagner motif, it doesn’t have its full meaning in isolation, but only as one permutation in a series. Pick your favorite Batman/Joker story, and suppose it really ends there: Joker dies (for good!), Batman maybe beats up on ordinary thugs for a couple years before a silver-haired Bruce Wayne retires or finally succeeds in joining his parents. However masterfully written that last story is, it would be a cheat and an anticlimax. To say (as we accurately do) that their conflict is iconic is just to say that it transcends the specific: It’s always still happening now, whenever you’re reading. Nothing ever ends.
The real trouble, then, isn’t so much that nobody should ever do to Alan Moore characters what he did to Arthur Conan Doyle and Bram Stoker and Jules Verne’s characters. The trouble is that Moore told a self-contained story in a corporate and cultural context where the Extended Universe narrative is the norm, and where readers are primed to treat continuations of a story, if they have the proper institutional imprimatur, as “real” for that fictional context. Nobody reads The League of Extraordinary Gentlemen back into 20,000 Leagues Under the Sea in any comparable way.
The irony here is that you actually could have sequels and prequels and parallel versions of Watchmen without damaging the reader’s experience of the original. It’s just that D.C. Comics can’t be the one to do it, because the narrative worlds would be too close to keep them from colliding—a recipe, as every D.C. reader knows, for a Crisis.
February 11th, 2012 · 5 Comments
“Traveling Light in a Time of Digital Thievery,” Nicole Perlroth, New York Times 2/10/12:
When Kenneth G. Lieberthal, a China expert at the Brookings Institution, travels to that country, he follows a routine that seems straight from a spy film.
He leaves his cellphone and laptop at home and instead brings “loaner” devices, which he erases before he leaves the United States and wipes clean the minute he returns. In China, he disables Bluetooth and Wi-Fi, never lets his phone out of his sight and, in meetings, not only turns off his phone but also removes the battery, for fear his microphone could be turned on remotely.
“In China, business travelers take extreme precautions to avoid cyber-espionage,” Ellen Nakashima & William Wan, 9/26/11:
“I’ve been told that if you use an iPhone or BlackBerry, everything on it — contacts, calendar, e-mails — can be downloaded in a second. All it takes is someone sitting near you on a subway waiting for you to turn it on, and they’ve got it,” said Kenneth Lieberthal, a former senior White House official for Asia who is at the Brookings Institution. [...]
Travelers there often tote disposable cellphones and loaner laptops stripped of sensitive data. Some U.S. officials take no electronic gear. [...] Another common tactic is to remove batteries from cellphones, which makes digital tracking more difficult and prevents microphones from being activated remotely.
Hackers’ preferred modus operandi, security experts say, is to break into employees’ portable devices and leapfrog into employers’ networks — stealing secrets while leaving nary a trace. [...]
“If a company has significant intellectual property that the Chinese and Russians are interested in, and you go over there with mobile devices, your devices will get penetrated,” said Joel F. Brenner, formerly the top counterintelligence official in the office of the director of national intelligence.
On the eve of the 2008 Beijing Olympics, Joel Brenner, then the U.S. national counterintelligence executive, first issued government safety guidance to overseas travelers, with such tips as: “If you can do without the device, don’t take it.” Though no country was named, “it was really directed at countries like China and Russia,” Brenner said in a recent interview. [...]
“What’s at stake is not only the security of your current communications, but the security of your secrets back home,” said Brenner, who advises clients on data security at the law firm Cooley LLP. “That’s the real danger.”
This is actually a good deal less egregious than some of the Times’ other recent rewrites, and in any event, I don’t think there’s anything fundamentally wrong with looking at a competitor’s article and saying: “Hey, we should really do our own version of this”—especially when, as in this case, they’ve updated it with some interesting new information. But is it really so hard to just link back from the online version so the reader can see what you’re building on? It’s kinda standard practice in 2012—which makes not doing it seem like a conscious, petty refusal to admit that occasionally someone else gets to an interesting story first. The Post did a strong story, and it’s great that Times readers get the benefit of an updated version—just be adults and acknowledge that’s what you’re giving them.
This is probably the least interesting (because it should be so self-evident) and yet most important paragraph in a must-read Cory Doctorow essay:
In short, [proponents of more aggressive copyright enforcement] made unrealistic demands on reality and reality did not oblige them. Copying only got easier following the passage of these laws—copying will only ever get easier. Right now is as hard as copying will get. Your grandchildren will turn to you and say “Tell me again, Grandpa, about when it was hard to copy things in 2012, when you couldn’t get a drive the size of your fingernail that could hold every song ever recorded, every movie ever made, every word ever spoken, every picture ever taken, everything, and transfer it in such a short period of time you didn’t even notice it was doing it.”
I’ve found myself stressing this to reporters who call to ask about what we should do “instead” of SOPA and PIPA, because the framing of this entire debate remains mindblowingly shortsighted. In five years, regardless of anything Congress does now, the current round of garment-rending over “rogue websites” is going to seem as comically quaint and irrelevant as old jeremiads against the libidinous excesses of jazz dancing and Elvis lyrics. The big, dumb, obvious technological fact that an awful lot of smart people seem reluctant to grok is this: Copying and sharing information is vastly cheaper and easier than it has ever been at any time in human history. It is also vastly more difficult and expensive than it will ever be again.
This weekend I finally upgraded to the latest version of Mac OS X, Lion. Owing to some weird decisions by Apple, I had to install it from physical media: An 8 gigabyte “thumb drive,” which is really rather misnamed, because it’s actually about the size of two wooden matchsticks. The year I was born, that amount of data storage space—without any software—would have cost more than the office building you’d need to house it. The year I got my driver’s license, it would’ve cost about as much as a good used car. In 2012, 8 gigs of storage is the kind of thing you give away as freebie conference schwag—like a logo-embossed pen, except a good deal smaller. If I decide to use it for something else now that I’m done installing the OS, it will easily accommodate about 4 feature-length films in high definition. IBM recently announced a breakthrough in storage technology that could increase current capacity by a factor of 100 within a few years, which means instead of just carting half our music libraries around with us in our pockets, we’ll have entire music libraries, and high-def video libraries to boot.
One of the features I noticed they’d added in Lion is AirDrop, which establishes an ad hoc peer-to-peer WiFi connection with other nearby Apple devices. This isn’t particularly useful for my desktop, since anyone who’s actually in my apartment is probably already on my home WiFi network, but one can imagine it being awfully handy for mobile devices. “What am I listening to? Hang on, I’ll beam it over.” For transfers outside physical proximity, the next generation wireless data standard recently approved by the International Telecommunication Union maxes out at about a gigabit per second. In practical terms, with real-world throughput running at a fraction of that theoretical peak, that means about a minute to transmit an uncompressed music CD (and much shorter for, say, MP3s at the bitrate you get from iTunes) or 90 seconds for a high-definition TV episode.
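The back-of-the-envelope math, for the curious (the effective-throughput figure below is my own assumption; real wireless links deliver only a fraction of their theoretical peak):

```latex
% An uncompressed audio CD holds about 700 MB, i.e. roughly
% 700 x 8 = 5,600 megabits of data to move.
\[
  t_{\text{peak}} \approx \frac{5.6\,\mathrm{Gb}}{1\,\mathrm{Gb/s}} \approx 6\,\mathrm{s},
  \qquad
  t_{\text{realistic}} \approx \frac{5.6\,\mathrm{Gb}}{0.1\,\mathrm{Gb/s}} \approx 56\,\mathrm{s}.
\]
% At an assumed ~100 Mb/s of real throughput you get the "about a minute"
% figure, and the same rate moves ~1.1 GB in 90 seconds, a plausible size
% for a compressed high-definition TV episode.
```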
Existing online social networks, with near universal adoption in many social circles, already provide a trust infrastructure for limited sharing that will make these kinds of transfers almost impossible to police—or even reliably detect. In a world where every teenager in the country is carrying a pocket-sized server, and encrypted wireless VPN relays can run out of palm-sized cubes, an enforcement strategy based on raiding data centers is just going to look cute. Legislators who think “the Internet” means “the Web,” who are too fixated on the problems some lobbyist is complaining about right now to think two steps ahead, are in for a rude awakening. They’re in the grip of the enforcement fantasy: The misapprehension that technology is going to stay still long enough for traditional, targeted law enforcement approaches to effectively limit the scope and scale of copying.
That’s not to say that nothing can be done to avert a near-future world of largely unregulated and unregulable copying and sharing. If we were willing to implement a comprehensive scheme of innovation-stifling technology mandates and pervasive surveillance so absolute as to make the People’s Republic of China look like Burning Man, it could at least be delayed. But I assume that the United States is not yet prepared to completely betray its basic principles to safeguard the profitability of Friends reruns.
If we’re not willing to be China, though, then all these discussions about “what we’re going to do” about piracy are just the wonky equivalent of fanboy debates about whether Wolverine would beat Batman in a fight, for all the bearing they have on reality. What are we going to do that makes a long-term difference? Nothing. Anyone who wants to copy stuff without paying for it can do so easily, and it only gets easier and faster from here. Finding this morally outrageous or throwing a tantrum about the deep unfairness of it all won’t make it less true, though the tantrum might break a vase or two.
A slightly more Zen approach would be to “accept the things you cannot change,” as the coffee mug has it, and take the opportunity to step back and reevaluate. We have a legal structure for incentivizing creativity that makes copying and public performance the key points of regulatory intervention. There isn’t some deep moral reason that it’s these points and not others. There are lots of other ways to enjoy creative works without paying the creator, after all: Borrowing a copy, buying used, watching at a friend’s house, DVRing the broadcast and skipping all the commercials, incessantly singing (to yourself or with a friend) that catchy tune you overheard in the cab. Nobody tries to claim those are “stealing,” mainly because we’ve decided not to try to regulate those activities.
We decided to regulate copying instead, because copying was a lot easier and cheaper to regulate when we wrote the copyright statutes. Copying a book or record on a mass scale, unlike lending or singing in the shower, was not the kind of thing an ordinary person had the necessary equipment for—and the equipment tended to be bulky enough that you could usually track it down without having to pry into a lot of homes (and bathrooms). But the thing we decided to regulate because it was rare and expensive is now as cheap and ubiquitous as all the other stuff we didn’t regulate because it was cheap and ubiquitous. The good news is, most people are still glad to pay for the content they really like, if it’s provided in a convenient form and at a reasonable price, even when they can (or did!) easily copy it free. But maybe that’s not enough, and there are other points of regulatory intervention that will help creators internalize enough of the value of their output to make the investment worthwhile. That’s an actually productive subject of inquiry, but it’s not one anybody’s putting much effort into as long as they remain in the grips of the enforcement fantasy.
January 24th, 2012 · 4 Comments
I want to dilate slightly on a point I made in a recent post at the Cato blog about the Megaupload takedown and online innovation:
The last innovation is always safe. That’s why it’s easy to claim concrete examples of the harm regulation might do are hyperbolic fearmongering: Nobody’s going to shut down YouTube or Twitter now, because we’ve already seen the incredible value creation they enable, even if they also make it a bit easier to infringe copyrights. And anyway, the success stories eventually get big enough to afford their own fancy lawyers. It’s the next platform that we risk strangling in the cradle, because every new medium starts out recapitulating old media content before it becomes truly generative. Early radio is full of people reading newspapers and books out loud. Early TV and film looks like what you get when someone points a camera at a stage play.
An important thing to understand about innovation that involves computing technology, and in particular innovation in Web platforms, is that it’s user-driven to a much greater, much faster extent than we’re accustomed to. It is, of course, impossible to predict how a major new technology like the car or the cell phone will change society over the long term, but you have a pretty good idea what the technology is for at the time it’s introduced. I doubt Ford anticipated drive-in theaters, but he knew that the Model T was a machine for getting you from point A to point B.
Online innovation isn’t really like that. The most important new platforms of the Internet era have essentially opened up a space and seen what thousands or millions of users communicating with each other want to do with it. Think of the early phases of Twitter’s popularity, when there were plenty of pundits scoffing at this fad for letting people know what you had for breakfast—which at first was not a wholly inaccurate picture. Nobody anticipated that it would play a pivotal role in global anti-authoritarian revolutions, or that CNN would soon be mining it as a kind of realtime Zeitgeist thermometer during debates and primaries.
On the Internet, you don’t know what a new technology is for until you see what people do with it. Biologists have a saying: “Evolution is smarter than you are.” They mean that even a fundamentally blind process of trial and error, iterated over millions of years, will come up with better solutions to adaptive problems than even the cleverest primate product of that process. Smart tech entrepreneurs know how to take advantage of the corollary: The Internet is smarter than you are. Online innovation is a collaborative process, and the most interesting uses the users find for your platform won’t necessarily be the ones you intended. Indeed, that’s the guiding idea behind the “end-to-end principle” that has made the meta-platform of the Internet itself so incredibly generative. The pipes are, more or less, dumb: They know how to pass packets. Everything actually interesting about the Internet arises from what people do on top of that dumb infrastructure, and it works because the pipes are basically open. A coder with a new idea doesn’t have to convince someone to enable the functionality they’re interested in implementing—a feed of 140-character messages? come on, who needs that?—they just try it, and the ideas that provide value spread like wildfire.
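To make that concrete, here is a minimal sketch of the idea (a purely hypothetical toy, not anyone’s actual service): a brand-new “protocol” for 140-character updates, implemented entirely at the endpoints over ordinary sockets. The network in between needs no upgrade, and nobody’s permission is required.

```python
# A toy "140-character update" protocol that lives entirely at the endpoints.
# The pipes below it just pass packets; the port number is an arbitrary choice.
import socket

PORT = 9140

def serve() -> None:
    """Accept one connection, truncate the message to 140 characters, echo it back."""
    with socket.create_server(("", PORT)) as srv:
        conn, _addr = srv.accept()
        with conn:
            text = conn.recv(4096).decode("utf-8", errors="replace")
            conn.sendall(text[:140].encode("utf-8"))  # the entire "protocol"

def post(update: str, host: str = "localhost") -> str:
    """Send an update to a peer and return what the peer accepted."""
    with socket.create_connection((host, PORT)) as s:
        s.sendall(update.encode("utf-8"))
        s.shutdown(socket.SHUT_WR)  # signal end of message
        return s.recv(4096).decode("utf-8")
```

The dumb infrastructure neither knows nor cares that a new application exists; that indifference is exactly what the end-to-end principle protects.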
One corollary of this is that the initial phase of deployment for a new platform is typically going to be the “what I had for breakfast” phase—the experimental phase when people are trying out a lot of different things and figuring out what the platform is good for. And very often, the first thing it will be good for is copyright infringement.
Consider YouTube for a minute. Seven years in, we see a dizzying array of creative purposes YouTube is good for. There’s professional quality programming produced specifically for YouTube, fantastically creative amateur mashups, lip dub videos, aspiring Biebers singing cover songs, amateur talk shows and game reviews and diaries, cute kittens and playful toddlers, kids showing off their light-saber skills, and a surprising number of people just unpacking the loot from their latest trip to the mall. We have an online video culture—a meme culture, if that’s not too redundant—that provides a thousand models for the kind of thing someone with a webcam and consumer video editing software might want to do with the platform… along with the realization that you, too, could even create a whole new genre.
In 2005—on YouTube, day one—none of that existed. We didn’t even have a mental model of what mass amateur video creation would look like, never mind the actual content to populate the platform. What did exist was a lot of copyrighted music and video sitting around on people’s hard drives. Needless to say, it would not have been feasible on day one for the platform to effectively pre-filter every user upload and determine which videos infringed someone’s copyright. A filter algorithm wouldn’t be able to tell the difference, up front, between Fair Use and simple infringement, even if it had amassed in advance an enormous and constantly updated database of content samples to compare uploads against. (And of course, when the platform isn’t limited to a specific type of content—like video or music—a filter can’t do much of anything with a compressed or encrypted file.) The only way for the platform to launch at all, in anything remotely like the form we know it, was to launch open.
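To see why, consider the crudest possible filter, an exact-match fingerprint check. This is a hypothetical sketch of the general approach, not how any real platform’s system works:

```python
# A naive upload filter: reject files whose hash matches a database of
# known copyrighted works. All data here is a toy stand-in.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"...stand-in bytes for a copyrighted video file..."
KNOWN_HASHES = {fingerprint(original)}  # the "licensed content" database

def allowed(upload: bytes) -> bool:
    """Exact-match filter: block uploads whose fingerprint is in the database."""
    return fingerprint(upload) not in KNOWN_HASHES

assert not allowed(original)        # the verbatim file is caught...
assert allowed(original + b"\x00")  # ...but one altered byte slips through,
                                    # as would any re-encode or encryption
```

Real systems use fuzzier perceptual fingerprints, but the deeper problems survive: a compressed or encrypted blob matches nothing, and no comparison of bits can distinguish parody or commentary, which the law protects, from a verbatim rip.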
I don’t clearly recall what all was on YouTube in those early days, but I’d wager the proportion of copyright infringing material was a good deal higher than it is today, because on day one, that’s most of what there was to upload. For those who figured out how to download or capture the underlying video files, though, those clips would provide the raw material for a lot of the creative, transformative fair use output that is now virtually synonymous with the platform.
We can even see something analogous in older industries. Japanese companies started by making cheap knockoffs of American technology, then learned enough to start innovating themselves. One rather telling passage in a 2010 GAO report on counterfeiting and piracy suggests that the most important effect of “counterfeiting” is actually knowledge transfer: The thing that enables the copiers to rapidly transform themselves into legitimate competitors.
We can boil all this down to a few basic principles:
(1) In their early phases, new and open platforms for user-contributed content will tend to look a lot like tools for copyright infringement, because copying naturally precedes creation.
(2) Generative platforms need to “launch open” in order to fully harness the creative and innovative capabilities of a distributed user base.
(3) This creates a tradeoff between copyright enforcement and generativity. A regulatory regime that is more protective of one interest will tend to yield more “collateral damage” to the other.
(4) The collateral damage to copyright interests will always—and especially initially—be more visible than the damage to generativity. Copying that does happen is apparent, but there’s no way to detect user-driven innovation and creation that doesn’t happen. If you knew what to expect in advance, it wouldn’t be innovation.
(5) The value of existing intellectual property, its production mechanisms, and its distribution channels, is already apparent. The owners of those assets can readily identify themselves and each other, making it relatively easy to organize to advance their interests, and point to visible harms inflicted by open platforms. Beneficiaries of future generativity are less easy to organize because they don’t know who they are yet.
(6) This creates institutional and political bias in favor of protecting IP at the expense of generativity. The bias itself is undetectable because its primary manifestation is a relative absence of change.
(7) Attempts to strike a calculated balance between these interests on the basis of what we currently know about the uses of technology will tend to get it wrong. Hence SOPA, PIPA, PRO-IP, ACTA, and all the rest.
With Ron Paul’s now-infamous newsletters once again making headlines, I mulled whether I ought to revisit the issue, but ultimately decided that there wasn’t much to add to the long piece Dave Weigel and I wrote for Reason back in 2008, especially since I’d already elaborated in a couple blog postscripts written shortly after that article appeared.
Apparently, The New York Times agreed. On Monday, they ran a piece that amounts to a couple paragraphs of “fresh tops” aimed at trying to make the piece current, followed by a very light, very lazy rewrite of our article. It cites exactly the same essays and materials we did, takes for granted the identity of Paul’s chief ghostwriter and newsletter editor (which our article spent a fair amount of space publicly establishing for the first time), and even interviews exactly the same sources on the same subjects. (I’ll buy that any reporter would have phoned Ed Crane up; I’ll eat my left shoe if the authors had the first idea who Carol Moore or Mike Holmes were before they read our piece.)
Please don’t take my word for it, though: Compare for yourself. This isn’t a “follow-up” story. It’s a sloppy paraphrase whose authors expended the bare minimum effort of getting our sources to repeat quotes anew so they could use our material without citing the original source. Or very nearly: A few sentences from the very end, they acknowledge one tidbit was “first reported in Reason,” which is a rather brazen implicit deception, given that the same is true of almost everything else in the article. The sad thing is, if they’d been willing to open with a candid reference and link, they could’ve saved the time spent revisiting ground we covered and actually contributed something to the story.
Unlike Dave Weigel, I’m no longer a journalist, so I actually don’t care about being credited for long-ago reporting on a topic I had no intention of ever returning to. What I do care about is de facto deception of the audience by lazy journalists eager to pass off their regurgitation as reporting—which seems to be rather a habit at the Times. I imagine they get away with it because their scribes are normally lifting from people who aspire to work there one day—but since, again, I’m no longer a journalist, I don’t feel any particular qualms about pointing it out.
I still probably wouldn’t have bothered with a post just to lob a brickbat at some lazy journalism, but in this case it’s actually germane to the substance of the story. The implication, after all, is that even though the newsletters were a focus of national attention four years ago, Paul’s fellow travelers were content to gloss over this ugly history—quietly complicit in this pandering to racism—until the bold bloodhounds at the Times sniffed out the scoop. It looks rather different if the Times is just rehashing the highlights of what a libertarian magazine explored in greater detail years ago.
As an ex-journalist myself, I get that it seldom makes sense to waste valuable column inches stroking the ego of every hack whose work you looked at before tackling a topic. The hack’s mom may care, but the average reader certainly doesn’t. When it’s an isolated factoid or a quote, I say lift and godspeed. When you’re doing little more than recapitulating an earlier article wholesale, however, and when it is actually directly relevant to the story that this topic has been exhaustively investigated and discussed within the very movement you’re writing about… well, in those cases, if you don’t have any professional scruples, at least have a little fucking shame.
Addendum: Just to clarify, I’m not annoyed about our reporting being “stolen”—you can’t “steal” public domain facts—or looking to get some kind of acknowledgement by name, which would be of no particular professional value to me at this point (and probably generate unwanted interview requests on a topic I’m happy to be done with). I’m annoyed that what I’d thought was a decent piece of writing and reporting got the equivalent of a rewrite by a stoned high school student adapting a review essay for an overdue book report. So readers got this mangled account—including an incredibly confused idea of what the faultlines in contemporary libertarianism are about, assuming anyone cares about these internecine pissing contests—rather than a simple link to a more thorough treatment. While I appreciate the supportive comments, nobody should really be offended on my behalf here. Be offended that people who subscribe to the Paper of Record aren’t getting the quality of coverage they’re paying for because a couple of indolent hacks are too desperate to give the appearance of being real reporters to provide a reference and do original work.
December 19th, 2011 · 8 Comments
Proponents of ever stronger and longer copyrights, supported by ever more draconian enforcement mechanisms, like to toss around terms like “piracy” and “theft” for the emotional reactions they provoke. This is not, as Matt Yglesias notes, an aid to clear thinking: Copyright infringement and theft are both illegal—along with jaywalking, murder, and speeding—but they’re otherwise quite different acts, which are quite properly treated very differently as a matter of law, and prioritized differently as a matter of enforcement practice. The most obvious reason the analogy fails is that “theft” centrally involves depriving the owner of the thing that’s stolen. Copying a CD or DVD for a friend—or letting them borrow your copy, for that matter—may occasionally displace a legitimate purchase, but it doesn’t leave the artist or rightsholder with any fewer copies than they had before. That’s not to say copyright infringement isn’t also problematic, or something the government needn’t worry about deterring. Copyright maximalists insist on “theft” instead of “copyright infringement,” however, mostly because they don’t want people thinking too hard about the myriad ways these offenses are different, and how they might therefore call for different policy responses.
But if the defining characteristic of theft is that it deprives the victim of something they were entitled to use and enjoy, then there are things that can accurately be described as “intellectual property theft.” When legislators—many of whom now support censoring the Internet to stop “piracy”—rewrote the copyright bargain with the Sonny Bono Copyright Term Extension Act, they retroactively extended the monopoly of rightsholders over existing works by 20 years. That retroactive extension, of course, did nothing to incentivize new creation. And since economists have estimated that the present value of a copyright monopoly was already barely distinguishable from the value of an unlimited term, it’s doubtful that even the prospective extension bought us much additional creativity. But it did mean that the general public would be denied, for another 20 years, the free use of works that had been slated to fall into the public domain under the original copyright bargain. That sounds more like “theft” of intellectual property—and not just theft from a particular creator or industry, but from the whole of the public.
When rightsholders engage in copyfraud, insisting that other creators beg for permission and pay licensing fees for “fair uses” copyright law allows—and when skittish lawyers make that insistence effective—creators and their audiences are deprived of a use of that intellectual property they’re entitled to. When overbroad DMCA notices sent by careless lawyers remove original creations that make novel transformative use of prior work from the public Internet, users are robbed of art they are entitled to enjoy.
The pillaging of the public domain is real “intellectual property theft.” How about a crackdown on that?
November 21st, 2011 · 7 Comments
I’ll lay off Occupy and turn to the exponentially more objectionable treatment of them by authorities after this, but this is sort of what I was talking about in the previous post:
Authorities removed protesters Saturday evening from an abandoned school in downtown Washington that had been entered by members or sympathizers of the Occupy D.C. movement…. Earlier, it appeared that about a dozen people went into the three-story building, unfurling a large black banner from the roof of the three-story building, and vowing to stay inside the school until it is converted for community use.
One protester elaborates:
City officials have said they hope to have the building privately developed. Protesters said it should be kept for the public, perhaps to reopen as a homeless shelter.
“This building is not surplus, and we won’t allow the city to give it away or turn it into a boutique hotel,” said Abigail DeRoberts, a member of Free Franklin.
For those unfamiliar with D.C. geography, this is a little like demanding that a soup kitchen using only solid gold utensils be opened on Park Avenue. This is a huge property overlooking a park in the middle of D.C.’s downtown business district, and it’s not especially surprising that the city’s elected representatives think it maybe makes more sense to sell or lease it, reap a continuous stream of tax revenue, and use those funds to support social services—including homeless shelters elsewhere in the city. Maybe there are strong arguments against this, and if “occupation” is a short-term stunt to draw attention to the issue so that the public can be more engaged in further deliberation, fine, that’s how protest works. If “we won’t allow the city” means “we’ll exert political pressure by mobilizing opposition and advancing arguments,” fantastic.
But if that’s a “won’t allow” in the sense of actually trying to seize the space for the use your group thinks is best, I find myself again thinking: Wait, who elected you? There are cases where even a formally democratic decision might be so invasive of fundamental liberties that people are justified in attempting to just directly block implementation of that decision. But choices about how to use a public building don’t really seem to fall in that category. This is a decision involving a whole bunch of stakeholders—including those dozens of community activists and various folks who hate boutique hotels, but also developers and business owners and their employees and customers, surrounding businesses that might benefit from a hotel, beneficiaries of other programs that rely on tax revenue, and a broader taxpaying base (most of whom are unlikely to personally invest a whole lot of time and thought on this particular question). I’m familiar enough with D.C. local government not to be excessively sanguine about their making the right call, but they’re the ones actually elected to balance all those interests. So when a few dozen—or even a few hundred—people say “no, we won’t allow it” on the premise that they speak for “the community,” I think it’s fair to say: “Wait, no, you don’t.” And again, I get worried when a group is so convinced that it represents the authentic voice of the people that it thinks it has a mandate to override, as inherently illegitimate, any political decision it doesn’t like.
Almost everything about the execution of yesterday’s eviction of protesters from Zuccotti Park was an outrage, from the interference with reporters seeking to cover the event, to the needless destruction of protesters’ property, to Mayor Mike Bloomberg’s stunningly lawless disregard for a court order restraining the city. But on the underlying question of whether the city must allow any group to set up a tent city in public space indefinitely, I think Doug Mataconis gets it right: There’s no First Amendment right to camp out in a park, and no reason to think that there’s anything constitutionally offensive about a content-neutral rule designed to ensure that public parks can continue to be used as, well, parks. People, of course, have every right to speak their mind in public (or, in this instance, quasi-public) space. But laying down dozens of tents and announcing that you and your friends intend to live there indefinitely always sounded suspiciously like an attempt to effectively privatize that public space.
I’ve always had a similar reaction to that hoary protest chant: “Whose Streets? Our Streets! Whose Park? Our Park!” Here we’re supposed to understand that “our” means “the people” as a whole. But protesters—even when they call themselves “The 99%”—comprise a pretty minuscule fraction of a percent of the population of a city the size of New York. In practice, “our” means “this particular group of people,” even if they aspire to represent a much larger group. We don’t put expressive rights to a vote, fortunately, but it does seem like a whole bunch of democratically elected city officials are under the impression that their constituents want their parks to remain usable for traditionally park-ish purposes. Maybe they’re wrong, of course, or maybe that’s a pretext offered to squelch a threat to their corporate paymasters. But it always seems presumptuous when soi-disant populist movements, left and right, declare that “we the people” want this or that.
For most of human history, we’ve spent our whole lives in social clusters of a few hundred people—we’re basically hardwired for groups of that size. That makes it easy to look at a throng of a few thousand out at a rally and tell yourself, as another familiar chant has it: “This is what democracy looks like.”
Except, of course, it isn’t really. Or at any rate, it’s only a tiny part of what democracy looks like.
A small group of people self-selected for their commitment to some set of shared goals and values may be able to pick a set of slogans to chant in unison, or resolve their limited disagreements by consensus process. But real democracy in a pluralist society involves deep and often ineradicable disagreement—and not just on the optimal uses of public parks and other commons. It’s true, of course, that concentrated and wealthy interests routinely capture the apparatus of government, and use it to serve ends inimical to the general good. But a frame that sets up an opposition between “the 99%” and “the 1%” —or, if you prefer, between “Washington/media elites” and “Real America”—suggests a vain hope that profound political differences are, at least in some spheres, an illusion manufactured by some small minority.
Against that background, it’s instructive that so many OWS organizers have cited Tahrir Square as an inspiration. In much of the Arab world, after all, the problem isn’t so much resolving democratic disagreement as getting to the point where there are regular, free elections whose results are respected. However broken our system might be, we’ve at least gotten that far. In that context, though, once protest has successfully drawn public attention to an issue, it seems like the next step should be to get on with the messy and prosaic business of debating and deliberating on concrete reforms with those who have different views. If the people all (or nearly all) want the same thing, but an oppressive authority refuses to act on that shared desire, debate and deliberation are beside the point: There’s nothing to do but throw your bodies on the gears until the rulers have no choice but to comply. My sense is that many of the OWS folks think that’s more or less the situation we are in. Spend a few weeks in a self-selected community, and perhaps it becomes possible to believe that 99% of us really are all on the same page—or at least, would be if we weren’t brainwashed by the 1%. This has long been a major strain in conservative thinking: Everyone would see that our views are just simple common sense—obviously correct!—if not for a liberal media cabal systematically lying to people all day. Dark as this sounds, it’s utopian in one sense: It implies we’d all agree but for the malign influence of this or that small but powerful group.
I’m neither cynical enough to believe that our deeply flawed democracy is a complete sham, nor optimistic enough to hope the appearance of fundamental political conflict is a stage production masking an underlying harmony. But if disagreement is real—if large numbers of my fellow citizens sincerely hold very different views about what policy is best—then protest, however vital as a consciousness raising tool, can only be a preparation for the more humdrum enterprise of convincing your neighbors with sustained arguments (or being convinced yourself), electing candidates, and all the rest. To imagine protest not as prologue to politics, but as a substitute for it, suggests a denial of the reality of pluralism, and an unwillingness to find out what democracy actually looks like.