I’m doing a deep dive into the pre-history of the Fourth Amendment, and am continuously amazed at how perfectly the condemnations of the “general warrants” that incensed the Framers apply to the programmatic authority granted the NSA under the FISA Amendments Act. The legal challenge to that statute is currently hung up on questions of standing, which will shortly be before the Supreme Court in the case of Amnesty v. Clapper. The government’s position is that only those who can prove they’ve actually been wiretapped are entitled to challenge the sweeping and highly discretionary surveillance programs authorized by the law—and since FISA surveillance is permanently secret by default, nobody can have that proof unless the government volunteers it.
Which is funny (if you have a morbid sense of humor), because in one of the three seminal cases historians regard as the inspiration for the Fourth Amendment, Entick v. Carrington, Lord Camden’s ruling cites this exact situation as a key reason why general warrants should be forbidden:
[The general warrant] is executed by messengers with or without a constable (for it can never be pretended, that such is necessary in point of law) in the presence or the absence of the party, as the messenger shall think fit, and without a witness to testify what passes at the time of the transaction; so that when the papers are gone, as the only witnesses are the trespassers, the party injured is left without proof.
If this injury falls upon an innocent person, he is as destitute of remedy as the guilty: and the whole transaction is so guarded against discovery, that if the officer should be disposed to carry off a bank bill he may do it with impunity, since there is no man capable of proving either the taker or the thing taken.
Because a general warrant, unlike a specific or particularized one, left it to the discretion of the executor who would be searched, it would give rise to an intolerable scenario where many innocent people might be subject to the exposure or seizure of their private papers, with no realistic prospect of remedy for the invasion. The Framers of the Bill of Rights, naturally, found this intolerable too—and thus scholars pretty much universally recognize that the primary purpose of the Fourth Amendment was to prohibit such general warrants.
If the government’s argument is allowed to fly, we have to conclude that the Framers utterly failed in their attempt to remedy the kind of problem Lord Camden identified, because the problem just reappears at a higher level. General warrants, abhorrent for a host of reasons Camden and his contemporaries identified, are formally prohibited. But if you want to enlist the courts to enforce that prohibition, by challenging a law that rather blatantly establishes a system of general warrants, you’ve got to prove you were actually searched under one of those general warrants. When the digital papers are acquired, as the only witnesses are the eavesdroppers, the party injured is left without proof, and the whole transaction is so guarded against discovery that there is no man capable of proving either the interceptor or the thing intercepted.
Take a deep breath and appreciate the chutzpah involved here for a second. The government is effectively saying they’re immune from Fourth Amendment challenges because they have managed to create the exact, explicitly described scenario the Fourth Amendment was written to prevent. Maybe next they can pass a law making it illegal to publicly complain that your First Amendment rights have been violated.
September 4th, 2012
In the previous post, I suggested that thoughtful and educated adults rarely engage for very long in purely verbal disputes (which, alas, is not the same as saying such disputes are rare). When people don’t disagree substantively, but are only using words differently, a few minutes of argument should usually make this fact apparent to all parties. But smart people do often get mired in disputes that are partly verbal and partly substantive. Because it’s evident to everyone that there is a real disagreement—the dispute clearly is not merely semantic—the parties may overlook semantic components of the disagreement, and as a result fail to make headway or argue productively.
This is, I think, why you’ll sometimes see arguments between libertarians, conservatives, and progressives get stuck on what appear to be semantic points: Who has the “true” conception of “freedom”? Whose understanding of “coercion” is correct? People are reluctant here to simply eliminate the contested term and stipulate more precise ones that reflect their divergent conceptions, in part because these are terms with thick normative connotations and great rhetorical power. Members of our political and linguistic community tend to regard “freedom” as presumptively valuable, and “coercion” as presumptively bad, which makes us wary of abandoning (even for the sake of clarity in argument) a definition of “freedom” that tracks our convictions about what’s valuable, or one of “coercion” that maps onto our views about what practices and institutions are morally objectionable. To concede an opponent’s definition, we may fear, opens the door to sneaky arguments that trade on the normative connotations the term has in ordinary usage.
But this isn’t the only reason we might be reluctant to employ the “method of elimination” and simply stipulate new, purely descriptive terms corresponding to our divergent conceptions. Moral reasoning and moral argument often proceed from shared intuitions about core types of cases. The conviction that torturing innocent people for fun is wrong is, for most of us, more certain and stable than our commitment to any abstract principle or theory that purports to explain why this is the case, and any theory that fails to condemn such conduct will be, for that reason alone, unacceptable. We have many “thick” terms—terms which are partly descriptive and partly normative—precisely because we live in linguistic communities that share strong moral intuitions about types of conduct that are admirable or contemptible. So if the term “coercion” has a negative connotation, and it seems to comport with ordinary, colloquial usage to characterize a particular relationship as “coercive,” that’s a data point providing at least prima facie evidence that it’s in tension with our shared moral intuitions, and has features we would normally regard as objectionable without special justification. That’s hardly dispositive in itself—thick terms like “unchivalrous” can fall out of fashion because we come to see that they’re bound up with mistaken collective value judgments—but it’s not unreasonable to be wary of banishing, by neologism, whatever collective wisdom may be embedded in our ordinary linguistic practice. We can think of ordinary usage as establishing a burden of proof on someone who proposes introducing a distinction our colloquial usage fails to make, or ignoring one that it does typically reflect.
Whether for these reasons or out of pure stubbornness or misguided essentialism, then, we often choose to draw our battle lines around our words, arguing over whose conception of the contested term is “correct,” in the sense of tracking what is morally significant. This debate, though it is in one sense about language, is substantive rather than merely semantic. But in practice, it also means disputants are routinely trying to keep quite a few figurative balls in the air simultaneously. Sticking with the example of the debate over “coercion” in the workplace, the bones of contention may include:
- Which senses of “coercion” track our ordinary linguistic practice
- Which senses of “coercion” track normatively significant features of an action or system, or presumptively require special justification
- Whether our ordinary or colloquial use of “coercion” applies to a particular case or example
- Whether one or more narrower senses apply to that case
- Whether a particular case is morally objectionable or problematic
- Whether that particular case is objectionable (or acceptable) in virtue of fitting (or not fitting) one or more senses of “coercive,” or for some other reason.
Just to dilate briefly on that last point: An action or system may be coercive in a sense that presumptively demands special justification, but also be covered by such a justification. Provided we recognize the category of “justified coercion,” it would be a mistake to infer from a judgment that a case is morally acceptable on the whole that it must not involve (a normatively significant sense of) coercion. Conversely, a case may be morally objectionable without involving coercion, or for reasons distinct from whatever coercion it involves. If I make a fully-informed choice, from an attractive range of offers, to accept a job that involves obtaining supervisory approval each time I use the bathroom—and have sufficient resources that I could quit without serious hardship—it seems like a stretch to describe me as a victim of “coercion” in any ordinary sense. Nevertheless, if this requirement serves no significant business purpose, it seems open to moral criticism as a pointless intrusion on my dignity, autonomy, and privacy. Not every moral criticism can or need be shoehorned into the rubric of coercion.
In any event, when the points of contention are multiplied in this way, even very smart people are susceptible to confusion. Because the overall disagreement is clearly substantive and not merely verbal, and because more specific points of contention within that overall debate may arise for either semantic or substantive reasons, or for both at once, it will not always be obvious to all the parties which are the substantive and semantic components. That means it’s important to bear in mind that even the most manifestly substantive disagreement—a disagreement that is obviously not “purely verbal”—can still be partly verbal, and it often requires close attention to sort out which part is which.
For many people, philosophy is the ultimate exemplar of a useless discipline—or at the very least, high in the top ten. I recall that on the first day of my high school drivers’ ed class, our instructor began by asserting that this was probably the most practical subject most of us would study, in contrast to subjects like philosophy, which might be very nice to mull over on a lazy afternoon, but wouldn’t have much bearing on our daily lives. (Even at the time, something about this rang false. I didn’t then have the presence of mind to suggest that we do philosophy constantly—though mostly subconsciously, and therefore badly.) Perhaps I’m just rationalizing four expensive years, but readers will not be surprised to hear that I think otherwise. One reason the utility of studying philosophy isn’t always apparent, though, is that unless you go on to teach philosophy, you’re more likely to make routine use of the methods and strategies, the mental toolbox, that the training imparts than you are to deal explicitly with the traditional subject matter of philosophy. In the interest of not annoying our fellow humans more than necessary, most of us don’t call explicit attention to this or use philosophy jargon when we’re doing it—and often these strategies become so ingrained that their deployment is basically automatic. Since many of these strategies are useful in a wide variety of discursive contexts, I thought it might be nice to do an occasional series of posts highlighting useful ones when they occur to me. Maybe I’ve been doing this for years without quite noticing it—but giving these posts a common title might itself be helpful. If nothing else, I can turn them into a Kindle Single someday.
Let’s start with precisification—a process by which we try to dissolve merely semantic disagreements (and a surprising proportion of our most heated disagreements are at least partly semantic) by stipulating more precise definitions for contested terms that may be vague or ambiguous. I tried this in a recent post on a debate over “coercion” in the workplace that was playing out between the bloggers at Crooked Timber and Bleeding Heart Libertarians. Instead of arguing about which types of situations, actions, or social systems were “coercive” simpliciter, I suggested it would be more fruitful to eliminate the contested, morally-freighted term and replace it with a series of more precise terms, such as “exploitative coercion” and “baseline-sensitive coercion.” The disputants might continue to disagree about which forms of coercion were morally significant—serious ethical disagreements among educated people are seldom wholly semantic—but we could at least hope that there would be less disagreement about which precisified conception of coercion might be properly applied to different situations, or at least that the persisting disagreement would be more illuminating.
For this reason, I sometimes think of this as the elimination strategy: If disagreement seems to hang crucially on some contested or normatively loaded term, eliminate that term and see how it alters the conversation. (This does have its own pitfalls, because some terms are intrinsically normative, and cannot be usefully eliminated. The attempt to reduce normative terms like “good” or “wrong” to strictly descriptive conditions has come to be known as the “naturalistic fallacy,” though disagreement about whether it is actually a fallacy persists.) Elimination of this sort—not to be confused with “eliminationism,” which sometimes refers to the substantive denial of the existence of a class of entities, like moral facts or mental states—serves a couple of helpful functions. First, it forces everyone to make it clear to both themselves and others just what they mean. Every student of philosophy, starting with the hapless Athenians interrogated by Socrates, has had the disconcerting experience of discovering how hard it can be to clearly explain concepts they use every day, and had thought they understood completely. Second, it can clear away some of the emotional and normative baggage that inevitably attaches to terms like “liberty” or “coercion” even when we pledge with all sincerity to use them in some neutral and descriptive sense.
The late novelist Robert Anton Wilson advocated a rather extreme version of this strategy by urging the use of “E-Prime,” or English written without any form of the word “is.” Aside from being grammatically awkward, that’s probably neither necessary nor sufficient, even if it might be a useful exercise on occasion: You can make essentialist errors without using the word “is,” but happily, you can also avoid them while retaining it. The key move is to remember that saying “X is a Y” either asserts strict logical identity—which is only true in the trivial case when you’re asserting that an entity is identical to itself, or that one entity is known by two names—or, more often, is shorthand for ascribing a property that’s almost always more conditional or relational than the straightforward grammar of “is” implies. The Coke can on my desk “is” red, which is to say, under conditions of “normal” illumination, it will emit light of a wavelength that produces a phenomenal experience of “red” in a “normal” human observer when viewed through air, vacuum, clear glass, and so on. Any scientifically literate person will quickly acknowledge that this is what they “really” mean, implicitly, when they say “the Coke can is red.” Obviously nobody wants to lay all that out explicitly in normal conversation, both because it’s incredibly clunky and because there’s not much practical value, outside of dorm room bong sessions, in perpetually reminding ourselves that the “red” isn’t really in the can, but a mental property generated by an object-observer interaction that occurs within specific parameters. But in more theoretical discussions, even when at some level we know better, the beguiling simplicity of “is” grammar can obscure details that do make a difference.
So, for instance, you may recall a little flurry of debate a while back over the Republican rhetorical trope of characterizing Social Security as a Ponzi scheme, and the ensuing boomlet of essays and blog posts vehemently insisting that obviously the program is or is not an instance of one. A more productive frame might have been: In what respects can Social Security be meaningfully analogized to the classic Ponzi scheme, in what respects does that analogy break down, and on which dimensions would these similarities render the two susceptible to the same concerns or objections? That’s not a frame that lends itself to catchy slogans, and probably any thoughtful person who participated in that debate would readily agree that this was the real question under dispute all along. But I suspect you get a different and more instructive dialogue lingering a bit in that matrix of similarities and differences, rather than seeing it as a brief waystation on the road to the crucial all-things-considered verdict on whether it ultimately “is” or “isn’t.” If you don’t like this particular example, pick one of your own—there’s no shortage.
So that’s the first handy can-opener in our intellectual Swiss Army Knife: Haunted by a sneaking suspicion that a seemingly substantive debate is getting tangled in the weeds of semantics? Drop the contested term, and make up as many as you need to say what everyone means. If the hang-up depends on what the meaning of the word “is” is, give up on trying to determine whether X is essentially Y and look at all the varied dimensions of similarity and difference to see which actually matter; whether they add up to an overall equivalence probably won’t. These moves will seldom entirely dissolve a serious disagreement—thoughtful people will generally notice if a difference in views is merely and wholly semantic before they waste much time arguing about it—but they’ll often make the disagreement more productive.
Update: My ever-astute commenters note two related resources: David Chalmers’ essay “Verbal Disputes” (which similarly recommends what he calls the “method of elimination”) and a sharp post at Less Wrong cataloging “37 Ways That Words Can Be Wrong.”
Barack Obama’s recent “Elizabeth Warren Moment” at a speech in Roanoke has been getting plenty of attention, though the focus of much of the criticism seems misplaced. Here’s the full relevant passage:
Look, if you’ve been successful, you didn’t get there on your own. You didn’t get there on your own. I’m always struck by people who think, ‘Well, it must be because I was just so smart.’ There are a lot of smart people out there. ‘It must be because I worked harder than everybody else.’ Let me tell you something, there are a whole bunch of hardworking people out there.
If you were successful, somebody along the line gave you some help. There was a great teacher somewhere in your life. Somebody helped to create this unbelievable American system that we have that allowed you to thrive. Somebody invested in roads and bridges. If you’ve got a business, you didn’t build that. Somebody else made that happen. The Internet didn’t get invented on its own. Government research created the Internet so that all the companies could make money off the Internet. The point is, that when we succeed, we succeed because of our individual initiative, but also because we do things together.
As others have noted, it’s not entirely clear what’s being referred to in the “you didn’t build that” line that’s gotten most of the attention: It could mean “the business”—in which case it seems obvious that what he means is “you didn’t build that alone”—or it could equally plausibly mean “this incredible American system” and “roads and bridges” and all the other things that made it possible for that business to be successful.
Either way, it seems undeniable as a self-contained descriptive point: No man is an island, and the wealth and success we enjoy are all profoundly dependent on a context of social cooperation that makes it possible. In 15,000 B.C., you’d have been dirt poor however smart and hardworking you were.
Indeed, arguably it doesn’t go far enough, as it suggests some sharp distinction between things that involve help from “somebody else,” on the one hand, and on the other traits like being “so smart” or “hardworking,” which presumably each individual really is responsible for. But as John Rawls would argue, that’s hardly true either: If you’re “so smart,” well, “you didn’t build that,” ultimately. You aren’t responsible for the genetic endowment that enables high-level cognitive processing, the nutrition that fed your developing infant brain, or for the vast store of inherited knowledge that allowed you to take calculus for granted, rather than re-deriving it from scratch (once you’d invented “writing” and “numbers”). If you’re hardworking, then it’s a good thing that your upbringing and education imbued you with a work ethic, and that your brain chemistry (with or without aid from modern pharmaceuticals) is well-calibrated for sustained focus and impulse control.
All that’s true enough, but what is the point supposed to be? That we need to “do things together” to succeed? Well, obviously. But as Aaron Powell and Jason Brennan rightly ask, why should we assume that “we” and “together” has to mean “through government”? Why can’t “we” do things “together” by… well, forming businesses? Clubs? Civic organizations? Churches? If we’re assigning credit for past achievements—and implicitly, the debt we owe for them—why the federal government and not, say, our fellow citizens directly, or state and municipal authorities, or the whole of humanity engaged in mutually enriching global trade?
Of course, there are solid arguments why certain things we build together—roads, for one—will generally not be adequately supplied unless we do them through government. But as Aaron Powell points out, if we limit ourselves to these kinds of examples, we arguably end up with a pretty libertarian conception of government. Does Obama think he has to make the argument against anarcho-capitalism? I’m all for a more philosophical approach to modern political discourse, but starting from a foundational justification of the state in terms of provision of essential public goods seems to me to be taking it a bit far: Even we minarchist libertarians are already on board with that, and I hadn’t thought the anarchists a significant enough force in the current electoral debate to require an extended refutation. If that’s the justificatory strategy he wants to embrace, of course, I’ll take it—and look forward to the radical reduction in the size and cost of government.
Maybe, however, the point is more along the lines of the Nagel/Murphy “Myth of Ownership” argument: Since you didn’t earn whatever wealth you have all by yourself, without external help, you can’t really claim to deserve or be entitled to it—it’s a matter of luck you’re not one of those smart, hardworking people who didn’t get rich, after all—and so “we” (apparently meaning “the government”) get to take back however much “we” think is appropriate.
But this one proves rather too much, doesn’t it? You didn’t assemble your own DNA, or design your own reproductive system—your parents, and before them eons of evolution, built that. If you have religious beliefs and practices, you didn’t build those—you inherited them from a whole tradition of thought, transmitted through institutions made of other people, in books written by other people, created using printing technology invented by other people. You have views on politics you want to express? You didn’t build those alone either—and you probably even want to express them over a computer network whose very different and primitive early form the government subsidized decades ago.
As it turns out, we generally think we are entitled to control, or have rights over, a whole lot of things that are not (as Robert Nozick put it) “deserved all the way down” in the sense that we’re completely responsible for them—since, at the end of the day, nothing is “deserved all the way down” in that sense. It’s not that the “you didn’t build that” argument is wrong as a factual matter—it’s that it’s true about everything, and therefore doesn’t get you much of anything.
Here’s a little puzzle for the lawyers out there. It’s pretty obviously copyright infringement to buy a legitimate copy of a CD, make a copy, and then give away or sell the copy. But how about the reverse? Millions of people in their 20s and 30s have large CD collections acquired back in Ye Olden Tymes, when Australopithecus purchased music encoded on shiny plastic discs rather than our modern formats of MP3 and vinyl. Most of those shiny plastic discs are gathering dust in a closet somewhere, having long since been ripped to the hard drives and iPods from which we actually play them. Would it be legal to give away or sell those mouldering original discs while retaining the digital copy you actually listen to? And could the buyer then repeat the process, and on indefinitely?
Intuitively, the answer is: Of course not! That would be indistinguishable, in its practical effect, from the infringing act of selling or distributing an unauthorized copy. It would defy common sense for it to make a difference whether you keep the copy or the original. And when it comes to computer programs, the copyright statute is quite explicit: Archival copies have to be destroyed as soon as one ceases to be the lawful owner of the original purchased copy. On its face, you’d think the same principle would have to apply to sound recordings.
Except it’s not obvious from the statute that it does, technically, and some quick Googling didn’t turn up any cases directly on point. Here’s the probably-too-cute-by-half argument for why there might be a loophole here. The RIAA has been notoriously cagey about this—but even their attorneys have argued before the Supreme Court that the ubiquitous practice of copying a CD you’ve purchased to your hard drive (and from there to your iPod) is a protected Fair Use that doesn’t infringe the exclusive rights of the copyright holder. The Ninth Circuit implicitly relied on the idea that noncommercial copying for personal use is noninfringing in RIAA v. Diamond, which established (over the recording industry’s objections) that MP3 players are, in fact, perfectly legal.
Now the tricky bit. If the CD owner genuinely makes that copy purely for the purpose of personal use—not as step one in a larger plan to later sell the CD—that’s a lawful copy under the statute. The statute still limits the further copying, sale, and distribution of that lawful copy for anything other than personal use. But “retention” isn’t one of the exclusive rights protected by copyright. Except for the provision covering software backups, there’s nothing explicitly suggesting that a lawful copy can become unlawful without some further triggering action—again, copying, sale, distribution, rental, and so on. Now flip back to the original CD copy sitting in a closet. The owner has a clear right under the First Sale doctrine to sell that particular copy without authorization from the copyright owner. And I can’t find anything explicit in the statute requiring anything be done about lawful copies previously made when the original CD is legally sold or gifted. Obviously if you’re trying to follow the spirit of the law, you’d delete those lawful personal copies when you transferred ownership of your original copy—though that might actually be trickier than it sounds in a household where music is shared among several people with iPods and laptops. But technically, “not deleting a copy” doesn’t appear to be an action that violates an exclusive right of reproduction, distribution, or display. And all the other allowances for personal use hinge on whether a particular copy was lawfully made—not lawfully retained. (It’s implicit in the whole idea of a “backup copy” that one would continue to use it if the original is lost or destroyed, of course, so it can’t be that continued use of the music is contingent on physical possession of the original CD.)
Assuming the appropriate intention on the user’s part, then, there’s no infringement when the personal copy is made—unless we’re prepared to accept that essentially every legal CD owner is either an infringer or listens to their digital copies only at the sufferance of the RIAA, which could revoke permission at any time. Under the First Sale Doctrine, there’s no infringement when the original is sold. And there’s no infringement when the copy is retained, because “not deleting something” isn’t generally an act of infringement. Your gut says this overall pattern of conduct has to be infringing—even if in practice it would be impossible to police—but it’s hard to actually pinpoint when the infringement happens if you hew strictly to the statute’s definition of infringement.
Since the result is absurd on its face, I’m pretty sure this line of argument can’t be right. I’m just not sure where it goes wrong if we’re applying the actual statute and case law, as opposed to our intuitive feelings about what makes sense. Any IP lawyers want to throw me a life vest?
As I seem to have inadvertently sparked this Crooked Timber essay by Chris Bertram, Corey Robin and Alex Gourevitch (B/R/G) on workplace coercion, I feel like I should probably say something about it, though Matt Zwolinski and Roderick Long already have good responses you should probably just read instead. Some assorted thoughts:
First, there is a perennial risk in arguments like this of getting mired in a semantic debate about how we colloquially use the term “coercion,” or worse, what more strict definition we think it should have in the context of political theory discussions. That conversation is not interesting or useful, and is sometimes best avoided by just stipulating multiple more precise terms, however artificial it might seem. Off the top of my head, some different ways in which people use “coercion”:
- Moralized Coercion: This is the canonical “your money or your life” scenario where X threatens Y with consequence C unless Y performs action A, where it’s clear Y has a rights claim against the imposition of C by X, or at least, the imposition of C would be in some way wrong in itself. (Maybe we should call this “attempted coercion,” since Y might not care about C or consider it especially bad: The threat will actually be coercive when Y wants to avoid C strongly enough.) This is probably the most common sense of the word, but not particularly helpful when the right to impose C is precisely what’s in dispute. If we want a sense of “coercion” that’s useful for identifying which acts (or systems) are wrong, we need a descriptive definition where the natural properties that make an action (or system) coercive are also properties that make it unjust. Typically, however, we use the term “coercive” more broadly than that, to cover situations that most of us would not consider wrong or unjust—so at best, a well-tailored descriptive definition might help to pick out actions that are presumptively unjust.
- Physical Coercion: Maybe the least controversial descriptive use of “coercion” covers cases where physical force is used or threatened in a direct and imminent way. A bouncer ejecting a belligerent patron from a pub—or announcing that he is about to unless the patron leaves quietly—is fairly obviously employing coercion, though in many cases this will be justified coercion. But cases in this set are, by and large, wrong by default and require some special justification—typically involving (at minimum) some prior wrong act by the person at whom force is targeted.
- Subjective Coercion: Maybe the broadest sense in which we apply the term “coercive” is to characterize any situation in which Y feels as though they have no reasonable choice but to perform A in order to avoid C. If we want to strengthen this definition, we could impose a double subjective requirement: X threatens C with the intention that Y feel this way. As Zwolinski observes, however, there are also plenty of situations that might qualify as “coercive” in this sense which most of us would not regard as even presumptively wrong, or at least not wrong in a way that would justify a legal prohibition. Examples might include: “If you won’t sleep with me, I won’t continue our romantic relationship,” or “I’ll give you a million dollars if you do A.” One intuitive problem with drawing categorical normative conclusions from the subjective sense of “coercion” is that it doesn’t distinguish between threats and highly attractive offers.
- Baseline-Sensitive Coercion: If we want to build in that distinction, we might restrict the application of “coercive” to cases where (X intends that) C will make Y worse off than Y would have been if X and Y had not interacted at all, at least within some relevant domain. (Specifying what counts as the relevant domain will, of course, be a tricky problem in itself.) This would cover many cases of blackmail, where X threatens to reveal some highly embarrassing fact about Y unless Y does A. Many people regard such threats as wrong, or at least morally questionable, even when X has learned this fact by legitimate means, and when it would not otherwise be wrong to either publicize or keep silent about it. Whether we think cases covered by this use are wrong, however, may depend on what we think of X’s intentions: Pressuring someone to get their drinking under control by threatening to disclose their embarrassing behavior to a spouse will probably seem less problematic than using the same threat to extract a cash payment. Note that like the physical criterion, it does not necessarily depend on the idea of leaving the threatened party with “no choice” but to comply: A threat might be coercive in these senses even if many people would choose to accept the consequence rather than comply with the demand.
- Coercion as Exploitation of Objectively Unreasonable Alternatives:
Coercion that employs a baseline—and so distinguishes between threatened harms and conditional benefits—covers a lot of the territory. But there are also cases involving conditional benefits—cases in which the involvement of X leaves Y better off—that are often called coercive. We might imagine X comes across Y stranded on an island or injured in a crevasse deep in the woods, with no other hope of rescue, and offers assistance only on the condition that Y first signs over his life savings. If we think (as most people do) that there’s a general moral duty to rescue in those kinds of extreme circumstances, these cases are already covered by coercion in the moralized sense. But few think there’s a general duty to provide employment. For the kinds of cases the Crooked Timber folks are talking about, rather, the idea has to be that when the alternative to employment is sufficiently bad—regardless of whether the employer bears any responsibility for this being the case—it is unreasonable or unfair to take advantage of this by imposing certain terms, even if the underlying offer of employment is not obligatory. This isn’t obviously inconsistent with common usage, but is hard to provide general conditions for without circularity, because it builds in the normative concept of a “reasonable alternative.”
With all that out of the way, how can we best make sense of the intuition that certain kinds of demands that might be made of employees are, indeed, coercive in a morally impermissible way, and distinguish these from permissible conditions of employment? Most of us, for instance, would regard it as clearly coercive and wrong for a boss to demand sexual favors from an employee on pain of termination, but not to demand that the employee show up on time and perform ordinary work tasks competently.
One approach is to simply come up with a list of objectively reasonable and permissible conditions of employment, and another list of objectively unreasonable and impermissible conditions. But our intuitions about what kinds of demands are coercive are typically fairly context dependent. Most of us would regard it as coercive for a boss to demand that employees disrobe in the workplace in order to keep their jobs—but perhaps not if the workplace is a strip club, artist’s studio, or adult film set. (I assume we don’t think these are inherently impermissible categories of work.)
The obvious way to account for these reactions, I think, is suggested by Jessica Flanigan—and rejected a bit too quickly by B/R/G: A demand is coercive when it clearly falls beyond the bounds of what the employee has implicitly agreed to do when accepting a position, but the same demand may be non-coercive when it was a reasonably foreseeable part of the job, knowingly accepted along with the position. This fits well with the baseline-sensitive sense of coercion, because accepting a job typically entails declining other offers or ending one’s search for alternative forms of work, and generally arranging one’s life in ways that make it more costly to be forced to abruptly leave it than it would have been to seek alternative work in the first place.
The baseline-dependent sense of coercion then seems applicable, because (as B/R/G detail at some length) the opportunity costs incurred in accepting a job offer render the employee worse off, in at least one important respect, if they lose the job than they would have been had they instead declined it with full knowledge of the conditions that would be imposed. Something like this is, I think, the only intelligible way to explain our sense that it is coercive to demand (on pain of firing) that an existing employee take on a series of laborious tasks unrelated to their original job description, even where it would not be coercive to offer someone a job that consisted of performing those same tasks.
As B/R/G rightly point out, there are difficulties applying this basic idea to non-extreme cases, because it’s infeasible to explicitly spell out every little detail of what’s involved in a job in advance, and often difficult to determine what types of things were implicitly included or excluded. They may also be right that counting only the most “radical” changes in job description as coercive would wrongly exclude too many cases. But none of this really invalidates the basic idea, which at some level seems like it simply has to be right—or at least part of the correct account—assuming we agree that whether an employer demand is coercive will sometimes depend on whether it’s part of the job the employee agreed to do. Even when the job description is quite loose—as for “personal assistant,” say—there’s almost always some rough domain of tasks that are understood to be included, and others that are excluded. If the argument is just that the law can help codify these expectations—make clear what is presumptively excluded when not explicitly included—and otherwise check employer attempts to pull a Darth Vader, libertarians shouldn’t necessarily object in principle, as long as this isn’t a pretext for micromanaging people’s genuinely uncoerced agreements.
Of course, even if we accept this account, it just shifts the focus to the question of whether the initial acceptance of the job was truly voluntary, or itself coerced in some way. But note that if this is the locus of coercion, the laundry list of particular objectionable demands employers might make is rather beside the point. If I am coerced into doing a job, the specific tasks I’m asked to perform, or the rules governing bathroom breaks, might make the overall situation more or less unpleasant, but not more or less coercive. When acceptance is voluntary, conversely, whatever forms of monitoring or regimentation a job might entail are part of the totality of perks and burdens employees evaluate when deciding whether to take (or keep) a job. Constraints one person finds intolerably onerous (sit indoors at a desk under fluorescent lights for eight hours) another won’t mind at all; if they’re voluntarily accepted, there’s just not going to be any fact of the matter about which set of job constraints is “more coercive.”
So what determines when acceptance is voluntary? The trouble here is that it’s not obviously possible to settle this without answering foundational questions of political philosophy and just resource distribution. In this case, again, the concept of “coercion” would not be of much help in resolving theoretical disagreements, because it would depend directly on the differing background theoretical judgments. Note that we’re subtly shifting from the idea of coercion as something imposed intentionally by some particular person or entity to “coercion” as the antonym of a kind of idealized voluntariness—which is inevitable as we get to systemic questions, but also much more complicated. The typical employer has very little control over what a prospective employee’s alternatives look like, and so the relevant “coercion” is no longer something internal to any particular employer’s job offer. One virtue of the baseline-sensitive sense of coercion is that it’s relatively self-contained: To distinguish between a coercive threat and an offer, you ask whether its target is made worse off or merely no better off if they refuse. For particular offers within a social system, that means “better or worse off than they were before.”
But that won’t work if we’re trying to ask the question at the level of the social system as a whole. You’re better off complying with the dictator who monopolizes food production if the alternative is starving—but you’re better off still not having a food monopolist. Assuming we’re not talking about a monopoly employer, though, or about attempts to change the agreed terms of employment, the question is no longer really whether the employer is coercive, but whether the system is coercive. Which entails the question: Compared to what?
For B/R/G, a minimal condition for acceptance of employment to be voluntary is that the employee have a “reasonable alternative to work.” At a population level, though, there is no real alternative to work, reasonable or otherwise: The resources we require to survive have to be produced—or at the very least gathered and processed—before they can be consumed, which means (at least at our current technological level) some of us have to work. If the requirement to do work of any kind is coercive, it’s coercion imposed by nature. So it seems a little too stringent to say conditions on paid labor are coercive unless they would be accepted by people who could live comfortably without doing (commercially compensated) work of any kind. Your baseline for assessing voluntariness has to at least be physically possible when universalized. This makes it similarly problematic to use any kind of pre-property state of nature for the baseline, since it seems unlikely that anything remotely like modern population sizes could be sustained under those conditions. But then it seems we’re stuck with the baseline: “a modern economy under a just distribution, whatever that looks like.”
But maybe we can avoid the population-level question by just saying, informally, that “reasonable alternatives” just means being able to meet one’s basic needs, or maintain some threshold standard of living, without being subject to the will of any particular boss or small group of employers. This is more or less Hayek’s idea: It’s not “coercive” to have to find some willing buyer for your labor, so long as there isn’t any one person or cabal whose will you’re subject to. The more “good enough” options exist, the freer the choice to work for any particular employer. Where we put the threshold, however, will dramatically affect how often this condition is satisfied.
The implied threshold for Bertram seems to depend on expectations set by one’s current standard of living, which is intuitive in one sense but rather odd in another. A worker with a mortgage on a suburban house, several thousand in credit card debt, and a couple of college-bound kids would probably not consider an alternative job to be a “reasonable” or serious option unless it paid enough to support those obligations. This matches how people ordinarily talk when they consider their “realistic” alternatives, but it has the unattractive effect of making “coercion” partly a function of voluntarily assumed obligations—such that one person might be “coerced” into keeping a job that is voluntary for someone else by virtue of having taken on more debt.
Trying to develop this sense of “coercion” satisfactorily, in a way that has any normative bite, seems unpromising. It’s either unattractively subjective or ends up reducing to the question of what an economically just distribution looks like. That isn’t to say it can be entirely avoided: If we want to explain why (for instance) the denizens of Rothbard Island—where a single person or corporation owns all resources and is the monopoly employer—are subject to “coercion,” this is probably the sense we have to appeal to. But whatever story we tell there isn’t really going to be about employment as such, but about why it’s unjust for resources to be so concentrated in the first place.
Since everyone does seem agreed at least on the baseline-dependent account, however, there’s probably fruitful work to be done analyzing what kinds of changes in work conditions or requirements amount to an unfair attempt to leverage the opportunity costs assumed by the worker under the original terms of employment.
I’ve been reading Cryptography Engineering by Niels Ferguson, Bruce Schneier, and Tadayoshi Kohno, on the theory that someone who writes about privacy and surveillance as much as I do ought to have a somewhat more detailed understanding of how modern cryptosystems work, even if I’m never going to be competent to work with the actual code. At one point, the authors mention a potential problem with certain kinds of ciphers. Stream ciphers work by combining a secret cryptographic key with a (supposedly) unique number—a random shared string or a “nonce”—to generate a “keystream.” The keystream is then XORed with the plaintext message to produce the encrypted ciphertext.
For the non-computer-geeks: that just means that for every bit in the sequence of ones and zeroes that makes up the plaintext, if the keystream has the same value in that position, then the corresponding bit of the ciphertext will get written as a 0, and if they have different values in that position, the corresponding bit of the ciphertext gets written as a 1. (This corresponds to the logical operation “exclusive or”: It outputs a 1, meaning “true,” just in case one or the other but not both of the inputs is true.) So, for instance, the capital letter “A” is normally encoded as the binary string: 01000001. A lowercase “z” is represented as 01111010. If you XOR them together, you get: 00111011. If you XOR in the “z” again, you get “A” back out… but that assumes you know at least one of the two original pieces of the puzzle: There’s a vast number of different ways to XOR two bytes together to produce 00111011.
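The “A”/“z” example above is easy to check for yourself. Here’s a minimal Python sketch (my illustration, not code from the book) of the round trip: XORing the same byte in twice gets you back where you started.

```python
a = 0b01000001  # ASCII capital "A"
z = 0b01111010  # ASCII lowercase "z"

# XOR the two bytes together, as described above.
x = a ^ z
print(format(x, "08b"))  # → 00111011

# XOR "z" in a second time: the two applications cancel, recovering "A".
print(chr(x ^ z))  # → A
```

The second step is the whole basis of stream-cipher decryption: encryption and decryption are the same XOR operation, applied with the same keystream.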
In theory, there should be no way to reverse the process without knowing the keystream, which requires knowing the cryptographic key. But there’s a loophole: If the designer of the system messes up and allows that supposedly-unique “nonce” to be reused, then you end up with two messages that have been encrypted (XORed) with the same keystream. That doesn’t tell you what the keystream is. But if an attacker knows which two messages have been encrypted with the same keystream, he can just XOR those two ciphertexts together. The result is to mathematically cancel out the key, and give you the same result as if you’d just XORed the two original plaintexts together. Once you’ve got this, Schneier et al. warn, an attacker will often be able to easily reverse the process and decompose that into the two original messages—provided the original messages aren’t just random gibberish, but something that exhibits patterns, like written English. But they didn’t bother explaining exactly how this could be done, so I ended up spending 15 minutes doodling on a legal pad trying to suss out how an attack would work. Even some of my geekier friends seemed to think it wasn’t possible when I floated the question on Twitter—and for some cases, it won’t be. For instance, if the two original messages are identical—meaning they have the same value at every bit position—then the result of XORing them is always going to be a string of zeroes, which makes it obvious the two initial messages were identical, but doesn’t give you any hint at the content of the messages.
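The cancellation step is easy to demonstrate. In this Python sketch (again a toy of my own, with `os.urandom` standing in for a real cipher’s keystream output), XORing the two ciphertexts produces exactly the XOR of the two plaintexts, no matter what the keystream was:

```python
import os

p1 = b"attack at dawn"
p2 = b"defend at dusk"

# Reused keystream: the designer's mistake described above.
keystream = os.urandom(len(p1))

c1 = bytes(p ^ k for p, k in zip(p1, keystream))
c2 = bytes(p ^ k for p, k in zip(p2, keystream))

# XOR the ciphertexts: each keystream byte appears twice and cancels.
combined = bytes(a ^ b for a, b in zip(c1, c2))
assert combined == bytes(a ^ b for a, b in zip(p1, p2))
```

Note that the assertion holds for any keystream whatsoever, which is the point: the attacker needs to know nothing about the key to strip it out.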
Special cases aside, though, there definitely are some generally viable strategies for decomposing a file generated by XORing two messages—let’s assume they’re ordinary written English in ASCII character format—back into the original pair of texts. How would you go about it? I’ll update the post with the solutions that I came up with (or found online) later this weekend.
Update: I’m pleased, though not at all surprised, to see that I have a bunch of very smart readers who came up with basically all the strategies I did, and in some cases stated them with a good deal more sophistication than I could have. You’re probably better off just reading the comments, but I’ll summarize the basic ideas below the fold.
The only possible explanation is that Ridley Scott has a Duke Brothers–style bet running with George Lucas: Who can produce the most crushingly disappointing prequel to a beloved classic of late-70s science fiction cinema? There’s no other way to account for the tedious, incoherent two-hour train wreck that is Prometheus—a film whose powerhouse ensemble cast and stunning visuals ultimately fail to rescue it from a script that feels like it was fished from a dumpster of rejected SyFy Original Movie treatments.
How bad is it? The one and only character who doesn’t feel like a robot is, in fact, a robot. The protagonists are so flat and irritating that you end up rooting for the aliens to wipe out humanity—just as long as these guys die first. Nothing about this movie makes sense—not in the cool David Lynchian “makes you think” way, but rather in a “you didn’t think very much about this script, did you?” way. It’s what happens when nobody tells you your apres-bong dorm conversations were not, in fact, super deep. Below the fold, a partial list of plot holes and problems—containing many, many spoilers—just in case you’re still planning on throwing away your hard-earned money. If you’re really that intrigued by the premise, though, I’d recommend just watching Stargate again.
Under current law, this blog post will remain under copyright until 70 years after my death—which if I’m lucky means a century or more from the date of authorship. That’s an insanely long time when you consider that most economic studies have shown there’s almost no marginal incentive effect on production once you extend copyright terms much beyond the original span: 14 years renewable once, or 28 years total. Why would we needlessly lock away our own culture for so long?
One popular answer is the Mickey Mouse Theory. Though the effective commercial lifespan of the vast majority of copyrighted works is just a few years, a very few—like some of Disney’s iconic properties—continue to be immensely profitable for much longer. The owners of these properties then throw gobs of money at Congress, which ritualistically serves up a retroactive extension whenever these come within spitting distance of the public domain in order to protect their cash cows (or mice, as the case may be).
No doubt there’s something to that. Yet if that were the sole concern, you’d think the content industries would prefer a renewal structure that maxed out at the same term. The cost of renewing the registration of their profitable (or potentially profitable) works would be trivial for the labels and studios, but they’d also gain access to orphan works that nobody was making any use of. Our system, by contrast, seems perversely designed not just to provide extended protection for revenue-generating works, but to guarantee a minimal public domain.
Here’s an alternative hypothesis: Insanely long copyright terms are how the culture industries avoid competing with their own back catalogs. Imagine that we still had a copyright term that maxed out at 28 years, the regime the first Americans lived under. The shorter term wouldn’t in itself have much effect on output or incentives to create. But it would mean that, today, every book, song, image, and movie produced before 1984 was freely available to anyone with an Internet connection. Under those conditions, would we be anywhere near as willing to pay a premium for the latest release? In some cases, no doubt. But when the baseline is that we already have free, completely legal access to every great album, film, or novel produced before the mid-80s—more than any human being could realistically watch, read, or listen to in a lifetime—I wouldn’t be surprised if our consumption patterns became a good deal less neophilic, or at the very least, prices on new releases had to drop substantially to remain competitive.
If that’s right, there’s a perverse sense in which retroactive extensions for absurd lengths of time might actually, obliquely, serve copyright’s constitutional imperative to “promote the progress of science and useful arts”: Not by directly increasing the present value of newly produced works, but by shrinking the pool of free alternatives to the newest works. (Of course, any law restricting non-consumptive, non-commercial entertainment would have a similar effect.) Even if so, though, that’s not enough in itself to justify the longer terms: The question is whether the marginal new content is actually worth losing universal free access to the older material. For reasons unclear to me, there often seems to be an undefended assumption that more newer stuff, whatever the quality, outweighs wider access to existing content at any conceivable margin. I’m not sure how you’d go about quantifying that, but it strikes me as wildly implausible on its face.
Boing Boing’s Xeni Jardin recently unleashed some righteous fury on the “steaming turd of an opening line in David Streitfeld’s otherwise serviceable New York Times piece about […] gender discrimination in Silicon Valley.” The offending line: “Men invented the Internet.”
The thing is—and hold the rotten fruit, I’m going somewhere with this—that’s actually pretty accurate. The putative counterexamples Jardin offers are so strained that they end up reinforcing rather than refuting the claim. Grace Hopper was an important computing pioneer, but her achievements have to do with the creation of early programming languages (notably a predecessor to COBOL), not the Internet. Radia Perlman wrote a key networking protocol, and should probably be more widely known for it, but she did it in 1985, well after the invention of ARPANET and TCP/IP. Ada Lovelace, by far the most famous of this trio, is also the one with the weakest claim to an original contribution: She was basically a press agent for Charles Babbage, and her reputation as the “first computer programmer” is based on her single significant paper, published in 1843, which included a description of an algorithm actually written by Babbage. If we’re really talking about the 15 or 20 people who could most reasonably be called “inventors of the Internet”—as opposed to “people who did a cool thing related to computers”—we are, in fact, talking about a bunch of guys. If we go with the broader “cool thing with a computer,” we’re no longer exclusively talking about guys, but until the last few decades, it’s still pretty disproportionate.
The correct takeaway from this, however, is not “herp derp, women can’t do math.” It’s that the social costs of sexism are really, really high. If, despite massive cultural and institutional barriers, significant numbers of women were making important contributions at the highest level all along, but denied credit, that would obviously be grossly unfair to the women in question. But it would be sort of a wash from the perspective of overall social utility: The allocation of credit is different, but society still gets the benefit of the brightest women’s contributions. The grimmer alternative is not that the wrong people get the credit, but that important innovations just don’t happen because the pool of brainpower available to tackle important social goals is needlessly halved—the potential female counterparts of Vint Cerf and Bob Kahn never got the opportunity to accelerate the progress of the Internet because, at the time, hostile institutions froze them out, or antiquated norms of femininity deterred them from obtaining STEM educations in the first place. That’s a much, much bigger loss.
It’s natural that we want to look for inspiration to the members of marginalized groups whose incredible achievements required surmounting equally incredible obstacles, but overselling the success stories can also subtly reinforce the complacent view that Genius Always Finds A Way, regardless of social arrangements, even if it’s not properly recognized until much later. The depressing reality is that it very often doesn’t. And the deeper the roots of the inequality—the more culturally entrenched it is—the longer we should expect inequality in achievement to persist even when the most obvious formal barriers have been eliminated. It’s worth pausing to belatedly recognize the neglected heroines who did overcome the odds, but insisting that there’s been some hidden parity of contributions all along actually seems to risk underselling the gravity of the collective harm we’ve done ourselves. Sexism has consequences—and it has left all of us vastly worse off.