As I seem to have inadvertently sparked this Crooked Timber essay by Chris Bertram, Corey Robin and Alex Gourevitch (B/R/G) on workplace coercion, I feel like I should probably say something about it, though Matt Zwolinski and Roderick Long already have good responses you should probably just read instead. Some assorted thoughts:
First, there is a perennial risk in arguments like this of getting mired in a semantic debate about how we colloquially use the term “coercion,” or worse, what more strict definition we think it should have in the context of political theory discussions. That conversation is not interesting or useful, and is sometimes best avoided by just stipulating multiple more precise terms, however artificial it might seem. Off the top of my head, some different ways in which people use “coercion”:
- Moralized Coercion: This is the canonical “your money or your life” scenario where X threatens Y with consequence C unless Y performs action A, where it’s clear Y has a rights claim against the imposition of C by X, or at least, the imposition of C would be in some way wrong in itself. (Maybe we should call this “attempted coercion,” since Y might not care about C or consider it especially bad: The threat will actually be coercive when Y wants to avoid C strongly enough.) This is probably the most common sense of the word, but not particularly helpful when the right to impose C is precisely what’s in dispute. If we want a sense of “coercion” that’s useful for identifying which acts (or systems) are wrong, we need a descriptive definition where the natural properties that make an action (or system) coercive are also properties that make it unjust. Typically, however, we use the term “coercive” more broadly than that, to cover situations that most of us would not consider wrong or unjust—so at best, a well-tailored descriptive definition might help to pick out actions that are presumptively unjust.
- Physical Coercion: Maybe the least controversial descriptive use of “coercion” covers cases where physical force is used or threatened in a direct and imminent way. A bouncer ejecting a belligerent patron from a pub—or announcing that he is about to unless the patron leaves quietly—is fairly obviously employing coercion, though in many cases this will be justified coercion. But cases in this set are, by and large, wrong by default and require some special justification—typically involving (at minimum) some prior wrong act by the person at whom force is targeted.
- Subjective Coercion: Maybe the broadest sense in which we apply the term “coercive” is to characterize any situation in which Y feels as though they have no reasonable choice but to perform A in order to avoid C. If we want to strengthen this definition, we could impose a double subjective requirement: X threatens C with the intention that Y feel this way. As Zwolinski observes, however, there are also plenty of situations that might qualify as “coercive” in this sense which most of us would not regard as even presumptively wrong, or at least not wrong in a way that would justify a legal prohibition. Examples might include: “If you won’t sleep with me, I won’t continue our romantic relationship,” or “I’ll give you a million dollars if you do A.” One intuitive problem with drawing categorical normative conclusions from the subjective sense of “coercion” is that it doesn’t distinguish between threats and highly attractive offers.
- Baseline-Sensitive Coercion: If we want to build in that distinction, we might restrict the application of “coercive” to cases where (X intends that) C will make Y worse off than Y would have been if X and Y had not interacted at all, at least within some relevant domain. (Specifying what counts as the relevant domain will, of course, be a tricky problem in itself.) This would cover many cases of blackmail, where X threatens to reveal some highly embarrassing fact about Y unless Y does A. Many people regard such threats as wrong, or at least morally questionable, even when X has learned this fact by legitimate means, and when it would not otherwise be wrong to either publicize or keep silent about it. Whether we think cases covered by this use are wrong, however, may depend on what we think of X’s intentions: Pressuring someone to get their drinking under control by threatening to disclose their embarrassing behavior to a spouse will probably seem less problematic than using the same threat to extract a cash payment. Note that like the physical criterion, it does not necessarily depend on the idea of leaving the threatened party with “no choice” but to comply: A threat might be coercive in these senses even if many people would choose to accept the consequence rather than comply with the demand.
- Coercion as Exploitation of Objectively Unreasonable Alternatives: Coercion that employs a baseline—and so distinguishes between threatened harms and conditional benefits—covers a lot of the territory. But there are also cases involving conditional benefits—cases in which the involvement of X leaves Y better off—that are often called coercive. We might imagine X comes across Y stranded on an island or injured in a crevasse deep in the woods, with no other hope of rescue, and offers assistance only on the condition that Y first signs over his life savings. If we think (as most people do) that there’s a general moral duty to rescue in those kinds of extreme circumstances, these cases are already covered by coercion in the moralized sense. But few think there’s a general duty to provide employment. For the kinds of cases the Crooked Timber folks are talking about, rather, the idea has to be that when the alternative to employment is sufficiently bad—regardless of whether the employer bears any responsibility for this being the case—it is unreasonable or unfair to take advantage of this by imposing certain terms, even if the underlying offer of employment is not obligatory. This isn’t obviously inconsistent with common usage, but is hard to provide general conditions for without circularity, because it builds in the normative concept of a “reasonable alternative.”
With all that out of the way, how can we best make sense of the intuition that certain kinds of demands that might be made of employees are, indeed, coercive in a morally impermissible way, and distinguish these from permissible conditions of employment? Most of us, for instance, would regard it as clearly coercive and wrong for a boss to demand sexual favors from an employee on pain of termination, but not to demand that the employee show up on time and perform ordinary work tasks competently.
One approach is to simply come up with a list of objectively reasonable and permissible conditions of employment, and another list of objectively unreasonable and impermissible conditions. But our intuitions about what kinds of demands are coercive are typically fairly context dependent. Most of us would regard it as coercive for a boss to demand that employees disrobe in the workplace in order to keep their jobs—but perhaps not if the workplace is a strip club, artist’s studio, or adult film set. (I assume we don’t think these are inherently impermissible categories of work.)
The obvious way to account for these reactions, I think, is suggested by Jessica Flanigan—and rejected a bit too quickly by B/R/G: A demand is coercive when it clearly falls beyond the bounds of what the employee has implicitly agreed to do when accepting a position, but the same demand may be non-coercive when it was a reasonably foreseeable part of the job, knowingly accepted along with the position. This fits well with the baseline-sensitive sense of coercion, because accepting a job typically entails declining other offers or ending one’s search for alternative forms of work, and generally arranging one’s life in ways that make it more costly to be forced to abruptly leave it than it would have been to seek alternative work in the first place.
The baseline-dependent sense of coercion then seems applicable, because (as B/R/G detail at some length) the opportunity costs incurred in accepting a job offer render the employee worse off, in at least one important respect, if they lose the job than they would have been had they instead declined it with full knowledge of the conditions that would be imposed. Something like this is, I think, the only intelligible way to explain our sense that it is coercive to demand (on pain of firing) that an existing employee take on a series of laborious tasks unrelated to their original job description, even where it would not be coercive to offer someone a job that consisted of employing those same tasks.
As B/R/G rightly point out, there are difficulties applying this basic idea to non-extreme cases, because it’s infeasible to explicitly spell out every little detail of what’s involved in a job in advance, and often difficult to determine what types of things were implicitly included or excluded. They may also be right that counting only the most “radical” changes in job description as coercive would wrongly exclude too many cases. But none of this really invalidates the basic idea, which at some level seems like it simply has to be right—or at least part of the correct account—assuming we agree that whether an employer demand is coercive will sometimes depend on whether it’s part of the job the employee agreed to do. Even when the job description is quite loose—as for “personal assistant,” say—there’s almost always some rough domain of tasks that are understood to be included, and others that are excluded. If the argument is just that the law can help codify these expectations—make clear what is presumptively excluded when not implicitly included—and otherwise check employer attempts to pull a Darth Vader, libertarians shouldn’t necessarily object in principle, as long as this isn’t a pretext for micromanaging people’s genuinely uncoerced agreements.
Of course, even if we accept this account, it just shifts the focus to the question of whether the initial acceptance of the job was truly voluntary, or itself coerced in some way. But note that if this is the locus of coercion, the laundry list of particular objectionable demands employers might make is rather beside the point. If I am coerced into doing a job, the specific tasks I’m asked to perform, or the rules governing bathroom breaks, might make the overall situation more or less unpleasant, but not more or less coercive. When acceptance is voluntary, conversely, whatever forms of monitoring or regimentation a job might entail are part of the totality of perks and burdens employees evaluate when deciding whether to take (or keep) a job. Constraints one person finds intolerably onerous (sit indoors at a desk under fluorescent lights for eight hours) another won’t mind at all; if they’re voluntarily accepted, there’s just not going to be any fact of the matter about which set of job constraints is “more coercive.”
So what determines when acceptance is voluntary? The trouble here is that it’s not obviously possible to settle this without answering foundational questions of political philosophy and just resource distribution. In this case, again, the concept of “coercion” would not be of much help in resolving theoretical disagreements, because it would depend directly on the differing background theoretical judgments. Note that we’re subtly shifting from the idea of coercion as something imposed intentionally by some particular person or entity to “coercion” as the antonym of a kind of idealized voluntariness—which is inevitable as we get to systemic questions, but also much more complicated. The typical employer has very little control over what a prospective employee’s alternatives look like, and so the relevant “coercion” is no longer something internal to any particular employer’s job offer. One virtue of the baseline-sensitive sense of coercion is that it’s relatively self-contained: To distinguish between a coercive threat and an offer, you ask whether its target is made worse off or merely no better off if they refuse. For particular offers within a social system, that means “better or worse off than they were before.”
But that won’t work if we’re trying to ask the question at the level of the social system as a whole. You’re better off complying with the dictator who monopolizes food production if the alternative is starving—but you’re better off still not having a food monopolist. Assuming we’re not talking about a monopoly employer, though, or about attempts to change the agreed terms of employment, the question is no longer really whether the employer is coercive, but whether the system is coercive. Which entails the question: Compared to what?
For B/R/G, a minimal condition for acceptance of employment to be voluntary is that the employee have a “reasonable alternative to work.” At a population level, though, there is no real alternative to work, reasonable or otherwise: The resources we require to survive have to be produced—or at the very least gathered and processed—before they can be consumed, which means (at least at our current technological level) some of us have to work. If the requirement to do work of any kind is coercive, it’s coercion imposed by nature. So it seems a little too stringent to say conditions on paid labor are coercive unless they would be accepted by people who could live comfortably without doing (commercially compensated) work of any kind. Your baseline for assessing voluntariness has to at least be physically possible when universalized. This makes it similarly problematic to use any kind of pre-property state of nature for the baseline, since it seems unlikely that anything remotely like modern population sizes could be sustained under those conditions. But then it seems we’re stuck with the baseline: “a modern economy under a just distribution, whatever that looks like.”
But maybe we can avoid the population-level question by just saying, informally, that “reasonable alternatives” just means being able to meet one’s basic needs, or maintain some threshold standard of living, without being subject to the will of any particular boss or small group of employers. This is more or less Hayek’s idea: It’s not “coercive” to have to find some willing buyer for your labor, so long as there isn’t any one person or cabal whose will you’re subject to. The more “good enough” options exist, the freer the choice to work for any particular employer. Where we put the threshold, however, will dramatically affect how often this condition is satisfied.
The implied threshold for Bertram seems to depend on expectations set by one’s current standard of living, which is intuitive in one sense but rather odd in another. A worker with a mortgage on a suburban house, several thousand in credit card debt, and a couple of college-bound kids would probably not consider an alternative job to be a “reasonable” or serious option unless it paid enough to support those obligations. This matches how people ordinarily talk when they consider their “realistic” alternatives, but it has the unattractive effect of making “coercion” partly a function of voluntarily assumed obligations—such that one person might be “coerced” into keeping a job that is voluntary for someone else by virtue of having taken on more debt.
Trying to develop this sense of “coercion” satisfactorily, in a way that has any normative bite, seems unpromising. It’s either unattractively subjective or ends up reducing to the question of what an economically just distribution looks like. That isn’t to say it can be entirely avoided: If we want to explain why (for instance) the denizens of Rothbard Island—where a single person or corporation owns all resources and is the monopoly employer—are subject to “coercion,” this is probably the sense we have to appeal to. But whatever story we tell there isn’t really going to be about employment as such, but about why it’s unjust for resources to be so concentrated in the first place.
Since everyone does seem agreed at least on the baseline-dependent account, however, there’s probably fruitful work to be done analyzing what kinds of changes in work conditions or requirements amount to an unfair attempt to leverage the opportunity costs assumed by the worker under the original terms of employment.
I’ve been reading Cryptography Engineering by Bruce Schneier, Niels Ferguson, and Tadayoshi Kohno, on the theory that someone who writes about privacy and surveillance as much as I do ought to have a somewhat more detailed understanding of how modern cryptosystems work, even if I’m never going to be competent to work with the actual code. At one point, the authors mention a potential problem with certain kinds of ciphers. Stream ciphers work by combining a secret cryptographic key with a (supposedly) unique number—a random shared string or a “nonce”—to generate a “keystream.” The keystream is then XORed with the plaintext message to produce the encrypted ciphertext.
For the non-computer-geeks: that just means that for every bit in the sequence of ones and zeroes that makes up the plaintext, if the keystream has the same value in that position, then the corresponding bit of the ciphertext will get written as a 0, and if they have different values in that position, the corresponding bit of the ciphertext gets written as a 1. (This corresponds to the logical operation “exclusive or”: It outputs a 1, meaning “true,” just in case one or the other but not both of the inputs is true.) So, for instance, the capital letter “A” is normally encoded as the binary string: 01000001. A lowercase “z” is represented as 01111010. If you XOR them together, you get: 00111011. If you XOR in the “z” again, you get “A” back out… but that assumes you know at least one of the two original pieces of the puzzle: There’s a vast number of different ways to XOR two bytes together to produce 00111011.
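The byte-level arithmetic above is easy to check for yourself. Here's a minimal sketch in Python (not anything from the book, just the “A”/“z” example rendered as code):

```python
# XOR of ASCII "A" (01000001) and "z" (01111010), as described above.
a = 0b01000001  # capital "A"
z = 0b01111010  # lowercase "z"

combined = a ^ z
print(format(combined, "08b"))  # prints 00111011

# XOR is its own inverse: applying "z" a second time recovers "A".
recovered = combined ^ z
print(chr(recovered))  # prints A
```

The second step is the whole trick of stream-cipher decryption: XORing the keystream in once encrypts, XORing it in again decrypts. But without knowing one of the inputs, 00111011 alone is consistent with a vast number of original byte pairs.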
In theory, there should be no way to reverse the process without knowing the keystream, which requires knowing the cryptographic key. But there’s a loophole: If the designer of the system messes up and allows that supposedly-unique “nonce” to be reused, then you end up with two messages that have been encrypted (XORed) with the same keystream. That doesn’t tell you what the keystream is. But if an attacker knows which two messages have been encrypted with the same keystream, he can just XOR those two ciphertexts together. The result is to mathematically cancel out the key, and give you the same result as if you’d just XORed the two original plaintexts together. Once you’ve got this, Schneier et al. warn, an attacker will often be able to easily reverse the process and decompose that into the two original messages—provided the original messages aren’t just random gibberish, but something that exhibits patterns, like written English. But they didn’t bother explaining exactly how this could be done, so I ended up spending 15 minutes doodling on a legal pad trying to suss out how an attack would work. Even some of my geekier friends seemed to think it wasn’t possible when I floated the question on Twitter—and for some cases, it won’t be. For instance, if the two original messages are identical—meaning they have the same value at every bit position—then the result of XORing them is always going to be a string of zeroes, which makes it obvious the two initial messages were identical, but doesn’t give you any hint at the content of the messages.
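The keystream-cancellation step can also be demonstrated in a few lines. A sketch in Python, with invented plaintexts of equal length standing in for the two messages:

```python
import os

# Two equal-length plaintexts, invented purely for illustration.
p1 = b"attack at dawn!"
p2 = b"defend at dusk!"

# One keystream, mistakenly reused for both messages (the nonce-reuse bug).
keystream = os.urandom(len(p1))

c1 = bytes(m ^ k for m, k in zip(p1, keystream))
c2 = bytes(m ^ k for m, k in zip(p2, keystream))

# XORing the two ciphertexts cancels the keystream entirely:
# (p1 ^ k) ^ (p2 ^ k) == p1 ^ p2
combined = bytes(x ^ y for x, y in zip(c1, c2))
assert combined == bytes(x ^ y for x, y in zip(p1, p2))
```

Note that the attacker never learns the keystream itself; the randomness simply drops out of the equation, leaving exactly what you’d get by XORing the two plaintexts directly.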
Special cases aside, though, there definitely are some generally viable strategies for decomposing a file generated by XORing two messages—let’s assume they’re ordinary written English in ASCII character format—back into the original pair of texts. How would you go about it? I’ll update the post with the solutions that I came up with (or found online) later this weekend.
Update: I’m pleased, though not at all surprised, to see that I have a bunch of very smart readers who came up with basically all the strategies I did, and in some cases stated them with a good deal more sophistication than I could have. You’re probably better off just reading the comments, but I’ll summarize the basic ideas below the fold.
[Read more →]
The only possible explanation is that Ridley Scott has a Duke Brothers–style bet running with George Lucas: Who can produce the most crushingly disappointing prequel to a beloved classic of late-70s science fiction cinema? There’s no other way to account for the tedious, incoherent two-hour train wreck that is Prometheus—a film whose powerhouse ensemble cast and stunning visuals ultimately fail to rescue it from a script that feels like it was fished from a dumpster of rejected SyFy Original Movie treatments.
How bad is it? The one and only character who doesn’t feel like a robot is, in fact, a robot. The protagonists are so flat and irritating that you end up rooting for the aliens to wipe out humanity—just as long as these guys die first. Nothing about this movie makes sense—not in the cool David Lynchian “makes you think” way, but rather in a “you didn’t think very much about this script, did you?” way. It’s what happens when nobody tells you your après-bong dorm conversations were not, in fact, super deep. Below is a partial list of plot holes and problems, which will contain many, many spoilers, just in case you’re still planning on throwing away your hard-earned money. If you’re really that intrigued by the premise, though, I’d recommend just watching Stargate again.
[Read more →]
Under current law, this blog post will remain under copyright until 70 years after my death—which if I’m lucky means a century or more from the date of authorship. That’s an insanely long time when you consider that most economic studies have shown there’s almost no marginal incentive effect on production once you extend copyright terms much beyond the original span: 14 years renewable once, or 28 years total. Why would we needlessly lock away our own culture for so long?
One popular answer is the Mickey Mouse Theory. Though the effective commercial lifespan of the vast majority of copyrighted works is just a few years, a very few—like some of Disney’s iconic properties—continue to be immensely profitable for much longer. The owners of these properties then throw gobs of money at Congress, which ritualistically serves up a retroactive extension whenever these come within spitting distance of the public domain in order to protect their cash cows (or mice, as the case may be).
No doubt there’s something to that. Yet if that were the sole concern, you’d think the content industries would prefer a renewal structure that maxed out at the same term. The cost of renewing the registration of their profitable (or potentially profitable) works would be trivial for the labels and studios, but they’d also gain access to orphan works that nobody was making any use of. Our system, by contrast, seems perversely designed not just to provide extended protection for revenue-generating works, but to guarantee a minimal public domain.
Here’s an alternative hypothesis: Insanely long copyright terms are how the culture industries avoid competing with their own back catalogs. Imagine that we still had a copyright term that maxed out at 28 years, the regime the first Americans lived under. The shorter term wouldn’t in itself have much effect on output or incentives to create. But it would mean that, today, every book, song, image, and movie produced before 1984 was freely available to anyone with an Internet connection. Under those conditions, would we be anywhere near as willing to pay a premium for the latest release? In some cases, no doubt. But when the baseline is that we already have free, completely legal access to every great album, film, or novel produced before the mid-80s—more than any human being could realistically watch, read, or listen to in a lifetime—I wouldn’t be surprised if our consumption patterns became a good deal less neophilic, or at the very least, prices on new releases had to drop substantially to remain competitive.
If that’s right, there’s a perverse sense in which retroactive extensions for absurd lengths of time might actually, obliquely, serve copyright’s constitutional imperative to “promote the progress of science and useful arts”: Not by directly increasing the present value of newly produced works, but by shrinking the pool of free alternatives to the newest works. (Of course, any law restricting non-consumptive, non-commercial entertainment would have a similar effect.) If that’s true, though, it’s not enough in itself to justify the longer terms: The question is whether the marginal new content is actually worth losing universal free access to the older material. For reasons unclear to me, there often seems to be an undefended assumption that more newer stuff, whatever the quality, outweighs wider access to existing content at any conceivable margin. I’m not sure how you’d go about quantifying that, but it strikes me as wildly implausible on its face.
Boing Boing’s Xeni Jardin recently unleashed some righteous fury on the “steaming turd of an opening line in David Streitfeld’s otherwise serviceable New York Times piece about [...] gender discrimination in Silicon Valley.” The offending line: “Men invented the Internet.”
The thing is—and hold the rotten fruit, I’m going somewhere with this—that’s actually pretty accurate. The putative counterexamples Jardin offers are so strained that they end up reinforcing rather than refuting the claim. Grace Hopper was an important computing pioneer, but her achievements have to do with the creation of early programming languages (notably a predecessor to COBOL), not the Internet. Radia Perlman wrote a key networking protocol, and should probably be more widely known for it, but she did it in 1985, well after the invention of ARPANET and TCP/IP. Ada Lovelace, by far the most famous of this trio, is also the one with the weakest claim to an original contribution: She was basically a press agent for Charles Babbage, and her reputation as the “first computer programmer” is based on her single significant paper, published in 1843, which included a description of an algorithm actually written by Babbage. If we’re really talking about the 15 or 20 people who could most reasonably be called “inventors of the Internet”—as opposed to “people who did a cool thing related to computers”—we are, in fact, talking about a bunch of guys. If we go with the broader “cool thing with a computer,” we’re no longer exclusively talking about guys, but until the last few decades, it’s still pretty disproportionate.
The correct takeaway from this, however, is not “herp derp, women can’t do math.” It’s that the social costs of sexism are really, really high. If, despite massive cultural and institutional barriers, significant numbers of women were making important contributions at the highest level all along, but denied credit, that would obviously be grossly unfair to the women in question. But it would be sort of a wash from the perspective of overall social utility: The allocation of credit is different, but society still gets the benefit of the brightest women’s contributions. The grimmer alternative is not that the wrong people get the credit, but that important innovations just don’t happen because the pool of brainpower available to tackle important social goals is needlessly halved—the potential female counterparts of Vint Cerf and Bob Kahn never got the opportunity to accelerate the progress of the Internet because, at the time, hostile institutions froze them out, or antiquated norms of femininity deterred them from obtaining STEM educations in the first place. That’s a much, much bigger loss.
It’s natural that we want to look for inspiration to the members of marginalized groups whose incredible achievements required surmounting equally incredible obstacles, but overselling the success stories can also subtly reinforce the complacent view that Genius Always Finds A Way, regardless of social arrangements, even if it’s not properly recognized until much later. The depressing reality is that it very often doesn’t. And the deeper the roots of the inequality—the more culturally entrenched it is—the longer we should expect inequality in achievement to persist even when the most obvious formal barriers have been eliminated. It’s worth pausing to belatedly recognize the neglected heroines who did overcome the odds, but insisting that there’s been some hidden parity of contributions all along actually seems to risk underselling the gravity of the collective harm we’ve done ourselves. Sexism has consequences—and it has left all of us vastly worse off.
Ross Douthat thinks so. Responding to my previous post on this, he writes:
I have indeed read my Euthyphro, and my response is basically the conventional Christian (and Jewish) response, which is that the dilemma Plato raises is a false one. Virtue is not something that’s commanded by God, the way a magistrate (or a whimsical alien overlord) might issue a legal code, but something that’s inherent to the Christian conception of the divine nature. God does not establish morality; he embodies it. He does not set standards; he is the standard. And even when he issues principles or precepts through revelation (as in the Ten Commandments or the Sermon on the Mount) he isn’t legislating in the style of Hammurabi or Solon. Instead, he’s revealing something about his own nature and inviting us to conform ourselves to the standards that it sets. [....]
Certainly no serious Christian moralist has ever suggested that moral problems are “a black box” that “we don’t need to worry our pretty little heads about” because God will always tell us what to do. Rather, the possibility of God’s existence — and with it, the possibility that moral laws no less than physical laws correspond to an actual reality, or Reality — is what makes those problems genuinely meaningful and interesting (as opposed to just innings in an “ethics game”) and lends the project of moral reasoning its coherence. The idea of God doesn’t replace secular moral reasoning, in other words, but it grounds this reasoning in something more durable than just aesthetic preference.
This, I think, helps illustrate my original point quite nicely. Ross evidently thinks this counts as some sort of explanation of how there might be moral truths. I think it is a classic virtus dormitiva—a series of grammatically well-formed strings masquerading as propositions. It’s not much of an explanation to say Zeus causes thunderstorms unless you have an account of how Zeus does it.
My claim had never been, for what it’s worth, that God is a “black box” because it removes the need for moral deliberation about which specific acts are right; it’s a black box because saying “God” or “divine nature” or whatever doesn’t actually solve—or even make a gesture in the direction of solving—the question of how there could be normative facts or properties. If God is the standard, why ought we accept the standard to emulate it? How could a natural fact about God—even if you call it a “supernatural” fact, whatever that distinction amounts to—constitute a reason? If the fact that some action will cause suffering isn’t adequate motivation to avoid it without something further, why is the fact that the divine nature abhors suffering (or sin, or whatever we think) supposed to do any better? Why do we imagine someone could (rationally?) greet the first fact with a shrug, but not the second? Why is it more meaningful and interesting for moral rules to “correspond to reality” than to exist in some sort of “ethics game”? Are “meaningful” and “interesting” also natural properties, or just part of a meaningfulness-and-interestingness game? Every canonical modern metaethical question can be repeated with undiminished force after we accept (arguendo) everything Ross says here.
Even if you were fully convinced of the existence of God, you’d still be better off consulting Moore or Mackie or Parfit or Nagel on these questions than any theologian. Many modern moral writers think they can show how (and that) objective moral truths exist, and reasonable people can disagree about how successful these accounts are. But every one of the more prominent ones is vastly more persuasive and sophisticated than this “God is the standard” stuff, again, even taking for granted the existence of a God. Some, like Parfit, assert that there are objective moral truths, while allowing they can’t (yet) give a fully adequate metaphysical account of what this entails. But neither does the theologian, beyond slapping a name on the question and confusedly calling it an answer. Parfit at least has some interesting things to say on the matter because he recognizes the need for a further argument—one that might, if nothing else, undermine our sense that nothing could possibly fit the bill.
One might suggest there is no further explanation to be had: That it’s a brute fact that normative properties are part of the furniture of reality (or Reality)—which is what the appeal to God amounts to, with a boatload of other baggage tacked on for good measure. But that answer works as well for the secular moralist, and parsimony militates against unnecessarily stapling that answer to a lot of not-particularly-plausible Middle Eastern folklore. If the answer has something to do with it being in our own (divinely created) nature to respond to these facts as reasons, then again, God’s role in the explanation is either redundant or question-begging. If it’s enough for us to be configured with a disposition to accept certain facts as reasons, then Simon Blackburn’s quasi-realism, or an argument along the lines of Nagel’s in The Possibility of Altruism, works well enough on its own. If it’s the divine origin of that disposition that’s supposed to make this account satisfying, we’re back to circularity.
There is, incidentally, a pretty intriguing analogy here with the contemporary mind/body problem. There, too, we have a domain of phenomena—specifically, well… phenomena—that fit uneasily with a materialistic picture of the world. If the eliminativist solution—asserting that the apparent domain is merely illusory—is unattractive in ethics, it is quite literally unbelievable with respect to experiential phenomena. An alternative in both spheres is to show that the tension is illusory: That natural facts just are phenomenal or normative facts, and to explain why this mistakenly seems conceptually impossible to us. A third option—in the modern era most prominently taken by Chalmers in the mental realm, Moore in the moral—is to allow that we need a metaphysically distinct domain of properties and facts whose relation to natural properties and facts remains somewhat obscure to us (perhaps in part because we’re fuzzy on exactly what we mean by “natural”). It is still more obscure, however, what we are supposed to believe is added to this third option by appending: “Oh, and God!” or “also… Souls!” All it really seems to do, if we find ourselves compelled to take option three, is invite us to rest satisfied with a maximally obscure understanding of the relationship—and to obscure the obscurity with religious language.
In that sense, looking to something like a “divine nature” for answers to these questions is worse than superfluous; it’s a dangerous red herring. It’s like proposing an obscure form of matter as constituting the truth of logical propositions, or that the property of being morally right is identical with the property of being cerulean blue. It is not what the right answer to the question could possibly look like—even if we harbor no doubts about the existence of matter or cerulean—and tends to distract us from the effort to frame the question in a way that might be answered satisfactorily. It’s not that the theologians had a coherent picture that their secular successors have failed to replicate; it’s that only in light of the modest progress made in the past century does it become obvious how complete the failure of the preceding millennia really was. But as Socrates taught us at the outset, the recognition of ignorance should be welcomed as the beginning of wisdom.
Update: A commenter inadvertently illustrates my point more succinctly with the following—apparently sincere—argument for how theism “solves” the is-ought gap:
1. Everything God says is true
2. God says I ought to love my neighbor as myself
3. Therefore ‘I ought to love my neighbor as myself’ is true.
4. Therefore, I ought to love my neighbor as myself. (this follows from standard inference rules of deontological logic).
If we’re accepting that as valid, though, this is needlessly cumbersome. We can skip to:
1. “I ought to love my neighbor as myself” is true.
You can, of course, plug any proposition you like into the first version of the “proof,” including “square circles exist” and “colorless green ideas sleep furiously.” If the question is how there could be normative truths, neither “proof” actually tells you that; both simply assert that there are such truths. God isn’t doing any work here; the appeal to God just provides cover for smuggling in the premise.
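The emptiness of the commenter’s syllogism can be made vivid by formalizing it: the inference is trivially valid for any proposition whatsoever, so all the normative content lives in the premises. A minimal sketch in Lean (the names `SaysGod` and `OughtLove` are my own placeholders; nothing depends on their content):

```lean
-- Sketch of the commenter's argument. `SaysGod P` stands for "God says that P";
-- `OughtLove` stands for "I ought to love my neighbor as myself."
variable (SaysGod : Prop → Prop)
variable (OughtLove : Prop)

-- Premise 1: everything God says is true.  Premise 2: God says OughtLove.
-- The conclusion follows in one step -- and would for ANY proposition,
-- "square circles exist" included. The premises do all the work; the
-- machinery of the "proof" explains nothing about how normative truths
-- could exist.
example (h1 : ∀ P, SaysGod P → P) (h2 : SaysGod OughtLove) : OughtLove :=
  h1 OughtLove h2
```

Substituting any other proposition for `OughtLove` leaves the derivation untouched, which is exactly the point: the “proof” is a disguised restatement of its second premise.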
Matt Yglesias points to recent research questioning the oft-asserted link between early childbearing and later poverty:
Kearney and Levine used data on miscarriages to isolate the impact of giving birth from background characteristics that may contribute to a decision to give birth. When used this way as a statistical control, the negative consequences of teen childbirth appear to be small and short-lived. Young women who gave birth and young women who miscarried have similarly bleak economic outcomes. Similarly, when you compare teen mothers not to the general population but to their own sisters who aren’t teen moms “the differences are quite modest.”
Part of this is unsurprising: In their excellent book Promises I Can Keep (which I reviewed forever ago) Kathryn Edin and Maria Kefalas chronicle in some detail how single motherhood is often a powerful source of meaning and identity for poor young women whose already unattractive economic prospects make the opportunity costs seem relatively slight. So it’s hardly news that there’s causation in that direction—from poverty to early, often unmarried childbearing. But it should be quite shocking that there’s no further effect, that the causation here is unidirectional, rather than a feedback loop. Can it really be the case, as this summary seems to suggest, that the added burden of raising a child alone makes so little difference in long-term economic prospects? It’s implausible enough on its face that it’s tempting to look for ways the effect might be obscured, rather than nonexistent.
Unfortunately, the article itself is paywalled, so all I’ve got to go on is Matt’s precis. But the obvious question is whether this is just picking up the arbitrariness of distinguishing between single “teen” and “young non-teen” pregnancies. We mostly talk about the former because, well, it can sound creepy to suggest that adult reproduction is a problem. But in terms of the effect on education and career, becoming a single mom at 21 is probably pretty disruptive too, even if it’s not an obstacle to finishing high school. (In some cases it might even be more so, since the teen parent is more likely to still be living at home, with family resources and support networks to draw upon—and a child old enough for school when it comes time to enter the work force.) If that describes a large enough subset of the comparison group—the sisters and the women who miscarry—then it might just tell us that the effects of single teen parenthood are similar to those of single young parenthood more generally. If that’s what’s going on, it would be an important reminder that nothing magical actually happens when the odometer rolls over, but wouldn’t really hit the question of whether delaying childrearing to (say) the early 30s rather than the early 20s would reduce poverty.
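The logic of the Kearney/Levine design is worth spelling out: women who miscarry share the background characteristics of women who give birth, so comparing the two groups strips out selection effects and isolates the causal impact of the birth itself. A toy simulation of that idea—every number here is invented for illustration, not drawn from the actual study:

```python
import random

random.seed(0)

# Toy version of the miscarriage-as-natural-experiment design: both groups
# share the same (bleak) background baseline; only the birth differs, so the
# difference in average outcomes estimates the effect of the birth itself.

def adult_earnings(gave_birth: bool) -> float:
    """Hypothetical later-life earnings, in thousands of dollars."""
    background = random.gauss(22.0, 4.0)        # shared background disadvantage
    birth_effect = -1.0 if gave_birth else 0.0  # assumed small causal effect
    return background + birth_effect

births = [adult_earnings(True) for _ in range(20_000)]
miscarried = [adult_earnings(False) for _ in range(20_000)]

def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

gap = mean(births) - mean(miscarried)
print(f"controlled estimate of the birth effect: {gap:+.2f} thousand dollars")
```

A naive comparison of teen mothers to the general population would conflate the large `background` term with the small `birth_effect` term; the miscarriage comparison recovers only the latter, which is why the measured consequences come out “small and short-lived.”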
I was naturally pleased to hear the New York Times had sent a reporter to cover the panel on “Freedom and the Panopticon” I moderated at the PEN World Voices Festival this weekend—but my jaw dropped a little at this bizarre paragraph in the writeup by the Times’ Larry Rohter:
The panel’s moderator was Julian Sanchez, a research fellow at the Washington-based Cato Institute, a libertarian advocacy organization whose donors include some of the country’s biggest corporations. His opening remarks and subsequent questions focused on the emergence of “the surveillance state,” largely glossing over the role that corporations play in the creation and maintenance of schemes of surveillance, and so it fell to other participants, like Catherine Crump of the American Civil Liberties Union, the Scottish science fiction novelist Ken MacLeod and Ms. Adamesteanu, to bring corporations into the discussion.
So first, I have to ask: Which specific “country’s biggest corporations” are those, exactly? Was there some research behind that insinuation, or are we just sort of ad libbing here? Because last I checked, corporate contributions were a little under 2% of Cato’s annual budget, as I could’ve told the reporter if he’d bothered to walk up after the panel and ask. Offhand, I don’t know whether we get any money from the companies most involved in collecting personal data—I try to ignore funding precisely to avoid any possible subconscious influence—but I’m betting the reporter didn’t bother to check. The authors on the panel, of course, are published by corporations, and need their books sold at Amazon and other large retail outlets, which probably makes them a good deal more financially dependent on corporate goodwill than I am, but this oddly didn’t provoke any speculative tangents about why they failed to go after Amazon by name.
Second, do we really need to start fabricating ulterior motives to explain why a researcher who specializes in national security spying, in an introduction that was primarily about literary metaphors, might focus on government surveillance when selecting examples? With a panel consisting of a lawyer who sues the government for a living, two novelists who lived in and wrote about communist surveillance states, and a science fiction writer whose latest book was centrally about government monitoring of the citizenry? Really? This is a big mystery that can only be unraveled by following the money? I guess I could have explicitly cashed out my remarks on the inadequacy of Orwell’s metaphor in the modern context by preempting what Catherine had planned to say about the explosion of private data gathering—which I agreed with completely, for what it’s worth—but it was, you know, a panel.
I’ll link my prepared remarks when they’re up at PEN, at which point readers can judge for themselves, but I’m pretty confident that they wouldn’t have inspired any commentary if they’d been delivered verbatim by someone from the ACLU. This is just a reporter injecting a glaringly irrelevant aside for the purpose of taking a cheap potshot at a think tank he obviously doesn’t care for. Which I normally try to be a little more Zen about, but geez, is it really such a psychological impossibility for some people to type the words “Cato Institute,” in any context, for any reason, without ginning up some horseshit pretext to editorialize?
There’s a widespread sense—of debatable historical accuracy, but widespread all the same—that we’re living in an era of especially pronounced political polarization, with a correspondingly poor ratio of tribal slogan slinging to meaningful democratic deliberation. One possible explanation for this is that the massive explosion of our media ecosystem makes it increasingly possible for us to construct ideologically congenial “filter bubbles” that provide us a rich enough stream of information to occupy all our available media consumption time. There are so many blogs, publications, podcasts, radio programs, and Twitter feeds that we can have a superficial impression of great variety, while only ever encountering information tailored to reinforce our preexisting worldview—conditions under which we know the median member of the group tends to adopt more extreme views over time. Our filter algorithms, as Eli Pariser argues at the link above, are increasingly doing this for us automatically, so that we may not even be aware of the echo chambers we’re constructing around ourselves. I assume most readers are reasonably familiar with this story.
But there’s another plausible story I think we can tell grounded in the ways information technology enables disagreement. Plenty has been said about the salutary effects of a more egalitarian media ecosystem, in which it’s far easier for anyone to weigh in on matters of public concern; to “talk back” to politicians, journalists, pundits, and others who once monopolized the microphone; or to become at least part-time pundits themselves.
The upside of this is that it’s much less likely that a smart contribution or novel perspective gets missed because it’s excluded by gatekeepers. The downside is that there’s a lot more crap to ignore. Any commenter on politics or public affairs whose audience reaches a certain size gets a level of feedback—via email, Twitter, blog posts and comments—that would have been unthinkable for any but the few most prominent public intellectuals a generation ago. Much of it is insightful and constructive. A whole lot runs the gamut from rude and ill-informed to semi-literate and vulgar. If the pundit is a woman, multiply that latter category by 10 and add a heaping spoonful of unsolicited sexual fantasies.
If, like me, you’re more wonky than partisan, and not especially well known outside a niche audience of folks who follow your issue space closely, this is a minor irritant. But my sense is that if you’re a little more famous (which is a low bar) and a bit more of a generalist (my issue space tends to work as an education filter), it can become an onslaught. You’d expect this to have an effect on the kind of people who end up being generalist political commentators, both by filtering the pool of people who find that an attractive vocation (or avocation) and inculcating certain habits and dispositions over time.
The nice way to say this is that it selects for pundits who have a thick skin—or forces them to quickly develop one. The less nice way to say it is that it forces you to stop giving a shit what other people think. Maybe not universally—you’ll pick out a domain of people whose criticisms are allowed to slip through the armor—but by default.
Probably it always took a healthy ego to presume to hold forth on a wide array of public issues, confident that your thoughts are relevant and interesting to the general populace, or at least the audience for political commentary. But in a media space this dense, it probably takes a good deal more.
If the type and volume of criticism we find online were experienced in person, we’d probably think we were witnessing some kind of est/Maoist reeducation session designed to break down the psyche so it could be rebuilt from scratch. The only way not to find this overwhelming and demoralizing over any protracted period of time is to adopt a reflexive attitude that these are not real people whose opinions matter in any way. Which, indeed, seems to be a pretty widespread attitude. Scan the comments at one of the more partisan political blogs and you get a clear sense that the “other side” consists not so much of people with different ideas, but an inscrutable alien species. I think it’s self-evident that this is an unhealthy development in a democracy, but it may be a coping strategy that our media ecosystem is forcing on us—at least until we find a better one.
In an exchange at Slate with Will Saletan, Ross Douthat writes:
[T]he more purely secular liberalism has become, the more it has spent down its Christian inheritance—the more its ideals seem to hang from what Christopher Hitchens’ Calvinist sparring partner Douglas Wilson has called intellectual “skyhooks,” suspended halfway between our earth and the heaven on which many liberals have long since given up. Say what you will about the prosperity gospel and the cult of the God Within and the other theologies I criticize in Bad Religion, but at least they have a metaphysically coherent picture of the universe to justify their claims. Whereas much of today’s liberalism expects me to respect its moral fervor even as it denies the revelation that once justified that fervor in the first place. It insists that it is a purely secular and scientific enterprise even as it grounds its politics in metaphysical claims. (You will not find the principle of absolute human equality in evolutionary theory, or universal human rights anywhere in physics.) It complains that Christian teachings on homosexuality do violence to gay people’s equal dignity—but if the world is just matter in motion, whence comes this dignity? What justifies and sustains it? Why should I grant it such intense, almost supernatural respect?
Now, I know Ross has read his Euthyphro, but since he talks here as though he hasn’t, I’ll go ahead and make the obvious point: Invoking God doesn’t actually get you very far in ethics, because ascribing “goodness” to a deity or its laws is meaningless unless there’s some independent criterion for this. At best, God gets you two things: First, a plausible prudential internal motivation to behave “morally” (because God will punish you if you don’t), though of the same formal sort as the motivation you might have to obey a powerful state or a whimsical alien overlord. Second, a potential form of “expert validation” for independent moral truths we lack direct epistemic access to, as when we accept certain propositions on the grounds that mathematicians or scientists have confirmed them, even if most of us are incapable of comprehending the detailed proof. But invoking God doesn’t solve any of the problems that secular moral philosophers grapple with—it’s essentially just a way of gesturing at a black box, wherein we’re assured the answer lies, and asserting that we needn’t worry our pretty little heads about it.
If divine commandments are not supposed to be mere arbitrary rules we obey out of fear, then every question Ross thinks confronts the secular moralist reappears within a theistic framework. Why does being made in the image of God, whatever that entails, imbue people with dignity? Why would it obligate us to treat them (or refrain from treating them) in certain ways? Why should we believe that supernatural properties can supply us with the appropriate sort of reasons if natural properties cannot? As with cosmological questions, appealing to God defers the questions rather than answering them. In the moral case, one might add, it seems to do so in a rather unattractive way: It turns out that the reasons we have to respect other persons are rather like the reasons we have to respect property—flowing not from anything intrinsic to the object, but from the consideration due some third party who is the real source of value.
One way to highlight what’s wrong with this picture is by reflecting on the familiar but confused idea implicit in the observation: “You will not find the principle of absolute human equality in evolutionary theory, or universal human rights anywhere in physics.” Nor an account of plate tectonics in calculus, a diagnosis of schizophrenia in game theory, or the concept of Turing completeness in astronomy. It is not some kind of contingent disappointment that physics and biology have not discovered dutyons mixed in somewhere with the bosons and protons, or failed to detect the Rights Field generated by the human body: The kinds of facts studied by the natural sciences are, more or less by definition, not normative facts. But the same goes for supernatural facts. If there is a God, we still need ethics to get us across the gap to ought. Facts about the divine, if we had any, would just join natural facts in the pool of data for secular moral philosophy. [Addendum: This graf is not meant to take a position on the more contentious question of whether any natural facts—including facts about mental states—could be normative facts.]
Ross is certainly correct that we owe a debt to thinkers in the Christian tradition—who in turn owe one to pagan thinkers of ancient Greece and Rome—but it’s far less clear that the value of their contributions rests crucially on their theistic metaphysical trappings. Aquinas thought that moral law could be derived by human reason from reflection on natural facts. John Locke may have peppered his political philosophy with a generous dose of theology, but it’s not at all obvious that what was always most interesting and compelling in his arguments requires supernatural support. For that matter, Newton was famously quite devout, and thought the physical laws he described ordained by God. But it turns out that F=MA even after you reject that premise: Physical law (like moral law?) does not require a lawgiver. None of which is to deny there are plenty of hard problems left for modern moral philosophers to solve, but they’re mostly problems that were obscured rather than seriously addressed by theology.
Ross closes with a pitch to modern liberals who wish to preserve ideals like human rights, suggesting that “for all its strange claims and confounding commandments, [Christianity] might still provide a better home for humankind than whatever destination our civilization is headed for.” This gets us to the odd circularity that’s always at the heart of moralized defenses of religion. The notion seems to be that someone not (yet) convinced of Christian doctrine would have strong reasons—strong humanistic reasons—to hope for a world in which human dignity and individual rights are respected. But then why aren’t these reasons enough to do the job on their own? If Christian doctrine is true, then external considerations are irrelevant to the truth of whatever normative beliefs it supports. If it is false, and our moral beliefs are unsustainable without this false premise, then we should be glad to be rid of false and unjustifiable beliefs. If we think it would be awful to discard those beliefs, then that awfulness is sufficient reason to hang onto them without any religious scaffolding. The only coherent version of this argument is that people who don’t think about it very hard will more readily believe that the religious version of this story provides reasons to respect rights, and comport themselves accordingly. If that were true, it might lead us to hope most people continue to hold the relevant false beliefs, but such pessimism seems premature.