I’ve been reading Cryptography Engineering by Niels Ferguson, Bruce Schneier, and Tadayoshi Kohno, on the theory that someone who writes about privacy and surveillance as much as I do ought to have a somewhat more detailed understanding of how modern cryptosystems work, even if I’m never going to be competent to work with the actual code. At one point, the authors mention a potential problem with certain kinds of ciphers. Stream ciphers work by combining a secret cryptographic key with a (supposedly) unique value—a “number used once,” or “nonce”—to generate a “keystream.” The keystream is then XORed with the plaintext message to produce the encrypted ciphertext.
For the non-computer-geeks: that just means that for every bit in the sequence of ones and zeroes that makes up the plaintext, if the keystream has the same value in that position, then the corresponding bit of the ciphertext will get written as a 0, and if they have different values in that position, the corresponding bit of the ciphertext gets written as a 1. (This corresponds to the logical operation “exclusive or”: It outputs a 1, meaning “true,” just in case one or the other but not both of the inputs is true.) So, for instance, the capital letter “A” is normally encoded as the binary string: 01000001. A lowercase “z” is represented as 01111010. If you XOR them together, you get: 00111011. If you XOR in the “z” again, you get “A” back out… but that assumes you know at least one of the two original pieces of the puzzle: There’s a vast number of different ways to XOR two bytes together to produce 00111011.
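For the Python-inclined, the whole round trip fits in a few lines (a toy illustration of the bitwise math, nothing cryptographic about it):

```python
# XOR the ASCII bytes for "A" (0b01000001) and "z" (0b01111010).
a = ord("A")  # 0b01000001
z = ord("z")  # 0b01111010
mixed = a ^ z

print(format(mixed, "08b"))  # 00111011

# XORing "z" in a second time recovers "A" -- XOR is its own inverse.
print(chr(mixed ^ z))  # A
```

The second step is the whole trick: XORing the same value twice cancels it out, which is exactly why XORing the keystream into the ciphertext again decrypts it.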
In theory, there should be no way to reverse the process without knowing the keystream, which requires knowing the cryptographic key. But there’s a loophole: If the designer of the system messes up and allows that supposedly-unique “nonce” to be reused, then you end up with two messages that have been encrypted (XORed) with the same keystream. That doesn’t tell you what the keystream is. But if an attacker knows which two messages have been encrypted with the same keystream, he can just XOR those two ciphertexts together. The result is to mathematically cancel out the keystream, and give you the same result as if you’d just XORed the two original plaintexts together. Once you’ve got this, Schneier et al. warn, an attacker will often be able to easily reverse the process and decompose that into the two original messages—provided the original messages aren’t just random gibberish, but something that exhibits patterns, like written English. But they didn’t bother explaining exactly how this could be done, so I ended up spending 15 minutes doodling on a legal pad trying to suss out how an attack would work. Even some of my geekier friends seemed to think it wasn’t possible when I floated the question on Twitter—and for some cases, it won’t be. For instance, if the two original messages are identical—meaning they have the same value at every bit position—then the result of XORing them is always going to be a string of zeroes, which makes it obvious the two initial messages were identical, but doesn’t give you any hint at the content of the messages.
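The cancellation is easy to see in a few lines of Python. This is a sketch, not a real cipher: random bytes stand in for a stream cipher’s keystream, and the two equal-length messages are made up for illustration:

```python
import os

def xor_bytes(a, b):
    """XOR two equal-length byte strings position by position."""
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"attack at dawn!"
p2 = b"retreat by sea!"          # same length, to keep the sketch simple
keystream = os.urandom(len(p1))  # stand-in for a real cipher's keystream

# The nonce-reuse blunder: the same keystream encrypts both messages.
c1 = xor_bytes(p1, keystream)
c2 = xor_bytes(p2, keystream)

# XORing the two ciphertexts cancels the keystream entirely,
# leaving exactly the XOR of the two plaintexts.
assert xor_bytes(c1, c2) == xor_bytes(p1, p2)
```

The attacker never learns the keystream itself; it simply drops out of the equation, leaving the two plaintexts XORed together.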
Special cases aside, though, there definitely are some generally viable strategies for decomposing a file generated by XORing two messages—let’s assume they’re ordinary written English in ASCII character format—back into the original pair of texts. How would you go about it? I’ll update the post with the solutions that I came up with (or found online) later this weekend.
Update: I’m pleased, though not at all surprised, to see that I have a bunch of very smart readers who came up with basically all the strategies I did, and in some cases stated them with a good deal more sophistication than I could have. You’re probably better off just reading the comments, but I’ll summarize the basic ideas below the fold.
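One of the workhorse strategies, usually called “crib dragging,” is easy to sketch in Python: slide a guessed fragment of likely English (the “crib”) along the XORed file, and wherever the guess actually occurs in one message, XORing it in reveals the corresponding fragment of the other. The messages and the crib below are invented for illustration:

```python
def crib_drag(xored, crib):
    """Slide a guessed plaintext fragment (a 'crib') along p1 XOR p2.

    Wherever the crib actually occurs in one message, XORing it in
    exposes the corresponding fragment of the *other* message.
    """
    hits = []
    for i in range(len(xored) - len(crib) + 1):
        window = bytes(x ^ c for x, c in zip(xored[i:], crib))
        # Keep offsets where the result is printable ASCII -- a crude
        # but effective filter for "looks like it could be English."
        if all(32 <= b < 127 for b in window):
            hits.append((i, window.decode("ascii")))
    return hits

p1 = b"meet me at the usual place tonight"
p2 = b"the plans have changed completely!"
xored = bytes(a ^ b for a, b in zip(p1, p2))

# Guessing that " the " appears somewhere in one of the messages:
for offset, fragment in crib_drag(xored, b" the "):
    print(offset, repr(fragment))
```

On these toy messages, the true hit at offset 10 uncovers “have ” from the second message, alongside whatever printable false positives survive the crude filter; a human (or a dictionary check) rules those out quickly, and each recovered fragment becomes a longer crib for the next pass.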
The only possible explanation is that Ridley Scott has a Duke Brothers–style bet running with George Lucas: Who can produce the most crushingly disappointing prequel to a beloved classic of late-70s science fiction cinema? There’s no other way to account for the tedious, incoherent two-hour train wreck that is Prometheus—a film whose powerhouse ensemble cast and stunning visuals ultimately fail to rescue it from a script that feels like it was fished from a dumpster of rejected SyFy Original Movie treatments.
How bad is it? The one and only character who doesn’t feel like a robot is, in fact, a robot. The protagonists are so flat and irritating that you end up rooting for the aliens to wipe out humanity—just as long as these guys die first. Nothing about this movie makes sense—not in the cool David Lynchian “makes you think” way, but rather in a “you didn’t think very much about this script, did you?” way. It’s what happens when nobody tells you your après-bong dorm conversations were not, in fact, super deep. A partial list of plot holes and problems—which will contain many, many spoilers—follows, just in case you’re still planning on throwing away your hard-earned money. If you’re really that intrigued by the premise, though, I’d recommend just watching Stargate again.
Under current law, this blog post will remain under copyright until 70 years after my death—which if I’m lucky means a century or more from the date of authorship. That’s an insanely long time when you consider that most economic studies have shown there’s almost no marginal incentive effect on production once you extend copyright terms much beyond the original span: 14 years renewable once, or 28 years total. Why would we needlessly lock away our own culture for so long?
One popular answer is the Mickey Mouse Theory. Though the effective commercial lifespan of the vast majority of copyrighted works is just a few years, a very few—like some of Disney’s iconic properties—continue to be immensely profitable for much longer. The owners of these properties then throw gobs of money at Congress, which ritualistically serves up a retroactive extension whenever these come within spitting distance of the public domain in order to protect their cash cows (or mice, as the case may be).
No doubt there’s something to that. Yet if that were the sole concern, you’d think the content industries would prefer a renewal structure that maxed out at the same term. The cost of renewing the registration of their profitable (or potentially profitable) works would be trivial for the labels and studios, but they’d also gain access to orphan works that nobody was making any use of. Our system, by contrast, seems perversely designed not just to provide extended protection for revenue-generating works, but to guarantee a minimal public domain.
Here’s an alternative hypothesis: Insanely long copyright terms are how the culture industries avoid competing with their own back catalogs. Imagine that we still had a copyright term that maxed out at 28 years, the regime the first Americans lived under. The shorter term wouldn’t in itself have much effect on output or incentives to create. But it would mean that, today, every book, song, image, and movie produced before 1984 was freely available to anyone with an Internet connection. Under those conditions, would we be anywhere near as willing to pay a premium for the latest release? In some cases, no doubt. But when the baseline is that we already have free, completely legal access to every great album, film, or novel produced before the mid-80s—more than any human being could realistically watch, read, or listen to in a lifetime—I wouldn’t be surprised if our consumption patterns became a good deal less neophilic, or at the very least, prices on new releases had to drop substantially to remain competitive.
If that’s right, there’s a perverse sense in which retroactive extensions for absurd lengths of time might actually, obliquely, serve copyright’s constitutional imperative to “promote the progress of science and useful arts”: Not by directly increasing the present value of newly produced works, but by shrinking the pool of free alternatives to the newest works. (Of course, any law restricting non-consumptive, non-commercial entertainment would have a similar effect.) If that’s true, though, it’s not enough in itself to justify the longer terms: The question is whether the marginal new content is actually worth losing universal free access to the older material. For reasons unclear to me, there often seems to be an undefended assumption that more newer stuff, whatever the quality, outweighs wider access to existing content at any conceivable margin. I’m not sure how you’d go about quantifying that, but it strikes me as wildly implausible on its face.
Boing Boing’s Xeni Jardin recently unleashed some righteous fury on the “steaming turd of an opening line in David Streitfeld’s otherwise serviceable New York Times piece about [...] gender discrimination in Silicon Valley.” The offending line: “Men invented the Internet.”
The thing is—and hold the rotten fruit, I’m going somewhere with this—that’s actually pretty accurate. The putative counterexamples Jardin offers are so strained that they end up reinforcing rather than refuting the claim. Grace Hopper was an important computing pioneer, but her achievements have to do with the creation of early programming languages (notably a predecessor to COBOL), not the Internet. Radia Perlman wrote a key networking protocol, and should probably be more widely known for it, but she did it in 1985, well after the invention of ARPANET and TCP/IP. Ada Lovelace, by far the most famous of this trio, is also the one with the weakest claim to an original contribution: She was basically a press agent for Charles Babbage, and her reputation as the “first computer programmer” is based on her single significant paper, published in 1843, which included a description of an algorithm actually written by Babbage. If we’re really talking about the 15 or 20 people who could most reasonably be called “inventors of the Internet”—as opposed to “people who did a cool thing related to computers”—we are, in fact, talking about a bunch of guys. If we go with the broader “cool thing with a computer,” we’re no longer exclusively talking about guys, but until the last few decades, it’s still pretty disproportionate.
The correct takeaway from this, however, is not “herp derp, women can’t do math.” It’s that the social costs of sexism are really, really high. If, despite massive cultural and institutional barriers, significant numbers of women were making important contributions at the highest level all along, but denied credit, that would obviously be grossly unfair to the women in question. But it would be sort of a wash from the perspective of overall social utility: The allocation of credit is different, but society still gets the benefit of the brightest women’s contributions. The grimmer alternative is not that the wrong people get the credit, but that important innovations just don’t happen because the pool of brainpower available to tackle important social goals is needlessly halved—the potential female counterparts of Vint Cerf and Bob Kahn never got the opportunity to accelerate the progress of the Internet because, at the time, hostile institutions froze them out, or antiquated norms of femininity deterred them from obtaining STEM educations in the first place. That’s a much, much bigger loss.
It’s natural that we want to look for inspiration to the members of marginalized groups whose incredible achievements required surmounting equally incredible obstacles, but overselling the success stories can also subtly reinforce the complacent view that Genius Always Finds A Way, regardless of social arrangements, even if it’s not properly recognized until much later. The depressing reality is that it very often doesn’t. And the deeper the roots of the inequality—the more culturally entrenched it is—the longer we should expect inequality in achievement to persist even when the most obvious formal barriers have been eliminated. It’s worth pausing to belatedly recognize the neglected heroines who did overcome the odds, but insisting that there’s been some hidden parity of contributions all along actually seems to risk underselling the gravity of the collective harm we’ve done ourselves. Sexism has consequences—and it has left all of us vastly worse off.
Ross Douthat thinks so. Responding to my previous post on this, he writes:
I have indeed read my Euthyphro, and my response is basically the conventional Christian (and Jewish) response, which is that the dilemma Plato raises is a false one. Virtue is not something that’s commanded by God, the way a magistrate (or a whimsical alien overlord) might issue a legal code, but something that’s inherent to the Christian conception of the divine nature. God does not establish morality; he embodies it. He does not set standards; he is the standard. And even when he issues principles or precepts through revelation (as in the Ten Commandments or the Sermon on the Mount) he isn’t legislating in the style of Hammurabi or Solon. Instead, he’s revealing something about his own nature and inviting us to conform ourselves to the standards that it sets. [....]
Certainly no serious Christian moralist has ever suggested that moral problems are “a black box” that “we don’t need to worry our pretty little heads about” because God will always tell us what to do. Rather, the possibility of God’s existence — and with it, the possibility that moral laws no less than physical laws correspond to an actual reality, or Reality — is what makes those problems genuinely meaningful and interesting (as opposed to just innings in an “ethics game”) and lends the project of moral reasoning its coherence. The idea of God doesn’t replace secular moral reasoning, in other words, but it grounds this reasoning in something more durable than just aesthetic preference.
This, I think, helps illustrate my original point quite nicely. Ross evidently thinks this counts as some sort of explanation of how there might be moral truths. I think it is a classic virtus dormitiva—a series of grammatically well-formed strings masquerading as propositions. It’s not much of an explanation to say Zeus causes thunderstorms unless you have an account of how Zeus does it.
My claim had never been, for what it’s worth, that God is a “black box” because it removes the need for moral deliberation about which specific acts are right; it’s a black box because saying “God” or “divine nature” or whatever doesn’t actually solve—or even make a gesture in the direction of solving—the question of how there could be normative facts or properties. If God is the standard, why ought we accept the standard to emulate it? How could a natural fact about God—even if you call it a “supernatural” fact, whatever that distinction amounts to—constitute a reason? If the fact that some action will cause suffering isn’t adequate motivation to avoid it without something further, why is the fact that the divine nature abhors suffering (or sin, or whatever we think) supposed to do any better? Why do we imagine someone could (rationally?) greet the first fact with a shrug, but not the second? Why is it more meaningful and interesting for moral rules to “correspond to reality” than to exist in some sort of “ethics game”? Are “meaningful” and “interesting” also natural properties, or just part of a meaningfulness-and-interestingness game? Every canonical modern metaethical question can be repeated with undiminished force after we accept (arguendo) everything Ross says here.
Even if you were fully convinced of the existence of God, you’d still be better off consulting Moore or Mackie or Parfit or Nagel on these questions than any theologian. Many modern moral writers think they can show how (and that) objective moral truths exist, and reasonable people can disagree about how successful these accounts are. But every one of the more prominent ones is vastly more persuasive and sophisticated than this “God is the standard” stuff, again, even taking for granted the existence of a God. Some, like Parfit, assert that there are objective moral truths, while allowing they can’t (yet) give a fully adequate metaphysical account of what this entails. But neither does the theologian, beyond slapping a name on the question and confusedly calling it an answer. Parfit at least has some interesting things to say on the matter because he recognizes the need for a further argument—one that might, if nothing else, undermine our sense that nothing could possibly fit the bill.
One might suggest there is no further explanation to be had: That it’s a brute fact that normative properties are part of the furniture of reality (or Reality)—which is what the appeal to God amounts to, with a boatload of other baggage tacked on for good measure. But that answer works as well for the secular moralist, and parsimony militates against unnecessarily stapling that answer to a lot of not-particularly-plausible Middle Eastern folklore. If the answer has something to do with it being in our own (divinely created) nature to respond to these facts as reasons, then again, God’s role in the explanation is either redundant or question begging. If it’s enough for us to be configured with a disposition to accept certain facts as reasons, then Simon Blackburn’s quasi-realism, or an argument along the lines of Nagel’s in The Possibility of Altruism, works well enough on its own. If it’s the divine origin of that disposition that’s supposed to make this account satisfying, we’re back to circularity.
There is, incidentally, a pretty intriguing analogy here with the contemporary mind/body problem. There, too, we have a domain of phenomena—specifically, well… phenomena—that fit uneasily with a materialistic picture of the world. If the eliminativist solution—asserting that the apparent domain is merely illusory—is unattractive in ethics, it is quite literally unbelievable with respect to experiential phenomena. An alternative in both spheres is to show that the tension is illusory: That natural facts just are phenomenal or normative facts, and to explain why this mistakenly seems conceptually impossible to us. A third option—in the modern era most prominently taken by Chalmers in the mental realm, Moore in the moral—is to allow that we need a metaphysically distinct domain of properties and facts whose relation to natural properties and facts remains somewhat obscure to us (perhaps in part because we’re fuzzy on exactly what we mean by “natural”). It is still more obscure, however, what we are supposed to believe is added to this third option by appending: “Oh, and God!” or “also… Souls!” All it really seems to do, if we find ourselves compelled to take option three, is to be satisfied with a maximally obscure understanding of the relationship—and obscure the obscurity with religious language.
In that sense, looking to something like a “divine nature” for answers to these questions is worse than superfluous; it’s a dangerous red herring. It’s like proposing an obscure form of matter as constituting the truth of logical propositions, or that the property of being morally right is identical with the property of being cerulean blue. It is not what the right answer to the question could possibly look like—even if we harbor no doubts about the existence of matter or cerulean—and tends to distract us from the effort to frame the question in a way that might be answered satisfactorily. It’s not that the theologians had a coherent picture that their secular successors have failed to replicate; it’s that only in light of the modest progress made in the past century does it become obvious how complete the failure of the preceding millennia really was. But as Socrates taught us at the outset, the recognition of ignorance should be welcomed as the beginning of wisdom.
Update: A commenter inadvertently illustrates my point more succinctly with the following—apparently sincere—argument for how theism “solves” the is-ought gap:
1. Everything God says is true
2. God says I ought to love my neighbor as myself
3. Therefore ‘I ought to love my neighbor as myself’ is true.
4. Therefore, I ought to love my neighbor as myself. (This follows from standard inference rules of deontic logic.)
If we’re accepting that as valid, though, this is needlessly cumbersome. We can skip to:
1. “I ought to love my neighbor as myself” is true.
You can, of course, plug any proposition you like into the first version of the “proof,” including “square circles exist” and “colorless green ideas sleep furiously.” If the question is how there could be normative truths, neither “proof” actually tells you that; both simply assert that there are such truths. God isn’t doing any work here; it’s just providing cover for smuggling in the premise.
Matt Yglesias points to recent research questioning the oft-asserted link between early childbearing and later poverty:
Kearney and Levine used data on miscarriages to isolate the impact of giving birth from background characteristics that may contribute to a decision to give birth. When used this way as a statistical control, the negative consequences of teen childbirth appear to be small and short-lived. Young women who gave birth and young women who miscarried have similarly bleak economic outcomes. Similarly, when you compare teen mothers not to the general population but to their own sisters who aren’t teen moms “the differences are quite modest.”
Part of this is unsurprising: In their excellent book Promises I Can Keep (which I reviewed forever ago) Kathryn Edin and Maria Kefalas chronicle in some detail how single motherhood is often a powerful source of meaning and identity for poor young women whose already unattractive economic prospects make the opportunity costs seem relatively slight. So it’s hardly news that there’s causation in that direction—from poverty to early, often unmarried childbearing. But it should be quite shocking that there’s no further effect, that the causation here is unidirectional, rather than a feedback loop. Can it really be the case, as this summary seems to suggest, that the added burden of raising a child alone makes so little difference in long-term economic prospects? It’s implausible enough on its face that it’s tempting to look for ways the effect might be obscured, rather than nonexistent.
Unfortunately, the article itself is paywalled, so all I’ve got to go on is Matt’s précis. But the obvious question is whether this is just picking up the arbitrariness of distinguishing between single “teen” and “young non-teen” pregnancies. We mostly talk about the former because, well, it can sound creepy to suggest that adult reproduction is a problem. But in terms of the effect on education and career, becoming a single mom at 21 is probably pretty disruptive too, even if it’s not an obstacle to finishing high school. (In some cases it might even be more so, since the teen parent is more likely to still be living at home, with family resources and support networks to draw upon—and a child old enough for school when it comes time to enter the work force.) If that describes a large enough subset of the comparison group—the sisters and the women who miscarry—then it might just tell us that the effects of single teen parenthood are similar to those of single young parenthood more generally. If that’s what’s going on, it would be an important reminder that nothing magical actually happens when the odometer rolls over, but wouldn’t really hit the question of whether delaying childrearing to (say) the early 30s rather than the early 20s would reduce poverty.
I was naturally pleased to hear the New York Times had sent a reporter to cover the panel on “Freedom and the Panopticon” I moderated at the PEN World Voices Festival this weekend—but my jaw dropped a little at this bizarre paragraph in the writeup by the Times’ Larry Rohter:
The panel’s moderator was Julian Sanchez, a research fellow at the Washington-based Cato Institute, a libertarian advocacy organization whose donors include some of the country’s biggest corporations. His opening remarks and subsequent questions focused on the emergence of “the surveillance state,” largely glossing over the role that corporations play in the creation and maintenance of schemes of surveillance, and so it fell to other participants, like Catherine Crump of the American Civil Liberties Union, the Scottish science fiction novelist Ken MacLeod and Ms. Adamesteanu, to bring corporations into the discussion.
So first, I have to ask: Which specific “country’s biggest corporations” are those, exactly? Was there some research behind that insinuation, or are we just sort of ad libbing here? Because last I checked, corporate contributions were a little under 2% of Cato’s annual budget, as I could’ve told the reporter if he’d bothered to walk up after the panel and ask. Offhand, I don’t know whether we get any money from the companies most involved in collecting personal data—I try to ignore funding precisely to avoid any possible subconscious influence—but I’m betting the reporter didn’t bother to check. The authors on the panel, of course, are published by corporations, and need their books sold at Amazon and other large retail outlets, which probably makes them a good deal more financially dependent on corporate goodwill than I am, but this oddly didn’t provoke any speculative tangents about why they failed to go after Amazon by name.
Second, do we really need to start fabricating ulterior motives to explain why a researcher who specializes on national security spying, in an introduction that was primarily about literary metaphors, might focus on government surveillance when selecting examples? With a panel consisting of a lawyer who sues the government for a living, two novelists who lived in and wrote about communist surveillance states, and a science fiction writer whose latest book was centrally about government monitoring of the citizenry? Really? This is a big mystery that can only be unraveled by following the money? I guess I could have explicitly cashed out my remarks on the inadequacy of Orwell’s metaphor in the modern context by preempting what Catherine had planned to say about the explosion of private data gathering—which I agreed with completely, for what it’s worth—but it was, you know, a panel.
I’ll link my prepared remarks when they’re up at PEN, at which point readers can judge for themselves, but I’m pretty confident that they wouldn’t have inspired any commentary if they’d been delivered verbatim by someone from the ACLU. This is just a reporter injecting a glaringly irrelevant aside for the purpose of taking a cheap potshot at a think tank he obviously doesn’t care for. Which I normally try to be a little more Zen about, but geez, is it really such a psychological impossibility for some people to type the words “Cato Institute,” in any context, for any reason, without ginning up some horseshit pretext to editorialize?
There’s a widespread sense—of debatable historical accuracy, but widespread all the same—that we’re living in an era of especially pronounced political polarization, with a correspondingly poor ratio of tribal slogan slinging to meaningful democratic deliberation. One possible explanation for this is that the massive explosion of our media ecosystem makes it increasingly possible for us to construct ideologically congenial “filter bubbles” that provide us a rich enough stream of information to occupy all our available media consumption time. There are so many blogs, publications, podcasts, radio programs, and Twitter feeds that we can have a superficial impression of great variety, while only ever encountering information tailored to reinforce our preexisting worldview—conditions under which we know the median member of the group tends to adopt more extreme views over time. Our filter algorithms, as Eli Pariser argues at the link above, are increasingly doing this for us automatically, so that we may not even be aware of the echo chambers we’re constructing around ourselves. I assume most readers are reasonably familiar with this story.
But there’s another plausible story I think we can tell grounded in the ways information technology enables disagreement. Plenty has been said about the salutary effects of a more egalitarian media ecosystem, in which it’s far easier for anyone to weigh in on matters of public concern; to “talk back” to politicians, journalists, pundits, and others who once monopolized the microphone; or to become at least part-time pundits themselves.
The upside of this is that it’s much less likely that a smart contribution or novel perspective gets missed because it’s excluded by gatekeepers. The downside is that there’s a lot more crap to ignore. Any commenter on politics or public affairs whose audience reaches a certain size gets a level of feedback—via email, Twitter, blog posts and comments—that would have been unthinkable for any but the few most prominent public intellectuals a generation ago. Much of it is insightful and constructive. A whole lot runs the gamut from rude and ill-informed to semi-literate and vulgar. If the pundit is a woman, multiply that latter category by 10 and add a heaping spoonful of unsolicited sexual fantasies.
If, like me, you’re more wonky than partisan, and not especially well known outside a niche audience of folks who follow your issue space closely, this is a minor irritant. But my sense is that if you’re a little more famous (which is a low bar) and a bit more of a generalist (my issue space tends to work as an education filter), it can become an onslaught. You’d expect this to have an effect on the kind of people who end up being generalist political commentators, both by filtering the pool of people who find that an attractive vocation (or avocation) and inculcating certain habits and dispositions over time.
The nice way to say this is that it selects for pundits who have a thick skin—or forces them to quickly develop one. The less nice way to say it is that it forces you to stop giving a shit what other people think. Maybe not universally—you’ll pick out a domain of people whose criticisms are allowed to slip through the armor—but by default.
Probably it always took a healthy ego to presume to hold forth on a wide array of public issues, confident that your thoughts are relevant and interesting to the general populace, or at least the audience for political commentary. But in a media space this dense, it probably takes a good deal more.
If the type and volume of criticism we find online were experienced in person, we’d probably think we were witnessing some kind of est/Maoist reeducation session designed to break down the psyche so it could be rebuilt from scratch. The only way not to find this overwhelming and demoralizing over any protracted period of time is to adopt a reflexive attitude that these are not real people whose opinions matter in any way. Which, indeed, seems to be a pretty widespread attitude. Scan the comments at one of the more partisan political blogs and you get a clear sense that the “other side” consists not so much of people with different ideas, but an inscrutable alien species. I think it’s self-evident that this is an unhealthy development in a democracy, but it may be a coping strategy that our media ecosystem is forcing on us—at least until we find a better one.
In an exchange at Slate with Will Saletan, Ross Douthat writes:
[T]he more purely secular liberalism has become, the more it has spent down its Christian inheritance—the more its ideals seem to hang from what Christopher Hitchens’ Calvinist sparring partner Douglas Wilson has called intellectual “skyhooks,” suspended halfway between our earth and the heaven on which many liberals have long since given up. Say what you will about the prosperity gospel and the cult of the God Within and the other theologies I criticize in Bad Religion, but at least they have a metaphysically coherent picture of the universe to justify their claims. Whereas much of today’s liberalism expects me to respect its moral fervor even as it denies the revelation that once justified that fervor in the first place. It insists that it is a purely secular and scientific enterprise even as it grounds its politics in metaphysical claims. (You will not find the principle of absolute human equality in evolutionary theory, or universal human rights anywhere in physics.) It complains that Christian teachings on homosexuality do violence to gay people’s equal dignity—but if the world is just matter in motion, whence comes this dignity? What justifies and sustains it? Why should I grant it such intense, almost supernatural respect?
Now, I know Ross has read his Euthyphro, but since he talks here as though he hasn’t, I’ll go ahead and make the obvious point: Invoking God doesn’t actually get you very far in ethics, because ascribing “goodness” to a deity or its laws is meaningless unless there’s some independent criterion for this. At best, God gets you two things: First, a plausible prudential internal motivation to behave “morally” (because God will punish you if you don’t), though of the same formal sort as the motivation you might have to obey a powerful state or a whimsical alien overlord. Second, a potential form of “expert validation” for independent moral truths we lack direct epistemic access to, as when we accept certain propositions on the grounds that mathematicians or scientists have confirmed them, even if most of us are incapable of comprehending the detailed proof. But invoking God doesn’t solve any of the problems that secular moral philosophers grapple with—it’s essentially just a way of gesturing at a black box, wherein we’re assured the answer lies, and asserting that we needn’t worry our pretty little heads about it.
If divine commandments are not supposed to be mere arbitrary rules we obey out of fear, then every question Ross thinks confronts the secular moralist reappears within a theistic framework. Why does being made in the image of God, whatever that entails, imbue people with dignity? Why would it obligate us to treat them (or refrain from treating them) in certain ways? Why should we believe that supernatural properties can supply us with the appropriate sort of reasons if natural properties cannot? As with cosmological questions, appealing to God defers the questions rather than answering them. In the moral case, one might add, it seems to do so in a rather unattractive way: It turns out that the reasons we have to respect other persons are rather like the reasons we have to respect property—flowing not from anything intrinsic to the object, but from the consideration due some third party who is the real source of value.
One way to highlight what’s wrong with this picture is by reflecting on the familiar but confused idea implicit in the observation: “You will not find the principle of absolute human equality in evolutionary theory, or universal human rights anywhere in physics.” Nor an account of plate tectonics in calculus, a diagnosis of schizophrenia in game theory, or the concept of Turing completeness in astronomy. It is not some kind of contingent disappointment that physics and biology have not discovered dutyons mixed in somewhere with the bosons and protons, or failed to detect the Rights Field generated by the human body: The kinds of facts studied by the natural sciences are, more or less by definition, not normative facts. But the same goes for supernatural facts. If there is a God, we still need ethics to get us across the gap from is to ought. Facts about the divine, if we had any, would just join natural facts in the pool of data for secular moral philosophy. [Addendum: This graf is not meant to take a position on the more contentious question of whether any natural facts—including facts about mental states—could be normative facts.]
Ross is certainly correct that we owe a debt to thinkers in the Christian tradition—who in turn owe one to pagan thinkers of ancient Greece and Rome—but it’s far less clear that the value of their contributions rests crucially on their theistic metaphysical trappings. Aquinas thought that moral law could be derived by human reason from reflection on natural facts. John Locke may have peppered his political philosophy with a generous dose of theology, but it’s not at all obvious that what was always most interesting and compelling in his arguments requires supernatural support. For that matter, Newton was famously quite devout, and thought the physical laws he described ordained by God. But it turns out that F=MA even after you reject that premise: Physical law (like moral law?) does not require a lawgiver. None of which is to deny there are plenty of hard problems left for modern moral philosophers to solve, but they’re mostly problems that were obscured rather than seriously addressed by theology.
Ross closes with a pitch to modern liberals who wish to preserve ideals like human rights, suggesting that “for all its strange claims and confounding commandments, [Christianity] might still provide a better home for humankind than whatever destination our civilization is headed for.” This gets us to the odd circularity that’s always at the heart of moralized defenses of religion. The notion seems to be that someone not (yet) convinced of Christian doctrine would have strong reasons—strong humanistic reasons—to hope for a world in which human dignity and individual rights are respected. But then why aren’t these reasons enough to do the job on their own? If Christian doctrine is true, then external considerations are irrelevant to the truth of whatever normative beliefs it supports. If it is false, and our moral beliefs are unsustainable without this false premise, then we should be glad to be rid of false and unjustifiable beliefs. If we think it would be awful to discard those beliefs, then that awfulness is sufficient reason to hang onto them without any religious scaffolding. The only coherent version of this argument is that people who don’t think about it very hard will more readily believe that the religious version of this story provides reasons to respect rights, and comport themselves accordingly. If that were true, it might lead us to hope most people continue to hold the relevant false beliefs, but such pessimism seems premature.
This recent xkcd comic implies that Tumblr is on its way to outpacing blogs in popularity or cultural relevance. I’m not at all sure that’s what the graph in question shows, though.
Presumably in the early days of the mass Internet you had a much higher proportion of novice users entering search terms like “Buffy website” or “game review website,” because the whole idea of a website was novel enough to seem like it needed to be included in the specification of what you were looking for—but over time people would have realized this was superfluous.
Something a bit similar has probably happened with blogs, partly out of this sort of familiarity (people realize it’s redundant to search for “Instapundit blog” or “Gawker blog,” for example) but also partly because we’ve integrated blogs into the media ecosystem so fully that they’re much less of a discrete entity from, well, a website. Most major newspapers and magazines now run (or have acquired) at least one, and more often several blogs, with writers producing content on the same site in various forms. The distinctiveness of the form also seems less important as more traditional reported news stories are, quite incidentally, delivered in a “blog” form. So what people are now likely to think, and link, is “Writing over at Slate, Dave Weigel argues…” without splitting hairs about whether the particular item appeared as part of the Weigel blog or was classified as a full-fledged Slate article.
In other words, we’ve all finally gotten it through our heads that all those panels on “blogging versus journalism” were based on a weird category error: Blogging was essentially just a particular kind of display format for writing, which could be pro or amateur, journalism or opinion, a personal essay or a corporate press release. So we understand that whether a piece of content happens to be delivered in a blog format is probably one of the least relevant things about it. That’s especially the case now that so much of our media consumption is driven by social media and aggregators—which means you’re apt to click straight through to a particular item without even noting the context in which it’s delivered, even on sites that do still maintain some kind of meaningful segregation of “articles” and “blog posts.”
As a practical matter, moreover, the ubiquity and integration of blogs means that “blog” is a much less useful search term for narrowing down your results: When everyone casually references blog posts, but actual blogs at publications are often not actually named as blogs (at The Atlantic, for instance, they’re called “Voices” and “Channels”) it’s as likely to distort your results as get you to what you’re looking for in many cases.
Tumblr, by contrast, is still ultimately one domain, and distinctive enough that if you saw something on a Tumblr, you’re apt to remember that it was a Tumblr, both from contextual clues about the site itself, and because there are still some very characteristic types of content that we associate with Tumblrs. So including “Tumblr” in your search terms is actually a really good way to quickly narrow your results so that you find that new Tumblr about MadMen/funny animated GIFs/stupid things people Tweet, as opposed to other kinds of sites which will have different types of characteristic content.
OK, so why dwell at such length on a doodle? Because there’s a general point here about how to interpret trends in online activity—whether it’s Google, Twitter references, Facebook likes, or whatever. The frequency trend over time can’t actually be interpreted straightforwardly without thinking a little bit about both broader changes in the media ecosystem you’re examining and how changing user behavior fits into the specific purposes of the technology you’re tracking. With search, the question isn’t just “are people interested in term X?” but also “is term X a useful filter for generating relevant results given the current universe of content being indexed?” You could, for instance, see a spike in searches for terms like “band” or “music”—not because people are suddenly more interested in bands or music, but because a bunch of popular bands have recently chosen annoyingly generic names like Cults, Tennis, and Girls. (For the same reason, you’d expect a lot more people to search “Girls HBO” than “The Sopranos HBO” or “Game of Thrones HBO”—just looking at the incidence of HBO would give you a misleading picture of people’s interest in HBO programming.)
In the other direction, there’s the familiar snowball effect, perhaps easiest to note in realtime on Twitter: Once a term is trending on Twitter, you can rest assured its incidence will absolutely explode, through a combination of people reacting to the fact (“Oh no, why is Dick Clark Trending? Did he die?” or “Who’s this Dick Clark guy?”) or self-consciously including it in tweets as an attention-grabbing or spamming mechanism, since users are more likely to do searches on terms they see are trending. In principle, then, you could have a set of terms with very similar frequencies in the Twitter population—until one breaks into the trending list by a tiny initial margin and then rapidly breaks away from the pack.
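That runaway dynamic is easy to see in a toy simulation. The sketch below is purely illustrative—the term names, base rates, and the 1.3× “trending boost” are all invented numbers, not real Twitter data—but it shows how a margin of one mention can compound once only the current leader gets the extra visibility:

```python
# Toy model of the trending "snowball": two terms with nearly identical
# base mention rates, where whichever term currently leads gets a
# visibility boost (reactions, spam, curiosity searches).
base_rate = {"term_a": 100, "term_b": 101}  # term_b leads by a hair
boost = 1.3  # hypothetical multiplier for being on the trending list

counts = dict(base_rate)
for step in range(10):
    # Only the single most-mentioned term is shown as "trending"...
    leader = max(counts, key=counts.get)
    for term in counts:
        rate = base_rate[term]
        # ...and the trending term attracts extra mentions each round.
        if term == leader:
            rate = int(rate * boost)
        counts[term] += rate

print(counts)
print("gap:", counts["term_b"] - counts["term_a"])
```

After ten rounds the initial gap of a single mention has grown to several hundred, even though the underlying base rates never changed—which is exactly why raw frequency trends can overstate differences in genuine interest.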
We’ve got such unprecedented tools for quantifying the zeitgeist in realtime that it’s become pretty common to use these sorts of metrics as a rough-and-ready estimate of public interest in various topics over time. Probably most of the time, that works out fine—but it can also give wildly misleading results if we don’t pause to think about how other factors, like context and user purposes, tend to affect the trends.