Boing Boing’s Xeni Jardin recently unleashed some righteous fury on the “steaming turd of an opening line in David Streitfeld’s otherwise serviceable New York Times piece about [...] gender discrimination in Silicon Valley.” The offending line: “Men invented the Internet.”
The thing is—and hold the rotten fruit, I’m going somewhere with this—that’s actually pretty accurate. The putative counterexamples Jardin offers are so strained that they end up reinforcing rather than refuting the claim. Grace Hopper was an important computing pioneer, but her achievements have to do with the creation of early programming languages (notably a predecessor to COBOL), not the Internet. Radia Perlman wrote a key networking protocol, and should probably be more widely known for it, but she did it in 1985, well after the invention of ARPANET and TCP/IP. Ada Lovelace, by far the most famous of this trio, is also the one with the weakest claim to an original contribution: She was basically a press agent for Charles Babbage, and her reputation as the “first computer programmer” is based on her single significant paper, published in 1843, which included a description of an algorithm actually written by Babbage. If we’re really talking about the 15 or 20 people who could most reasonably be called “inventors of the Internet”—as opposed to “people who did a cool thing related to computers”—we are, in fact, talking about a bunch of guys. If we go with the broader “cool thing with a computer,” we’re no longer exclusively talking about guys, but until the last few decades, it’s still pretty disproportionate.
The correct takeaway from this, however, is not “herp derp, women can’t do math.” It’s that the social costs of sexism are really, really high. If, despite massive cultural and institutional barriers, significant numbers of women were making important contributions at the highest level all along, but denied credit, that would obviously be grossly unfair to the women in question. But it would be sort of a wash from the perspective of overall social utility: The allocation of credit is different, but society still gets the benefit of the brightest women’s contributions. The grimmer alternative is not that the wrong people get the credit, but that important innovations just don’t happen because the pool of brainpower available to tackle important social goals is needlessly halved—the potential female counterparts of Vint Cerf and Bob Kahn never got the opportunity to accelerate the progress of the Internet because, at the time, hostile institutions froze them out, or antiquated norms of femininity deterred them from obtaining STEM educations in the first place. That’s a much, much bigger loss.
It’s natural that we want to look for inspiration to the members of marginalized groups whose incredible achievements required surmounting equally incredible obstacles, but overselling the success stories can also subtly reinforce the complacent view that Genius Always Finds A Way, regardless of social arrangements, even if it’s not properly recognized until much later. The depressing reality is that it very often doesn’t. And the deeper the roots of the inequality—the more culturally entrenched it is—the longer we should expect inequality in achievement to persist even when the most obvious formal barriers have been eliminated. It’s worth pausing to belatedly recognize the neglected heroines who did overcome the odds, but insisting that there’s been some hidden parity of contributions all along actually seems to risk underselling the gravity of the collective harm we’ve done ourselves. Sexism has consequences—and it has left all of us vastly worse off.
Ross Douthat thinks so. Responding to my previous post on this, he writes:
I have indeed read my Euthyphro, and my response is basically the conventional Christian (and Jewish) response, which is that the dilemma Plato raises is a false one. Virtue is not something that’s commanded by God, the way a magistrate (or a whimsical alien overlord) might issue a legal code, but something that’s inherent to the Christian conception of the divine nature. God does not establish morality; he embodies it. He does not set standards; he is the standard. And even when he issues principles or precepts through revelation (as in the Ten Commandments or the Sermon on the Mount) he isn’t legislating in the style of Hammurabi or Solon. Instead, he’s revealing something about his own nature and inviting us to conform ourselves to the standards that it sets. [....]
Certainly no serious Christian moralist has ever suggested that moral problems are “a black box” that “we don’t need to worry our pretty little heads about” because God will always tell us what to do. Rather, the possibility of God’s existence — and with it, the possibility that moral laws no less than physical laws correspond to an actual reality, or Reality — is what makes those problems genuinely meaningful and interesting (as opposed to just innings in an “ethics game”) and lends the project of moral reasoning its coherence. The idea of God doesn’t replace secular moral reasoning, in other words, but it grounds this reasoning in something more durable than just aesthetic preference.
This, I think, helps illustrate my original point quite nicely. Ross evidently thinks this counts as some sort of explanation of how there might be moral truths. I think it is a classic virtus dormitiva—a series of grammatically well-formed strings masquerading as propositions. It’s not much of an explanation to say Zeus causes thunderstorms unless you have an account of how Zeus does it.
My claim had never been, for what it’s worth, that God is a “black box” because it removes the need for moral deliberation about which specific acts are right; it’s a black box because saying “God” or “divine nature” or whatever doesn’t actually solve—or even make a gesture in the direction of solving—the question of how there could be normative facts or properties. If God is the standard, why ought we accept the standard to emulate it? How could a natural fact about God—even if you call it a “supernatural” fact, whatever that distinction amounts to—constitute a reason? If the fact that some action will cause suffering isn’t adequate motivation to avoid it without something further, why is the fact that the divine nature abhors suffering (or sin, or whatever we think) supposed to do any better? Why do we imagine someone could (rationally?) greet the first fact with a shrug, but not the second? Why is it more meaningful and interesting for moral rules to “correspond to reality” than to exist in some sort of “ethics game”? Are “meaningful” and “interesting” also natural properties, or just part of a meaningfulness-and-interestingness game? Every canonical modern metaethical question can be repeated with undiminished force after we accept (arguendo) everything Ross says here.
Even if you were fully convinced of the existence of God, you’d still be better off consulting Moore or Mackie or Parfit or Nagel on these questions than any theologian. Many modern moral writers think they can show how (and that) objective moral truths exist, and reasonable people can disagree about how successful these accounts are. But every one of the more prominent ones is vastly more persuasive and sophisticated than this “God is the standard” stuff, again, even taking for granted the existence of a God. Some, like Parfit, assert that there are objective moral truths, while allowing they can’t (yet) give a fully adequate metaphysical account of what this entails. But neither does the theologian, beyond slapping a name on the question and confusedly calling it an answer. Parfit at least has some interesting things to say on the matter because he recognizes the need for a further argument—one that might, if nothing else, undermine our sense that nothing could possibly fit the bill.
One might suggest there is no further explanation to be had: That it’s a brute fact that normative properties are part of the furniture of reality (or Reality)—which is what the appeal to God amounts to, with a boatload of other baggage tacked on for good measure. But that answer works as well for the secular moralist, and parsimony militates against unnecessarily stapling that answer to a lot of not-particularly-plausible Middle Eastern folklore. If the answer has something to do with it being in our own (divinely created) nature to respond to these facts as reasons, then again, God’s role in the explanation is either redundant or question-begging. If it’s enough for us to be configured with a disposition to accept certain facts as reasons, then Simon Blackburn’s quasi-realism, or an argument along the lines of Nagel’s in The Possibility of Altruism, works well enough on its own. If it’s the divine origin of that disposition that’s supposed to make this account satisfying, we’re back to circularity.
There is, incidentally, a pretty intriguing analogy here with the contemporary mind/body problem. There, too, we have a domain of phenomena—specifically, well… phenomena—that fit uneasily with a materialistic picture of the world. If the eliminativist solution—asserting that the apparent domain is merely illusory—is unattractive in ethics, it is quite literally unbelievable with respect to experiential phenomena. An alternative in both spheres is to show that the tension is illusory: That natural facts just are phenomenal or normative facts, and to explain why this mistakenly seems conceptually impossible to us. A third option—in the modern era most prominently taken by Chalmers in the mental realm, Moore in the moral—is to allow that we need a metaphysically distinct domain of properties and facts whose relation to natural properties and facts remains somewhat obscure to us (perhaps in part because we’re fuzzy on exactly what we mean by “natural”). It is still more obscure, however, what we are supposed to believe is added to this third option by appending: “Oh, and God!” or “also… Souls!” All it really seems to do, if we find ourselves compelled to take option three, is to be satisfied with a maximally obscure understanding of the relationship—and obscure the obscurity with religious language.
In that sense, looking to something like a “divine nature” for answers to these questions is worse than superfluous; it’s a dangerous red herring. It’s like proposing an obscure form of matter as constituting the truth of logical propositions, or that the property of being morally right is identical with the property of being cerulean blue. It is not what the right answer to the question could possibly look like—even if we harbor no doubts about the existence of matter or cerulean—and tends to distract us from the effort to frame the question in a way that might be answered satisfactorily. It’s not that the theologians had a coherent picture that their secular successors have failed to replicate; it’s that only in light of the modest progress made in the past century does it become obvious how complete the failure of the preceding millennia really was. But as Socrates taught us at the outset, the recognition of ignorance should be welcomed as the beginning of wisdom.
Update: A commenter inadvertently illustrates my point more succinctly with the following—apparently sincere—argument for how theism “solves” the is-ought gap:
1. Everything God says is true
2. God says I ought to love my neighbor as myself
3. Therefore ‘I ought to love my neighbor as myself’ is true.
4. Therefore, I ought to love my neighbor as myself. (this follows from standard inference rules of deontological logic).
If we’re accepting that as valid, though, this is needlessly cumbersome. We can skip to:
1. “I ought to love my neighbor as myself” is true.
You can, of course, plug any proposition you like into the first version of the “proof,” including “square circles exist” and “colorless green ideas sleep furiously.” If the question is how there could be normative truths, neither “proof” actually answers it; both simply assert that there are such truths. God isn’t doing any work here; it’s just providing cover for smuggling in the premise.
Matt Yglesias points to recent research questioning the oft-asserted link between early childbearing and later poverty:
Kearney and Levine used data on miscarriages to isolate the impact of giving birth from background characteristics that may contribute to a decision to give birth. When used this way as a statistical control, the negative consequences of teen childbirth appear to be small and short-lived. Young women who gave birth and young women who miscarried have similarly bleak economic outcomes. Similarly, when you compare teen mothers not to the general population but to their own sisters who aren’t teen moms “the differences are quite modest.”
Part of this is unsurprising: In their excellent book Promises I Can Keep (which I reviewed forever ago) Kathryn Edin and Maria Kefalas chronicle in some detail how single motherhood is often a powerful source of meaning and identity for poor young women whose already unattractive economic prospects make the opportunity costs seem relatively slight. So it’s hardly news that there’s causation in that direction—from poverty to early, often unmarried childbearing. But it should be quite shocking that there’s no further effect, that the causation here is unidirectional, rather than a feedback loop. Can it really be the case, as this summary seems to suggest, that the added burden of raising a child alone makes so little difference in long-term economic prospects? It’s implausible enough on its face that it’s tempting to look for ways the effect might be obscured, rather than nonexistent.
Unfortunately, the article itself is paywalled, so all I’ve got to go on is Matt’s précis. But the obvious question is whether this is just picking up the arbitrariness of distinguishing between single “teen” and “young non-teen” pregnancies. We mostly talk about the former because, well, it can sound creepy to suggest that adult reproduction is a problem. But in terms of the effect on education and career, becoming a single mom at 21 is probably pretty disruptive too, even if it’s not an obstacle to finishing high school. (In some cases it might even be more so, since the teen parent is more likely to still be living at home, with family resources and support networks to draw upon—and a child old enough for school when it comes time to enter the work force.) If that describes a large enough subset of the comparison group—the sisters and the women who miscarry—then it might just tell us that the effects of single teen parenthood are similar to those of single young parenthood more generally. If that’s what’s going on, it would be an important reminder that nothing magical actually happens when the odometer rolls over, but wouldn’t really hit the question of whether delaying childrearing to (say) the early 30s rather than the early 20s would reduce poverty.
I was naturally pleased to hear the New York Times had sent a reporter to cover the panel on “Freedom and the Panopticon“ I moderated at the PEN World Voices Festival this weekend—but my jaw dropped a little at this bizarre paragraph in the writeup by the Times’ Larry Rohter:
The panel’s moderator was Julian Sanchez, a research fellow at the Washington-based Cato Institute, a libertarian advocacy organization whose donors include some of the country’s biggest corporations. His opening remarks and subsequent questions focused on the emergence of “the surveillance state,” largely glossing over the role that corporations play in the creation and maintenance of schemes of surveillance, and so it fell to other participants, like Catherine Crump of the American Civil Liberties Union, the Scottish science fiction novelist Ken MacLeod and Ms. Adamesteanu, to bring corporations into the discussion.
So first, I have to ask: Which specific “country’s biggest corporations” are those, exactly? Was there some research behind that insinuation, or are we just sort of ad libbing here? Because last I checked, corporate contributions were a little under 2% of Cato’s annual budget, as I could’ve told the reporter if he’d bothered to walk up after the panel and ask. Offhand, I don’t know whether we get any money from the companies most involved in collecting personal data—I try to ignore funding precisely to avoid any possible subconscious influence—but I’m betting the reporter didn’t bother to check. The authors on the panel, of course, are published by corporations, and need their books sold at Amazon and other large retail outlets, which probably makes them a good deal more financially dependent on corporate goodwill than I am, but this oddly didn’t provoke any speculative tangents about why they failed to go after Amazon by name.
Second, do we really need to start fabricating ulterior motives to explain why a researcher who specializes in national security spying, in an introduction that was primarily about literary metaphors, might focus on government surveillance when selecting examples? With a panel consisting of a lawyer who sues the government for a living, two novelists who lived in and wrote about communist surveillance states, and a science fiction writer whose latest book was centrally about government monitoring of the citizenry? Really? This is a big mystery that can only be unraveled by following the money? I guess I could have explicitly cashed out my remarks on the inadequacy of Orwell’s metaphor in the modern context by preempting what Catherine had planned to say about the explosion of private data gathering—which I agreed with completely, for what it’s worth—but it was, you know, a panel.
I’ll link my prepared remarks when they’re up at PEN, at which point readers can judge for themselves, but I’m pretty confident that they wouldn’t have inspired any commentary if they’d been delivered verbatim by someone from the ACLU. This is just a reporter injecting a glaringly irrelevant aside for the purpose of taking a cheap potshot at a think tank he obviously doesn’t care for. Which I normally try to be a little more Zen about, but geez, is it really such a psychological impossibility for some people to type the words “Cato Institute,” in any context, for any reason, without ginning up some horseshit pretext to editorialize?
There’s a widespread sense—of debatable historical accuracy, but widespread all the same—that we’re living in an era of especially pronounced political polarization, with a correspondingly poor ratio of tribal slogan slinging to meaningful democratic deliberation. One possible explanation for this is that the massive explosion of our media ecosystem makes it increasingly possible for us to construct ideologically congenial “filter bubbles” that provide us a rich enough stream of information to occupy all our available media consumption time. There are so many blogs, publications, podcasts, radio programs, and Twitter feeds that we can have a superficial impression of great variety, while only ever encountering information tailored to reinforce our preexisting worldview—conditions under which we know the median member of the group tends to adopt more extreme views over time. Our filter algorithms, as Eli Pariser argues at the link above, are increasingly doing this for us automatically, so that we may not even be aware of the echo chambers we’re constructing around ourselves. I assume most readers are reasonably familiar with this story.
But there’s another plausible story I think we can tell grounded in the ways information technology enables disagreement. Plenty has been said about the salutary effects of a more egalitarian media ecosystem, in which it’s far easier for anyone to weigh in on matters of public concern; to “talk back” to politicians, journalists, pundits, and others who once monopolized the microphone; or to become at least part-time pundits themselves.
The upside of this is that it’s much less likely that a smart contribution or novel perspective gets missed because it’s excluded by gatekeepers. The downside is that there’s a lot more crap to ignore. Any commenter on politics or public affairs whose audience reaches a certain size gets a level of feedback—via email, Twitter, blog posts and comments—that would have been unthinkable for any but the few most prominent public intellectuals a generation ago. Much of it is insightful and constructive. A whole lot runs the gamut from rude and ill-informed to semi-literate and vulgar. If the pundit is a woman, multiply that latter category by 10 and add a heaping spoonful of unsolicited sexual fantasies.
If, like me, you’re more wonky than partisan, and not especially well known outside a niche audience of folks who follow your issue space closely, this is a minor irritant. But my sense is that if you’re a little more famous (which is a low bar) and a bit more of a generalist (my issue space tends to work as an education filter), it can become an onslaught. You’d expect this to have an effect on the kind of people who end up being generalist political commentators, both by filtering the pool of people who find that an attractive vocation (or avocation) and inculcating certain habits and dispositions over time.
The nice way to say this is that it selects for pundits who have a thick skin—or forces them to quickly develop one. The less nice way to say it is that it forces you to stop giving a shit what other people think. Maybe not universally—you’ll pick out a domain of people whose criticisms are allowed to slip through the armor—but by default.
Probably it always took a healthy ego to presume to hold forth on a wide array of public issues, confident that your thoughts are relevant and interesting to the general populace, or at least the audience for political commentary. But in a media space this dense, it probably takes a good deal more.
If the type and volume of criticism we find online were experienced in person, we’d probably think we were witnessing some kind of est/Maoist reeducation session designed to break down the psyche so it could be rebuilt from scratch. The only way not to find this overwhelming and demoralizing over any protracted period of time is to adopt a reflexive attitude that these are not real people whose opinions matter in any way. Which, indeed, seems to be a pretty widespread attitude. Scan the comments at one of the more partisan political blogs and you get a clear sense that the “other side” consists not so much of people with different ideas as of an inscrutable alien species. I think it’s self-evident that this is an unhealthy development in a democracy, but it may be a coping strategy that our media ecosystem is forcing on us—at least until we find a better one.
In an exchange at Slate with Will Saletan, Ross Douthat writes:
[T]he more purely secular liberalism has become, the more it has spent down its Christian inheritance—the more its ideals seem to hang from what Christopher Hitchens’ Calvinist sparring partner Douglas Wilson has called intellectual “skyhooks,” suspended halfway between our earth and the heaven on which many liberals have long since given up. Say what you will about the prosperity gospel and the cult of the God Within and the other theologies I criticize in Bad Religion, but at least they have a metaphysically coherent picture of the universe to justify their claims. Whereas much of today’s liberalism expects me to respect its moral fervor even as it denies the revelation that once justified that fervor in the first place. It insists that it is a purely secular and scientific enterprise even as it grounds its politics in metaphysical claims. (You will not find the principle of absolute human equality in evolutionary theory, or universal human rights anywhere in physics.) It complains that Christian teachings on homosexuality do violence to gay people’s equal dignity—but if the world is just matter in motion, whence comes this dignity? What justifies and sustains it? Why should I grant it such intense, almost supernatural respect?
Now, I know Ross has read his Euthyphro, but since he talks here as though he hasn’t, I’ll go ahead and make the obvious point: Invoking God doesn’t actually get you very far in ethics, because ascribing “goodness” to a deity or its laws is meaningless unless there’s some independent criterion for this. At best, God gets you two things: First, a plausible prudential internal motivation to behave “morally” (because God will punish you if you don’t), though of the same formal sort as the motivation you might have to obey a powerful state or a whimsical alien overlord. Second, a potential form of “expert validation” for independent moral truths we lack direct epistemic access to, as when we accept certain propositions on the grounds that mathematicians or scientists have confirmed them, even if most of us are incapable of comprehending the detailed proof. But invoking God doesn’t solve any of the problems that secular moral philosophers grapple with—it’s essentially just a way of gesturing at a black box, wherein we’re assured the answer lies, and asserting that we needn’t worry our pretty little heads about it.
If divine commandments are not supposed to be mere arbitrary rules we obey out of fear, then every question Ross thinks confronts the secular moralist reappears within a theistic framework. Why does being made in the image of God, whatever that entails, imbue people with dignity? Why would it obligate us to treat them (or refrain from treating them) in certain ways? Why should we believe that supernatural properties can supply us with the appropriate sort of reasons if natural properties cannot? As with cosmological questions, appealing to God defers the questions rather than answering them. In the moral case, one might add, it seems to do so in a rather unattractive way: It turns out that the reasons we have to respect other persons are rather like the reasons we have to respect property—flowing not from anything intrinsic to the object, but from the consideration due some third party who is the real source of value.
One way to highlight what’s wrong with this picture is by reflecting on the familiar but confused idea implicit in the observation: “You will not find the principle of absolute human equality in evolutionary theory, or universal human rights anywhere in physics.” Nor an account of plate tectonics in calculus, a diagnosis of schizophrenia in game theory, or the concept of Turing completeness in astronomy. It is not some kind of contingent disappointment that physics and biology have not discovered dutyons mixed in somewhere with the bosons and protons, or failed to detect the Rights Field generated by the human body: The kinds of facts studied by the natural sciences are, more or less by definition, not normative facts. But the same goes for supernatural facts. If there is a God, we still need ethics to get us across the gap to ought. Facts about the divine, if we had any, would just join natural facts in the pool of data for secular moral philosophy. [Addendum: This graf is not meant to take a position on the more contentious question of whether any natural facts—including facts about mental states—could be normative facts.]
Ross is certainly correct that we owe a debt to thinkers in the Christian tradition—who in turn owe one to pagan thinkers of ancient Greece and Rome—but it’s far less clear that the value of their contributions rests crucially on their theistic metaphysical trappings. Aquinas thought that moral law could be derived by human reason from reflection on natural facts. John Locke may have peppered his political philosophy with a generous dose of theology, but it’s not at all obvious that what was always most interesting and compelling in his arguments requires supernatural support. For that matter, Newton was famously quite devout, and thought the physical laws he described were ordained by God. But it turns out that F=MA even after you reject that premise: Physical law (like moral law?) does not require a lawgiver. None of which is to deny there are plenty of hard problems left for modern moral philosophers to solve, but they’re mostly problems that were obscured rather than seriously addressed by theology.
Ross closes with a pitch to modern liberals who wish to preserve ideals like human rights, suggesting that “for all its strange claims and confounding commandments, [Christianity] might still provide a better home for humankind than whatever destination our civilization is headed for.” This gets us to the odd circularity that’s always at the heart of moralized defenses of religion. The notion seems to be that someone not (yet) convinced of Christian doctrine would have strong reasons—strong humanistic reasons—to hope for a world in which human dignity and individual rights are respected. But then why aren’t these reasons enough to do the job on their own? If Christian doctrine is true, then external considerations are irrelevant to the truth of whatever normative beliefs it supports. If it is false, and our moral beliefs are unsustainable without this false premise, then we should be glad to be rid of false and unjustifiable beliefs. If we think it would be awful to discard those beliefs, then that awfulness is sufficient reason to hang onto them without any religious scaffolding. The only coherent version of this argument is that people who don’t think about it very hard will more readily believe that the religious version of this story provides reasons to respect rights, and comport themselves accordingly. If that were true, it might lead us to hope most people continue to hold the relevant false beliefs, but such pessimism seems premature.
This recent xkcd comic implies that Tumblr is on its way to outpacing blogs in popularity or cultural relevance. I’m not at all sure that’s what the graph in question shows, though.
Presumably in the early days of the mass Internet you had a much higher proportion of novice users entering search terms like “Buffy website” or “game review website,” because the whole idea of a website was novel enough to seem like it needed to be included in the specification of what you were looking for—but over time people would have realized this was superfluous.
Something a bit similar has probably happened with blogs, partly out of this sort of familiarity (people realize it’s redundant to search for “Instapundit blog” or “Gawker blog,” for example) but also partly because we’ve integrated blogs into the media ecosystem so fully that they’re much less of a discrete entity from, well, a website. Most major newspapers and magazines now run (or have acquired) at least one, and more often several, blogs, with writers producing content on the same site in various forms. The distinctiveness of the form also seems less important as more traditional reported news stories are, quite incidentally, delivered in a “blog” form. So what people are now likely to think, and link, is “Writing over at Slate, Dave Weigel argues…” without splitting hairs about whether the particular item appeared as part of the Weigel blog or was classified as a full-fledged Slate article.
In other words, we’ve all finally gotten it through our heads that all those panels on “blogging versus journalism” were based on a weird category error: Blogging was essentially just a particular kind of display format for writing, which could be pro or amateur, journalism or opinion, a personal essay or a corporate press release. So we understand that whether a piece of content happens to be delivered in a blog format is probably one of the least relevant things about it. That’s especially the case now that so much of our media consumption is driven by social media and aggregators—which means you’re apt to click straight through to a particular item without even noting the context in which it’s delivered, even on sites that do still maintain some kind of meaningful segregation of “articles” and “blog posts.”
As a practical matter, moreover, the ubiquity and integration of blogs means that “blog” is a much less useful search term for narrowing down your results: When everyone casually references blog posts, but actual blogs at publications are often not named as blogs (at The Atlantic, for instance, they’re called “Voices” and “Channels”), in many cases it’s as likely to distort your results as to get you to what you’re looking for.
Tumblr, by contrast, is still ultimately one domain, and distinctive enough that if you saw something on a Tumblr, you’re apt to remember that it was a Tumblr, both from contextual clues about the site itself, and because there are still some very characteristic types of content that we associate with Tumblrs. So including “Tumblr” in your search terms is actually a really good way to quickly narrow your results so that you find that new Tumblr about Mad Men/funny animated GIFs/stupid things people tweet, as opposed to other kinds of sites which will have different types of characteristic content.
OK, so why dwell at such length on a doodle? Because there’s a general point here about how to interpret trends in online activity—whether it’s Google, Twitter references, Facebook likes, or whatever. The frequency trend over time can’t actually be interpreted straightforwardly without thinking a little bit about both broader changes in the media ecosystem you’re examining and how changing user behavior fits into the specific purposes of the technology you’re tracking. With search, the question isn’t just “are people interested in term X?” but also “is term X a useful filter for generating relevant results given the current universe of content being indexed?” You could, for instance, see a spike in searches for terms like “band” or “music”—not because people are suddenly more interested in bands or music, but because a bunch of popular bands have recently chosen annoyingly generic names like Cults, Tennis, and Girls. (For the same reason, you’d expect a lot more people to search “Girls HBO” than “The Sopranos HBO” or “Game of Thrones HBO”—just looking at the incidence of HBO would give you a misleading picture of people’s interest in HBO programming.)
In the other direction, there’s the familiar snowball effect, perhaps easiest to note in realtime on Twitter: Once a term is trending on Twitter, you can rest assured its incidence will absolutely explode, through a combination of people reacting to the fact (“Oh no, why is Dick Clark Trending? Did he die?” or “Who’s this Dick Clark guy?”) or self-consciously including it in tweets as an attention-grabbing or spamming mechanism, since users are more likely to do searches on terms they see are trending. In principle, then, you could have a set of terms with very similar frequencies in the Twitter population—until one breaks into the trending list by a tiny initial margin and then rapidly breaks away from the pack.
We’ve got such unprecedented tools for quantifying the zeitgeist in realtime that it’s become pretty common to use these sorts of metrics as a rough-and-ready estimate of public interest in various topics over time. Probably most of the time, that works out fine—but it can also give wildly misleading results if we don’t pause to think about how other factors, like context and user purposes, tend to affect the trends.
An Arizona man named William Hall is headed back to prison for violating the terms of his parole, which apparently included the following rather dated language:
I will not use an electronic bulletin board system, Internet relay chat channel, DCC chat channel, instant messaging, newsgroup, user group, peer to peer (e.g. Napster, Gnutella, Freenet, etc.).
Hall argued (unsuccessfully) that his use of Facebook and MySpace should not have constituted a violation of those terms. Given that the purpose of the restriction was to prevent him from repeating his offense—attempted sexual exploitation of a minor—it seems pretty natural that those sites would indeed be included, and the facts I’ve looked at suggest Hall not only realized as much, but deliberately sought to circumvent software blocks imposed by his supervisors, so I’m not at all inclined to dispute the result in this particular case. Still, this exchange between the court and Hall’s surveillance officer (or S.O.) caught my eye:
Q: Can you show me where on [the Guidelines] [Hall] is not allowed to use social networking sites?
S.O.: Yes, No. 13. I will not use any electronic bulletin board system[s] and social networking [is] underneath bulletin board system[s] because [users] can post comments and that’s what a bulletin board system is.
The thing is, if any site where users can post comments—whether or not that’s the site’s primary function—counts as an “electronic bulletin board system,” that term now encompasses nearly all of the most popular or useful sites on the Net: Amazon, Ebay, YouTube, Craigslist, Wikipedia, IMDB, Flickr, news sites like NYTimes.com, most blogs… even many corporate websites have some kind of community forum or chat-support function. One can, of course, use most of those sites without accessing the bulletin board function—though Craigslist is a fuzzy case—and even on Facebook, plenty of people just maintain a more-or-less static profile page and photo host, without using the chat or messaging features. Intuitively, you want to apply the rule to exclude the use of “bulletin board” style functions, not entire sites, given how ubiquitous those functions are now. But then you’ve got to figure out which functions those are. A Facebook wall seems bulletin board–ish when comments from others are enabled, but if they’re turned off, the wall is just a string of recent status updates. The chat function presumably counts as “instant messaging,” but what about private messages to friends? Is that “posting a comment” or is it just another form of e-mail, which is conspicuously excluded from the forbidden list, presumably because it’s more or less impossible to work as a white-collar professional without using e-mail? Does reading a solution to a technical problem on a discussion board (perhaps after clicking through a Google link that doesn’t make clear what sort of site the answer is located on) count as “using” a board, or does that require signing up for an account and posting messages?
Many of the other categories raise further questions. Phones and text messaging are presumably allowed, but Apple’s new Messages client effectively obliterates the distinction: Text messaging is instant messaging. Gmail is presumably allowed, while Gchat is not—but you’d be forgiven for failing to see much difference between a threaded e-mail exchange and a Gchat log, and presumably if Google felt like it, they could obliterate the distinction within the user interface entirely. Skype is a VoIP service—presumably equivalent to ordinary phone service—but also has text chat functionality. Listservs operate over e-mail, but are functionally equivalent to “newsgroups” or “user groups” and are typically also accessible via Web-based interfaces that look like any other electronic bulletin board.
Since the great generative power of the Internet rests in its end-to-end architecture, which enables new and unexpected functions to emerge in a distributed way—often from the choices of users behaving in ways the platform creators did not anticipate—it shouldn’t be terribly surprising that a list based on decade-old categories would lead to substantial ambiguity. Which seems like a bit of a problem when the classifications determine how well a parolee can reintegrate into society as a productive professional and community member—not to mention whether they get to remain free or return to lockup. At the very least, you’d think that would be a reason to regularly update the boilerplate, but it also calls into question whether regulating parolees by means of these broad technological categories is really the right way to go about it.
Kashmir Hill is a little disturbed by the public reaction to a controversial iPhone app called “Girls Around Me,” which mined data from the social location platform Foursquare and public profiles on sites like Facebook to create what one breathless critic dubbed “a tool for rapists and stalkers.” Writes Hill:
For one, how do we know that the women who could be found on this map did not want to be visible in this way? A recent Pew study found that women are the savvier sex when it comes to privacy settings, visiting them and ramping them up at much higher rates than men. Those Bostonians who popped up on Brownlee’s map may want to be publicly broadcasting where they are. There are, after all, dating apps, such as Blendr, that do offer exactly that to both men and women. Sometimes we can be found because we want to be found. [....]
The women “exposed” by ‘Girls Around Me’ have chosen to be on Foursquare, and the company tells me that the app was only able to pull up a woman’s Facebook profile if she chose to link it to her Foursquare account. In rejecting and banishing the app, we’re choosing to ignore the publicity choices these women have made (assuming, as Brownlee does, that they did not intend to be that public), in the name of keeping them safe. And we make the ugly assumption that men who might want to check out women in the area have nefarious intentions. If you extend this kind of thinking ‘offline,’ we would be calling on all women to wear burkas so potential rapists and stalkers don’t spot them on the streets and follow them home.
Framed as a privacy issue, the reaction is indeed a little strange. There is no reason to join Foursquare, nor to actively link it to your public Facebook profile, unless you want to publicly share that information: That is the point of the service. Nor, frankly, is it all that much more difficult to do what the app enabled by manually examining nearby locations using Foursquare’s own official client to see where women (or men) have checked in—so it seems like a stretch to say this is one of those cases where technically-public information is being aggregated in a radically game-changing way.
What seems more likely is that the reaction to the app is substantially a result of, as Hill puts it, the “design of ‘Girls Around Me,’ consisting of Bond-style silhouettes of naked ladies dancing and posing provocatively.” Suppose Foursquare were rebranded as “Hookupsquare” (or “Stalkersquare”), or Facebook absorbed by AdultFriendFinder, but everything else about the software remained the same. One assumes they would be a good deal less popular, despite being identical in terms of the information flows they enabled. The “creepiness” would be entirely in the use of that information they appear to endorse, or the dimensions of public information they bring to the fore.
One reason labels are important here is that often when it comes to sex, we like to maintain a deliberate measure of vagueness about exactly what we’re doing. Strangers flirting at a bar or club seldom open with “I’m hoping we might be able to sleep together a little later,” however clear it might be to all concerned that this is ultimately the point—and someone who did would probably seem pretty creepy, even if you’d been harboring the same hope. Shifting online, a big part of the appeal of Facebook is that it serves many of the functions of a dating site without defining the core activity as searching for a romantic partner. Keeping the romantic or sexual functions of an interaction—or a platform for interaction—in the background actually ends up serving those functions by creating a space for intimacy to develop without suggesting that all other communication is just some tedious prelude to fluid exchange.
Privacy is probably a bit of a red herring here, then, but it may seem natural to cast it in those terms because the feeling of objectification may overlap with one of the harms we identify with privacy violations. Why do we dislike the idea of being spied on in the shower—even by people who’ve seen us naked before, and so aren’t really obtaining “new information”? Presumably the same thing we find “creepy” about the guy who’s conspicuously ogling body parts in public—which may make someone feel more physically exposed even though, technically, they already were. And we react differently to the same observation depending on how overt it is. Someone who notices they’re being checked out by a furtive glance—or, if we’re in an ’80s movie, the slowly lowered sunglasses—may not mind, or even regard it as complimentary, at the same time as they’d be repulsed by open leering. Why? Because the attempt to be subtle (even if not so subtle as to escape notice) recognizes the observed both as a sexual body and as a subject with a reciprocal gaze of their own. The leer encompasses its object only as object.
If we think about problems of observation (whether the gaze is digital or physical) primarily in terms of control over information flows, the backlash against “Girls Around Me” can seem confused: It doesn’t render private information more public, and it doesn’t substantively alter the purposes for which that information can be easily used. That doesn’t mean the feeling there’s something creepy or objectionable about the app is misguided, though: It just means not all issues in this sphere are usefully shoehorned into a privacy rubric.
Joel Stein is being roundly booed as a snob for opining in a recent Times roundtable that “Adults Should Read Adult Books” and steer clear of young adult fare. Maybe out of pure contrariness, I’m inclined to offer a qualified defense. It has to be qualified because, let’s face it, I’m a 33-year-old man with an extensive comic book library. I even read all the Harry Potter and Hunger Games books, and I can’t see why that’s any worse a light entertainment than watching an action movie—which takes about as long. Nor—since he mentions the shame of seeing an adult crack one of these tomes on an airplane—are they appreciably less sophisticated or intellectually challenging than any number of spy thrillers, conspiracy yarns, and other airport bookshop staples. None of them contain prose as clunky or appalling as nominal “adult” author Dan Brown churns out. They even provide a broad form of common culture, an easy source of metaphors, because many more of us have time to blow through one of them on a lazy Sunday than can commit to tackling Ulysses or Infinite Jest—which means it’s hard to believe there’s some kind of one-to-one displacement effect.
All that said, while there’s often a surprising amount of thematic sophistication to mine in literature aimed at kids and teens, let’s not kid ourselves that it’s equivalent to what you’ll find in the best literary fiction. Well-rounded adults need their share of that too, and some of the most rewarding of it can be hard going. Infinite Jest is an enormously fun book in many ways, but nobody’s going to honestly call it an easy read. Serious literature is too enjoyable to take an eat-your-lima-beans approach, but like most really worthwhile things, it can be difficult, and there probably is some danger of getting so accustomed to a diet of effortless page-turners that we lose our ability to digest richer food.
Most of us, let’s admit, are fundamentally lazy: After working hard all day, who wants to work in their spare time? Even if we’d be glad we did it at the end, it can be hard to motivate ourselves to pick up Joyce when Rowling beckons, promising fewer demands. So in the same way that a little bit of physical vanity can be healthy, if that’s what it ultimately takes to get you out to the gym or the yoga studio a few times a week, maybe a dollop of snobbery is beneficial in the long run if that’s what pushes us to bear the initial mental strain of reading challenging fiction. Sure, ideally we wouldn’t need it: The intrinsic long-run rewards of the activity would be motivation enough. Ideally everyone would behave decently because it’s the right thing to do, and not out of fear of public shaming or legal penalties.
But realistically, we all need and employ all sorts of social commitment mechanisms to help us overcome akrasia and short-term bias—to do what we reflectively know is better for us in the long term, rather than always and only what’s immediately pleasurable. (Of course, once we’re in the habit of working the relevant muscles, we get more immediate pleasure out of the “difficult” activity too.) That is, ultimately, a huge part of what it means to grow up, to be an adult: Taking the long view rather than acting on your immediate desires and impulses—but the internal fortitude to do this develops through, and is sustained by, all sorts of external sanctions. Perhaps even more so when it comes to our fictional diets than our literal ones, because it’s hard not to notice that you’re developing a paunch and getting winded climbing stairs after a few years of subsisting on junk food and skipping the gym—whereas the way our thinking and personalities are flattened when they’re starved of nutritious fiction can be hard to notice until you get back in the habit and realize what you’ve been missing.
So I’ll go ahead and say I think we’d probably be worse off in a world completely bereft of this kind of cultural snobbery. It’s hard to resist poking fun at the pretentious undergrad lugging some William Gaddis doorstop to the local café so everyone can see what they’re reading—but I’m not sure I’d prefer a world where grown men and women didn’t feel slightly sheepish about settling in with teen lit day after day instead. This probably isn’t an issue for the sort of wordsmith public intellectuals who felt inclined to comment on Stein’s squib: Of course they’re going to read plenty of adult fiction, and of course they’re right to bristle at anyone who’d sneer at them for throwing something a bit lighter into the mix. But that’s not a given for most adults, and a little nagging voice in the back of the head that says “Hey, you’re a grown-ass man/lady, shouldn’t you challenge yourself a bit?” is probably a net cultural asset.