One of the most interesting panels at CFP this week concerned reputation and privacy on social networking sites, and two of the panelists drew special attention to the often repugnant—and sometimes criminal—abuse and harassment directed at female bloggers. What I found particularly fascinating at the time was the problem (potentially, if not in the specific fact patterns the panelists discussed) of decentralized or distributed harassment. That is, we can imagine a case where many hundreds or thousands of people online all behave such that, considered in isolation, no person’s individual actions rise to the level of harassment. We can even imagine it being the case that no individual intends that any other person experience harassment (as opposed to, say, a single instance of criticism). Yet the aggregate effect of all these dispersed actions may easily amount to quite severe harassment in effect (a flood of critical e-mails and comments, possibly enough to crash a server).
Of course, in reality it seems clear that the people who make really vile attacks on female bloggers often do intend harassment, and often the attacks are coordinated, but I was intrigued by the possibility that this need not be so. Then we would have an instance of what the blogger RadGeek, in a truly excellent recent post, has characterized as an “invisible fist”—that is, a toxic spontaneous order, as opposed to the benign sorts celebrated by admirers of Friedrich Hayek and Adam Smith. Seen from another angle, it is a version of Zeno’s millet seed paradox or Derek Parfit’s harmless torturers thought experiment.
But then on Thursday I encountered, quite by chance, the case of Ariel Waldman, which highlighted a quite different set of problems. Waldman is angry because the social messaging site Twitter “refuses to uphold its terms of service,” because while those terms stipulate that users “must not abuse, harass, threaten, impersonate or intimidate other Twitter users,” Twitter has refused to ban another user who had, Waldman says, subjected her to harassment over a period of months. She says Twitter representatives have told her they fear being sued if they cancel the user’s account.
We should get two bits of legal silliness out of the way first. If, in fact, Twitter fears a suit by a user stemming from an account cancellation, that really does seem inexplicable. Twitter is offering a free service, and their terms pretty clearly give them broad discretion to cut off that service at any time. (Which is a good thing, given that they seem to be down constantly. How would you know if you’d been banned?) On the other hand, while I certainly understand why someone would regard those terms as an implicit pledge to bar abusive users, the idea that they’re literally violating some sort of contractual obligation if they fail to do so seems like a non-starter; they explicitly disclaim any such obligation. This is from an archived copy of the TOS from November 2007:
On to the specifics of the case. The putatively harassing “user” in question is actually a kind of collective entity in itself: It’s an account called Confession—which has just abruptly ceased operating—and it actually just works as a feed for a Web interface where users are invited to post their “anonymous sins.” (Think of the online art project PostSecret.) Many users do use the feed to share their secret peccadilloes, impure thoughts, and other such miscellany. Many others just use it as an all-purpose Speaker’s Corner, and many of those seem to have plenty of vulgar thoughts to vent.
That means that, while it seems probable, it’s not entirely clear that the messages Waldman mentions all have the same author. (We know that there were “multiple” such messages over several months, but she does not say how many, and alludes to the specific content of only a few.) The Confession account is really, in effect, a kind of mini-forum in user drag. The closing of that account affects all the participants in the forum in addition to Waldman’s antagonist.
It’s also interesting that most commenters seem to accept at face value the characterization of the offending messages as “harassing,” a description that seems highly questionable. This is not because the content of the messages is remotely defensible. One message, we’re told, calls her a “cunt,” while a couple of others, apparently referring to photographs posted to her Flickr account, opine that Waldman “looks like a … crack whore” or is making “lesbian porn.” My reservation about the label is that, at least colloquially, we use the word “harassment” to describe actions that actively impinge on the victim’s space in some way—making repeated phone calls, say, or sending many unwanted e-mails. By contrast, someone who writes nasty blog posts about another person may prove himself to be a nasty piece of work, but it doesn’t seem natural to think of what’s going on in that case as “harassment.” And however repugnant we find the content of those attacks, we might be disturbed by the practice of an ISP that decided to take a blog down on the basis of such posts.
So which model fits Twitter? Well, it is possible to send messages directly to another user, and it’s not hard to imagine someone carrying out the more traditional sort of harassment by that route. But in that case, the victim has an easy remedy: So long as the harasser is not constantly switching accounts, he can simply be blocked. The one wrinkle is that, as one of my fellow CFP attendees put it, we are now a lot like medieval demons: Speak our names, and we appear. In this context, that means that a tweet from any (unblocked) user prefixed with @username will show up on that user’s “replies” page. So there is a sense in which talking about someone automatically amounts to talking to them, though it’s unclear whether that applies to any of the offensive messages here.
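To make the “speak our names, and we appear” mechanic concrete: the convention is that any token of the form @username routes a tweet to that user’s replies page, whether or not the author intended to address them. The sketch below is purely illustrative—these helper names are my own invention, and Twitter’s actual parsing rules are surely more involved—but it captures the basic logic by which merely mentioning someone becomes messaging them.

```python
import re

# A naive approximation of the @-mention convention. This is NOT
# Twitter's real implementation, just a sketch of the idea: any
# "@" followed by word characters counts as addressing that user.
MENTION_RE = re.compile(r"@(\w+)")

def mentions(tweet: str) -> list[str]:
    """Return every username @-mentioned in a tweet."""
    return MENTION_RE.findall(tweet)

def lands_on_replies_page(tweet: str, username: str) -> bool:
    """True if the tweet would show up on `username`'s replies page,
    i.e. talking *about* the user becomes talking *to* them."""
    return username in mentions(tweet)
```

The point of the sketch is the asymmetry it exposes: blocking filters out messages sent to you deliberately, but the mention convention means a message never aimed at you can still arrive on your doorstep.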
For the most part, though, you only receive messages from other users you have chosen to follow. Which is to say, this arguably looks a lot more like being subject to attack on someone else’s blog than it does like anything we ordinarily describe as harassment. Of course, one might well think that Twitter should nevertheless be in the position of policing what any user may say about any other user, or about anyone at all. But once we acknowledge that this is what we’re talking about—not about preventing “harassment,” but about passing judgment on what sort of offensive remarks go “too far”—it’s a bit easier to understand Twitter’s hesitation.
Whether we think it is good policy for them to do this depends, in part, on how we think of Twitter. Some commenters have criticized the company for poor “community management,” and there’s surely something to this if we actually think of the service as a “community.” Good message boards often have moderators who run through deleting posts that involve vulgar personal attacks on other users, in part because a handful of noxious speakers can easily deter participation by the attacked users. This often sets off a snowball effect, as the departure of more civil contributors lowers the signal-to-noise ratio, further deterring participation, and so on in a vicious cycle.
By contrast, if we think of it as a series of overlapping but discrete clusters—less like a blog’s comment section than like the blogosphere as a whole—we’re apt to take a more laissez-faire view: If you don’t like what someone is saying about you, don’t read that person’s feed. I’ll confess, it had certainly never occurred to me to view the whole of Twitter as a community. I get messages from and send messages to a few dozen friends, and remain pretty much oblivious as to what the overwhelming majority of users are doing or saying. Perhaps that says less about the site itself than about my use of it, but I find it puzzling that someone would think a site organized on Twitter’s hyper-customized model needs active moderation.
Now, possibly Waldman herself doesn’t really think the problem is “harassment” either, even if that is (as the response at the sites above confirms) a rhetorically potent term to use. Rather, what really seems to bother her is the prospect of having these messages turn up in Google searches, perhaps by prospective employers. Which is to say, this is a brand management problem.
That brings us to another question that’s tricky online, insofar as we’re interested not in “harassment” but in defamation or libel: Who is a public figure? The question matters because the standards for libel and invasion of privacy differ considerably depending on whether the victim is a public or private figure. A popular adaptation of Warhol has it that on the Internet, everyone will be famous to 15 people. Do you qualify if you have a couple thousand Twitter followers? If you’re popular on Flickr? Traditionally, the law makes “public figure” a contextual matter: You can be a public figure relative to a relevant community, even if you’re not well known in the wider world.
The legal questions, of course, don’t determine what Twitter ought to do. But an awful lot of people seem awfully quick to demand that the folks providing the platform for speech start determining the acceptability of speech.