Some further assorted thoughts falling out of the first few weeks of Google Plus:
- People seem initially pleased with the way privacy is baked into the Circles architecture. I wonder whether it doesn’t actually make inadvertent privacy breaches of the “DM #fail” type more likely. The way most people use Facebook in practice, there’s a small amount of content you may (or may not) make generally available, and just about everything else is available to all (and only) approved friends. So you make one decision, more or less, at the outset: How much of my basic profile information am I OK with Joe Random seeing? You make a binary yes/no decision about each friend request. And then, given how strict or loose you’ve been in approving “friends,” the only decision you really have to make about each status update, photo upload, or other content sharing is: “Is this something I’m prepared to have all my so-called Friends see?” If not, it doesn’t get uploaded at all. The Circles architecture encourages uploading content appropriate only for a subset of the persons to whom one is connected. So, of course, does e-mail. We see more DM fails than e-mail misfires because it’s easier to inadvertently use “@” instead of “D” (or click the wrong option in a client) on a platform where publicity is the default than it is to type the wrong address in an e-mail TO: field (though autocomplete, while convenient, probably increases the risk of such errors). Making a mistake about which Circles you’re sharing to is probably somewhere in the middle, but I expect we’ll start to see a bunch of these as the network grows.
- An interesting slideshow posted by technologist Vincent Wong suggests that it’s a mistake to think of G+ as essentially a Facebook competitor—one more social network. Instead, he argues, the real value to Google will be in pre-populating potential collaborative clusters for its many cloud apps. This has a certain logic to it. There are a bunch of ways in which the cloud model has advantages over desktop software: simultaneous mass update rollouts, data backup, device independence. But a big one is the capability for information sharing and collaboration. That capability becomes a lot more valuable when users are already connected and grouped in useful ways. If I’m e-mailing or calling a group of people to ask them to come to a meeting or a party, which each of them makes a note of, it doesn’t make a huge difference whether each of them is using an individual calendar app or a cloud app like Google Calendar. But when you integrate Google Calendar with G+, where work colleagues and local friends are pregrouped, so that invitations sent to a circle can automatically be transformed into entries in the calendars they’re using on their mobile devices? Well, that’s a lot easier for everyone. The utility of the social network becomes much greater when those Circles aren’t just the basis of quick, selective messaging, but also of collaboration in Google Docs or whatever cloud app Google rolls out next, and each app becomes much more useful when it comes with personally useful collaborative clusters preloaded (or easily adapted from existing ones).
- Speaking of privacy and apps, a more general point: We’re accustomed to talking about privacy, especially in the context of social network sites, as being primarily a matter of harm avoidance. Of course, there’s one surefire way to avoid privacy violations: Create a site on which it’s crystal clear that all content is totally public. Then nobody puts any information there unless they’re comfortable with absolutely anyone seeing it, and since nobody expects any privacy, nobody’s privacy is ever violated. Needless to say, this would defeat the purpose of such sites, which is to create spaces in which people do feel able to have valuable interactions (and share thoughts, photos, personal details) with small trusted groups that they wouldn’t necessarily want broadcast to the world.
Instead of thinking about privacy exclusively in terms of harms, it might be useful—especially in the context of social technologies—to think about what kinds of functions different types of privacy enable or disable. When I first got on Twitter, my friends and I mostly used it as a tool for social coordination—a convenient way of seeing who felt like catching a movie or quaffing a beer without spamming those who might not be interested with texts and e-mails. As the service exploded beyond the early adopters, and many of us found ourselves with hundreds or thousands of followers, this began to seem awkward. But the default publicity of Twitter also made it a great forum for broadcast, and for public conversations that anyone with a germane point to make might decide to chime in on if (perhaps by a chain of retweets) she got wind of them. Some people, unsurprisingly, decided to maintain a public account for general-interest tweeting and a private one for more intimate conversations.
Consider, on the other hand, anonymity (which is one type of privacy) on chat forums. In one way, it enables uninhibited discussion by making people feel free to air (or just “try on”) thoughts and views that they’d be wary of having associated with their real names. In another sense—as anyone who’s been a regular on an ill-moderated chat board or comment section can tell you—it can chill speech by removing the accountability that keeps people civil. Even in a closed chat room where everyone is pseudonymous, people may be chary of revealing private details that might be recognizable to a real-life acquaintance, because you can never be sure that one of those other participants in the conversation isn’t actually somebody you know.
As legal scholars have long complained, talking generically about “privacy” often obscures more than it illuminates. Privacy is multidimensional—to the point that many thinkers have suggested we might be better off doing away with it as an umbrella concept altogether, at least for the purpose of detailed policy discussions.
One aspect of privacy is anonymity (and its cousin, “practical obscurity”), which disconnects content—even wholly public content—from an identifiable person. Pseudonymity is a sort of compromise that allows some amount of trust building by enabling a persistent identity to accrue a reputation without linking it to a real-world person.
Another is access control, which can be binary (as in Facebook, where the world is, practically speaking, divided into “friends” and “everyone else”) or quite sophisticated (the Circles architecture of Google Plus).
A related but arguably distinct aspect is use control, meaning that some combination of technological constraints, norms, and contractual or legal rules limit how the persons with access to information can use, reshare, or combine that information without permission. Each of these aspects, of course, could be further analyzed into many subcategories.
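The contrast between binary and fine-grained access control can be sketched as a toy data model. This is purely illustrative: the class names and logic here are hypothetical, my own shorthand for the two architectures, not a description of how Facebook or Google+ actually implement sharing.

```python
# Illustrative sketch of two access-control models for shared content.
# All names (BinaryNetwork, CircleNetwork, can_see) are hypothetical.

from dataclasses import dataclass, field


@dataclass
class BinaryNetwork:
    """Facebook-style: one approved-friends list; each post is
    either public or visible to all (and only) friends."""
    friends: set = field(default_factory=set)

    def can_see(self, viewer, post_public):
        return post_public or viewer in self.friends


@dataclass
class CircleNetwork:
    """Circles-style: contacts are pregrouped, and each post is
    shared with some subset of those groups."""
    circles: dict = field(default_factory=dict)  # circle name -> set of contacts

    def can_see(self, viewer, shared_with):
        # Visible if the viewer belongs to any circle the post was shared with.
        return any(viewer in self.circles.get(c, set()) for c in shared_with)


# The same contact can see some posts and not others, depending on
# which circles each post was shared with.
net = CircleNetwork(circles={"colleagues": {"bob"}, "family": {"alice"}})
print(net.can_see("bob", ["colleagues"]))  # True
print(net.can_see("bob", ["family"]))      # False
```

Note what the sketch makes visible: in the binary model, the only per-post decision is public vs. friends-only, while in the circle model every post carries its own audience, which is exactly the extra decision point where misdirected-sharing mistakes can creep in.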
Each aspect involves tradeoffs. A network where publicity is the default is useful for broad information sharing and discussion, less useful for intimate conversation or social coordination. If you want conversation that is both frank and high quality, you may need to accept the overhead costs of moderation as the price of permitting anonymity. A platform that’s built to enable trusted small-group interaction or collaboration will work poorly with total anonymity. For some uses, pseudonymity will do, and for others, it will be more desirable to be able to verify a real-world identity for each participant. For still other purposes, it may not matter if participants are pseudonymous to each other, provided some trusted third party can verify that everyone meets some membership criterion. For a project like WikiLeaks, even pseudonymity might not provide enough protection, because knowing that multiple leaks came from the same person may narrow the pool of possible whistleblowers. On the other hand, it’s sometimes valuable for someone, such as a credible reporter, to know a whistleblower’s identity so that the reliability of the information can be verified, even if the source is not made public. Fine-grained controls are good on a social network, but they’d be counterproductive on eBay if they allowed sellers to decide which comments and ratings from previous buyers would be visible.
The point, in short, is that it’s not always useful to think about privacy in generic less/more or good/bad terms. The right question to ask is what kinds of social functions are enabled by each dimension of privacy, both in isolation and in different combinations.