So, first, I’m super flattered to have been included on Wired’s new “101 Signals” list of recommended writers on security and politics. As you may notice, however, I’m not writing all that frequently on those topics here on my personal site anymore. If you’re interested in that, therefore, you probably want to follow me on Twitter (where I usually link stuff I’m writing elsewhere) or check out my posts on the Cato Institute blog.
Second, as quite a few people have noticed, there’s an unfortunate paucity of women on the 101 Signals list, which, even leaving aside considerations of gender equity, left out many of the best writers and thinkers in this space. So, without any slight intended to the excellent resources flagged by Wired, let me suggest that if you’re coming here from that list, you’ll probably also find enormous value in the Women-in-Technology Twitter list maintained by formidable privacy & drones expert Amie Stepanovich.
Addendum: I note my own list is a bit melanin deficient; I tried to think of some women of color who regularly write for a popular audience on privacy/tech/surveillance/security issues and sadly came up short. So, again, suggestions welcome in the comments.
It was made of the magnetic tapes carrying millions of international telegrams, couriered daily to the NSA under Project SHAMROCK. And it was made of the computer punchcards, holding the names of American citizens on Project MINARET watchlists, so their communications could be filtered from those telegrams.
Its operating system was written in secret memos and directives that distorted the law and ignored Supreme Court decisions.
The machine was built to fight communism—but it was reprogrammed to fight democracy.
The machine was turned on labor unions and anti-war activists, on journalists and public officials, civil rights leaders and Supreme Court justices.
It was turned on Malcolm X and Muhammad Ali, on Stokely Carmichael and H. Rap Brown, on Jane Fonda and I.F. Stone. It was turned above all on Martin Luther King, as the FBI waged a six-year campaign to discredit and destroy him, so he could be replaced by what they called “the right kind of negro leader”—meaning one controlled by the FBI and the American intelligence community.
Democracy ultimately proved stronger than that machine—and in the late 1970s, we took steps to dismantle it. We imposed oversight, safeguards, and strict limitations designed to enforce the guarantees of our Constitution and restore the Fourth Amendment.
Decades later, time and fear had dulled those memories, and we began building a new and far more powerful machine—a machine as far beyond Hoover’s as an iPhone is beyond an abacus.
The new machine runs programs with names like BLARNEY and STELLAR WIND, PINWALE and PRISM—and the source code is still secret.
The old machine was large but limited. It could spy on the government’s “enemies”—but it couldn’t spy on everyone.
The new machine can. It is wired into the cell phones in our pockets, and the switches that route every Web site we read, and every e-mail we write.
And when this machine is reprogrammed in secret—when it is turned against us, whether out of panic or malice, by the next officials who can’t distinguish a national security threat from a threat to the status quo—there may not be anywhere left for us to hide.
There’s a time when the operation of the machine becomes so odious, makes you so sick at heart that you can’t take part! You can’t even passively take part! And you’ve got to put your bodies upon the gears and upon the wheels, upon the levers, upon all the apparatus — and you’ve got to make it stop!
Well, it is that time again. It is time for us to put our bodies on the gears and make this machine stop. It is time to restore the Fourth Amendment.
A brief, slightly confusing exchange between Rep. Jerrold Nadler and FBI director Robert Mueller at a hearing this week is stirring up a lot of fuss, as CNET reports (and The Hill repeats) that it reveals an NSA admission that analysts sifting through metadata can, without further court authorization, listen to calls or read e-mails:
First, let me suggest that nobody go too nuts with this just yet: This is a two minute exchange referencing an earlier classified briefing, and the parties to it haven’t responded to requests for comment yet. That said, let’s try to figure out what’s going on.
One possibility is just that Rep. Nadler is talking about analysts having discretion to get the subscriber information on a suspicious number and blurring that with content. But those are two pretty different things, and it seems unlikely he’d make that error. So let’s assume for a moment that’s not it.
What seems more likely is that Nadler is saying analysts sifting through metadata have the discretion to determine (on the basis of what they’re seeing in the metadata) that a particular phone number or e-mail account satisfies the conditions of one of the broad authorizations for electronic surveillance under §702 of the FISA Amendments Act. Those authorizations allow the targeting of whole groups or “categories of intelligence targets,” as the administration puts it. Once the FISA Court approves targeting procedures, they have no further role in deciding which specific accounts can be spied on. This is, as those of us who wrote about the FAA during its recent reauthorization observed, kind of a problem.
Legally speaking, the analysts don’t have carte blanche. In other words, this isn’t “warrantless wiretapping” so much as “general warrant wiretapping.” They can’t just tap any old call or read any old e-mail that strikes them as “suspicious.” They’ve got to be flagging content for interception because they believe it’s covered by a particular §702 authorization, and observe whatever “targeting procedures” the FISA Court has established for the relevant authorization. They can’t “intentionally” intercept any calls or Internet communications that are “known at the time of acquisition” to be totally domestic. But then, what an analyst “knows at the time of acquisition” may be pretty hard to determine, unless they clearly should have been able to determine from the metadata that all ends were located in the United States. Often, especially for Internet communications, that won’t necessarily be so.
Also, the “target” of the acquisition has to be “believed to be” outside the United States. But there’s some ambiguity about exactly what that “targeting” limitation means. That is, it’s not clear whether the phone or e-mail user you’re spying on must be outside the United States, or whether it’s enough that you are seeking information about a group primarily located overseas. I’ll assume the former, more restrictive case for now: The analyst must believe that one end of the communication is outside the United States, and flag that account or phone line for collection. Note that even if the real target is the domestic phone number, an analyst working from the metadatabase wouldn’t have a name, just a number. That means there’s no “particular, known US person,” which ensures that the §702 ban on “reverse targeting” is, pretty much by definition, not violated.
None of that would be too surprising in principle: That’s the whole point of §702! It means analysts get discretion to decide what particular accounts fall under a very broad order. A key question, of course, is just what the checks in the process are. Can an analyst technically (if not legally) plug in any selector to start collecting on and just start getting material? Does anyone check their work before call and e-mail content starts flowing in? How closely are their error rates checked after it does? Again, legally, they don’t have a blank check, but it’s the details of the system architecture that determine whether you’d be able to tell the difference in practice.
Anyway, creepy as this all may sound, it’s not exactly a new revelation if Nadler is indeed talking about authority to collect content under §702, though the potential for error seems greater if the basis for acquisition is literally nothing more than a “suspicious pattern” culled from metadata. In theory, the system could be flagging calls and e-mails for interception almost automatically (like Gmail deciding what to flag as “important”), with the analyst occasionally checking off an “OK” box.
Still, this is more or less what the FISA Amendments Act was designed to do. Shame people didn’t freak out to this extent at the end of 2012, when Congress voted for five more years of it.
A little ditty I wrote for frustrated journalists, activists, and academics to sing while tangling with our beloved, Byzantine Freedom of Information Act process—to the tune of “That’s Amore,” of course:
Well, not everything President Obama and the 112th Congress managed to achieve is so terrible. With scarcely any notice, much less controversy, they did at least preserve one of the country’s most important post-9/11 antiterror tools.
One wonders just what their basis could be for the claim that warrantless wiretapping has been “one of the country’s most important post-9/11 antiterror tools.” After all, a comprehensive audit by the intelligence community’s own Inspectors General found exactly the opposite: That the program launched by President Bush was of no greater value than other intelligence tools; that it generated an enormous number of false leads that wasted time and resources; and that, indeed, it was difficult for intelligence officials to point to a single clear-cut case where the program made a crucial contribution to a counterterror success. Much about that program remains secret, of course, but the Journal‘s assertion here is contradicted by the public evidence.
That would be wiretapping, which you may recall liberals portrayed during the George W. Bush era as an illegal and unconstitutional license for co-President Dick Cheney and his spymasters to bug the bedrooms of all U.S. citizens. But now Washington has renewed the 2008 amendments to the Foreign Intelligence Surveillance Act that were due to expire at the end of 2012, with no substantive changes and none of the pseudo-apoplexy that prevailed during the Bush Presidency.
Perhaps we shouldn’t be shocked that a publication owned by Rupert Murdoch would be inclined to make light of concerns about illegal wiretapping, but surely it’s not that mysterious why someone might be more comfortable with a duly authorized surveillance statute that preserves a role for the courts, however anemic and symbolic, than with a president’s unilateral decision to simply ignore federal law and bypass the courts entirely. Still, they do have a point: Substantively the FISA Amendments Act is at least arguably more problematic than the Bush program, because the surveillance programs it authorizes are potentially much more sweeping than Bush’s was, at least on the basis of public reporting. And it really is telling that many people who expressed outrage over the Bush program seem totally uninterested in scrutinizing the track record of its successor now that we have a Democrat in the White House.
In September the House passed the “clean” five-year extension that the White House desired, 301 to 118. The Senate reserved all of a single day of debate on the floor to coincide with the post-Christmas fiscal cliff chaos, and a broad bipartisan majority defeated multiple amendments from the civil liberties absolutists on the left and right such as Kentucky’s Rand Paul.
This is a turnabout from 2007 and 2008, when letting U.S. spooks read al Qaeda emails or listen in on phone calls that passed through domestic switching networks supposedly spelled doom for the American Republic.
This is just not an accurate description of what the law permits. The programmatic surveillance authorized by the FAA is not limited to “al Qaeda e-mails,” or to the communications of terrorists; the “target” of surveillance can be any foreign group or individual, and the “target” need not actually be a party to the intercepted communications. Nor is it limited to communications that merely “pass through” domestic switching networks: Calls or e-mails sent and received by American citizens are also fair game. If the original Bush program is any guide, enormous numbers of entirely innocent communications are almost certainly being swept up in the process.
Hypocrisy aside, the irony is that the imperfect 2008 deal could have stood a little scrutiny. The concessions Mr. Bush was forced to make inserted the special FISA court into the wartime chain of command, requiring the national security agencies in most cases to get judicial permission to eavesdrop on even foreign enemies. We still don’t know if this new regime has compromised U.S. intelligence gathering.
This is also false. The law has never required court approval to eavesdrop on communications when both parties were known to be foreigners, and it still doesn’t. The vast majority of the NSA’s signals intelligence activities remain completely unregulated by FISA. The FISA Amendments Act covers wire communications between Americans and foreigners—which previously required far more rigorous individualized warrants if the wiretap was conducted in the United States—as well as cases where the location of one party to a communication can’t be determined in advance (as is often the case with e-mail). The latter presented a genuine problem that could and should have been solved far, far more narrowly. The FISA court’s minimal involvement in the FAA process—which is limited to rubber stamping broad “targeting procedures” developed by NSA—falls far short of the traditional warrant approval process, and the idea that it could have “compromised U.S. intelligence gathering” seems frankly absurd.
If the “Imperial Presidency” is only imperial when the President is a Republican, at least that doesn’t represent a real political conviction, merely naked partisanship.
On this point, at least, the Journal is entirely correct: It is sad to see so many Democrats shed their concerns about executive surveillance powers—historically abused by presidents of both parties—now that their bête noire has left office. And with the FAA extended for five more years, Obama too is likely to be long gone before we have another occasion to debate the wisdom of these powers.
As a general rule, going on television reminds me why I’ve chosen a career in print, but between the reauthorization of the FISA Amendments Act and the David Petraeus scandal, I’ve nevertheless ended up making a handful of small-screen appearances in the past couple months. For those of you who are into that sort of thing, here are the clips.
I’ve just had a chance to play around with C-SPAN’s clip-and-share functionality from its video archives, which seems like a pretty great tool for wonks like me who actually pay attention to stuff like last week’s marathon Senate debate over the reauthorization of the FISA Amendments Act, which President Obama signed on Sunday evening. With the Fiscal Cliff firmly lodged in the headlines, most news media paid scant attention to the reauthorization—though the Rachel Maddow Show did have me on to talk about it. Now, at least, C-SPAN has made it relatively easy to assemble the important points from these legislative debates without spending hours fussing with video editing software. This strikes me as having great potential to enable crowdsourced efforts to draw attention to undercovered stories in an engaging format. So here are what I saw as some important moments from the FISA debate—mostly legislators proposing some extremely mild, common-sense reforms and safeguards that all got shot down.
Sen. Ron Wyden explains that the FISA Amendments Act isn’t just about foreigners, as supporters ritually claim, but can easily result in large-scale interception of Americans’ communications as well:
Over the weekend, I had a piece at Ars Technica urging Google to roll out end-to-end encryption for Gmail, allowing hundreds of millions of ordinary users to enjoy the level of privacy now largely reserved for paranoid ubergeeks. I tried to address some of the obvious economic reasons Google might be hesitant to do this, but as Princeton’s Ed Felten points out, there are important technical questions as well:
First, how would the crypto keys and crypto code be managed? […] To start with, we would need a place to store your private key. We could store it on your desktop, but this would conflict with the usual cloud model that gives you access from multiple devices. We could have Google store your private key for you, then download it to whatever device you’re using at the moment, but then what’s the point of encrypting your messages against Google? The best solution is to have Google store your private key, but encrypt your private key using a password that only you know. Then Google would download your encrypted private key to your device, you would enter your password, and the private key would be decrypted on the device.
This is pretty much how I’d imagined it working for the average user, but there’s no real reason we need a one-size-fits-all solution here; lots of cloud services that offer encryption let the user choose whether or not to let the provider keep a backup copy of the user’s keys. The more paranoid could sacrifice some mobility and convenience—and risk losing access to some of their messages if their local copies of the key are destroyed—by opting not to let Google keep even an encrypted copy of their key. Or, as a middle ground, a user could always store an encrypted backup copy of her key with a different cloud provider, like Dropbox, which need not even be known to Google. That provides all of the advantages of storing the key with Google at a relatively minor cost in added hassle, but substantially raises costs for any attacker, who now must not only crack the passphrase protecting the key, but figure out where in the cloud that key is located. Assuming it’s accessed relatively infrequently (most of us read our e-mail on the same handful of devices most of the time) even a governmental attacker with subpoena power and access to IP logs is likely to be stymied, especially if the user is also employing traffic-masking tools like Tor.
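The key-wrapping flow Felten describes—derive a key from the user’s password, encrypt the private key under it, and let Google store only the encrypted blob—can be sketched in a few lines. This is an illustrative toy only: the function names are mine, and a SHA-256 counter keystream with an HMAC check stands in for the real authenticated encryption (e.g. AES-GCM) an actual implementation would use.

```python
import hashlib
import hmac
import os

def derive_wrapping_key(password: str, salt: bytes) -> bytes:
    # Slow KDF so the stored blob resists offline password guessing.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def _xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy SHA-256 counter-mode keystream -- illustration only, not real AES.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(a ^ b for a, b in zip(data[i:i + 32], block))
    return bytes(out)

def wrap_private_key(private_key: bytes, password: str) -> dict:
    # What Google would store: salt + ciphertext + integrity tag, no plaintext key.
    salt = os.urandom(16)
    k = derive_wrapping_key(password, salt)
    ciphertext = _xor_stream(k, private_key)
    tag = hmac.new(k, ciphertext, "sha256").digest()  # detects wrong password/tampering
    return {"salt": salt, "ciphertext": ciphertext, "tag": tag}

def unwrap_private_key(blob: dict, password: str) -> bytes:
    # Runs on the user's device after the encrypted blob is downloaded.
    k = derive_wrapping_key(password, blob["salt"])
    expected = hmac.new(k, blob["ciphertext"], "sha256").digest()
    if not hmac.compare_digest(expected, blob["tag"]):
        raise ValueError("wrong password or corrupted blob")
    return _xor_stream(k, blob["ciphertext"])
```

The point of the design is that the password never leaves the device, so the provider holds a blob it cannot read—which is exactly why the code-delivery problem Felten raises next still matters.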
What is most problematic is that the software code to do all of this–to manage your keys, decrypt messages, and so on–would itself be written and delivered by Google, which means that Google would, after all, have the ability to see your messages, simply by sending you code that silently uploaded your keys and/or data. So if your goal is to make it impossible for Google to see your messages, for the protection of you and/or Google, then you won’t have achieved that goal. […] The only solution we know is to acquire the secure functionality by a traditional download, incorporating carefully vetted code that cannot be modified or updated without user control. The code might be provided as a standalone app, or as a browser extension. We could do that for GMail (and at least one company has done it), but that would give up some of the portability that makes the cloud email attractive.
I think the speed issue is probably not that big a deal on newish devices, and will only become less of an issue, but for some of the other reasons Ed cites, the preferable way to do this is with dedicated client software. This does create some sacrifice in terms of portability, but frankly if you’re really concerned about secure communications you probably don’t want to be decrypting your sensitive messages on untrusted devices anyway. Also, as I note in the piece, this is where Google has an advantage as the distributor of a widely-used open source operating system and browser. The relevant functionality could come bundled with Chrome and/or Android (and serve as a selling point for both) as well as being offered as a separate plugin for other browsers (or bundled with Google’s widely-installed voice/video chat plugin). Users could still, of course, access their unencrypted webmail from any old browser, but one imagines that if Google leads the way, other developers will have a strong incentive to make their own software compatible.
The second major issue is how to keep messages secret while still providing GMail features that rely on Google seeing your messages. These features include spam filtering (which you couldn’t live without) and the content-based ads that Google shows next to your messages (which Google probably wouldn’t want to live without). Can these be provided without leaking the full content of messages to Google? I suspect the answer is a qualified yes–that pretty good versions of these features could be provided in a more privacy-friendly way–but that’s a topic for another day.
Add to these issues that encrypted messages won’t be searchable (unless stored locally as plaintext), which is a bit of an inconvenience, but probably not a dealbreaker. You can probably still do a good deal of spam filtering just using metadata, and it helps that most users will generally be trading encrypted messages with friends and contacts. Users might even elect to only get such messages from “buddies,” whitelisted addresses, or (more permissively) other Gmail users, which would make encrypted e-mail within the service a little bit more akin to Facebook or Gchat messaging. At least initially, it probably makes sense to have this be the default, and users who really need to get encrypted messages from random, unapproved senders they’ve never interacted with before can tweak their settings to let those messages through.
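The sender-acceptance policy sketched above is simple to express in code. A hypothetical sketch (the policy names and the `@gmail.example` domain are mine, not anything Google has proposed), ordered roughly from most to least restrictive:

```python
# Hypothetical per-user policy for accepting encrypted inbound mail,
# defaulting to contacts-only as suggested above.

def accepts_encrypted(sender: str, buddies: set, whitelist: set,
                      policy: str = "buddies") -> bool:
    if policy == "open":            # any sender, even strangers
        return True
    if policy == "same_service":    # any user of the same mail service
        return sender.endswith("@gmail.example")
    if policy == "whitelist":       # explicitly approved addresses only
        return sender in whitelist
    return sender in buddies        # default: existing contacts only
```

A restrictive default like this also helps the spam problem: strangers can’t push opaque ciphertext into your inbox until you’ve opted to accept it.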
As for content ads, well, that’s the million dollar question—and as Vint Cerf has candidly acknowledged, a primary reason Google hasn’t already done this. My answer here is the same as it was in the article: First, most people are still going to exchange a lot of unencrypted messages, and Google can still serve keyword ads based on those. Second, Google recently revised its policies to allow sharing of user information between its disparate services, provoking some grumbles from privacy folks. That means they’ve got a hell of a lot of other data to draw on in determining what ads are likely to be relevant to a particular e-mail user, from search history to favorite YouTubes, which I’d actually expect to be substantially more useful for tailoring ads than e-mail keywords. Also, at least initially, using the encryption feature will probably mean logging directly into your Google account via their Web interface (where Google gets to show you ads) rather than simply reading your messages in an ordinary mail client (where they don’t). So the loss of one kind of targeting data from some messages has to be balanced against the probable increase in ad exposures. It’s up to Google’s accountants to figure out how that all nets out, but these considerations seem like a good prima facie reason to at least run the numbers if they haven’t done it recently.
I’ll confess, while not particularly invested in the outcome of the presidential race, I shared the amusement of my Democratic friends watching staunch conservative pundits doing their best impression of the former Iraqi information minister in the weeks before the election. We saw a string of prominent conservatives confidently projecting landslide victories for Mitt Romney, and waving aside all the (as we now know, highly accurate) statistical models projecting a solid Obama win on the grounds that “all the vibrations are right” for Republicans. That said, given the role that partisan pundits play, I can’t really say they were wrong to do so. Indeed, the next time the best available models project a clear Republican win, their Democratic counterparts would probably be wise to do at least a little bit of the same thing, and gin up reasons (however spurious) to think the polls got it wrong this time.
Ideally, professional pollsters have no particular agenda beyond accurately forecasting the outcome of a race. But pundits are trying to influence outcomes, and forecasts don’t just predict outcomes, but at least partially help to determine them. There’s plenty of social psychology literature showing bandwagon effects in elections: Voters on the fence often pick the candidate they expect to triumph anyway, because it’s nice to be on the winning side. Campaign workers become demoralized if they think they’re laboring those long hours for a hopeless cause. A 20 percent chance of victory is still a chance, after all, and you don’t want people throwing in the towel prematurely. Here as in many areas of life, when the odds are heavily against you, being a perfectly accurate assessor of your chances can actually make the odds worse. If you are rational, you will want to have some irrational beliefs. So I don’t expect supporters of a candidate who’s unlikely to win on election eve to acknowledge this, any more than I expect the coach of an underdog team to deliver an honest read of the stats as a pre-game pep talk. We don’t make fun of coaches for this, because we all understand they’re engaged in a bit of socially appropriate bullshitting.
Which is all well and good when it’s just the pep talk—especially on election eve, when it’s too late to alter a losing strategy. The danger, of course, comes when the coaches start believing their own pep-talks—or as Chris Hayes (channeling Biggie) puts it, when you’re actually so high on your own supply that you start rejecting negative information generally, even when adaptation is still an option. One contributor to this is surely the epistemic bubble created by an increasingly complex, interconnected, and self-sufficient conservative media ecosystem. But there are other factors that probably increase the tendency too.
Pundits are most often, at least initially, writers—which is to say, storytellers. So when polls were less frequent, less accurate, and less widely disseminated, it wasn’t much trouble to weave a story about how polling that looks bad for your team has failed to capture some ineffable factor (“the soul… is not so easily number-crunched“) or was conducted too early to account for some supposedly game-changing news story like Benghazi. A compelling storyteller might even feel free to ignore the polls altogether.
Now, though, we’ve got a larger array of polls, as well as increasingly sophisticated tools for aggregating and weighting them in statistical models that can also factor in external data about things like economic performance. Our predictive tools are better, and our media environment guarantees that the outcomes they project will get wide circulation well in advance of the election. A decade ago, it would have seemed bizarre for a statistician to be one of the most controversial and high-profile figures of a presidential campaign, partly because nobody would have found it plausible that a race polling roughly even nationally could be called for one candidate with a high degree of confidence weeks in advance. Even if it could, a political media dominated by a few networks and major papers would have found it convenient to give those projections less attention and play up the sexier “dead heat” narrative. Players’ morale aside, nobody much wants to watch a game whose winner is a foregone conclusion either. Ignoring the polls and slapdash storytelling aren’t really viable options anymore.
The thing about pep-talks, especially when you’re not hearing them live from a charismatic speaker in a small room full of comrades, is that they don’t work nearly as well if they’re obviously just pep talks. It is rational to have some irrational beliefs, but it’s more or less definitive of belief that you can’t consciously recognize it as irrational while you hold it. So in an age of modeling, a little offhand bullshit no longer cuts it: Now you need elaborate counter-models, à la UnskewedPolls, or at least some more facially compelling argument for why all the models are systematically biased and untrustworthy. What was once the self-contained rational irrationality of a few election eve beliefs metastasizes and builds its own supporting superstructure, in much the same way as one lie can give rise to a whole elaborate network of lies designed to cover it. Or, for the logicians, the way the principle of explosion dictates that a set of propositions containing a single contradiction can spawn an infinity of falsehoods. At some point, the cost of this cognitive infection becomes too high, and a bit of rational irrationality becomes plain old irrational. The belief system you have to sustain to maintain your election-day optimism becomes an obstacle to strategic adaptation that might actually justify that optimism.
A resounding electoral defeat focuses the mind wonderfully, though, and while no doubt we’ll see the most thoroughly embubbled floating dark speculations of some massive conspiracy to rig the election, conservatives who don’t like losing are probably going to get a lot less dismissive of statistical modeling pretty quickly. Mocking the scientists is all fun and games when it’s abstract evolutionary theory, but most people don’t want to refuse lifesaving biotechnology. They may also become somewhat more skeptical of large swaths of the conservative media ecosystem whose primary function is not, in fact, to achieve electoral victories, but to attract eyeballs and extract revenue from conservative audiences. Those goals aren’t necessarily complementary: Ideological publications often do better when the other side is on the ascendant.
In the longer term—notwithstanding the massive and massively daunting security and anonymity challenges it poses—the real solution to this tension (and the ludicrous lines at polling places around the country) is an Internet voting system that lets every citizen with a modern phone—or just access to a public library—cast (and change) their ballot anytime between the conventions and “election day” as easily as they’d update a Facebook status or Instagram a kitten. The results probably become obvious a lot quicker—with a bit of volatility as groups of voters decide they can individually afford a third-party protest vote, then realize that collectively they can’t—but the cost of action falls so low that morale concerns are far less significant. And on election night, my journalist friends can all begin drinking much earlier.