Stop, Hey, What’s That Sound?

October 22nd, 2012 · 39 Comments

The first time I heard the Divine Fits’ debut album, I remember thinking that the members of Spoon must be peeved that someone had so perfectly emulated the band’s sound as to produce what could pass for the best Spoon album since 2002’s Kill the Moonlight. After a quick Wikipedia search, of course, I realized that it wasn’t a case of “emulation” at all: The group’s line-up included Spoon front man and songwriter Britt Daniel, whose unmistakable sonic fingerprints were all over the album.

Probably every reader of this blog has had the similar experience of hearing a new song by an artist they know well, and instantly recognizing its authorship even before the singer’s voice comes in. What we’re recognizing, of course, are the rhythmic and melodic tricks certain songwriters return to, and the idiosyncrasies of each performer’s technique. Yet those of us who aren’t musicians, or at least trained in music theory, usually can’t tell you exactly what we’re recognizing. If pressed, we might be able to isolate some familiar elements and say something vague about what makes them familiar: Johnny Marr’s “jangly” guitars, that yearning quality of James Mercer’s ascending and descending vocal melodies, the “too many notes” intricacy of a Mozart composition, the driving rhythm that somehow reminds you of three or four other Spoon songs, even though it’s not quite the same. It’s a classic case of what Hayek and Polanyi called “tacit knowledge”: We’re much better at employing our knowledge than we are at articulating exactly what it is we know, or how we’re doing it.

But of course, what we’re recognizing usually can be articulated by an appropriately trained person—and I bet I’m not the only non-musician who’d find it pretty fascinating to have that explained. So here’s a free idea for any music sites or magazines on the lookout for a fresh feature—or an aspiring music writer with the know-how to pull it off: A regular “What’s That Sound” column that picks an artist with a distinctive sound and tries to explain—with some technical detail, but presented with reference to specific examples so the untrained reader can get some sense of what it actually means—exactly what that trademark “sound” consists of. I’m so confident this would be popular that I’m almost surprised that (as far as I know) it hasn’t already been done.

Blasphemy and Public Reason

October 19th, 2012 · 9 Comments

I’ve noticed something interesting about Western press reports on the protests over the “Innocence of Muslims” YouTube trailer. Perhaps surprisingly, the protesters quoted in these articles typically do not simply say that insults to Islam or its prophet are an outrage against the one true faith and must be forbidden because that is God’s will. They instead make a familiar—and in one sense secular—type of argument grounded in the (supposed) rights of individuals, the psychological harms purportedly caused by exposure to mockery of one’s deepest beliefs, and the ideal of respect for the equal dignity of others. In principle, this kind of argument does not depend upon the truth of Islam, and indeed, is not usually framed explicitly as being limited to that faith.

While any argument for squelching speech in deference to religious taboos is obviously “illiberal” in its content, there’s a narrow sense in which this kind of argument is formally liberal, in that it strives to meet the requirements of liberal public reason. It is not, in other words, an argument that depends on one’s sharing any particular comprehensive religious or metaphysical doctrine, but aims to present reasons that could be accepted by persons of any faith (or none).

There are, to be sure, many reasons these reports may not really reflect the attitude of most protesters. The people eager to speak with Western reporters may not be representative of the larger pool of protesters. The quotes reporters choose to include in an article may not be representative of what they hear in interviews—precisely because they know arguments predicated on the truth of Islam will be wholly unconvincing to non-Muslim readers, and want to present the “best” argument for that side of the debate. And, for the same reason, interviewees may be crafting responses that reflect what they think will resonate with Western readers rather than their true beliefs. Certainly, there’s not a lot of evidence that governments in majority-Muslim countries are terribly concerned about limiting offensive and derogatory speech about other faiths. All that said, it seems like it’s got to be a healthy development that even many proponents of censoring blasphemous speech so naturally adopt the language of liberal public reason for the purposes of public justification, whether sincerely or not. It’s so deeply embedded that we don’t even notice it in cases where it ought to be a little remarkable.

The problem for those proponents, of course, is that this argument fares pretty badly if we actually take the constraints of public reason seriously. Once you make your anti-blasphemy principle truly general, abandoning any reliance on the truth of the faith insulted, you’re pretty much forced to grant a veto on speech to anyone (any group?) claiming offense. That seems certain to make such claims more frequent, and to risk burdens on the speech of the faithful that far outweigh the benefit of having the offensive speech of others silenced—especially given the alternative, if the issue is offense to people rather than god, of simply not looking at online videos that offend you. The million-dollar question is whether those who embrace the terms of public reason when attacking blasphemy will internalize its norms enough to actually accept a loss on those terms.

Much Ada About Nothing

October 16th, 2012 · 16 Comments

I love the idea behind Ada Lovelace Day: Celebrating the neglected contributions of women in science and technology in order to encourage young women to pursue careers in stereotypically male fields where, all too often, a “boys club” environment continues to reign. But I really wish this effort could pick a better mascot than Ada Lovelace, a figure of no real importance to the history of science or computing, whose fame rests largely on a single paper that regurgitated and popularized the ideas of a man.

Lovelace is frequently hailed as the “first computer programmer,” which is true in approximately the same sense that William Shatner is the “first starship captain.” The “program” Lovelace published was an algorithm actually written by Charles Babbage, which could have computed a sequence of Bernoulli numbers on Babbage’s never-constructed Analytical Engine. The original ideas in the paper are Babbage’s, and the paper—a translation from French of an Italian mathematician’s lecture on the Engine, followed by a much lengthier series of explanatory notes—was written with his close collaboration. Here’s how Babbage described the process:

I then suggested that she add some notes to Menabrea’s memoir, an idea which was immediately adopted. We discussed together the various illustrations that might be introduced: I suggested several but the selection was entirely her own. So also was the algebraic working out of the different problems, except, indeed, that relating to the numbers of Bernoulli, which I had offered to do to save Lady Lovelace the trouble. This she sent back to me for an amendment, having detected a grave mistake which I had made in the process.

Babbage was being charitable here: As their correspondence at the time shows, it was Lovelace who asked Babbage to send her the “necessary data & formulae” to construct an example involving Bernoulli numbers, with no hint that she merely wished to be saved the bother of a task she could have done for herself. Another letter suggests that Lovelace encountered substantial difficulty in translating Babbage’s “formulae” into the diagram format Babbage had used for his earlier programs—so possibly Babbage’s “save her the trouble” remark refers to some further assistance he rendered in constructing the table; it’s difficult to tell from the exchanges reproduced in the biographies I’ve read.

Obviously Lovelace was no slouch if she spotted an error in Babbage’s algorithm, and by all accounts grasped the significance of his visionary project when many contemporaries regarded it as crankish. (Then again, she’d subsequently develop similar enthusiasms for mesmerism and phrenology.) But as one biographer puts it, the algorithms contained in Lovelace’s celebrated paper are essentially “student exercises rather than original work.” That’s not to say they’re unsophisticated, but they represent competent execution, not innovation: Her illustrative “programs,” excepting the freshly developed Bernoulli example, had all been worked out years earlier by Babbage and his assistants, from whom Lovelace had learned—Lovelace was just the first to put one of these “illustrations” into print. In the eight years between her collaboration with Babbage and her premature death, Lovelace produced no further significant work.
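For readers curious about the mathematical object at the center of all this: the Bernoulli numbers fall out of a simple recurrence, and computing them takes only a few lines in a modern language. The sketch below implements the standard textbook recurrence; it is a rough modern analogue of what the celebrated table computed, not a reconstruction of the actual program in the Notes.

```python
from fractions import Fraction
from math import comb  # requires Python 3.8+

def bernoulli(n):
    """Return [B_0, ..., B_n], using the recurrence
    sum_{j=0}^{m} C(m+1, j) * B_j = 0 for m >= 1, with B_0 = 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))  # solve the recurrence for B_m
    return B

print(bernoulli(8))  # B_1 = -1/2, B_2 = 1/6, and the odd-indexed values after B_1 vanish
```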

Dorothy Stein was the first of Lovelace’s biographers with sufficient training to seriously assess Ada’s frequent proclamations of her own extraordinary mathematical genius. She concludes that Lovelace was not quite the prodigy she imagined herself to be, often struggling to master relatively elementary concepts and principles. A November 1842 letter to her tutor, the renowned mathematician Augustus De Morgan, finds her stuck on a problem of the sort you probably recall working through in high school:

Show that f(x+y) + f(x-y) = 2f(x)f(y) is satisfied by f(x) = (a^x + a^(-x))/2

Lovelace confessed she was “ashamed to say how much time I have spent upon it, in vain. These Functional Equations are complete Will-o-the-Wisps to me.” This is a letter written at age 26, a year before her paper on the Analytical Engine. She continued her education, of course, but it is very hard to believe these are the words of someone a year away from doing major original work in mathematics. Stein concludes that the “evidence of the tenuousness with which she grasped the subject of mathematics would be difficult to credit about one who succeeded in gaining a contemporary and posthumous reputation as a mathematical talent, if there were not so much of it.”
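For what it’s worth, the problem yields to a couple of lines of algebra once the product on the right is expanded (multiplying exponentials just adds their exponents):

2f(x)f(y) = (a^x + a^(-x))(a^y + a^(-y))/2 = (a^(x+y) + a^(x-y) + a^(y-x) + a^(-x-y))/2 = f(x+y) + f(x-y)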

Modern historians without Stein’s background in computer science seem to have swallowed a little too credulously Lovelace’s inflated self-assessment, doubtless bolstered by the praise of contemporaries eager to indulge a countess and flatter her mother and husband. (Babbage, for his part, never stopped angling for more government funding for his Analytical Engine, and had reasons beyond her mathematical abilities to welcome the interest and advocacy of Lord Byron’s famous daughter.) Again, even with extensive guidance, her “Notes” are obviously the work of a highly intelligent person—but breathless descriptions of Lovelace as one of the great mathematical minds of her era are utterly detached from reality.

If there was something original to Lovelace in that paper, it may be the prescient suggestion that future versions of the engine, which Babbage seemed to imagine purely as a mathematical tool, might be programmed to generate music or graphic art. But if we’re counting feats of imagination, we can find something similar a century earlier in Gulliver’s Travels. This does arguably represent an important conceptual leap, from calculation to true “computation,” in the sense of abstract symbol manipulation, but it is still ultimately a speculative aside—science fiction rather than science.

While Babbage and Lovelace may have glimpsed the future of information technology, their influence on its actual emergence was pretty much nil. The principles of computing were independently developed in the early 20th century, and only later was the work of Babbage and Lovelace rediscovered and retroactively integrated into the history of computing. If you want to know what the Information Age would look like if Ada Lovelace had never written a word, look out the window.

I’d guess the myth around Lovelace persists because a woman mathematical genius overcoming the strictures and prejudices of 19th century England to herald the age of computing makes for such a compelling story. It would be so awesome if it were true that nobody really wants to pop the bubble. The flip side, though, is that Ada has become an icon while real women pioneers of computing like Grace Hopper remain far less well known. It seems more just and more honest to honor those genuine achievements than to insist on holding up a popularizer with an outsized ego as some kind of major figure. Maybe next year we can celebrate Grace Hopper Day instead?

Update: The tone of this strikes me as a little harsh on second reading, so I should emphasize that Lovelace was clearly, in many ways, a remarkable woman of admirably broad intellectual curiosity. She had the insight to apprehend both the significance and the workings of Babbage’s Engines at a level few of his other contemporaries did, and the skill to explain them to the public more masterfully than Babbage himself ever managed to. It just seems silly to pretend she was something more than a gifted explicator on the grounds that it makes for a more inspiring story.

Why Do Intellectuals Favor Government Solutions?

October 9th, 2012 · 25 Comments

Back in the 1980s, the late philosopher Robert Nozick wrote an essay asking: “Why Do Intellectuals Oppose Capitalism?” Happily, the question as Nozick framed it is somewhat less relevant today, as Western intellectuals have increasingly accepted the superiority of some form of market economy to full-blown socialist planning. But a variant form remains: Why do intellectuals seem so disproportionately attracted to “progressive” political views and government-centric means of remedying social ills?

For those of us who tend to favor a relatively small and limited government, and prefer that social problems be addressed by private and voluntary mechanisms, it should be a source of some discomfort that these views find so little favor among some of the most highly educated and intelligent sectors of the population—the “elites” of popular conservative demonology. One simple explanation for this pattern, after all, would be that left wing political views are disproportionately attractive to the highly educated and intelligent because they’re best supported by logic and evidence. Following Aumann’s agreement theorem, this would imply that libertarians should regard the disagreement of large numbers of well-informed people who are at least as intelligent as we are as prima facie evidence that our views are in error, and revise them accordingly.

Nozick speculates that “wordsmith intellectuals” grow accustomed to winning the highest accolades in the academic environments of their formative years, and that this disposes them to be hostile toward the distribution of rewards in a market economy, which may accrue heavily to those with education, but are not necessarily strongly correlated with the kind of verbal intelligence that garners the top academic awards. Crudely put: The middle-class professor or writer will tend to feel cheated by a system that heaps greater rewards on those she remembers as academic inferiors. However plausible or implausible one finds Nozick’s account when it comes to the choice between capitalism and socialism, it seems less satisfactory as an account of the preference for expansive government within a market framework—even if something like this might contribute to the feeling that the wealthy can’t really deserve their holdings.

One thing to bear in mind is that even informed and intelligent people do not typically arrive at their political views by an in-depth review of the evidence in each particular policy area. Most of us can only be really expert in one or two spheres, and in others must rely heavily on those who possess greater expertise and seem to share our basic values. In practice, most people select a “basket” of policy views in the form of an overarching political ideology—which often amounts to choosing a political community whose members seem like decent people who know what they’re talking about. So we needn’t assume the majority view of the intellectual class represents the outcome of a series of fully independent judgments: A relatively mild bias in one direction or another within the relevant community could easily result in an information cascade that generates much more disproportionate social adoption of the favored views. So any potential biasing factors we consider need not be as dramatic as the ultimate distribution of opinion: Whatever initial net bias may exist is likely to be magnified by bandwagon effects. We should also bear in mind that polls of academic faculties often limit the options to “liberal” and “conservative”—and it seems plausible that responses here reflect the rejection of conservative views on social issues, where liberals and libertarians are generally in agreement—though there’s clearly more to the story than that.
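To make the cascade mechanism concrete, here is a toy simulation in the spirit of the classic sequential-learning models of Bikhchandani, Hirshleifer, and Welch: agents choose between two views in sequence, each receiving a private signal mildly tilted toward view A, but deferring to any clear majority among those who chose before them. The decision rule and parameters are illustrative assumptions, not an empirical model of academia.

```python
import random

def community(n=500, signal_bias=0.55, threshold=2):
    """Fraction of a community ending up with view A under sequential adoption.

    signal_bias: chance an agent's private signal favors A (the mild initial tilt).
    threshold:   lead at which agents ignore their own signal and join the majority.
    """
    a = b = 0  # running counts of agents holding views A and B
    for _ in range(n):
        lead = a - b
        if lead >= threshold:
            a += 1    # cascade: join the A majority, ignoring one's own signal
        elif lead <= -threshold:
            b += 1    # cascade: join the B majority
        elif random.random() < signal_bias:
            a += 1    # no cascade yet: follow one's own (mildly tilted) signal
        else:
            b += 1
    return a / n

shares = [community() for _ in range(2000)]
print(sum(s > 0.5 for s in shares) / len(shares))        # how often the A cascade wins
print(sum(min(s, 1 - s) for s in shares) / len(shares))  # average size of the dissenting minority
```

Under these assumptions, nearly every simulated community ends up close to unanimous, and the side with the modest 55 percent signal advantage wins those near-unanimous cascades only somewhat more often than it loses them. The final distribution of professed opinion, in other words, is far more lopsided than the distribution of private judgments that produced it, which is all the bandwagon point above requires.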

Here, then, is an alternative (though perhaps related) source of potential bias. If the best solutions to social problems are generally governmental or political, then in a democratic society, doing the work of a wordsmith intellectual is a way of making an essential contribution to addressing those problems. If the best solutions are generally private, then this is true to a far lesser extent: The most important ways of doing one’s civic duty, in this case, are more likely to encompass more direct forms of participation, like donating money, volunteering, working on technological or medical innovations that improve quality of life, and various kinds of socially conscious entrepreneurial activity.

You might, therefore, expect a natural selection effect: Those who feel strongly morally motivated to contribute to the amelioration of social ills will naturally gravitate toward careers that reflect their view about how this is best achieved. The choice of a career as a wordsmith intellectual may, in itself, be the result of a prior belief that social problems are best addressed via mechanisms that are most dependent on public advocacy, argument and persuasion—which is to say, political mechanisms.

It seems equally possible, however, that a post hoc desire to justify the choice of such a career might play a biasing role. A person without extravagant material tastes can live quite comfortably as an academic or writer, and the work itself is highly interesting and intrinsically appealing. But intellectual jobs of this sort tend not to leave one with the resources to devote large amounts of money to charitable causes without significantly curtailing consumption of minor luxuries: meals out, shows, electronics, vacation travel, enrichment classes for the kids, and so on.

If the world is primarily made better through private action, then the most morally praiseworthy course available to a highly intelligent person of moderate material tastes might be to pursue a far less inherently interesting career in business or finance, live a middle-class lifestyle, and devote one’s wealth to various good causes. In this scenario, after all, the intellectual who could make millions for charity as a financier or high-powered attorney, but prefers to take his compensation in the form of leisure time and interesting work, is not obviously morally better than the actual financier or attorney who uses his monetary compensation to purchase material pleasures. Both are declining to sacrifice personal satisfaction in order to help others—one has just chosen a form of compensation that can’t be taxed and redistributed easily. If private efforts are ineffectual or relatively unimportant compared with political action, however, the intellectual can rest assured that he’s satisfying his moral obligations by paying taxes and writing persuasively in support of the appropriate political remedies.

This account seems consistent with our current political rhetoric, in which progressive political views are taken to signify compassion and concern for the badly off, while conservative or libertarian views are (progressives often say) evidence of callousness or selfishness. As Jason Brennan observes in a recent post at Bleeding Heart Libertarians, there’s something a little odd about using political views as a metric of compassion or selfishness. Talk, after all, is cheap: It costs nothing to express verbal support for a policy or candidate. One might think a better measure would be some indicia of compassion that involve a modicum of sacrifice—charitable donations or hours volunteered—and by these measures, Brennan claims the evidence is that progressives fare no better than anyone else. But of course, if you assume that political mechanisms are vastly superior to private ones, then writing blog posts and op-eds supporting progressive policies (as opposed to giving large sums to charity or working in a soup kitchen) may be the more morally relevant way of expressing compassion.

Of course, many intellectuals of every ideological stripe also give to charity or volunteer, and some lack the temperament that would make high-paying corporate work a realistic alternative. And one can just as easily tell a complementary story that explains why private businessmen would be disposed to believe (either because of selection effects or post hoc rationalization) that contributing to private economic growth is the best way to improve the world.

Still, combined with the effect of social information cascades, this account provides one reason we might expect wordsmith intellectuals to favor progressive views independently of whether these views are the best supported by arguments: It is on these views that—by engaging in intellectual activity, and by voting and advocating for the appropriate policies—intellectuals are already best meeting their moral obligation to help make the world better, even if other career choices might enable them to make larger direct, material contributions. This line of reasoning is no excuse for libertarians to become glibly complacent in their views, or to substitute psychoanalytic for substantive responses to specific progressive arguments. But it is, perhaps, reason to be less worried that the predominance of progressive views among intellectuals is, in itself, necessarily strong evidence against the libertarian position.

Class and the Fourth Amendment

September 10th, 2012 · 13 Comments

Most students learn in history class that our Fourth Amendment emerged from the hostility of the American colonists to “general warrants” and “writs of assistance” authorizing intrusive, discretionary searches of private homes.  What they’re seldom taught is how strongly that hostility was bound up with undisguised class-based contempt for the officers who conducted those searches—so much so that the scathing rhetoric deployed in the speeches and documents recognized as the Fourth Amendment’s inspiration can be a bit of a shock to modern ears.

A list of complaints against British abuses, adopted by Bostonians at a town meeting and then widely circulated as an influential pamphlet, lamented that under writs of assistance “our houses and even our bedchambers are exposed to be ransacked, our boxes, chests, and trunks broke open, ravaged, and plundered by wretches, whom no prudent man would venture to employ even as menial servants.” A legendary speech against the writs by the attorney James Otis—recorded and summarized by a young John Adams, who declared himself profoundly influenced by it—similarly complained that search authority was conferred upon “not only deputies, &c. but even THEIR MENIAL SERVANTS ARE ALLOWED TO LORD IT OVER US — What is this but to have the curse of Canaan with a witness on us, to be the servant of servants, the most despicable of God’s creation.”

Even before the Internet, some of the choicest bits of vitriol came from anonymous and pseudonymous essayists. An editorial in The Monitor alleged that general warrants “empower mean, Low-lif’d ignorant men to enter, and to act at discretion.” Maryland’s “Farmer and Planter” was outraged that discretionary search powers would be granted excise officers, who were drawn from the “scruf and refuse of mankind.” The “Father of Candor” sniffed at the notion that “any common fellow” might presume to search a home on the basis of gossip from his (presumably similarly “common”) acquaintances. A writer styling himself “Freeman” asked rhetorically: “What are the pleasures of the social table, the enlivening countenances of our family and neighbors in the fire circle or any domestic enjoyment if not only Custom House Officers but their very servants may break in upon and disturb them?” These “servants,” Freeman assured his reader, were “a gang of villains, who for meaness [sic] rapacity and corruption may be stiled the very dreggs and sediment of human nature in the last and highest stage of its possible depravity.” In comparison, “Regulus” seems positively restrained in characterizing these officers as “the most despicable wretches.”

There’s loads of commentary in the same vein. While it was scarcely the primary consideration, one reason specific judicial warrants seemed less offensive than the general warrant was that when magistrates rather than deputies had discretion over which homes could be searched, the indignity of the search was not compounded by the insult of being subject to the will of a social inferior.

You can, of course, find plenty of remarks casting the right of privacy as a great equalizer as well, such as William Pitt’s famous defense of the sanctity of the home:

The poorest man may in his cottage bid defiance to all the forces of the Crown. It may be frail — its roof may shake — the wind may blow through it — the storm may enter — the rain may enter — but the King of England cannot enter — all his force dares not cross the threshold of the ruined tenement!

In a way, these apparently quite different sentiments were probably mutually reinforcing: At a visceral and emotional level, elite opposition to general warrants was strengthened by the ghastly prospect of these crude peasants breaking a gentleman’s doors, but when that sentiment is translated into a universal principle, it also entails the right of the “poorest man” to defy the King.

The extraordinary dilution of the Fourth Amendment in recent decades—powerfully chronicled in Stephen Schulhofer’s new book More Essential than Ever—is routinely ascribed to a wide variety of causes, all with some justice: The rise of regular, professionalized police forces; changing technologies; evolving social norms and a culture of exposure; our ongoing Wars on Drugs and Terror (and probably Digital Piracy too—give it a few years). But I’d wager that a significant and subtle role has been played by subliminal class considerations—often in tandem with these other factors.

In Colonial America, recall, policing was largely carried out by amateurs and part-timers, along with such volunteers as they might deputize to help conduct a search. These days police are professionals, often college graduates, who wear suits to court and regularly appear before the same judges.

The general searches that so incensed Bostonians were meant to aid in the collection of excise taxes, which meant their targets were often respectable merchants and businessmen. Drug dealers are the primary targets of searches today—fully 85 percent of wiretaps are sought for narcotics investigations—which means, innocent or guilty, they’re seldom folks from a social class that naturally inspires judges to identify with them.

Finally, the central mechanism for enforcing the ban on unreasonable searches and seizures was a common-law trespass suit. An “officer” who searched without warrant was treated like any other private citizen and personally liable for damages. A warrant would typically immunize the agent who carried out the search—since he was only obeying the court’s instructions—but the officer or citizen whose affidavit gave rise to the warrant could still be liable if no contraband was found. For a variety of reasons, that’s no longer how it works. Now courts primarily enforce Fourth Amendment restrictions—unevenly, and with a growing list of exceptions—via the exclusionary rule, which bars the use of evidence derived (directly or indirectly) from illegal searches in criminal prosecutions. The predictable consequence is that virtually the only time an appellate judge sees a Fourth Amendment case is when a guilty criminal is trying to get a conviction overturned.

Perhaps it’s no surprise, then, that courts often seem to bend over backwards looking for ways to accommodate police and other government officials, explicitly assuming—utterly contrary to the spirit of the Fourth Amendment—that these upstanding professionals can and must be routinely trusted with substantial individual discretion over whom, how, and when to search. We certainly don’t need a return to the hostile view that government investigators are “despicable wretches”—but it would be nice to see more recognition that they are, after all, “common fellows” whose intrusions on the privacy and dignity of their fellow citizens require judicial supervision, whether that citizen is an affluent merchant or “the poorest man… in his cottage,” and regardless of whether the investigator is a “menial servant” or a clean-cut middle-class professional.

Lord Camden on NSA Surveillance

September 5th, 2012 · 9 Comments

I’m doing a deep dive into the pre-history of the Fourth Amendment, and am continually amazed at how perfectly the condemnations of the “general warrants” that incensed the Framers apply to the programmatic authority granted the NSA under the FISA Amendments Act. The legal challenge to that statute is currently hung up on questions of standing, which will shortly be before the Supreme Court in the case of Amnesty v. Clapper. The government’s position is that only those who can prove they’ve actually been wiretapped are entitled to challenge the sweeping and highly discretionary surveillance programs authorized by the law—and since FISA surveillance is permanently secret by default, nobody can have that proof unless the government volunteers it.

Which is funny (if you have a morbid sense of humor), because in one of the three seminal cases historians regard as the inspiration for the Fourth Amendment, Entick v. Carrington, Lord Camden’s ruling cites this exact situation as a key reason why general warrants should be forbidden:

[The general warrant] is executed by messengers with or without a constable (for it can never be pretended, that such is necessary in point of law) in the presence or the absence of the party, as the messenger shall think fit, and without a witness to testify what passes at the time of the transaction; so that when the papers are gone, as the only witnesses are the trespassers, the party injured is left without proof.

If this injury falls upon an innocent person, he is as destitute of remedy as the guilty: and the whole transaction is so guarded against discovery, that if the officer should be disposed to carry off a bank bill he may do it with impunity, since there is no man capable of proving either the taker or the thing taken.

Because a general warrant, unlike a specific or particularized one, left it to the discretion of the officer executing it to decide who would be searched, it would give rise to an intolerable scenario in which many innocent people might be subject to the exposure or seizure of their private papers, with no realistic prospect of remedy for the invasion. The Framers of the Bill of Rights, naturally, found this intolerable too—and thus scholars pretty much universally recognize that the primary purpose of the Fourth Amendment was to prohibit such general warrants.

If the government’s argument is allowed to fly, we have to conclude that the Framers utterly failed in their attempt to remedy the kind of problem Lord Camden identified, because the problem just reappears at a higher level. General warrants, abhorrent for a host of reasons Camden and his contemporaries identified, are formally prohibited. But if you want to enlist the courts to enforce that prohibition, by challenging a law that rather blatantly establishes a system of general warrants, you’ve got to prove you were actually searched under one of those general warrants. When the digital papers are acquired, as the only witnesses are the eavesdroppers, the party injured is left without proof, and the whole transaction is so guarded against discovery that there is no man capable of proving either the interceptor or the thing intercepted.

Take a deep breath and appreciate the chutzpah involved here for a second. The government is effectively saying they’re immune from Fourth Amendment challenges because they have managed to create the exact, explicitly described scenario the Fourth Amendment was written to prevent. Maybe next they can pass a law making it illegal to publicly complain that your First Amendment rights have been violated.

On Partly Verbal Disputes

September 4th, 2012 · 1 Comment

In the previous post, I suggested that thoughtful and educated adults rarely engage for very long in purely verbal disputes (which, alas, is not the same as saying such disputes are rare). When people don’t disagree substantively, but are only using words differently, a few minutes of argument should usually make this fact apparent to all parties. But smart people do often get mired in disputes that are partly verbal and partly substantive. Because it’s evident to everyone that there is a real disagreement—the dispute clearly is not merely semantic—the parties may overlook semantic components of the disagreement, and as a result fail to make headway or argue productively.

This is, I think, why you’ll sometimes see arguments between libertarians, conservatives, and progressives get stuck on what appear to be semantic points: Who has the “true” conception of “freedom”? Whose understanding of “coercion” is correct? People are reluctant here to simply eliminate the contested term and stipulate more precise ones that reflect their divergent conceptions, in part because these are terms with thick normative connotations and great rhetorical power. Members of our political and linguistic community tend to regard “freedom” as presumptively valuable, and “coercion” as presumptively bad, which makes us wary of abandoning (even for the sake of clarity in argument) a definition of “freedom” that tracks our convictions about what’s valuable, or one of “coercion” that maps onto our views about what practices and institutions are morally objectionable. To concede an opponent’s definition, we may fear, opens the door to sneaky arguments that trade on the normative connotations the term has in ordinary usage.

But this isn’t the only reason we might be reluctant to employ the “method of elimination” and simply stipulate new, purely descriptive terms corresponding to our divergent conceptions. Moral reasoning and moral argument often proceed from shared intuitions about core types of cases. The conviction that torturing innocent people for fun is wrong is, for most of us, more certain and stable than our commitment to any abstract principle or theory that purports to explain why this is the case, and any theory that fails to condemn such conduct will be, for that reason alone, unacceptable. We have many “thick” terms—terms which are partly descriptive and partly normative—precisely because we live in linguistic communities that share strong moral intuitions about types of conduct that are admirable or contemptible. So if the term “coercion” has a negative connotation, and it seems to comport with ordinary, colloquial usage to characterize a particular relationship as “coercive,” that’s a data point providing at least prima facie evidence that it’s in tension with our shared moral intuitions, and has features we would normally regard as objectionable without special justification. That’s hardly dispositive in itself—thick terms like “unchivalrous” can fall out of fashion because we come to see that they’re bound up with mistaken collective value judgments—but it’s not unreasonable to be wary of banishing, by neologism, whatever collective wisdom may be embedded in our ordinary linguistic practice. We can think of ordinary usage as establishing a burden of proof on someone who proposes introducing a distinction our colloquial usage fails to make, or ignoring one that it does typically reflect.

Whether for these reasons or out of pure stubbornness or misguided essentialism, then, we often choose to draw our battle lines around our words, arguing over whose conception of the contested term is “correct,” in the sense of tracking what is morally significant. This debate, though it is in one sense about language, is substantive rather than merely semantic. But in practice, it also means disputants are routinely trying to keep quite a few figurative balls in the air simultaneously. Sticking with the example of the debate over “coercion” in the workplace, the bones of contention may include:

  • Which senses of “coercion” track our ordinary linguistic practice
  • Which senses of “coercion” track normatively significant features of an action or system, or presumptively require special justification
  • Whether our ordinary or colloquial use of “coercion” applies to a particular case or example
  • Whether one or more narrower senses apply to that case
  • Whether a particular case is morally objectionable or problematic
  • Whether that particular case is objectionable (or acceptable) in virtue of fitting (or not fitting) one or more senses of “coercive,” or for some other reason.

Just to dilate briefly on that last point: An action or system may be coercive in a sense that presumptively demands special justification, but also be covered by such a justification. Provided we recognize the category of “justified coercion,” it would be a mistake to infer from a judgment that a case is morally acceptable on the whole that it must not involve (a normatively significant sense of) coercion. Conversely, a case may be morally objectionable without involving coercion, or for reasons distinct from whatever coercion it involves. If I make a fully informed choice, from an attractive range of offers, to accept a job that involves obtaining supervisory approval each time I use the bathroom—and have sufficient resources that I could quit without serious hardship—it seems like a stretch to describe me as a victim of “coercion” in any ordinary sense. Nevertheless, if this requirement serves no significant business purpose, it seems open to moral criticism as a pointless intrusion on my dignity, autonomy, and privacy. Not every moral criticism can or need be shoehorned into the rubric of coercion.

In any event, when the points of contention are multiplied in this way, even very smart people are susceptible to confusion. Because the overall disagreement is clearly substantive and not merely verbal, and because more specific points of contention within that overall debate may arise for either semantic or substantive reasons, or for both at once, it will not always be obvious to all the parties which are the substantive and semantic components. That means it’s important to bear in mind that even the most manifestly substantive disagreement—a disagreement that is obviously not “purely verbal”—can still be partly verbal, and it often requires close attention to sort out which part is which.

Intellectual Strategies: Precisification & Elimination

August 28th, 2012 · 8 Comments

For many people, philosophy is the ultimate exemplar of a useless discipline—or at the very least, high in the top ten. I recall that on the first day of my high school drivers’ ed class, our instructor began by asserting that this was probably the most practical subject most of us would study, in contrast to subjects like philosophy, which might be very nice to mull over on a lazy afternoon, but wouldn’t have much bearing on our daily lives. (Even at the time, something about this rang false.  I didn’t then have the presence of mind to suggest that we do philosophy constantly—though mostly subconsciously, and therefore badly.) Perhaps I’m just rationalizing four expensive years, but readers will not be surprised to hear that I think otherwise. One reason the utility of studying philosophy isn’t always apparent, though, is that unless you go on to teach philosophy, you’re more likely to make routine use of the methods and strategies, the mental toolbox, that the training imparts than you are to deal explicitly with the traditional subject matter of philosophy. In the interest of not annoying our fellow humans more than necessary, most of us don’t call explicit attention to this or use philosophy jargon when we’re doing it—and often these strategies become so ingrained that their deployment is basically automatic. Since many of these strategies are useful in a wide variety of discursive contexts, I thought it might be nice to do an occasional series of posts highlighting useful ones when they occur to me. Maybe I’ve been doing this for years without quite noticing it—but giving these posts a common title might itself be helpful. If nothing else, I can turn them into a Kindle Single someday.

Let’s start with precisification—a process by which we try to dissolve merely semantic disagreements (and a surprising proportion of our most heated disagreements are at least partly semantic) by stipulating more precise definitions for contested terms that may be vague or ambiguous. I tried this in a recent post on a debate over “coercion” in the workplace that was playing out between the bloggers at Crooked Timber and Bleeding Heart Libertarians. Instead of arguing about which types of situations, actions, or social systems were “coercive” simpliciter, I suggested it would be more fruitful to eliminate the contested, morally freighted term and replace it with a series of more precise terms, such as “exploitative coercion” and “baseline-sensitive coercion.” The disputants might continue to disagree about which forms of coercion were morally significant—serious ethical disagreements among educated people are seldom wholly semantic—but we could at least hope that there would be less disagreement about which precisified conception of coercion might be properly applied to different situations, or at least that the persisting disagreement would be more illuminating.

For this reason, I sometimes think of this as the elimination strategy: If disagreement seems to hang crucially on some contested or normatively loaded term, eliminate that term and see how it alters the conversation. (This does have its own pitfalls, because some terms are intrinsically normative, and cannot be usefully eliminated. The attempt to reduce normative terms like “good” or “wrong” to strictly descriptive conditions has come to be known as the “naturalistic fallacy,” though disagreement about whether it is actually a fallacy persists.) Elimination of this sort—not to be confused with “eliminationism,” which sometimes refers to the substantive denial of the existence of a class of entities, like moral facts or mental states—serves a couple of helpful functions. First, it forces everyone to make it clear to both themselves and others just what they mean. Every student of philosophy, starting with the hapless Athenians interrogated by Socrates, has had the disconcerting experience of discovering how hard it can be to clearly explain concepts they use every day and thought they understood completely. Second, it can clear away some of the emotional and normative baggage that inevitably attaches to terms like “liberty” or “coercion” even when we pledge with all sincerity to use them in some neutral and descriptive sense.

The late novelist Robert Anton Wilson advocated a rather extreme version of this strategy by urging the use of “E-Prime,” or English written without any form of the word “is.” Aside from being grammatically awkward, that’s probably neither necessary nor sufficient, even if it might be a useful exercise on occasion: You can make essentialist errors without using the word “is,” but happily, you can also avoid them while retaining it. The key move is to remember that saying “X is a Y” either asserts strict logical identity—which is only true in the trivial case when you’re asserting that an entity is identical to itself, or that one entity is known by two names—or, more often, is shorthand for ascribing a property that’s almost always more conditional or relational than the straightforward grammar of “is” implies. The Coke can on my desk “is” red, which is to say, under conditions of “normal” illumination, it will reflect light of a wavelength that produces a phenomenal experience of “red” in a “normal” human observer when viewed through air, vacuum, clear glass, and so on. Any scientifically literate person will quickly acknowledge that this is what they “really” mean, implicitly, when they say “the Coke can is red.” Obviously nobody wants to lay all that out explicitly in normal conversation, both because it’s incredibly clunky and because there’s not much practical value, outside of dorm room bong sessions, in perpetually reminding ourselves that the “red” isn’t really in the can, but a mental property generated by an object-observer interaction that occurs within specific parameters. But in more theoretical discussions, even when at some level we know better, the beguiling simplicity of “is” grammar can obscure details that do make a difference.

So, for instance, you may recall a little flurry of debate a while back over the Republican rhetorical trope of characterizing Social Security as a Ponzi scheme, and the ensuing boomlet of essays and blog posts vehemently insisting that obviously the program is or is not an instance of one. A more productive frame might have been: In what respects can Social Security be meaningfully analogized to the classic Ponzi scheme, in what respects does that analogy break down, and on which dimensions would these similarities render the two susceptible to the same concerns or objections? That’s not a frame that lends itself to catchy slogans, and probably any thoughtful person who participated in that debate would readily agree that this was the real question under dispute all along. But I suspect you get a different and more instructive dialogue by lingering a bit in that matrix of similarities and differences, rather than seeing it as a brief waystation on the road to the crucial all-things-considered verdict on whether it ultimately “is” or “isn’t.” If you don’t like this particular example, pick one of your own—there’s no shortage.

So that’s the first handy can-opener in our intellectual Swiss Army Knife: Haunted by a sneaking suspicion that a seemingly substantive debate is getting tangled in the weeds of semantics? Drop the contested term, and make up as many new terms as you need to say what everyone means. If the hang-up depends on what the meaning of the word “is” is, give up on trying to determine whether X is essentially Y and look at all the varied dimensions of similarity and difference to see which actually matter; whether they add up to an overall equivalence probably won’t. These moves will seldom entirely dissolve a serious disagreement—thoughtful people will generally notice if a difference in views is merely and wholly semantic before they waste much time arguing about it—but they’ll often make the disagreement more productive.

Update: My ever-astute commenters note two related resources: David Chalmers’ essay “Verbal Disputes” (which similarly recommends what he calls the “method of elimination”) and a sharp post at Less Wrong cataloging “37 Ways That Words Can Be Wrong.”

What Follows from “You Didn’t Build That”?

July 19th, 2012 · 34 Comments

Barack Obama’s recent “Elizabeth Warren Moment” at a speech in Roanoke has been getting plenty of attention, though the focus of much of the criticism seems misplaced. Here’s the full relevant passage:

Look, if you’ve been successful, you didn’t get there on your own. You didn’t get there on your own. I’m always struck by people who think, ‘Well, it must be because I was just so smart.’ There are a lot of smart people out there. ‘It must be because I worked harder than everybody else.’ Let me tell you something, there are a whole bunch of hardworking people out there.

If you were successful, somebody along the line gave you some help. There was a great teacher somewhere in your life. Somebody helped to create this unbelievable American system that we have that allowed you to thrive. Somebody invested in roads and bridges. If you’ve got a business, you didn’t build that. Somebody else made that happen. The Internet didn’t get invented on its own. Government research created the Internet so that all the companies could make money off the Internet. The point is, that when we succeed, we succeed because of our individual initiative, but also because we do things together.

As others have noted, it’s not entirely clear what’s being referred to in the “you didn’t build that” line that’s gotten most of the attention: It could mean “the business”—in which case it seems obvious that what he means is “you didn’t build that alone”—or it could equally plausibly mean “this unbelievable American system” and “roads and bridges” and all the other things that made it possible for that business to be successful.

Either way, it seems undeniable as a self-contained descriptive point: No man is an island, and the wealth and success we enjoy are all profoundly dependent on a context of social cooperation that makes it possible. In 15,000 B.C., you’d have been dirt poor however smart and hardworking you were.

Indeed, arguably it doesn’t go far enough, as it suggests some sharp distinction between things that involve help from “somebody else,” on the one hand, and on the other traits like being “so smart” or “hardworking,” which presumably each individual really is responsible for. But as John Rawls would argue, that’s hardly true either: If you’re “so smart,” well, “you didn’t build that,” ultimately. You aren’t responsible for the genetic endowment that enables high-level cognitive processing, the nutrition that fed your developing infant brain, or for the vast store of inherited knowledge that allowed you to take calculus for granted, rather than re-deriving it from scratch (once you’d invented “writing” and “numbers”). If you’re hardworking, then it’s a good thing that your upbringing and education imbued you with a work ethic, and that your brain chemistry (with or without aid from modern pharmaceuticals) is well-calibrated for sustained focus and impulse control.

All that’s true enough, but what is the point supposed to be? That we need to “do things together” to succeed? Well, obviously. But as Aaron Powell and Jason Brennan rightly ask, why should we assume that “we” and “together” has to mean “through government”? Why can’t “we” do things “together” by… well, forming businesses? Clubs? Civic organizations? Churches? If we’re assigning credit for past achievements—and implicitly, the debt we owe for them—why the federal government and not, say, our fellow citizens directly, or state and municipal authorities, or the whole of humanity engaged in mutually enriching global trade?

Of course, there are solid arguments why certain things we build together—roads, for one—will generally not be adequately supplied unless we do them through government. But as Aaron Powell points out, if we limit ourselves to these kinds of examples, we arguably end up with a pretty libertarian conception of government. Does Obama think he has to make the argument against anarcho-capitalism? I’m all for a more philosophical approach to modern political discourse, but starting from a foundational justification of the state in terms of provision of essential public goods seems to me to be taking it a bit far: Even we minarchist libertarians are already on board with that, and I hadn’t thought the anarchists a significant enough force in the current electoral debate to require an extended refutation. If that’s the justificatory strategy he wants to embrace, of course, I’ll take it—and look forward to the radical reduction in the size and cost of government.

Maybe, however, the point is more along the lines of the Nagel/Murphy “Myth of Ownership” argument: Since you didn’t earn whatever wealth you have all by yourself, without external help, you can’t really claim to deserve or be entitled to it—it’s a matter of luck you’re not one of those smart, hardworking people who didn’t get rich, after all—and so “we” (apparently meaning “the government”) get to take back however much “we” think is appropriate.

But this one proves rather too much, doesn’t it? You didn’t assemble your own DNA, or design your own reproductive system—your parents, and before them eons of evolution, built that. If you have religious beliefs and practices, you didn’t build those—you inherited them from a whole tradition of thought, transmitted through institutions made of other people, in books written by other people, created using printing technology invented by other people. You have views on politics you want to express? You didn’t build those alone either—and you probably even want to express them over a computer network whose very different and primitive early form the government subsidized decades ago.

As it turns out, we generally think we are entitled to control, or have rights over, a whole lot of things that are not (as Robert Nozick put it) “deserved all the way down” in the sense that we’re completely responsible for them—since, at the end of the day, nothing is “deserved all the way down” in that sense. It’s not that the “you didn’t build that” argument is wrong as a factual matter—it’s that it’s true about everything, and therefore doesn’t get you much of anything.

A First Sale Legal Hack?

July 17th, 2012 · 8 Comments

Here’s a little puzzle for the lawyers out there. It’s pretty obviously copyright infringement to buy a legitimate copy of a CD, make a copy, and then give away or sell the copy. But how about the reverse? Millions of people in their 20s and 30s have large CD collections acquired back in Ye Olden Tymes, when Australopithecus purchased music encoded on shiny plastic discs rather than our modern formats of MP3 and vinyl. Most of those shiny plastic discs are gathering dust in a closet somewhere, having long since been ripped to the hard drives and iPods from which we actually play them. Would it be legal to give away or sell those mouldering original discs while retaining the digital copy you actually listen to? And could the buyer then repeat the process, and so on indefinitely?

Intuitively, the answer is: Of course not! That would be indistinguishable, in its practical effect, from the infringing act of selling or distributing an unauthorized copy. It would defy common sense for it to make a difference whether you keep the copy or the original. And when it comes to computer programs, the copyright statute is quite explicit: Archival copies have to be destroyed as soon as one ceases to be the lawful owner of the original purchased copy. On its face, you’d think the same principle would have to apply to sound recordings.

Except it’s not obvious from the statute that it does, technically, and a little quick Googling didn’t turn up any cases directly on point. Here’s the probably-too-cute-by-half argument for why there might be a loophole here. The RIAA has been notoriously cagey about this—but even their attorneys have argued before the Supreme Court that the ubiquitous practice of copying a CD you’ve purchased to your hard drive (and from there to your iPod) is a protected Fair Use that doesn’t infringe the exclusive rights of the copyright holder. The Ninth Circuit implicitly relied on the idea that noncommercial copying for personal use is noninfringing in RIAA v. Diamond, which established (over the recording industry’s objections) that MP3 players are, in fact, perfectly legal.

Now the tricky bit. If the CD owner genuinely makes that copy purely for the purpose of personal use—not as step one in a larger plan to later sell the CD—that’s a lawful copy under the statute. The statute still limits the further copying, sale, and distribution of that lawful copy for anything other than personal use. But “retention” isn’t one of the exclusive rights protected by copyright. Except for the provision covering software backups, there’s nothing explicitly suggesting that a lawful copy can become unlawful without some further triggering action—again, copying, sale, distribution, rental, and so on. Now flip back to the original CD copy sitting in a closet. The owner has a clear right under the First Sale doctrine to sell that particular copy without authorization from the copyright owner. And I can’t find anything explicit in the statute requiring anything be done about lawful copies previously made when the original CD is legally sold or gifted. Obviously if you’re trying to follow the spirit of the law, you’d delete those lawful personal copies when you transferred ownership of your original copy—though that might actually be trickier than it sounds in a household where music is shared among several people with iPods and laptops. But technically, “not deleting a copy” doesn’t appear to be an action that violates an exclusive right of reproduction, distribution, or display. And all the other allowances for personal use hinge on whether a particular copy was lawfully made—not lawfully retained. (It’s implicit in the whole idea of a “backup copy” that one would continue to use it if the original is lost or destroyed, of course, so it can’t be that continued use of the music is contingent on physical possession of the original CD.)

Assuming the appropriate intention on the user’s part, then, there’s no infringement when the personal copy is made—unless we’re prepared to accept that essentially every legal CD owner is either an infringer or listens to their digital copies only at the sufferance of the RIAA, which could revoke permission at any time. Under the First Sale Doctrine, there’s no infringement when the original is sold. And there’s no infringement when the copy is retained, because “not deleting something” isn’t generally an act of infringement. Your gut says this overall pattern of conduct has to be infringing—even if in practice it would be impossible to police—but it’s hard to actually pinpoint when the infringement happens if you hew strictly to the statute’s definition of infringement.

Since the result is absurd on its face, I’m pretty sure this line of argument can’t be right. I’m just not sure where it goes wrong if we’re applying the actual statute and case law, as opposed to our intuitive feelings about what makes sense. Any IP lawyers want to throw me a life vest?
