
Surveying Lott, Part II

January 14th, 2003

Like Jim Henley, I received an email from Prof. Lott himself yesterday. In true Fox Mulder form, I want to believe, but there’s still no really solid confirmation. On a few points, there seems to be little doubt. Plenty of people — from publishers to academic collaborators — can corroborate that Lott had a hard drive crash in 1997. It’s also easy enough to confirm that Lott has just recently completed a study of defensive gun uses at the American Enterprise Institute, the results of which he says will be published shortly. On the other hand, there’s still the dog that didn’t bark: the absence of any physical remnants of, or other people who recall, what would have been, by all accounts, a fairly large undertaking. In his email, Lott did name one colleague with whom he said he discussed the 1997 survey — an economist whose name some readers would probably recognize — but when I contacted that person, he was unable to recall those conversations. Of course, it has been six years, so that doesn’t necessarily mean the conversations never happened.

As I noted earlier, though, the magic of the Internet gives us a potential solution. Some of you out there attended the University of Chicago around 1997, or have friends who did. Did anyone you knew work on this study? Do you have five minutes to email around and see if any of their friends did? Or to check via the alumni website? If so, drop me a line and let me know that you’re looking. If you find something, then this is all over, nice and clean, and the good doctor is easily vindicated. If you don’t, well, that’s information too.

There’s an interesting side issue raised by all this. Like Tim Lambert, I found it odd that so many other studies found much larger proportions of crime victims firing at least a warning shot, or firing and missing, though most (like the Kleck study, from which some think Lott got the 98% figure) find only about 2% actually injuring or killing their attackers. Even the results of the National Crime Victimization Survey (a study conducted by the DOJ, and so likely to produce underreporting of defensive firearm use and, a fortiori, of discharge) show 38% firing. I asked Lott how he could be consistently obtaining such dramatically smaller numbers. Here’s his response:

These are not statistically significant differences. Imagine that for a survey of 1,000 people you have a ±3-point 95 percent confidence interval; that involves essentially 60 people who are interviewed. For defensive gun uses you are talking about 1 percent of those surveyed. For brandishing, you are talking about a percentage of that 1 percent. Remember that these results are “weighted.” For the current survey, suppose that you are talking about something like 13 people who said that they used a gun defensively out of over one thousand people surveyed. One person fired the gun; the rest brandished it. Suppose instead that three people (maybe one or two from a large subgroup of the population) indicated that they had fired the gun. You could get something around 20 percent. That is a difference in answers of two people.

If that’s right (and I’ll take a professional statistician’s word for it), then it seems as though the technical “statistically significant” diverges pretty radically here from what is “significant” in the colloquial sense. If you tell me that there are two surveys, one of which claims that 98% of defensive gun uses are just brandishings, and another of which says it’s more like 75%, it sure sounds as though there’s a pretty stark sort of disagreement being described. If that’s the kind of variance that falls into the statistical fudge-range, though, it’s not altogether clear to me how useful any of these results are.
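For concreteness, here’s a quick back-of-the-envelope check in Python. This is my own sketch, not Lott’s calculation: it assumes simple binomial sampling, ignores the survey weighting he mentions, and uses his hypothetical 13 defensive users rather than his actual data.

```python
# Exact (Clopper-Pearson) 95% confidence intervals for a binomial
# proportion, applied to Lott's hypothetical: of 13 defensive gun users,
# either one fired (~8%, i.e. ~92% brandish-only) or three fired (~23%).
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided confidence interval for k successes in n trials."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

for fired in (1, 3):
    lo, hi = clopper_pearson(fired, 13)
    print(f"{fired}/13 fired: estimate {fired/13:.0%}, 95% CI {lo:.1%} to {hi:.1%}")
```

The two intervals (roughly 0.2% to 36% for one shooter, 5% to 54% for three) overlap almost completely, which is just Lott’s point: with a subsample of 13, moving two respondents from “brandished” to “fired” swings the point estimate from about 8% to about 23% without ever approaching a statistically significant difference.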

Update: James Lindgren has revised his report. Regarding the statistical issues, a friend who works as an economic researcher at another D.C. think tank writes the following:

If you want to know what percentage of people out of 1000 used a gun defensively and find in your survey that 13 did (as in his example), then, assuming the 1000 were randomly chosen, it is valid to say 1.3% of the US population used a gun defensively. You can also compare this finding to similar studies and test for significant differences between them because your randomly chosen sample size of 1000 guarantees that the distribution of gun use in your sample will mirror a T-distribution or chi-squared distribution or whatever statistical test you think fit to employ. In other words, your sample accurately represents the population from which it was taken.

However, if you want to know what percentage of people who used a gun defensively actually fired a shot, and you find that one out of 13 did, then it is NOT valid to say that 7.7% of people in the US population who used a gun defensively had to fire a shot. This is because your sample size is now 13, not 1000, and no sample smaller than 30, even if it was obtained randomly, is considered truly random. The implications here are, of course, that no statistical tests would be valid AND that no “weighted” value would be valid.

I don’t pretend to know a chi-squared from Chi-Chi Rodriguez, so make of that what you will. For what it’s worth, while Lott’s hypothetical involved 1000 people, he says his 1997 survey involved 2,424 respondents.
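If you’d rather see my friend’s point than take it on faith, a quick simulation makes the instability vivid. Again, this is only an illustrative sketch: the “true” rates below are invented for the example, not taken from Lott’s data.

```python
# Simulate many replications of a 1,000-person survey and compare the
# stability of (a) the overall defensive-gun-use rate with (b) the share
# of defensive users who fired, whose effective sample is only ~13.
import numpy as np

rng = np.random.default_rng(0)

N_SURVEYS = 10_000  # simulated replications of the survey
N_RESP = 1_000      # respondents per survey (Lott's hypothetical)
P_DGU = 0.013       # assumed true defensive-use rate (13 per 1,000)
P_FIRE = 0.08       # assumed true share of defensive users who fired

dgu = rng.binomial(N_RESP, P_DGU, size=N_SURVEYS)  # defensive users per survey
fired = rng.binomial(dgu, P_FIRE)                  # of those, how many fired

# (a) The overall rate is pinned down tightly...
rate = dgu / N_RESP
print(f"DGU rate: middle 95% of estimates spans "
      f"{np.percentile(rate, 2.5):.1%} to {np.percentile(rate, 97.5):.1%}")

# (b) ...but the conditional share swings wildly from survey to survey.
has_dgu = dgu > 0
share = fired[has_dgu] / dgu[has_dgu]
print(f"Share who fired: middle 95% of estimates spans "
      f"{np.percentile(share, 2.5):.0%} to {np.percentile(share, 97.5):.0%}")
```

On a typical run the estimated overall rate stays within a fraction of a percentage point of the truth, while the estimated share who fired bounces between 0% and 25% or more, which is exactly the shrinking-sample problem my friend describes.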
