Imagine my surprise this morning when I saw an incredible claim circulating that researchers from the Johns Hopkins University Bloomberg School of Public Health had conducted a study of fatal mass shootings in the United States but had “omitted one of the most often cited mass shootings in U.S. history.”
As it turns out, this claim actually was in-credible, as in, not credible.
I don’t read Breitbart so I don’t know if this error represents a legitimate mistake or a pattern of Kellyanne Conway-esque “alternative facts.” But I learned of the Breitbart article through another blog I follow, so I do know the error is already reverberating through the pro-gun echo chamber online.
Insofar as the author of the Breitbart piece cited the Johns Hopkins press release rather than the (open access, publicly accessible) original publication, I’m fairly confident he did not read the article he is criticizing. But even the press release makes clear that any claim the researchers excluded Sandy Hook from their study is false.
In fact, as the article explains at greater length, it is the FBI’s Supplemental Homicide Reports that omit not just the Sandy Hook Elementary School shooting, but also the Aurora, CO movie theater shooting (2012) and the Sutherland Springs, TX church shooting (2017).
HOWEVER, the authors used other data sources to add back in a total of 33 cases omitted from the FBI SHR data.
I have often been critical of research on guns — on this blog and in my published scholarly work — but unlike human beings, not all critiques are created equal. Some time ago I stated that I am neither pro-gun nor anti-gun but pro-truth when it comes to guns. Pro-gun advocates should not cherry-pick their data or make unfounded criticisms any more than gun control advocates should.
The research article in question here is open access and publicly viewable for anyone interested in seeing what the authors actually say. As I note below, it merits a closer look.
The dependent variable in this study is “fatal mass shootings.” The time frame is 1984 to 2017, and the study includes incidents in which four or more victims (not including any offenders) died and a firearm of any type was involved. As the authors are studying “fatal mass shootings,” this makes sense. But if the concern is large numbers of people dying in a single incident, limiting the focus to mass SHOOTINGS seems problematic. The authors want to know what REGULATIONS might lessen these deaths, but if regulations on guns result in SUBSTITUTION of other means of mass homicide, then those regulations do not have the intended outcome (or have a perverse unintended outcome).
I am a scholar of gun culture not gun violence, so I don’t know if there is any evidence of substitution in this area, but I do think those concerned with public health could profitably focus more broadly on mass murder rather than mass shootings (as Grant Duwe usefully does in his work).
Finally, the authors exclude “Florida, Kansas, Kentucky, Nebraska, and Montana from our analysis because of systemic Uniform Crime Reports (UCR)–SHR reporting issues over multiple years.” I do not know what the possible implications of these exclusions are.
No research is perfect, and some of these exclusions highlight the need for better data for scholars to work with. The authors of this study appropriately point out the limitations of their data. They recognize that what they are trying to explain are “rare events.” They “acknowledge that our results are influenced by the definition [of mass shooting] that we have chosen.”