Bloomberg SPH vs. Breitbart: Battle of “Alternative Facts”?

Imagine my surprise this morning when I saw an incredible claim circulating that researchers from the Johns Hopkins University Bloomberg School of Public Health had conducted a study of fatal mass shootings in the United States but had “omitted one of the most often cited mass shootings in U.S. history.”

As it turns out, this claim actually was in-credible, as in, not credible.

Screen cap of https://www.breitbart.com/politics/2020/02/18/johns-hopkins-study-no-evidence-assault-weapon-bans-reduce-mass-shootings/ on 2/19/2020.

I don’t read Breitbart so I don’t know if this error represents a legitimate mistake or a pattern of Kellyanne Conway-esque “alternative facts.” But I learned of the Breitbart article through another blog I follow, so I do know the error is already reverberating through the pro-gun echo chamber online.

Screen cap of https://www.breitbart.com/politics/2020/02/18/johns-hopkins-study-no-evidence-assault-weapon-bans-reduce-mass-shootings/ on 2/19/2020.

Given that the author of the Breitbart piece cited the Johns Hopkins press release rather than the (open access, publicly accessible) original publication, I’m fairly confident he did not read the article he is criticizing. But even reading the press release makes clear that any claim that the researchers excluded Sandy Hook from their study is false.

Screen cap of https://www.jhsph.edu/news/news-releases/2020/firearm-purchaser-licensing-laws-linked-to-fewer-fatal-mass-shootings.html on 2/19/2020.

In fact, as the article explains at greater length, it is the FBI’s Supplemental Homicide Reports that omit not just the Sandy Hook Elementary School shooting, but also the Aurora, CO movie theater shooting (2012) and the Sutherland Springs, TX church shooting (2017).

HOWEVER, the authors used other data sources to add back in a total of 33 cases omitted from the FBI SHR data.

Screen cap of https://onlinelibrary.wiley.com/doi/10.1111/1745-9133.12487
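
To make concrete what that supplementation looks like in practice, here is a rough sketch of my own (not the authors' code; the data frame names, columns, and rows are all hypothetical): the independently identified cases are appended to the SHR-derived incident list and de-duplicated on an incident identifier.

```python
# Hypothetical illustration of supplementing FBI SHR-derived incidents with
# cases the SHR omits (such as Sandy Hook). All names and rows are made up.
import pandas as pd

# Incidents recoverable from the SHR (placeholder rows).
shr_incidents = pd.DataFrame({
    "incident_id": ["CASE-001", "CASE-002"],
    "year": [2007, 2009],
    "victim_deaths": [5, 4],
})

# Cases identified from supplemental sources (news archives, other datasets)
# that are missing from the SHR.
supplemental_cases = pd.DataFrame({
    "incident_id": ["CASE-003", "CASE-004", "CASE-005"],
    "year": [2012, 2012, 2017],
    "victim_deaths": [6, 7, 5],
})

# Add the omitted cases back in, keeping one record per incident.
all_incidents = (
    pd.concat([shr_incidents, supplemental_cases], ignore_index=True)
    .drop_duplicates(subset="incident_id")
)
print(len(all_incidents), "incidents after supplementation")
```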

I have often been critical of research on guns in the past — on this blog and in my published scholarly work — but, unlike human beings, not all critiques are created equal. Some time ago I stated that I am neither pro-gun nor anti-gun but pro-truth when it comes to guns. Pro-gun advocates should not cherry-pick their data or make unfounded criticisms any more than gun control advocates do.

The research article in question here is open access and publicly viewable for anyone interested in seeing what the authors actually say. As I note below, it merits a closer look.

Had the Breitbart author looked more closely at the publication in question, he would have found plenty that justifiably raises questions. In the paragraph just before the one discussing the missing and replaced Sandy Hook data, the authors describe some definitional choices that raise legitimate concerns in my mind.

The dependent variable in this study is “fatal mass shootings.” The time frame is 1984 to 2017, and a case is included if four or more victims (not including any offenders) were killed and a firearm of any type was involved. As the authors are studying “fatal mass shootings,” this makes sense. But if the concern is large numbers of people dying in a single incident, limiting the focus to mass SHOOTINGS seems problematic. The authors want to know what REGULATIONS might lessen these deaths, but if regulations on guns result in SUBSTITUTION of other means of mass homicide, then those regulations do not have the intended outcome (or have a perverse unintended outcome).
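
For concreteness, this case definition amounts to a simple filter over an incident-level dataset. The sketch below is purely my own illustration, not the authors' code, and the column names are hypothetical:

```python
# Hypothetical sketch of the study's case definition applied as a data filter.
import pandas as pd

incidents = pd.DataFrame({
    "year": [1990, 2005, 2016, 2019],
    "victim_deaths": [4, 3, 7, 5],          # deaths excluding any offenders
    "firearm_involved": [True, True, True, False],
})

# Fatal mass shootings: 1984-2017, four or more victim deaths, any firearm.
fatal_mass_shootings = incidents[
    incidents["year"].between(1984, 2017)
    & (incidents["victim_deaths"] >= 4)
    & incidents["firearm_involved"]
]
print(fatal_mass_shootings)
```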

I am a scholar of gun culture, not gun violence, so I don’t know if there is any evidence of substitution in this area, but I do think those concerned with public health could profitably focus more broadly on mass murder rather than mass shootings (as Grant Duwe usefully does in his work).

Furthermore, the authors choose to exclude “any case that was coded as having a connection to gang or narcotic activity because one of our supplemental data sets excludes gang‐ or narcotic‐related events.” They cite other studies that have done the same to support this decision, but the implications of this exclusion could be significant. Would including those cases (which involve serious criminal actors unlikely to be affected by gun regulations) weaken the relationship between gun regulations and fatal mass shootings? It seems to me it very well might.

Finally, the authors exclude “Florida, Kansas, Kentucky, Nebraska, and Montana from our analysis because of systemic Uniform Crime Reports (UCR)–SHR reporting issues over multiple years,” though I do not know what the possible implications of this exclusion are.
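
Taken together with the gang/narcotic exclusion above, the analysis sample is whatever remains after both screens are applied. Again, this is only a hypothetical sketch of my own, not the authors' code:

```python
# Hypothetical illustration of the two exclusions: gang- or narcotic-connected
# cases and the five states with systemic UCR-SHR reporting problems.
import pandas as pd

cases = pd.DataFrame({
    "state": ["MD", "FL", "OH", "KY"],
    "gang_or_narcotic": [False, False, True, False],
})

EXCLUDED_STATES = {"FL", "KS", "KY", "NE", "MT"}

analysis_sample = cases[
    ~cases["gang_or_narcotic"]
    & ~cases["state"].isin(EXCLUDED_STATES)
]
print(analysis_sample)  # only the MD row survives both exclusions
```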

No research is perfect, and some of these exclusions highlight the need for better data for scholars to work with. The authors of this study appropriately point out the limitations of their data. They recognize that what they are trying to explain are “rare events.” They “acknowledge that our results are influenced by the definition [of mass shooting] that we have chosen.”

These limitations notwithstanding, the authors offer two unqualified policy recommendations: firearm purchaser or ownership licensing with fingerprinting and “large capacity magazine” bans.

Screen cap of https://onlinelibrary.wiley.com/doi/10.1111/1745-9133.12487

15 comments

  1. I had a hard time getting past the first paragraph in the study…

    “High‐profile public mass shootings (e.g., incidents that gain significant media attention as a result of high victim count and/or unique characteristic such as location or motive) prompt what have become predictable responses across the political spectrum. One side points to easy firearm access as the key cause of mass shootings and calls for stronger gun laws including comprehensive background checks, bans on assault weapons and large‐capacity magazines (if those were used), and more recently, Extreme Risk Protection Order (ERPO) laws to disarm persons planning violent acts. The other side sees unarmed victims being shot in mass shootings and focuses on the hypothetical question, “What if one of the victims or a bystander used a firearm to stop the attack?” The solutions to mass shootings that stem from this perspective include eliminating so‐called “gun free zones” and reducing or eliminating restrictions on civilian carrying of concealed firearms in public places.”

    … and wondered if this revised version would be accepted as equally valid:

    High‐profile public mass shootings (e.g., incidents that gain significant media attention as a result of high victim count and/or unique characteristic such as location or motive) prompt what have become predictable responses across the political spectrum. One side points to the almost universal effect of immediately halting such events with the first sign of armed resistance, be it from a police response or a legally armed citizen. The solutions to mass shootings that stem from this perspective include eliminating so‐called “gun free zones” and reducing or eliminating restrictions on civilian carrying of concealed firearms in public places. The other side sees unarmed victims being shot in mass shootings and focuses on the hypothetical question, “What if the shooter never gained access to a gun in the first place and thus couldn’t conduct the attack?” Proponents of this view claim to have identified easy firearm access as the key cause of mass shootings and call for stronger gun laws including comprehensive background checks, bans on assault weapons and large‐capacity magazines (regardless of whether those were used), and more recently, Extreme Risk Protection Order (ERPO) laws to disarm persons planning violent acts.

    As a student of these types of events (and certified Civilian Response to Active Shooter Events instructor), it’s hard to get past the (straw man?) setup right out of the blocks.

    • Well put. The assumed premise is always that the laws in question work as intended, and thus that their presence or absence is determinative. There seems to be a consistent, repeated failure by public health researchers to consider extant research from actual subject-matter experts in criminology, economics, and even sociology on the actual effectiveness of those laws. They typically cite only research that also supports the efficacy of the laws, usually research done by themselves and their peers. An entertaining exercise is to go through the bibliographies and see the same names, including those of the authors and their co-authors, given to support the new research.

  2. In my opinion, Michael Bloomberg School of Public Health is a political advocacy organization. The NRA doesn’t manufacture “studies” in order to back up their positions. There are no headlines touting a “new NRA study” because they don’t pretend to be something they’re not.

    I’ve noticed that Webster et al.’s definition of mass shootings involving LCMs includes everything from a 12-round magazine in a handgun to a 100-round drum. What if someone used a Glock pistol with an LCM but fired only 6 rounds and killed 4 people? Webster et al. would qualify this as an LCM-involved mass shooting. It doesn’t make any sense to me.

    Also, they didn’t even mention these two papers in their literature review section:

    – Mark Gius, “Effects of Permit-to-Purchase Laws on State-Level Firearm Murder Rates” (2017);
    – Gary Kleck, “Large-Capacity Magazines and the Casualty Counts in Mass Shootings” (2016);

    I’m not saying the Bloomberg School people should agree with the conclusions of these two articles, but it would be nice if they at least acknowledged that they exist in the peer-reviewed literature.

  3. The elephant in the room is, as always, the causal link: the mechanism by which a given restriction can actually cause or drive the correlated effect. Assuming arguendo that everything about the study is solid, pointing out a correlation is fine. Except the claim isn’t novel, and neither is the research. At what point do we go from repeatedly pointing out correlations to doing the follow-up and looking for, or at least positing, a causal link? Absent that, “Policy Implications,” such as the ones given for this study, are not “scientific conclusions” but purely political assertions.

    Glaring red flag: they make a claim about handgun licensing (and the flag is the same for LCMs as well), but, as usual for public health researchers, they don’t look at whether the actual shooters whose crimes are being used as data could have qualified for licensing, nor, in states with licensing, how they actually acquired the firearms used, nor do they examine the extant research on the effectiveness of such laws in reducing actual access. They take the effectiveness of the laws as a given and, presuming they work, then presume that the shooters would have been impacted by them. Such criminological research on the micro level is crucial before claiming a given policy is the cause of lower rates of anything. This is a continuing problem with public health models that treat objects as vectors: they ignore the human beings involved.

    • I definitely agree that large scale quantitative studies are good at identifying correlations and that the causal mechanisms are often a “black box” in statistical studies like this. It would be good if public health scholars would do more qualitative work to show the causal mechanisms linking independent and dependent variables.

  4. It seems to me that one of the fundamental errors of this study is the authors’ definition of “mass shooting” as involving four or more deaths, not including the shooter. The problem is that this definition encompasses at least three separate and distinct types of shootings: gang-related shootings, domestic violence shootings, and spray shootings. The authors seem to recognize at least part of the problem by eliminating gang-related shootings, but lumping domestic violence and spray shootings together seems to put both apples and oranges in the study.

    • I think you have to go one way or the other. You either are studying “mass shootings” according to the 4 or more deaths definition with no exclusions for different “types,” or you are studying “mass public shootings” and excluding both gang and domestic shootings. By splitting the difference, as you say, they are excluding some apples and keeping others to go with their oranges.

  5. I first saw the “excluded gang or narcotic” criterion in the 2013 FBI report after Sandy Hook. They created a new category of “active shooter” and relied on “open sources” to supplement their official data. It was not up to the usual FBI standard of research. But it has become foundational for people studying “active shooters,” especially in the public health vein of research. A bad beginning and a bad follow-up.

    • This is not my area of research so I will rely on you to educate me here. Seems to me that some jurisdictions report crime data to the FBI and others don’t, so the FBI “standard of research” is actually quite low and is what creates the need for people to use open sources to supplement the official data. That is, they are trying to overcome problems with the official data as best they can. What am I missing?

      • There are always issues with the official FBI data. But generally the UCR does a decent job with a crime like murder (because there’s a body that gets reported almost every time). The reports that go over their collected data tend to be well put together. This particular one was born out of a very fraught time. In 2012 the FBI released their report on mass killings, and at that point the numbers were declining. Then Sandy Hook happened. Then, instead of the normal few years before another report, the Active Shooter report went out. The quality of it is, in my opinion, lower than the average FBI report. Normally they are clear: “We used the UCR, here are its limitations, here is what we can infer.” This report’s conceptions were not well laid out, and the supplementary data was not well explicated, as I recall (I haven’t re-read the report in some time).

        Supplementing data is its own headache. Often I have found it to be code for “3 grad students and an undergrad searched Google for news stories,” and that does not inspire my confidence as a researcher. Even published authors miss important studies in the lit reviews for meta-analyses, and those are better archived than internet news stories. You have search engine biases, multiple stories from different publications, updated stories, retracted stories; it begins to approach journalism. It is not bad per se, but I have rarely been inspired by the rigor.

        My point was that the questionable exclusion started there (to my knowledge) and is becoming entrenched. I have not seen convincing arguments for this exclusion.
