How to Test Gun Control Effects


Chapter Two – How to Test the Effects of Gun Control

THE EXISTING LITERATURE (pages 22-27)

Dr. Lott assumes that people on both sides of the gun control debate are (1) motivated to reduce the number of lives lost to firearms, and (2) concerned about how gun control laws will affect violent crime. There are extensive philosophies on both sides. There are also detailed personal stories from each side, with accounts of lives lost or lives spared because of guns. But while these stories may provide motivation, they are not well suited for evaluating current or proposed laws. Likewise, personal surveys have limited use. They are generally biased by the beliefs of the people who respond, and they can only report what did or did not happen, not what alternative actions a criminal might have taken under different conditions.

Each of these sources is highly biased and too constrained for testing the effects of gun controls, so a different kind of research is needed. As Dr. Lott explains:

To study these issues more effectively, academics have turned to statistics on crime. Depending on what one counts as academic research, there are at least two hundred studies on gun control. The existing work falls into two categories, using either “time-series” or “cross-sectional” data. Time-series data deal with one particular area (a city, county, or state) over many years; cross-sectional data look across many different geographic areas within the same year. The vast majority of gun-control studies that examine time-series data present a comparison of the average murder rates before and after the change in laws; those that examine cross-sectional data compare murder rates across places with and without certain laws. Unfortunately, these studies make no attempt to relate fluctuations in crime rates to changing law-enforcement factors like arrest or conviction rates, prison-sentence lengths, or other obvious variables. (Page 23, emphasis added.)

Both categories of research have limits when it comes to identifying whether the factor someone believes is driving a change in crime is actually the cause. Dr. Lott suggests that “The solution to these problems is to combine both time-series and cross-sectional evidence and then allow separate variables, so that each year the national or regional changes in crime rates can be separated out and distinguished from any local deviations.” (Page 24, emphasis added.)
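Combining the two kinds of data in this way is what statisticians often call a panel-data or fixed-effects approach. As a rough illustration only, and not Dr. Lott's actual model or code, a minimal sketch in Python might look like the following, assuming a hypothetical file county_panel.csv with one row per county per year and columns county, year, crime_rate, arrest_rate, and a shall_issue law indicator:

```python
# A minimal panel-data sketch (illustrative only, not Dr. Lott's actual code).
# Assumes a hypothetical "county_panel.csv" with one row per county per year.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("county_panel.csv")  # columns: county, year, crime_rate, arrest_rate, shall_issue

# County dummies absorb fixed differences between places (the cross-sectional variation);
# year dummies absorb nationwide swings in crime (the time-series variation).
# What remains helps identify how crime moved when a county's own law changed.
model = smf.ols(
    "crime_rate ~ shall_issue + arrest_rate + C(county) + C(year)",
    data=df,
).fit()

print(model.params["shall_issue"])  # estimated change in crime rate associated with the law
```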

No one else had used this combined approach for gun-control research before the first edition of More Guns, Less Crime was published. From that methodological base, Dr. Lott critiques other kinds of research studies that supposedly test the effects of gun controls. Results from such studies were being quoted by politicians, media, and researchers at the time of his first edition (1998), so they were important for him to address. His critiques help show why research methods can raise as many questions as they answer. For example:

  • The largest cross-sectional study, done in 1980, failed to use variables related to crime deterrence (e.g., prison sentence lengths, arrest rates, conviction rates).
  • A much-reported time-series study examined five counties in three states from 1973-1992, but it focused on urban areas and did not explain why the researchers chose the specific places they studied. It also did not take into account other possible variables that could explain crime rates.
  • One cross-sectional study used a “case study” approach, comparing a sample group of households where a homicide occurred with a “control group” of people who lived near the victim and were of comparable gender, race, and age. The researchers “attempted to see if the probability of a homicide was correlated with the ownership of a gun” that was kept in the home (Page 25). Dr. Lott critiques this study for leaving the flawed impression that the homicides were committed with the gun kept in the home, when that was true in only 8 of the 444 homicide cases.

When you boil down the objections to how these other studies were conducted, it looks like this:

  • Researchers didn’t lay out all their key assumptions or explain how/why they did what they did.
  • They didn’t consider a range of factors/variables beyond gun controls that could be causing the changes they measured.
  • They didn’t “control” for key personal demographic variables such as gender, race, or age – or social demographics such as urban/rural, population size, etc.
  • They used a methodology not designed for this kind of statistical research, such as a case-study method with a sample group and a control group. That means they could not identify the specific factor that caused a change. (Just because two phenomena exist side by side does not mean one caused the other: both might be caused by something else, or be totally independent of one another. The sketch after this list shows how a hidden third factor can make two unrelated variables appear correlated.)
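To make that last point concrete, here is a small simulation of my own (not from Dr. Lott's book, and with made-up variable names) in which a hidden third factor drives two variables that have no direct effect on each other, yet the two still appear correlated:

```python
# Illustrative simulation: correlation without causation via a hidden confounder.
import numpy as np

rng = np.random.default_rng(0)
urban_density = rng.normal(size=10_000)                            # the hidden common cause
gun_ownership = -0.7 * urban_density + rng.normal(size=10_000)     # driven only by density + noise
crime_rate    =  0.8 * urban_density + rng.normal(size=10_000)     # driven only by density + noise

# Neither variable affects the other, yet they look clearly (negatively) related.
print(np.corrcoef(gun_ownership, crime_rate)[0, 1])  # roughly -0.36 despite no causal link
```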

THE IMPACT OF CONCEALED HANDGUNS ON CRIME (pages 27-36)

All of this leads to Dr. Lott’s extended description in Chapter Three of how he set up his research database to answer questions about whether carrying concealed firearms deters crime. But first, he lays out more background so that his research design makes sense.

Factors that are assumed relevant to research into crime deterrence include:

  • Punishment/penalties (prison, loss of licenses, loss of voting rights, reduced earnings).
  • Arrest rates.
  • Crime conviction rates.
  • Handgun-control laws.

Given that list of potential deterrence factors, part of what made the research difficult is that the data sets in the FBI Uniform Crime Reports for 1977-1992 included arrest rates but not conviction rates or prison-sentence data for U.S. counties. So Dr. Lott included a separate variable to account for the average crime rate in each county. He also found other variables to use in explaining crime rates, and he gathered county-level conviction rates and average prison-sentence lengths for three sample states.
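As a rough sketch of what assembling such a data set could involve, using hypothetical file names and columns rather than Dr. Lott's actual sources or pipeline, the county-level UCR figures would be merged with the court data collected for the three sample states, and each county's average crime rate added as its own column:

```python
# Hypothetical data-assembly sketch: merge county UCR data with sample-state court data.
import pandas as pd

ucr = pd.read_csv("ucr_county_1977_1992.csv")    # county, state, year, crime_rate, arrest_rate
courts = pd.read_csv("sample_state_courts.csv")  # county, year, conviction_rate, avg_sentence
                                                 # (only the three sample states appear here)

# Left join keeps every county; court columns stay missing outside the sample states.
panel = ucr.merge(courts, on=["county", "year"], how="left")

# One variable per county capturing its average crime rate over the whole period.
panel["county_avg_crime"] = panel.groupby("county")["crime_rate"].transform("mean")
```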

The FBI reports use seven categories of crime, grouped into two broader groups (shown below). Problems can arise when using “the average county” for a specific category of crime. For instance, in 1992 the “average” U.S. county had eight murders, but that isn’t a particularly meaningful statistic when 46% of all counties had NO murders that year, 41 counties had more than a hundred murders, and two counties had more than a thousand. (The sketch after the list shows why an average is so misleading for a distribution this skewed.)

  • Violent Crimes:
    • Murder and non-negligent manslaughter
    • Rape
    • Aggravated assault
    • Robbery
  • Property Crimes:
    • Auto theft
    • Burglary
    • Larceny

After explaining how the FBI configures its crime statistics, Dr. Lott suggests that the categories require thinking through which crimes would more likely be reduced by more guns in the hands of law-abiding citizens, and which would not. (This is important for developing the specific questions to be researched and how to go about answering them.) He concludes that violent crimes are more likely to be deterred by more citizens carrying guns, because these crimes involve direct contact between criminals and their victims. Property crimes, by contrast, are less likely to be deterred by gun ownership, because they rely on stealth and offer a lower chance of criminal/victim contact. So if the “risk”/“cost” to criminals of being confronted, wounded, or worse by an armed citizen rises within the violent-crime category, they may substitute a lower-risk act from the property-crime category.

Another problem arises when studies on concealed handguns use state-level data that ignore diversity within a state, such as population demographics and widely different crime and arrest rates among its counties. This is not corrected by using data for cities. Some cross-sectional city data may be available, but you cannot tell whether a particular law had an impact on crime in a specific city unless you also have time-series data for it. Dr. Lott suggests two ways of dealing with this problem. First, use national data and see how high-population counties differ from low-population counties when nondiscretionary right-to-carry laws are passed. Second, obtain time-series data on right-to-carry permits for every county in a sample of states. (This is what he did, using Arizona, Oregon, and Pennsylvania as his sample states.)
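The first of those two approaches amounts to asking whether the law's estimated effect differs by county size. In regression terms, that is commonly done with an interaction term. A minimal, hypothetical sketch, reusing the made-up county_panel.csv from the earlier example and assuming it also has a population column, might look like this:

```python
# Hypothetical sketch: does the law's estimated effect differ with county population?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("county_panel.csv")
df["high_pop"] = (df["population"] > df["population"].median()).astype(int)

# The interaction term lets high- and low-population counties respond to the law differently.
model = smf.ols(
    "crime_rate ~ shall_issue + shall_issue:high_pop + arrest_rate + C(county) + C(year)",
    data=df,
).fit()

print(model.params[["shall_issue", "shall_issue:high_pop"]])
```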

Then he takes a closer look at why the choice of data sets matters for the research design and for producing statistics that actually explain something. Using undifferentiated state-level data requires interpreting results as if all the counties in a state are exactly the same. That is the same kind of mistake as acting as if every state in the United States is exactly the same; it doesn’t make sense. What if one county in a state has a sharp increase in crime arrests that raises the average arrest rate for the whole state? An elevated average does not mean the whole state has become more crime-ridden. (A small worked example follows the list below.) Similarly, it is harder to examine any specific relationship between deterrence and crime when all counties in a state are lumped together as if they were the same. Dr. Lott analyzes murder and rape rates and concludes:

  1. States with the highest rates still had counties with no murders or rapes.
  2. Counties with the highest murder rates tend to be well-known places (e.g., New Orleans, Brooklyn, Los Angeles, Baltimore), but small rural counties can occasionally top the rankings for short periods of time.
  3. Counties with the lowest murder rates are always small, rural ones.
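To put numbers on the arrest-rate example above (made-up figures for four hypothetical counties): a spike in a single county lifts the state average even though the other counties are unchanged.

```python
# Made-up numbers: a spike in one county raises the state-wide average arrest rate.
import numpy as np

before = np.array([0.30, 0.32, 0.28, 0.31])  # arrest rates in four counties
after  = np.array([0.30, 0.32, 0.28, 0.55])  # only the last county changed

print("state average before:", before.mean())  # 0.3025
print("state average after: ", after.mean())   # 0.3625, yet three counties are unchanged
```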

Which all leads to this very important conclusion: “Unfortunately, this emphasis on state-level data pervades the entire crime literature, which focuses on state- or city-level data and fails to recognize the differences between rural and urban counties.” (Page 34.) This is why Dr. Lott’s research in the mid-1990s was ground-breaking in its use of county-level data. He is fair in noting that his approach does have drawbacks (for example, low-population counties commonly show wide year-to-year swings in arrest and conviction rates).

But his carefully crafted research design and data-set configuration may be why his research has withstood many kinds of attacks from academic, media, and political sources for over 15 years. It is also important that he details the design so that the conclusions he eventually draws will make sense. He tells us that he will be looking at research questions about causation, such as:

  • How do changes in gun laws affect crime rates, and vice versa?
  • How do changes in crime rates affect arrest rates?
  • What factors drive such changes?
  • Have we made mistakes in how we measure them, and if so, how do we correct them?

These are the kinds of questions that must be asked and answered to accurately test the effects of gun controls.

In closing out Chapter Two, Dr. Lott looks at other important research-design problems. Here are the additional concerns, questions, and possible solutions, each with its own technical limitations and problems to resolve:

  • Research studies that limit samples to counties with large populations.
  • Research studies that use a “moving average” of arrest or conviction rates over a period of several years (see the sketch after this list).
  • What if otherwise law-abiding citizens carried concealed handguns before it was legal? How does that affect statistics on total number of concealed guns, or possible impact on crime after they are legalized?
  • Could concealed-gun laws simultaneously make individuals safer, yet also increase crime rates? Will people with firearms take more risks in where they go or what time they travel?
  • Why do certain states adopt concealed-handgun laws? Why could higher offense rates result in lower arrest rates? Why might crime rates rise even though concealed-handgun laws are passed?

Next is Chapter Three: Gun Ownership, Gun Laws, and the Data on Crime.
