Opinions, Assholes, and Believability

My next post was going to be a continuation of my introduction to complexity, and I promise that I’ll get around to that eventually, but a few days ago I was made aware of an exchange on Facebook that got me thinking, and I’d like to take a moment to lay out my thoughts on the matter.

I personally did not witness this exchange, but a friend of mine took a screenshot of the first part of the conversation (before the original commenter apparently deleted the thread).  First, some context: this occurred after a firearms industry page (Keepers Concealment, a maker of high-quality holsters) shared a video of Ernest Langdon demonstrating the “Super Test,” a training drill that requires a shooter to fire rapidly and accurately at various ranges.  Ernest Langdon is indisputably one of the best handgun shooters in the world.  That’s an objective fact, and he has the competition results and measurable skills to prove it.  He is ranked as a Grand Master in the US Practical Shooting Association, a Distinguished Master in the International Defensive Pistol Association, and has won 10 National Championship Shooting titles and 2 World Speed Shooting titles.  All of which explains why, when some nobody on Facebook (whom we shall refer to as “Mr. Blue” as per my color-coded redacting) made this comment, quite a few people who know who Ernest Langdon is raised their collective eyebrows:

[Screenshot: the Facebook exchange described above]

Mr. Blue, who as mentioned is a nobody in the shooting world with exactly zero grounds to critique Ernest Langdon, still for some reason felt the appropriate response to this video of one of the best shooters to have ever walked the face of the earth was to provide unsolicited advice on how he could improve.  Then, when incredulous individuals who actually know what they’re talking about pointed out exactly how arrogantly stupid that response to this particular video was, another person, Mr. Red, chimed in to claim that if we accept no one is above reproach, then “it’s fair for people (even those who can’t do better), to critique what they see in the video.”  To which I want to respond: no, it is not.

I agree entirely with Ray Dalio, the founder of Bridgewater Associates—the world’s largest hedge fund—when he says, “While everyone has the right to have questions and theories, only believable people have the right to have opinions. If you can’t successfully ski down a difficult slope, you shouldn’t tell others how to do it, though you can ask questions about it and even express your views about possible ways if you make clear that you are unsure.”  What that means is not that you can’t form an opinion.  It means that just because you have the right to HAVE an opinion doesn’t mean you have the right to express it and expect anyone to take it seriously.  Just because you happen to be a breathing human being doesn’t make you credible, and the opinions of those who don’t know what they’re talking about are nothing more than a waste of time that serves only to prove the speaker is an idiot.  As the old saying goes, “Better to remain silent and be thought a fool than to speak and remove all doubt.”

But Mr. Red’s comment points to an attitude that lies at the heart of stupidity: the idea that everyone’s opinion is equally valid and worth expressing, and that all have a right to be heard and taken seriously.  This certainly isn’t a new phenomenon.  Isaac Asimov wrote about a “cult of ignorance” in an article back in 1980: “The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’”  But new or not, it very much drives the willingness of ignorant nobodies to “correct” and “critique” genuine experts.  Mr. Blue has no idea of the thousands of hours of training Ernest Langdon has put into perfecting his grip and recoil management and trigger control, or the hundreds of thousands of rounds of ammunition he’s put down range to hone his technique and become one of the best in the world at what he does.  Mr. Blue has put nowhere near that amount of time and effort into his own training—I know this, because if he had, he’d also be one of the best shooters in the world, instead of some random nobody on Facebook.  But despite that vast gulf of experience and expertise, Mr. Blue still thinks he can and should provide unsolicited advice on how Ernest Langdon can be better.  And then he doesn’t understand why others are laughing at him, and another commenter rides to the rescue, offended at the very notion that people are dismissive of the critique of a nobody.

This is the same mindset that leads to people who barely graduated high school presuming to lecture the rest of us on why the experts are wrong on politics, on science, on economics, on medicine.  This is the mindset that leads to anti-vaccination movements bringing back measles outbreaks in the United States.  This is the mindset Sylvia Nasar described when she wrote, “Frustrated as he was by his lack of a university education, particularly his ignorance of the works of Adam Smith, Thomas Malthus, David Ricardo, and other British political economists, [he] was nonetheless perfectly confident that British economics was deeply flawed.  In one of the last essays he wrote before leaving England, he hastily roughed out the essential elements of a rival doctrine.  Modestly, he called this fledgling effort ‘Outlines of a Critique of Political Economy.’”  The subject she was writing about?  Friedrich Engels, friend and collaborator of Karl Marx, co-author of The Communist Manifesto, and editor of the later volumes of Das Kapital.  Is it any wonder that the system they came up with has never worked in practice?

While the conversation that inspired this line of thought was in the shooting world, I see it all the time in many, many different fields.  Novice weightlifters “critiquing” world-record holders.  Undergraduate students “critiquing” tenured professors in their area of expertise.  Fans who’ve never stepped into a cage in their lives expounding upon what a professional fighter in the UFC “did wrong” as if they have the slightest idea what it’s like to step into the Octagon and put it all on the line in a professional MMA fight.  People with zero credibility believing they have the standing to offer unsolicited advice to genuine, established experts.  This isn’t to say that experts are infallible, or that criticism is always unfounded.  But to have your opinion respected, it must be believable, and if you lack that standing you’d damn well better be absolutely certain your criticism is well-founded and supported by strong evidence, because that’s all you have to go on at that point.  Appeal to authority is a logical fallacy, but unless you’ve got the evidence to back up your argument, the benefit of the doubt is going to go to the expert who has spent a lifetime in the field, versus the nobody who chooses to provide unsolicited commentary.

When you have an opinion on a technical subject, and you find yourself moved to express it in a public forum, please, just take a second and reflect.  “Do I have any standing to express this opinion and have it be believable, or is it well-supported by documented and cited evidence in such a way that it overcomes my relative lack of expertise?  Do I have any right to expect anyone to pay attention to my thoughts on this subject?  Or am I just another ignorant asshole spewing word diarrhea for the sake of screaming into the void and pretending I matter, that I’m not a lost soul drifting my way through existential meaninglessness, that my life has purpose and I’m special?”  Don’t be that guy.

Opinions and assholes, man.  Everyone’s got ‘em, and most of them stink.

Well, Actually… (A Rebuttal to a Rebuttal)

In June, researchers from the University of Washington released a National Bureau of Economic Research working paper entitled “Minimum Wage Increases, Wages, and Low-Wage Employment: Evidence from Seattle” (Jardim et al., 2017).  It made a lot of headlines for its claim that Seattle’s increased minimum wage (up to $13 this year, and scheduled to reach $15 within the next 18 months) has cost low-wage workers money by reducing employment hours across the board.  Essentially, Jardim and her colleagues showed rather convincingly, through an in-depth econometric analysis, that while the average low-wage worker’s hourly wage increased, their hours were cut to the point that the losses exceeded the gains, leaving them with lower total income.  It’s an impressive case of what I argued in my first “Well, It’s Complicated” article playing out in reality.
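To make the arithmetic behind that finding concrete, here is a toy example in Python.  The numbers are ones I made up for illustration, not estimates from the paper; the point is simply that a raise in the hourly wage can still leave a worker with a smaller paycheck once enough hours are cut.

```python
# Illustrative numbers only; these are NOT figures from Jardim et al. (2017).
old_wage, old_hours = 11.00, 30.0   # $/hour and hours/week before the increase
new_wage, new_hours = 13.00, 24.0   # higher wage, but fewer scheduled hours

old_income = old_wage * old_hours   # $330.00 per week
new_income = new_wage * new_hours   # $312.00 per week

pct_change = (new_income - old_income) / old_income * 100
print(f"Weekly income change: {pct_change:+.1f}%")   # -5.5%: the raise is swamped by the lost hours
```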

However, not everyone is convinced.  A friend of mine alerted me to an article by Rebecca Smith, J.D., of the National Employment Law Project that argues the study MUST be bullshit, because it doesn’t square with what she sees as reality.  In the article, Ms. Smith makes six specific claims in her effort to rebut the study.  Unfortunately for her, all these claims do is demonstrate she either doesn’t know how to read an econometric paper, or she didn’t actually read it that closely, because four are easily disproven by the paper itself, and the other two are irrelevant.

Specifically, she claimed the following:

  • The paper’s findings cannot “be squared with the reality of Seattle’s economy,” because “At 2.5 percent unemployment, Seattle is very near full employment. A Seattle Times story from earlier this month reported a restaurant owner’s Facebook confession that due to the tight labor market ‘I’d give my right pinkie up for an awesome dishwasher.’ Earlier this year, Jimmy John’s advertised for delivery drivers at $20 per hour.”

  • “By the UW team’s own admission, nearly 40 percent of the city’s low-wage workforce is excluded from the data: workers at multisite employers like Nordstrom, Starbucks, or even restaurants with a few locations like Dick’s.”

  • “Even worse, any time a worker left a job with a single-site employer for one with a chain, that was treated as a ‘lost job’ that was blamed on the minimum wage — and that likely happened a lot since the minimum wage was higher for those large employers.”

  • “…Every time an employer raised its pay above $19 per hour — like Jimmy John’s did — it was counted not as a better job, but as a low-wage job lost as a result of the minimum wage.”

  • “The truth is, low-wage workers are making real gains in Seattle’s labor market. In almost all categories of traditionally low-wage work, there are more employers in the market than at any time in the city’s history. There are more coffee shops, restaurants and hotels in Seattle than ever before. The work is getting done. And the largest (and best-paid) workforce in the history of the city is doing it.”

  • “Nor can the study be reconciled with the wide body of rigorous research — including a recent study of Seattle’s restaurant industry by University of California economist Michael Reich, one of the country’s foremost minimum-wage researchers — that finds that minimum-wage increases have not led to any appreciable job losses.”

Let’s look at each of these in turn.

The Claim: This paper doesn’t match the reality of Seattle’s 2.5% unemployment rate, which, she argues, is driving up wages through high labor demand regardless of the minimum wage increases.

First, this isn’t an attack on the paper itself, just an expression of incredulity that suggests Ms. Smith doesn’t understand how statistical analysis works—there are MANY factors that go into the overall unemployment rate, and the minimum wage is just one of them.  That is precisely why the paper seeks to isolate changes in employment and hours within the low-wage sector; the citywide unemployment rate is irrelevant to that analysis.

Second, Seattle’s unemployment rate is not 2.5%, and has not been 2.5% in a long time: the Bureau of Labor Statistics listed it at 2.9% in April 2017, its lowest point in the past year, and it had trended back up to 3.2% by May.  You don’t get to just make up numbers to refute points you don’t like.

Third, just to emphasize that this unemployment rate is not caused by the minimum wage increase, let’s compare Seattle to other cities.  At 3.2% unemployment in May, Seattle was tied with five other US cities: Detroit, San Diego, Orlando, San Antonio, and Washington, D.C.  All of these cities have their own minimum wages that vary between $8.10 and $13.75—but for a proper comparison, these rates have to be adjusted for cost of living.  When so adjusted, the lowest-paid workers were those in Orlando, making the equivalent of a worker in Seattle taking home $10.94/hour.  The highest were those in San Antonio, with the equivalent of $19.39/hour at Seattle prices.  For comparison, the minimum wage actually IN Seattle was just $13/hour in May—the average for all six cities was $13.28.  With such a range, can the “high” minimum wage be driving an unemployment rate that’s identical among all of them?  These six cities all tied for 11th place in lowest unemployment rates in the nation that month.  How about the best three?  First place goes to Denver, with a minimum wage of $11.53 (adjusted for Seattle cost of living).  Second to Nashville, at $9.72.  Third to Indianapolis, at $9.93.  I’d take a step back and reconsider any claim that the $13 minimum wage in Seattle is at all relevant to the overall unemployment rate, given that when you compare apples to apples, there is no apparent correlation at all.  Instead, let’s stick to what the paper was actually about: the impact on low-wage workers’ total income, given per-hour wage increases versus changes in hours worked.
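For anyone who wants to reproduce that kind of comparison, the adjustment itself is one line of arithmetic: scale each city’s nominal wage by the ratio of Seattle’s cost-of-living index to that city’s index.  Here is a minimal sketch in Python; the index values and the non-Seattle wages below are placeholders I made up for illustration, not the actual figures used in the paragraph above.

```python
# Hypothetical cost-of-living indices (Seattle = 100) and nominal minimum wages.
# Only Seattle's $13/hour comes from the discussion above; the rest are placeholders.
col_index    = {"Seattle": 100.0, "Orlando": 75.0, "San Antonio": 70.0, "Denver": 90.0}
nominal_wage = {"Seattle": 13.00, "Orlando": 8.10, "San Antonio": 10.50, "Denver": 9.30}

def seattle_equivalent(city: str) -> float:
    """Express a city's nominal wage in Seattle purchasing-power terms."""
    return nominal_wage[city] * col_index["Seattle"] / col_index[city]

for city in nominal_wage:
    print(f"{city:12s} ${seattle_equivalent(city):5.2f}/hour at Seattle prices")
```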

The Claim: The paper excluded 40% of the city’s low-wage workforce by ignoring all multisite employers.

Quite simply, no, it did not.  The paper did NOT exclude all multisite employers.  It excluded SOME multisite employers.  And those employers don’t account for “nearly 40% of the city’s low-wage workforce,” but rather 38% of the ENTIRE workforce across the state as a whole—no mention is made of their proportion within Seattle itself.  And if Ms. Smith had read closely, she’d realize that not only does this make perfect sense, but if anything it is just as likely to have biased the results toward UNDERESTIMATING the loss in employment hours for low-wage workers.

“The data identify business entities as UI account holders. Firms with multiple locations have the option of establishing a separate account for each location, or a common account. Geographic identification in the data is at the account level. As such, we can uniquely identify business location only for single-site firms and those multi-site firms opting for separate accounts by location. We therefore exclude multi-site single-account businesses from the analysis, referring henceforth to the remaining firms as “single-site” businesses. As shown in Table 2, in Washington State as a whole, single-site businesses comprise 89% of firms and employ 62% of the entire workforce (which includes 2.7 million employees in an average quarter).

“Multi-location firms may respond differently to local minimum wage laws. On the one hand, firms with establishments inside and outside of the affected jurisdiction could more easily absorb the added labor costs from their affected locations, and thus would have less incentive to respond by changing their labor demand. On the other hand, such firms would have an easier time relocating work to their existing sites outside of the affected jurisdiction, and thus might reduce labor demand more than single-location businesses. Survey evidence collected in Seattle at the time of the first minimum wage increase, and again one year later, suggests that multi-location firms were in fact more likely to plan and implement staff reductions. Our employment results may therefore be biased towards zero.”  (Jardim et al., pp. 14-15).

Essentially, the nature of the data required they eliminate 11% of firms in Washington State before beginning their analysis, because there was literally no way to tell which of their sites (and therefore which of their reported employees) were located within the city of Seattle.  Multi-site firms that reported employment hours by individual site were absolutely included, just not those that aggregate their employment hours across all locations.  But that’s okay, because on the one hand, such firms can potentially absorb increased labor costs at their Seattle sites, while on the other they can more easily shift work to sites outside the affected area and thus reduce labor demand within Seattle in response to increased wage bills.  And surveys suggest that such firms are more likely to lay off workers in Seattle than other firms—hence, excluding them from the data is at least as likely to make the estimated loss in employment hours an UNDERESTIMATE as it is to inflate it.  Ms. Smith’s objection on this point only serves to prove she went looking for things to object to, rather than reading in depth before jumping to conclusions.
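For a sense of what that exclusion looks like mechanically, here is a rough sketch.  The column names are hypothetical (the actual data are Washington State unemployment-insurance account records), but the logic is the one the paper describes: keep firms whose location can be pinned down, drop multi-site firms that file a single aggregated account.

```python
import pandas as pd

# Hypothetical firm records standing in for Washington State UI accounts.
firms = pd.DataFrame({
    "firm_id":           [1, 2, 3, 4],
    "n_locations":       [1, 5, 3, 2],
    "separate_accounts": [True, False, True, False],  # one UI account per location?
})

# Location (and thus Seattle status) is identifiable for single-site firms and
# for multi-site firms that report each location under its own account.
identifiable = firms[(firms["n_locations"] == 1) | firms["separate_accounts"]]

# Firms 2 and 4 aggregate all locations into one account, so there is no way to
# tell which of their hours were worked inside Seattle; they must be dropped.
print(identifiable["firm_id"].tolist())   # [1, 3]
```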

The Claim: Workers leaving included firms for excluded firms was treated as job loss.

Literally no, it was not.  The analysis was based on total reported employment hours and not on total worker employment.  When employers lose workers to other firms, they don’t change their labor demand.  Either other workers get more hours or someone new is hired to cover the lost worker’s hours.  If hours DO decrease when a worker leaves, that means the employer has reduced its labor demand and sees no need to replace those hours.  In which case, it IS “job loss” in the sense of reduced total employment hours.
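Here is a toy example (hypothetical numbers, not the paper’s data) of why turnover between firms doesn’t register as job loss in an hours-based measure: the outcome is total hours paid, aggregated by quarter, not a count of which individuals hold the jobs.

```python
import pandas as pd

# Hypothetical worker-quarter records for a single employer.
jobs = pd.DataFrame({
    "quarter": ["2016Q1"] * 3 + ["2016Q2"] * 3,
    "worker":  ["Ann", "Bob", "Cat", "Ann", "Dee", "Eve"],  # Bob and Cat left for other firms
    "hours":   [400, 300, 300, 400, 300, 300],
})

# The outcome measure: total hours paid, per quarter.
print(jobs.groupby("quarter")["hours"].sum())
# 2016Q1    1000
# 2016Q2    1000   <- labor demand unchanged, so no "job loss" is recorded,
#                     even though two individual workers moved elsewhere
```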

The Claim: When employers raised wages above $19/hour, it was treated as job loss.

Again, literally no, it was not.  Not only does the paper have an extensive three-page section addressing why and how they chose the primary analysis threshold of $19/hour, they also discuss in their results section how they checked their results against other thresholds up to $25/hour.  In short, a lot of previous research has shown that increasing minimum wages has a cascading effect up the wage chain: not only are minimum-wage workers directly affected, but so are workers who make above minimum wage—and the effect diminishes the further the wage level gets from the minimum.  Jardim et al. did a lot of in-depth analysis to determine the most appropriate level at which to cut off their workforce sector of interest, and determined the cascading effects became negligible at around $18/hour—and they chose $19/hour to be conservative in case their estimates were incorrect.  And they STILL compared their results across thresholds ranging from $11/hour to $25/hour and showed the effects of the $13 minimum wage were statistically significant regardless of the chosen threshold.
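Conceptually, the robustness check works something like the sketch below (hypothetical data, and grossly simplified; the paper’s actual estimators are far more involved): re-draw the “low-wage” boundary at a range of cutoffs and confirm that the story doesn’t hinge on the $19 choice.

```python
import pandas as pd

# Hypothetical worker records: hourly wage and quarterly hours.
df = pd.DataFrame({
    "hourly_wage": [9.50, 12.00, 14.00, 18.50, 21.00, 24.00],
    "hours":       [450,  400,   380,   350,   300,   280],
})

# Re-define the low-wage sector at each cutoff and recompute its total hours;
# the real analysis re-estimates the minimum wage effect at each threshold.
for threshold in range(11, 26, 2):   # $11/hour through $25/hour
    low_wage_hours = df.loc[df["hourly_wage"] < threshold, "hours"].sum()
    print(f"cutoff ${threshold:>2}/hour -> {low_wage_hours:>4} low-wage hours in sample")
```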

The Claim: Low-wage workers are making gains, because in almost all categories of traditionally low-wage work, there are more employers in the market than at any time in the city’s history.

Simply irrelevant.  The number of employers has zero bearing on the number of hours each worker gets.  Again, the analysis was based on total labor demand for low-wage workers as expressed in total employment hours across all sectors.  More firms in the market tells you nothing about whether total low-wage hours (and thus low-wage workers’ total income) went up or down.

The Claim: This study cannot be reconciled with the body of previous research, including Reich’s recent study of restaurant labor in Seattle, that indicates minimum wage increases don’t lead to job losses.

There are two parts to my response to this.  First, that body of previous research is MUCH more divided than Ms. Smith seems to believe, but that’s to be expected from someone who so demonstrably cherry-picks statements to support her point.  While one school of thought, led by researchers like Card and Krueger (the so-called New Minimum Wage Theorists), believes their research supports Ms. Smith’s argument, their claims have consistently been rebutted on methodological grounds by other researchers like Wascher and Neumark.  As cited by Mankiw in Principles of Economics, over 70% of economists looking at the conflicting evidence have come down in support of the hypothesis that minimum wage increases lead to job losses among minimum-wage workers.  I discuss both points of view more extensively in “Well, It’s Complicated #1.”

Second, the paper has a two-and-a-half-page section entitled “Reconciling these estimates with prior work,” where the authors discuss this issue in considerable depth, including pointing out that when they limit their analysis to the methods used by previous researchers, their results are consistent with those researchers’ results, and they, too, support Reich’s conclusions regarding the restaurant industry specifically.  In short, yes, this study ABSOLUTELY can be reconciled with the body of previous research.  That body just doesn’t say what Ms. Smith apparently believes it does.

So where does that leave us?  Quite simply, Ms. Smith is wrong.  Absolutely none of her criticisms of the paper hold water.  In fact, this is one of the most impressive econometric studies I’ve ever read—it even uses the synthetic controls methodology that I’ve previously criticized (see my article, “Lies, Damn Lies, and Statistics”), but in the limited, narrowly focused manner in which it actually provides useful results.  And it does an excellent job of demonstrating that despite the booming Seattle economy, the rapid increase in the city’s minimum wage has hurt the very employees it was intended to help, reducing their total monthly income by an average of 6.6%.

The original paper can be found here: http://www.nber.org/papers/w23532

Lies, Damn Lies, and Statistics: A Methodological Assessment

Last month, a National Bureau of Economic Research working paper made headlines across the internet when it claimed to demonstrate that so-called “Right to Carry” (RTC) laws increased violent and property crime rates above where they would have been without the passage of such laws.  Now, most science reporting is done by people with zero technical background in the advanced statistical techniques used by the paper’s authors, so I was a bit skeptical it actually said what they were claiming it said.  Fortunately, I DO have such a technical background, and for several years now I’ve been following with great interest the academic arguments about the effects of legal guns on crime rates.  And after having read the paper in question (Right-to-Carry Laws and Violent Crime: A Comprehensive Assessment Using Panel Data and a State-Level Synthetic Controls Analysis. Donohue, Aneja, and Weber. 2017), I’ve come to the conclusion that I was both right and wrong.  Wrong in that the paper’s authors drew the conclusion stated by the journalists—they do, in fact, claim their data shows RTC laws increase crime.  But right in that the data doesn’t actually show that when you read it with a more critical eye.  Therefore, I’m going to take this opportunity to teach a lesson in why you shouldn’t trust paper abstracts or jump to the “conclusions” section, but should instead examine the data and analysis yourself.

Disclaimer: I am a firearms enthusiast and active in the firearms community at large.  However, I am also a scientist, and absolutely made my very best efforts to set that bias aside in reading this paper, and give it the benefit of the doubt.  Whether I succeeded or not is up to you to decide, but I believe my objections to the authors’ conclusions are based solely on methodological grounds and will stand up to the scrutiny of any objective observer.  Unfortunately, I cannot say the same about Professor Donohue and his co-authors, as their own personal bias against guns is quite evident from their concluding paragraphs.  Because of that bias, I firmly believe this paper is a perfect example of “Lies, Damn Lies, and Statistics.”

The paper itself is really divided into two sections: a standard multiple regression analysis and then a newer counterfactual method called “synthetic control analysis.”  The authors claim both analyses show that RTC laws increase crime.  I disagree, at least with the extent they believe this to be true.  Let’s look at each in turn.

First, the regression analysis.  The meat of this analysis is a comparison of four different models (and three variations of those models) for a total of seven specifications.  Multiple regression analysis is a powerful tool for analyzing observational data and attempting to control for several variables to see what impact each had on the target dependent variable.  In this paper, Donohue et al. build their own model specification (DAW) and compare it to three pre-existing models from other researchers (BC, LM, MM).  They looked at the effects of states’ passage of RTC laws on three dependent variables: murder rates, violent crime rates, and property crime rates.  The key point of their research is that it goes beyond previous papers in its data set: where previous research stopped at the year 2000, this paper looks at how the results change when the models are fed an additional 14 years of data, covering 1977-2014.

The problem here is that the authors claim their panel data analysis consistently shows a statistically significant increase in violent crime when using the longer time horizon ending in 2014.  This is a problem because, quite bluntly, no, it does not.  The DAW variable specification (their new, original model built for this analysis) DOES find an increase in violent crime and property crime rates (though not murder, which they acknowledge).  But the spline model of the same variables finds no statistically significant correlation whatsoever.  They even acknowledge this in their paper: “RTC laws on average increased violent crime by 9.5 percent and property crime by 6.8 percent in the years following adoption according to the dummy model, but again showed no statistically significant effect in the spline model.” (DAW 8).  But then they never mention it again or seek to address why the spline model—an alternative method that’s often preferred over polynomial interpolation for technical reasons—achieves such different results.  This spline model was built from the National Research Council’s 2004 report, and they used it earlier (sans other regressors) to show that the NRC’s tentative finding of a decrease in crime rates associated with RTC laws disappears when the data set is extended to 2014.  But when they re-run it with their own variables, the lack of statistical significance is mentioned in a single line and then never brought up again.
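For readers who want to see what the “dummy” versus “spline” distinction actually looks like, here is a minimal two-way fixed effects sketch on a fake panel.  This is nothing like the authors’ full specification (no controls, simulated data, invented variable names), but it shows the mechanical difference: the dummy model estimates a one-time shift in the level of crime after RTC adoption, while the spline model estimates a change in its trend after adoption.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fake state-year panel standing in for the 1977-2014 crime data (no controls).
rng = np.random.default_rng(0)
states = [f"S{i:02d}" for i in range(20)]
years = list(range(1977, 2015))
df = pd.DataFrame([(s, y) for s in states for y in years], columns=["state", "year"])
adopt = {s: int(rng.choice([1985, 1995, 2005, 9999])) for s in states}   # 9999 = never adopts
df["adopt_year"] = df["state"].map(adopt)
df["rtc_dummy"] = (df["year"] >= df["adopt_year"]).astype(int)           # post-adoption level shift
df["rtc_trend"] = np.maximum(df["year"] - df["adopt_year"], 0)           # post-adoption trend ("spline")
df["log_violent"] = 6.0 + 0.05 * df["rtc_dummy"] + rng.normal(0, 0.3, len(df))  # simulated outcome

groups = df["state"].astype("category").cat.codes   # integer codes for cluster-robust errors

# Dummy variable model: did crime jump to a new level after adoption?
dummy = smf.ols("log_violent ~ rtc_dummy + C(state) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": groups})

# Spline model: did the crime trend bend after adoption?
spline = smf.ols("log_violent ~ rtc_trend + C(state) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": groups})

print(dummy.params["rtc_dummy"], dummy.pvalues["rtc_dummy"])
print(spline.params["rtc_trend"], spline.pvalues["rtc_trend"])
```

On real data, the interesting question is exactly the one the paper skips: why do these two ways of describing the same adoption event disagree so sharply, and which one better matches how RTC laws would plausibly affect crime?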

In fact, the spline model is used comparatively for all four regression specifications, and the only cases in which it finds ANY statistical significance are the two the authors themselves discredit as methodologically unsound (LM and MM in their original versions).  But this point is never addressed—the “dummy variable” specifications and the spline models dramatically disagree, no matter WHAT set of variables they choose.  This, to me, strongly suggests that any conclusions drawn from the panel data regression analysis are highly suspect, and that the choice of specification deserves further review before those conclusions can be believed one way or the other.  Regression analysis is always extremely sensitive to specification, and results can shift dramatically based on what variables are included, what are omitted, and how they’re specified.  Unfortunately, the paper does not seem to discuss any testing for functional form misspecification (such as a Ramsey RESET test), so it is unclear if the authors compared their chosen model specification to other potential functional forms.  There’s no discussion, for example, of whether the dummy variable or spline models are better and why.  This is a huge gap in the analysis that I would like to see addressed before I’m willing to accept any conclusions therefrom.*
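As an aside, the kind of functional-form check I have in mind is cheap to run.  Here is a self-contained sketch of a manual Ramsey RESET test on fake data (not the paper’s): fit the candidate specification, add powers of its fitted values as extra regressors, and F-test whether they add explanatory power.  A significant result says the chosen functional form is missing something.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fake data where the true relationship is nonlinear but we fit a linear form.
rng = np.random.default_rng(1)
df = pd.DataFrame({"x": rng.uniform(0, 10, 500)})
df["y"] = 2.0 + 0.5 * df["x"] ** 2 + rng.normal(0, 1, 500)

restricted = smf.ols("y ~ x", data=df).fit()        # the candidate (misspecified) form

# Manual RESET: augment with powers of the fitted values, then F-test them jointly.
df["fit2"] = restricted.fittedvalues ** 2
df["fit3"] = restricted.fittedvalues ** 3
augmented = smf.ols("y ~ x + fit2 + fit3", data=df).fit()

f_stat, p_value, df_diff = augmented.compare_f_test(restricted)
print(f"RESET F = {f_stat:.1f}, p = {p_value:.2g}")  # tiny p-value -> functional form is suspect
```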

Additionally, panel data suffers from some of the same limitations as cross-sectional data, including a need for large data sets to be credible.  In this case, the analysis only looked at 33 states (those that passed RTC laws between 1977 and 2004), making any conclusions drawn from the limited N=33 data set tentative at best.  This is not necessarily the authors’ fault—much data is only available at the state level, so it’s much harder to do a broader assessment with more data points (e.g., by county).  But it certainly does increase the grain of salt with which the analysis should be taken.  Despite that, the authors seem quite willing to draw sweeping conclusions when they should, by rights, be a lot more cautious about conclusive claims.**

The second part of the paper is even more problematic.  In short, they build a counterfactual model of each state that passed an RTC law in the specified time period, and then compare the predicted crime rates in those simulated states versus the observed crime rates in their real world counterparts.  This is certainly an interesting statistical technique, and is mathematically ingenious.  It might even be a useful tool for certain applications.  Unfortunately, counterfactual analysis, no matter how refined, suffers a fundamental flaw: by its very nature, it assumes the effects of a single event can be assessed in isolation.  In reality, as I’ve discussed before, human social systems are complex systems.  One major legal change will have dramatic effects across the board—that policy in turn drives many decisions down the line, so plucking out the one policy of interest and assuming all post-counterfactual decisions will remain the same is blatantly ridiculous.  It’s the statistical equivalent of saying “If only Pickett’s Charge had succeeded, the South would have won the Civil War.”  Well, no, because everything that happened AFTER Pickett’s Charge would have been completely different, so we can only make the vaguest guesses about what MAY have happened.

But that’s precisely what the authors are attempting to do here, and to put the stamp of mathematical certainty on it to boot.  They built models of each RTC state in the target period by comparing several key crime-rate-related variables to control states without RTC laws, and then assessed each model’s predicted crime rates against the crime rates actually reported in reality to make a causal claim about the RTC laws’ effects.  They decided their models were good fits by checking how well they tracked the fluctuations in crime rates in the years prior to the RTC law (the counterfactual point), and if the two tracked closely enough, they deemed the model a good predictor.  But that fails to account for the cascading changes that, by the nature of a complex system, would have occurred AFTER the counterfactual point.  The entire analysis rests on an incredibly flawed assumption, and thus NO conclusive answers can be derived from it.  At best, it raises an interesting question.
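To give a sense of what is under the hood, the core of the synthetic control construction is just a constrained least-squares fit: choose non-negative weights that sum to one over the donor (non-RTC) states so that the weighted combination matches the treated state’s pre-treatment characteristics.  Here is a bare-bones sketch on made-up numbers; the paper’s actual implementation uses real predictors and a more elaborate optimization, so treat this purely as an illustration of the idea.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up pre-treatment predictors: rows are predictors, columns are donor states.
rng = np.random.default_rng(2)
donor_X = rng.normal(size=(6, 10))                   # 10 donor (non-RTC) states
true_w = np.array([0.5, 0.3, 0.2] + [0.0] * 7)       # the "answer" baked into the fake data
treated_X = donor_X @ true_w + rng.normal(0, 0.01, 6)

def loss(w):
    """Squared distance between the treated state and the weighted donor pool."""
    return float(np.sum((treated_X - donor_X @ w) ** 2))

n = donor_X.shape[1]
result = minimize(
    loss,
    x0=np.full(n, 1.0 / n),                                        # start from equal weights
    bounds=[(0.0, 1.0)] * n,                                       # weights must be non-negative
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # and sum to one
    method="SLSQP",
)
print(np.round(result.x, 2))   # weights defining the "synthetic" version of the treated state
```

The counterfactual crime series is then just these same weights applied to the donor states’ post-treatment outcomes, which is exactly where my objection bites: the weights are frozen at the pre-treatment fit, so everything that happens after the counterfactual point is assumed to carry over from the donors unchanged.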

The paper isn’t worthless, by any means.  The panel data analysis does a good job showing that NO specification, including John Lott’s original model from which he built his flawed “More Guns, Less Crime” thesis, supports a claim that RTC laws decrease crime rates.  But that’s about all it does.  It hints at the possibility that RTC laws may increase violent and property crime rates (though not murder).  It certainly doesn’t conclusively demonstrate that claim, but it raises enough doubt that other researchers should tackle it in much more depth.  Similarly, the counterfactual “synthetic controls” analysis by no means proves a causal relationship between RTC laws and crime rates, for the reasons explained above, but it raises an interesting question that should be examined further.

No, the problem is that the authors pay only lip service to the limitations of their analysis and instead make sweeping claims their data does not necessarily support: “The fact that two different types of statistical data—panel data regression and synthetic controls—with varying strengths and shortcomings and with different model specifications both yield consistent and strongly statistically significant evidence that RTC laws increase violent crime constitutes persuasive evidence that any beneficial effects from gun carrying are likely substantially outweighed by the increases in violent crime that these laws stimulate.”  (DAW, 39).  But as shown above, the panel data regression is ambiguous given the discrepancies between the dummy variable and spline models, and less than solid given the low N value for cross-sectional comparisons; and the synthetic controls analysis rests on a flawed assumption about the nature of the social systems being modeled.

These limitations, combined with the many other papers looking at other types of regressions (such as the impacts of gun ownership in general on violent crime rates) that have been unable to find statistically significant correlations between legal gun prevalence and violent crime rates, make me extremely skeptical of this paper.  To be fair, it has yet to undergo peer review (it’s a working paper, after all), and it’s certainly possible many of my objections will be rectified in the final published version.  But right now, the best I can say for the data is that it raises some questions worth answering.  And it certainly doesn’t support the authors’ claim that their analysis is persuasive evidence of anything.  At least, not nearly as persuasive as they’d have you believe.

That’s why I said, at the beginning, never trust an abstract or a conclusion section: read the analysis for yourself, and only then see what the authors have to say about it.  Because there’s a great deal of truth to the old saying, “There are three kinds of lies: lies, damned lies, and statistics.”  Statistics are a powerful tool.  But even with the best intentions they’re easily manipulated, and even more easily misunderstood.

*For those of you who don’t speak “stats geek,” what this paragraph means is that the authors compared two different types of models, which reached dramatically different conclusions, and they kinda ignored that fact entirely and moved past it.  They then didn’t discuss anywhere in the paper itself, or in any of the appendices, why they chose one over the other, or why they specified any of their models the way they did versus other options.  It isn’t damning, but it looks suspiciously like a Jedi handwave: “This IS what our data says, trust us.”

**Again, for the non-statisticians, larger data sets tend to produce more reliable estimates: the larger your data set, the more likely it is that your model’s estimates approach reality.  Small data sets are inherently less reliable, and 33 observations per year in the panel data is a tiny data set.

The original paper is available here for anyone who cares to examine it for themselves: http://www.nber.org/papers/w23510