Blog

Gangsters and Guns: The Root Causes of Violence in Drug Trafficking (Well, It’s Complicated #5)

A common argument, especially among the more libertarian-minded, is that because a lot of gun violence in America comes from drug dealers, ending the “War on Drugs” would immediately lead to plunging gun violence, especially in poor inner-city neighborhoods where the drug trade is rampant and controlled by violent gangs.  But the truth is that’s too simple.

While I do tend to agree that a significant percentage of gun violence is directly tied to the economics of the illegal drug supply chain, the problem is that “calling off the drug war” does not in itself mean “take steps to make it an aboveground legal market.” To see what I mean, let’s look at what motivates violence in the drug market.

Gangsters, by and large, don’t go around shooting each other for no reason (though sometimes they do–the profession attracts a lot of violent sociopaths). Generally, violence is used for two main purposes in black markets (of any kind, from drugs to human trafficking to gun running to illegal food sales in Venezuela): to maintain property rights (e.g., protect one’s stash, one’s territory, one’s supply chain, etc., against competition), and to enforce contract rights (e.g., to ensure people play by the rules and don’t try to rip you off, by ensuring they fear your reprisal).  Both can take the form of what is sometimes called “instrumental violence,” that is, violence directly related to economic activities.  But the latter–enforcing contract rights–can also account for a significant amount of what is called “expressive violence,” in that such violence serves to reinforce the gang’s power, control, and reputation in its territory, thus decreasing the likelihood competitors and customers will attempt to cross them.

To remove both of these sources of violence requires giving drug dealers alternate means to maintain their property rights and to enforce their contracts. This requires not merely ending prosecution and incarceration for nonviolent drug offenses, but actually giving the drug market access to legal structures: they need to be able to set up shop legally in exchange for rent, so they aren’t fighting over corners; they need to be able to enter legally binding contracts with suppliers; they need to be able to enforce their rights through the legal system–call the police when someone steals from them, sue a supplier for breach of contract when shipments go missing, etc.

Further, the individuals currently in the underground drug trade either need to be able to transition peacefully to the aboveground drug trade, or be removed from the market entirely–if the barriers to entry into the legal market are too high, the illegal market will continue to operate until it is either economically untenable (through competition with legal alternatives) or shut down by law enforcement. And we’ve seen how well the latter works in practice through the past half century of the Drug War, so it may not be the best option except for the most egregiously violent criminals. If current illegal drug dealers are given the ability to transition to a legal, profitable, and regulated drug market, and the burden of enforcing property and contract rights shifts onto the government, they no longer need to risk felony charges for enforcing those rights themselves (and, for that matter, they no longer need to risk getting shot in disputes with other illegal drug traffickers). Only those individuals enamored of a “gangster lifestyle” would have an incentive to continue criminal activities instead of shifting to the safer and still-profitable legal drug trade, thus greatly decreasing the overall rate of violence currently fueled by drugs being a highly profitable black market.

This will not solve gang activity, nor will it solve all gang-related violent crime.  There are certainly other criminal enterprises that many gangs involve themselves in–prostitution, gun running, illegal gambling, extortion, racketeering, etc.  Legalizing and regulating the drug trade, even with low barriers to entry to allow current drug traffickers to transition to legal markets, will have no effect on these other sources of revenue and their associated criminal violence.  But it will absolutely decrease overall violence as gangs get out of the drug trade and its associated high levels of street violence fade away, in favor of the relatively low violence in other criminal markets that rely less on direct territorial control of prized retail locations.

Ending the Drug War alone won’t affect the primary motivations for drug-related violent crime, because the crime isn’t generally caused by criminals trying to avoid prosecution and incarceration.  Violence in the drug trade isn’t caused directly by the drug war; it’s caused more broadly by the fact that the drug trade is a black market controlled by organized crime. Because it’s a black market, those in the business of selling drugs have no alternative way to stay in business but to make sure everyone plays by the rules or gets shot.  That’s the problem that needs to be addressed in order to bring down the high levels of violence that come with it.

Heroes or Villains? The Economics of Price Gouging (Well, It’s Complicated #4)

The technical definition of price gouging is the practice of producers and/or retailers raising prices well above normal equilibrium levels during periods of short supply, generally during exogenous shocks that both decrease supply and increase demand.  In plain English, that means that when something happens to limit available supplies of high-demand items, such as an environmental disaster, some sellers increase their prices for those products to levels so far above normal that they’re seen as exorbitant.  For example, we often see the price of water and other basic staple supplies skyrocket in the days following major hurricanes and other disasters that disrupt supply chains.  The general public tends to condemn such behavior as taking unfair advantage of customers in their time of need, and many states have laws against it in the name of consumer protection.

But of course, the advocates of free market economics see it as nothing more than natural market forces, and often after such events they come out to proclaim store owners who engage in price gouging as heroes of the free market.  Recently, after Hurricane Harvey devastated the Houston region and Hurricane Irma did the same for the Caribbean and swaths of Florida, multiple articles appeared praising price gouging shopkeepers as heroic, or at the very least helpful rather than harmful.  The argument in both cases had basically four parts.  First, increased prices moderate demand by reducing hoarding tendencies and making customers only buy what they genuinely need, freeing up resources for other customers in need.  Second, higher prices encourage suppliers to bring in more of the goods in question to take advantage of the demand opportunity.  Third, there is no “right” price, only that dictated by the market—if the market dictates prices ten times what they were yesterday, that’s not immoral, it’s just economics.  To quote one such article, “facts should trump feelings,” so the feeling of unfairness is an illegitimate argument against economically rational behavior.  And fourth, private property rights mean individuals should be allowed to set whatever prices they want, because no one has a right to their property except through voluntary free exchange.  And since it’s just good business sense to sell your property for as much as possible, no one should tell you otherwise.  But I personally believe this, like many other pure free market views, presents an oversimplified picture of a much more complicated reality.  So let’s look at each in turn, and see what modern economic theory tells us is really going on.


 

Point the First: Price Increases Moderate Demand, Increasing Efficient Allocation of Resources

Well, yes and no.  Basic supply and demand theory tells us this is what SHOULD happen: prices increase, so people buy less (e.g., only what they really need), thus preventing hoarding.  The problem is that this moderation of demand is bounded by the fact that the goods in question generally are staple goods, necessary to survival in emergency conditions.  Thus, there’s a limit to elasticity.  Elasticity is how much demand changes as prices increase or decrease, and the more “inelastic” something is, the less sensitive demand levels are to price changes: few people drink more water when the price is low, nor are they physically able to drink much less even when the price is extremely high.  People need a minimum amount of water intake to survive.  Similarly, they need food.  They need heat.  Depending on the emergency situation in question, they may need other basic staple supplies like wood to reinforce their shelters and protect themselves from the elements, etc.
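
To put rough numbers on this, here is a minimal sketch of how economists measure price elasticity of demand.  The figures are invented purely for illustration (they are not from any study); the point is just that for a survival necessity, even a doubling of the price barely moves the quantity demanded.

```python
def price_elasticity(q_old, q_new, p_old, p_new):
    """Arc (midpoint) elasticity: percent change in quantity / percent change in price."""
    pct_q = (q_new - q_old) / ((q_new + q_old) / 2)
    pct_p = (p_new - p_old) / ((p_new + p_old) / 2)
    return pct_q / pct_p

# Hypothetical numbers: bottled water doubles in price after a storm, but a
# household still buys 90 bottles instead of 100 because it needs the water.
print(round(price_elasticity(q_old=100, q_new=90, p_old=2.00, p_new=4.00), 2))  # -0.16
```

An elasticity that close to zero is what “inelastic” means in practice: buyers cannot meaningfully respond to the price, which is exactly the situation price gouging exploits.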

The reason societies find price gouging so egregious is that it’s seen as taking advantage of inelastic demand at a time when customers have no other option but to pay whatever the seller asks.  Yes, it might moderate hoarding behavior, but that purpose could be achieved by limiting purchase quantities per customer, without hurting those who can’t afford the elevated prices on goods they literally need to stave off death.  Plus there’s the flip side of the coin: it may also increase hoarding, as customers are afraid prices will continue to rise or supply will run out entirely, so it’s better to pay exorbitant prices now than risk going without later.  Limiting hoarding, even if effective, is a relatively minor positive when the potential externality is extreme hardship and potential death for those who literally cannot afford the high cost of these basic necessities.


 

Point the Second: Higher Profit Margins Encourage Suppliers to Increase Supply to Meet Demand

In an ideal hypothetical economic landscape, this would absolutely be true.  Unfortunately, we don’t live in the hypothetical world of free market models.  Our real world has what is sometimes called transfer friction, or in layman’s terms, logistics concerns: increasing supply means either producing more or bringing in supplies from outside sources.  In the emergency situations that often lead to price gouging, like natural disasters, neither of these is physically possible for days or even weeks following the initial event.  Producers within the disaster zone are unlikely to be able to increase production capacity when all the infrastructure, including their own supply chains, has just been destroyed.  Nor can outside suppliers transport goods into the zone effectively when the logistics infrastructure has been devastated.  The logistics infrastructure that DOES exist is often commandeered for general disaster relief efforts, so there’s not much capability to flood the market with goods for a profit motive, even if outside suppliers desperately want to do so.

Take, for example, Hurricane Harvey: the public water system was contaminated with floodwaters, making it unsuitable for human consumption in many areas.  Thus drinking water had to come in the form of bottles or purifiers.  But no one IN the disaster zone could readily produce a significant supply of potable water, and no one OUTSIDE the zone could readily transport such a supply into the zone for sale, as the roads and railroads and ports were under water or heavily damaged.  The market could not easily respond to the increased demand level by increasing supply (and thus bringing prices back down) at least until the infrastructure could be repaired.  Thus increased prices do NOT effectively incentivize increased supply during an emergency when the supply chains cannot support it.


 

Point the Third: There Is No Right Price, Just the Market Price.  Facts, Not Feelings.

Maybe.  In terms of morals, sure.  But human markets aren’t made up of perfectly rational and emotionless decision makers.  They’re made of humans.  And both neuroeconomics and behavioral economics provide very strong evidence that feelings matter as much as reason in economic decision making.

Prospect theory, for which Daniel Kahneman won the Nobel Prize in Economics in 2002, tells us that we as humans don’t judge value in terms of absolutes, but in terms of gains and losses from the previous baseline.  Comparisons are relative—we might say that our subconscious minds think in terms of percentages rather than absolute values.  When my bank account is empty, a $100 windfall is a huge gain; when I’m a millionaire I barely even notice it.  Similarly, if a product is $1000 one day and $1100 the next, it’s a small jump, but when it instead goes from $10 to $110, we say the price “skyrocketed,” even though both increases are exactly the same amount of money in absolute terms.  What this means is that modern economic theory tells us there in fact IS a “right” price in economic decision-making, at least when the decision-maker is Homo sapiens rather than Homo economicus: the prior baseline from which the decision-maker is judging relative gains and losses.  If the price goes down, we are happy.  If it goes up, we are unhappy.  And if it skyrockets, we are angry.
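
The arithmetic behind that intuition is trivial, but worth spelling out.  This little sketch simply restates the example above in terms of absolute versus relative change:

```python
# Illustrative arithmetic only: the same $100 increase, framed two ways.
for old, new in [(1000, 1100), (10, 110)]:
    absolute = new - old
    relative = 100 * (new - old) / old
    print(f"${old} -> ${new}: +${absolute} absolute, +{relative:.0f}% relative")
# $1000 -> $1100: +$100 absolute, +10% relative
# $10 -> $110: +$100 absolute, +1000% relative
```

Prospect theory says our gut reacts to the second column, not the first.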

This becomes important, because emotions have a major impact on our decision-making.  Which ties into the final point.


 

Point the Fourth: It’s Good Business to Maximize Profit, and Property Rights Mean No One Should Tell Us No

First, let’s be clear here.  I am staunchly in favor of property rights and the freedom to do what you want with your property so long as it isn’t actively infringing on the rights of others.  That said, human beings are social animals.  We’ve evolved in a social context, and many of our evolved behaviors are directly optimized for social, rather than individual, survival.  Two of the most interesting (and relevant) of these evolved social behaviors are what behavioral economists call “inequity aversion” and “altruistic punishment.”

Inequity aversion is the well-demonstrated tendency for people to dislike being treated unfairly and seeing others being treated unfairly.  The Golden Rule isn’t just something your elementary school teacher taught you to ease classroom interactions with the other kids—it’s a fundamental feature of human nature.  Yes, we’re by and large self-centered, but almost all of us have an innate sense of fair play, and we disapprove of behaviors that violate such unspoken norms.

Altruistic punishment is the (also well-demonstrated) willingness of human beings to go out of their way, often to the point of actively hurting their own self-interest, to punish those they see as behaving unfairly.  So not only do we dislike unfair behavior, but we want to punish it when we see it, even if it costs us to do so.  (This is a fascinating social trait, and some researchers believe our unique version of altruistic punishing behavior is one of the keys to human success versus other social animals.)

Why does this matter?  Well, while individuals should have the right to sell their property for any price they want, a basic understanding of modern economics tells us why price gouging is a terrible business decision in the long run.  From behavioral economics, we know people dislike being treated unfairly and seeing others treated unfairly.  We also know that they are willing to go out of their way to punish those they see as treating themselves or others unfairly.  And from Prospect Theory, we know that such judgements of unfairness are influenced not by the absolute size of the price increase, but by its relative size compared to the prior baseline.

Game theory helps us mathematically model optimal decision making in interactive situations, like when a seller is deciding whether to raise the price of an inelastic good to take advantage of temporary increased demand.  But there’s a different answer when the game is played once (such as a transaction between individuals who will likely never see each other again) and when it’s repeated (such as transactions between a shopkeeper and his or her regular customers).  With an individual interaction game, there is no long term loss from treating someone unfairly, because they have to take it or leave it—and for inelastic goods, they’re probably going to take it.  But with a repeated game, treating people unfairly may lead to a temporary spike in profit margins, but is likely to be repaid with long-term punishment, such as formerly regular customers shopping elsewhere because they no longer want to deal with someone who they feel took advantage of them in their time of need.  There is a strong business incentive for local stores to avoid being seen as acting unfairly, because long term profits are heavily impacted by short term perceptions.
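
To see the difference between the one-shot and repeated cases in numbers, here is a toy back-of-the-envelope model.  Every figure in it is made up for illustration (the windfall, the share of customers lost, the time horizon); the point is only that a one-time gouging windfall can easily be swamped by the long-term cost of regulars who punish the seller by taking their business elsewhere.

```python
# Toy model; every number here is invented for illustration.
FAIR_PROFIT_PER_WEEK = 1000   # normal weekly profit from regular customers
GOUGE_WINDFALL = 5000         # one-time extra profit from gouging during the emergency
CUSTOMERS_LOST_SHARE = 0.30   # share of regulars who take their business elsewhere afterward
WEEKS = 52                    # horizon over which reputation matters

def total_profit(gouge: bool) -> float:
    if gouge:
        # one-time windfall, but a share of regular customers never come back
        return GOUGE_WINDFALL + FAIR_PROFIT_PER_WEEK * (1 - CUSTOMERS_LOST_SHARE) * WEEKS
    return FAIR_PROFIT_PER_WEEK * WEEKS

print(total_profit(gouge=False))  # 52000: steady profit from the repeated game
print(total_profit(gouge=True))   # 41400.0: the windfall doesn't cover the lost regulars
```

Change the assumed numbers and the comparison can flip, of course, which is precisely why the one-shot seller who never sees these customers again faces a very different calculation than the neighborhood shopkeeper.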

Maximizing profits when your customers are desperate may well lead to having no customers when they have competing options.  Altruistic punishment means they’ll likely be willing to go to the competition, even if it’s a bit out of their way, rather than reward you with their custom.  And they may even be able to get their friends and neighbors to do the same—we’re social animals.  Do what you want with your property, but be aware that actions that are seen as unfair may very well have longer term repercussions.


 

Price gouging is not merely increasing prices in response to demand.  It is a huge price increase relative to previous baseline prices, at a time of high inelastic demand, when supply physically cannot increase to match said demand.  Thus, it’s sellers taking advantage of a situation in which buyers must buy their product at the price they set, because there are no other options like going without or shopping elsewhere.  Human social nature means we see this as unfair and are willing to punish such behavior even at cost to ourselves, which makes it a risky business decision, trading short term certain profit for long term potential losses.  It may or may not limit hoarding, but it most certainly hurts those who can’t afford the new prices for goods they desperately need.  Price gougers may not be evil villains, but they certainly aren’t noble heroes.  They’re just people trying to make a quick buck.

On Social Contracts and Game Theory

The “social contract” is a theory of political philosophy, formalized by Enlightenment thinkers like Rousseau (who coined the term), Hobbes, Locke, and their contemporaries, but tracing its roots back to well before the birth of Christ.  Social contract theories can be found across many cultures, such as in ancient Buddhist sources like the edicts of Asoka and the Mahavastu, and in the writings of ancient Greeks like Plato and Epicurus.  The idea of the social contract is that individual members of a society either explicitly or implicitly (by being members of that society) exchange some of their absolute freedom for protection of their fundamental rights.  This is generally used to justify the legitimacy of a governmental authority, as the entity to which individuals surrender some freedoms in exchange for that authority protecting their other rights.

At its most basic, then, the social contract can be defined as “an explicit or implicit agreement that society—or its representatives in the form of governmental authorities—has the legitimate right to hold members of said society accountable for violations of each other’s rights.”  Rather than every member of a society having to fend for themselves, they agree to hold each other accountable, which by necessity means accepting limitations on their own freedom to act as they please (because if their actions violate others’ rights, they’ve agreed to be held accountable).

The purpose of this article isn’t to rehash the philosophical argument for and against social contract theory.  It’s to point out that the evidence strongly demonstrates social contracts aren’t philosophy at all, but rather—much like economic markets—a fundamental aspect of human organization, a part of the complex system we call society that arose through evolutionary necessity and is by no means unique to human beings.  That without it, we would never have succeeded as a species.  And that whether you feel you’ve agreed to any social contract or not is irrelevant, because the only way to be rid of it is to do away with society entirely.  To do so, we’re going to turn to game theory and experimental economics.

In 2003, experimental economists Ernst Fehr and Urs Fischbacher of the University of Zurich published a paper they titled “The Nature of Human Altruism.”  It’s a fascinating meta-study, examining the experimental and theoretical evidence of altruistic behavior to understand why humans will often go out of their way to help others, even at personal cost.  There are many interesting conclusions in the paper, but I want to focus on one, specifically—the notion of “altruistic punishment,” that is, taking actions to punish others for perceived unfair or unacceptable behavior even when it costs the punisher something.  In various experiments for real money, with sometimes as much as three months’ income at stake, humans will hurt themselves (paying their own money or forfeiting offered money) to punish those they feel are acting unfairly.  The more unfair the action, the more willing people are to pay to punish them.  Fehr and Fischbacher sought to understand why this is the case, and their conclusion plays directly into the concept of a social contract.

 

A decisive feature of hunter-gatherer societies is that cooperation is not restricted to bilateral interactions.  Food-sharing, cooperative hunting, and warfare involve large groups of dozens or hundreds of individuals…By definition, a public good can be consumed by every group member regardless of the member’s contribution to the good.  Therefore, each member has an incentive to free-ride on the contributions of others…In public good experiments that are played only once, subjects typically contribute between 40 and 60% of their endowment, although selfish individuals are predicted to contribute nothing.  There is also strong evidence that higher expectations about others’ contributions induce individual subjects to contribute more.  Cooperation is, however, rarely stable and deteriorates to rather low levels if the game is played repeatedly (and anonymously) for ten rounds. 

The most plausible interpretation of the decay of cooperation is based on the fact that a large percentage of the subjects are strong reciprocators [i.e., they will cooperate if others cooperated in the previous round, but not cooperate if others did not cooperate in the previous round, a strategy also called “tit for tat”] but that there are also many total free-riders who never contribute anything.  Owing to the existence of strong reciprocators, the ‘average’ subject increases his contribution levels in response to expected increases in the average contribution of other group members.  Yet, owing to the existence of selfish subjects, the intercept and steepness of this relationship is insufficient to establish an equilibrium with high cooperation.  In round one, subjects typically have optimistic expectations about others’ cooperation but, given the aggregate pattern of behaviors, this expectation will necessarily be disappointed, leading to a breakdown of cooperation over time.

This breakdown of cooperation provides an important lesson…If strong reciprocators believe that no one else will cooperate, they will also not cooperate.  To maintain cooperation in [multiple person] interactions, the upholding of the belief that all or most members of the group will cooperate is thus decisive.

Any mechanism that generates such a belief has to provide cooperation incentives for the selfish individuals.  The punishment of non-cooperators in repeated interactions, or altruistic punishment [in single interactions], provide two such possibilities.  If cooperators have the opportunity to target their punishment directly towards those who defect, they impose strong sanctions on the defectors.  Thus, in the presence of targeted punishment opportunities, strong reciprocators are capable of enforcing widespread cooperation by deterring potential non-cooperators.  In fact, it can be shown theoretically that even a minority of strong reciprocators suffices to discipline a majority of selfish individuals when direct punishment is possible.  (Fehr and Fischbacher, 786-7)

 

In short, groups that lack the ability to hold their members accountable for selfish behavior and breaking the rules of fair interaction will soon break down as everyone devolves to selfish behavior in response to others’ selfishness.  Only the ability to punish members for violating group standards of fairness (and conversely, to reward members for fair behavior and cooperation) keeps the group functional and productive for everyone.*  Thus, quite literally, experimental economics tells us that some form of basic social contract—the authority of members of your group to hold you accountable for your choices in regards to your treatment of other members of the group, for the benefit of all—is not just a nice thing to have, but a basic necessity for a society to form and survive.  One might even say the social contract is an inherent emergent property of complex human social interaction.

But it isn’t unique to humans.  There are two major forms of cooperative behavior in animals: hive/colony behavior, and social group behavior.  Insects tend to favor hives and colonies, in which individuals are very simple agents that are specialized to perform some function, and there is little to no intelligent decision making on the part of individuals at all.  Humans are social—individuals are intelligent decision makers, but we survive and thrive better in groups, cooperating with members of our group in competition with other groups.  But so are other primates—apes and monkeys have small scale societies with leaders and accountability systems for violations of accepted behavior.  Wolf packs have leaders and accountability systems.  Lion prides have leaders and accountability systems.  Virtually every social animal you care to name has, at some level, an accountability system resembling what we call a social contract.  Without the ability to hold each other accountable, a group quickly falls apart and individuals must take care of themselves without relying on the group.

There is strong evidence that humans, like other social animals, have developed our sense of fairness and our willingness to punish unfair group members—and thus our acceptance that we ourselves can be punished for unfairness—not through philosophy, but through evolutionary necessity.  Solitary animals do not have a need for altruistic punishment.  Social animals do.  But as Fehr and Fischbacher also point out, “most animal species exhibit little division of labor and cooperation is limited to small groups.  Even in other primate societies, cooperation is orders of magnitude less developed than it is among humans, despite our close, common ancestry.”  So why is it that we’re so much more cooperative, and thus more successful, than other cooperative animals?  It is, at least in part, because we have extended our concept of altruistic punishment beyond that of other species:

 

Recent [sociobiological] models of cultural group selection or of gene-culture coevolution could provide a solution to the puzzle of strong reciprocity and large-scale human cooperation.  They are based on the idea that norms and institutions—such as food-sharing norms or monogamy—are sustained by punishment and decisively weaken the within-group selection against the altruistic trait.  If altruistic punishment is ruled out, cultural group selection is not capable of generating cooperation in large groups.  Yet, when punishment of [both] non-cooperators and non-punishers [those who let non-cooperation continue without punishment] is possible, punishment evolves and cooperation in much larger groups can be maintained.  (Fehr and Fischbacher, 789-90)

We don’t just punish non-cooperators.  We also punish those who let non-cooperators get away with it.  In large groups, that’s essential: in a series of computer simulations of multi-person prisoners’ dilemma games with group conflicts and different degrees of altruistic punishment, Fehr and Fischbacher found that no group larger than 16 individuals could sustain long term cooperation without punishing non-cooperators.  When they allowed punishment of non-cooperators, groups of up to 32 could sustain at least 40% cooperation.  But when they allowed punishment of both non-cooperators AND non-punishers, even groups of several hundred individuals could establish high (70-80%) rates of long-term cooperation.  Thus, that’s the key to building large societies: a social contract that allows the group to punish members for failing to cooperate, and for failing to enforce the rules of cooperation.
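
For readers who like to see the mechanism laid out, here is a deliberately stripped-down sketch of a repeated public goods game with and without punishment.  It is not Fehr and Fischbacher’s actual model (the parameters and decision rules are invented for illustration, and the punishers’ own cost of punishing is left out), but it captures the basic logic: a minority of reciprocators who can credibly fine free-riders is enough to keep the selfish majority cooperating, and without that threat cooperation collapses.

```python
# A toy repeated public goods game, loosely inspired by (but much simpler than)
# the experiments described above. Every parameter is illustrative only.
ENDOWMENT = 10     # tokens each player can contribute per round
MULTIPLIER = 1.6   # the common pot is multiplied by this, then split evenly
FINE = 6           # fine each reciprocator imposes on each detected free-rider

def run(n_players=100, n_reciprocators=30, rounds=10, punishment=True):
    contributed_last_round = n_players  # players start out optimistic
    for _ in range(rounds):
        contributors = 0
        for i in range(n_players):
            if i < n_reciprocators:
                # reciprocators keep contributing only if most others did last round
                contributes = contributed_last_round >= n_players // 2
            else:
                # selfish players free-ride unless expected fines outweigh the gain
                expected_fine = FINE * n_reciprocators if punishment else 0
                free_riding_gain = ENDOWMENT - ENDOWMENT * MULTIPLIER / n_players
                contributes = expected_fine > free_riding_gain
            contributors += contributes
        contributed_last_round = contributors
    return contributors / n_players

print(run(punishment=True))   # 1.0 -- cooperation is sustained
print(run(punishment=False))  # 0.0 -- cooperation collapses after the first round
```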

It doesn’t much matter if you feel the social contract is invalid because you never signed or agreed to it, any more than it matters if you feel the market is unfair because you never agreed to it.  The social contract isn’t an actual contract: it’s an emergent property of the system of human interaction, developed over millennia by evolution to sustain cooperation in large groups.  Whatever form it takes, whether it’s an association policing its own members for violating group norms, or a monarch acting as a third-party arbitrator enforcing the laws, or a democracy voting on appropriate punishment for individual members who’ve violated their agreed-upon standards of behavior, there is no long-term successful human society that does not feature some form of social contract, any more than there is a long-term successful human society that does not feature some form of trading of goods and services.  The social contract isn’t right or wrong.  It just is.  Sorry, Lysander Spooner.

*Note: none of this is to say what structure is best for enforcing group standards, nor what those group standards should be beyond the basic notion of fairness and in-group cooperation.  The merits and downsides of various governmental forms, and of various governmental interests, are an argument better left to philosophers and political theorists, and are far beyond the scope of this article.  My point is merely that SOME form of social authority to punish non-cooperators is an inherent aspect of every successful human society, and is an evolutionary necessity.

Tulips, Traffic Jams, and Tempests (Part 2): The Properties of Complexity

In the first installment of this series, I discussed some well-known phenomena that are emergent effects of complex systems, and gave a general definition of complexity.  In this installment, we’re going to delve a little deeper and look at some common properties and characteristics of complex systems.  Understanding these properties helps us understand the types of complex systems that exist and the kinds of tools we have available to study them, which will be the topic of the third installment of the series.

There are four common properties that can be found in all complex systems:

  • Simple Components (Agents)
  • Nonlinear Interaction
  • Self-organization
  • Emergence

But what do these mean, and what do they look like?  Let’s examine each in turn.

 

SIMPLE COMPONENTS (AGENTS):

One of the most interesting things about complex systems is that they aren’t composed of complex parts.  They’re built from relatively simple components, compared to the system as a whole.  Human society is fantastically complex, but its individual components are just single human beings—which are themselves fantastically complex compared to the cells that are their fundamental building blocks.  Hurricanes are built of nothing more than air and water particles.  These components are also known as agents.  The two terms are often used interchangeably (the usual distinction among those who do distinguish them is that agents can make decisions and components cannot), but I prefer agents in general, and that will be the term used throughout the rest of this post.  But computer simulations show that even when agents can only make one or two very simple deterministic responses with no actual decision-making process beyond “IF…THEN…,” enough of them interacting will result in intricate complexity.  We see this in nature, too—an individual ant is one of the simplest animals around, driven entirely by instincts that lead it to respond predictably to encountered stimuli, but an ant colony is a complex system that builds cities, forms a society, and even wages war.  The wonder of complex systems is that they spring not from complexity, but from relative simplicity, interacting.  But there must be many of them—a single car on a road network is not a complex system, but thousands of them are, which leads us to our next property.
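
As a concrete (if well-worn) illustration that is not specific to ants or cars, consider the elementary cellular automaton sketched below: each cell follows a single deterministic IF…THEN rule based only on its own state and that of its two neighbors, yet the row of cells produces intricate, non-repeating patterns.  This is the standard textbook example known as Rule 110, not anything from the post itself.

```python
# Each "agent" (cell) applies one deterministic rule: IF my left neighbor, me,
# and my right neighbor were in this pattern last step, THEN I am on or off now.
RULE = 110             # Wolfram's Rule 110, a classic elementary cellular automaton
WIDTH, STEPS = 64, 32

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # a single "on" cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (RULE >> (cells[(i - 1) % WIDTH] * 4 + cells[i] * 2 + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```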

 

NONLINEAR INTERACTION:

For complexity to arise from simple agents, there must be lots of them interacting, and these interactions must be nonlinear.  This nonlinearity results not from single interactions, but from the possibility that any one interaction can (and often does) cause a chain reaction of follow-on interactions with more agents, so a single decision or change can sometimes have wide-ranging effects.

In technical terms, nonlinear systems are those in which the change of the output is not proportional to the change of the input—that is, when you change what goes in, what comes out does not always grow or shrink proportionately to that original change.  In layman’s terms, the system’s response to the same input might be wildly different depending on the state or context of the system at the time.  Sometimes a small change has large effects.  Sometimes a large change is absorbed by the system with little to no effect at all.

This is important to understand for two reasons.  The first is that, when dealing with complex systems, responses to actions and changes might be very different from those the actor originally expected or intended.  Even in complex systems, most of the time changes and decisions have the expected result.  But sometimes not, and when the system has a large number of interactions, the number of unexpected results can start to have a significant impact on the system as a whole.

The other reason this is important is that nonlinearity is the root of mathematical chaos.  Chaos is defined as seemingly random behavior with sensitive dependence on initial conditions—in nonlinear systems, under the right conditions, prediction is impossible even in theory.  Perfect prediction would require knowing the starting conditions of every aspect of the system with absolute precision, and the uncertainty principle tells us that is physically impossible.  To see what happens in a complex system of agents interacting in a nonlinear fashion, you have to let it play out; otherwise, the best you can do is an approximation that loses accuracy the further you get from the starting point.  This sensitivity to initial conditions is commonly simplified as the “butterfly effect,” where even small changes can have large impacts across the system as a whole.

In short, the reason the weather man in most places can’t tell you next week’s weather very accurately isn’t because he’s bad at his job, but because weather (except in certain climates with stable weather patterns) literally cannot be predicted very well, and it gets harder and harder the further out you try to do so.  That’s just the nature of the system they’re working with.  It’s remarkable they’ve managed to get as good as they have, actually, considering that meteorologists only began to understand the chaotic principles underlying weather systems when Lorenz discovered them by accident in 1961.  Complex systems are inherently unpredictable, because they consist of a large number of nonlinear interactions.
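
The standard textbook demonstration of this sensitivity is the logistic map.  It is not a weather model (or anything else discussed in this post), just about the simplest nonlinear system that exhibits chaos: start two copies of it a billionth apart and watch their trajectories diverge completely within a few dozen steps.

```python
# Sensitive dependence on initial conditions, illustrated with the logistic map.
r = 4.0                          # in this parameter range the map is chaotic
x, y = 0.400000000, 0.400000001  # two starting points one billionth apart

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  diff={abs(x - y):.6f}")
```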

 

SELF-ORGANIZATION

Complex systems do not have central control.  Rather, the agents interact with each other, giving rise to a self-organized network (which in turn shapes the nonlinearity of the interactions among the agents of the network).  This is a spontaneous ordering process, and requires no direction or design from internal or external controllers.   All complex systems are networks of connected nodes—the nodes are the agents and the connections are their interactions—whether they’re networks of interacting particles in a weather system or networks of interacting human beings in an economy.

The structure of the system arises from the network.  Often it takes the form of nested complex systems: a society is a system of human beings, each of which is itself a system of cells, and each level is itself a complex system.  Mathematically, the term for this is a fractal—complex systems tend to have a fractal structure, which is a common feature of self-organized systems in general.  Some complex systems are networks of simple systems; others are networks of complicated systems; many are networks of complex sub-systems and complicated sub-systems and simple sub-systems all interacting together.  A traffic light is a simple system; a car is a complicated system; a human driver is a complex system; the traffic system is a network of many individual examples of all three of these sub-systems interacting as agents.  And it is entirely self-organized: the human beings who act as drivers are also the agents who plan and build the road system that guides their interactions as drivers, by means of other complex systems such as the self-organized political system in a given area.

 

EMERGENCE

Emergent properties, as discussed in part one of this series, are those aspects of a system that may not be determined merely from isolating the agents—the system is greater than the sum of its parts.  An individual neuron is very simple, capable of nothing more than firing individual electrical signals to other neurons.  But put a hundred billion of them together, and you have a brain capable of conscious thought, of decision-making, of art and math and philosophy.  A single car with a single driver is easy to understand, but put thousands of them on the road network at the same time, and you have traffic—and its own resulting emergent phenomena like congestion and gridlock.  Two people trading goods and services are simple, but millions of them create market bubbles and crashes.  This is the miracle of complexity: nonlinear networks of relatively simple agents self-organize and produce emergent phenomena that could not exist without the system itself.

Some common emergent properties include information processing and group decision-making, nonlinear dynamics (often shaped by feedback loops that dampen or amplify the effects of behaviors of individual agents), hierarchical structures (such as families and groups which cooperate among themselves and compete with each other at various levels of a social system), and evolutionary and adaptive processes.  A hurricane, for example, is an emergent property in which many water and air molecules interact under certain conditions and with certain inputs (such as heat energy from sunlight), enter a positive feedback loop that amplifies their interactions, and become far more than the sum of their parts.  When the conditions change (such as hitting land and losing access to a ready supply of warm water), the storm enters a negative feedback loop that limits its growth and eventually dictates its decline back to nonexistence.  Adam Smith’s “Invisible Hand” is an emergent property of the complex systems we call “economies,” in which individual actions within a nonlinear network of agents are moderated by feedback loops and self-organized hierarchical structures to produce common goods through self-interested behavior.  Similarly, the failures of that Invisible Hand, such as speculative bubbles and market crashes, are themselves emergent behaviors of the economic system that cannot exist without the system itself.

 

 

Now that we’ve established the common properties of complex systems, in the next article we’ll look at a couple different types, what the differences are, and what tools we can use to model them properly.

On Nazis and Socialists

I commonly run into the argument that the Nazis were clearly left wing, because “Socialism is right there in their name.”  It’s getting old, because it ignores literally everything else about them.  Bottom Line: yes, they were socialists, but no, they were not leftists.

Part of the problem is that there’s no good accepted narrow definition of socialism–it ranges from Marxist-style Communism to Soviet-style command economies to Scandinavian-style public welfare states. A few months ago the American Economic Association’s Journal of Economic Perspectives published a paper trying to answer the question of whether modern China is socialist, and it was fascinating because first they had to establish a working definition of socialism. Even today, there’s serious ongoing debate about that in academic economics circles.

But in the broad sense, Nazis were socialist, in that the government controlled the economy towards its own goals–the Reich ran the factories and mines and basically the entire supply chain and directed how resources and products would be used at the macro level.

That said, the Nazis explicitly rejected what we’ve come to think of as the “left-right” spectrum in favor of what political theorists call a “third way,” which married leftist-style government control of the economy to right-wing-style government control of social lives in a militaristic fascism focused on directing all social and economic aspects of the country towards the needs of the Fatherland. Nationalism (right) + Socialism (left) = National Socialism. Funny how that works. Thus, it’s a great straw man, because BOTH sides can legitimately point to aspects of Nazism and say “See?! They were the other side!” When the reality is they were neither.

Note: neo-Nazis, on the other hand, generally ignore the economic aspects of National Socialism in favor of the eugenicist racism, conservative nativism, and militaristic nationalism, and ARE legitimately classified as right-wing extremists.

The more you know.

Opinions, Assholes, and Believability

My next post was going to be a continuation of my introduction to complexity, and I promise that I’ll get around to that eventually, but a few days ago I was made aware of an exchange on Facebook that got me thinking, and I’d like to take a moment to lay out my thoughts on the matter.

I personally did not witness this exchange, but a friend of mine took a screenshot of the first part of the conversation (before the original commenter apparently deleted the thread).  First, some context: this occurred after a firearms industry page (Keepers Concealment, a maker of high quality holsters) shared a video of Ernest Langdon demonstrating the “Super Test,” a training drill that requires a shooter to fire rapidly and accurately at various ranges.  Ernest Langdon is indisputably one of the best handgun shooters in the world.  That’s an objective fact, and he has the competition results and measurable skills to prove it.  He is ranked as a Grand Master in the US Practical Shooting Association, a Distinguished Master in the International Defensive Pistol Association, and has won 10 National Championship Shooting titles and 2 World Speed Shooting titles.  All of which explains why when some nobody on Facebook (who we shall refer to as “Mr. Blue” as per my color-coded redacting) made this comment, quite a few people who know who Ernest Langdon is raised their collective eyebrows:

[Screenshot of the Facebook exchange]

Mr. Blue, who as mentioned is a nobody in the shooting world with exactly zero grounds to critique Ernest Langdon, still for some reason felt the appropriate response to this video of one of the best shooters to have ever walked the face of the earth was to provide unsolicited advice on how he could improve.  Then, when incredulous individuals who actually know what they’re talking about pointed out exactly how arrogantly stupid that response is to this particular video, another person, Mr. Red, chimed in to claim that if we accept no one is above reproach, then “it’s fair for people (even those who can’t do better), to critique what they see in the video.”  To which I want to respond: no, it is not.

I agree entirely with Ray Dalio, the founder of Bridgewater Associates—the world’s largest hedge fund—when he says, “While everyone has the right to have questions and theories, only believable people have the right to have opinions. If you can’t successfully ski down a difficult slope, you shouldn’t tell others how to do it, though you can ask questions about it and even express your views about possible ways if you make clear that you are unsure.”  What that means is not that you can’t form an opinion.  It means that just because you have the right to HAVE an opinion doesn’t mean you have the right to express it and expect anyone to take it seriously.  Just because you happen to be a breathing human being doesn’t make you credible, and the opinions of those who don’t know what they’re talking about are nothing more than a waste of time that serves only to prove they’re idiots.  Like the old saying says, “Better to remain silent and be thought a fool than to speak and remove all doubt.”

But Mr. Red’s comment goes to an attitude that lies at the heart of stupidity: the idea that everyone’s opinion is equally valid and worth expressing, and all have a right to be heard and taken seriously.  This certainly isn’t a new phenomenon: Isaac Asimov wrote about a cult of ignorance in an article back in 1980: “The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’”  But new or not, it very much drives the willingness of ignorant nobodies to “correct” and “critique” genuine experts.  Mr. Blue has no idea of the thousands of hours of training Ernest Langdon has put into perfecting his grip and recoil management and trigger control, the hundreds of thousands of rounds of ammunition he’s put down range to hone his technique and become one of the best in the world at what he does.  Mr. Blue has put nowhere near that amount of time and effort into his own training—I know this, because if he had he’d also be one of the best shooters in the world, instead of some random nobody on Facebook.  But despite that vast gulf of experience and expertise, Mr. Blue still thinks he can and should provide unsolicited advice on how Ernest Langdon can be better.  And then doesn’t understand why others are laughing at him, and another commenter rides to the rescue, offended at the very notion people are dismissive of the critique of a nobody.

This is the same mindset that leads to people who barely graduated high school presuming to lecture the rest of us on why the experts are wrong on politics, on science, on economics, on medicine.  This is the mindset that leads to anti-vaccination movements bringing back measles outbreaks in the United States.  This is the mindset Sylvia Nasar described when she wrote “Frustrated as he was by his lack of a university education, particularly his ignorance of the works of Adam Smith, Thomas Malthus, David Ricardo, and other British political economists, [he] was nonetheless perfectly confident that British economics was deeply flawed.  In one of the last essays he wrote before leaving England, he hastily roughed out the essential elements of a rival doctrine.  Modestly, he called this fledgling effort ‘Outlines of a Critique of Political Economy.’”  The subject she was writing about?  Friedrich Engels, friend and collaborator of Karl Marx, and co-author of The Communist Manifesto.  Is there any wonder that the system they came up with has never worked in practice?

While the conversation that inspired this line of thought was in the shooting world, I see it all the time in many, many different fields.  Novice weightlifters “critiquing” world record holders.  Undergraduate students “critiquing” tenured professors in their area of expertise.  Fans who’ve never stepped into a cage in their lives expounding upon what a professional fighter in the UFC “did wrong” as if they have the slightest idea what it’s like to step into the Octagon and put it all on the line in a professional MMA fight.  People with zero credibility believing they have the standing to offer unsolicited advice to genuine, established experts.  This isn’t to say that experts are infallible, or that criticism is always unfounded.  But to have your opinion respected, it must be believable, and if you lack that standing you’d damn well better be absolutely certain your criticism is well-founded and supported by strong evidence, because that’s all you have to go on at that point.  Appeal to authority is a logical fallacy, but unless you’ve got the evidence to back up your argument, the benefit of the doubt is going to go to the expert who has spent a lifetime in the field, versus the nobody who chooses to provide unsolicited commentary.

When you have an opinion on a technical subject, and you find yourself moved to express it in a public forum, please, just take a second and reflect.  “Do I have any standing to express this opinion and have it be believable, or is it well-supported by documented and cited evidence in such a way that it overcomes my lack of relative expertise?  Do I have a right for anyone to pay attention to my thoughts on this subject?  Or am I just another ignorant asshole spewing word diarrhea for the sake of screaming into the void and pretending I matter, that I’m not a lost soul drifting my way through existential meaninglessness, that my life has purpose and I’m special?”  Don’t be that guy.

Opinions and assholes, man.  Everyone’s got ‘em, and most of them stink.

Well, Actually… (A Rebuttal to a Rebuttal)

In June, researchers from the University of Washington released a National Bureau of Economic Research working paper entitled “Minimum Wage Increases, Wages, and Low-Wage Employment: Evidence from Seattle” (Jardim et al, 2017).  It made a lot of headlines for its claim that the increased minimum wage in Seattle (up to $13 this year, and planned to increase to $15 within the next 18 months) has cost low-wage workers money by reducing employment hours across the board.  Essentially, Jardim and her colleagues showed rather convincingly through an in-depth econometric analysis that while wages for the average low-income worker increased per hour, their hours were cut to an extent that the losses exceeded the gains, for a reduced total income.  It’s an impressive case for what I argued in my first “Well, It’s Complicated” article playing out in reality.

However, not everyone is convinced.  A friend of mine alerted me to an article by Rebecca Smith, J.D., of the National Employment Law Project that argues the study MUST be bullshit, because it doesn’t square with what she sees as reality.  In the article, Ms. Smith makes six specific claims in her effort to rebut the study.  Unfortunately for her, all these claims do is demonstrate she either doesn’t know how to read an econometric paper, or she didn’t actually read it that closely, because four are easily disproven by the paper itself, and the other two are irrelevant.

Specifically, she claimed the following:

  • The paper’s findings cannot “be squared with the reality of Seattle’s economy,” because “At 2.5 percent unemployment, Seattle is very near full employment. A Seattle Times story from earlier this month reported a restaurant owner’s Facebook confession that due to the tight labor market ‘I’d give my right pinkie up for an awesome dishwasher.’ Earlier this year, Jimmy John’s advertised for delivery drivers at $20 per hour.”

 

  • “By the UW team’s own admission, nearly 40 percent of the city’s low-wage workforce is excluded from the data: workers at multisite employers like Nordstrom, Starbucks, or even restaurants with a few locations like Dick’s.”

 

  • “Even worse, any time a worker left a job with a single-site employer for one with a chain, that was treated as a “lost job” that was blamed on the minimum wage — and that likely happened a lot since the minimum wage was higher for those large employers.”

 

  • “…Every time an employer raised its pay above $19 per hour — like Jimmy John’s did — it was counted not as a better job, but as a low-wage job lost as a result of the minimum wage.”

 

  • “The truth is, low-wage workers are making real gains in Seattle’s labor market. In almost all categories of traditionally low-wage work, there are more employers in the market than at any time in the city’s history. There are more coffee shops, restaurants and hotels in Seattle than ever before. The work is getting done. And the largest (and best-paid) workforce in the history of the city is doing it.”

 

  • “Nor can the study be reconciled with the wide body of rigorous research — including a recent study of Seattle’s restaurant industry by University of California economist Michael Reich, one of the country’s foremost minimum-wage researchers — that finds that minimum-wage increases have not led to any appreciable job losses.”

 

Let’s look at each of these in turn.

 


 

The Claim: This paper doesn’t match the reality of Seattle’s 2.5% unemployment rate, which is driving up wages regardless of the minimum wage increases due to high labor demand.

First, this isn’t an attack on the paper itself, just an expression of incredulity that demonstrates Ms. Smith apparently doesn’t understand how statistical analysis works—there are MANY factors that go into overall unemployment rates, and the minimum wage is just one of them.  Thus, the paper seeks to isolate the minimum wage’s effect on employment and hours worked in a given sector, and the overall unemployment rate is irrelevant to the analysis.

Second, Seattle’s unemployment rate is not 2.5%, and has not been 2.5% in a long time: the Bureau of Labor Statistics lists it as 2.9% in April 2017, its lowest point in the past year, and trended back up to 3.2% by May.  You don’t get to just make up numbers to refute points you don’t like.

Third, just to emphasize that this unemployment rate is not caused by the minimum wage increase, let’s compare Seattle to other cities.  At 3.2% unemployment in May, Seattle was tied with five other US cities: Detroit, San Diego, Orlando, San Antonio, and Washington, D.C.  All of these cities have their own minimum wages that vary between $8.10 and $13.75—but for a proper comparison, these rates have to be adjusted for cost of living.  When so adjusted, the lowest paid workers were those in Orlando, making the equivalent of a worker in Seattle taking home $10.94/hour.  The highest were those in San Antonio, with the equivalent of $19.39/hour at Seattle prices.  For comparison, workers actually IN Seattle were making just $13/hour in May—the average for all six cities was $13.28.  With such a range, can the “high” minimum wage be driving the employment rate that’s identical among all of them?  These six cities all tied for 11th place in lowest unemployment rates in the nation that month.  How about the best three?  First place goes to Denver, with a minimum wage of $11.53 (adjusted for Seattle cost of living).  Second to Nashville, at $9.72.  Third to Indianapolis, at $9.93.  I’d take a step back and reconsider any claim that the $13 minimum wage in Seattle is at all relevant to the overall employment rate, given that when you compare apples to apples, there is no apparent correlation at all.  Instead, let’s stick to what the paper was about: the impact of total income on low-income workers, given per hour wage increases versus changes in hours worked.
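
For clarity, the cost-of-living adjustment described above works like the sketch below.  The index values in it are placeholders I made up for illustration; they are not the figures behind the numbers quoted in the preceding paragraph.

```python
# Sketch of a cost-of-living adjustment. The index values are hypothetical.
SEATTLE_COL_INDEX = 150.0  # placeholder cost-of-living index for Seattle

def seattle_equivalent(nominal_min_wage: float, city_col_index: float) -> float:
    """What a city's minimum wage feels like at Seattle prices."""
    return nominal_min_wage * (SEATTLE_COL_INDEX / city_col_index)

# e.g., a hypothetical city with a $10.10 minimum wage and a much lower cost of
# living ends up "better paid" than Seattle's nominal $13.00:
print(round(seattle_equivalent(10.10, city_col_index=95.0), 2))  # 15.95
```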

 


 

The Claim: The paper excluded 40% of the city’s low-wage workforce by ignoring all multisite employers.

Quite simply, no, it did not.  The paper did NOT exclude all multisite employers.  It excluded SOME multisite employers.  And those employers don’t account for “nearly 40% of the city’s low-wage workforce,” but rather 38% of the ENTIRE workforce across the state as a whole—no mention is made of their proportion within Seattle itself.  And if Ms. Smith had read closely, she’d realize that not only does this make perfect sense, but if anything it just as likely biased the results toward UNDERESTIMATING the loss in employment hours for low-wage workers.

“The data identify business entities as UI account holders. Firms with multiple locations have the option of establishing a separate account for each location, or a common account. Geographic identification in the data is at the account level. As such, we can uniquely identify business location only for single-site firms and those multi-site firms opting for separate accounts by location. We therefore exclude multi-site single-account businesses from the analysis, referring henceforth to the remaining firms as “single-site” businesses. As shown in Table 2, in Washington State as a whole, single-site businesses comprise 89% of firms and employ 62% of the entire workforce (which includes 2.7 million employees in an average quarter).

Multi-location firms may respond differently to local minimum wage laws. On the one hand, firms with establishments inside and outside of the affected jurisdiction could more easily absorb the added labor costs from their affected locations, and thus would have less incentive to respond by changing their labor demand. On the other hand, such firms would have an easier time relocating work to their existing sites outside of the affected jurisdiction, and thus might reduce labor demand more than single-location businesses. Survey evidence collected in Seattle at the time of the first minimum wage increase, and again one year later, suggests that multi-location firms were in fact more likely to plan and implement staff reductions. Our employment results may therefore be biased towards zero.”  (Jardim et al., pp. 14-15).

Essentially, the nature of the data required they eliminate 11% of firms in Washington State before beginning their analysis, because there was literally no way to tell which of their sites (and therefore which of their reported employees) were located within the city of Seattle.  Multi-site firms that reported employment hours by individual site were absolutely included, just not those that aggregate their employment hours across all locations.  But that’s okay, because on the one hand, such firms can potentially absorb increased labor costs at their Seattle sites, but on the other they can more easily shift work to sites outside the affected area and thus reduce labor demand within Seattle in response to increased wage bills.  And surveys suggest that such firms are more likely to lay off workers in Seattle than other firms—hence, if anything, excluding them from the data is more likely to UNDERSTATE the reduction in employment hours than to overstate it.  Ms. Smith’s objection on this point only serves to prove she went looking for things to object to, rather than reading in depth before jumping to conclusions.

 


 

The Claim: Workers leaving included firms for excluded firms was treated as job loss.

Literally no, it was not.  The analysis was based on total reported employment hours, not on counts of individual workers.  When an employer loses a worker to another firm, that by itself doesn’t change its labor demand: either other workers get more hours or someone new is hired to cover the lost worker’s hours.  If total hours DO decrease when a worker leaves, that means the employer has reduced its labor demand and sees no need to replace those hours.  In which case, it IS “job loss” in the sense of reduced total employment hours.
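
To make the hours-versus-headcount point concrete, here’s a toy illustration of the logic (mine, not the paper’s code):

```python
# Toy illustration (not the paper's methodology, just the logic): labor demand
# is measured as total hours, so a worker leaving for another firm only shows
# up as a reduction if the origin firm doesn't backfill those hours.

hours_before = {"worker_A": 160, "worker_B": 160}   # quarterly hours at one firm

# Case 1: worker_A leaves and the firm hires a replacement for those hours.
hours_after_backfill = {"worker_C": 160, "worker_B": 160}

# Case 2: worker_A leaves and the firm decides not to replace those hours.
hours_after_cut = {"worker_B": 160}

def total_hours(hours_by_worker):
    return sum(hours_by_worker.values())

print(total_hours(hours_before))          # 320
print(total_hours(hours_after_backfill))  # 320 -> no measured loss in labor demand
print(total_hours(hours_after_cut))       # 160 -> genuine drop in labor demand
```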

 


 

The Claim: When employers raised wages above $19/hour, it was treated as job loss.

Again, literally no, it was not.  Not only does the paper have an extensive three-page section addressing why and how the authors chose the primary analysis threshold of $19/hour, they also discuss in their results section how they checked their results against other thresholds up to $25/hour.  In short, a lot of previous research has conclusively shown that increasing minimum wages has a cascading effect up the wage chain: not only are minimum wage workers directly affected, but so are workers who make somewhat above minimum wage—though the effect shrinks the further the wage level gets from the minimum.  Jardim et al. did a lot of in-depth analysis to determine the most appropriate level at which to cut off their workforce sector of interest, and determined the cascading effects became negligible at around $18/hour—and they chose $19/hour to be conservative in case their estimates were incorrect.  And they STILL compared their results to thresholds ranging from $11/hour to $25/hour, showing the effects of the $13 minimum wage remained statistically significant regardless of the chosen threshold.
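
For those wondering what that kind of robustness check looks like mechanically, here’s a rough sketch.  The wage records and thresholds below are made up for illustration, and the paper’s actual estimator is far more involved than this; the point is only that the low-wage-hours measure gets recomputed under several cutoffs.

```python
# Hypothetical sketch of a threshold robustness check: recompute the low-wage
# hours measure under several wage cutoffs and see whether the estimated effect
# survives. The records below are made up for illustration only.

hourly_records = [  # (hourly wage, hours worked in the quarter)
    (11.50, 120), (13.00, 160), (15.25, 140), (18.00, 90), (22.00, 160),
]

def low_wage_hours(records, threshold):
    """Total hours worked in jobs paying less than the chosen threshold."""
    return sum(hours for wage, hours in records if wage < threshold)

for threshold in (11, 13, 19, 25):
    print(f"under ${threshold}/hour: {low_wage_hours(hourly_records, threshold)} hours")
```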

 


 

The Claim: Low-wage workers are making gains, because in almost all categories of traditionally low-wage work, there are more employers in the market than at any time in the city’s history.

Simply irrelevant.  The number of employers in the market tells us nothing about how many hours of low-wage work are actually being demanded.  Again, the analysis was based on total labor demand for low-wage workers as expressed in total employment hours across all sectors.  More firms dividing up the same (or a smaller) total number of hours is not a gain for low-wage workers.

 


 

The Claim: This study cannot be reconciled with the body of previous research, including Reich’s recent study of restaurant labor in Seattle, that indicates minimum wage increases don’t lead to job losses.

There are two parts to my response.  First, that body of previous research is MUCH more divided than Ms. Smith seems to believe, but that’s to be expected from someone who so demonstrably cherry-picks statements to support her point.  While one school of thought, led by researchers like Card and Krueger (the so-called New Minimum Wage Theorists), believes their research supports Ms. Smith’s argument, their claims have consistently been rebutted on methodological grounds by other researchers like Wascher and Neumark.  Over 70% of economists looking at the conflicting evidence have come down in support of the hypothesis that minimum wage increases lead to job loss among minimum wage workers, as cited by Mankiw in Principles of Economics.  I discuss both points of view more extensively in “Well, It’s Complicated #1.”

Second, the paper has a two-and-a-half-page section entitled “Reconciling these estimates with prior work,” where the authors discuss this issue quite in depth, including pointing out that when they limit their analysis to the methods used by previous researchers, their results are consistent with that prior work, and they, too, support Reich’s conclusions regarding the restaurant industry specifically.  In short, yes, this study ABSOLUTELY can be reconciled with the body of previous research.  That body just doesn’t say what Ms. Smith apparently believes it does.

 


 

So where does that leave us?  Quite simply, Ms. Smith is wrong.  Absolutely none of her criticisms of the paper hold water.  Actually, this is one of the most impressive econometric studies I’ve ever read—it even uses the Synthetic Controls methodology that I’ve previously criticized (see my article, “Lies, Damn Lies, and Statistics”), but it uses it in the limited, narrowly focused manner in which that methodology can provide useful results.  And it does an excellent job of demonstrating that despite the booming Seattle economy, the rapid increase in the city’s minimum wage has hurt the very employees it was intended to help, reducing their total monthly income by an average of 6.6%.

 



 

Original paper can be found here: http://www.nber.org/papers/w23532

 

Lies, Damn Lies, and Statistics: A Methodological Assessment

Last month, a National Bureau of Economic Research working paper made headlines across the internet when it claimed to demonstrate that so-called “Right to Carry” (RTC) laws increased violent and property crime rates above where they would have been without the passage of such laws.  Now, most science reporting is done by people with zero technical background in the advanced statistical techniques used by the paper’s authors, so I was a bit skeptical it actually said what they were claiming it said.  Fortunately, I DO have such a technical background, and for several years now I’ve been following with great interest the academic arguments about the effects of legal guns on crime rates.  And after having read the paper in question (Right-to-Carry Laws and Violent Crime: A Comprehensive Assessment Using Panel Data and a State-Level Synthetic Controls Analysis. Donohue, Aneja, and Weber. 2017), I’ve come to the conclusion that I was both right and wrong.  Wrong in that the paper’s authors drew the conclusion stated by the journalists—they do, in fact, claim their data shows RTC laws increase crime.  But right in that the data doesn’t actually show that when you read it with a more critical eye.  Therefore, I’m going to take this opportunity to teach a lesson in why you shouldn’t trust paper abstracts or jump to the “conclusions” section, but should instead examine the data and analysis yourself.

Disclaimer: I am a firearms enthusiast and active in the firearms community at large.  However, I am also a scientist, and absolutely made my very best efforts to set that bias aside in reading this paper, and give it the benefit of the doubt.  Whether I succeeded or not is up to you to decide, but I believe my objections to the authors’ conclusions are based solely on methodological grounds and will stand up to the scrutiny of any objective observer.  Unfortunately, I cannot say the same about Professor Donohue and his co-authors, as their own personal bias against guns is quite evident from their concluding paragraphs.  Because of that bias, I firmly believe this paper is a perfect example of “Lies, Damn Lies, and Statistics.”

The paper itself is really divided into two sections: a standard multiple regression analysis and then a newer counterfactual method called “synthetic control analysis.”  The authors claim both analyses show that RTC laws increase crime.  I disagree, at least with the extent they believe this to be true.  Let’s look at each in turn.

First, the regression analysis.  The meat of this analysis is a comparison of four different models (and three variations of those models), for a total of seven specifications.  Multiple regression is a powerful tool for analyzing observational data: it attempts to control for several variables at once to see what impact each had on the target dependent variable.  In this paper, Donohue et al. build their own model specification (DAW) and compare it to three pre-existing models from other researchers (BC, LM, MM).  They looked at the effects of states’ passage of RTC laws on three dependent variables: murder rates, violent crime rates, and property crime rates.  The key point of their research is that it goes beyond previous papers in its data set: where previous research stopped at the year 2000, this paper looks at how the results change when the models are fed an additional 14 years of data, covering 1977-2014.

The problem here is that the authors claim their panel data analysis consistently shows a statistically significant increase in violent crime when using the longer time horizon ending in 2014.  This is a problem because, quite bluntly, no, it does not.  The DAW variable specification (their new, original model built for this analysis) DOES find an increase in violent crime and property crime rates (though not murder, which they acknowledge).  But the spline model of the same variables finds no statistically significant correlation whatsoever.  They even acknowledge this in their paper: “RTC laws on average increased violent crime by 9.5 percent and property crime by 6.8 percent in the years following adoption according to the dummy model, but again showed no statistically significant effect in the spline model.” (DAW 8).  But then they never mention it again or seek to address why the spline model—an alternative method that’s often preferred over polynomial interpolation for technical reasons—achieves such different results.  This spline model was built from the 2004 National Research Council report, and the authors used it earlier (sans other regressors) to show that the NRC’s tentative finding of a decrease in crime rates associated with RTC laws disappears when the data set is extended to 2014.  But when they re-run it with their own variables, the lack of statistical significance is mentioned in a single line and then never brought up again.

In fact, the spline model is used comparatively for all four regression specifications, and the only cases in which it finds ANY statistical significance are the two the authors themselves discredit as methodologically unsound (LM and MM in their original versions).  But this point is never addressed—the polynomial “Dummy Variable Model” and spline specifications dramatically disagree, no matter WHAT set of variables is chosen.  This, to me, strongly suggests that any conclusions drawn from the panel data regression analysis are highly suspect, and that the choice of specification deserves further review before they can be believed one way or the other.  Regression analysis is always extremely sensitive to specification, and results can shift dramatically based on what variables are included, what are omitted, and how they’re specified.  Unfortunately, the paper does not seem to discuss any testing for functional form misspecification (such as a Ramsey RESET test), so it is unclear if the authors compared their chosen model specification to other potential functional forms.  There’s no discussion, for example, of whether the polynomial or spline specification is more appropriate, and why.  This is a huge gap in the analysis that I would like to see addressed before I’m willing to accept any conclusions therefrom.*
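
For readers who want the dummy-versus-spline distinction made concrete, here’s a bare-bones sketch run on simulated data.  To be clear, this is my own illustration of the two ways of modeling a post-adoption effect; it is not the DAW specification, which includes many more regressors and controls.

```python
# Bare-bones sketch of the "dummy" vs. "spline" distinction on simulated data.
# This is NOT the authors' specification -- just the two ways of modeling a
# post-adoption effect: a one-time level shift versus a change in trend.

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1977, 2015)
adoption_year = 1995
post = (years >= adoption_year).astype(float)

# Simulated log crime rate: mild downward trend plus a small post-adoption trend break
crime = 5.0 - 0.01 * (years - 1977) + 0.005 * post * (years - adoption_year)
crime = crime + rng.normal(0, 0.02, size=years.size)

trend = (years - 1977).astype(float)

# Dummy specification: intercept, trend, post-adoption level shift
X_dummy = np.column_stack([np.ones_like(trend), trend, post])
# Spline specification: intercept, trend, post-adoption change in slope
X_spline = np.column_stack([np.ones_like(trend), trend, post * (years - adoption_year)])

beta_dummy, *_ = np.linalg.lstsq(X_dummy, crime, rcond=None)
beta_spline, *_ = np.linalg.lstsq(X_spline, crime, rcond=None)

print("dummy model: post-RTC level shift =", round(beta_dummy[2], 4))
print("spline model: post-RTC trend change =", round(beta_spline[2], 4))
```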

Additionally, panel data suffers from some of the same limitations as cross-sectional data, including a need for large data sets to be credible.  In this case, the analysis only looked at 33 states (those that passed RTC laws between 1977 and 2004), making any conclusions drawn from the limited N=33 data set tentative at best.  This is not necessarily the authors’ fault—much data is only available at the state level, so it’s much harder to do a broader assessment with more data points (e.g., by county).  But it certainly does increase the grain of salt with which the analysis should be taken.  Despite that, the authors seem quite willing to draw sweeping conclusions when they should, by rights, be a lot more cautious about conclusive claims.**

The second part of the paper is even more problematic.  In short, they build a counterfactual model of each state that passed an RTC law in the specified time period, and then compare the predicted crime rates in those simulated states against the observed crime rates in their real-world counterparts.  This is certainly an interesting statistical technique, and is mathematically ingenious.  It might even be a useful tool for certain applications.  Unfortunately, counterfactual analysis, no matter how refined, suffers from a fundamental flaw: by its very nature, it assumes the effects of a single event can be assessed in isolation.  In reality, as I’ve discussed before, human social systems are complex systems.  One major legal change will have dramatic effects across the board—that policy in turn drives many decisions down the line, so plucking out the one policy of interest and assuming all post-counterfactual decisions will remain the same is blatantly ridiculous.  It’s the statistical equivalent of saying “If only Pickett’s Charge had succeeded, the South would have won the Civil War.”  Well, no, because everything that happened AFTER Pickett’s Charge would have been completely different, so we can only make the vaguest guesses about what MAY have happened.

But that’s precisely what the authors are attempting to do here, and put the stamp of mathematical certainty on it to boot.  They built a model of each RTC state in the target period by comparing several key crime-rate-related variables to control states without RTC laws, and then assessed the model’s predicted crime rate against the crime rates actually observed, to make a causal claim about the RTC laws’ effects.  They decided their models were good fits by checking how well they tracked the fluctuations in crime rates in the years prior to the RTC law (the counterfactual point); if the two tracked closely enough, they declared the model a good predictor.  But that fails to account for the cascading changes that would have occurred AFTER the counterfactual point in a complex system.  The entire analysis rests on an incredibly flawed assumption, and thus NO conclusive answers can be derived from it.  At best, it raises an interesting question.
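
For the stats-curious, here’s the core of the synthetic controls idea stripped to its bones: pick nonnegative donor-state weights that sum to one and best match the treated state’s pre-law crime path, then treat the weighted donor average as the post-law counterfactual.  The data below are simulated and the code is my own illustration of the concept, not the authors’ implementation.

```python
# A toy version of the synthetic-control idea (my illustration, not the authors'
# code): fit nonnegative weights over untreated "donor" states, summing to one,
# that best reproduce the treated state's pre-law crime path; the weighted donor
# average after the law is then taken as the counterfactual. Data are simulated.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T_pre, T_post, n_donors = 15, 10, 5

donors_pre = rng.normal(6.0, 0.5, size=(T_pre, n_donors))
true_weights = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
treated_pre = donors_pre @ true_weights + rng.normal(0, 0.05, T_pre)

def pre_period_gap(w):
    """Squared distance between treated state and weighted donor average, pre-law."""
    return float(np.sum((treated_pre - donors_pre @ w) ** 2))

w0 = np.full(n_donors, 1.0 / n_donors)
result = minimize(pre_period_gap, w0, method="SLSQP",
                  bounds=[(0.0, 1.0)] * n_donors,
                  constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}])
weights = result.x

donors_post = rng.normal(6.0, 0.5, size=(T_post, n_donors))
synthetic_post = donors_post @ weights  # the "counterfactual" crime path after the law

print("fitted donor weights:", np.round(weights, 3))
```

The weights come out close to the ones used to simulate the treated state, which is exactly the point: the method can fit the pre-period beautifully, but nothing in that fit tells you the post-period relationship would have survived a major policy change.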

The paper isn’t worthless, by any means.  The panel data analysis does a good job showing that NO specification, including John Lott’s original model from which he built his flawed “More Guns, Less Crime” thesis, supports a claim that RTC laws decrease crime rates.  But that’s about all it does.  It hints at the possibility that RTC laws may increase violent and property crime rates (though not murder).  It certainly doesn’t conclusively demonstrate that claim, but it raises enough doubt that other researchers should tackle it in much more depth.  Similarly, the counterfactual “synthetic controls” analysis by no means proves a causal relationship between RTC laws and crime rates for the reasons explained above, but it raises an interesting question that should be examined further.

No, the problem is that the authors pay only lip service to the limitations of their analysis and instead make sweeping claims their data does not necessarily support: “The fact that two different types of statistical data—panel data regression and synthetic controls—with varying strengths and shortcomings and with different model specifications both yield consistent and strongly statistically significant evidence that RTC laws increase violent crime constitutes persuasive evidence that any beneficial effects from gun carrying are likely substantially outweighed by the increases in violent crime that these laws stimulate.”  (DAW, 39).  The problem is that the panel data regression is unclear given the discrepancies between the Dummy Variable and Spline Models, and less than solid given the low N value for cross-sectional comparisons; and that the synthetic controls rests on a flawed assumption about the nature of the social systems being modeled.

These limitations, combined with the many other papers looking at other types of regressions (such as the impacts of gun ownership in general on violent crime rates) that have been unable to find statistically significant correlations between legal gun prevalence and violent crime rates, make me extremely skeptical of this paper.  To be fair, it has yet to undergo peer review (it’s a working paper, after all), and it’s certainly possible many of my objections will be rectified in the final published version.  But right now, the best I can say for the data is that it raises some questions worth answering.  And it certainly doesn’t support the authors’ claim that their analysis is persuasive evidence of anything.  At least, not nearly as persuasive as they’d have you believe.

That’s why I said, at the beginning, never trust an abstract or a conclusion section: read the analysis for yourself, and only then see what the authors have to say about it.  Because there’s a great deal of truth to the old saying, “There are three kinds of lies: lies, damned lies, and statistics.”  Statistics are a powerful tool.  But even with the best intentions they’re easily manipulated, and even more easily misunderstood.

 


*For those of you who don’t speak “stats geek,” what this paragraph means is that essentially the authors compared two different types of models, which had dramatically different conclusions, and they kinda ignored that fact entirely and moved past it.  And then didn’t discuss anywhere in the paper itself or any of the appendices why they chose one over the other, or why they specified any of their models the way they did versus other options.  It isn’t damning, but it’s certainly suspiciously like a Jedi handwave: “This IS what our data says, trust us.”

**Again, for the non-statisticians, larger data sets tend to produce more reliable estimates–the larger your data set, the more likely it is that your model’s estimates approach reality.  Small data sets are inherently less reliable, and 33 observations per year in the panel data is a tiny data set.

 

The original paper is available here for anyone who cares to examine it for themselves: http://www.nber.org/papers/w23510

Voodoo Economics (Well, It’s Complicated #3)

  • Feldstein (1986)
  • Feldstein and Elmendorf (1989)
  • Garrison and Lee (1992)
  • Engen and Skinner (1992)
  • Slemrod (1995)
  • Auerbach and Slemrod (1997)
  • Mendoza et al. (1997)
  • Padovano and Galli (2001)
  • Gale and Potter (2002)
  • Desai and Goolsbee (2004)
  • Gale and Orszag (2005)
  • Eissa (2008)
  • Mertens and Ravn (2010)
  • Huang (2012)
  • Favero and Giavazzi (2012)
  • Yagan (2015)
  • Mertens (2015)
  • Zidar (2015)
  • Gale and Samwick (2017)

All of these studies, meta-studies, and papers have one thing in common: they all look at the effect of tax cuts on supply-side economic growth, either in the US specifically or in developed countries in general, by assessing empirical data.  They look both at narrowly framed cuts like the Bush cuts of 2001/2003 and at broad reforms like the Reagan cuts of 1981 and 1986.  And ALL of them find the empirical data shows effectively no statistically significant correlation between tax cuts and supply-side economic growth.  None.  Zip, zero, zilch, nada.

The theory of supply side economics, also known as top down economics, investment-side economics, trickle-down economics, or Reaganomics, is elegant.  It sounds good.  It fits the rational models of Chicago school neoclassical economists to a T. In short, it says tax cuts stimulate economic growth by freeing up capital for investment and spending.

The problem is that little to no evidence supports it actually working that way in the real world.  At all.  The closest we get is Romer and Romer (2010), which supports the idea that demand will increase in the short term in response to unexpected tax cuts, but stops short of any evidence demonstrating actual long term growth, especially on the supply side.

As Gale and Samwick (2017) puts it: “The argument that income tax cuts raise growth is repeated so often that it is sometimes taken as gospel.  However, theory, evidence, and simulation studies tell a different and more complicated story.  Tax cuts offer the potential to raise economic growth by improving incentives to work, save, and invest.  But they also create income effects that reduce the need to engage in productive economic activity, and they may subsidize old capital, which provides windfall gains to asset holders that undermine incentives for new activity.  In addition, tax cuts…not accompanied by spending cuts…will typically raise the federal budget deficit.  The increase in the deficit will reduce national saving…and raise interest rates, which will negatively affect investment.  The net effect of the tax cuts on growth is thus theoretically uncertain and depends on both the structure of the tax cut itself and the timing and structure of its financing.”

If you want to argue that taxes are a moral evil, fine.  I’m not going to delve into the philosophy of social choice theory here.  That’s your call.  But if you want to support your philosophical argument by saying the economics are on your side, that “basic economics” tell us tax cuts are always good for the economy by boosting growth, that I’ll refute all day every day, because it just ain’t true.

The simplistic story told by “supply side economics” advocates is pure political bullshit, unsupported by evidence or theory in a complex and nuanced reality, no matter how many Thomas Sowell books you’ve read.  In the face of the real world, it’s complicated.

How Is the Money Supply Like Gastric Acid? (Well, It’s Complicated #2)

Discussions of banking and financial regulation, at least on social media with non-experts, tend to take one of two fairly absolutist views: either bankers are inherently malevolent extortionists who do nothing more than take advantage of honest workers’ effort and thus need to be reined in by the watchful eye of government regulators, or the free market is a glorious paradise in which regulation is nothing more than an inefficient and unnecessary evil that hurts everyone by making the market less effective than it could be at producing wealth.  I hate to be the one to tell you, but neither view is correct.  Sorry.  But to see why, let’s take each in turn, and look at some examples.

First, there is little to no evidence that bankers are evil.  In fact, such views have been perpetuated throughout human history, and even underlie many anti-Semitic conspiracy theories (as until the modern era, Christian usury laws meant Jews were the primary financiers of medieval and renaissance Europe).  But the truth is the banking sector is a fundamental base of trade: it provides liquidity and investment capital that allows businesses to operate and expand.  Without investors, only the rich could afford to start businesses.  Thus the financial sector is not only not evil, it is intimately intertwined with every transaction in the modern world.  It allows for the existence of everything from start-up capital to pension funds to widespread home ownership.  Bankers want to make money, sure.  But by and large there’s no evidence they’re any more evil than their fellow non-financial industry citizens.  The existence of occasional bad actors like Bernie Madoff does not refute the vast amount of good that modern financial systems have done to develop economies and build general wealth and fuel trade and growth around the world.

But even with that firmly established, it does not mean regulation is unnecessary.  Even with the absolute best intentions, individual agents in the finance industry operate in a complex system.  Markets are highly interconnected and interdependent networks, and even if we grant the classical assumption that each agent is perfectly rational, the system in which they work means that the market does not act like a classical model would predict.  Rather, because of the high level of interconnectivity and interdependence, thousands of individually rational actors making perfectly rational decisions to optimize their own utility in their local environment interact in complex and often unpredictable ways, and feed off of each other.  What Agent A does in New York affects an investment decision of Agent B in London, which in turn influences the choices of Agent C in Tokyo, and so on for millions of decisions, rippling across the globe.  And all of these decisions are based on outside information as well, like weather patterns (for agricultural commodities futures) or political stability estimates or individual corporate strategies.  This network effect leads to emergent properties like speculative bubbles and market crashes and credit crunches and supply bottlenecks.  We see such inefficient trends even in 100% mechanically deterministic, perfectly rational simulations in agent-based computational models.  It’s even more inefficient when we introduce irrationality and the quirks of human individual and social behavior, like tendencies toward collusion and coercive practices and gaming the system through asymmetrical information and other “unfair” advantages. (For further information, please see the references I’ve listed at the end of this article.  I’ll also be elaborating more on the topic in my continuing Complex Systems series.)
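
If you want a sense of how little it takes to generate those emergent swings, here’s a deliberately tiny sketch of an agent-based price model.  It’s my own minimal illustration, not a model drawn from the references below: a few deterministic, rule-following traders interacting through a single price are enough to produce persistent boom-and-bust cycles instead of settling at the fundamental value.

```python
# A deliberately tiny agent-based sketch (my own illustration, not a model from
# the references below): a handful of deterministic, rule-following traders --
# some extrapolating the recent price trend, some betting on a return to
# fundamental value -- interact through a single price. Even this stripped-down
# setup produces persistent boom-and-bust cycles rather than settling at the
# fundamental value.

fundamental_value = 100.0
prices = [100.0, 101.0]     # seed the market with a small initial disturbance

n_trend_followers = 10      # buy when the price has been rising, sell when falling
n_fundamentalists = 5       # buy below fundamental value, sell above it
price_impact = 0.1          # how strongly net demand moves the price

for _ in range(200):
    recent_trend = prices[-1] - prices[-2]
    trend_demand = n_trend_followers * recent_trend
    value_demand = n_fundamentalists * 0.2 * (fundamental_value - prices[-1])
    net_demand = trend_demand + value_demand
    prices.append(prices[-1] + price_impact * net_demand)

print(f"price swings between {min(prices):.1f} and {max(prices):.1f}")
print(f"final price: {prices[-1]:.1f} (fundamental value is {fundamental_value})")
```

Real agent-based models in the literature are far richer than this, but even here the interaction of two simple rules keeps the price cycling around its “rational” value indefinitely.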

All of these features of markets, especially once the actual human elements are involved, can and do lead to widespread harm, from massive financial losses in crashes to widespread starvation and death in the case of depressions and economic collapse.  So what, then, can we do to try to control inefficient emergent properties like irrational bubbles and crashes as supply and demand get out of sync?

To answer this question, let’s briefly turn from economics, and turn instead to the human digestive system, using a metaphor first suggested to me by my dad.  Now, to be clear up front, I am neither a biologist nor a physiologist, so this will be a simplified metaphor to illustrate a point, rather than an examination of the mechanics of digestion.  But the human digestive system has evolved in such a way that it can control the amount of gastric acid in the stomach at any given time.  It does this because, when we were hunters and gatherers, we did not have a reliable source of food, so often our nutrient intake came in brief feasts—after a successful hunt or a profitable foraging effort—punctuated by long periods without food.  Thus the stomach needed to be able to adjust the level of acid, to digest food when it showed up, but avoid hurting itself when there was no food present.  Too much acid without food, and we get ulcers.  Too little acid when there IS food, and we can’t digest efficiently and have to sit around waiting for the food to dissolve slowly.  But the digestive system evolved a way to regulate the level of acid and adjust it as conditions change: keep it low during periods without food, ramp it up as necessary when food shows up, and then lower again to protect itself when the job is done.  This remarkable regulatory system gave us the flexibility to succeed as a species when we didn’t have a reliable food intake, and without it we’d likely have died off long before we figured out agriculture.  It’s not a perfect system: we still sometimes get ulcers, and we still sometimes have digestive problems if we gorge ourselves too fast and the system has to catch up after the fact.  But it works, pretty well, most of the time.

Now take that concept and apply it to the economy.  In this metaphor, food is market demand, and the acid is the money supply: it allows the market to process the demand as necessary.  But much like the stomach acid, a single constant level doesn’t work well.  Too much money supply, and we get massive inflation, and no one can afford anything regardless of demand.  Too little, and no one has money to buy things and trade grinds to a halt, and we might even get deflation (where people expect their money to be worth more in the future, so they’d rather hold on to it than spend it now).  The money supply, like our metaphorical gastric acid, has to be appropriate to the market’s requirements at the present time.  Therefore, the ability to adjust the money supply is essential to a smoothly functioning economy.  Money supply regulation helps the economy, by and large, by letting the market efficiently process demand through trade, without excessive inflation or deflation.

Now, much like the gastric acid regulatory system, money supply regulation isn’t perfect.  Generally it’s done by central banks like the Federal Reserve, which is a favorite target of free market advocates who are convinced the Fed has made the market worse and attribute many market problems, such as bubbles and crashes, to its interference.  However, there’s some decent evidence showing that’s not the case at all.  About a year and a half ago, I ran some numbers to see if the Fed has really made things worse.  What I found was that in the United States, prior to the founding of the Federal Reserve, depressions and recessions occurred on average every 4.33 years, lasting an average of 2.16 years each, with an average 22.8% peak-to-trough loss of business activity.  Since the founding of the Federal Reserve in 1913, they have occurred every 5.76 years, lasting an average 1.08 years, with an average peak-to-trough loss of only 10.1%.  If we look only at the period since the end of the Great Depression—an event which led to the creation of macroeconomic theory and its application by central banks—they drop to an average of 11 months every 6.33 years, with a peak-to-trough loss of a remarkably low 4.2%.  Now, I freely admit this was not a scientific, econometric analysis.  I did not control for confounding variables, so I’m not going to argue the Fed has itself caused the lower volatility of the markets since 1913.  But I’m not alone in noticing this trend: in financial economics, the period of approximately 1950-2007 is known as the “Great Moderation,” and prior to the 2007-08 crash, some financial and macroeconomic theorists firmly believed we’d “solved” the problem of major recessions, largely through high-level monetary and fiscal policy regulation.  Clearly, we have not (there’s a reason the Great Moderation ended in 2007).  But it’s virtually impossible to look at the empirical data and proclaim that the Fed somehow made things worse.  And there’s a very strong indication that policies such as regulating the money supply HAVE dramatically reduced market volatility by matching the metaphorical acid level to the metaphorical food level.
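
For transparency, that back-of-the-envelope exercise was nothing fancier than the calculation sketched below.  The peak and trough dates in the sketch are placeholders, not the actual business-cycle chronology I used; substitute the real dates to reproduce the averages.

```python
# Sketch of the back-of-the-envelope exercise described above. The dates below
# are placeholders, NOT the business-cycle chronology behind the figures in the
# text; substitute the real peak/trough dates to reproduce the calculation.

recessions = [        # (peak year, trough year) -- hypothetical examples only
    (1890.0, 1891.5),
    (1893.0, 1894.5),
    (1896.0, 1897.0),
]

durations = [trough - peak for peak, trough in recessions]
gaps = [recessions[i + 1][0] - recessions[i][0] for i in range(len(recessions) - 1)]

print(f"average duration: {sum(durations) / len(durations):.2f} years")
print(f"average time between onsets: {sum(gaps) / len(gaps):.2f} years")
```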

Much like the digestive system, however, it’s not a perfect system.  The experts and the regulators don’t always get it right.  Everyone makes mistakes and every system fails sometimes—especially when trying to control complex systems like economic markets.  Bubbles and crashes have not gone away even with a guiding hand on the wheel of the money supply.  There’s even some strong evidence that several Federal Reserve policies, combined with the independent actions of other regulators, inadvertently fueled the housing market bubble and risky financial practices that led to the 2007-08 Wall Street collapse.  I’m certainly not arguing against regulatory reform.  I’m just saying that the idea regulation always makes things worse does not stand up to even the most cursory examination.  Sure, it certainly can make things worse—micromanaging policies add an unnecessary and often harmful regulatory burden that makes companies less effective and the market worse overall—but, if applied carefully and gently in the areas it CAN help, then it can also reduce volatility and decrease the negative effects when market agents get it wrong and everything goes bad.  Bankers aren’t inherently evil actors who exploit those less fortunate than themselves, but complex systems like financial markets mean even when everyone is acting with the best intentions, things can go very wrong in a hurry, and effective regulatory systems can help prevent them doing so or mitigate the harm when they do.

The money supply is just one example of a regulatory system that can help the market as a whole, if used carefully.  It’s certainly not the only one—others include limiting collusion and coercive behavior, reducing the impact of asymmetrical information in decision-making so “insiders” can’t take unfair advantage of the rest of the market, and other regulations that act as referees to keep the market as fair as possible.  But there are clearly harmful and wasteful regulations, too, like burdensome tax requirements and unnecessary micromanaging rules.  “Regulation” is such a broad term that no pithy one-line explanation can possibly capture the whole picture, and each regulation needs to be examined individually in the context of how markets actually work to understand whether or not it’s valuable.  Like the title says, it’s complicated.

 


Further reading:

Eric Beinhocker, The Origin of Wealth: Evolution, Complexity, and the Radical Remaking of Economics, Harvard Business School Press, 2006

W. Brian Arthur, Complexity and the Economy, Oxford University Press, 2014