Gangsters and Guns: The Root Causes of Violence in Drug Trafficking (Well, It’s Complicated #5)

A common argument, especially among the more libertarian-minded, is that because a lot of gun violence in America comes from drug dealers, ending the “War on Drugs” would immediately lead to plunging gun violence, especially in poor inner city neighborhoods where the drug trade is rampant and controlled by violent gangs.  But the truth is that’s too simple.

While I do tend to agree that a significant percentage of gun violence is directly tied to the economics of the illegal drug supply chain, the problem is that “calling off the drug war” does not in itself mean “take steps to make it an aboveground legal market.” To see what I mean, let’s look at what motivates violence in the drug market.

Gangsters, by and large, don’t go around shooting each other for no reason (though sometimes they do–the profession attracts a lot of violent sociopaths). Generally, violence is used for two main purposes in black markets (of any kind, from drugs to human trafficking to gun running to illegal food sales in Venezuela): to maintain property rights (e.g., protect one’s stash, one’s territory, one’s supply chain, etc., against competition), and to enforce contract rights (e.g., to ensure people play by the rules and don’t try to rip you off, by ensuring they fear your reprisal).  Both can take the form of what is sometimes called “instrumental violence,” that is, violence directly related to economic activities.  But the latter–enforcing contract rights–can also account for a significant amount of what is called “expressive violence,” in that such violence serves to reinforce the gang’s power, control, and reputation in its territory, thus decreasing the likelihood competitors and customers will attempt to cross them.

To remove both of these sources of violence requires giving drug dealers alternate means to maintain their property rights and to enforce their contracts. This requires not merely ending prosecution and incarceration for nonviolent drug offenses, but actually giving the drug market access to legal structures: they need to be able to set up shop legally in exchange for rent, so they aren’t fighting over corners; they need to be able to enter legally binding contracts with suppliers; they need to be able to enforce their rights through the legal system–call the police when someone steals from them, sue a supplier for breach of contract when shipments go missing, etc.

Further, the individuals currently in the underground drug trade either need to be able to transition peacefully to the aboveground drug trade, or be removed from the market entirely–if the barriers to entry into the legal market are too high, the illegal market will continue to operate until it is either economically untenable (through competition with legal alternatives) or shut down by law enforcement. And we’ve seen how well the latter works in practice through the past half century of the Drug War, so it may not be the best option except for the most egregiously violent criminals. If current illegal drug dealers can transition to a legal, profitable, and regulated drug market, with the burden of enforcing property and contract rights shifted onto the government, they no longer need to risk felony charges for enforcing those rights themselves (and for that matter, they no longer need to risk getting shot in disputes with other illegal drug traffickers). Only those individuals enamored of a “gangster lifestyle” would have an incentive to continue criminal activities instead of shifting to the safer and still-profitable legal drug trade, thus greatly decreasing the overall rate of violence currently fueled by drugs being a highly profitable black market.

This will not solve gang activity, nor will it solve all gang-related violent crime.  There are certainly other criminal enterprises that many gangs involve themselves in–prostitution, gun running, illegal gambling, extortion, racketeering, etc.  Legalizing and regulating the drug trade, even with low barriers to entry to allow current drug traffickers to transition to legal markets, will have no effect on these other sources of revenue and their associated criminal violence.  But it will absolutely decrease overall violence as gangs get out of the drug trade and its associated high levels of street violence fade away, in favor of the relatively low violence in other criminal markets that rely less on direct territorial control of prized retail locations.

Ending the Drug War alone won’t affect the primary motivations for drug-related violent crime, because the crime isn’t generally caused by criminals trying to avoid prosecution and incarceration.  Violence in the drug trade isn’t caused directly by the drug war; it’s caused more broadly by the fact that the drug trade is a black market controlled by organized crime.  Because it’s a black market, those in the business of selling drugs have no way to stay in business except to make sure everyone plays by the rules or gets shot.  That’s the problem that needs to be addressed in order to solve the associated outcome of high levels of violence.

Heroes or Villains? The Economics of Price Gouging (Well, It’s Complicated #4)

The technical definition of price gouging is the practice of producers and/or retailers raising prices well above normal equilibrium levels during periods of short supply, generally during exogenous shocks that both decrease supply and increase demand.  In plain English, that means that when something happens to limit available supplies of high-demand items, such as an environmental disaster, some sellers increase their prices for those products to levels so far above normal that they’re seen as exorbitant.  For example, we often see the price of water and other basic staple supplies skyrocket in the days following major hurricanes and other disasters that disrupt supply chains.  The general public tends to condemn such behavior as taking unfair advantage of customers in their time of need, and many states have laws against it in the name of consumer protection.

But of course, the advocates of free market economics see it as nothing more than natural market forces, and often after such events they come out to hail store owners who engage in price gouging as heroes of the free market.  Recently, after Hurricane Harvey devastated the Houston region and Hurricane Irma did the same for the Caribbean and swaths of Florida, multiple articles appeared praising price gouging shopkeepers as heroic, or at the very least helpful rather than harmful.  The argument in both cases had basically four parts.  First, increased prices moderate demand by reducing hoarding tendencies and making customers only buy what they genuinely need, freeing up resources for other customers in need.  Second, higher prices encourage suppliers to bring in more of the goods in question to take advantage of the demand opportunity.  Third, there is no “right” price, only that dictated by the market—if the market dictates prices ten times what they were yesterday, that’s not immoral, it’s just economics.  To quote one such article, “facts should trump feelings,” so the feeling of unfairness is an illegitimate argument against economically rational behavior.  And fourth, private property rights mean individuals should be allowed to set whatever prices they want, because no one has a right to their property except through voluntary free exchange.  And since it’s just good business sense to sell your property for as much as possible, no one should tell you otherwise.  But I personally believe this, like many other pure free market views, presents an oversimplified picture of a much more complicated reality.  So let’s look at each in turn, and see what modern economic theory tells us is really going on.


 

Point the First: Price Increases Moderate Demand, Increasing Efficient Allocation of Resources

Well, yes and no.  Basic supply and demand theory tells us this is what SHOULD happen: prices increase, so people buy less (e.g., only what they really need), thus preventing hoarding.  The problem is that this moderation of demand is bounded by the fact that the goods in question generally are staple goods, necessary to survival in emergency conditions.  Thus, there’s a limit to elasticity.  Elasticity is how much demand changes as prices increase or decrease, and the more “inelastic” something is, the less sensitive demand levels are to price changes: few people drink more water when the price is low, nor are they physically able to drink much less even when the price is extremely high.  People need a minimum amount of water intake to survive.  Similarly, they need food.  They need heat.  Depending on the emergency situation in question, they may need other basic staple supplies like wood to reinforce their shelters and protect themselves from the elements, etc.
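To make elasticity concrete: the standard “arc” formula is just the percent change in quantity divided by the percent change in price.  Here’s a minimal sketch with hypothetical numbers; anything with an absolute value below 1 counts as inelastic.

```python
def arc_elasticity(q0, q1, p0, p1):
    """Arc (midpoint) price elasticity of demand:
    percent change in quantity demanded / percent change in price."""
    pct_q = (q1 - q0) / ((q0 + q1) / 2)
    pct_p = (p1 - p0) / ((p0 + p1) / 2)
    return pct_q / pct_p

# Hypothetical numbers: the price of bottled water triples after a storm,
# but purchases barely fall, because people need a minimum amount to survive.
print(arc_elasticity(q0=100, q1=90, p0=1.00, p1=3.00))  # ~ -0.11
# |elasticity| well below 1: demand is highly inelastic, so price moves
# do almost nothing to moderate how much people buy.
```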

The reason societies find price gouging so egregious is because it’s seen as taking advantage of inelastic demand in a time when customers have no other option but to pay whatever the seller asks.  Yes, it might moderate hoarding behavior, but that purpose could be achieved by limiting purchase quantities per customer, without hurting those who can’t afford the elevated prices on goods they literally need to stave off death.  Plus there’s the flip side of the coin: it may also increase hoarding, as customers are afraid prices will continue to rise or supply will run out entirely, so it’s better to pay exorbitant prices now than risk going without later.  Limiting hoarding, even if effective, is a relatively minor positive when the potential externality is extreme hardship and potential death for those who literally cannot afford the high cost of these basic necessities.


 

Point the Second: Higher Profit Margins Encourage Suppliers to Increase Supply to Meet Demand

In an ideal hypothetical economic landscape, this would absolutely be true.  Unfortunately, we don’t live in the hypothetical world of free market models.  Our real world has what is sometimes called transfer friction, or in layman’s terms, logistics concerns: increasing supply means either producing more or bringing in supplies from outside sources.  In the emergency situations that often lead to price gouging, like natural disasters, often neither of these is physically possible for days or even weeks following the initial event.  Producers within the disaster zone are unlikely to be able to increase production capacity when all the infrastructure, including their own supply chains, has just been destroyed.  Nor can outside suppliers transport goods into the zone effectively when the logistics infrastructure has been devastated.  The logistics infrastructure that DOES exist is often commandeered for general disaster relief efforts, so there’s not much capability to flood the market with goods for a profit motive, even if outside suppliers desperately want to do so.

Take, for example, Hurricane Harvey: the public water system was contaminated with floodwaters, making it unsuitable for human consumption in many areas.  Thus drinking water had to come in the form of bottles or purifiers.  But no one IN the disaster zone could readily produce a significant supply of potable water, and no one OUTSIDE the zone could readily transport such a supply into the zone for sale, as the roads and railroads and ports were under water or heavily damaged.  The market could not easily respond to the increased demand level by increasing supply (and thus bringing prices back down) at least until the infrastructure could be repaired.  Thus increased prices do NOT effectively incentivize increased supply during an emergency when the supply chains cannot support it.


 

Point the Third: There Is No Right Price, Just the Market Price.  Facts, Not Feelings.

Maybe.  In terms of morals, sure.  But human markets aren’t made up of perfectly rational and emotionless decision makers.  They’re made of humans.  And both neuroeconomics and behavioral economics provide very strong evidence that feelings matter as much as reason in economic decision making.

Prospect theory, for which Daniel Kahneman won the Nobel Prize in Economics in 2002, tells us that we as humans don’t judge value in terms of absolutes, but in terms of gains and losses from the previous baseline.  Comparisons are relative—we might say that our subconscious minds think in terms of percentages rather than absolute values.  When my bank account is empty, a $100 windfall is a huge gain; when I’m a millionaire I barely even notice it.  Similarly, if a product is $1000 one day and $1100 the next, it’s a small jump, but when it instead goes from $10 to $110, we say the price “skyrocketed,” even though both increases are exactly the same amount of money in absolute terms.  What this means is that modern economic theory tells us there in fact IS a “right” price in economic decision-making, at least when the decision-maker is Homo sapiens rather than Homo economicus: the prior baseline from which the decision-maker is judging relative gains and losses.  If the price goes down, we are happy.  If it goes up, we are unhappy.  And if it skyrockets, we are angry.
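The two price jumps in that example differ only in their baseline, which a few lines of Python make obvious (the numbers come from the paragraph above; the function is just an illustration):

```python
def relative_change(old_price, new_price):
    """Change relative to the prior baseline -- roughly how prospect theory
    says we perceive a price move, rather than by absolute dollars."""
    return (new_price - old_price) / old_price

for old, new in [(1000, 1100), (10, 110)]:
    print(f"${old} -> ${new}: +${new - old} absolute, "
          f"{relative_change(old, new):+.0%} relative")
# Both moves are +$100 in absolute terms; one feels like +10%,
# the other like +1000% -- "skyrocketing" from the reference point.
```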

This becomes important, because emotions have a major impact on our decision-making.  Which ties into the final point.


 

Point the Fourth: It’s Good Business to Maximize Profit, and Property Rights Mean No One Should Tell Us No

First, let’s be clear here.  I am staunchly in favor of property rights and the freedom to do what you want with your property so long as it isn’t actively infringing on the rights of others.  That said, human beings are social animals.  We’ve evolved in a social context, and many of our evolved behaviors are directly optimized for social, rather than individual, survival.  Two of the most interesting (and relevant) of these evolved social behaviors are what behavioral economists call “inequity aversion” and “altruistic punishment.”

Inequity aversion is the well-demonstrated tendency for people to dislike being treated unfairly and seeing others being treated unfairly.  The Golden Rule isn’t just something your elementary school teacher taught you to ease classroom interactions with the other kids—it’s a fundamental feature of human nature.  Yes, we’re by and large self-centered, but almost all of us have an innate sense of fair play, and we disapprove of behaviors that violate such unspoken norms.

Altruistic punishment is the (also well-demonstrated) willingness of human beings to go out of their way, often to the point of actively hurting their own self-interest, to punish those they see as behaving unfairly.  So not only do we dislike unfair behavior, but we want to punish it when we see it, even if it costs us to do so.  (This is a fascinating social trait, and some researchers believe our unique version of altruistic punishing behavior is one of the keys to human success versus other social animals.)

Why does this matter?  Well, while individuals should have the right to sell their property for any price they want, a basic understanding of modern economics tells us why price gouging is a terrible business decision in the long run.  From behavioral economics, we know people dislike being treated unfairly and seeing others treated unfairly.  We also know that they are willing to go out of their way to punish those they see as treating themselves or others unfairly.  And from Prospect Theory, we know that such judgements of unfairness are influenced not by the absolute value of the price increase, but rather by its relative value compared to the prior baseline.

Game theory helps us mathematically model optimal decision making in interactive situations, like when a seller is deciding whether to raise the price of an inelastic good to take advantage of temporarily increased demand.  But the optimal choice differs when the game is played once (such as a transaction between individuals who will likely never see each other again) and when it’s repeated (such as transactions between a shopkeeper and his or her regular customers).  In a one-shot game, there is no long term loss from treating someone unfairly, because they have to take it or leave it—and for inelastic goods, they’re probably going to take it.  But in a repeated game, treating people unfairly may lead to a temporary spike in profit margins, but is likely to be repaid with long-term punishment, such as formerly regular customers shopping elsewhere because they no longer want to deal with someone who they feel took advantage of them in their time of need.  There is a strong business incentive for local stores to avoid being seen as acting unfairly, because long term profits are heavily impacted by short term perceptions.
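Here’s a minimal sketch of how that repeated-game logic can flip the business case.  Every number in it (margins, time horizon, defection rate) is an invented illustrative assumption, not data:

```python
def total_profit(gouge, months=24, customers=100, normal_margin=5.0,
                 gouge_margin=50.0, defection_rate=0.6):
    """Crude repeated-game payoff: gouging yields a one-month windfall, but a
    fraction of regular customers defect to competitors for every month after.
    Every parameter here is an illustrative assumption, not a measured value."""
    if not gouge:
        return months * customers * normal_margin
    windfall = customers * gouge_margin
    aftermath = (months - 1) * customers * (1 - defection_rate) * normal_margin
    return windfall + aftermath

print(total_profit(gouge=False))  # 12000.0 -- steady profit from fair pricing
print(total_profit(gouge=True))   #  9600.0 -- windfall, then lost custom
```

Under these toy assumptions the one-month windfall never catches up with two years of lost regulars; change the defection rate or the time horizon and the balance moves, which is exactly the one-shot versus repeated distinction.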

Maximizing profits when your customers are desperate may well lead to having no customers when they have competing options.  Altruistic punishment means they’ll likely be willing to go to the competition, even if it’s a bit out of their way, rather than reward you with their custom.  And they may even be able to get their friends and neighbors to do the same—we’re social animals.  Do what you want with your property, but be aware that actions that are seen as unfair may very well have longer term repercussions.


 

Price gouging is not merely increasing prices in response to demand.  It is a huge price increase relative to previous baseline prices, at a time of high inelastic demand, when supply physically cannot increase to match said demand.  Thus, it’s sellers taking advantage of a situation in which buyers must buy their product at the price they set, because there are no other options like going without or shopping elsewhere.  Human social nature means we see this as unfair and are willing to punish such behavior even at cost to ourselves, which makes it a risky business decision, trading short term certain profit for long term potential losses.  It may or may not limit hoarding, but it most certainly hurts those who can’t afford the new prices for goods they desperately need.  Price gougers may not be evil villains, but they certainly aren’t noble heroes.  They’re just people trying to make a quick buck.

On Social Contracts and Game Theory

The “social contract” is a theory of political philosophy, formalized by Enlightenment thinkers like Rousseau (who coined the term), Hobbes, Locke, and their contemporaries, but tracing its roots back to well before the birth of Christ.  Social contract theories can be found across many cultures, such as in ancient Buddhist sources like the edicts of Asoka and the Mahāvastu, and in ancient Greeks like Plato and Epicurus.  The idea of the social contract is that individual members of a society either explicitly or implicitly (by being members of that society) exchange some of their absolute freedom for protection of their fundamental rights.  This is generally used to justify the legitimacy of a governmental authority, as the entity to which individuals surrender some freedoms in exchange for that authority protecting their other rights.

At its most basic, then, the social contract can be defined as “an explicit or implicit agreement that society—or its representatives in the form of governmental authorities—has the legitimate right to hold members of said society accountable for violations of each other’s rights.”  Rather than every member of a society having to fend for themselves, they agree to hold each other accountable, which by necessity means accepting limitations on their own freedom to act as they please (because if their actions violate others’ rights, they’ve agreed to be held accountable).

The purpose of this article isn’t to rehash the philosophical argument for and against social contract theory.  It’s to point out that the evidence strongly demonstrates social contracts aren’t philosophy at all, but rather—much like economic markets—a fundamental aspect of human organization, a part of the complex system we call society that arose through evolutionary necessity and is by no means unique to human beings.  That without it, we would never have succeeded as a species.  And that whether you feel you’ve agreed to any social contract or not is irrelevant, because the only way to be rid of it is to do away with society entirely.  To do so, we’re going to turn to game theory and experimental economics.

In 2003, experimental economists Ernst Fehr and Urs Fischbacher of the University of Zurich published a paper they titled “The Nature of Human Altruism.”  It’s a fascinating meta-study, examining the experimental and theoretical evidence of altruistic behavior to understand why humans will often go out of their way to help others, even at personal cost.  There are many interesting conclusions in the paper, but I want to focus on one specifically—the notion of “altruistic punishment,” that is, taking actions to punish others for perceived unfair or unacceptable behavior even when it costs the punisher something.  In various experiments for real money, with sometimes as much as three months’ income at stake, humans will hurt themselves (paying their own money or forfeiting offered money) to punish those they feel are acting unfairly.  The more unfair the action, the more willing people are to pay to punish them.  Fehr and Fischbacher sought to understand why this is the case, and their conclusion plays directly into the concept of a social contract.

 

A decisive feature of hunter-gatherer societies is that cooperation is not restricted to bilateral interactions.  Food-sharing, cooperative hunting, and warfare involve large groups of dozens or hundreds of individuals…By definition, a public good can be consumed by every group member regardless of the member’s contribution to the good.  Therefore, each member has an incentive to free-ride on the contributions of others…In public good experiments that are played only once, subjects typically contribute between 40 and 60% of their endowment, although selfish individuals are predicted to contribute nothing.  There is also strong evidence that higher expectations about others’ contributions induce individual subjects to contribute more.  Cooperation is, however, rarely stable and deteriorates to rather low levels if the game is played repeatedly (and anonymously) for ten rounds. 

The most plausible interpretation of the decay of cooperation is based on the fact that a large percentage of the subjects are strong reciprocators [i.e., they will cooperate if others cooperated in the previous round, but not cooperate if others did not cooperate in the previous round, a strategy also called “tit for tat”] but that there are also many total free-riders who never contribute anything.  Owing to the existence of strong reciprocators, the ‘average’ subject increases his contribution levels in response to expected increases in the average contribution of other group members.  Yet, owing to the existence of selfish subjects, the intercept and steepness of this relationship is insufficient to establish an equilibrium with high cooperation.  In round one, subjects typically have optimistic expectations about others’ cooperation but, given the aggregate pattern of behaviors, this expectation will necessarily be disappointed, leading to a breakdown of cooperation over time.

This breakdown of cooperation provides an important lesson…If strong reciprocators believe that no one else will cooperate, they will also not cooperate.  To maintain cooperation in [multiple person] interactions, the upholding of the belief that all or most members of the group will cooperate is thus decisive.

Any mechanism that generates such a belief has to provide cooperation incentives for the selfish individuals.  The punishment of non-cooperators in repeated interactions, or altruistic punishment [in single interactions], provide two such possibilities.  If cooperators have the opportunity to target their punishment directly towards those who defect they impose strong sanctions on the defectors.  Thus, in the presence of targeted punishment opportunities, strong reciprocators are capable of enforcing widespread cooperation by deterring potential non-cooperators.  In fact, it can be shown theoretically that even a minority of strong reciprocators suffices to discipline a majority of selfish individuals when direct punishment is possible.  (Fehr and Fischbacher, 786-7)

 

In short, groups that lack the ability to hold their members accountable for selfish behavior and breaking the rules of fair interaction will soon break down as everyone devolves to selfish behavior in response to others’ selfishness.  Only the ability to punish members for violating group standards of fairness (and conversely, to reward members for fair behavior and cooperation) keeps the group functional and productive for everyone.*  Thus, quite literally, experimental economics tells us that some form of basic social contract—the authority of members of your group to hold you accountable for your choices in regards to your treatment of other members of the group, for the benefit of all—is not just a nice thing to have, but a basic necessity for a society to form and survive.  One might even say the social contract is an inherent emergent property of complex human social interaction.

But it isn’t unique to humans.  There are two major forms of cooperative behavior in animals: hive/colony behavior, and social group behavior.  Insects tend to favor hives and colonies, in which individuals are very simple agents that are specialized to perform some function, and there is little to no intelligent decision making on the part of individuals at all.  Humans are social—individuals are intelligent decision makers, but we survive and thrive better in groups, cooperating with members of our group in competition with other groups.  But so are other primates—apes and monkeys have small scale societies with leaders and accountability systems for violations of accepted behavior.  Wolf packs have leaders and accountability systems.  Lion prides have leaders and accountability systems.  Virtually every social animal you care to name has, at some level, an accountability system resembling what we call a social contract.  Without the ability to hold each other accountable, a group quickly falls apart and individuals must take care of themselves without relying on the group.

There is strong evidence that humans, like other social animals, have developed our sense of fairness and our willingness to punish unfair group members—and thus our acceptance that we ourselves can be punished for unfairness—not through philosophy, but through evolutionary necessity.  Solitary animals do not have a need for altruistic punishment.  Social animals do.  But as Fehr and Fischbacher also point out, “most animal species exhibit little division of labor and cooperation is limited to small groups.  Even in other primate societies, cooperation is orders of magnitude less developed than it is among humans, despite our close, common ancestry.”  So why is it that we’re so much more cooperative, and thus more successful, than other cooperative animals?  It is, at least in part, because we have extended our concept of altruistic punishment beyond that of other species:

 

Recent [sociobiological] models of cultural group selection or of gene-culture coevolution could provide a solution to the puzzle of strong reciprocity and large-scale human cooperation.  They are based on the idea that norms and institutions—such as food-sharing norms or monogamy—are sustained by punishment and decisively weaken the within-group selection against the altruistic trait.  If altruistic punishment is ruled out, cultural group selection is not capable of generating cooperation in large groups.  Yet, when punishment of [both] non-cooperators and non-punishers [those who let non-cooperation continue without punishment] is possible, punishment evolves and cooperation in much larger groups can be maintained.  (Fehr and Fischbacher, 789-90)

We don’t just punish non-cooperators.  We also punish those who let non-cooperators get away with it.  In large groups, that’s essential: in a series of computer simulations of multi-person prisoners’ dilemma games with group conflicts and different degrees of altruistic punishment, Fehr and Fischbacher found that no group larger than 16 individuals could sustain long term cooperation without punishing non-cooperators.  When they allowed punishment of non-cooperators, groups of up to 32 could sustain at least 40% cooperation.  But when they allowed punishment of both non-cooperators AND non-punishers, even groups of several hundred individuals could establish high (70-80%) rates of long-term cooperation.  Thus, that’s the key to building large societies: a social contract that allows the group to punish members for failing to cooperate, and for failing to enforce the rules of cooperation.
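The flavor of those simulations is easy to reproduce in miniature.  The toy sketch below is NOT Fehr and Fischbacher’s model (the conditional-cooperation rule, the 60/40 type split, and the all-or-nothing effect of punishment are my own illustrative assumptions), but it shows the core dynamic from the quoted passages: conditional cooperators plus free-riders produce decay, while targeted punishment sustains cooperation.

```python
import random

def final_cooperation(group_size, rounds=10, punish_defectors=False,
                      frac_reciprocators=0.6, seed=1):
    """Toy repeated public-goods game, loosely in the spirit of the dynamics
    Fehr and Fischbacher describe; all rules and parameters are illustrative.
      - Strong reciprocators cooperate with probability equal to the last
        round's observed cooperation rate (tit-for-tat-like).
      - Free-riders never cooperate -- unless targeted punishment makes
        defection a losing proposition, in which case they fall in line."""
    rng = random.Random(seed)
    is_reciprocator = [rng.random() < frac_reciprocators
                       for _ in range(group_size)]
    coop_rate = 1.0  # round-one optimism about others' cooperation
    for _ in range(rounds):
        actions = [(rng.random() < coop_rate) if recip else punish_defectors
                   for recip in is_reciprocator]
        coop_rate = sum(actions) / group_size
    return coop_rate

for n in (16, 200):
    print(f"n={n}: no punishment -> {final_cooperation(n):.0%}; "
          f"with punishment -> {final_cooperation(n, punish_defectors=True):.0%}")
# Without punishment, cooperation decays geometrically toward zero;
# with targeted punishment, it stays near 100% at any group size.
```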

It doesn’t much matter if you feel the social contract is invalid because you never signed or agreed to it, any more than it matters if you feel the market is unfair because you never agreed to it.  The social contract isn’t an actual contract: it’s an emergent property of the system of human interaction, developed over millennia by evolution to sustain cooperation in large groups.  Whatever form it takes, whether it’s an association policing its own members for violating group norms, or a monarch acting as a third-party arbitrator enforcing the laws, or a democracy voting on appropriate punishment for individual members who’ve violated their agreed-upon standards of behavior, there is no long-term successful human society that does not feature some form of social contract, any more than there is a long-term successful human society that does not feature some form of trading of goods and services.  The social contract isn’t right or wrong.  It just is.  Sorry, Lysander Spooner.

*Note: none of this is to say what structure is best for enforcing group standards, nor what those group standards should be beyond the basic notion of fairness and in-group cooperation.  The merits and downsides of various governmental forms, and of various governmental interests, are an argument better left to philosophers and political theorists, and are far beyond the scope of this article.  My point is merely that SOME form of social authority to punish non-cooperators is an inherent aspect of every successful human society, and is an evolutionary necessity.

Well, Actually… (A Rebuttal to a Rebuttal)

In June, researchers from the University of Washington released a National Bureau of Economic Research working paper entitled “Minimum Wage Increases, Wages, and Low-Wage Employment: Evidence from Seattle” (Jardim et al., 2017).  It made a lot of headlines for its claim that the increased minimum wage in Seattle (up to $13 this year, and planned to increase to $15 within the next 18 months) has cost low-wage workers money by reducing employment hours across the board.  Essentially, Jardim and her colleagues showed rather convincingly through an in-depth econometric analysis that while wages for the average low-income worker increased per hour, their hours were cut to an extent that the losses exceeded the gains, for a reduced total income.  It’s an impressive case of the argument from my first “Well, It’s Complicated” article playing out in reality.

However, not everyone is convinced.  A friend of mine alerted me to an article by Rebecca Smith, J.D., of the National Employment Law Project that argues the study MUST be bullshit, because it doesn’t square with what she sees as reality.  In the article, Ms. Smith makes six specific claims in her effort to rebut the study.  Unfortunately for her, all these claims do is demonstrate she either doesn’t know how to read an econometric paper, or she didn’t actually read it that closely, because four are easily disproven by the paper itself, and the other two are irrelevant.

Specifically, she claimed the following:

  • The paper’s findings cannot “be squared with the reality of Seattle’s economy,” because “At 2.5 percent unemployment, Seattle is very near full employment. A Seattle Times story from earlier this month reported a restaurant owner’s Facebook confession that due to the tight labor market ‘I’d give my right pinkie up for an awesome dishwasher.’ Earlier this year, Jimmy John’s advertised for delivery drivers at $20 per hour.”

 

  • “By the UW team’s own admission, nearly 40 percent of the city’s low-wage workforce is excluded from the data: workers at multisite employers like Nordstrom, Starbucks, or even restaurants with a few locations like Dick’s.”

 

  • “Even worse, any time a worker left a job with a single-site employer for one with a chain, that was treated as a “lost job” that was blamed on the minimum wage — and that likely happened a lot since the minimum wage was higher for those large employers.”

 

  • “…Every time an employer raised its pay above $19 per hour — like Jimmy John’s did — it was counted not as a better job, but as a low-wage job lost as a result of the minimum wage.”

 

  • “The truth is, low-wage workers are making real gains in Seattle’s labor market. In almost all categories of traditionally low-wage work, there are more employers in the market than at any time in the city’s history. There are more coffee shops, restaurants and hotels in Seattle than ever before. The work is getting done. And the largest (and best-paid) workforce in the history of the city is doing it.”

 

  • “Nor can the study be reconciled with the wide body of rigorous research — including a recent study of Seattle’s restaurant industry by University of California economist Michael Reich, one of the country’s foremost minimum-wage researchers — that finds that minimum-wage-increases studies have not led to any appreciable job losses.”

 

Let’s look at each of these in turn.

 


 

The Claim: This paper doesn’t match the reality of Seattle’s 2.5% unemployment rate, which is driving up wages regardless of the minimum wage increases due to high labor demand.

First, this isn’t an attack on the paper itself, just an expression of incredulity that demonstrates Ms. Smith apparently doesn’t understand how statistical analysis works—there are MANY factors that go into overall unemployment rates, and the minimum wage is just one of them.  Thus, the paper seeks to isolate the minimum wage’s effect on employment and hours in a given sector, and the overall unemployment rate is irrelevant to that analysis.

Second, Seattle’s unemployment rate is not 2.5%, and has not been 2.5% in a long time: the Bureau of Labor Statistics lists it as 2.9% in April 2017, its lowest point in the past year, and trended back up to 3.2% by May.  You don’t get to just make up numbers to refute points you don’t like.

Third, just to emphasize that this unemployment rate is not caused by the minimum wage increase, let’s compare Seattle to other cities.  At 3.2% unemployment in May, Seattle was tied with five other US cities: Detroit, San Diego, Orlando, San Antonio, and Washington, D.C.  All of these cities have their own minimum wages that vary between $8.10 and $13.75—but for a proper comparison, these rates have to be adjusted for cost of living.  When so adjusted, the lowest paid workers were those in Orlando, making the equivalent of a worker in Seattle taking home $10.94/hour.  The highest were those in San Antonio, with the equivalent of $19.39/hour at Seattle prices.  For comparison, workers actually IN Seattle were making just $13/hour in May—the average for all six cities was $13.28.  With such a range, can the “high” minimum wage be driving the employment rate that’s identical among all of them?  These six cities all tied for 11th place in lowest unemployment rates in the nation that month.  How about the best three?  First place goes to Denver, with a minimum wage of $11.53 (adjusted for Seattle cost of living).  Second to Nashville, at $9.72.  Third to Indianapolis, at $9.93.  I’d take a step back and reconsider any claim that the $13 minimum wage in Seattle is at all relevant to the overall employment rate, given that when you compare apples to apples, there is no apparent correlation at all.  Instead, let’s stick to what the paper was about: the impact on the total income of low-wage workers, given per-hour wage increases versus changes in hours worked.
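For clarity, the apples-to-apples adjustment above boils down to one line of arithmetic.  A minimal sketch, with a placeholder city and hypothetical index values, since I’m not reproducing the underlying cost-of-living data here:

```python
# Converting nominal minimum wages into "Seattle-equivalent" dollars using a
# relative cost-of-living index. The index values below are hypothetical
# placeholders -- the point is the method, not the numbers.
COL_INDEX = {"Seattle": 100.0, "CityX": 80.0}

def seattle_equivalent(nominal_wage, city):
    """Scale an hourly wage by cost of living relative to Seattle."""
    return nominal_wage * COL_INDEX["Seattle"] / COL_INDEX[city]

print(seattle_equivalent(10.00, "CityX"))  # $10 in CityX ~ $12.50 at Seattle prices
```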

 


 

The Claim: The paper excluded 40% of the city’s low-wage workforce by ignoring all multisite employers.

Quite simply, no, it did not.  The paper did NOT exclude all multisite employers.  It excluded SOME multisite employers.  And those employers don’t account for “nearly 40% of the city’s low-wage workforce,” but rather 38% of the ENTIRE workforce across the state as a whole—no mention is made of their proportion within Seattle itself.  And if Ms. Smith had read closely, she’d realize that not only does this make perfect sense, but if anything it just as likely biased the results toward UNDERESTIMATING the loss in employment hours for low-wage workers.

“The data identify business entities as UI account holders. Firms with multiple locations have the option of establishing a separate account for each location, or a common account. Geographic identification in the data is at the account level. As such, we can uniquely identify business location only for single-site firms and those multi-site firms opting for separate accounts by location. We therefore exclude multi-site single-account businesses from the analysis, referring henceforth to the remaining firms as “single-site” businesses. As shown in Table 2, in Washington State as a whole, single-site businesses comprise 89% of firms and employ 62% of the entire workforce (which includes 2.7 million employees in an average quarter).

Multi-location firms may respond differently to local minimum wage laws. On the one hand, firms with establishments inside and outside of the affected jurisdiction could more easily absorb the added labor costs from their affected locations, and thus would have less incentive to respond by changing their labor demand. On the other hand, such firms would have an easier time relocating work to their existing sites outside of the affected jurisdiction, and thus might reduce labor demand more than single-location businesses. Survey evidence collected in Seattle at the time of the first minimum wage increase, and again one year later, suggests that multi-location firms were in fact more likely to plan and implement staff reductions. Our employment results may therefore be biased towards zero.”  (Jardim et al., pp 14-15).

Essentially, the nature of the data required they eliminate 11% of firms in Washington State before beginning their analysis, because there was literally no way to tell which of their sites (and therefore which of their reported employees) were located within the city of Seattle.  Multi-site firms that reported employment hours by individual site were absolutely included, just not those that aggregate their employment hours across all locations.  But that’s okay, because on the one hand, such firms can potentially absorb increased labor costs at their Seattle sites, but on the other they can more easily shift work to sites outside the affected area and thus reduce labor demand within Seattle in response to increased wage bills.  And surveys suggest that such firms are more likely to lay off workers in Seattle than other firms—hence, excluding them from the data is just as likely to make the employment reduction estimates LOWER than they’d be if the firms were included as they are to bias the estimates positively.  Ms. Smith’s objection on this point only serves to prove she went looking for things to object to, rather than reading in depth before jumping to conclusions.

 


 

The Claim: Workers leaving included firms for excluded firms was treated as job loss.

Literally no, it was not.  The analysis was based on total reported employment hours and not on total worker employment.  When employers lose workers to other firms, they don’t change their labor demand.  Either other workers get more hours or someone new is hired to cover the lost worker’s hours.  If hours DO decrease when a worker leaves, that means the employer has reduced its labor demand and sees no need to replace those hours.  In which case, it IS “job loss” in the sense of reduced total employment hours.

 


 

The Claim: When employers raised wages above $19/hour, it was treated as job loss.

Again, literally no, it was not.  Not only does the paper have an extensive three-page section addressing why and how they chose the primary analysis threshold of $19/hour, they also discuss in their results section how they checked their results against other thresholds up to $25/hour.  In short, a lot of previous research has conclusively shown that increasing minimum wages has a cascading effect up the wage chain: not only are minimum wage workers directly affected by it, but also workers who make above minimum wage—but the results decrease the further the wage level gets from the minimum.  Jardim et al did a lot of in-depth analysis to determine the most appropriate level to cut off their workforce sector of interest, and determined the cascading effects became negligible at around $18/hour—and they chose $19/hour to be conservative in case their estimates were incorrect.  And they STILL compared their results to thresholds ranging from $11/hour to $25/hour and proved the effects of the $13 minimum wage were statistically significant regardless of the chosen threshold.

 


 

The Claim: Low-wage workers are making gains, because in almost all categories of traditionally low-wage work, there are more employers in the market than at any time in the city’s history.

Simply irrelevant.  Number of employers has zero effect on number of hours worked for each worker.  Again, the analysis was based on total labor demand for low-wage workers as expressed in total employment hours across all sectors.  The number of firms makes no difference to how many labor hours each firm is demanding per worker.

 


 

The Claim: This study cannot be reconciled with the body of previous research, including Reich’s recent study of restaurant labor in Seattle, that indicates minimum wage increases don’t lead to job losses.

There are two parts to my response.  First, that body of previous research is MUCH more divided than Ms. Smith seems to believe, but that’s to be expected from someone who so demonstrably cherry-picks statements to support her point.  While one school of thought, led by researchers like Card and Krueger (the so-called New Minimum Wage Theorists), believes their research supports Ms. Smith’s argument, their claims have consistently been rebutted on methodological grounds by other researchers like Wascher and Neumark.  Over 70% of economists looking at the conflicting evidence have come down in support of the hypothesis that minimum wage increases lead to job loss among minimum wage workers, as cited by Mankiw in Principles of Economics.  I discuss both points of view more extensively in “Well, It’s Complicated #1.”

Second, the paper has a two and a half page section entitled “Reconciling these estimates with prior work,” where the authors discuss this issue quite in depth, including pointing out that when they limit their analysis to the methods used by previous researchers, their results are consistent with those researchers’ results, and that they, too, support Reich’s conclusions in regards to the restaurant industry specifically.  In short, yes, this study ABSOLUTELY can be reconciled with the body of previous research.  That body just doesn’t say what Ms. Smith apparently believes it does.

 


 

So where does that leave us?  Quite simply, Ms. Smith is wrong.  Absolutely none of her criticisms of the paper hold water.  Actually, this is one of the most impressive econometric studies I’ve ever read—it even uses the Synthetic Controls methodology that I’ve previously criticized (see my article, “Lies, Damn Lies, and Statistics”), but it uses it in the intended limited and narrowly-focused manner in which it provides useful results.  And it does an excellent job of demonstrating that despite the booming Seattle economy, the rapid increase in the city’s minimum wage has hurt the very employees it intended to help, reducing their total monthly income by an average of 6.6%.

 



 

Original paper can be found here: http://www.nber.org/papers/w23532

 

Lies, Damn Lies, and Statistics: A Methodological Assessment

Last month, a National Bureau of Economic Research working paper made headlines across the internet when it claimed to demonstrate that so-called “Right to Carry” (RTC) laws increased violent and property crime rates above where they would have been without the passage of such laws.  Now, most science reporting is done by people with zero technical background in the advanced statistical techniques used by the paper’s authors, so I was a bit skeptical it actually said what they were claiming it said.  Fortunately, I DO have such a technical background, and for several years now I’ve been following with great interest the academic arguments about the effects of legal guns on crime rates.  And after having read the paper in question (Right-to-Carry Laws and Violent Crime: A Comprehensive Assessment Using Panel Data and a State-Level Synthetic Controls Analysis. Donohue, Aneja, and Weber. 2017), I’ve come to the conclusion that I was both right and wrong.  Wrong in that the paper’s authors drew the conclusion stated by the journalists—they do, in fact, claim their data shows RTC laws increase crime.  But right in that the data doesn’t actually show that when you read it with a more critical eye.  Therefore, I’m going to take this opportunity to teach a lesson in why you shouldn’t trust paper abstracts or jump to the “conclusions” section, but should instead examine the data and analysis yourself.

Disclaimer: I am a firearms enthusiast and active in the firearms community at large.  However, I am also a scientist, and absolutely made my very best efforts to set that bias aside in reading this paper, and give it the benefit of the doubt.  Whether I succeeded or not is up to you to decide, but I believe my objections to the authors’ conclusions are based solely on methodological grounds and will stand up to the scrutiny of any objective observer.  Unfortunately, I cannot say the same about Professor Donohue and his co-authors, as their own personal bias against guns is quite evident from their concluding paragraphs.  Because of that bias, I firmly believe this paper is a perfect example of “Lies, Damn Lies, and Statistics.”

The paper itself is really divided into two sections: a standard multiple regression analysis and then a newer counterfactual method called “synthetic control analysis.”  The authors claim both analyses show that RTC laws increase crime.  I disagree, at least with the extent they believe this to be true.  Let’s look at each in turn.

First, the regression analysis.  The meat of this analysis is comparing four different models (and three variations of those models) for a total of seven specifications.  Multiple regression analysis is a powerful tool to analyze observational data and attempt to control for several variables to see what impact each had on the target dependent variable.  In this paper, Donohue et al. build their own model specification (DAW), as well as comparing it to three pre-existing models from other researchers (BC, LM, MM).  They looked at the effects of states’ passage of RTC laws on three dependent variables: murder rates, violent crime rates, and property crime rates.  The key point of their research is that it goes beyond previous papers in its data set: where previous research has stopped at the year 2000, this paper looks at how the results change when the models are fed an additional 14 years of data, looking from 1977-2014.

The problem here is that the authors claim their panel data analysis consistently shows a statistically significant increase in violent crime when using the longer time horizon ending in 2014.  This is a problem because, quite bluntly, no, it does not.  The DAW variable specification (their new, original model built for this analysis) DOES find an increase in violent crime and property crime rates (though not murder, which they acknowledge).  But the spline model of the same variables finds no statistically significant correlation whatsoever.  They even acknowledge this in their paper: “RTC laws on average increased violent crime by 9.5 percent and property crime by 6.8 percent in the years following adoption according to the dummy model, but again showed no statistically significant effect in the spline model.” (DAW 8).  But then they never mention it again or seek to address why the spline model—an alternative method that’s often preferred over polynomial interpolation for technical reasons—achieves such different results.  This spline model was built from the National Research Council report in 2004, and they used it earlier (sans other regressors) to show that the NRC’s tentative finding of a decrease in crime rates associated with RTC laws disappears when the data set is extended to 2014.  But when they re-run it with their own variables, the lack of statistical significance is mentioned in a single line and then never brought up again.

In fact, the spline model is used comparatively for all four regression specifications, and the only cases in which it finds ANY statistical significance are the two the authors themselves discredit as methodologically unsound (LM and MM in their original versions).  But this point is never addressed—the polynomial “Dummy Variable Model” specification and the spline models dramatically disagree, no matter WHAT set of variables they choose.  This, to me, strongly suggests that any conclusions drawn from the panel data regression analysis are highly suspect, and the choice of specification deserves further review before they can be believed one way or the other.  Regression analysis is always extremely sensitive to specification, and results can shift dramatically based on what variables are included, what are omitted, and how they’re specified.  Unfortunately, the paper does not seem to discuss any testing for functional form misspecification (such as a Ramsey RESET test), so it is unclear if the authors compared their chosen model specification to other potential functional forms.  There’s no discussion, for example, of whether the polynomial or spline models are better and why.  This is a huge gap in the analysis that I would like to see addressed before I’m willing to accept any conclusions therefrom.*
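For readers who don’t speak “stats geek,” the dummy-versus-spline distinction is easy to show schematically.  Below is a deliberately stripped-down sketch on fabricated data (no state or year fixed effects, no controls, unlike the actual paper); the point is only the structural difference between the two specifications:

```python
import numpy as np
import statsmodels.api as sm

# Fabricated state-year panel data: the outcome is pure noise here, so the
# estimates mean nothing -- this only illustrates the two model structures.
rng = np.random.default_rng(0)
n = 500
years_since = rng.integers(-10, 15, n)                  # years relative to RTC adoption
post = (years_since >= 0).astype(float)                 # 1 in post-adoption years
post_trend = np.where(years_since >= 0, years_since, 0.0)
crime = rng.normal(size=n)                              # placeholder dependent variable

# Dummy model: a one-time level shift in crime after adoption.
dummy_model = sm.OLS(crime, sm.add_constant(post)).fit()
# Spline model: also lets the post-adoption *trend* in crime change.
spline_model = sm.OLS(crime, sm.add_constant(np.column_stack([post, post_trend]))).fit()
print(dummy_model.params, spline_model.params)
```

A law could produce no immediate jump but a gradual drift (which the dummy model misses), or a one-time jump with no drift (which the spline model dilutes), which is why the two can disagree so sharply on the same data.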

Additionally, panel data suffers from some of the same limitations as cross-sectional data, including a need for large data sets to be credible.  In this case, the analysis only looked at 33 states (those that passed RTC laws between 1977 and 2004), making any conclusions drawn from the limited N=33 data set tentative at best.  This is not necessarily the authors’ fault—much data is only available at the state level, so it’s much harder to do a broader assessment with more data points (e.g., by county).  But it certainly does increase the grain of salt with which the analysis should be taken.  Despite that, the authors seem quite willing to draw sweeping conclusions when they should, by rights, be a lot more cautious about conclusive claims.**

The second part of the paper is even more problematic.  In short, they build a counterfactual model of each state that passed an RTC law in the specified time period, and then compare the predicted crime rates in those simulated states versus the observed crime rates in their real world counterparts.  This is certainly an interesting statistical technique, and is mathematically ingenious.  It might even be a useful tool for certain applications.  Unfortunately, counterfactual analysis, no matter how refined, suffers a fundamental flaw: by its very nature, it assumes the effects of a single event can be assessed in isolation.  In reality, as I’ve discussed before, human social systems are complex systems.  One major legal change will have dramatic effects across the board—that policy in turn drives many decisions down the line, so plucking out the one policy of interest and assuming all post-counterfactual decisions will remain the same is blatantly ridiculous.  It’s the statistical equivalent of saying “If only Pickett’s Charge had succeeded, the South would have won the Civil War.”  Well, no, because everything that happened AFTER Pickett’s Charge would have been completely different, so we can only make the vaguest guesses about what MAY have happened.

But that’s precisely what the authors are attempting to do here, and put the stamp of mathematical certainty on it to boot.  They built models of each RTC state in the target period by comparing several key crime-rate-related variables to control states without RTC laws, and then assessed the predicted crime rate in that model against the actual reported crime rates in reality to make a causal claim about the RTC laws’ effects on those crime rates.  They decided their models were good fits by comparing how well they tracked the fluctuations in crime rates in the years prior to the RTC law’s passage (the counterfactual point); if the fit was close enough, they deemed the model a good predictor.  But that fails to account for the cascading changes that would have occurred AFTER the counterfactual point by the nature of a complex system.  The entire analysis rests on an incredibly flawed assumption, and thus NO conclusive answers can be derived from it.  At best, it raises an interesting question.
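Mechanically, the “synthetic” state is just a constrained least-squares fit: choose non-negative weights over donor states, summing to one, that best reproduce the treated state’s pre-period outcomes.  A minimal sketch of that core step, on fabricated data:

```python
import numpy as np
from scipy.optimize import minimize

# Fabricated pre-period crime rates: rows = pre-RTC years, cols = donor
# (non-RTC) states. The "treated" state is built from a hidden weighting
# of three donors, so a good fit should recover roughly those weights.
rng = np.random.default_rng(42)
donors = rng.normal(5.0, 1.0, size=(20, 10))     # 20 pre-years x 10 donor states
true_w = np.array([0.5, 0.3, 0.2] + [0.0] * 7)
treated = donors @ true_w + rng.normal(0, 0.05, size=20)

def pre_period_gap(w):
    """Sum of squared gaps between treated state and weighted donors."""
    return float(np.sum((treated - donors @ w) ** 2))

res = minimize(pre_period_gap, x0=np.full(10, 0.1), bounds=[(0, 1)] * 10,
               constraints=({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},))
print(np.round(res.x, 2))  # weights defining the "synthetic" state
# Post-period, the weighted donor outcomes serve as the counterfactual --
# which bakes in exactly the "nothing else changes" assumption criticized above.
```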

The paper isn’t worthless, by any means.  The panel data analysis does a good job showing that NO specification, including John Lott’s original model from which he built his flawed “More Guns, Less Crime” thesis, supports a claim that RTC laws decrease crime rates.  But that’s about all it does.  It hints at the possibility RTC laws may increase violent and property crime rates (though not murder).  It certainly doesn’t conclusively demonstrate that claim, but it raises enough doubt that other researchers should tackle it in much more depth.  Similarly, the counterfactual “synthetic controls” analysis by no means proves a causal relationship between RTC laws and crime rates for the reasons explained above, but it raises an interesting question that should be examined further.

No, the problem is that the authors pay only lip service to the limitations of their analysis and instead make sweeping claims their data does not necessarily support: “The fact that two different types of statistical data—panel data regression and synthetic controls—with varying strengths and shortcomings and with different model specifications both yield consistent and strongly statistically significant evidence that RTC laws increase violent crime constitutes persuasive evidence that any beneficial effects from gun carrying are likely substantially outweighed by the increases in violent crime that these laws stimulate.”  (DAW, 39).  The problem is that the panel data regression is unclear given the discrepancies between the Dummy Variable and Spline Models, and less than solid given the low N value for cross-sectional comparisons; and that the synthetic controls rests on a flawed assumption about the nature of the social systems being modeled.

These limitations, combined with the many other papers looking at other types of regressions (such as the impacts of gun ownership in general on violent crime rates) that have been unable to find statistically significant correlations between legal gun prevalence and violent crime rates, make me extremely skeptical of this paper.  To be fair, it has yet to undergo peer review (it’s a working paper, after all), and it’s certainly possible many of my objections will be rectified in the final published version.  But right now, the best I can say for the data is that it raises some questions worth answering.  And it certainly doesn’t support the authors’ claim that their analysis is persuasive evidence of anything.  At least, not nearly as persuasive as they’d have you believe.

That’s why I said, at the beginning, never trust an abstract or a conclusion section: read the analysis for yourself, and only then see what the authors have to say about it.  Because there’s a great deal of truth to the old saying, “There are three kinds of lies: lies, damned lies, and statistics.”  Statistics are a powerful tool.  But even with the best intentions they’re easily manipulated, and even more easily misunderstood.

 


*For those of you who don’t speak “stats geek,” what this paragraph means is that the authors ran two different types of models, which produced dramatically different conclusions, and they more or less ignored that fact and moved past it.  They also never discuss, anywhere in the paper itself or in any of the appendices, why they chose one over the other, or why they specified their models the way they did versus other options.  It isn’t damning, but it looks suspiciously like a Jedi hand-wave: “This IS what our data says, trust us.”

**Again, for the non-statisticians, larger data sets tend to produce more reliable estimates–the larger your data set, the more likely it is that your model’s estimates approach reality.  Small data sets are inherently less reliable, and 33 observations per year in the panel data is a tiny data set.
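(A toy demonstration of why: the simulation below, my own illustration and nothing from the paper, estimates the same quantity from samples of different sizes and shows how the spread of the estimates shrinks as the sample grows.)

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_EFFECT = 2.0  # the quantity we're trying to estimate
NOISE_SD = 10.0    # invented; real crime data is noisier still

# For each sample size, draw many samples and see how much the resulting
# estimates scatter around the truth (the standard error ~ 1/sqrt(N)).
for n in (33, 330, 3300):
    estimates = [rng.normal(TRUE_EFFECT, NOISE_SD, n).mean()
                 for _ in range(5000)]
    print(f"N={n:5d}: estimates scatter with SD ~ {np.std(estimates):.2f}")
```

With N = 33 the estimates scatter roughly ten times as widely as with N = 3,300, which is exactly the problem with leaning hard on 33 cross-sectional observations.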

 

The original paper is available here for anyone who cares to examine it for themselves: http://www.nber.org/papers/w23510

How Is the Money Supply Like Gastric Acid? (Well, It’s Complicated #2)

Discussions of banking and financial regulation, at least on social media with non-experts, tend to take one of two fairly absolutist views: either bankers are inherently malevolent extortionists who do nothing more than take advantage of honest workers’ effort and thus need to be reined in by the watchful eye of government regulators, or the free market is a glorious paradise in which regulation is nothing more than an inefficient and unnecessary evil that hurts everyone by making the market less effective than it could be at producing wealth.  I hate to be the one to tell you, but neither view is correct.  Sorry.  But to see why, let’s take each in turn, and look at some examples.

First, there is little to no evidence that bankers are evil.  In fact, such views have been perpetuated throughout human history, and even underlie many anti-Semitic conspiracy theories (as until the modern era, Christian usury laws meant Jews were the primary financiers of medieval and Renaissance Europe).  But the truth is the banking sector is a fundamental base of trade: it provides the liquidity and investment capital that allow businesses to operate and expand.  Without investors, only the rich could afford to start businesses.  Thus the financial sector is not only not evil, it is intimately intertwined with every transaction in the modern world.  It allows for the existence of everything from start-up capital to pension funds to widespread home ownership.  Bankers want to make money, sure.  But by and large there’s no evidence they’re any more evil than their fellow non-financial-industry citizens.  The existence of occasional bad actors like Bernie Madoff does not refute the vast amount of good that modern financial systems have done to develop economies, build general wealth, and fuel trade and growth around the world.

But even with that firmly established, it does not mean regulation is unnecessary.  Even with the absolute best intentions, individual agents in the finance industry operate in a complex system.  Markets are highly interconnected and interdependent networks, and even if we grant the classical assumption that each agent is perfectly rational, the system in which they work means that the market does not act like a classical model would predict.  Rather, because of the high level of interconnectivity and interdependence, thousands of individually rational actors making perfectly rational decisions to optimize their own utility in their local environment interact in complex and often unpredictable ways, and feed off of each other.  What Agent A does in New York affects an investment decision of Agent B in London, which in turn influences the choices of Agent C in Tokyo, and so on for millions of decisions, rippling across the globe.  And all of these decisions are based on outside information as well, like weather patterns (for agricultural commodities futures) or political stability estimates or individual corporate strategies.  This network effect leads to emergent properties like speculative bubbles and market crashes and credit crunches and supply bottlenecks.  We see such inefficient trends even in 100% mechanically deterministic, perfectly rational simulations in agent-based computational models.  It’s even more inefficient when we introduce irrationality and the quirks of human individual and social behavior, like tendencies toward collusion and coercive practices and gaming the system through asymmetrical information and other “unfair” advantages. (For further information, please see the references I’ve listed at the end of this article.  I’ll also be elaborating more on the topic in my continuing Complex Systems series.)
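To make that less abstract, here is a deliberately cartoonish sketch of the kind of agent-based model I mean; it’s my own toy, not one from the references at the end of this article.  Two mechanical, deterministic trading rules, each sensible on its own, jointly generate boom-bust price cycles that neither intends.

```python
# A toy two-rule market (my own cartoon, not a model from the references
# at the end of this article). "Fundamentalists" push the price toward a
# fixed fundamental value; "trend-followers" extrapolate the last price
# move. Both rules are mechanical and deterministic, yet together they
# generate boom-bust price cycles that neither rule intends on its own.
FUNDAMENTAL = 100.0
prices = [100.0, 101.0]  # seed with a tiny initial disturbance

for _ in range(200):
    p, p_prev = prices[-1], prices[-2]
    fundamentalist_demand = 0.05 * (FUNDAMENTAL - p)  # buy low, sell high
    trend_demand = 1.0 * (p - p_prev)                 # chase the last move
    prices.append(p + fundamentalist_demand + trend_demand)

print(f"price oscillated between {min(prices):.1f} and {max(prices):.1f}")
```

Two boring rules, no randomness, no malice, and the price still swings through endless booms and busts.  Now imagine millions of heterogeneous agents with real money at stake.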

All of these features of markets, especially with the actual human elements, can and do lead to widespread harm, from massive financial losses in crashes to widespread starvation and death in the case of depressions and economic collapse.  So what, then, can we do to try to control such inefficient emergent properties as irrational bubbles and crashes, when supply and demand get out of sync?

To answer this question, let’s briefly turn from economics, and turn instead to the human digestive system, using a metaphor first suggested to me by my dad.  Now, to be clear up front, I am neither a biologist nor a physiologist, so this will be a simplified metaphor to illustrate a point, rather than an examination of the mechanics of digestion.  But the human digestive system has evolved in such a way that it can control the amount of gastric acid in the stomach at any given time.  It does this because, when we were hunters and gatherers, we did not have a reliable source of food, so often our nutrient intake came in brief feasts—after a successful hunt or a profitable foraging effort—punctuated by long periods without food.  Thus the stomach needed to be able to adjust the level of acid, to digest food when it showed up, but avoid hurting itself when there was no food present.  Too much acid without food, and we get ulcers.  Too little acid when there IS food, and we can’t digest efficiently and have to sit around waiting for the food to dissolve slowly.  But the digestive system evolved a way to regulate the level of acid and adjust it as conditions change: keep it low during periods without food, ramp it up as necessary when food shows up, and then lower again to protect itself when the job is done.  This remarkable regulatory system gave us the flexibility to succeed as a species when we didn’t have a reliable food intake, and without it we’d likely have died off long before we figured out agriculture.  It’s not a perfect system: we still sometimes get ulcers, and we still sometimes have digestive problems if we gorge ourselves too fast and the system has to catch up after the fact.  But it works, pretty well, most of the time.

Now take that concept and apply it to the economy.  In this metaphor, food is market demand, and the acid is the money supply: it allows the market to process the demand as necessary.  But much like the stomach acid, a single constant level doesn’t work well.  Too much money supply, and we get massive inflation, and no one can afford anything regardless of demand.  Too little, and no one has money to buy things and trade grinds to a halt; we might even get deflation (where prices fall over time, so people prefer to hold on to their money rather than spend it now, knowing it will buy more later).  The money supply, like our metaphorical gastric acid, has to be appropriate to the market’s requirements at the present time.  Therefore, the ability to adjust the money supply is essential to a smoothly functioning economy.  Money supply regulation helps the economy, by and large, by letting the market efficiently process demand through trade, without excessive inflation or deflation.
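For those who like their metaphors pinned down with an equation, the standard way economists state this is the textbook “equation of exchange” (a well-worn identity, not something original to this metaphor):

```latex
% The equation of exchange: M = money supply, V = velocity of money,
% P = price level, Q = real output (the demand actually being met).
MV = PQ
% Hold V and Q roughly fixed: pump up M and P must rise (inflation).
% Shrink M instead: P must fall (deflation) or Q must contract
% (trade grinds to a halt). Hence the need to adjust M as Q changes.
```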

Now, much like the gastric acid regulatory system, money supply regulation isn’t perfect.  Generally it’s done by central banks like the Federal Reserve, which is a favorite target of free market advocates who are convinced the Fed has made the market worse and attribute many market problems, such as bubbles and crashes, to its interference.  However, there’s some decent evidence showing that’s not the case at all.  About a year and a half ago, I ran some numbers to see if the Fed has really made things worse.  What I found was that in the United States, prior to the founding of the Federal Reserve, depressions and recessions occurred on average every 4.33 years, lasting an average of 2.16 years each, with an average 22.8% peak-to-trough loss of business activity.  Since the founding of the Federal Reserve in 1913, they have occurred every 5.76 years, lasting an average 1.08 years, with an average peak-to-trough loss of only 10.1%.  If we look only at the period since the end of the Great Depression—an event which led to the creation of macroeconomic theory and its application by central banks—they drop to an average of 11 months every 6.33 years, with a peak-to-trough loss of a remarkably low 4.2%.  Now, I freely admit this was not a scientific, econometric analysis.  I did not control for confounding variables, so I’m not going to argue the Fed itself caused the lower volatility of the markets since 1913.  But I’m not alone in noticing this trend: in financial economics, the period from roughly the mid-1980s to 2007 is known as the “Great Moderation,” and prior to the 2007-08 crash, some financial and macroeconomic theorists firmly believed we’d “solved” the problem of major recessions, largely through high-level monetary and fiscal policy regulation.  Clearly, we have not (there’s a reason the Great Moderation ended in 2007).  But it’s virtually impossible to look at the empirical data and proclaim that the Fed somehow made things worse.  And there’s a very strong indication that regulating the money supply HAS dramatically reduced market volatility, by matching the metaphorical acid level to the metaphorical food level.

Much like the digestive system, however, it’s not a perfect system.  The experts and the regulators don’t always get it right.  Everyone makes mistakes and every system fails sometimes—especially when trying to control complex systems like economic markets.  Bubbles and crashes have not gone away even with a guiding hand on the wheel of the money supply.  There’s even some strong evidence that several Federal Reserve policies, combined with the independent actions of other regulators, inadvertently fueled the housing market bubble and the risky financial practices that led to the 2007-08 Wall Street collapse.  I’m certainly not arguing against regulatory reform.  I’m just saying that the idea that regulation always makes things worse does not stand up to even the most cursory examination.  Sure, it certainly can make things worse—micromanaging policies add an unnecessary and often harmful regulatory burden that makes companies less effective and the market worse overall—but, if applied carefully and gently in the areas where it CAN help, it can also reduce volatility and decrease the negative effects when market agents get it wrong and everything goes bad.  Bankers aren’t inherently evil actors who exploit those less fortunate than themselves, but in complex systems like financial markets, even when everyone is acting with the best intentions, things can go very wrong in a hurry, and effective regulatory systems can help prevent them from doing so or mitigate the harm when they do.

The money supply is just one example of a regulatory system that can help the market as a whole, if used carefully.  It’s certainly not the only one—others include limiting collusion and coercive behavior, reducing the impact of asymmetrical information in decision-making so “insiders” can’t take unfair advantage of the rest of the market, and other regulations that act as referees to keep the market as fair as possible.  But there are clearly harmful and wasteful regulations, too, like burdensome tax requirements and unnecessary micromanaging rules.  “Regulation” is such a broad term that no pithy one-line explanation can possibly capture the whole picture, and each regulation needs to be examined individually, in the context of how markets actually work, to understand whether or not it’s valuable.  Like the title says, it’s complicated.

 


Further reading:

Eric Beinhocker, The Origin of Wealth: Evolution, Complexity, and the Radical Remaking of Economics, Harvard Business School Press, 2006

W. Brian Arthur, Complexity and the Economy, Oxford University Press, 2014

 

 

In Memoriam: Kenneth J. Arrow

A personal hero of mine passed away yesterday.

Ken Arrow was an economist, best known for his work on General Equilibrium (also called the Arrow-Debreu Model), for which he became, in 1972, the youngest person ever to win the Nobel Prize in Economics.  He also did extensive work on social choice theory, creating “Arrow’s Impossibility Theorem,” which proved mathematically that when voters face more than two options, no ranked voting system can aggregate individual preferences into a consistent social ordering while satisfying a small set of basic fairness criteria.  He developed the Fundamental Theorems of Welfare Economics, mathematically confirming Adam Smith’s “Invisible Hand” hypothesis in ideal markets.  Essentially, he spent the first part of his extensive career formalizing the mathematics underlying virtually every rationalist model in the latter half of the 20th century, providing the basis for Neoclassical Theory for decades.  And then, presented with overwhelming evidence that actual markets do not adhere to general equilibrium behavior, rather than allowing himself to be trapped by the elegance of his own theories, he spent the rest of his life trying to understand why.

He worked on endogenous growth theory and information economics, trying to understand phenomena that traditional rational models could not predict.  He was an early and outspoken advocate for modern behavioralist models after they first came to his attention in a seminar presentation by Richard Thaler in the late 1970s. He was a close friend and collaborator of W. Brian Arthur, the father of Complexity Economics, enthusiastically recognizing the potential of complexity theory to revolutionize our understanding of market dynamics, even though that would mean his own Nobel Prize-winning theory about how markets work was completely wrong.  Ken Arrow was never afraid to listen to new evidence and admit the possibility of his own errors and misunderstandings. When he saw something that explained the evidence better, he never hesitated to pursue it wherever it led.

Because he was a scientist. I know no higher praise.

Farewell, Professor.

On Rationality (Economic Terminology, #1)

As an economist, I often find myself talking past people when trying to explain complicated economic theories.  Surprisingly, this is less because of the in-depth knowledge required, and far more because we aren’t using the same terminology.  Many words used in economic contexts have very different meanings than their common usage.  Utility and value, for one.  Margin, for another.  And perhaps the most common source of confusion is the concept of rationality.

In common usage, “rational” basically means “reasonable” or “logical.”  The dictionary definition, according to a quick Google check, is “based on or in accordance with reason or logic.”  Essentially, in common usage a rational person is someone who thinks things through and comes to a reasonable or logical conclusion.  Seems simple enough, right?

But not so in economics.  Traditional economic theory rests on four basic assumptions–rationality, maximization, marginality, and perfect information.  And the first of those, rationality, is the single biggest source of confusion when I try to discuss economic theory with non-economists.

To an economist, “rational” does not in the slightest sense mean “reasonable” or “logical.”  A rational actor is merely one who has well-ordered and consistent preferences.  That’s it.  That’s the entirety of economic rationality.  An economically rational actor who happens to prefer apples to oranges, and oranges to bananas, will never choose bananas over apples when given a choice between the two.  Such preferences can be strong (i.e., always prefers X to Y) or weak (i.e., indifferent between X and Y), but they are always consistent.  And those preferences can be modeled as widely or narrowly as you choose.  It could just be their explicit choices among a basket of goods, or you could incorporate social and situational factors like altruism, familial bonds, and cultural values.  They can be context dependent–one might prefer X to Y in Context A, and Y to X in Context B, but then one will always prefer X to Y in Context A and Y to X in Context B. It doesn’t matter: what their preferences actually are is irrelevant, no matter how ridiculous or unreasonable they might seem from the outside, so long as they are well-ordered and consistent.
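Since “well-ordered and consistent” can sound abstract, here’s a minimal sketch in code of what the definition actually demands.  (The checker is my own illustration; the fruit preferences are just the example above.)

```python
# Strict preferences written as (better, worse) pairs.
prefers = {("apple", "orange"), ("orange", "banana"), ("apple", "banana")}

def is_rational(prefs):
    """Check the two properties economic rationality actually requires."""
    # Consistency: never both X > Y and Y > X.
    if any((worse, better) in prefs for (better, worse) in prefs):
        return False
    # Transitivity: X > Y and Y > Z must imply X > Z.
    return all((x, z) in prefs
               for (x, y1) in prefs
               for (y2, z) in prefs
               if y1 == y2)

print(is_rational(prefers))                          # True
print(is_rational(prefers | {("banana", "apple")}))  # False: that's a cycle
```

Note what the checker doesn’t ask: whether preferring apples to everything is reasonable.  Economic rationality is purely structural.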

This isn’t to say preferences can’t change for a rational actor.  They can, over time.  But they’re consistent, at the time a decision is made, across all time horizons–if you give a rational actor the choice between apples and bananas, it doesn’t matter whether they will receive the fruit now or a day from now.  They will always choose apples, until their preferences change overall.

An irrational actor, then, is by definition anyone who does not have well-ordered and consistent preferences.  If an actor prefers apples to bananas when faced with an immediate reward, but bananas to apples when they won’t get the reward until tomorrow, they’re economically irrational.  And the problem is, of course, that most of us exhibit such irrational preferences all the time.  For proof, we don’t have to look any further than our alarm clocks.

A rational actor prefers to get up at 6:30 AM, so he sets his alarm for 6:30 AM, and wakes up when it goes off.  End of story.  An irrational actor, on the other hand, prefers to get up at 6:30 AM when he sets the alarm, but when it actually goes off, he hits the snooze button a few times and gets up 15 minutes later.  His preferences have flipped: what he preferred when he set the alarm and what he preferred when it came time to actually get up were very different, and not because his underlying preferences changed at all.  He will make the same decisions day after day after day, because his preferences aren’t consistent over different time horizons.  The existence of the snooze button is due to the fact that human beings do not, in general, exhibit economically rational preferences.  We can model such behavior with fancy mathematical tricks like quasi-hyperbolic discounting, but it is by definition irrational in economic terminology.
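Here’s a tiny worked example of that quasi-hyperbolic (“beta-delta”) trick, with invented utility numbers and parameters, showing how it reproduces the snooze-button flip:

```python
# Quasi-hyperbolic ("beta-delta") discounting: rewards available right
# now count in full; ALL future rewards get an extra penalty beta < 1.
# Both parameter values and the utility numbers below are invented.
BETA, DELTA = 0.6, 0.99

def value(utility, periods_away):
    if periods_away == 0:
        return utility                              # "now" is special
    return BETA * (DELTA ** periods_away) * utility

EXTRA_SLEEP, PRODUCTIVE_MORNING = 10, 14  # utils, chosen for illustration

# The night before, both options are in the future: be virtuous.
print(value(EXTRA_SLEEP, 1) < value(PRODUCTIVE_MORNING, 2))  # True: set alarm

# When the alarm rings, sleep is available NOW: hit snooze.
print(value(EXTRA_SLEEP, 0) > value(PRODUCTIVE_MORNING, 1))  # True: flipped
```

The same numbers, evaluated at two different moments, produce opposite choices, which is exactly the kind of time-horizon inconsistency the economic definition rules out.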

And that’s why behavioral economics is now a major field: at some point between Richard Thaler’s Ph.D. research in the late 1970s and his tenure as President of the American Economic Association a couple of years ago, most economists began to realize the limitations of models based on the unrealistic assumption of economic rationality, and began trying to model decision-making more in keeping with how people actually act.  Thaler predicted last year that “behavioral economics” will cease to exist as a separate field within three decades, because virtually all economics is moving toward a behavioral basis.

In future editions of this series, we’ll look at other commonly misunderstood economic terms, including the other three assumptions I mentioned: marginality, maximization, and perfect information.