
On Social Contracts and Game Theory

The “social contract” is a theory of political philosophy, formalized by Enlightenment thinkers like Rousseau (who coined the term), Hobbes, Locke, and their contemporaries, but tracing its roots back to well before the birth of Christ.  Social contract theories can be found across many cultures, such as in ancient Buddhist sources like Asoka’s edicts and the Mahāvastu, and in the writings of ancient Greeks like Plato and Epicurus.  The idea of the social contract is that individual members of a society either explicitly or implicitly (by being members of that society) exchange some of their absolute freedom for protection of their fundamental rights.  This is generally used to justify the legitimacy of a governmental authority, as the entity to which individuals surrender some freedoms in exchange for that authority protecting their other rights.

At its most basic, then, the social contract can be defined as “an explicit or implicit agreement that society—or its representatives in the form of governmental authorities—has the legitimate right to hold members of said society accountable for violations of each other’s rights.”  Rather than every member of a society having to fend for themselves, they agree to hold each other accountable, which by necessity means accepting limitations on their own freedom to act as they please (because if their actions violate others’ rights, they’ve agreed to be held accountable).

The purpose of this article isn’t to rehash the philosophical argument for and against social contract theory.  It’s to point out that the evidence strongly demonstrates social contracts aren’t philosophy at all, but rather—much like economic markets—a fundamental aspect of human organization, a part of the complex system we call society that arose through evolutionary necessity and is by no means unique to human beings.  That without it, we would never have succeeded as a species.  And that whether you feel you’ve agreed to any social contract or not is irrelevant, because the only way to be rid of it is to do away with society entirely.  To do so, we’re going to turn to game theory and experimental economics.

In 2003, experimental economists Ernst Fehr and Urs Fischbacher of the University of Zurich published a paper they titled “The Nature of Human Altruism.”  It’s a fascinating meta-study, examining the experimental and theoretical evidence on altruistic behavior to understand why humans will often go out of their way to help others, even at personal cost.  There are many interesting conclusions in the paper, but I want to focus on one specifically—the notion of “altruistic punishment,” that is, taking action to punish others for perceived unfair or unacceptable behavior even when it costs the punisher something.  In various experiments for real money, with sometimes as much as three months’ income at stake, humans will hurt themselves (paying their own money or forfeiting offered money) to punish those they feel are acting unfairly.  The more unfair the action, the more willing people are to pay to punish it.  Fehr and Fischbacher sought to understand why this is the case, and their conclusion plays directly into the concept of a social contract.

 

A decisive feature of hunter-gatherer societies is that cooperation is not restricted to bilateral interactions.  Food-sharing, cooperative hunting, and warfare involve large groups of dozens or hundreds of individuals…By definition, a public good can be consumed by every group member regardless of the member’s contribution to the good.  Therefore, each member has an incentive to free-ride on the contributions of others…In public good experiments that are played only once, subjects typically contribute between 40 and 60% of their endowment, although selfish individuals are predicted to contribute nothing.  There is also strong evidence that higher expectations about others’ contributions induce individual subjects to contribute more.  Cooperation is, however, rarely stable and deteriorates to rather low levels if the game is played repeatedly (and anonymously) for ten rounds. 

The most plausible interpretation of the decay of cooperation is based on the fact that a large percentage of the subjects are strong reciprocators [i.e., they will cooperate if others cooperated in the previous round, but not cooperate if others did not cooperate in the previous round, a strategy also called “tit for tat”] but that there are also many total free-riders who never contribute anything.  Owing to the existence of strong reciprocators, the ‘average’ subject increases his contribution levels in response to expected increases in the average contribution of other group members.  Yet, owing to the existence of selfish subjects, the intercept and steepness of this relationship is insufficient to establish an equilibrium with high cooperation.  In round one, subjects typically have optimistic expectations about others’ cooperation but, given the aggregate pattern of behaviors, this expectation will necessarily be disappointed, leading to a breakdown of cooperation over time.

This breakdown of cooperation provides an important lesson…If strong reciprocators believe that no one else will cooperate, they will also not cooperate.  To maintain cooperation in [multiple person] interactions, the upholding of the belief that all or most members of the group will cooperate is thus decisive.

Any mechanism that generates such a belief has to provide cooperation incentives for the selfish individuals.  The punishment of non-cooperators in repeated interactions, or altruistic punishment [in single interactions], provide two such possibilities.  If cooperators have the opportunity to target their punishment directly towards those who defect they impose strong sanctions on the defectors.  Thus, in the presence of targeted punishment opportunities, strong reciprocators are capable of enforcing widespread cooperation by deterring potential non-cooperators.  In fact, it can be shown theoretically that even a minority of strong reciprocators suffices to discipline a majority of selfish individuals when direct punishment is possible.  (Fehr and Fischbacher, 786-7)

 

In short, groups that lack the ability to hold their members accountable for selfish behavior and breaking the rules of fair interaction will soon break down as everyone devolves to selfish behavior in response to others’ selfishness.  Only the ability to punish members for violating group standards of fairness (and conversely, to reward members for fair behavior and cooperation) keeps the group functional and productive for everyone.*  Thus, quite literally, experimental economics tells us that some form of basic social contract—the authority of members of your group to hold you accountable for your choices in regards to your treatment of other members of the group, for the benefit of all—is not just a nice thing to have, but a basic necessity for a society to form and survive.  One might even say the social contract is an inherent emergent property of complex human social interaction.
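
For readers who like to see the mechanics, here is a minimal sketch (in Python) of the dynamic described in the quoted passage: strong reciprocators who match the previous round’s average contribution, mixed with free-riders who never contribute, produce cooperation that starts around half the endowment and decays over ten rounds.  This is not Fehr and Fischbacher’s experimental design or code, and every parameter (endowment, multiplier, group size, share of free-riders) is an assumption chosen purely for illustration.

```python
"""Toy repeated public goods game: reciprocators plus free-riders.

An illustrative sketch only -- not the cited experiments. All parameter
values are assumptions chosen to show the qualitative decay of cooperation.
"""

import random

ENDOWMENT = 20          # tokens each player receives per round (assumed)
ROUNDS = 10             # the experiments cited run for ten rounds
GROUP_SIZE = 12         # assumed group size
FREE_RIDER_SHARE = 0.3  # assumed share of purely selfish players


def play(group_size=GROUP_SIZE, rounds=ROUNDS, free_rider_share=FREE_RIDER_SHARE):
    n_free = int(group_size * free_rider_share)
    # True = free-rider, False = strong reciprocator
    types = [True] * n_free + [False] * (group_size - n_free)

    # Round 1: reciprocators start optimistic, contributing ~40-60% of endowment
    contributions = [0 if t else random.uniform(0.4, 0.6) * ENDOWMENT for t in types]

    history = []
    for _ in range(rounds):
        avg = sum(contributions) / group_size
        history.append(avg / ENDOWMENT)
        # Next round: reciprocators match the observed average; free-riders give 0
        contributions = [0 if t else avg for t in types]
    return history


if __name__ == "__main__":
    for rnd, coop in enumerate(play(), start=1):
        print(f"round {rnd:2d}: average contribution = {coop:5.1%} of endowment")
```

Run it and you can watch the average contribution shrink round after round, exactly the pattern the quoted passage describes: the reciprocators keep ratcheting down toward the free-riders, and cooperation collapses.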

But it isn’t unique to humans.  There are two major forms of cooperative behavior in animals: hive/colony behavior, and social group behavior.  Insects tend to favor hives and colonies, in which individuals are very simple agents that are specialized to perform some function, and there is little to no intelligent decision making on the part of individuals at all.  Humans are social—individuals are intelligent decision makers, but we survive and thrive better in groups, cooperating with members of our group in competition with other groups.  But so are other primates—apes and monkeys have small scale societies with leaders and accountability systems for violations of accepted behavior.  Wolf packs have leaders and accountability systems.  Lion prides have leaders and accountability systems.  Virtually every social animal you care to name has, at some level, an accountability system resembling what we call a social contract.  Without the ability to hold each other accountable, a group quickly falls apart and individuals must take care of themselves without relying on the group.

There is strong evidence that humans, like other social animals, have developed our sense of fairness and our willingness to punish unfair group members—and thus our acceptance that we ourselves can be punished for unfairness—not through philosophy, but through evolutionary necessity.  Solitary animals do not have a need for altruistic punishment.  Social animals do.  But as Fehr and Fischbacher also point out, “most animal species exhibit little division of labor and cooperation is limited to small groups.  Even in other primate societies, cooperation is orders of magnitude less developed than it is among humans, despite our close, common ancestry.”  So why is it that we’re so much more cooperative, and thus more successful, than other cooperative animals?  It is, at least in part, because we have extended our concept of altruistic punishment beyond that of other species:

 

Recent [sociobiological] models of cultural group selection or of gene-culture coevolution could provide a solution to the puzzle of strong reciprocity and large-scale human cooperation.  They are based on the idea that norms and institutions—such as food-sharing norms or monogamy—are sustained by punishment and decisively weaken the within-group selection against the altruistic trait.  If altruistic punishment is ruled out, cultural group selection is not capable of generating cooperation in large groups.  Yet, when punishment of [both] non-cooperators and non-punishers [those who let non-cooperation continue without punishment] is possible, punishment evolves and cooperation in much larger groups can be maintained.  (Fehr and Fischbacher, 789-90)

We don’t just punish non-cooperators.  We also punish those who let non-cooperators get away with it.  In large groups, that’s essential: in a series of computer simulations of multi-person prisoners’ dilemma games with group conflicts and different degrees of altruistic punishment, Fehr and Fischbacher found that no group larger than 16 individuals could sustain long term cooperation without punishing non-cooperators.  When they allowed punishment of non-cooperators, groups of up to 32 could sustain at least 40% cooperation.  But when they allowed punishment of both non-cooperators AND non-punishers, even groups of several hundred individuals could establish high (70-80%) rates of long-term cooperation.  Thus, that’s the key to building large societies: a social contract that allows the group to punish members for failing to cooperate, and for failing to enforce the rules of cooperation.
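
To make that result concrete, below is a toy deterrence model, again in Python and again with made-up parameters, in the spirit of (but far simpler than) the simulations described above.  The idea: reciprocators will only bear a limited personal cost to punish, so in large groups punishment collapses and cooperation with it, unless failing to punish is itself punishable.  It does not reproduce the paper’s exact thresholds; it only shows the logic of why second-order punishment scales.

```python
"""Toy model of why second-order punishment matters for large groups.

Not the Fehr/Fischbacher simulations -- an illustrative sketch under assumed
parameters. Reciprocators bear only a limited voluntary cost to punish, so in
large groups punishment collapses unless non-punishers are also punished.
"""

ENDOWMENT = 20        # tokens per round (assumed)
MULTIPLIER = 1.6      # public-good multiplier (assumed)
FINE = 6              # fine each punisher imposes on each defector (assumed)
PUNISH_COST = 1       # punisher's cost per defector punished (assumed)
PUNISH_BUDGET = 20    # cost a reciprocator will bear voluntarily (assumed)
META_FINE = 4         # fine on reciprocators who fail to punish (assumed)
RECIPROCATOR_SHARE = 0.25   # assumed minority of strong reciprocators


def cooperation_sustained(group_size, punish_non_punishers):
    punishers = int(group_size * RECIPROCATOR_SHARE)
    defectors = group_size - punishers

    # Step 1: will reciprocators actually punish?
    cost_of_punishing = defectors * PUNISH_COST
    meta_fines = (punishers - 1) * META_FINE if punish_non_punishers else 0
    punishment_happens = (cost_of_punishing <= PUNISH_BUDGET
                          or meta_fines > cost_of_punishing)

    # Step 2: given that, do selfish agents still find defection worthwhile?
    # Each contributed token returns MULTIPLIER / group_size to the contributor,
    # so withholding the whole endowment privately gains:
    gain_from_defecting = ENDOWMENT * (1 - MULTIPLIER / group_size)
    expected_fines = punishers * FINE if punishment_happens else 0
    return expected_fines > gain_from_defecting


if __name__ == "__main__":
    for second_order in (False, True):
        print(f"\npunish non-punishers: {second_order}")
        for n in (16, 32, 64, 256, 512):
            ok = cooperation_sustained(n, second_order)
            print(f"  group of {n:3d}: cooperation sustained = {ok}")
```

With these (arbitrary) numbers, first-order punishment alone holds up only in small groups, while adding punishment of non-punishers keeps cooperation viable even in groups of several hundred, which is the qualitative point of the passage above.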

It doesn’t much matter if you feel the social contract is invalid because you never signed or agreed to it, any more than it matters if you feel the market is unfair because you never agreed to it.  The social contract isn’t an actual contract: it’s an emergent property of the system of human interaction, developed over millennia by evolution to sustain cooperation in large groups.  Whatever form it takes, whether it’s an association policing its own members for violating group norms, or a monarch acting as a third-party arbitrator enforcing the laws, or a democracy voting on appropriate punishment for individual members who’ve violated their agreed-upon standards of behavior, there is no long-term successful human society that does not feature some form of social contract, any more than there is a long-term successful human society that does not feature some form of trading of goods and services.  The social contract isn’t right or wrong.  It just is.  Sorry, Lysander Spooner.

*Note: none of this is to say what structure is best for enforcing group standards, nor what those group standards should be beyond the basic notion of fairness and in-group cooperation.  The merits and downsides of various governmental forms, and of various governmental interests, are an argument better left to philosophers and political theorists, and are far beyond the scope of this article.  My point is merely that SOME form of social authority to punish non-cooperators is an inherent aspect of every successful human society, and is an evolutionary necessity.

Tulips, Traffic Jams, and Tempests (Part 2): The Properties of Complexity

In the first installment of this series, I discussed some well-known phenomena that are emergent effects of complex systems, and gave a general definition of complexity.  In this installment, we’re going to delve a little deeper and look at some common properties and characteristics of complex systems.  Understanding these properties helps us see what types of complex systems exist and what kinds of tools we have available to study complexity, which will be the topic of the third installment of the series.

There are four common properties that can be found in all complex systems:

  • Simple Components (Agents)
  • Nonlinear Interaction
  • Self-organization
  • Emergence

But what do these mean, and what do they look like?  Let’s examine each in turn.

 

SIMPLE COMPONENTS (AGENTS):

One of the most interesting things about complex systems is that they aren’t composed of complex parts.  They’re built from relatively simple components, compared to the system as a whole.  Human society is fantastically complex, but its individual components are just single human beings—which are themselves fantastically complex compared to the cells that are their fundamental building blocks.  Hurricanes are built of nothing more than air and water particles.  These components are also known as agents.  The two terms are largely interchangeable (the usual distinction among those who use both is that agents can make decisions and components cannot), but I prefer agents, and that will be the term used throughout the rest of this post.  But computer simulations show that even when agents can only make one or two very simple deterministic responses with no actual decision-making process beyond “IF…THEN…,” enough of them interacting will result in intricate complexity.  We see this in nature, too—an individual ant is one of the simplest animals around, driven entirely by instincts that lead it to respond predictably to encountered stimuli, but an ant colony is a complex system that builds cities, forms a society, and even wages war.  The wonder of complex systems is that they spring not from complexity, but from relative simplicity, interacting.  But there must be many of them—a single car on a road network is not a complex system, but thousands of them are, which leads us to our next property.
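
As a concrete (if stock) illustration that isn’t from this post itself: Wolfram’s Rule 30 cellular automaton is about as simple as an agent gets, a single IF…THEN lookup per cell per time step, yet the global pattern it produces is famously intricate.  A minimal Python sketch, with arbitrary lattice size and run length:

```python
"""Simple IF/THEN agents producing complexity: Wolfram's Rule 30.

Each cell's next state is a fixed lookup on itself and its two neighbors --
no decision-making at all -- yet the global pattern is intricate enough that
Rule 30 has been used as a pseudo-random generator.
"""

RULE = 30                      # the update rule, encoded as an 8-bit number
WIDTH, STEPS = 79, 40          # lattice size and number of time steps (arbitrary)

# Start with a single "on" cell in the middle of the row.
row = [0] * WIDTH
row[WIDTH // 2] = 1

for _ in range(STEPS):
    print("".join("#" if c else " " for c in row))
    # Each cell's next state depends only on its left neighbor, itself,
    # and its right neighbor (wrapping around at the edges).
    row = [
        (RULE >> ((row[(i - 1) % WIDTH] << 2) | (row[i] << 1) | row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```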

 

NONLINEAR INTERACTION:

For complexity to arise from simple agents, there must be lots of them interacting, and these interactions must be nonlinear.  This nonlinearity results not from single interactions, but from the possibility that any one interaction can (and often does) cause a chain reaction of follow-on interactions with more agents, so a single decision or change can sometimes have wide-ranging effects.

In technical terms, nonlinear systems are those in which the change of the output is not proportional to the change of the input—that is, when you change what goes in, what comes out does not always grow or shrink proportionately to that original change.  In layman’s terms, the system’s response to the same input might be wildly different depending on the state or context of the system at the time.  Sometimes a small change has large effects.  Sometimes a large change is absorbed by the system with little to no effect at all.

This is important to understand for two reasons.  First is that, when dealing with complex systems, responses to actions and changes might be very different than those the actor originally expected or intended.  Even in complex systems, most of the time changes and decisions have the expected result.  But sometimes not, and when the system has a large number of interactions, the number of unexpected results can start to have a significant impact on the system as a whole.

The other reason this is important is that nonlinearity is the root of mathematical chaos.  Chaos is defined as seemingly random behavior with sensitive dependence on initial conditions—in nonlinear systems, under the right conditions, prediction is impossible, even theoretically.  One would have to know the starting conditions of every aspect of the system with absolute precision, and the uncertainty principle means that is physically impossible, so perfect prediction of a complex system is ruled out: to see what happens in a complex system of agents interacting in a nonlinear fashion, you must let it play out.  Otherwise, the best you can do is an approximation that loses accuracy the further you get from the starting point.  This sensitivity to initial conditions is commonly simplified as the “butterfly effect,” where even small changes can have large impacts across the system as a whole.

In short, the reason the weather man in most places can’t tell you next week’s weather very accurately isn’t because he’s bad at his job, but because weather (except in certain climates with stable weather patterns) literally cannot be predicted very well, and it gets harder and harder the further out you try to do so.  That’s just the nature of the system they’re working with.  It’s remarkable they’ve managed to get as good as they have, actually, considering that meteorologists only began to understand the chaotic principles underlying weather systems when Lorenz discovered them by accident in 1961.  Complex systems are inherently unpredictable, because they consist of a large number of nonlinear interactions.
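
If you want to see sensitive dependence for yourself, here is a rough Python sketch of Lorenz’s 1963 convection model, integrated with a crude Euler scheme (good enough for illustration, not for serious numerical work; the step size and run length are arbitrary choices).  Two trajectories that start a billionth apart end up nowhere near each other:

```python
"""Sensitive dependence on initial conditions in the Lorenz system."""


def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations one (crude) Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)


a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)   # identical except for a one-part-in-a-billion nudge

for step in range(1, 5001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:5.1f}   separation = {gap:.6f}")
```

The printed separation grows from essentially nothing to the full size of the attractor, which is exactly why forecasts degrade the further out you push them.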

 

SELF-ORGANIZATION:

Complex systems do not have central control.  Rather, the agents interact with each other, giving rise to a self-organized network (which in turn shapes the nonlinearity of the interactions among the agents of the network).  This is a spontaneous ordering process, and requires no direction or design from internal or external controllers.   All complex systems are networks of connected nodes—the nodes are the agents and the connections are their interactions—whether they’re networks of interacting particles in a weather system or networks of interacting human beings in an economy.

The structure of the system arises from the network.  Often it takes the form of nested complex systems: a society is a system of human beings, each of which is a system of cells, and each level is itself a complex system.  Mathematically, the term for this is a fractal—complex systems tend to have a fractal structure, which is a common feature of self-organized systems in general.  Some complex systems are networks of simple systems; others are networks of complicated systems; many are networks of complex sub-systems and complicated sub-systems and simple sub-systems all interacting together.  A traffic light is a simple system; a car is a complicated system; a human driver is a complex system; and the traffic system is a network of many individual examples of all three of these sub-systems interacting as agents.  And it is entirely self-organized: the human beings who act as drivers are also the agents who plan and build the road system that guides their interactions as drivers, by means of other complex systems such as the self-organized political system in a given area.
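
A stock example of self-organization, not drawn from this post itself, is preferential attachment in growing networks: each new node simply links to existing nodes in proportion to how connected they already are, and heavily connected hubs emerge with no designer anywhere in sight.  A minimal Python sketch, with arbitrary parameters:

```python
"""Self-organization without central control: toy preferential attachment.

A Barabasi-Albert-style growth sketch with made-up parameters: nodes arrive
one at a time and link to existing nodes chosen in proportion to their
current number of links. Hubs emerge without any central planner.
"""

import random
from collections import Counter

random.seed(1)

NEW_LINKS = 2          # links each arriving node creates (assumed)
N_NODES = 2000         # final network size (assumed)

degree = {0: 1, 1: 1}  # start from two connected nodes
targets = [0, 1]       # each node appears here once per link it holds

for new in range(2, N_NODES):
    chosen = set()
    while len(chosen) < NEW_LINKS:
        chosen.add(random.choice(targets))   # proportional to existing degree
    degree[new] = 0
    for old in chosen:
        degree[new] += 1
        degree[old] += 1
        targets.extend([new, old])

top = Counter(degree).most_common(5)
print("most connected nodes (node id, links):", top)
print("median links per node:", sorted(degree.values())[N_NODES // 2])
```

The handful of hubs versus the very modest median connectivity is the self-organized, roughly fractal structure described above, produced by nothing more than local interactions.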

 

EMERGENCE:

Emergent properties, as discussed in part one of this series, are those aspects of a system that cannot be determined merely by examining its agents in isolation—the system is greater than the sum of its parts.  An individual neuron is very simple, capable of nothing more than firing individual electrical signals to other neurons.  But put a hundred billion of them together, and you have a brain capable of conscious thought, of decision-making, of art and math and philosophy.  A single car with a single driver is easy to understand, but put thousands of them on the road network at the same time, and you have traffic—and its own resulting emergent phenomena like congestion and gridlock.  Two people trading goods and services are simple, but millions of them create market bubbles and crashes.  This is the miracle of complexity: nonlinear networks of relatively simple agents self-organize and produce emergent phenomena that could not exist without the system itself.

Some common emergent properties include information processing and group decision-making, nonlinear dynamics (often shaped by feedback loops that dampen or amplify the effects of behaviors of individual agents), hierarchical structures (such as families and groups which cooperate among themselves and compete with each other at various levels of a social system), and evolutionary and adaptive processes.  A hurricane, for example, is an emergent property in which many water and air molecules interact under certain conditions and with certain inputs (such as heat energy from sunlight), enter a positive feedback loop that amplifies their interactions, and become far more than the sum of their parts, until conditions change (such as the storm hitting land and losing access to a ready supply of warm water), at which point a negative feedback loop takes over, limiting the storm’s growth and eventually dictating its decline back to nonexistence.  Adam Smith’s “Invisible Hand” is an emergent property of the complex systems we call “economies,” in which individual actions within a nonlinear network of agents are moderated by feedback loops and self-organized hierarchical structures to produce common goods through self-interested behavior.  Similarly, the failures of that Invisible Hand, such as speculative bubbles and market crashes, are themselves emergent behaviors of the economic system that cannot exist without the system itself.

 

 

Now that we’ve established the common properties of complex systems, in the next article we’ll look at a couple different types, what the differences are, and what tools we can use to model them properly.

On Nazis and Socialists

I commonly run into the argument that the Nazis were clearly left wing, because “Socialism is right there in their name.”  It’s getting old, because it ignores literally everything else about them.  Bottom Line: yes, they were socialists, but no, they were not leftists.

Part of the problem is that there’s no good accepted narrow definition of socialism–it ranges from Marxist-style Communism to Soviet-style command economies to Scandinavian-style public welfare states. A few months ago the American Economic Association’s Journal of Economic Perspectives published a paper trying to answer the question of whether modern China is socialist, and it was fascinating because first they had to establish a working definition of socialism. Even today, there’s serious ongoing debate about that in academic economics circles.

But in the broad sense, Nazis were socialist, in that the government controlled the economy towards its own goals–the Reich ran the factories and mines and basically the entire supply chain and directed how resources and products would be used at the macro level.

That said, the Nazis explicitly rejected what we’ve come to think of as the “left-right” spectrum in favor of what political theorists call a “third way,” which married leftist-style government control of the economy to right-wing-style government control of social lives in a militaristic fascism focused on directing all social and economic aspects of the country towards the needs of the Fatherland. Nationalism (right) + Socialism (left) = National Socialism. Funny how that works. Thus, it’s a great straw man, because BOTH sides can legitimately point to aspects of Nazism and say “See?! They were the other side!” When the reality is they were neither.

Note: neo-Nazis, on the other hand, generally ignore the economic aspects of National Socialism in favor of the eugenicist racism and militaristic nationalism, and ARE legitimately classified as right-wing extremists.

The more you know.

Opinions, Assholes, and Believability

My next post was going to be a continuation of my introduction to complexity, and I promise that I’ll get around to that eventually, but a few days ago I was made aware of an exchange on Facebook that got me thinking, and I’d like to take a moment to lay out my thoughts on the matter.

I personally did not witness this exchange, but a friend of mine took a screenshot of the first part of the conversation (before the original commenter apparently deleted the thread).  First, some context: this occurred after a firearms industry page (Keepers Concealment, a maker of high quality holsters) shared a video of Ernest Langdon demonstrating the “Super Test,” a training drill that requires a shooter to fire rapidly and accurately at various ranges.  Ernest Langdon is indisputably one of the best handgun shooters in the world.  That’s an objective fact, and he has the competition results and measurable skills to prove it.  He is ranked as a Grand Master in the US Practical Shooting Association, a Distinguished Master in the International Defensive Pistol Association, and has won 10 National Championship Shooting titles and 2 World Speed Shooting titles.  All of which explains why when some nobody on Facebook (who we shall refer to as “Mr. Blue” as per my color-coded redacting) made this comment, quite a few people who know who Ernest Langdon is raised their collective eyebrows:

[Screenshot of the Facebook exchange]

Mr. Blue, who as mentioned is a nobody in the shooting world with exactly zero grounds to critique Ernest Langdon, still for some reason felt the appropriate response to this video of one of the best shooters to have ever walked the face of the earth was to provide unsolicited advice on how he could improve.  Then, when incredulous individuals who actually know what they’re talking about point out exactly how arrogantly stupid that response is to this particular video, another person, Mr. Red, chimes in to claim that if we accept no one is above reproach, then “it’s fair for people (even those who can’t do better), to critique what they see in the video.”  To which I want to respond: no, it is not.

I agree entirely with Ray Dalio, the founder of Bridgewater Associates—the world’s largest hedge fund—when he says, “While everyone has the right to have questions and theories, only believable people have the right to have opinions. If you can’t successfully ski down a difficult slope, you shouldn’t tell others how to do it, though you can ask questions about it and even express your views about possible ways if you make clear that you are unsure.”  What that means is not that you can’t form an opinion.  It means that just because you have the right to HAVE an opinion doesn’t mean you have the right to express it and expect for anyone to take it seriously.  Just because you happen to be a breathing human being doesn’t make you credible, and the opinions of those who don’t know what they’re talking about are nothing more than a waste of time that serves only to prove that you’re an idiot.  Like the old saying says, “Better to remain silent and be thought a fool than to speak and remove all doubt.”

But Mr. Red’s comment goes to an attitude that lies at the heart of stupidity: the idea that everyone’s opinion is equally valid and worth expressing, and all have a right to be heard and taken seriously.  This certainly isn’t a new phenomenon: Isaac Asimov wrote about a cult of ignorance in an article back in 1980: “The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’”  But new or not, it very much drives the willingness of ignorant nobodies to “correct” and “critique” genuine experts.  Mr. Blue has no idea of the thousands of hours of training Ernest Langdon has put into perfecting his grip and recoil management and trigger control, the hundreds of thousands of rounds of ammunition he’s put down range to hone his technique and become one of the best in the world at what he does.  Mr. Blue has put nowhere near that amount of time and effort into his own training—I know this, because if he had he’d also be one of the best shooters in the world, instead of some random nobody on Facebook.  But despite that vast gulf of experience and expertise, Mr. Blue still thinks he can and should provide unsolicited advice on how Ernest Langdon can be better.  And then doesn’t understand why others are laughing at him, and another commenter rides to the rescue, offended at the very notion people are dismissive of the critique of a nobody.

This is the same mindset that leads to people who barely graduated high school presuming to lecture the rest of us on why the experts are wrong on politics, on science, on economics, on medicine.  This is the mindset that leads to anti-vaccination movements bringing back measles outbreaks in the United States.  This is the mindset Sylvia Nasar described when she wrote “Frustrated as he was by his lack of a university education, particularly his ignorance of the works of Adam Smith, Thomas Malthus, David Ricardo, and other British political economists, [he] was nonetheless perfectly confident that British economics was deeply flawed.  In one of the last essays he wrote before leaving England, he hastily roughed out the essential elements of a rival doctrine.  Modestly, he called this fledgling effort ‘Outlines of a Critique of Political Economy.’”  The subject she was writing about?  Friedrich Engels, friend and collaborator of Karl Marx, and co-author of Das Kapital.  Is there any wonder that the system they came up with has never worked in practice?

While the conversation that inspired this line of thought was in the shooting world, I see it all the time in many, many different fields.  Novice weightlifters “critiquing” world record holders.  Undergraduate students “critiquing” tenured professors in their area of expertise.  Fans who’ve never stepped into a cage in their lives expounding upon what a professional fighter in the UFC “did wrong” as if they have the slightest idea what it’s like to step into the Octagon and put it all on the line in a professional MMA fight.  People with zero credibility believing they have the standing to offer unsolicited advice to genuine, established experts.  This isn’t to say that experts are infallible, or that criticism is always unfounded.  But to have your opinion respected, it must be believable, and if you lack that standing you’d damn well better be absolutely certain your criticism is well-founded and supported by strong evidence, because that’s all you have to go on at that point.  Appeal to authority is a logical fallacy, but unless you’ve got the evidence to back up your argument, the benefit of the doubt is going to go to the expert who has spent a lifetime in the field, versus the nobody who chooses to provide unsolicited commentary.

When you have an opinion on a technical subject, and you find yourself moved to express it in a public forum, please, just take a second and reflect.  “Do I have any standing to express this opinion and have it be believable, or is it well-supported by documented and cited evidence in such a way that it overcomes my lack of relative expertise?  Do I have a right for anyone to pay attention to my thoughts on this subject?  Or am I just another ignorant asshole spewing word diarrhea for the sake of screaming into the void and pretending I matter, that I’m not a lost soul drifting my way through existential meaninglessness, that my life has purpose and I’m special?”  Don’t be that guy.

Opinions and assholes, man.  Everyone’s got ‘em, and most of them stink.

Well, Actually… (A Rebuttal to a Rebuttal)

In June, researchers from the University of Washington released a National Bureau of Economic Research working paper entitled “Minimum Wage Increases, Wages, and Low-Wage Employment: Evidence from Seattle” (Jardim et al., 2017).  It made a lot of headlines for its claim that the increased minimum wage in Seattle (up to $13 this year, and planned to increase to $15 within the next 18 months) has cost low-wage workers money by reducing employment hours across the board.  Essentially, Jardim and her colleagues showed rather convincingly through an in-depth econometric analysis that while wages for the average low-income worker increased per hour, their hours were cut to the point that the losses exceeded the gains, for a reduced total income.  It’s an impressive case for what I argued in my first “Well, It’s Complicated” article playing out in reality.

However, not everyone is convinced.  A friend of mine alerted me to an article by Rebecca Smith, J.D., of the National Employment Law Project that argues the study MUST be bullshit, because it doesn’t square with what she sees as reality.  In the article, Ms. Smith makes six specific claims in her effort to rebut the study.  Unfortunately for her, all these claims do is demonstrate she either doesn’t know how to read an econometric paper, or she didn’t actually read it that closely, because four are easily disproven by the paper itself, and the other two are irrelevant.

Specifically, she claimed the following:

  • The paper’s findings cannot “be squared with the reality of Seattle’s economy,” because “At 2.5 percent unemployment, Seattle is very near full employment. A Seattle Times story from earlier this month reported a restaurant owner’s Facebook confession that due to the tight labor market ‘I’d give my right pinkie up for an awesome dishwasher.’ Earlier this year, Jimmy John’s advertised for delivery drivers at $20 per hour.”

 

  • “By the UW team’s own admission, nearly 40 percent of the city’s low-wage workforce is excluded from the data: workers at multisite employers like Nordstrom, Starbucks, or even restaurants with a few locations like Dick’s.”

 

  • “Even worse, any time a worker left a job with a single-site employer for one with a chain, that was treated as a “lost job” that was blamed on the minimum wage — and that likely happened a lot since the minimum wage was higher for those large employers.”

 

  • “…Every time an employer raised its pay above $19 per hour — like Jimmy John’s did — it was counted not as a better job, but as a low-wage job lost as a result of the minimum wage.”

 

  • “The truth is, low-wage workers are making real gains in Seattle’s labor market. In almost all categories of traditionally low-wage work, there are more employers in the market than at any time in the city’s history. There are more coffee shops, restaurants and hotels in Seattle than ever before. The work is getting done. And the largest (and best-paid) workforce in the history of the city is doing it.”

 

  • “Nor can the study be reconciled with the wide body of rigorous research — including a recent study of Seattle’s restaurant industry by University of California economist Michael Reich, one of the country’s foremost minimum-wage researchers — that finds that minimum-wage increases have not led to any appreciable job losses.”

 

Let’s look at each of these in turn.

 


 

The Claim: This paper doesn’t match the reality of Seattle’s 2.5% unemployment rate, which is driving up wages regardless of the minimum wage increases due to high labor demand.

First, this isn’t an attack on the paper itself, just an expression of incredulity that demonstrates Ms. Smith apparently doesn’t understand how statistical analysis works—there are MANY factors that go into overall unemployment rates, and the minimum wage is just one of them.  Thus, the paper seeks to isolate unemployment and reduced employment hours in a given sector, and the overall unemployment rate is irrelevant to the analysis.

Second, Seattle’s unemployment rate is not 2.5%, and has not been 2.5% in a long time: the Bureau of Labor Statistics lists it at 2.9% in April 2017, its lowest point in the past year, and it trended back up to 3.2% by May.  You don’t get to just make up numbers to refute points you don’t like.

Third, just to emphasize that this unemployment rate is not caused by the minimum wage increase, let’s compare Seattle to other cities.  At 3.2% unemployment in May, Seattle was tied with five other US cities: Detroit, San Diego, Orlando, San Antonio, and Washington, D.C.  All of these cities have their own minimum wages that vary between $8.10 and $13.75—but for a proper comparison, these rates have to be adjusted for cost of living.  When so adjusted, the lowest paid workers were those in Orlando, making the equivalent of a worker in Seattle taking home $10.94/hour.  The highest were those in San Antonio, with the equivalent of $19.39/hour at Seattle prices.  For comparison, workers actually IN Seattle were making just $13/hour in May—the average for all six cities was $13.28.  With such a range, can the “high” minimum wage be driving the employment rate that’s identical among all of them?  These six cities all tied for 11th place in lowest unemployment rates in the nation that month.  How about the best three?  First place goes to Denver, with a minimum wage of $11.53 (adjusted for Seattle cost of living).  Second to Nashville, at $9.72.  Third to Indianapolis, at $9.93.  I’d take a step back and reconsider any claim that the $13 minimum wage in Seattle is at all relevant to the overall employment rate, given that when you compare apples to apples, there is no apparent correlation at all.  Instead, let’s stick to what the paper was about: the impact of the wage increase on low-income workers’ total income, given per-hour wage gains versus changes in hours worked.

 


 

The Claim: The paper excluded 40% of the city’s low-wage workforce by ignoring all multisite employers.

Quite simply, no, it did not.  The paper did NOT exclude all multisite employers.  It excluded SOME multisite employers.  And those employers don’t account for “nearly 40% of the city’s low-wage workforce,” but rather 38% of the ENTIRE workforce across the state as a whole—no mention is made of their proportion within Seattle itself.  And if Ms. Smith had read closely, she’d realize that not only does this make perfect sense, but if anything it just as likely biased the results toward UNDERESTIMATING the loss in employment hours for low-wage workers.

“The data identify business entities as UI account holders. Firms with multiple locations have the option of establishing a separate account for each location, or a common account. Geographic identification in the data is at the account level. As such, we can uniquely identify business location only for single-site firms and those multi-site firms opting for separate accounts by location. We therefore exclude multi-site single-account businesses from the analysis, referring henceforth to the remaining firms as “single-site” businesses. As shown in Table 2, in Washington State as a whole, single-site businesses comprise 89% of firms and employ 62% of the entire workforce (which includes 2.7 million employees in an average quarter).

Multi-location firms may respond differently to local minimum wage laws. On the one hand, firms with establishments inside and outside of the affected jurisdiction could more easily absorb the added labor costs from their affected locations, and thus would have less incentive to respond by changing their labor demand. On the other hand, such firms would have an easier time relocating work to their existing sites outside of the affected jurisdiction, and thus might reduce labor demand more than single-location businesses. Survey evidence collected in Seattle at the time of the first minimum wage increase, and again one year later, suggests that multi-location firms were in fact more likely to plan and implement staff reductions. Our employment results may therefore be biased towards zero.”  (Jardim et al., pp. 14-15).

Essentially, the nature of the data required they eliminate 11% of firms in Washington State before beginning their analysis, because there was literally no way to tell which of their sites (and therefore which of their reported employees) were located within the city of Seattle.  Multi-site firms that reported employment hours by individual site were absolutely included, just not those that aggregate their employment hours across all locations.  But that’s okay, because on the one hand, such firms can potentially absorb increased labor costs at their Seattle sites, but on the other they can more easily shift work to sites outside the affected area and thus reduce labor demand within Seattle in response to increased wage bills.  And surveys suggest that such firms are more likely to lay off workers in Seattle than other firms—hence, excluding them from the data is just as likely to make the employment reduction estimates LOWER than they’d be if the firms were included as they are to bias the estimates positively.  Ms. Smith’s objection on this point only serves to prove she went looking for things to object to, rather than reading in depth before jumping to conclusions.

 


 

The Claim: Workers leaving included firms for excluded firms was treated as job loss.

Literally no, it was not.  The analysis was based on total reported employment hours and not on total worker employment.  When employers lose workers to other firms, they don’t change their labor demand.  Either other workers get more hours or someone new is hired to cover the lost worker’s hours.  If hours DO decrease when a worker leaves, that means the employer has reduced its labor demand and sees no need to replace those hours.  In which case, it IS “job loss” in the sense of reduced total employment hours.

 


 

The Claim: When employers raised wages above $19/hour, it was treated as job loss.

Again, literally no, it was not.  Not only does the paper have an extensive three-page section addressing why and how they chose the primary analysis threshold of $19/hour, they also discuss in their results section how they checked their results against other thresholds up to $25/hour.  In short, a lot of previous research has conclusively shown that increasing minimum wages has a cascading effect up the wage chain: not only are minimum wage workers directly affected by it, but also workers who make above minimum wage—but the results decrease the further the wage level gets from the minimum.  Jardim et al did a lot of in-depth analysis to determine the most appropriate level to cut off their workforce sector of interest, and determined the cascading effects became negligible at around $18/hour—and they chose $19/hour to be conservative in case their estimates were incorrect.  And they STILL compared their results to thresholds ranging from $11/hour to $25/hour and proved the effects of the $13 minimum wage were statistically significant regardless of the chosen threshold.

 


 

The Claim: Low-wage workers are making gains, because in almost all categories of traditionally low-wage work, there are more employers in the market than at any time in the city’s history.

Simply irrelevant.  Number of employers has zero effect on number of hours worked for each worker.  Again, the analysis was based on total labor demand for low-wage workers as expressed in total employment hours across all sectors.  The number of firms makes no difference to how many labor hours each firm is demanding per worker.

 


 

The Claim: This study cannot be reconciled with the body of previous research, including Reich’s recent study of restaurant labor in Seattle, that indicates minimum wage increases don’t lead to job losses.

There are two parts of my response to this.  First, that body of previous research is MUCH more divided than Ms. Smith seems to believe, but that’s to be expected from someone who so demonstrably cherry-picks statements to support her point.  While one school of thought, led by researchers like Card and Krueger (the so-called New Minimum Wage Theorists), believes their research supports Ms. Smith’s argument, their claims have consistently been rebutted on methodological grounds by other researchers like Wascher and Neumark.  Over 70% of economists looking at the conflicting evidence have come down in support of the hypothesis that minimum wage increases lead to job loss among minimum wage workers, as cited by Mankiw in Principles of Economics.  I discuss both points of view more extensively in “Well, It’s Complicated #1.”

Second, the paper has a two and a half page section entitled “Reconciling these estimates with prior work,” where the authors discuss this issue quite in depth.  Including pointing out that when they limit their analysis to those methods used by previous researchers, their results are consistent with those researchers’ results, and they, too, support Reich’s conclusions in regards to the restaurant industry specifically.  In short, yes, this study ABSOLUTELY can be reconciled with the body of previous research.  That body just doesn’t say what Ms. Smith apparently believes it does.

 


 

So where does that leave us?  Quite simply, Ms. Smith is wrong.  Absolutely none of her criticisms of the paper hold water.  Actually, this is one of the most impressive econometric studies I’ve ever read—it even uses the Synthetic Controls methodology that I’ve previously criticized (see my article, “Lies, Damn Lies, and Statistics”), but it uses it in the intended limited and narrowly-focused manner in which it provides useful results.  And it does an excellent job of demonstrating that despite the booming Seattle economy, the rapid increase in the city’s minimum wage has hurt the very employees it intended to help, reducing their total monthly income by an average of 6.6%.

 



 

Original paper can be found here: http://www.nber.org/papers/w23532

 

Lies, Damn Lies, and Statistics: A Methodological Assessment

Last month, a National Bureau of Economic Research working paper made headlines across the internet when it claimed to demonstrate that so-called “Right to Carry” (RTC) laws increased violent and property crime rates above where they would have been without the passage of such laws.  Now, most science reporting is done by people with zero technical background in the advanced statistical techniques used by the paper’s authors, so I was a bit skeptical it actually said what they were claiming it said.  Fortunately, I DO have such a technical background, and for several years now I’ve been following with great interest the academic arguments about the effects of legal guns on crime rates.  And after having read the paper in question (Right-to-Carry Laws and Violent Crime: A Comprehensive Assessment Using Panel Data and a State-Level Synthetic Controls Analysis. Donohue, Aneja, and Weber. 2017), I’ve come to the conclusion that I was both right and wrong.  Wrong in that the paper’s authors drew the conclusion stated by the journalists—they do, in fact, claim their data shows RTC laws increase crime.  But right in that the data doesn’t actually show that when you read it with a more critical eye.  Therefore, I’m going to take this opportunity to teach a lesson in why you shouldn’t trust paper abstracts or jump to the “conclusions” section, but should instead examine the data and analysis yourself.

Disclaimer: I am a firearms enthusiast and active in the firearms community at large.  However, I am also a scientist, and absolutely made my very best efforts to set that bias aside in reading this paper, and give it the benefit of the doubt.  Whether I succeeded or not is up to you to decide, but I believe my objections to the authors’ conclusions are based solely on methodological grounds and will stand up to the scrutiny of any objective observer.  Unfortunately, I cannot say the same about Professor Donohue and his co-authors, as their own personal bias against guns is quite evident from their concluding paragraphs.  Because of that bias, I firmly believe this paper is a perfect example of “Lies, Damn Lies, and Statistics.”

The paper itself is really divided into two sections: a standard multiple regression analysis and then a newer counterfactual method called “synthetic control analysis.”  The authors claim both analyses show that RTC laws increase crime.  I disagree, at least with the extent they believe this to be true.  Let’s look at each in turn.

First, the regression analysis.  The meat of this analysis is comparing four different models (and three variations of those models) for a total of seven specifications.  Multiple regression analysis is a powerful tool to analyze observational data and attempt to control for several variables to see what impact each had on the target dependent variable.  In this paper, Donohue et al. build their own model specification (DAW) and compare it to three pre-existing models from other researchers (BC, LM, MM).  They looked at the effects of states’ passage of RTC laws on three dependent variables: murder rates, violent crime rates, and property crime rates.  The key point of their research is that it goes beyond previous papers in its data set: where previous research stopped at the year 2000, this paper looks at how the results change when the models are fed an additional 14 years of data, covering 1977-2014.

The problem here is that the authors claim their panel data analysis consistently shows a statistically significant increase in violent crime when using the longer time horizon ending in 2014.  This is a problem because, quite bluntly, no, it does not.  The DAW variable specification (their new, original model built for this analysis) DOES find an increase in violent crime and property crime rates (though not murder, which they acknowledge).  But the spline model of the same variables finds no statistically significant correlation whatsoever.  They even acknowledge this in their paper: “RTC laws on average increased violent crime by 9.5 percent and property crime by 6.8 percent in the years following adoption according to the dummy model, but again showed no statistically significant effect in the spline model.” (DAW 8).  But then they never mention it again or seek to address why the spline model—an alternative method that’s often preferred over polynomial interpolation for technical reasons—achieves such different results.  This spline model was built from the National Research Council report in 2004, and they used it earlier (sans other regressors) to show that the NRC’s tentative conclusion that RTC laws were associated with a decrease in crime rates disappears when the data set is extended to 2014.  But when they re-run it with their own variables, the lack of statistical significance is mentioned in a single line and then never brought up again.

In fact, the spline model is used comparatively for all four regression specifications, and the only cases in which it finds ANY statistical significance are the two the authors themselves discredit as methodologically unsound (LM and MM in their original versions).  But this point is never addressed—the polynomial “Dummy Variable Model” and spline versions dramatically disagree, no matter WHAT set of variables they choose.  This, to me, strongly suggests that any conclusions drawn from the panel data regression analysis are highly suspect, and the choice of specification deserves further review before they can be believed one way or the other.  Regression analysis is always extremely sensitive to specification, and results can shift dramatically based on what variables are included, what are omitted, and how they’re specified.  Unfortunately, the paper does not seem to discuss any testing for functional form misspecification (such as a Ramsey RESET test), so it is unclear if the authors compared their chosen model specification to other potential functional forms.  There’s no discussion, for example, of whether the polynomial or spline models are better and why.  This is a huge gap in the analysis that I would like to see addressed before I’m willing to accept any conclusions therefrom.*
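
For readers unfamiliar with the jargon, the sketch below shows, on made-up data, what the two specifications mean: a “dummy” model codes the law change as a one-time level shift, while a “spline” (trend-break) model codes it as a change in slope from the adoption year onward.  This is emphatically NOT the DAW model or the paper’s data (the real specifications include state and year fixed effects and a battery of covariates); it is only meant to make the distinction concrete.

```python
"""What "dummy" vs. "spline" specifications mean, in miniature.

A bare-bones sketch on simulated data -- not the paper's models or data.
The numbers below carry no empirical meaning whatsoever.
"""

import numpy as np

rng = np.random.default_rng(0)

years = np.arange(1977, 2015)
adoption_year = 1996                     # hypothetical law-adoption year
post = (years >= adoption_year).astype(float)
years_since = np.clip(years - adoption_year, 0, None).astype(float)

# Made-up "crime rate": a downward trend, a small post-adoption slope change,
# and noise.
crime = 600 - 4.0 * (years - 1977) + 1.5 * years_since + rng.normal(0, 8, len(years))

trend = (years - 1977).astype(float)
ones = np.ones_like(trend)

# Dummy model: level shift after adoption.
X_dummy = np.column_stack([ones, trend, post])
beta_dummy, *_ = np.linalg.lstsq(X_dummy, crime, rcond=None)

# Spline model: slope change after adoption.
X_spline = np.column_stack([ones, trend, years_since])
beta_spline, *_ = np.linalg.lstsq(X_spline, crime, rcond=None)

print(f"dummy model:  post-adoption level shift  = {beta_dummy[2]:+.1f}")
print(f"spline model: post-adoption slope change = {beta_spline[2]:+.2f} per year")
```

Even on this toy data the two codings summarize the same law change very differently, which is exactly why a real analysis needs to justify its choice of functional form rather than wave past it.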

Additionally, panel data suffers from some of the same limitations as cross-sectional data, including a need for large data sets to be credible.  In this case, the analysis only looked at 33 states (those that passed RTC laws between 1977 and 2004), making any conclusions drawn from the limited N=33 data set tentative at best.  This is not necessarily the authors’ fault—much data is only available at the state level, so it’s much harder to do a broader assessment with more data points (e.g., by county).  But it certainly does increase the grain of salt with which the analysis should be taken.  Despite that, the authors seem quite willing to draw sweeping conclusions when they should, by rights, be a lot more cautious about conclusive claims.**

The second part of the paper is even more problematic.  In short, they build a counterfactual model of each state that passed an RTC law in the specified time period, and then compare the predicted crime rates in those simulated states versus the observed crime rates in their real world counterparts.  This is certainly an interesting statistical technique, and is mathematically ingenious.  It might even be a useful tool for certain applications.  Unfortunately, counterfactual analysis, no matter how refined, suffers a fundamental flaw: by its very nature, it assumes the effects of a single event can be assessed in isolation.  In reality, as I’ve discussed before, human social systems are complex systems.  One major legal change will have dramatic effects across the board—that policy in turn drives many decisions down the line, so plucking out the one policy of interest and assuming all post-counterfactual decisions will remain the same is blatantly ridiculous.  It’s the statistical equivalent of saying “If only Pickett’s Charge had succeeded, the South would have won the Civil War.”  Well, no, because everything that happened AFTER Pickett’s Charge would have been completely different, so we can only make the vaguest guesses about what MAY have happened.

But that’s precisely what the authors are attempting to do here, and put the stamp of mathematical certainty on it to boot.  They built models of each RTC state in the target period by comparing several key crime-rate-related variables to control states without RTC laws, and then assessed the predicted crime rate in that model against the actual reported crime rates in reality to make a causal claim about the RTC laws’ effects on those crime rates.  They decided their models were good fits by comparing how well they tracked the fluctuations in crime rates in the years prior to the RTC law’s adoption (the counterfactual point); if the fit was close enough, they treated it as a good predictive model.  But that fails to account for the cascading changes that would have occurred AFTER the counterfactual point, by the very nature of a complex system.  The entire analysis rests on an incredibly flawed assumption, and thus NO conclusive answers can be derived from it.  At best, it raises an interesting question.
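
To be clear about what the method actually does, here is the core of a synthetic-control fit stripped to its essentials, on invented numbers: choose nonnegative weights over control states, summing to one, so that the weighted combination tracks the treated state’s pre-treatment outcomes, then read the post-treatment gap as the “effect.”  My objection above is to the causal leap in that last step, not to the optimization itself.

```python
"""The mechanics of a synthetic-control fit, on made-up data.

Not the authors' code or data: every number here is invented purely to show
the weight-fitting step the method relies on.
"""

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

pre_years, post_years, n_controls = 20, 10, 12

# Made-up outcome paths (e.g., crime rates) for control states and one treated state.
controls = rng.normal(500, 40, size=n_controls)[None, :] \
    + np.linspace(0, -60, pre_years + post_years)[:, None] \
    + rng.normal(0, 5, size=(pre_years + post_years, n_controls))
treated = controls[:, :4].mean(axis=1) + rng.normal(0, 5, pre_years + post_years)


def pre_period_mismatch(w):
    """Squared error between the treated unit and the weighted controls,
    using only pre-treatment years."""
    synthetic = controls[:pre_years] @ w
    return float(np.sum((treated[:pre_years] - synthetic) ** 2))


w0 = np.full(n_controls, 1.0 / n_controls)
result = minimize(
    pre_period_mismatch,
    w0,
    method="SLSQP",
    bounds=[(0.0, 1.0)] * n_controls,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)

weights = result.x
gap = treated[pre_years:] - controls[pre_years:] @ weights
print("donor weights:", np.round(weights, 2))
print("average post-period gap (treated minus synthetic):", round(gap.mean(), 1))
```

The optimization is perfectly sound; the leap is in treating that post-period gap as the causal effect of one policy change, which is the assumption I’m objecting to.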

The paper isn’t worthless, by any means.  The panel data analysis does a good job showing that NO specification, including John Lott’s original model from which he built his flawed “More Guns, Less Crime” thesis, supports a claim that RTC laws decrease crime rates.  But that’s about all it does.  It hints at the possibility RTC laws may increase violent and property crime rates (though not murder).  It certainly doesn’t conclusively demonstrate that claim, but it raises enough doubt that other researchers should tackle it in much more depth.  Similarly, the counterfactual “synthetic controls” analysis by no means proves a causal relationship between RTC laws and crime rates for the reasons explained above, but it raises an interesting question that should be examined further.

No, the problem is that the authors pay only lip service to the limitations of their analysis and instead make sweeping claims their data does not necessarily support: “The fact that two different types of statistical data—panel data regression and synthetic controls—with varying strengths and shortcomings and with different model specifications both yield consistent and strongly statistically significant evidence that RTC laws increase violent crime constitutes persuasive evidence that any beneficial effects from gun carrying are likely substantially outweighed by the increases in violent crime that these laws stimulate.”  (DAW, 39).  The problem is that the panel data regression is unclear given the discrepancies between the Dummy Variable and Spline Models, and less than solid given the low N value for cross-sectional comparisons; and that the synthetic controls rests on a flawed assumption about the nature of the social systems being modeled.

These limitations, combined with the many other papers looking at other types of regressions (such as the impacts of gun ownership in general on violent crime rates) that have been unable to find statistically significant correlations between legal gun prevalence and violent crime rates, make me extremely skeptical of this paper.  To be fair, it has yet to undergo peer review (it’s a working paper, after all), and it’s certainly possible many of my objections will be rectified in the final published version.  But right now, the best I can say for the data is that it raises some questions worth answering.  And it certainly doesn’t support the authors’ claim that their analysis is persuasive evidence of anything.  At least, not nearly as persuasive as they’d have you believe.

That’s why I said, at the beginning, never trust an abstract or a conclusion section: read the analysis for yourself, and only then see what the authors have to say about it.  Because there’s a great deal of truth to the old saying, “There are three kinds of lies: lies, damned lies, and statistics.”  Statistics are a powerful tool.  But even with the best intentions they’re easily manipulated, and even more easily misunderstood.

 


*For those of you who don’t speak “stats geek,” what this paragraph means is that the authors compared two different types of models, which reached dramatically different conclusions, and they kinda ignored that fact and moved past it.  They also never discuss, anywhere in the paper itself or in any of the appendices, why they chose one over the other, or why they specified their models the way they did versus other options.  It isn’t damning, but it’s certainly suspiciously like a Jedi handwave: “This IS what our data says, trust us.”
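If you want to see the difference in code, here’s a synthetic illustration in Python.  The data are made up and the variable names are mine, not the paper’s; the point is simply that a “dummy variable” specification (did crime shift by a constant amount once the law was in force?) and a “spline” specification (did the crime trend change after adoption?) ask different questions of the same panel, and need not agree:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated panel purely for illustration: 33 states, 38 years, half adopt
# an "RTC" law at some point.  Nothing here comes from the actual paper.
rng = np.random.default_rng(0)
states, years = 33, 38
rows = []
for s in range(states):
    adopt = rng.integers(5, years) if s % 2 else None   # half the states adopt
    for t in range(years):
        rtc = 1 if adopt is not None and t >= adopt else 0
        yrs_since = max(0, t - adopt) if adopt is not None else 0
        crime = 500 - 2 * t + 3 * yrs_since + rng.normal(0, 20)
        rows.append(dict(state=s, year=t, rtc=rtc, yrs_since=yrs_since, crime=crime))
df = pd.DataFrame(rows)

# Dummy-variable model: a one-time level shift when the law is in force.
dummy = smf.ols("crime ~ rtc + C(state) + C(year)", data=df).fit()
# Spline-style model: a change in trend after adoption.
spline = smf.ols("crime ~ yrs_since + C(state) + C(year)", data=df).fit()
print("dummy-variable RTC coefficient:", round(dummy.params["rtc"], 2))
print("spline (trend) coefficient:    ", round(spline.params["yrs_since"], 2))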

**Again, for the non-statisticians, larger data sets tend to produce more reliable estimates–the larger your data set, the more likely it is that your model’s estimates approach reality.  Small data sets are inherently less reliable, and 33 observations per year in the panel data is a tiny data set.
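If you want to see that effect for yourself, here’s a thirty-second simulation in Python, using made-up normally distributed data rather than anything from the paper:

import random, statistics

# How much do estimates of a simple mean bounce around when you only have 33
# observations, versus 1,000?  (Hypothetical data; illustration only.)
random.seed(42)

def spread_of_estimates(n, trials=2000):
    estimates = [statistics.mean(random.gauss(0, 1) for _ in range(n))
                 for _ in range(trials)]
    return statistics.stdev(estimates)

print("spread with n=33:  ", round(spread_of_estimates(33), 3))    # roughly 0.17
print("spread with n=1000:", round(spread_of_estimates(1000), 3))  # roughly 0.03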

 

The original paper is available here for anyone who cares to examine it for themselves: http://www.nber.org/papers/w23510

Voodoo Economics (Well, It’s Complicated #3)

  • Feldstein (1986)
  • Feldstein and Elmendorf (1989)
  • Garrison and Lee (1992)
  • Engen and Skinner (1992)
  • Slemrod (1995)
  • Auerbach and Slemrod (1997)
  • Mendoza et al. (1997)
  • Padovano and Galli (2001)
  • Gale and Potter (2002)
  • Desai and Goolsbee (2004)
  • Gale and Orszag (2005)
  • Eissa (2008)
  • Mertens and Ravn (2010)
  • Huang (2012)
  • Favero and Giavazzi (2012)
  • Yagan (2015)
  • Mertens (2015)
  • Zidar (2015)
  • Gale and Samwick (2017)

All of these studies, meta-studies, and papers have one thing in common: they all look at the effect of tax cuts on supply-side economic growth, either in the US specifically or in developed countries in general, by assessing empirical data.  They look both at narrowly framed cuts like the Bush cuts of 2001 and 2003, and at broad reforms like the Reagan cuts of 1981 and 1986.  And ALL of them find that the empirical data shows effectively no statistically significant correlation between tax cuts and supply-side economic growth.  None.  Zip, zero, zilch, nada.

The theory of supply-side economics, also known as top-down economics, investment-side economics, trickle-down economics, or Reaganomics, is elegant.  It sounds good.  It fits the rational models of Chicago school neoclassical economists to a T.  In short, it says tax cuts stimulate economic growth by freeing up capital for investment and spending.

The problem is that little to no evidence supports it actually working that way in the real world.  At all.  The closest we get is Romer and Romer (2010), which supports the idea that demand will increase in the short term in response to unexpected tax cuts, but stops short of any evidence demonstrating actual long-term growth, especially on the supply side.

As Gale and Samwick (2017) puts it: “The argument that income tax cuts raise growth is repeated so often that it is sometimes taken as gospel.  However, theory, evidence, and simulation studies tell a different and more complicated story.  Tax cuts offer the potential to raise economic growth by improving incentives to work, save, and invest.  But they also create income effects that reduce the need to engage in productive economic activity, and they may subsidize old capital, which provides windfall gains to asset holders that undermine incentives for new activity.  In addition, tax cuts…not accompanied by spending cuts…will typically raise the federal budget deficit.  The increase in the deficit will reduce national saving…and raise interest rates, which will negatively affect investment.  The net effect of the tax cuts on growth is thus theoretically uncertain and depends on both the structure of the tax cut itself and the timing and structure of its financing.”

If you want to argue that taxes are a moral evil, fine.  I’m not going to delve into the philosophy of social choice theory here.  That’s your call.  But if you want to support your philosophical argument by saying the economics are on your side, that “basic economics” tell us tax cuts are always good for the economy by boosting growth, that I’ll refute all day every day, because it just ain’t true.

The simplistic story told by “supply side economics” advocates is pure political bullshit, unsupported by evidence or theory in a complex and nuanced reality, no matter how many Thomas Sowell books you’ve read.  In the face of the real world, it’s complicated.

How Is the Money Supply Like Gastric Acid? (Well, It’s Complicated #2)

Discussions of banking and financial regulation, at least on social media with non-experts, tend to take one of two fairly absolutist views: either bankers are inherently malevolent extortionists who do nothing more than take advantage of honest workers’ effort and thus need to be reined in by the watchful eye of government regulators, or the free market is a glorious paradise in which regulation is nothing more than an inefficient and unnecessary evil that hurts everyone by making the market less effective than it could be at producing wealth.  I hate to be the one to tell you, but neither view is correct.  Sorry.  But to see why, let’s take each in turn, and look at some examples.

First, there is little to no evidence that bankers are evil.  In fact, such views have been perpetuated throughout human history, and even underlie many anti-Semitic conspiracy theories (as until the modern era, Christian usury laws meant Jews were the primary financiers of medieval and renaissance Europe).  But the truth is the banking sector is a fundamental base of trade: it provides liquidity and investment capital that allows businesses to operate and expand.  Without investors, only the rich could afford to start businesses.  Thus the financial sector is not only not evil, it is intimately intertwined with every transaction in the modern world.  It allows for the existence of everything from start-up capital to pension funds to widespread home ownership.  Bankers want to make money, sure.  But by and large there’s no evidence they’re any more evil than their fellow non-financial industry citizens.  The existence of occasional bad actors like Bernie Madoff does not refute the vast amount of good that modern financial systems have done to develop economies and build general wealth and fuel trade and growth around the world.

But even with that firmly established, it does not mean regulation is unnecessary.  Even with the absolute best intentions, individual agents in the finance industry operate in a complex system.  Markets are highly interconnected and interdependent networks, and even if we grant the classical assumption that each agent is perfectly rational, the system in which they work means that the market does not act like a classical model would predict.  Rather, because of the high level of interconnectivity and interdependence, thousands of individually rational actors making perfectly rational decisions to optimize their own utility in their local environment interact in complex and often unpredictable ways, and feed off of each other.  What Agent A does in New York affects an investment decision of Agent B in London, which in turn influences the choices of Agent C in Tokyo, and so on for millions of decisions, rippling across the globe.  And all of these decisions are based on outside information as well, like weather patterns (for agricultural commodities futures) or political stability estimates or individual corporate strategies.  This network effect leads to emergent properties like speculative bubbles and market crashes and credit crunches and supply bottlenecks.  We see such inefficient trends even in 100% mechanically deterministic, perfectly rational simulations in agent-based computational models.  It’s even more inefficient when we introduce irrationality and the quirks of human individual and social behavior, like tendencies toward collusion and coercive practices and gaming the system through asymmetrical information and other “unfair” advantages. (For further information, please see the references I’ve listed at the end of this article.  I’ll also be elaborating more on the topic in my continuing Complex Systems series.)
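To make the agent-based point concrete, here’s a minimal sketch in Python.  The two trader types and every parameter are simplifying assumptions of my own for illustration, not a model drawn from the references below, but even this toy produces waves of overshooting and correction that neither type of agent intends:

import random

# Two deterministic trading rules interacting: "fundamentalists" trade toward
# a fixed fundamental value, "trend-chasers" extrapolate the last price move.
# Their interaction amplifies small random order flow into swings that
# overshoot the fundamental and then snap back -- an emergent property.
FUNDAMENTAL = 100.0   # hypothetical "true" value of the asset
N_FUND, N_CHART = 50, 70
IMPACT = 0.01         # how strongly net demand moves the price
STEPS = 200

def simulate(seed=1):
    random.seed(seed)
    prices = [FUNDAMENTAL, FUNDAMENTAL]
    for _ in range(STEPS):
        p, p_prev = prices[-1], prices[-2]
        # Fundamentalists buy when the asset looks cheap, sell when expensive.
        fund_demand = N_FUND * (FUNDAMENTAL - p) / FUNDAMENTAL
        # Trend-chasers buy into rising prices and sell into falling ones.
        chart_demand = N_CHART * (p - p_prev) / FUNDAMENTAL
        net = fund_demand + chart_demand + random.gauss(0, 1.0)
        prices.append(p * (1 + IMPACT * net))
    return prices

path = simulate()
print(f"min {min(path):.1f}  max {max(path):.1f}  final {path[-1]:.1f}")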

All of these features of markets, especially with the actual human elements added in, can and do lead to widespread harm, from massive financial losses in crashes to widespread starvation and death in the case of depressions and economic collapse.  So what, then, can we do to try to control inefficient emergent properties like irrational bubbles and crashes as supply and demand get out of sync?

To answer this question, let’s briefly turn from economics, and turn instead to the human digestive system, using a metaphor first suggested to me by my dad.  Now, to be clear up front, I am neither a biologist nor a physiologist, so this will be a simplified metaphor to illustrate a point, rather than an examination of the mechanics of digestion.  But the human digestive system has evolved in such a way that it can control the amount of gastric acid in the stomach at any given time.  It does this because, when we were hunters and gatherers, we did not have a reliable source of food, so often our nutrient intake came in brief feasts—after a successful hunt or a profitable foraging effort—punctuated by long periods without food.  Thus the stomach needed to be able to adjust the level of acid, to digest food when it showed up, but avoid hurting itself when there was no food present.  Too much acid without food, and we get ulcers.  Too little acid when there IS food, and we can’t digest efficiently and have to sit around waiting for the food to dissolve slowly.  But the digestive system evolved a way to regulate the level of acid and adjust it as conditions change: keep it low during periods without food, ramp it up as necessary when food shows up, and then lower again to protect itself when the job is done.  This remarkable regulatory system gave us the flexibility to succeed as a species when we didn’t have a reliable food intake, and without it we’d likely have died off long before we figured out agriculture.  It’s not a perfect system: we still sometimes get ulcers, and we still sometimes have digestive problems if we gorge ourselves too fast and the system has to catch up after the fact.  But it works, pretty well, most of the time.

Now take that concept and apply it to the economy.  In this metaphor, food is market demand, and the acid is the money supply: it allows the market to process the demand as necessary.  But much like the stomach acid, a single constant level doesn’t work well.  Too much money supply, and we get massive inflation, and no one can afford anything regardless of demand.  Too little, and no one has money to buy things, trade grinds to a halt, and we might even get deflation (where people expect their money to be worth more in the future, so they’d prefer to hold on to it rather than spend it now).  The money supply, like our metaphorical gastric acid, has to be appropriate to the market’s requirements at the present time.  Therefore, the ability to adjust the money supply is essential to a smoothly functioning economy.  Money supply regulation helps the economy, by and large, by letting the market efficiently process demand through trade, without excessive inflation or deflation.
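For anyone who likes to see the feedback loop spelled out, here’s a deliberately crude toy in Python.  The quantity relation and the adjustment rule are bare-bones assumptions chosen to fit the metaphor, not a description of how the Fed actually conducts policy:

import random

# Toy feedback loop in the spirit of the gastric-acid metaphor: adjust the
# money supply so the price level stays near a target while "demand" (real
# output) ebbs and flows.  Purely illustrative.
TARGET_P = 1.0      # desired price level
ADJUST = 0.5        # how aggressively money supply responds

def run(steps=40, seed=7):
    random.seed(seed)
    money, output = 100.0, 100.0
    for t in range(steps):
        output *= 1 + random.gauss(0.005, 0.02)   # demand fluctuates
        price = money / output                     # crude quantity relation
        # Too much "acid" (inflation) -> tighten; too little -> loosen.
        money *= 1 - ADJUST * (price - TARGET_P)
        if t % 10 == 0:
            print(f"t={t:2d} output={output:6.1f} money={money:6.1f} price={price:5.3f}")

run()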

Now, much like the gastric acid regulatory system, money supply regulation isn’t perfect.  Generally it’s done by central banks like the Federal Reserve, which is a favorite target of free market advocates who are convinced the Fed has made the market worse and who attribute many market problems, such as bubbles and crashes, to its interference.  However, there’s some decent evidence showing that’s not the case at all.  About a year and a half ago, I ran some numbers to see if the Fed has really made things worse.  What I found was that in the United States, prior to the founding of the Federal Reserve, depressions and recessions occurred on average every 4.33 years, lasting an average of 2.16 years each, with an average 22.8% peak-to-trough loss of business activity.  Since the founding of the Federal Reserve in 1913, they have occurred every 5.76 years, lasting an average of 1.08 years, with an average peak-to-trough loss of only 10.1%.  If we look only at the period since the end of the Great Depression—an event which led to the creation of macroeconomic theory and its application by central banks—they drop to an average of 11 months every 6.33 years, with a peak-to-trough loss of a remarkably low 4.2%.  Now, I freely admit this was not a scientific, econometric analysis.  I did not control for confounding variables, so I’m not going to argue the Fed has itself caused the lower volatility of the markets since 1913.  But I’m not alone in noticing this trend: in financial economics, the period of approximately 1950-2007 is known as the “Great Moderation,” and prior to the 2007-08 crash, some financial and macroeconomic theorists firmly believed we’d “solved” the problem of major recessions, largely through high-level monetary and fiscal policy regulation.  Clearly, we have not (there’s a reason the Great Moderation ended in 2007).  But it’s virtually impossible to look at the empirical data and proclaim that the Fed somehow made things worse.  And there’s a very strong indication that policies such as regulating the money supply HAVE dramatically reduced market volatility by matching the metaphorical acid level to the metaphorical food level.

Much like the digestive system, however, it’s not a perfect system.  The experts and the regulators don’t always get it right.  Everyone makes mistakes and every system fails sometimes—especially when trying to control complex systems like economic markets.  Bubbles and crashes have not gone away even with a guiding hand on the wheel of the money supply.  There’s even some strong evidence that several Federal Reserve policies, combined with the independent actions of other regulators, inadvertently fueled the housing market bubble and risky financial practices that led to the 2007-08 Wall Street collapse.  I’m certainly not arguing against regulatory reform.  I’m just saying that the idea that regulation always makes things worse does not stand up to even the most cursory examination.  Sure, it certainly can make things worse—micromanaging policies add an unnecessary and often harmful regulatory burden that makes companies less effective and the market worse overall—but, if applied carefully and gently in the areas it CAN help, it can also reduce volatility and decrease the negative effects when market agents get it wrong and everything goes bad.  Bankers aren’t inherently evil actors who exploit those less fortunate than themselves, but complex systems like financial markets mean that even when everyone is acting with the best intentions, things can go very wrong in a hurry, and effective regulatory systems can help prevent them doing so or mitigate the harm when they do.

The money supply is just one example of a regulatory system that can help the market as a whole, if used carefully.  It’s certainly not the only one—others include limiting collusion and coercive behavior, reducing the impact of asymmetrical information in decision-making so “insiders” can’t take unfair advantage of the rest of the market, and other regulations that act as referees to keep the market as fair as possible.  But there are clearly harmful and wasteful regulations, too, like burdensome tax requirements and unnecessary micromanaging rules.  “Regulation” is such a broad term that no pithy one-line explanation can possibly capture the whole picture, and each regulation needs to be examined individually in the context of how markets actually work to understand whether or not it’s valuable.  Like the title says, it’s complicated.

 


Further reading:

Eric Beinhocker, The Origin of Wealth: Evolution, Complexity, and the Radical Remaking of Economics, Harvard Business School Press, 2006

W. Brian Arthur, Complexity and the Economy, Oxford University Press, 2014

 

 

Tulips, Traffic Jams, and Tempests (Part 1): An Introduction to Complexity

In the early 1600s, during the Dutch Golden Age, tulips—a flower which had been introduced to Europe less than a century before—had become a status symbol, a luxury item coveted by all who wanted to flaunt their wealth.  At the same time, the Dutch were busy inventing modern financial instruments.  This became a dangerous combination when, in the mid-1630s, speculators entered the tulip market and futures prices on tulip bulbs—a durable commodity, given their longevity—began to skyrocket.  At its peak, in early 1637, single bulbs of the most coveted varietals traded for prices 10-15 times the annual salary of a skilled craftsman (roughly the equivalent of $500,000 to $800,000 today).  Even common varietals could sell for double or triple such a craftsman’s salary.  And then, in February 1637, almost overnight, prices dropped by 99.9999%, the market collapsed, the contracts were never honored, and tulip trading effectively stopped.  It’s generally considered the first recorded example of a speculative bubble.  For centuries, theorists have argued various explanations, from outside forces (a bubonic plague outbreak led traders to avoid a routine auction in Haarlem), to rational markets (prices matching demand and never separating wildly from the intrinsic value of the commodity), to legal changes in the futures and options market about the structure of contracts (meaning futures buyers would no longer be obligated to honor the full contract).  The Tulip Mania is one of the most famous stories in economics, and no one really knows why it happened in the first place.

Driving home from work, I (and probably most of you) often notice a curious phenomenon, which most of us just take for granted at this point.  Every evening at rush hour, my commute slows down.  Even when there’s no accident blocking a lane or two, even when the on-ramps are metered to ensure there aren’t dozens of cars trying to merge into the lane at once, even when there’s no dangerous weather, even when everyone is theoretically trying to get home as fast as they safely can, the cars around me on the highway are moving well below the speed limit.  We call this phenomenon “congestion” or a “traffic jam,” and everyone has just learned to deal with it.  Scientists have tried to model traffic for decades, with everything from fluid dynamics to phase theory.  Economists have likened it to “tragedy of the commons” models.  But no one has been able to produce a good mathematical model that matches empirical observations and can explain where it comes from in the first place in the absence of external triggering events.

Every summer, when the water in the north Atlantic is warm enough, and the winds are just right, and the atmospheric pressure is just right, sometimes—about a dozen times a year between June and November—a storm that at any other time would remain just a storm picks up speed and begins cyclonic motion.  And if the conditions are just right (and no one is quite sure what “just right” means), that cyclone will develop into a hurricane.  These massive tempests are to the original storm what the Great Chicago Fire was to the lantern that first lit the flames.  While the normal storm would have made some people wet and maybe knocked some trees over, hurricanes can cause widespread death and destruction among whatever’s in their paths, whether it’s fishing villages in the Caribbean or the New Orleans metropolis.  And, much like the Tulip Mania or traffic jams, while scientists have gotten reasonably good at identifying risk factors, no one is really sure what causes an ordinary storm to become a hurricane.  It requires the perfect combination of the right factors in the right place at the right time.  We can identify the (mostly) necessary conditions, but even when all of them are present, often a hurricane never appears.  Sometimes one appears even when they aren’t all there.  And yet, despite this apparent randomness, it happens like clockwork, a dozen or so times a year in the same six-month timeframe.

Why do we care?  What do Dutch tulip markets, highway congestion, and tropical cyclones have in common?  The answer is that all three are natural features of what we call “complex” systems.  In this series of articles, we’ll look at what complex systems are and how they differ from complicated systems.  Markets, urban commutes, and weather patterns are all examples of different types of complex systems, and complex systems sometimes inherently exhibit unpredictable, wild, seemingly inexplicable behavior like bubbles and crashes, congestion and slowdowns, and out-of-control feedback loops.  Not because anyone wants them, or designs for them, or because someone screwed up and designed the system badly.  But because that’s the nature of complexity.

Complexity is a difficult term to define, even though it’s been widely used in various scientific disciplines for decades.  In the next article of this series we’ll look at the defining characteristics of a complex system.  But for now, we’ll stick to the broad overview.  Complexity is the state in which the components of a system interact in multiple ways and produce “emergence,” or an end state greater than the sum of its parts.  Cars, buses, a multi-lane highway, public transportation, on- and off-ramps, surface streets, traffic lights, pedestrians, and so on are the components of the system.  They all interact in many different ways in a densely interconnected and interdependent system—what happens in one area can have wide-ranging effects across multiple areas of the system as a whole.  And thus, even though everyone hates traffic jams and everyone just wants to get home as efficiently as possible, the traffic jam nonetheless appears, like clockwork, every evening at rush hour.  Congestion is an emergent property of the commuting system.  It is more than the sum of its parts, completely different from the pieces making it up, the cars and the roads and so on.  That’s complexity, in a nutshell.
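You can watch that emergence happen in a toy simulation.  The sketch below is a bare-bones cellular automaton in the style of the Nagel–Schreckenberg traffic model (a standard teaching toy, not a model that resolves the empirical puzzle above): a few simple driving rules on a circular road, with no accident, weather, or bottleneck anywhere, still produce stop-and-go jams.

import random

# Minimal Nagel-Schreckenberg-style traffic model: cars accelerate toward a
# max speed, never hit the car ahead, and occasionally hesitate at random.
# "Phantom" jams emerge from these rules alone.
ROAD_LEN = 100     # cells in a circular road
N_CARS = 35        # density high enough for jams to form
V_MAX = 5          # maximum speed (cells per step)
P_SLOW = 0.3       # chance of random slight braking

def step(pos, vel):
    order = sorted(range(len(pos)), key=lambda i: pos[i])   # cars in road order
    new_pos, new_vel = pos[:], vel[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % len(order)]
        gap = (pos[ahead] - pos[i] - 1) % ROAD_LEN
        v = min(vel[i] + 1, V_MAX, gap)          # accelerate, but never collide
        if v > 0 and random.random() < P_SLOW:   # random human hesitation
            v -= 1
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % ROAD_LEN
    return new_pos, new_vel

def run(steps=100, seed=3):
    random.seed(seed)
    pos = sorted(random.sample(range(ROAD_LEN), N_CARS))
    vel = [0] * N_CARS
    for _ in range(steps):
        pos, vel = step(pos, vel)
    stopped = sum(1 for v in vel if v == 0)
    print(f"mean speed {sum(vel)/N_CARS:.2f}, cars stopped in a jam: {stopped}")

run()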

Contrast this to the other major types of systems, which we call “simple” and “complicated.”  A simple system is something like a simple machine.  A pendulum is a simple system.  A lever is a simple system.  In these, the system is the sum of its parts.  It allows us to do things we could not do without the system, but it is additive.  There are limited interactions, and they operate by well-defined rules.  A complicated system is just the extension of this, composed of many simple systems linked together.  Whereas the defining feature of a complex system is interconnectivity, a complicated system is defined by layers.  Hierarchical systems like military organizations are complicated systems: they may be very difficult to work through to figure out what goes where, but once you do, you can see all the relationships and know what effects an action in one area will have elsewhere.  Many engineering problems deal with complicated systems, and thus humans have become quite skilled at understanding these types of systems: we use mathematical tools like differential equations and Boolean logic, and can distill the system into its essential components, which allows us to manipulate the system and solve problems.  It may be difficult and take an awful lot of math and ingenuity, but at the end of the day, the problems are solvable with such tools.
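For contrast with the traffic toy above, here’s the pendulum as a few lines of Python.  It’s a crude numerical integration of my own, but it makes the point: the entire future of a simple system follows mechanically from its current state and one well-defined rule, with nothing emergent about it.

import math

# Frictionless pendulum: one equation, fully predictable from its current state.
g, length, dt = 9.81, 1.0, 0.001
theta, omega = 0.3, 0.0          # initial angle (radians) and angular velocity

for _ in range(int(2.0 / dt)):   # simulate two seconds
    omega -= (g / length) * math.sin(theta) * dt
    theta += omega * dt

print(f"angle after 2 seconds: {theta:.3f} rad")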

Complex problems, however, are not solvable with the traditional tools we use to address complicated systems, because by their very nature they work in fundamentally different ways.  As I already mentioned, they are defined not by the components and layers, but by the interconnectivity and interdependency of those components.  The connections matter more than the pieces that are connected, because those connections allow for emergent properties greater than the sum of the parts.  They allow for butterfly effects and feedback loops and inexplicable changes.  Complex systems are not all the same—complexity can occur in deterministic physical systems like weather patterns and ocean currents, or in nondeterministic social systems like ecosystems and commodities markets and traffic patterns, and even in deterministic virtual systems like computer simulations.  Because, again, what matters for complexity is the connectivity, not the components.

And because complex problems are not solvable with the tools we use to solve complicated problems, we often get unexpected results, causing even worse problems despite our best intentions.  This fundamental misunderstanding of how complex systems work has led to everything from inner city gridlock to economic collapse.  Researchers have only been studying complexity for about three decades now, but it has revolutionized understanding in fields ranging from computer science and physics to economics and climatology.  It’s amazing what you can do when you start asking the right questions.

In the next article, we’ll look at the characteristics of complex systems and a couple of different types of them.  Then we’ll look at the tools we use to understand them.  And finally, since I’m an economist and this is my blog, we’ll look at the relatively new field of complexity economics and try to understand some of the lessons learned about how markets actually work.

The Age of Hype

“We’re the middle children of history, man. No purpose or place. We have no Great War. No Great Depression. Our Great War’s a spiritual war… our Great Depression is our lives. We’ve all been raised on television to believe that one day we’d all be millionaires, and movie gods, and rock stars. But we won’t. And we’re slowly learning that fact. And we’re very, very pissed off.”

Chuck Palahniuk, Fight Club

Objectively, most of the major fights faced in 2017, on any major front, seem trivial.

ISIS is not an existential threat to the United States, the way Nazi Germany and the Soviet Union once were. Even the Russian security state struggles to do much beyond exerting its influence in spheres it once had locked down and is now content merely to compete in.

On the front of civil rights, we’ve moved into an increasingly nebulous area of oppression vs. oppressors, where the oppression in question is… use of a bathroom? Who can use racial slurs? Perhaps the most hyped-up one, police killings of minorities, is emblematic of this — the actual number of unarmed people killed by police is exceptionally low for a nation of 320 million people.

Economically, we’re told American manufacturing is dying (despite all-time-high manufacturing output), we’re told the banks control everything in a way they never have before (which must be quite mirthful to the ghost of J. P. Morgan), and we’re told that ruin and bankruptcy are imminent on all fronts.

Politically, we’re quick to portray our political opponents as traitors, enemies, sycophants of foes far worse. A quick tour of politically oriented Facebook pages will find you a great host of people content to believe that Democrats are tools of radical socialism — or that Republicans are tools of the far right in a way that suggests an American Reich is imminent.  Blood on the streets is coming any day now, because YouTube videos of Black Bloc anarchists mixing it up with guys in MAGA hats have told us so.

What these issues all have in common, though, is that they’re all blown way out of proportion.

This isn’t to say that none of these are legitimate problems — excepting the accusations of widespread traitors among American politicians, most of these are very real problems.

But they’re not the colossal struggle that was World War II, or the American Civil Rights movement of the 1960s.

Good luck suggesting that to folks with strong opinions on this.

The US has a long tradition of the cult of the rebel — it’s in our national DNA, and our very founding was an act of rebellion. It’s therefore unsurprising that so many Americans like to cast themselves as noble rebels against an evil empire — a common thread from burnt-out hippies to anti-government militias to Alex Jones to Bill Maher. When that’s overlaid with this overplayed sense of urgency, though, a very real problem begins to emerge.

As anyone who’s taken a driving course can tell you, overcorrection is often just as fatal as not correcting. We’re entering an age of McCarthyism — everyone is a secret enemy in some way — they’re complicit in climate change, they’re racist or sexist, they’re authoritarian, they’re out to take your money and rip you off. The palettes differ from political affiliation to political affiliation, but the underlying trend is there.

Perhaps more disturbing at the macro level, and nearly unprecedented in history, is that it has become difficult to differentiate between which issues are important and which are not.

Imagine, for a second, that you are a Congressional Representative. It is completely conceivable that, on any given day, you will receive calls, letters, and requests on, at minimum, five broad-strokes issues: the economy, foreign policy, social policy, government accountability, and campaign promises. Each of these may have twenty or thirty different facets, and many tie together.

How do you prioritize? Can you prioritize? If half of your district is writing about healthcare while the other half is writing you about their taxes being too high and you’ve got a campaign promise about bringing back the Lockheed Plant that you can only get done if your pals in Arkansas get their new Army Reserve Training Center in this year’s defense budget, how do you spend your day? And that’s to say nothing about the recent fear over a recent mass shooting in your state, the impending budget decisions that your party whip expects you to back even though you know that your two biggest donors are completely against several of the provisions…

It’s no surprise that Americans have a low impression of Congress. With so many narratives out there, each thinking it’s top billing, everyone feels marginalized by the government.

The kicker is, the government is, honest to God, doing the best it humanly can given the circumstances. While this line might invite snark from libertarians and anarchists, it is worth considering that it is hard to imagine a form of government that could conceivably use the time of one Congressional session to solve the American healthcare crisis, defeat ISIS, fix immigration (either through reform or better security), make the military more efficient, expand LGBT rights while respecting religious rights, confront automation-displacement, solve economic anxiety, reduce the gap between the rich and the poor, enforce existing environmental law, enhance American education, etc., etc. It is truly a Herculean set of tasks, and empirically more than most previous governments had to oversee.

Our founders planned for a decentralized system, with many of these issues being solved closest to home. Federalism is still the best way to deal with such a problem. What’s concerning, however, is that many Americans are no longer interested in a decentralized approach, especially as it pertains to the president.

Consider that Donald Trump was elected partially on the idea that he would reduce the McCarthyist hydra that is modern political correctness — this, on its face, seems reasonable to want to confront.

But how on Earth would a president be able to confront prevailing social trends? Sure, JFK may be partially responsible for America giving up the hat as a daily wear item, but Presidents generally are not trendsetters or people who adjust the social temperature of the nation. They are executives presiding over the government.

But to those who believe political correctness is an existential threat, it seems reasonable to bank as much as they can on as many different approaches as possible — elect an anti-PC president, force anti-PC legislation through congress, whine about it on Facebook to their friends so everyone knows about the great threat of PC. But consider that any time spent jousting at this windmill is time that is not spent confronting one of the other many problems that other voters prize over this. That drags their confidence down, and this idea that the President is expected to impact it drags the overall national opinion of the President down. That’s not including any partisan backlash from taking one side or another.

So this odd situation presents itself, where the president and congress are attempting to do as the voters asked — but if it’s not quick enough, not executed perfectly, then fickle public opinion turns against the very thing that was requested, and before it can be repealed, the American Voter is already demanding something new (after all, he’s besieged on all sides by supposedly existential threats).

So voters get burnt out. They despair. Their problems are ignored. Their doom is imminent. They turn to drugs or alcohol. They disengage. No one, they think, understands them or cares about them.

The Palahniuk quote at the beginning summarizes their plight well.

Where I struggle is that I don’t have an answer for how to fix this, or even reduce it. I’m not sure it can be fixed. Post-modern politics looks set to continue indefinitely into the future, and only to get worse as more problems pile up, each hyped up to be the next World War II, the next Civil Rights movement.

In an era of choosing your own narrative with all evidence being somehow equal, it is a dark time to be an empiricist.

Note: This post was originally published at Philip S. Bolger’s Medium page.  It is reprinted with his permission.
https://medium.com/@philip.s.bolger/the-age-of-hype-48e0466d6379#.gbt334yus