Blog

Tulips, Traffic Jams, and Tempests (Part 1): An Introduction to Complexity

In the early 1600s, during the Dutch Golden Age, tulips—a flower that had been introduced to Europe less than a century before—had become a status symbol, a luxury item coveted by all who wanted to flaunt their wealth.  At the same time, the Dutch were busy inventing modern financial instruments.  This became a dangerous combination when, in the mid-1630s, speculators entered the tulip market and futures prices on tulip bulbs—a durable commodity, given their longevity—began to skyrocket.  At its peak, in early 1637, single bulbs of the most coveted varietals traded for prices 10-15 times the annual salary of a skilled craftsman (roughly the equivalent of $500,000 to $800,000 today).  Even common varietals could sell for double or triple such a craftsman’s salary.  And then, in February 1637, almost overnight, prices plummeted, the market collapsed, the contracts were never honored, and tulip trading effectively stopped.  It’s generally considered the first recorded example of a speculative bubble.  For centuries, theorists have argued over various explanations, from outside forces (a bubonic plague outbreak led traders to avoid a routine auction in Haarlem), to rational markets (prices matching demand and never separating wildly from the intrinsic value of the commodity), to legal changes in the futures and options market about the structure of contracts (meaning futures buyers would no longer be obligated to honor the full contract).  The Tulip Mania is one of the most famous stories in economics, and no one really knows why it happened in the first place.

Driving home from work, I (and probably most of you) often notice a curious phenomenon, which most of us just take for granted at this point.  Every evening at rush hour, my commute slows down.  Even when there’s no accident blocking a lane or two, even when the on-ramps are metered to ensure there aren’t dozens of cars trying to merge into the lane at once, even when there’s no dangerous weather, even when everyone is theoretically trying to get home as fast as they safely can, the cars around me on the highway are moving well below the speed limit.  We call this phenomenon “congestion” or a “traffic jam,” and everyone has just learned to deal with it.  Scientists have tried to model traffic for decades, with everything from fluid dynamics to phase theory.  Economists have likened it to “tragedy of the commons” models.  But no one has been able to produce a good mathematical model that matches empirical observations and can explain where it comes from in the first place in the absence of external triggering events.

Every summer, when the water in the north Atlantic is warm enough, and the winds are just right, and the atmospheric pressure is just right, sometimes—about a dozen times a year between June and November—a storm that, at any other time would remain just a storm, picks up speed and begins cyclonic motion.  And if the conditions are just right (and no one is quite sure what “just right” means), that cyclone will develop into a hurricane.  These massive tempests are to the original storm what the Great Chicago Fire was to the lantern that first lit the flames.  While the normal storm would have made some people wet and maybe knocked some trees over, hurricanes can cause widespread death and destruction among whatever’s in their paths, whether it’s fishing villages in the Caribbean or the New Orleans metropolis.  And, much like the Tulip Mania or traffic jams, while scientists have gotten reasonably good at identifying risk factors, no one is really sure what causes an ordinary storm to become a hurricane.  It requires the perfect combination of the right factors in the right place at the right time.  We can identify the (mostly) necessary conditions, but even when all of them are present, often a hurricane never appears.  Sometimes one appears even when they aren’t all there.  And yet, despite this apparent randomness, it happens like clockwork, a dozen or so times a year in the same six month timeframe.

Why do we care?  What do Dutch tulip markets, highway congestion, and tropical cyclones have in common?  The answer is that all three are natural features of what we call “complex” systems.  In this series of articles, we’ll look at what complex systems are and how they differ from complicated systems.  Markets, urban commutes, and weather patterns are all examples of different types of complex systems, and sometimes complex systems inherently exhibit unpredictable, wild, seemingly inexplicable behavior like bubbles and crashes, congestion and slowdowns, and out-of-control feedback loops.  Not because anyone wants them, or designs for them, or screwed up and designed the system badly.  But because that’s the nature of complexity.

Complexity is a difficult term to define, even though it’s been widely used in various scientific disciplines for decades.  In the next article of this series we’ll look at the defining characteristics of a complex system.  But for now, we’ll stick to the broad overview.  Complexity is the state in which the components of a system interact in multiple ways and produce “emergence,” or an end state greater than the sum of its parts.  Cars, buses, a multi-lane highway, public transportation, on- and off-ramps, surface streets, traffic lights, pedestrians, and so on are the components of the system.  They all interact in many different ways in a densely interconnected and interdependent system—what happens in one area can have wide-ranging effects across multiple areas of the system as a whole.  And thus, even though everyone hates traffic jams and everyone just wants to get home as efficiently as possible, the traffic jam nonetheless appears, like clockwork, every evening at rush hour.  Congestion is an emergent property of the commuting system.  It is more than the sum of its parts, completely different from the pieces making it up, the cars and the roads and so on.  That’s complexity, in a nutshell.
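
If you want to see emergence for yourself, traffic is the easiest of the three to simulate.  Below is a minimal sketch, in Python, of the Nagel-Schreckenberg cellular-automaton model, a standard toy model of highway traffic.  The parameters and function names are my own illustrative choices, not calibrated to any real road: every simulated driver follows the same simple rules and wants to go as fast as allowed, yet at high enough densities, stop-and-go waves appear on their own.

```python
import random

# Minimal Nagel-Schreckenberg traffic model (a standard toy model; the numbers
# below are illustrative, not tuned to any real highway).  Cars circle a ring
# road; "phantom" jams emerge from nothing more than acceleration, keeping a
# safe gap, and a little random hesitation.

ROAD_LEN = 100   # number of road cells
N_CARS   = 35    # density high enough for jams to appear
V_MAX    = 5     # maximum speed, in cells per tick
P_SLOW   = 0.3   # probability a driver randomly hesitates each tick

positions = sorted(random.sample(range(ROAD_LEN), N_CARS))
speeds = [0] * N_CARS

def step(positions, speeds):
    n = len(positions)
    new_speeds = []
    for i in range(n):
        gap = (positions[(i + 1) % n] - positions[i] - 1) % ROAD_LEN
        v = min(speeds[i] + 1, V_MAX)            # 1. try to speed up
        v = min(v, gap)                          # 2. don't hit the car ahead
        if v > 0 and random.random() < P_SLOW:   # 3. random hesitation
            v -= 1
        new_speeds.append(v)
    new_positions = [(p + v) % ROAD_LEN for p, v in zip(positions, new_speeds)]
    order = sorted(range(n), key=lambda i: new_positions[i])
    return [new_positions[i] for i in order], [new_speeds[i] for i in order]

for _ in range(200):
    positions, speeds = step(positions, speeds)

# Average speed ends up well below V_MAX, with clusters of stopped cars,
# despite no accident, no weather, and every driver trying to go fast.
print("average speed:", sum(speeds) / N_CARS)
print("stopped cars:", speeds.count(0))
```

No one programmed a traffic jam into those rules; the jam is an emergent property of many simple interactions, which is the whole point.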

Contrast this with the other major types of systems, which we call “simple” and “complicated.”  A simple system is something like a simple machine.  A pendulum is a simple system.  A lever is a simple system.  In these, the system is the sum of its parts.  It allows us to do things we could not do without the system, but it is additive.  There are limited interactions, and they operate by well-defined rules.  A complicated system is just the extension of this, composed of many simple systems linked together.  Whereas the defining feature of a complex system is interconnectivity, a complicated system is defined by layers.  Hierarchical systems like military organizations are complicated systems: they may be very difficult to work through and figure out what goes where, but when you figure it out, you can see all the relationships and know what effects an action in one area will have elsewhere.  Many engineering problems deal with complicated systems, and thus humans have become quite skilled at understanding these types of systems: we use mathematical tools like differential equations and Boolean logic, and can distill the system into its essential components, which allows us to manipulate the system and solve problems.  It may be difficult and take an awful lot of math and ingenuity, but at the end of the day, the problems are solvable with such tools.

Complex problems, however, are not solvable with the traditional tools we use to address complicated systems, because by their very nature they work in fundamentally different ways.  As I already mentioned, they are defined not by the components and layers, but by the interconnectivity and interdependency of those components.  The connections matter more than the pieces that are connected, because those connections allow for emergent properties greater than the sum of the parts.  They allow for butterfly effects and feedback loops and inexplicable changes.  Complex systems are not all the same—complexity can occur in deterministic physical systems like weather patterns and ocean currents, or in nondeterministic social systems like ecosystems and commodities markets and traffic patterns, and even in deterministic virtual systems like computer simulations.  Because, again, what matters for complexity is the connectivity, not the components.

And because complex problems are not solvable with the tools we use to solve complicated problems, we often get unexpected results, causing even worse problems despite our best intentions.  This fundamental misunderstanding of how complex systems work has led to everything from inner city gridlock to economic collapse.  Researchers have only been studying complexity for about three decades now, but it has revolutionized understanding in fields ranging from computer science and physics to economics and climatology.  It’s amazing what you can do when you start asking the right questions.

In the next article, we’ll look at the characteristics of complex systems and a couple of different types of them.  Then we’ll look at the tools we use to understand them.  And finally, since I’m an economist and this is my blog, we’ll look at the relatively new field of complexity economics and try to understand some of the lessons learned about how markets actually work.

The Age of Hype

“We’re the middle children of history, man. No purpose or place. We have no Great War. No Great Depression. Our Great War’s a spiritual war… our Great Depression is our lives. We’ve all been raised on television to believe that one day we’d all be millionaires, and movie gods, and rock stars. But we won’t. And we’re slowly learning that fact. And we’re very, very pissed off.”

Chuck Palahniuk, Fight Club

Objectively, most of the major fights faced in 2017, on any front, seem trivial.

ISIS is not an existential threat to the United States, the way Nazi Germany and the Soviet Union once were. Even the Russian security state struggles to do much beyond exerting its influence in spheres it once had locked down and is now content merely to compete in.

On the front of civil rights, we’ve moved into an increasingly nebulous area of oppressed vs. oppressors, where the oppression in question is… use of a bathroom? Who can use racial slurs? Perhaps the most hyped-up one, police killings of minorities, is most emblematic of this — the actual number of unarmed people killed by police is exceptionally low for a nation of 320 million people.

Economically, we’re told American manufacturing is dying (despite all-time-high manufacturing output), we’re told the banks control everything in a way they never have before (which must be quite mirthful to the ghost of J. P. Morgan), and we’re told that ruin and bankruptcy are imminent on all fronts.

Politically, we’re quick to portray our political opponents as traitors, enemies, sycophants of foes far worse. A quick tour of political-leaning Facebook pages will find you a great host of people content to believe that Democrats are tools of radical socialism — or that Republicans are the tools of the far right in a way that suggests an American Reich is imminent.  Blood on the streets is coming any day now, because YouTube videos of Black Bloc anarchists mixing it up with guys in MAGA hats have told us so.

What these issues all have in common, though, is that they’re all blown way out of proportion.

This isn’t to say that none of these are legitimate problems — excepting the accusations of widespread traitors among American politicians, most of these are very real problems.

But they’re not the colossal struggle that was World War II, or the American Civil Rights movement of the 1960s.

Good luck suggesting that to folks with strong opinions on this.

The US has a long tradition of the cult of the rebel — it’s in our national DNA and our very founding was an act of rebellion. It’s therefore unsurprising that so many Americans like to cast themselves as noble rebels against an evil empire — a common thread from burnt-out hippies to anti-government militias to Alex Jones to Bill Maher. When that’s overlaid with this overplayed sense of urgency, though, there is a very real problem that is only starting to emerge.

As anyone who’s taken a driving course can tell you, overcorrection is often just as fatal as not correcting. We’re entering an age of McCarthyism — everyone is a secret enemy in some way — they’re complicit in climate change, they’re racist or sexist, they’re authoritarian, they’re out to take your money and rip you off. The palettes differ from political affiliation to political affiliation, but the underlying trend is there.

Perhaps more disturbing at the macro level, and nearly unprecedented in history, is that it has become difficult to differentiate between which issues are important and which are not.

Imagine, for a second, that you are a Congressional Representative. It is completely conceivable, on a daily basis, that you will receive calls, letters, and requests on, at minimum, five broad-strokes issues: the economy, foreign policy, social policy, government accountability, and campaign promises. Each of these may have twenty or thirty different facets, and many tie together.

How do you prioritize? Can you prioritize? If half of your district is writing about healthcare while the other half is writing you about their taxes being too high and you’ve got a campaign promise about bringing back the Lockheed Plant that you can only get done if your pals in Arkansas get their new Army Reserve Training Center in this year’s defense budget, how do you spend your day? And that’s to say nothing of the fear over a recent mass shooting in your state, the impending budget decisions that your party whip expects you to back even though you know that your two biggest donors are completely against several of the provisions…

It’s no surprise that Americans have a low impression of Congress. With so many narratives out there, each convinced it deserves top billing, everyone feels marginalized by the government.

The kicker is, the government is, honest to God, doing the best it humanly can given the circumstances. While this line might invite snark from libertarians and anarchists, it is worth considering that it is hard to imagine a form of government that could conceivably use the time of one Congressional session to solve the American healthcare crisis, defeat ISIS, fix immigration (either through reform or better security), make the military more efficient, expand LGBT rights while respecting religious rights, confront automation-displacement, solve economic anxiety, reduce the gap between the rich and the poor, enforce existing environmental law, enhance American education, etc., etc. It is truly a Herculean set of tasks, and empirically more than most previous governments had to oversee.

Our founders planned for a decentralized system, with many of these issues being solved closest to home. Federalism is still the best way to deal with such a problem. What’s concerning, however, is that many Americans are no longer interested in a decentralized approach, especially as it pertains to the president.

Consider that Donald Trump was elected partially on the idea that he would reduce the McCarthyist hydra that is modern political correctness — this, on its face, seems a reasonable thing to want to confront.

But how on Earth would a president be able to confront prevailing social trends? Sure, JFK may be partially responsible for America giving up the hat as a daily wear item, but Presidents generally are not trendsetters or people who adjust the social temperature of the nation. They are executives presiding over the government.

But to those who believe political correctness is an existential threat, it seems reasonable to bank as much as they can on as many different approaches as possible — elect an anti-PC president, force anti-PC legislation through Congress, whine about it on Facebook to their friends so everyone knows about the great threat of PC. But consider that any time spent jousting at this windmill is time that is not spent confronting one of the many other problems that other voters prize over this. That drags those voters’ confidence down, and the expectation that the President should fix it drags the overall national opinion of the President down. That’s not including any partisan backlash from taking one side or another.

So this odd situation presents itself, where the president and Congress are attempting to do as the voters asked — but if it’s not quick enough, not executed perfectly, then fickle public opinion turns against the very thing that was requested, and before it can be repealed, the American Voter is already demanding something new (after all, he’s besieged on all sides by supposedly existential threats).

So voters get burnt out. They despair. Their problems are ignored. Their doom is imminent. They turn to drugs or alcohol. They disengage. No one, they think, understands them or cares about them.

The Palahniuk quote at the beginning summarizes their plight well.

Where I struggle is that I don’t have an answer for how to fix, or even reduce, this. I’m not sure it can be fixed. Post-modern politics looks set to continue indefinitely into the future, and only to get worse as more problems pile up, each hyped up to be the next World War II, the next Civil Rights movement.

In an era of choosing your own narrative with all evidence being somehow equal, it is a dark time to be an empiricist.

Note: This post was originally published at Philip S. Bolger’s Medium page.  It is reprinted with his permission.
https://medium.com/@philip.s.bolger/the-age-of-hype-48e0466d6379#.gbt334yus

In Memoriam: Kenneth J. Arrow

A personal hero of mine passed away yesterday.

Ken Arrow was an economist, best known for his work on General Equilibrium (also called the Arrow-Debreu Model), for which he became the youngest person ever to win the Nobel Prize in Economics in 1972.  He also did extensive work on social choice theory, creating “Arrow’s Impossibility Theorem,” which proved mathematically that no voting system can aggregate individual rankings into a consistent social ordering while satisfying a basic set of fairness criteria when there are more than two options.  He developed the Fundamental Theorems of Welfare Economics, mathematically confirming Adam Smith’s “Invisible Hand” hypothesis in ideal markets.  Essentially, he spent the first part of his extensive career formalizing the mathematics underlying virtually every rationalist model in the latter half of the 20th century, providing the basis for Neoclassical Theory for decades.  And then, presented with overwhelming evidence that actual markets do not adhere to general equilibrium behavior, rather than allowing himself to be trapped by the elegance of his own theories, he spent the rest of his life trying to understand why.

He worked on endogenous growth theory and information economics, trying to understand phenomena that traditional rational models could not predict.  He was an early and outspoken advocate for modern behavioralist models after they first came to his attention at a seminar presentation by Richard Thaler in the late 1970s. He was a close friend and collaborator of W. Brian Arthur, the father of Complexity Economics, enthusiastically recognizing the potential of complexity theory to revolutionize our understanding of market dynamics, even though that would mean his own Nobel Prize-winning theory about how markets work was completely wrong.  Ken Arrow was never afraid to listen to new evidence and admit the possibility of his own errors and misunderstandings. When he saw something that explained the evidence better, he never hesitated to pursue it wherever it led.

Because he was a scientist. I know no higher praise.

Farewell, Professor.

On Rationality (Economic Terminology, #1)

As an economist, I often find myself talking past people when trying to explain complicated economic theories.  Surprisingly, this is less because of the in-depth knowledge required, and far more because we aren’t using the same terminology.  Many words used in economic contexts have very different meanings than their common usage.  Utility and value, for one.  Margin, for another.  And perhaps the most common source of confusion is the concept of rationality.

In common usage, “rational” basically means “reasonable” or “logical.”  The dictionary definition, according to a quick Google check, is “based on or in accordance with reason or logic.”  Essentially, in common usage a rational person is someone who thinks things through and comes to a reasonable or logical conclusion.  Seems simple enough, right?

But not so in economics.  Traditional economic theory rests on four basic assumptions–rationality, maximization, marginality, and perfect information.  And the first of those, rationality, is the single biggest source of confusion when I try to discuss economic theory with non-economists.

To an economist, “rational” does not in the slightest sense mean “reasonable” or “logical.”  A rational actor is merely one who has well-ordered and consistent preferences.  That’s it.  That’s the entirety of economic rationality.  An economically rational actor who happens to prefer apples to oranges, and oranges to bananas, will never choose bananas over apples when given a choice between the two.  Such preferences can be strong (i.e., always prefers X to Y) or weak (i.e., indifferent between X and Y), but they are always consistent.  And those preferences can be modeled as widely or narrowly as you choose.  It could just be their explicit choices among a basket of goods, or you could incorporate social and situational factors like altruism, familial bonds, and cultural values.  They can be context dependent–one might prefer X to Y in Context A, and Y to X in Context B, but then one will always prefer X to Y in Context A and Y to X in Context B. It doesn’t matter: what their preferences actually are is irrelevant, no matter how ridiculous or unreasonable they might seem from the outside, so long as they are well-ordered and consistent.
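
To make that concrete, here is a tiny sketch of my own (purely illustrative, not any standard library) of what “well-ordered and consistent” means in practice.  Given the pairwise choices we observe an actor making, economic rationality only requires that those choices never contradict one another or chain into a cycle like apples over oranges, oranges over bananas, bananas over apples.  (Indifference is left out to keep it short.)

```python
# A small illustration of my own: "rational" in the economist's sense just means
# an actor's pairwise choices admit a consistent ordering, i.e. they never
# contradict each other and never form a cycle.

def is_rational(choices):
    """choices: iterable of (preferred, rejected) pairs observed from one actor
    in one context.  Returns True if a consistent ordering could produce them."""
    prefers = {}                      # item -> set of items it was chosen over
    for a, b in choices:
        prefers.setdefault(a, set()).add(b)
        prefers.setdefault(b, set())

    # A consistent ordering exists iff the "chosen over" graph has no cycle
    # (a direct contradiction like (X, Y) and (Y, X) is just a 2-cycle).
    visiting, done = set(), set()

    def has_cycle(node):
        if node in done:
            return False
        if node in visiting:
            return True
        visiting.add(node)
        if any(has_cycle(nxt) for nxt in prefers[node]):
            return True
        visiting.discard(node)
        done.add(node)
        return False

    return not any(has_cycle(item) for item in prefers)

# Consistent: apples over oranges, oranges over bananas, apples over bananas.
print(is_rational([("apple", "orange"), ("orange", "banana"), ("apple", "banana")]))  # True
# Inconsistent: the same actor also picks bananas over apples.
print(is_rational([("apple", "orange"), ("orange", "banana"), ("banana", "apple")]))  # False
```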

This isn’t to say preferences can’t change for a rational actor.  They can, over time.  But they’re consistent, at the time a decision is made, across all time horizons–if you give a rational actor the choice between apples and bananas, it doesn’t matter whether they will receive the fruit now or a day from now.  They will always choose apples, until their preferences change overall.

An irrational actor, then, is by definition anyone who does not have well-ordered and consistent preferences.  If an actor prefers apples to bananas when faced with immediate reward, but bananas to apples when they won’t get the reward until tomorrow, they’re economically irrational.  And the problem is, of course, that most of us exhibit such irrational preferences all the time.  For proof, we don’t have to look any further than our alarm clocks.

A rational actor prefers to get up at 6:30 AM, so he sets his alarm for 6:30 AM, and wakes up when it goes off.  End of story.  An irrational actor, on the other hand, prefers to get up at 6:30 AM when he sets the alarm, but when it actually goes off, he hits the snooze button a few times and gets up 15 minutes later.  His preferences have flipped–what he preferred when he set the alarm and what he preferred when it came time to actually get up were very different, and not because his underlying preferences changed in the meantime.  Rather, he will make the same decisions day after day after day, because his preferences aren’t consistent over different time horizons.  The existence of the snooze button is due to the fact that human beings do not, in general, exhibit economically rational preferences.  We can model such behavior with fancy mathematical tricks like quasi-hyperbolic discounting, but such preferences are by definition irrational in economic terminology.
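
For the curious, here is what that quasi-hyperbolic (“beta-delta”) discounting looks like with numbers.  Everything below is made up purely to make the reversal visible; none of it is an estimate from any study.  The idea is that any payoff that isn’t immediate gets hit with an extra discount factor beta, which is what makes the snooze button so tempting at 6:30 even though, the night before, the unhurried morning looked like the better deal.

```python
# A worked sketch of quasi-hyperbolic ("beta-delta") discounting.  All numbers
# are illustrative, chosen only to make the preference reversal visible.

BETA, DELTA = 0.3, 0.98   # heavy extra discount on anything that isn't "right now"

def value(reward, periods_away):
    """Present value of a reward arriving `periods_away` periods from the decision."""
    if periods_away == 0:
        return reward
    return BETA * (DELTA ** periods_away) * reward

SNOOZE_REWARD = 4    # 15 more minutes of sleep, enjoyed the moment you snooze
ON_TIME_REWARD = 10  # an unhurried morning, enjoyed a couple of periods after rising

# The night before (the alarm is 8 periods away, the unhurried morning 10 away),
# the actor prefers to get up on time...
print(value(SNOOZE_REWARD, 8), value(ON_TIME_REWARD, 10))   # ~1.02 vs ~2.45

# ...but at 6:30 AM, when snoozing pays off immediately, the ranking flips.
print(value(SNOOZE_REWARD, 0), value(ON_TIME_REWARD, 2))    # 4.00 vs ~2.88
```

The ranking flips between the two decision points, and that flip, not the snoozing itself, is what makes our sleeper economically irrational.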

And that’s why behavioral economics is now a major field–at some point between Richard Thaler’s Ph.D. research in the late 1970s and his tenure as the President of the American Economic Association a couple of years ago, most economists began to realize the limitations of models based on the unrealistic assumption of economic rationality.  And so they began trying to model decision making more in keeping with how people actually act.  Thaler last year predicted that “behavioral economics” will cease to exist as a separate field within three decades, because virtually all economics is now moving towards a behavioral basis.

In future editions of this series, we’ll look at other commonly misunderstood economic terms, including the other three assumptions I mentioned: marginality, maximization, and perfect information.

Dumbocrats and Republican’ts (Part 1): The Trouble with Dogma

American politics is currently beset by a problem.  Well, many, but for right now we’re going to focus on just one: polarization.  There’s a perception, with some evidence, that American politics is currently more polarized than at any other point in recent memory—certainly since the 1960s.  And this is a problem, because polarization leads to gridlock, to civil unrest, to social breakdowns, and even, in extreme cases, to civil war.  Religious polarization in Christian Europe led to a series of conflicts known as the Wars of Religion—the most famous being the Thirty Years’ War, in which more than 8 million people died.  Polarization over slavery and trade issues between Northern and Southern states led to the American Civil War in the 1860s.  Most of us can agree that polarization is, in general, a bad thing for a society.  The question, though, is what to do about it.  And to answer that, first we have to look at what polarization is and what it is not.  Only then can we start to identify potential routes to solve the problem.

Let’s start with what it is not.  Polarization is not merely a particularly widespread and vehement disagreement.  Disagreement just means that different people have drawn different conclusions.  This, by itself, is healthy.  Societies without disagreement drive headlong into madness, fueled by groupthink and demagoguery.  Fascist and totalitarian societies suppress dissent because it slows or stops their efforts to achieve their perfect visions. Disagreement arises naturally—highly intelligent people, even those with a shared culture, can look at the same evidence, in the exact same context, and come to radically different conclusions because they weight different cultural values more highly than others, because they prioritize different goals over others, because they have different life experiences with which to color their judgements.  That’s healthy.  The discussions and debates arising from such disagreements are how groups and societies figure out how best to proceed in a manner that supports the goals and values of the group as a whole.

So if polarization isn’t just disagreement, what is it?  Polarization is a state of affairs where the fact that other groups disagree with your group becomes more important than the source of that disagreement.  Essentially, polarization is where disagreeing groups are no longer willing to discuss and debate their disagreements and come to a compromise that accounts for everyone’s concerns, but instead everyone draws their line in the sand and refuses to budge.  Polarization is what occurs when we stop recognizing that disagreement is a natural and healthy aspect of a diverse society, and we start treating our viewpoints as dogma rather than platforms.  Platforms can be adjusted in the face of new evidence and reasonable arguments.  People who subscribe to a platform can compromise with people who subscribe to other platforms, for the mutual good of all involved.  But dogma is immutable and unchangeable.  People who subscribe to dogma cannot compromise, no matter what evidence or arguments they encounter.  Their minds are made up, and they will not be swayed.

Polarization occurs when dogma sets in.  Because when your beliefs are dogmatic, anyone who disagrees is no longer a fellow intelligent human being who just happens to have slightly different values and experiences coloring their beliefs.  When your beliefs are dogmatic, anyone who disagrees is at best an idiot who just doesn’t understand, and at worst a heretic who must be purged for the safety of your dogma.  When your beliefs are dogmatic, there’s no longer any value hearing what the other side has to say, and instead you turn to echo chambers that do nothing but reinforce the dogma you already believe.

Where does dogma come from?  Why do people subscribe to dogmatic beliefs when there is so much information available in the modern world?  It’s largely because critical thinking is difficult.  It’s not that people are stupid, but rather that when there IS so much information available, it’s hard to process it and tell the wheat from the chaff without a filter.  And dogmatic beliefs, distilled to simple talking points by those echo chambers like media sources and groups of friends and family, provide just such a filter with which people can try to understand a highly complex world by fitting it to their worldviews.  Dogma is comfortable.  Dogma makes sense.  Dogma tells us why we’re right, why our values are the right values and our beliefs are the right beliefs.  And that’s not to mention the draw of being part of the in-group: choosing and subscribing to a dogma lets you fit in with a crowd and gain respect at the low, low cost of merely repeating the same soundbites over and over again.  It’s self-reinforcing, especially in the world of modern 24-hour news networks, a thousand “news” websites to cater to any given belief system, and social media networks that let us surround ourselves with comfortable consensus and block those who might question our beliefs.  It’s no real mystery why people are drawn to dogmatic beliefs—the very things that could show them the error of their ways are the reasons they prefer their heads in the sand.

But most people would agree that dogma is bad, that critical thinking is good, even when they’re manifestly dogmatic themselves.  How can they be comfortable with that cognitive dissonance?  Well, quite simply, because they don’t even recognize it.  It’s much easier to identify dogmatic beliefs in others than in ourselves.  We all like to think we’ve thought through our positions and come to the right conclusions through logic and evidence, even when we quite clearly haven’t.  Hence the phenomenon of conservatives referring to “dumbocrats” and “libtards,” and liberals responding with “republican’ts” and “fascists.”  I’ve lost track of how many times I’ve seen conservatives assert liberalism is a mental disorder, and liberals say the exact same about conservatism, both sides laughing from their supposed superior mental position.  Self-reflection is actually incredibly difficult.  It takes a lot of effort.  It’s uncomfortable.  So we don’t do it.

Now that we’ve established what dogma is, where it comes from, and why people subscribe to it despite professing otherwise, in the next post in this series we’ll look at what we can do about it.

Well, It’s Complicated (#1): The Dose Makes the Poison

Minimum wages are a rather contentious subject in American politics.  On the one side, those in favor tout their benefits to low-income labor, such as increased purchasing power (see, for example, the current “living wage” movement pushing for a federal $15/hour minimum). On the other, those opposed cite “basic economics” to argue that increased wages lead to increased unemployment and inflation, thus resulting in no real-income benefit while simultaneously hurting those laid off or unable to find a job as hiring decreases.  The problem, of course, is that both arguments are hugely oversimplified and thus easy for the opposing sides to attack and “debunk.”  So it’s hard to tell who’s right and who’s wrong without a good foundation in economic theory and empirical research.  In order to figure it out, let’s take a closer look at each argument, and see where that theory and empirical evidence leads us.

First, let’s examine the pro-minimum wage camp.  There are several arguments, but in general they boil down to a couple main points.  One, increased wages lead to increased purchasing power, which in turn benefits not only the workers with higher real wages, but the economy as a whole as their spending has a multiplier effect.  Essentially, this means they get more money for the same labor, so they spend more money, which increases demand across the market, increasing supply, and everyone is wealthier.  It’s essentially the same argument as that in favor of tax cuts, just focused slightly differently: increased income results in increased spending, making everyone richer in real terms.  Two, paying workers a “living wage” decreases the need to subsidize them through government assistance, freeing up that money to be spent elsewhere (either by cutting taxes or by redirecting the funds to other projects).  Basically, if workers can support themselves without government subsidies, it benefits everyone.  And three, it’s morally the right thing to do, as we have a social imperative to lift the poorest members of our society out of the struggles of poverty.

Perhaps surprisingly to those who cite “basic economics” to refute these claims, there’s actually some decent economic support for them.  In fact, theorists and empirical researchers have both found evidence in favor of the benefits of low-level minimum wages.  To understand why, I’m going to take a second to briefly explain where minimum wages fit into conventional economic theory, because there are a few concepts that we need to clarify.

Wages, despite the way most people think of them, are to economists nothing more than another word for a price.  Specifically, they’re the price of labor: the worker plays the role of producer and seller, the employer plays the role of buyer and consumer, and the product is the worker’s labor, which is just a service like any other.  Thus, the worker owns the “supply curve” and the employer owns the “demand curve,” and the equilibrium price of labor—the wage—is the meeting point between the two, just like everyone sees in the standard supply-demand curves in their introductory economics courses in high school and college.

But there’s a key part that those introductory econ teachers often leave out when explaining supply-demand curves and equilibrium prices.  Namely, how we actually determine what that price is in the real world—given that we can’t see the supply and demand on a handy graph when trying to buy or sell a product or a service, how do prices actually get set?  The answer is, generally speaking, in one of two ways.  Either the seller sets a price, and potential buyers decide whether that price is lower than the maximum they would each be willing to pay for the good or service in question (and sellers adjust up or down according to the feedback they get from how many people are buying), or the buyer makes an offer, and the seller decides whether that offer is higher than the lowest they’d be willing to accept to provide the good or service (and buyers, then, adjust up or down according to the feedback they get from whether sellers will sell to them or not).  Sometimes this is rather abstract, like at a grocery store, where there’s no direct interaction over prices except the choice whether or not to buy at a given price point.  Sometimes it’s a real-time interaction, like haggling at a flea market.  But whether it’s a negotiation or a simple binary purchase decision, in the absence of major shocks to either supply or demand, the price generally reflects a stable attractor that we model as an “equilibrium.”  (Of course, in reality the price is almost never at a true equilibrium, and that attractor can sometimes shift unexpectedly despite no major shocks, but we can get into the details of sticky pricing and status quo biases and endowment effects and complexity and all those other fun quirks another time.  The standard equilibrium model is a good enough generalization for our purposes here.)

And here’s another key concept stemming from that—as I mentioned, whether the seller or the buyer is setting the price, those choosing whether to make the transaction are weighing that price against the highest they’d pay (if the buyer is choosing) or the lowest they’d accept (if the seller is choosing).  That difference, between the agreed upon price and each party’s “reservation price,” is called the surplus.  The difference between the lowest amount the seller would accept and the actual transaction price is the “producer’s surplus.”  The difference between the highest amount the buyer would pay and the actual transaction price is the “consumer’s surplus.”  The two combined is the “total surplus” of the transaction—how much everyone is better off for having completed the deal.  If no one gets a surplus—if no one gains value from the trade—there’s no reason for anyone to make the trade.  Generally, freely made transactions are a win-win for everyone involved, or at least a win for someone and a loss for nobody.  Simple enough, right?

Wages typically are set in the latter manner I described: the buyer (employer) makes an offer, and the seller (worker) chooses whether it’s high enough for them to accept.  Sometimes there’s room for negotiation, sometimes not, but regardless of the final offer, it will never be higher than the value the employer places on the potential labor of that particular worker in that particular job, and the worker’s choice to accept or keep looking elsewhere depends on whether it’s higher than the lowest they’re willing to accept to do the job in question.  Thus, just like with any other price, there’s a surplus for both the producer and the consumer, and that’s the net benefit of the transaction.

With this concept of wages as just another price, then, we can see that minimum wages are a governmentally-imposed price control, specifically a price “floor” (meaning prices cannot go below the established minimum, regardless of supply and demand dynamics).  So what does that do to our standard model of equilibrium pricing and surplus?  Well, it depends where the floor is relative to the equilibrium price.  If market wages for a given worker in a given position with a given skill set are already higher than the new minimum, it has little to no effect at all.  But as the minimum increases, it starts to have very significant effects.

Once a price floor passes above the theoretical equilibrium price, it starts cutting into demand—consumers are no longer willing to purchase as much of the product at that price point, because now they’re getting less of a “consumer surplus.”  Because they’re no longer selling as much, producers can lose surplus, too—they get more from each sale, but they make fewer sales.  If the price floor continues increasing, the extra benefit from each individual sale gets drowned out by the lower and lower number of sales.  This lost surplus—the amount by which everyone in the transaction is worse off—is called “deadweight loss.”  It’s an inefficiency, which in economic terms means the market is no longer making everyone as well off as it theoretically could.
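
To put rough numbers on that, here is a sketch using a made-up linear labor market.  The demand and supply lines, the wage floors, and the function names are all my own illustrative choices, not estimates of any real market; the point is only to show how employment, the workers’ (producer) surplus, the employers’ (consumer) surplus, and the deadweight loss move as a wage floor rises.

```python
# Illustrative numbers only: a made-up linear labor market, not calibrated to
# any real data.  Demand (what employers will pay for the q-th hour of labor)
# and supply (the wage the q-th hour requires) are straight lines; we compute
# what a wage floor does to employment, the two surpluses, and deadweight loss.

A, B = 20.0, 0.10   # demand:  wage_demanded(q) = A - B*q
C, D = 4.0, 0.10    # supply:  wage_required(q) = C + D*q

def outcomes(wage_floor=None):
    q_eq = (A - C) / (B + D)                 # equilibrium quantity of labor
    w_eq = A - B * q_eq                      # equilibrium wage
    if wage_floor is None or wage_floor <= w_eq:
        q, w = q_eq, w_eq                    # a floor at or below equilibrium binds nothing
    else:
        w = wage_floor
        q = (A - w) / B                      # employers buy less labor at the higher wage
    worker_surplus   = q * (w - C) - D * q * q / 2     # area between wage and supply line
    employer_surplus = q * (A - w) - B * q * q / 2     # area between demand line and wage
    return q, w, worker_surplus, employer_surplus

q0, w0, ps0, cs0 = outcomes()                # no minimum wage
for floor in (12, 14, 16, 18):
    q, w, ps, cs = outcomes(floor)
    dwl = (ps0 + cs0) - (ps + cs)            # total surplus lost to the floor
    print(f"floor={floor:>2}  employment={q:5.1f}  worker surplus={ps:6.1f}  "
          f"employer surplus={cs:6.1f}  deadweight loss={dwl:5.1f}")
```

With these made-up numbers the equilibrium wage is 12.  A floor of 14 or 16 costs some employment and creates some deadweight loss but still raises workers’ total surplus; a floor of 18 leaves even the workers worse off in aggregate.  That pattern is exactly the argument in the next few paragraphs.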

This inefficiency is the basis of the argument against minimum wages.  Opponents point to that inefficiency, saying “Look, you’re just making everyone worse off!”  But what they’re missing, and why I say there’s some decent economic support for low-level minimum wages, is that the deadweight loss isn’t the whole picture.  They ignore that part of the producer surplus that *grows* as the price floor increases.  Since in our model producers are workers, what the whole picture tells us is that yes, unemployment will increase for minimum wage employees, but also those who now get such jobs will be better off than they would be otherwise.  And that’s exactly what the empirical evidence shows as well.  The overwhelming bulk of studies about the effects of minimum wages have found that unemployment among minimum wage workers increases as minimum wages rise, but also that those with jobs see a higher real income that results in an improved standard of living.  So the pro-minimum wage camp’s claims aren’t as easily refuted by “basic economics” as the opposition often claims.  Those workers DO have higher real incomes.  We DO see a multiplier effect as their purchasing boosts local markets.  We DO see a reduced need for subsidies among such workers (assuming they aren’t in that subset with skewed real incomes from the so-called “welfare cliff,” but that’s a side product of a poorly designed subsidy system and a topic for another time).  So it would seem the evidence is in favor of minimum wages after all, right?

This is where it gets, well, complicated.  Because remember how I said “low-level minimum wages”?  That’s the part proponents often seem to forget or misunderstand.  As I mentioned, as a price floor continues increasing, the amount of surplus added from each individual sale is rapidly drowned out by the amount of surplus lost as sales decrease.  For labor markets, this means that low level minimum wages can boost unskilled workers’ real incomes and standards of living, at the cost of small increases in unemployment.  Small increases in unemployment can be effectively mitigated with various social support programs (like those subsidies we mentioned), so overall most everyone still wins.  But as minimum wages continue increasing higher and higher above the theoretical equilibrium, the positive effects apply to fewer and fewer workers and unemployment increases and increases, and soon the minimum wage’s negative effects have swamped its positives and everyone is worse off than they started.

Soon, employers are dealing with higher labor costs, so they cut back new hires and cut costs elsewhere, maybe even laying off current employees or cancelling planned investments that would have provided jobs elsewhere.  The unemployment rate among low skilled and unskilled workers skyrockets.  Those workers who CAN find a minimum wage job may have some increased real income, but they have to wait much longer for raises and often see cutbacks in benefits.  Additionally, since standard of living is highly dependent on one’s community, in many cases that increased real income doesn’t lead to much improvement in quality of life given the high unemployment and low business investment around them.

The dose makes the poison.  At a low level, minimum wages can help far more than they hurt.  They can increase real incomes for the vast majority of unskilled workers, they can boost productivity, and they can even fuel economic growth through multiplier effects of the added spending in the market.  But seeing those good effects, well-intentioned but misguided advocates push for higher and higher minimums, following the “more is better” theory, not realizing that the negatives will quickly overwhelm the positives and the minimum wage will hurt the very workers it was intended to help.  If one is going to implement a minimum wage, it needs to be carefully watched to ensure it stays in the sweet spot where it does good, but doesn’t creep up so high above the market equilibrium that the medicine turns to poison.

So who’s right in the minimum wage debate?  Well, that goes back to the third point raised by minimum wage advocates–that we have a moral responsibility to help lift the poorest members of our society from the struggles of poverty.  I’m not going to pretend to know whether that’s true, as it depends entirely on your own values and your own political philosophy.  But, as explained above, if you happen to value government intervention to help the poor, a minimum wage CAN be an effective tool in that effort.  But only if those implementing it remember that the medicine can just as easily become a poison if not used cautiously and monitored carefully to ensure it keeps pace with natural market dynamics.  So neither side is truly right or wrong, at least in terms of the economics.  It depends entirely on the context and circumstances, and what you value.  As the title of the feature says, it’s complicated.

________________________
Sources for claims of empirical evidence:

Neumark, David. “The Effects of Minimum Wages on Employment.” 2015.

Liu, Shanshan et al. “Impact of the Minimum Wage on Youth Labor Markets.” 2015.

Congressional Budget Office. “The Effects of a Minimum-Wage Increase on Employment and Family Income.” 2014.

Litwin, Benjamin. “Determining the Effect of the Minimum Wage on Income Inequality.” 2015.

What Is Antistupid?

“There are things which cannot be taught in ten easy lessons, nor popularized for the masses; they take years of skull sweat. This be treason in an age when ignorance has come into its own and one man’s opinion is as good as another’s. But there it is…The world is what it is—and doesn’t forgive ignorance.” -R.A. Heinlein, Glory Road

What is Antistupid?

This blog has been, conceptually, years in the making.  It is just an extension of a Quixotic quest I have been upon for most of the last decade—namely, a quest against stupidity, in all its forms, wherever I find it.

What do I mean by stupidity?  At its core, I suppose my definition of stupidity would center on laziness of thought.  Whether that takes the form of willful ignorance, deliberate rejection of empirical evidence and logic and the scientific method, a devotion to unthinking dogma, a refusal to confront one’s own cognitive biases, a preference for echo chambers and “truthiness” over verifiable facts, or any other version of lazy thought, all would qualify.  Stupidity is not just ignorance.  Stupidity is not just being wrong.  Stupidity is laziness.

This is not to say those guilty of stupidity are themselves inherently stupid.  I firmly believe the vast majority of people are innately intelligent and capable of critical thought and reason.  Studies of IQ test results have shown a general increase in scores for generation after generation, known as the Flynn Effect—a result not yet well understood, but fairly damning of the conclusion that people are just stupid.  If I thought people themselves were irredeemably stupid, there would be no point railing against stupidity.  It would be as much a waste of time as railing against the weather.

Rather, I believe that people are lazy.  That reason and objective assessment of the facts are much harder than emotion and heuristic thought processes, and we tend to default to the latter without deliberate effort.  There’s some strong evidence for this belief, from various cognitive and social psychology studies such as those cited by Daniel Kahneman in his book “Thinking, Fast and Slow” (2011) and Duncan Watts in his “Everything Is Obvious—Once You Know the Answer” (2011).  Our brains work very efficiently, but the ways they work tend to lead us toward lazy thought patterns unless we work very hard to counter these tendencies.  And often, our upbringing and education just reinforces those tendencies rather than showing us a better way.

But despite all evidence to the contrary, despite long experience, I believe there’s merit in confronting this laziness, in shining light on stupidity and revealing it for what it is, and trying to guide those willing to listen back to the path of intelligent thought and nuanced reason.   In trying to show them a better way, a way that has, slowly and in fits and starts over the millennia, lifted mankind from the muck and filth of subsistence and grinding poverty to the heights of civilization and prosperity.  Because I have not yet lost hope for humanity, and much like a religious missionary preaching faith to the resistant heathens, even small and occasional victories make the struggle worthwhile.

This blog will tackle this challenge in multiple ways.  It will examine complicated and complex issues and try to reveal the nuanced realities underlying the oversimplifications.  It will look at and try to understand new and interesting ideas.  It will review books and articles and studies and try to place them in context.  It will challenge prevalent modes of thought and maybe even wax philosophic on occasion.  But most of all, it will strive to be a beacon in the dark, a guiding light for anyone struggling to make sense of the complex world around them, for anyone seeking refuge from the sea of popular stupidity around them.

I do not pretend to always be right.  In fact, I am routinely wrong, and do not expect that trend to change.  The difference between me (and those like me) and most people is simply this: I try to figure out when I’m wrong, and learn from it, and be less wrong in the future.  And, more importantly, when confronted with stupidity, we do not merely reject it out of hand, but seek to examine it, to learn from it, and to use it to strengthen our own understanding. That’s Antistupid.  So let’s tilt at some windmills.