In Memoriam: Kenneth J. Arrow

A personal hero of mine passed away yesterday.

Ken Arrow was an economist, best known for his work on General Equilibrium (also called the Arrow-Debreu Model), for which he became the youngest person ever to win the Nobel Prize in Economics in 1972.  He also did extensive work on social choice theory, creating “Arrow’s Impossibility Theorem,” which proved mathematically that no voting system can aggregate individual preferences into a consistent collective ordering while satisfying a basic set of fairness criteria when there are more than two options.  He developed the Fundamental Theorems of Welfare Economics, mathematically confirming Adam Smith’s “Invisible Hand” hypothesis in ideal markets.  Essentially, he spent the first part of his long career formalizing the mathematics underlying virtually every rationalist model in the latter half of the 20th century, providing the basis for Neoclassical Theory for decades.  And then, presented with overwhelming evidence that actual markets do not adhere to general equilibrium behavior, rather than allowing himself to be trapped by the elegance of his own theories, he spent the rest of his life trying to understand why.

He worked on endogenous growth theory and information economics, trying to understand phenomena that traditional rational models could not predict.  He was an early and outspoken advocate for modern behavioralist models after they first came to his attention at a seminar presentation by Richard Thaler in the late 1970s.  He was a close friend and collaborator of W. Brian Arthur, the father of Complexity Economics, enthusiastically recognizing the potential of complexity theory to revolutionize our understanding of market dynamics, even though that would mean his own Nobel Prize-winning theory about how markets work was completely wrong.  Ken Arrow was never afraid to listen to new evidence and admit the possibility of his own errors and misunderstandings.  When he saw something that explained the evidence better, he never hesitated to pursue it wherever it led.

Because he was a scientist. I know no higher praise.

Farewell, Professor.

On Rationality (Economic Terminology, #1)

As an economist, I often find myself talking past people when trying to explain complicated economic theories.  Surprisingly, this is less because of the in-depth knowledge required, and far more because we aren’t using the same terminology.  Many words used in economic contexts have very different meanings than their common usage.  Utility and value, for one.  Margin, for another.  And perhaps the most common source of confusion is the concept of rationality.

In common usage, “rational” basically means “reasonable” or “logical.”  The dictionary definition, according to a quick Google search, is “based on or in accordance with reason or logic.”  Essentially, in common usage a rational person is someone who thinks things through and comes to a reasonable or logical conclusion.  Seems simple enough, right?

But not so in economics.  Traditional economic theory rests on four basic assumptions–rationality, maximization, marginality, and perfect information.  And the first of those, rationality, is the single biggest source of confusion when I try to discuss economic theory with non-economists.

To an economist, “rational” does not in the slightest sense mean “reasonable” or “logical.”  A rational actor is merely one who has well-ordered and consistent preferences.  That’s it.  That’s the entirety of economic rationality.  An economically rational actor who happens to prefer apples to oranges, and oranges to bananas, will never choose bananas over apples when given a choice between the two.  Such preferences can be strong (i.e., always prefers X to Y) or weak (i.e., indifferent between X and Y), but they are always consistent.  And those preferences can be modeled as widely or narrowly as you choose.  It could just be their explicit choices among a basket of goods, or you could incorporate social and situational factors like altruism, familial bonds, and cultural values.  They can be context dependent–one might prefer X to Y in Context A, and Y to X in Context B, but then one will always prefer X to Y in Context A and Y to X in Context B. It doesn’t matter: what their preferences actually are is irrelevant, no matter how ridiculous or unreasonable they might seem from the outside, so long as they are well-ordered and consistent.
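
To make that concrete, here’s a minimal sketch in Python (the preferences are invented purely for illustration) of what “well-ordered and consistent” actually requires: every pair of options is ranked, and the ranking is transitive.  It only covers strict preferences; adding indifference complicates the bookkeeping but not the idea.

```python
from itertools import combinations

# A hypothetical actor's strict preferences, listed pairwise as
# (preferred, dispreferred).  Purely illustrative.
prefers = {("apples", "oranges"), ("oranges", "bananas"), ("apples", "bananas")}
goods = {"apples", "oranges", "bananas"}

def is_complete(prefers, goods):
    """Every pair of distinct goods is ranked one way or the other."""
    return all((a, b) in prefers or (b, a) in prefers
               for a, b in combinations(goods, 2))

def is_transitive(prefers):
    """If X is preferred to Y and Y to Z, then X must be preferred to Z."""
    return all((x, z) in prefers
               for (x, y1) in prefers
               for (y2, z) in prefers
               if y1 == y2 and x != z)

# Economic rationality cares only that these checks pass, not what the
# preferences actually are.
print(is_complete(prefers, goods), is_transitive(prefers))  # True True
```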

This isn’t to say preferences can’t change for a rational actor.  They can, over time.  But they’re consistent, at the time a decision is made, across all time horizons–if you give a rational actor the choice between apples and bananas, it doesn’t matter whether they will receive the fruit now or a day from now.  They will always choose apples, until their preferences change overall.

An irrational actor, then, is by definition anyone who does not have well-ordered and consistent preferences.  If an actor prefers apples to bananas when faced with immediate reward, but bananas to apples when they won’t get the reward until tomorrow, they’re economically irrational.  And the problem is, of course, that most of us exhibit such irrational preferences all the time.  For proof, we don’t have to look any further than our alarm clocks.

A rational actor prefers to get up at 6:30 AM, so he sets his alarm for 6:30 AM, and wakes up when it goes off.  End of story.  An irrational actor, on the other hand, prefers to get up at 6:30 AM when he sets the alarm, but when it actually goes off, he hits the snooze button a few times and gets up 15 minutes later.  His preferences have flipped–what he preferred when he set the alarm and what he preferred when it came time to actually get up were very different, and not because his underlying preferences changed in the meantime.  Rather, he will make the same pair of conflicting choices day after day after day, because his preferences aren’t consistent over different time horizons.  The existence of the snooze button is due to the fact that human beings do not, in general, exhibit economically rational preferences.  We can model such behavior with fancy mathematical tricks like quasi-hyperbolic discounting, but it is, by definition, irrational in economic terminology.
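
For the curious, that snooze-button reversal falls straight out of the quasi-hyperbolic (“beta-delta”) discounting model just mentioned.  Here’s a minimal sketch; the payoffs and parameter values are invented for illustration, not estimates from any study.

```python
# Quasi-hyperbolic ("beta-delta") discounting: a payoff received t periods
# from now is worth beta * delta**t times its face value today (and its full
# face value when t = 0).  beta < 1 captures present bias.
beta, delta = 0.7, 0.99          # illustrative values only

def discounted(value, t):
    return value if t == 0 else beta * (delta ** t) * value

sleep_in = 10.0    # payoff of 15 more minutes in bed, enjoyed at the alarm
get_up   = 12.0    # payoff of an unhurried morning, enjoyed a bit later

# The night before, both payoffs are still in the future (t = 1 and t = 2):
print(discounted(sleep_in, 1), discounted(get_up, 2))   # 6.93 vs 8.23 -> set the alarm for 6:30

# When the alarm goes off, sleeping in pays off NOW (t = 0), getting up later (t = 1):
print(discounted(sleep_in, 0), discounted(get_up, 1))   # 10.0 vs 8.32 -> hit snooze
```

Same payoffs, same person, opposite choice depending only on when the decision is made: exactly the inconsistency that makes the behavior economically irrational.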

And that’s why behavioral economics is now a major field–at some point between Richard Thaler’s Ph.D. research in the late 1970s and his tenure as President of the American Economic Association a couple of years ago, most economists began to realize the limitations of models based on the unrealistic assumption of economic rationality.  And so they began trying to model decision making in ways more in keeping with how people actually act.  Thaler predicted last year that “behavioral economics” will cease to exist as a separate field within three decades, because virtually all economics is now moving towards a behavioral basis.

In future editions of this series, we’ll look at other commonly misunderstood economic terms, including the other three assumptions I mentioned: marginality, maximization, and perfect information.

Dumbocrats and Republican’ts (Part 1): The Trouble with Dogma

American politics is currently beset by a problem.  Well, many, but for right now we’re going to focus on just one: polarization.  There’s a perception, with some evidence, that American politics is currently more polarized than at any other point in recent memory—certainly since the 1960s.  And this is a problem, because polarization leads to gridlock, to civil unrest, to social breakdowns, and even, in extreme cases, to civil war.  Religious polarization in Christian Europe led to a series of conflicts known as the Wars of Religion—the most famous being the Thirty Years’ War, in which more than 8 million people died.  Polarization over slavery and trade issues between Northern and Southern states led to the American Civil War in the 1860s.  Most of us can agree that polarization is, in general, a bad thing for a society.  The question, though, is what to do about it.  And to answer that, first we have to look at what polarization is and what it is not.  Only then can we start to identify potential routes to solve the problem.

Let’s start with what it is not.  Polarization is not merely a particularly widespread and vehement disagreement.  Disagreement just means that different people have drawn different conclusions.  This, by itself, is healthy.  Societies without disagreement drive headlong into madness, fueled by groupthink and demagoguery.  Fascist and totalitarian societies suppress dissent because it slows or stops their efforts to achieve their perfect visions. Disagreement arises naturally—highly intelligent people, even those with a shared culture, can look at the same evidence, in the exact same context, and come to radically different conclusions because they weight different cultural values more highly than others, because they prioritize different goals over others, because they have different life experiences with which to color their judgements.  That’s healthy.  The discussions and debates arising from such disagreements are how groups and societies figure out how best to proceed in a manner that supports the goals and values of the group as a whole.

So if polarization isn’t just disagreement, what is it?  Polarization is a state of affairs where the fact that other groups disagree with your group becomes more important than the source of that disagreement.  Essentially, polarization is where disagreeing groups are no longer willing to discuss and debate their disagreements and come to a compromise that accounts for everyone’s concerns, but instead everyone draws their line in the sand and refuses to budge.  Polarization is what occurs when we stop recognizing that disagreement is a natural and healthy aspect of a diverse society, and we start treating our viewpoints as dogma rather than platforms.  Platforms can be adjusted in the face of new evidence and reasonable arguments.  People who subscribe to a platform can compromise with people who subscribe to other platforms, for the mutual good of all involved.  But dogma is immutable and unchangeable.  People who subscribe to dogma cannot compromise, no matter what evidence or arguments they encounter.  Their minds are made up, and they will not be swayed.

Polarization occurs when dogma sets in.  Because when your beliefs are dogmatic, anyone who disagrees is no longer a fellow intelligent human being who just happens to have slightly different values and experiences coloring their beliefs.  When your beliefs are dogmatic, anyone who disagrees is at best an idiot who just doesn’t understand, and at worst a heretic who must be purged for the safety of your dogma.  When your beliefs are dogmatic, there’s no longer any value in hearing what the other side has to say, and instead you turn to echo chambers that do nothing but reinforce the dogma you already believe.

Where does dogma come from?  Why do people subscribe to dogmatic beliefs when there is so much information available in the modern world?  It’s largely because critical thinking is difficult.  It’s not that people are stupid, but rather that when there IS so much information available, it’s hard to process it and tell the wheat from the chaff without a filter.  And dogmatic beliefs, distilled to simple talking points by those echo chambers like media sources and groups of friends and family, provide just such a filter with which people can try to understand a highly complex world by fitting it to their worldviews.  Dogma is comfortable.  Dogma makes sense.  Dogma tells us why we’re right, why our values are the right values and our beliefs are the right beliefs.  And that’s not to mention the draw of being part of the in-group: choosing and subscribing to a dogma lets you fit in with a crowd and gain respect at the low, low cost of merely repeating the same soundbites over and over again.  It’s self-reinforcing, especially in the world of modern 24-hour news networks, a thousand “news” websites to cater to any given belief system, and social media networks that let us surround ourselves with comfortable consensus and block those who might question our beliefs.  It’s no real mystery why people are drawn to dogmatic beliefs—the very things that could show them the error of their ways are the reasons they prefer their heads in the sand.

But most people would agree that dogma is bad, that critical thinking is good, even when they’re manifestly dogmatic themselves.  How can they be comfortable with that cognitive dissonance?  Well, quite simply, because they don’t even recognize it.  It’s much easier to identify dogmatic beliefs in others than in ourselves.  We all like to think we’ve thought through our positions and come to the right conclusions through logic and evidence, even when we quite clearly haven’t.  Hence the phenomenon of conservatives referring to “dumbocrats” and “libtards,” and liberals responding with “republican’ts” and “fascists.”  I’ve lost track of how many times I’ve seen conservatives assert liberalism is a mental disorder, and liberals say the exact same about conservatism, both sides laughing from their supposed superior mental position.  Self-reflection is actually incredibly difficult.  It takes a lot of effort.  It’s uncomfortable.  So we don’t do it.

Now that we’ve established what dogma is, where it comes from, and why people subscribe to it despite professing otherwise, in the next post in this series we’ll look at what we can do about it.

Well, It’s Complicated (#1): The Dose Makes the Poison

Minimum wages are a rather contentious subject in American politics.  On the one side, those in favor tout their benefits to low-income labor, such as increased purchasing power (see, for example, the current “living wage” movement pushing for a federal $15/hour minimum). On the other, those opposed cite “basic economics” to argue that increased wages lead to increased unemployment and inflation, thus resulting in no real-income benefit while simultaneously hurting those laid off or unable to find a job as hiring decreases.  The problem, of course, is that both arguments are hugely oversimplified and thus easy for the opposing sides to attack and “debunk.”  So it’s hard to tell who’s right and who’s wrong without a good foundation in economic theory and empirical research.  In order to figure it out, let’s take a closer look at each argument, and see where that theory and empirical evidence leads us.

First, let’s examine the pro-minimum wage camp.  There are several arguments, but in general they boil down to a couple of main points.  One, increased wages lead to increased purchasing power, which in turn benefits not only the workers with higher real wages, but the economy as a whole as their spending has a multiplier effect.  Essentially, this means workers get more money for the same labor, so they spend more money, which increases demand across the market, which in turn spurs more production, and everyone is wealthier.  It’s essentially the same argument as that in favor of tax cuts, just focused slightly differently: increased income results in increased spending, making everyone richer in real terms.  Two, paying workers a “living wage” decreases the need to subsidize them through government assistance, freeing up that money to be spent elsewhere (either by cutting taxes or by redirecting the funds to other projects).  Basically, if workers can support themselves without government subsidies, it benefits everyone.  And three, it’s morally the right thing to do, as we have a social imperative to lift the poorest members of our society out of the struggles of poverty.

Perhaps surprisingly to those who cite “basic economics” to refute these claims, there’s actually some decent economic support for them.  In fact, theorists and empirical researchers have both found evidence in favor of the benefits of low-level minimum wages.  To understand why, I’m going to take a second to briefly explain where minimum wages fit into conventional economic theory, because there are a few concepts that we need to clarify.

Wages, despite the way most people think of them, are to economists nothing more than another word for a price.  Specifically, they’re the price of labor: the worker plays the role of producer and seller, the employer plays the role of buyer and consumer, and the product is the worker’s labor, which is just a service like any other.  Thus, the worker owns the “supply curve” and the employer owns the “demand curve,” and the equilibrium price of labor—the wage—is the meeting point between the two, just like everyone sees in the standard supply-demand curves in their introductory economics courses in high school and college.

But there’s a key part that those introductory econ teachers often leave out when explaining supply-demand curves and equilibrium prices.  Namely, how we actually determine what that price is in the real world—given that we can’t see the supply and demand on a handy graph when trying to buy or sell a product or a service, how do prices actually get set?  The answer is, generally speaking, in one of two ways.  Either the seller sets a price, and potential buyers decide whether that price is lower than the maximum they would each be willing to pay for the good or service in question (and sellers adjust up or down according to the feedback they get from how many people are buying), or the buyer makes an offer, and the seller decides whether that offer is higher than the lowest they’d be willing to accept to provide the good or service (and buyers, then, adjust up or down according to the feedback they get from whether sellers will sell to them or not).  Sometimes this is rather abstract, like at a grocery store, where there’s no direct interaction over prices except the choice whether or not to buy at a given price point.  Sometimes it’s a real-time interaction, like haggling at a flea market.  But whether it’s a negotiation or a simple binary purchase decision, in the absence of major shocks to either supply or demand, the price generally reflects a stable attractor that we model as an “equilibrium.”  (Of course, in reality the price is almost never at a true equilibrium, and that attractor can sometimes shift unexpectedly despite no major shocks, but we can get into the details of sticky pricing and status quo biases and endowment effects and complexity and all those other fun quirks another time.  The standard equilibrium model is a good enough generalization for our purposes here.)

And here’s another key concept stemming from that—as I mentioned, whether the seller or the buyer is setting the price, those choosing whether to make the transaction are weighing that price against the highest they’d pay (if the buyer is choosing) or the lowest they’d accept (if the seller is choosing).  That difference, between the agreed-upon price and each party’s “reservation price,” is called the surplus.  The difference between the lowest amount the seller would accept and the actual transaction price is the “producer’s surplus.”  The difference between the highest amount the buyer would pay and the actual transaction price is the “consumer’s surplus.”  The two combined make up the “total surplus” of the transaction—how much everyone is better off for having completed the deal.  If no one gets a surplus—if no one gains value from the trade—there’s no reason for anyone to make the trade.  Generally, freely made transactions are a win-win for everyone involved, or at least a win for someone and a loss for nobody.  Simple enough, right?
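
A quick worked example, with numbers invented purely for illustration:

```python
# One transaction for one unit of a good; all numbers are made up.
buyer_reservation  = 30.0   # the most the buyer would willingly pay
seller_reservation = 18.0   # the least the seller would willingly accept
price              = 24.0   # the price they actually agree on

consumer_surplus = buyer_reservation - price            # 6.0
producer_surplus = price - seller_reservation           # 6.0
total_surplus    = consumer_surplus + producer_surplus  # 12.0
print(consumer_surplus, producer_surplus, total_surplus)
```

Note that the total surplus is the same wherever the price lands between the two reservation prices; the price only determines how that gain is split between buyer and seller.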

Wages typically are set in the latter manner I described: the buyer (employer) makes an offer, and the seller (worker) chooses whether it’s high enough for them to accept.  Sometimes there’s room for negotiation, sometimes not, but regardless of the final offer, it will never be higher than the value the employer places on the potential labor of that particular worker in that particular job, and the worker’s choice to accept or keep looking elsewhere depends on whether it’s higher than the lowest they’re willing to accept to do the job in question.  Thus, just like with any other price, there’s a surplus for both the producer and the consumer, and that’s the net benefit of the transaction.

With this concept of wages as just another price, then, we can see that minimum wages are a governmentally-imposed price control, specifically a price “floor” (meaning prices cannot go below the established minimum, regardless of supply and demand dynamics).  So what does that do to our standard model of equilibrium pricing and surplus?  Well, it depends where the floor is relative to the equilibrium price.  If market wages for a given worker in a given position with a given skill set are already higher than the new minimum, it has little to no effect at all.  But as the minimum increases, it starts to have very significant effects.

Once a price floor passes above the theoretical equilibrium price, it starts cutting into demand—consumers are no longer willing to purchase as much of the product at that price point, because now they’re getting less of a “consumer surplus.”  Because they’re no longer selling as much, producers can lose surplus, too—they get more from each sale, but they make fewer sales.  If the price floor continues increasing, the extra benefit from each individual sale gets drowned out by the lower and lower number of sales.  This lost surplus—the amount by which everyone in the transaction is worse off—is called “deadweight loss.”  It’s an inefficiency, which in economic terms means the market is no longer making everyone as well off as it theoretically could.
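
To see those moving parts in one place, here’s a minimal sketch assuming simple linear demand and supply curves.  The curves and numbers are invented for illustration; real labor markets are messier, as discussed below.

```python
# Toy linear labor market (all numbers invented):
#   demand: hours employers want at wage w    Qd(w) = 100 - 2w   (intercept price 50)
#   supply: hours workers offer at wage w     Qs(w) = -20 + 4w   (intercept price 5)
# Unconstrained equilibrium: 100 - 2w = -20 + 4w  ->  w* = 20, Q* = 60.

def surpluses(floor):
    """Employer (consumer), worker (producer), and total surplus under a wage floor."""
    w = max(floor, 20.0)                    # a floor below equilibrium doesn't bind
    q = min(100 - 2 * w, -20 + 4 * w)       # trade is limited by the short side (demand)
    willing_to_pay    = (100 - q) / 2       # employers' value of the last hour actually hired
    willing_to_accept = (q + 20) / 4        # workers' reservation wage for that last hour
    employer = q * ((50 - w) + (willing_to_pay - w)) / 2      # area between demand curve and w
    worker   = q * ((w - 5) + (w - willing_to_accept)) / 2    # area between w and supply curve
    return employer, worker, employer + worker

print(surpluses(20))   # (900.0, 450.0, 1350.0)  no binding floor: total surplus is maximized
print(surpluses(24))   # (676.0, 650.0, 1326.0)  modest floor: workers gain, deadweight loss of 24
print(surpluses(45))   # (25.0, 387.5, 412.5)    high floor: even workers are worse off overall
```

The pattern opponents point to is there (total surplus falls as soon as the floor binds), but so is the pattern proponents point to (worker surplus rises at first).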

This inefficiency is the basis of the argument against minimum wages.  Opponents point to that inefficiency, saying “Look, you’re just making everyone worse off!”  But what they’re missing, and why I say there’s some decent economic support for low-level minimum wages, is that the deadweight loss isn’t the whole picture.  They ignore that part of the producer surplus that *grows* as the price floor increases.  Since in our model producers are workers, what the whole picture tells us is that yes, unemployment will increase for minimum wage employees, but also that those who do get such jobs will be better off than they would be otherwise.  And that’s exactly what the empirical evidence shows as well.  The overwhelming bulk of studies about the effects of minimum wages have found that unemployment among minimum wage workers increases as minimum wages rise, but also that those with jobs see a higher real income that results in an improved standard of living.  So the pro-minimum wage camp’s claims aren’t as easily refuted by “basic economics” as the opposition often asserts.  Those workers DO have higher real incomes.  We DO see a multiplier effect as their purchasing boosts local markets.  We DO see a reduced need for subsidies among such workers (assuming they aren’t in that subset with skewed real incomes from the so-called “welfare cliff,” but that’s a side product of a poorly designed subsidy system and a topic for another time).  So it would seem the evidence is in favor of minimum wages after all, right?

This is where it gets, well, complicated.  Because remember how I said “low-level minimum wages”?  That’s the part proponents often seem to forget or misunderstand.  As I mentioned, as a price floor continues increasing, the amount of surplus added from each individual sale is rapidly drowned out by the amount of surplus lost as sales decrease.  For labor markets, this means that low level minimum wages can boost unskilled workers’ real incomes and standards of living, at the cost of small increases in unemployment.  Small increases in unemployment can be effectively mitigated with various social support programs (like those subsidies we mentioned), so overall most everyone still wins.  But as minimum wages continue increasing higher and higher above the theoretical equilibrium, the positive effects apply to fewer and fewer workers and unemployment increases and increases, and soon the minimum wage’s negative effects have swamped its positives and everyone is worse off than they started.
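
Running the same toy model from the sketch above across a range of wage floors makes that dose-response pattern explicit (again, the numbers are purely illustrative):

```python
# Same toy labor market as above: demand Qd(w) = 100 - 2w, supply Qs(w) = -20 + 4w,
# equilibrium wage 20.  Sweep the wage floor upward and track workers as a group.
for floor in range(20, 51, 5):
    hours = max(0, 100 - 2 * floor)                    # employment is demand-limited
    reservation_wage = (hours + 20) / 4                # workers' reservation wage for the last hour hired
    worker_surplus = hours * ((floor - 5) + (floor - reservation_wage)) / 2
    print(f"floor={floor:2d}  hours={hours:3d}  worker surplus={worker_surplus:6.1f}")

# In this toy model worker surplus climbs from 450 to about 800 as the floor
# rises to 30, then falls; by a floor of 45 workers as a group are worse off
# (387.5) than with no floor at all (450).
```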

Soon, employers are dealing with higher labor costs, so they cut back new hires and cut costs elsewhere, maybe even laying off current employees or cancelling planned investments that would have provided jobs elsewhere.  The unemployment rate among low skilled and unskilled workers skyrockets.  Those workers who CAN find a minimum wage job may have some increased real income, but they have to wait much longer for raises and often see cutbacks in benefits.  Additionally, since standard of living is highly dependent on one’s community, in many cases that increased real income doesn’t lead to much improvement in quality of life given the high unemployment and low business investment around them.

The dose makes the poison.  At a low level, minimum wages can help far more than they hurt.  They can increase real incomes for the vast majority of unskilled workers, they can boost productivity, and they can even fuel economic growth through multiplier effects of the added spending in the market.  But seeing those good effects, well-intentioned but misguided advocates push for higher and higher minimums, following the “more is better” theory, not realizing that the negatives will quickly overwhelm the positives and the minimum wage will hurt the very workers it was intended to help.  If one is going to implement a minimum wage, it needs to be carefully watched to ensure it stays in the sweet spot where it does good, but doesn’t creep up so high above the market equilibrium that the medicine turns to poison.

So who’s right in the minimum wage debate?  Well, that goes back to the third point raised by minimum wage advocates–that we have a moral responsibility to help lift the poorest members of our society from the struggles of poverty.  I’m not going to pretend to know whether that’s true, as it depends entirely on your own values and your own political philosophy.  But, as explained above, if you happen to value government intervention to help the poor, a minimum wage CAN be an effective tool in that effort.  But only if those implementing it remember that the medicine can just as easily become a poison if not used cautiously and monitored carefully to ensure it keeps pace with natural market dynamics.  So neither side is truly right or wrong, at least in terms of the economics.  It depends entirely on the context and circumstances, and what you value.  As the title of the feature says, it’s complicated.

________________________
Sources for claims of empirical evidence:

Neumark, David. “The Effects of Minimum Wages on Employment.” 2015.

Liu, Shanshan, et al. “Impact of the Minimum Wage on Youth Labor Markets.” 2015.

Congressional Budget Office. “The Effects of a Minimum-Wage Increase on Employment and Family Income.” 2014.

Litwin, Benjamin. “Determining the Effect of the Minimum Wage on Income Inequality.” 2015.

What Is Antistupid?

“There are things which cannot be taught in ten easy lessons, nor popularized for the masses; they take years of skull sweat. This be treason in an age when ignorance has come into its own and one man’s opinion is as good as another’s. But there it is…The world is what it is—and doesn’t forgive ignorance.” –R.A. Heinlein, Glory Road

What is Antistupid?

This blog has been, conceptually, years in the making.  It is just an extension of a Quixotic quest I have been upon for most of the last decade—namely, a quest against stupidity, in all its forms, wherever I find it.

What do I mean by stupidity?  At its core, I suppose my definition of stupidity would center on laziness of thought.  Whether that takes the form of willful ignorance, deliberate rejection of empirical evidence and logic and the scientific method, a devotion to unthinking dogma, a refusal to confront one’s own cognitive biases, a preference for echo chambers and “truthiness” over verifiable facts, or any other version of lazy thought, all would qualify.  Stupidity is not just ignorance.  Stupidity is not just being wrong.  Stupidity is laziness.

This is not to say those guilty of stupidity are themselves inherently stupid.  I firmly believe the vast majority of people are innately intelligent and capable of critical thought and reason.  Studies of IQ test results have shown a general increase in scores for generation after generation, known as the Flynn Effect—a result not yet well understood, but fairly damning of the conclusion that people are just stupid.  If I thought people themselves were irredeemably stupid, there would be no point railing against stupidity.  It would be as much a waste of time as railing against the weather.

Rather, I believe that people are lazy.  That reason and objective assessment of the facts are much harder than emotion and heuristic thought processes, and we tend to default to the latter without deliberate effort.  There’s some strong evidence for this belief, from various cognitive and social psychology studies such as those cited by Daniel Kahneman in his book “Thinking, Fast and Slow” (2011) and Duncan Watts in his “Everything Is Obvious: Once You Know the Answer” (2011).  Our brains work very efficiently, but the ways they work tend to lead us toward lazy thought patterns unless we work very hard to counter these tendencies.  And often, our upbringing and education just reinforce those tendencies rather than showing us a better way.

But despite all evidence to the contrary, despite long experience, I believe there’s merit in confronting this laziness, in shining light on stupidity and revealing it for what it is, and trying to guide those willing to listen back to the path of intelligent thought and nuanced reason.   In trying to show them a better way, a way that has, slowly and in fits and starts over the millennia, lifted mankind from the muck and filth of subsistence and grinding poverty to the heights of civilization and prosperity.  Because I have not yet lost hope for humanity, and much like a religious missionary preaching faith to the resistant heathens, even small and occasional victories make the struggle worthwhile.

This blog will tackle this challenge in multiple ways.  It will examine complicated and complex issues and try to reveal the nuanced realities underlying the oversimplifications.  It will look at and try to understand new and interesting ideas.  It will review books and articles and studies and try to place them in context.  It will challenge prevalent modes of thought and maybe even wax philosophic on occasion.  But most of all, it will strive to be a beacon in the dark, a guiding light for anyone struggling to make sense of the complex world around them, for anyone seeking refuge from the sea of popular stupidity around them.

I do not pretend to always be right.  In fact, I am routinely wrong, and do not expect that trend to change.  The difference between me (and those like me) and most people is simply this: I try to figure out when I’m wrong, and learn from it, and be less wrong in the future.  And, more importantly, when confronted with stupidity, we do not merely reject it out of hand, but seek to examine it, to learn from it, and to use it to strengthen our own understanding.  That’s Antistupid.  So let’s tilt at some windmills.