On Social Contracts and Game Theory

The “social contract” is a theory of political philosophy, formalized by Enlightenment thinkers like Rousseau (whose treatise gave the theory its name), Hobbes, Locke, and their contemporaries, but tracing its roots back to well before the birth of Christ.  Social contract theories can be found across many cultures, such as in ancient Buddhist sources like the edicts of Ashoka and the Mahavastu, and in the writings of ancient Greeks like Plato and Epicurus.  The idea of the social contract is that individual members of a society either explicitly or implicitly (by being members of that society) exchange some of their absolute freedom for protection of their fundamental rights.  This is generally used to justify the legitimacy of a governmental authority, as the entity to which individuals surrender some freedoms in exchange for that authority’s protection of their other rights.

At its most basic, then, the social contract can be defined as “an explicit or implicit agreement that society—or its representatives in the form of governmental authorities—has the legitimate right to hold members of said society accountable for violations of each other’s rights.”  Rather than every member of a society having to fend for themselves, they agree to hold each other accountable, which by necessity means accepting limitations on their own freedom to act as they please (because if their actions violate others’ rights, they’ve agreed to be held accountable).

The purpose of this article isn’t to rehash the philosophical arguments for and against social contract theory.  It’s to point out that the evidence strongly demonstrates social contracts aren’t philosophy at all, but rather—much like economic markets—a fundamental aspect of human organization, a part of the complex system we call society that arose through evolutionary necessity and is by no means unique to human beings.  That without it, we would never have succeeded as a species.  And that whether you feel you’ve agreed to any social contract or not is irrelevant, because the only way to be rid of it is to do away with society entirely.  To make that case, we’re going to turn to game theory and experimental economics.

In 2003, experimental economists Ernst Fehr and Urs Fischbacher of the University of Zurich published a paper they titled “The Nature of Human Altruism.”  It’s a fascinating meta-study, examining the experimental and theoretical evidence of altruistic behavior to understand why humans will often go out of their way to help others, even at personal cost.  There are many interesting conclusions in the paper, but I want to focus on one specifically—the notion of “altruistic punishment,” that is, taking actions to punish others for perceived unfair or unacceptable behavior even when it costs the punisher something.  In various experiments for real money, with sometimes as much as three months’ income at stake, humans will hurt themselves (paying their own money or forfeiting offered money) to punish those they feel are acting unfairly.  The more unfairly someone acts, the more willing people are to pay to punish them.  Fehr and Fischbacher sought to understand why this is the case, and their conclusion plays directly into the concept of a social contract.


A decisive feature of hunter-gatherer societies is that cooperation is not restricted to bilateral interactions.  Food-sharing, cooperative hunting, and warfare involve large groups of dozens or hundreds of individuals…By definition, a public good can be consumed by every group member regardless of the member’s contribution to the good.  Therefore, each member has an incentive to free-ride on the contributions of others…In public good experiments that are played only once, subjects typically contribute between 40 and 60% of their endowment, although selfish individuals are predicted to contribute nothing.  There is also strong evidence that higher expectations about others’ contributions induce individual subjects to contribute more.  Cooperation is, however, rarely stable and deteriorates to rather low levels if the game is played repeatedly (and anonymously) for ten rounds. 

The most plausible interpretation of the decay of cooperation is based on the fact that a large percentage of the subjects are strong reciprocators [i.e., they will cooperate if others cooperated in the previous round, but not cooperate if others did not cooperate in the previous round, a strategy also called “tit for tat”] but that there are also many total free-riders who never contribute anything.  Owing to the existence of strong reciprocators, the ‘average’ subject increases his contribution levels in response to expected increases in the average contribution of other group members.  Yet, owing to the existence of selfish subjects, the intercept and steepness of this relationship is insufficient to establish an equilibrium with high cooperation.  In round one, subjects typically have optimistic expectations about others’ cooperation but, given the aggregate pattern of behaviors, this expectation will necessarily be disappointed, leading to a breakdown of cooperation over time.

This breakdown of cooperation provides an important lesson…If strong reciprocators believe that no one else will cooperate, they will also not cooperate.  To maintain cooperation in [multiple person] interactions, the upholding of the belief that all or most members of the group will cooperate is thus decisive.

Any mechanism that generates such a belief has to provide cooperation incentives for the selfish individuals.  The punishment of non-cooperators in repeated interactions, or altruistic punishment [in single interactions], provide two such possibilities.  If cooperators have the opportunity to target their punishment directly towards those who defect they impose strong sanctions on the defectors.  Thus, in the presence of targeted punishment opportunities, strong reciprocators are capable of enforcing widespread cooperation by deterring potential non-cooperators.  In fact, it can be shown theoretically that even a minority of strong reciprocators suffices to discipline a majority of selfish individuals when direct punishment is possible.  (Fehr and Fischbacher, 786-7)


In short, groups that lack the ability to hold their members accountable for selfish behavior and breaking the rules of fair interaction will soon break down as everyone devolves to selfish behavior in response to others’ selfishness.  Only the ability to punish members for violating group standards of fairness (and conversely, to reward members for fair behavior and cooperation) keeps the group functional and productive for everyone.*  Thus, quite literally, experimental economics tells us that some form of basic social contract—the authority of members of your group to hold you accountable for your choices in regards to your treatment of other members of the group, for the benefit of all—is not just a nice thing to have, but a basic necessity for a society to form and survive.  One might even say the social contract is an inherent emergent property of complex human social interaction.
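
For readers who like to see the mechanics, here is a minimal sketch of the repeated public-goods game described in the quote above.  The parameters (a four-person group, an endowment of 20 tokens, a 1.6x multiplier, ten rounds) are my own illustrative assumptions, not the exact setup from the paper, but the dynamic is the one Fehr and Fischbacher describe: strong reciprocators roughly match what they saw last round, the free-rider contributes nothing, and cooperation decays.

```python
import random

# Minimal sketch of a repeated, anonymous public-goods game with strong
# reciprocators and one free-rider. All parameter values are illustrative
# assumptions, not the paper's actual experimental setup.

ENDOWMENT = 20          # tokens each player gets per round
MULTIPLIER = 1.6        # the public pool is multiplied by this, then split evenly
ROUNDS = 10
GROUP = ["reciprocator"] * 3 + ["free_rider"]   # a four-person group

def contribution(strategy, last_avg):
    """Strong reciprocators roughly match last round's average; free-riders give nothing."""
    if strategy == "free_rider":
        return 0
    return max(0, min(ENDOWMENT, round(last_avg + random.uniform(-2, 2))))

last_avg = ENDOWMENT * 0.5      # optimistic expectation going into round one
for r in range(1, ROUNDS + 1):
    contribs = [contribution(s, last_avg) for s in GROUP]
    pool = sum(contribs) * MULTIPLIER
    payoffs = [ENDOWMENT - c + pool / len(GROUP) for c in contribs]
    last_avg = sum(contribs) / len(GROUP)
    print(f"round {r:2d}: avg contribution = {last_avg:.1f}, "
          f"free-rider payoff = {payoffs[-1]:.1f}, reciprocator payoff = {payoffs[0]:.1f}")

# With no way to punish the free-rider, the reciprocators keep lowering their
# contributions to match a falling average: cooperation decays toward zero,
# and the free-rider out-earns everyone the whole way down.
```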

But it isn’t unique to humans.  There are two major forms of cooperative behavior in animals: hive/colony behavior, and social group behavior.  Insects tend to favor hives and colonies, in which individuals are very simple agents that are specialized to perform some function, and there is little to no intelligent decision making on the part of individuals at all.  Humans are social—individuals are intelligent decision makers, but we survive and thrive better in groups, cooperating with members of our group in competition with other groups.  But so are other primates—apes and monkeys have small scale societies with leaders and accountability systems for violations of accepted behavior.  Wolf packs have leaders and accountability systems.  Lion prides have leaders and accountability systems.  Virtually every social animal you care to name has, at some level, an accountability system resembling what we call a social contract.  Without the ability to hold each other accountable, a group quickly falls apart and individuals must take care of themselves without relying on the group.

There is strong evidence that humans, like other social animals, have developed our sense of fairness and our willingness to punish unfair group members—and thus our acceptance that we ourselves can be punished for unfairness—not through philosophy, but through evolutionary necessity.  Solitary animals do not have a need for altruistic punishment.  Social animals do.  But as Fehr and Fischbacher also point out, “most animal species exhibit little division of labor and cooperation is limited to small groups.  Even in other primate societies, cooperation is orders of magnitude less developed than it is among humans, despite our close, common ancestry.”  So why is it that we’re so much more cooperative, and thus more successful, than other cooperative animals?  It is, at least in part, because we have extended our concept of altruistic punishment beyond that of other species:


Recent [sociobiological] models of cultural group selection or of gene-culture coevolution could provide a solution to the puzzle of strong reciprocity and large-scale human cooperation.  They are based on the idea that norms and institutions—such as food-sharing norms or monogamy—are sustained by punishment and decisively weaken the within-group selection against the altruistic trait.  If altruistic punishment is ruled out, cultural group selection is not capable of generating cooperation in large groups.  Yet, when punishment of [both] non-cooperators and non-punishers [those who let non-cooperation continue without punishment] is possible, punishment evolves and cooperation in much larger groups can be maintained.  (Fehr and Fischbacher, 789-90)

We don’t just punish non-cooperators.  We also punish those who let non-cooperators get away with it.  In large groups, that’s essential: in a series of computer simulations of multi-person prisoners’ dilemma games with group conflicts and different degrees of altruistic punishment, Fehr and Fischbacher found that no group larger than 16 individuals could sustain long-term cooperation without punishing non-cooperators.  When they allowed punishment of non-cooperators, groups of up to 32 could sustain at least 40% cooperation.  But when they allowed punishment of both non-cooperators AND non-punishers, even groups of several hundred individuals could establish high (70-80%) rates of long-term cooperation.  Thus, that’s the key to building large societies: a social contract that allows the group to punish members for failing to cooperate, and for failing to enforce the rules of cooperation.
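
Here is a rough sketch of how targeted punishment changes the decay we saw in the earlier sketch.  Again, this is my own illustration of the mechanism, not a reproduction of the paper’s simulations; the group size, fine, and punishment cost are made-up values chosen only to make the effect visible.

```python
import random

# Sketch of a public-goods game where reciprocators can fine low contributors.
# Group size, fine, and cost are illustrative assumptions, not the values used
# in Fehr and Fischbacher's simulations.

ENDOWMENT, MULTIPLIER, ROUNDS = 20, 1.6, 10
PUNISH_COST, FINE = 1, 3                      # a punisher pays 1 to impose a fine of 3
GROUP = ["reciprocator"] * 6 + ["free_rider"] * 2

def contribution(strategy, last_avg, ever_fined):
    if strategy == "free_rider":
        # Selfish players start contributing only once defection has proven unprofitable.
        return round(last_avg) if ever_fined else 0
    # Strong reciprocators roughly match last round's average contribution.
    return max(0, min(ENDOWMENT, round(last_avg + random.uniform(-2, 2))))

fined_ever = set()
last_avg = ENDOWMENT * 0.5                    # optimistic initial expectation
for r in range(1, ROUNDS + 1):
    contribs = [contribution(s, last_avg, i in fined_ever) for i, s in enumerate(GROUP)]
    avg = sum(contribs) / len(GROUP)
    payoffs = [ENDOWMENT - c + sum(contribs) * MULTIPLIER / len(GROUP) for c in contribs]
    punishers = [i for i, s in enumerate(GROUP) if s == "reciprocator"]
    for i, c in enumerate(contribs):          # first-order punishment of low contributors
        if c < 0.5 * avg:
            fined_ever.add(i)
            for p in punishers:
                payoffs[i] -= FINE
                payoffs[p] -= PUNISH_COST
    last_avg = avg
    print(f"round {r:2d}: avg contribution = {avg:.1f}, "
          f"free-rider payoff = {payoffs[-1]:.1f}, cooperator payoff = {payoffs[0]:.1f}")

# Once being fined makes defection a losing strategy, the selfish players fall
# in line and cooperation holds instead of decaying. Second-order punishment
# (fining members who fail to fine defectors) works the same way, and per the
# paper it is what lets cooperation scale from dozens to hundreds of members.
```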

It doesn’t much matter if you feel the social contract is invalid because you never signed or agreed to it, any more than it matters if you feel the market is unfair because you never agreed to that.  The social contract isn’t an actual contract: it’s an emergent property of the system of human interaction, developed over millennia by evolution to sustain cooperation in large groups.  Whatever form it takes, whether it’s an association policing its own members for violating group norms, or a monarch acting as a third-party arbitrator enforcing the laws, or a democracy voting on appropriate punishment for individual members who’ve violated their agreed-upon standards of behavior, there is no long-term successful human society that does not feature some form of social contract, any more than there is a long-term successful human society that does not feature some form of trading of goods and services.  The social contract isn’t right or wrong.  It just is.  Sorry, Lysander Spooner.

*Note: none of this is to say what structure is best for enforcing group standards, nor what those group standards should be beyond the basic notion of fairness and in-group cooperation.  The merits and downsides of various governmental forms, and of various governmental interests, are an argument better left to philosophers and political theorists, and are far beyond the scope of this article.  My point is merely that SOME form of social authority to punish non-cooperators is an inherent aspect of every successful human society, and is an evolutionary necessity.

Tulips, Traffic Jams, and Tempests (Part 2): The Properties of Complexity

In the first installment of this series, I discussed some well-known phenomena that are emergent effects of complex systems, and gave a general definition of complexity.  In this installment, we’re going to delve a little deeper and look at some common properties and characteristics of complex systems.  Understanding these properties helps us see what types of complex systems exist and what kinds of tools we have available to study complexity, which will be the topic of the third installment of the series.

There are four common properties that can be found in all complex systems:

  • Simple Components (Agents)
  • Nonlinear Interaction
  • Self-organization
  • Emergence

But what do these mean, and what do they look like?  Let’s examine each in turn.


SIMPLE COMPONENTS (AGENTS):

One of the most interesting things about complex systems is that they aren’t composed of complex parts.  They’re built from relatively simple components, compared to the system as a whole.  Human society is fantastically complex, but its individual components are just single human beings—which are themselves fantastically complex compared to the cells that are their fundamental building blocks.  Hurricanes are built of nothing more than air and water particles.  These components are also known as agents.  The two terms are largely interchangeable (the usual distinction among those who use both is that agents can make decisions and components cannot), but I prefer “agents” and will use that term throughout the rest of this post.  But computer simulations show that even when agents can only make one or two very simple deterministic responses with no actual decision-making process beyond “IF…THEN…,” enough of them interacting will result in intricate complexity.  We see this in nature, too—an individual ant is one of the simplest animals around, driven entirely by instincts that lead it to respond predictably to encountered stimuli, but an ant colony is a complex system that builds cities, forms a society, and even wages war.  The wonder of complex systems is that they spring not from complexity, but from relative simplicity, interacting.  But there must be many of them—a single car on a road network is not a complex system, but thousands of them are, which leads us to our next property.
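
If you want to see just how little “decision-making” it takes, here is a tiny illustration of my own (not drawn from any of the research discussed here): an elementary cellular automaton, where each cell’s entire behavior is a single IF…THEN lookup on itself and its two neighbors, yet the pattern it generates is famously intricate.

```python
# A minimal illustration of simple deterministic agents producing intricate
# patterns: an elementary cellular automaton (Rule 110). Each cell's only
# "decision" is an IF...THEN lookup on itself and its two neighbors.
# Width and step count are arbitrary choices for display.

RULE = 110
WIDTH, STEPS = 64, 32

# Build the IF...THEN table: for each 3-cell neighborhood, what does the cell do next?
table = {tuple(int(b) for b in f"{n:03b}"): (RULE >> n) & 1 for n in range(8)}

row = [0] * WIDTH
row[WIDTH // 2] = 1                      # start with a single "on" cell
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = [table[(row[(i - 1) % WIDTH], row[i], row[(i + 1) % WIDTH])]
           for i in range(WIDTH)]
```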


NONLINEAR INTERACTION:

For complexity to arise from simple agents, there must be lots of them interacting, and these interactions must be nonlinear.  This nonlinearity results not from single interactions, but from the possibility that any one interaction can (and often does) cause a chain reaction of follow-on interactions with more agents, so a single decision or change can sometimes have wide-ranging effects.

In technical terms, nonlinear systems are those in which the change of the output is not proportional to the change of the input—that is, when you change what goes in, what comes out does not always grow or shrink proportionately to that original change.  In layman’s terms, the system’s response to the same input might be wildly different depending on the state or context of the system at the time.  Sometimes a small change has large effects.  Sometimes a large change is absorbed by the system with little to no effect at all.
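
A quick numerical illustration of my own (using the textbook logistic map rather than any system discussed here): the same small nudge to the input produces very different changes in the output depending on the system’s current state.

```python
# Nonlinearity in one picture: in the logistic map f(x) = r*x*(1-x), the same
# sized nudge to the input changes the output by wildly different amounts
# depending on the current state. The parameter r and the nudge size are
# arbitrary illustrative choices.

r, delta = 4.0, 0.01

def f(x):
    return r * x * (1 - x)

for state in (0.10, 0.50, 0.90):
    response = f(state + delta) - f(state)
    print(f"state = {state:.2f}: nudging the input by {delta} changes the output by {response:+.4f}")

# Near x = 0.5 the nudge barely registers; near x = 0.1 or x = 0.9 the same
# nudge is amplified roughly threefold -- a linear system would respond
# identically no matter the state.
```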

This is important to understand for two reasons.  The first is that, when dealing with complex systems, responses to actions and changes might be very different from those the actor originally expected or intended.  Even in complex systems, most of the time changes and decisions have the expected result.  But sometimes not, and when the system has a large number of interactions, the number of unexpected results can start to have a significant impact on the system as a whole.

The other reason this is important is that nonlinearity is the root of mathematical chaos.  Chaos is defined as seemingly random behavior with sensitive dependence on initial conditions—in nonlinear systems, under the right conditions, prediction is impossible, even theoretically.  One would have to know with absolute precision the starting conditions of every aspect of the system, and since the uncertainty principle makes that physically impossible, perfect prediction of a complex system is off the table: to see what happens in a complex system of agents interacting in a nonlinear fashion, you must let it play out.  Otherwise, the best you can do is an approximation that loses accuracy the further you get from the starting point.  This sensitivity to initial conditions is commonly simplified as the “butterfly effect,” where even small changes can have large impacts across the system as a whole.

In short, the reason forecasters in most places can’t tell you next week’s weather very accurately isn’t because they’re bad at their jobs, but because weather (except in certain climates with stable weather patterns) literally cannot be predicted very well, and it gets harder and harder the further out you try to do so.  That’s just the nature of the system they’re working with.  It’s remarkable they’ve managed to get as good as they have, actually, considering that meteorologists only began to understand the chaotic principles underlying weather systems when Lorenz discovered them by accident in 1961.  Complex systems are inherently unpredictable, because they consist of a large number of nonlinear interactions.
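
For the curious, here is a crude sketch of the kind of system Lorenz was working with: his simplified convection equations, stepped forward with a simple Euler integration and standard textbook parameters (my choices, purely for illustration).  Two runs that start one part in a million apart end up nowhere near each other.

```python
# Sensitive dependence on initial conditions, illustrated with the Lorenz
# convection equations. Crude Euler integration and standard textbook
# parameters -- an illustration of the effect, not a precision computation.

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
DT, STEPS = 0.01, 5000

def lorenz_run(x, y, z):
    for _ in range(STEPS):
        dx = SIGMA * (y - x)
        dy = x * (RHO - z) - y
        dz = x * y - BETA * z
        x, y, z = x + DT * dx, y + DT * dy, z + DT * dz
    return x, y, z

a = lorenz_run(1.0, 1.0, 1.0)
b = lorenz_run(1.000001, 1.0, 1.0)      # identical start, nudged by one millionth
print("run A ends at:", tuple(round(v, 2) for v in a))
print("run B ends at:", tuple(round(v, 2) for v in b))

# After 50 simulated time units the two end states bear no resemblance to
# each other, even though the runs began essentially indistinguishable.
```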


SELF-ORGANIZATION

Complex systems do not have central control.  Rather, the agents interact with each other, giving rise to a self-organized network (which in turn shapes the nonlinearity of the interactions among the agents of the network).  This is a spontaneous ordering process, and requires no direction or design from internal or external controllers.   All complex systems are networks of connected nodes—the nodes are the agents and the connections are their interactions—whether they’re networks of interacting particles in a weather system or networks of interacting human beings in an economy.

The structure of the system arises from the network.  Often it takes the form of nested complex systems: a society is a system of human beings, each of whom is a system of cells, and each level is itself a complex system.  Mathematically, the term for this is a fractal—complex systems tend to have a fractal structure, which is a common feature of self-organized systems in general.  Some complex systems are networks of simple systems; others are networks of complicated systems; many are networks of complex sub-systems and complicated sub-systems and simple sub-systems all interacting together.  A traffic light is a simple system; a car is a complicated system; a human driver is a complex system; and the traffic system as a whole is a network of many individual examples of all three of these sub-systems interacting as agents.  And it is entirely self-organized: the human beings who act as drivers are also the agents who plan and build the road system that guides their interactions as drivers, by means of other complex systems such as the self-organized political system in a given area.
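
To make “spontaneous ordering with no controller” concrete, here is a toy model of my own, a Schelling-style neighborhood model rather than anything from the traffic example: agents of two types follow one purely local rule (move if too few of your neighbors are like you), and ordered clusters emerge that nobody planned or directed.

```python
import random

# A tiny Schelling-style model of self-organization. No agent sees the whole
# grid and no one directs the outcome, yet clusters form on their own.
# Grid size, threshold, and step count are arbitrary illustrative choices.

SIZE, EMPTY_FRAC, THRESHOLD, STEPS = 20, 0.2, 0.5, 30
random.seed(1)

# 0 = empty cell, 1 and 2 = the two agent types
grid = [[0 if random.random() < EMPTY_FRAC else random.choice((1, 2))
         for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(r, c):
    """An agent is unhappy if fewer than THRESHOLD of its occupied neighbors share its type."""
    me = grid[r][c]
    neighbors = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0)]
    same = sum(1 for n in neighbors if n == me)
    occupied = sum(1 for n in neighbors if n != 0)
    return occupied > 0 and same / occupied < THRESHOLD

for _ in range(STEPS):
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] != 0 and unhappy(r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] == 0]
    random.shuffle(movers)
    for r, c in movers:                       # each unhappy agent moves to a random empty cell
        if not empties:
            break
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], 0
        empties.append((r, c))

for row in grid:
    print("".join(" #o"[cell] for cell in row))   # clusters of # and o have emerged
```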


EMERGENCE

Emergent properties, as discussed in part one of this series, are those aspects of a system that may not be determined merely from isolating the agents—the system is greater than the sum of its parts.  An individual neuron is very simple, capable of nothing more than firing individual electrical signals to other neurons.  But put a hundred billion of them together, and you have a brain capable of conscious thought, of decision-making, of art and math and philosophy.  A single car with a single driver is easy to understand, but put thousands of them on the road network at the same time, and you have traffic—with its own resulting emergent phenomena like congestion and gridlock.  Two people trading goods and services are simple, but millions of them create market bubbles and crashes.  This is the miracle of complexity: nonlinear networks of relatively simple agents self-organize and produce emergent phenomena that could not exist without the system itself.

Some common emergent properties include information processing and group decision-making, nonlinear dynamics (often shaped by feedback loops that dampen or amplify the effects of behaviors of individual agents), hierarchical structures (such as families and groups which cooperate among themselves and compete with each other at various levels of a social system), and evolutionary and adaptive processes.  A hurricane, for example, is an emergent property in which many water and air molecules interact under certain conditions and with certain inputs (such as heat energy from sunlight), enter a positive feedback loop that amplifies their interactions, and become far more than the sum of their parts, until the conditions change (such as hitting land and losing access to a ready supply of warm water), at which point a negative feedback loop takes over, limiting the storm’s growth and eventually dictating its decline back to nonexistence.  Adam Smith’s “Invisible Hand” is an emergent property of the complex systems we call “economies,” in which individual actions within a nonlinear network of agents are moderated by feedback loops and self-organized hierarchical structures to produce common goods through self-interested behavior.  Similarly, the failures of that Invisible Hand, such as speculative bubbles and market crashes, are themselves emergent behaviors of the economic system that cannot exist without the system itself.
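
And to bring it back to the traffic example: here is a toy simulation of my own, a simplified Nagel-Schreckenberg-style model with made-up parameters, in which cars following three simple rules produce jams that form and drift backward through the flow, a pattern no individual driver creates or intends.

```python
import random

# Emergence in miniature: cars on a circular road follow three rules --
# speed up if there's room, slow down to avoid the car ahead, occasionally
# dawdle -- and traffic jams appear as an emergent pattern. Road length,
# car count, and dawdling probability are illustrative assumptions.

ROAD, CARS, VMAX, P_SLOW, STEPS = 60, 20, 5, 0.3, 25
random.seed(2)

positions = sorted(random.sample(range(ROAD), CARS))
speeds = [0] * CARS

for t in range(STEPS):
    # Gap to the car ahead on the circular road (cyclic order never changes)
    gaps = [(positions[(i + 1) % CARS] - positions[i]) % ROAD for i in range(CARS)]
    for i in range(CARS):
        speeds[i] = min(speeds[i] + 1, VMAX, gaps[i] - 1)   # accelerate, but never into the car ahead
        if speeds[i] > 0 and random.random() < P_SLOW:       # random dawdling
            speeds[i] -= 1
    positions = [(positions[i] + speeds[i]) % ROAD for i in range(CARS)]
    road = ["."] * ROAD
    for p in positions:
        road[p] = "#"
    print("".join(road))   # watch clusters of stopped cars (jams) form and drift backward
```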


Now that we’ve established the common properties of complex systems, in the next article we’ll look at a couple different types, what the differences are, and what tools we can use to model them properly.

On Nazis and Socialists

I commonly run into the argument that the Nazis were clearly left wing, because “Socialism is right there in their name.”  It’s getting old, because it ignores literally everything else about them.  Bottom Line: yes, they were socialists, but no, they were not leftists.

Part of the problem is that there’s no good accepted narrow definition of socialism–it ranges from Marxist-style Communism to Soviet-style command economies to Scandinavian-style public welfare states. A few months ago the American Economic Association’s Journal of Economic Perspectives published a paper trying to answer the question of whether modern China is socialist, and it was fascinating because first they had to establish a working definition of socialism. Even today, there’s serious ongoing debate about that in academic economics circles.

But in the broad sense, Nazis were socialist, in that the government controlled the economy towards its own goals–the Reich ran the factories and mines and basically the entire supply chain and directed how resources and products would be used at the macro level.

That said, the Nazis explicitly rejected what we’ve come to think of as the “left-right” spectrum in favor of what political theorists call a “third way,” which married leftist-style government control of the economy to right-wing-style government control of social lives in a militaristic fascism focused on directing all social and economic aspects of the country towards the needs of the Fatherland. Nationalism (right) + Socialism (left) = National Socialism. Funny how that works. Thus, it’s a great straw man, because BOTH sides can legitimately point to aspects of Nazism and say “See?! They were the other side!” when the reality is that they were neither.

Note: neo-Nazis, on the other hand, generally ignore the economic aspects of National Socialism in favor of the eugenicist racism and militaristic nationalism, and ARE legitimately classified as right-wing extremists.

The more you know.