On Social Contracts and Game Theory

The “social contract” is a theory of political philosophy, formalized by Enlightenment thinkers like Rousseau (whose treatise gave the theory its name), Hobbes, Locke, and their contemporaries, but tracing its roots back to well before the birth of Christ.  Social contract ideas can be found across many cultures, from early Buddhist texts like the Mahāvastu and the edicts of Ashoka to ancient Greeks like Plato and Epicurus.  The idea of the social contract is that individual members of a society either explicitly or implicitly (by being members of that society) exchange some of their absolute freedom for protection of their fundamental rights.  This is generally used to justify the legitimacy of a governmental authority, as the entity to which individuals surrender some freedoms in exchange for that authority’s protection of their remaining rights.

At its most basic, then, the social contract can be defined as “an explicit or implicit agreement that society—or its representatives in the form of governmental authorities—has the legitimate right to hold members of said society accountable for violations of each other’s rights.”  Rather than every member of a society having to fend for themselves, they agree to hold each other accountable, which by necessity means accepting limitations on their own freedom to act as they please (because if their actions violate others’ rights, they’ve agreed to be held accountable).

The purpose of this article isn’t to rehash the philosophical arguments for and against social contract theory.  It’s to point out that the evidence strongly demonstrates the social contract isn’t philosophy at all, but rather—much like economic markets—a fundamental aspect of human organization, a part of the complex system we call society that arose through evolutionary necessity and is by no means unique to human beings.  That without it, we would never have succeeded as a species.  And that whether you feel you’ve agreed to any social contract or not is irrelevant, because the only way to be rid of it is to do away with society entirely.  To make that case, we’re going to turn to game theory and experimental economics.

In 2003, experimental economists Ernst Fehr and Urs Fischbacher of the University of Zurich published a paper titled “The Nature of Human Altruism.”  It’s a fascinating meta-study, examining the experimental and theoretical evidence on altruistic behavior to understand why humans will often go out of their way to help others, even at personal cost.  There are many interesting conclusions in the paper, but I want to focus on one specifically—the notion of “altruistic punishment,” that is, taking actions to punish others for perceived unfair or unacceptable behavior even when it costs the punisher something.  In various experiments played for real money, sometimes with as much as three months’ income at stake, humans will hurt themselves (paying their own money or forfeiting offered money) to punish those they feel are acting unfairly.  The more unfair the action, the more willing people are to pay to punish them.  Fehr and Fischbacher sought to understand why this is the case, and their conclusion plays directly into the concept of a social contract.
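To make the incentive structure concrete, here’s a minimal Python sketch of the payoff logic at work in one such experiment, the ultimatum game.  The pie size and fairness threshold are hypothetical numbers of my own, not Fehr and Fischbacher’s parameters; the point is simply that a responder who rejects a lowball offer pays a real cost to impose a bigger one on the proposer, while a purely selfish responder would accept any positive amount.

```python
# Toy model of altruistic punishment in a one-shot ultimatum game.
# (Illustrative only: pie size and fairness threshold are hypothetical,
# not parameters from Fehr and Fischbacher's experiments.)
# A proposer offers a split of a pie; the responder may reject,
# destroying BOTH payoffs.  Rejecting a positive offer costs the
# responder real money; that cost is what makes the punishment
# "altruistic."

def ultimatum(offer: float, pie: float = 10.0,
              fairness_threshold: float = 0.3) -> tuple[float, float]:
    """Return (proposer_payoff, responder_payoff)."""
    if offer < fairness_threshold * pie:
        return 0.0, 0.0  # rejection: responder pays to punish unfairness
    return pie - offer, offer

for offer in (1.0, 3.0, 5.0):
    print(f"offer {offer}: payoffs {ultimatum(offer)}")
```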


A decisive feature of hunter-gatherer societies is that cooperation is not restricted to bilateral interactions.  Food-sharing, cooperative hunting, and warfare involve large groups of dozens or hundreds of individuals…By definition, a public good can be consumed by every group member regardless of the member’s contribution to the good.  Therefore, each member has an incentive to free-ride on the contributions of others…In public good experiments that are played only once, subjects typically contribute between 40 and 60% of their endowment, although selfish individuals are predicted to contribute nothing.  There is also strong evidence that higher expectations about others’ contributions induce individual subjects to contribute more.  Cooperation is, however, rarely stable and deteriorates to rather low levels if the game is played repeatedly (and anonymously) for ten rounds. 

The most plausible interpretation of the decay of cooperation is based on the fact that a large percentage of the subjects are strong reciprocators [i.e., they will cooperate if others cooperated in the previous round, but not cooperate if others did not cooperate in the previous round, a strategy also called “tit for tat”] but that there are also many total free-riders who never contribute anything.  Owing to the existence of strong reciprocators, the ‘average’ subject increases his contribution levels in response to expected increases in the average contribution of other group members.  Yet, owing to the existence of selfish subjects, the intercept and steepness of this relationship is insufficient to establish an equilibrium with high cooperation.  In round one, subjects typically have optimistic expectations about others’ cooperation but, given the aggregate pattern of behaviors, this expectation will necessarily be disappointed, leading to a breakdown of cooperation over time.

This breakdown of cooperation provides an important lesson…If strong reciprocators believe that no one else will cooperate, they will also not cooperate.  To maintain cooperation in [multiple person] interactions, the upholding of the belief that all or most members of the group will cooperate is thus decisive.

Any mechanism that generates such a belief has to provide cooperation incentives for the selfish individuals.  The punishment of non-cooperators in repeated interactions, or altruistic punishment [in single interactions], provide two such possibilities.  If cooperators have the opportunity to target their punishment directly towards those who defect, they impose strong sanctions on the defectors.  Thus, in the presence of targeted punishment opportunities, strong reciprocators are capable of enforcing widespread cooperation by deterring potential non-cooperators.  In fact, it can be shown theoretically that even a minority of strong reciprocators suffices to discipline a majority of selfish individuals when direct punishment is possible.  (Fehr and Fischbacher, 786-7)


In short, groups that lack the ability to hold their members accountable for selfish behavior and breaking the rules of fair interaction will soon break down, as everyone devolves to selfish behavior in response to others’ selfishness.  Only the ability to punish members for violating group standards of fairness (and conversely, to reward members for fair behavior and cooperation) keeps the group functional and productive for everyone.*  Thus, quite literally, experimental economics tells us that some form of basic social contract—the authority of members of your group to hold you accountable for how you treat other members of the group, for the benefit of all—is not just a nice thing to have, but a basic necessity for a society to form and survive.  One might even say the social contract is an inherent emergent property of complex human social interaction.
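That decay-and-deterrence dynamic is simple enough to simulate.  Here’s a deliberately minimal Python sketch (my own toy parameters, not the model from the paper) in which strong reciprocators match the previous round’s average contribution and free-riders defect unless the threat of targeted fines outweighs the gain from defecting.  Run it and cooperation collapses toward zero without punishment, but holds steady with it.

```python
# Minimal repeated public-goods simulation (an illustrative sketch
# with made-up parameters, not Fehr and Fischbacher's model).
# Reciprocators match the previous round's average contribution;
# free-riders contribute nothing unless the expected fines from the
# punishers outweigh the gain from defecting.

ENDOWMENT = 10.0
FINE, GAIN = 3.0, ENDOWMENT  # hypothetical sanction vs. free-riding gain

def play(n_players=8, n_reciprocators=6, rounds=10, punish=False):
    avg = 0.5 * ENDOWMENT  # round-one optimism (cf. the 40-60% figure)
    for _ in range(rounds):
        contribs = []
        for i in range(n_players):
            if i < n_reciprocators:
                contribs.append(avg)  # strong reciprocator: tit for tat
            elif punish and FINE * n_reciprocators > GAIN:
                contribs.append(avg)  # selfish player deterred by fines
            else:
                contribs.append(0.0)  # unchecked free-rider
        avg = sum(contribs) / n_players
    return avg / ENDOWMENT  # final cooperation level

print(f"no punishment:   {play(punish=False):.0%} cooperation")
print(f"with punishment: {play(punish=True):.0%} cooperation")
```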

But it isn’t unique to humans.  There are two major forms of cooperative behavior in animals: hive/colony behavior, and social group behavior.  Insects tend to favor hives and colonies, in which individuals are very simple agents specialized to perform some function, with little to no intelligent decision-making on the part of individuals at all.  Humans are social—individuals are intelligent decision-makers, but we survive and thrive better in groups, cooperating with members of our group in competition with other groups.  But so are other primates—apes and monkeys have small-scale societies with leaders and accountability systems for violations of accepted behavior.  Wolf packs have leaders and accountability systems.  Lion prides have leaders and accountability systems.  Virtually every social animal you care to name has, at some level, an accountability system resembling what we call a social contract.  Without the ability to hold each other accountable, a group quickly falls apart and individuals must take care of themselves without relying on the group.

There is strong evidence that humans, like other social animals, have developed our sense of fairness and our willingness to punish unfair group members—and thus our acceptance that we ourselves can be punished for unfairness—not through philosophy, but through evolutionary necessity.  Solitary animals do not have a need for altruistic punishment.  Social animals do.  But as Fehr and Fischbacher also point out, “most animal species exhibit little division of labor and cooperation is limited to small groups.  Even in other primate societies, cooperation is orders of magnitude less developed than it is among humans, despite our close, common ancestry.”  So why is it that we’re so much more cooperative, and thus more successful, than other cooperative animals?  It is, at least in part, because we have extended our concept of altruistic punishment beyond that of other species:


Recent [sociobiological] models of cultural group selection or of gene-culture coevolution could provide a solution to the puzzle of strong reciprocity and large-scale human cooperation.  They are based on the idea that norms and institutions—such as food-sharing norms or monogamy—are sustained by punishment and decisively weaken the within-group selection against the altruistic trait.  If altruistic punishment is ruled out, cultural group selection is not capable of generating cooperation in large groups.  Yet, when punishment of [both] non-cooperators and non-punishers [those who let non-cooperation continue without punishment] is possible, punishment evolves and cooperation in much larger groups can be maintained.  (Fehr and Fischbacher, 789-90)

We don’t just punish non-cooperators.  We also punish those who let non-cooperators get away with it.  In large groups, that’s essential: in a series of computer simulations of multi-person prisoners’ dilemma games with group conflicts and different degrees of altruistic punishment, Fehr and Fischbacher report that no group larger than 16 individuals could sustain long-term cooperation without punishing non-cooperators.  When punishment of non-cooperators was allowed, groups of up to 32 could sustain at least 40% cooperation.  But when punishment of both non-cooperators AND non-punishers was allowed, even groups of several hundred individuals could establish high (70-80%) rates of long-term cooperation.  That, then, is the key to building large societies: a social contract that allows the group to punish members both for failing to cooperate and for failing to enforce the rules of cooperation.
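Why second-order punishment is what lets cooperation scale can be seen with back-of-the-envelope arithmetic.  The Python sketch below uses entirely made-up payoffs (it is not the simulation from the paper): with first-order punishment alone, enforcement costs grow with the number of defectors, so cooperators who never punish quietly out-earn the punishers by a margin that widens as the group grows; add a fine for non-punishers, and enforcing becomes the better strategy at every group size.

```python
# Back-of-the-envelope sketch of why punishing non-punishers matters
# at scale.  All payoffs are hypothetical; this is not the paper's
# simulation.  "Slackers" cooperate but never punish, free-riding on
# enforcement itself.

def enforcement_payoffs(n, defect_frac=0.2, punisher_frac=0.4,
                        benefit=0.5, cost=1.0, punish_cost=0.2,
                        second_order=False, second_fine=1.0):
    defectors = n * defect_frac
    punishers = n * punisher_frac
    public_good = benefit * (n - defectors)  # per-capita return
    punisher = public_good - cost - punish_cost * defectors
    slacker = public_good - cost             # cooperates, never punishes
    if second_order:
        slacker -= second_fine * punishers   # fined for not enforcing
    return punisher, slacker

for n in (8, 32, 256):
    p1, s1 = enforcement_payoffs(n)
    p2, s2 = enforcement_payoffs(n, second_order=True)
    print(f"n={n:>3}: 1st-order punisher {p1:6.1f} vs slacker {s1:6.1f}"
          f" | 2nd-order punisher {p2:6.1f} vs slacker {s2:6.1f}")
```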

It doesn’t much matter whether you feel the social contract is invalid because you never signed or agreed to it, any more than it matters whether you feel the market is unfair because you never agreed to that.  The social contract isn’t an actual contract: it’s an emergent property of the system of human interaction, developed over millennia of evolution to sustain cooperation in large groups.  Whatever form it takes, whether it’s an association policing its own members for violating group norms, or a monarch acting as a third-party arbitrator enforcing the laws, or a democracy voting on appropriate punishments for individual members who’ve violated their agreed-upon standards of behavior, there is no long-term successful human society that does not feature some form of social contract, any more than there is a long-term successful human society that does not feature some form of trading of goods and services.  The social contract isn’t right or wrong.  It just is.  Sorry, Lysander Spooner.

*Note: none of this is to say what structure is best for enforcing group standards, nor what those group standards should be beyond the basic notion of fairness and in-group cooperation.  The merits and downsides of various governmental forms, and of various governmental interests, are a debate better left to philosophers and political theorists, and are far beyond the scope of this article.  My point is merely that SOME form of social authority to punish non-cooperators is an inherent aspect of every successful human society, and is an evolutionary necessity.
