Gangsters and Guns: The Root Causes of Violence in Drug Trafficking (Well, It’s Complicated #5)

A common argument, especially among the more libertarian-minded, is that because a lot of gun violence in America comes from drug dealers, ending the “War on Drugs” would immediately lead to plunging gun violence, especially in poor inner-city neighborhoods where the drug trade is rampant and controlled by violent gangs.  But the truth is that argument is too simple.

While I do tend to agree that a significant percentage of gun violence is directly tied to the economics of the illegal drug supply chain, the problem is that “calling off the drug war” does not in itself mean “take steps to make it an aboveground legal market.” To see what I mean, let’s look at what motivates violence in the drug market.

Gangsters, by and large, don’t go around shooting each other for no reason (though sometimes they do–the profession attracts a lot of violent sociopaths). Generally, violence is used for two main purposes in black markets (of any kind, from drugs to human trafficking to gun running to illegal food sales in Venezuela): to maintain property rights (e.g., protect one’s stash, one’s territory, one’s supply chain, etc., against competition), and to enforce contract rights (e.g., to ensure people play by the rules and don’t try to rip you off, by ensuring they fear your reprisal).  Both can take the form of what is sometimes called “instrumental violence,” that is, violence directly related to economic activities.  But the latter–enforcing contract rights–can also account for a significant amount of what is called “expressive violence,” in that such violence serves to reinforce the gang’s power, control, and reputation in its territory, thus decreasing the likelihood competitors and customers will attempt to cross them.

To remove both of these sources of violence requires giving drug dealers alternate means to maintain their property rights and to enforce their contracts. This requires not merely ending prosecution and incarceration for nonviolent drug offenses, but actually giving the drug market access to legal structures: dealers need to be able to rent legal storefronts, so they aren’t fighting over corners; they need to be able to enter legally binding contracts with suppliers; they need to be able to enforce their rights through the legal system–call the police when someone steals from them, sue a supplier for breach of contract when shipments go missing, etc.

Further, the individuals currently in the underground drug trade either need to be able to transition peacefully to the aboveground drug trade, or be removed from the market entirely–if the barriers to entry into the legal market are too high, the illegal market will continue to operate until it is either economically untenable (through competition with legal alternatives) or shut down by law enforcement. And we’ve seen how well the latter works in practice through the past half century of the Drug War, so it may not be the best option except for the most egregiously violent criminals. If current illegal drug dealers can transition to a legal, profitable, and regulated drug market, and the burden of enforcing property and contract rights shifts onto the government, they no longer need to risk felony charges for enforcing those rights themselves (and for that matter they no longer need to risk getting shot in disputes with other illegal drug traffickers). Only those individuals enamored of a “gangster lifestyle” would have an incentive to continue criminal activities instead of shifting to the safer and still-profitable legal drug trade, thus greatly decreasing the overall rate of violence currently fueled by drugs being a highly profitable black market.

This will not solve gang activity, nor will it solve all gang-related violent crime.  There are certainly other criminal enterprises that many gangs involve themselves in–prostitution, gun running, illegal gambling, extortion, racketeering, etc.  Legalizing and regulating the drug trade, even with low barriers to entry to allow current drug traffickers to transition to legal markets, will have no effect on these other sources of revenue and their associated criminal violence.  But it will absolutely decrease overall violence as gangs get out of the drug trade and its associated high levels of street violence fade away, in favor of the relatively low violence in other criminal markets that rely less on direct territorial control of prized retail locations.

Ending the Drug War alone won’t affect the primary motivations for drug-related violent crime, because the crime isn’t generally caused by criminals trying to avoid prosecution and incarceration.  Violence in the drug trade isn’t caused directly by the drug war; it’s caused more broadly by the fact that the drug trade is a black market controlled by organized crime. Because it’s a black market, those in the business of selling drugs have no way to stay in business except to make sure everyone plays by the rules or gets shot.  That’s the problem that needs to be addressed in order to solve the associated outcome of high levels of violence.

On Social Contracts and Game Theory

The “social contract” is a theory of political philosophy, formalized by Enlightenment thinkers like Rousseau (who coined the term), Hobbes, Locke, and their contemporaries, but tracing its roots back to well before the birth of Christ.  Social contract theories can be found across many cultures, such as in ancient Buddhist sources like the edicts of Ashoka and the Mahāvastu, and in ancient Greeks like Plato and Epicurus.  The idea of the social contract is that individual members of a society either explicitly or implicitly (by being members of that society) exchange some of their absolute freedom for protection of their fundamental rights.  This is generally used to justify the legitimacy of a governmental authority, as the entity to which individuals surrender some freedoms in exchange for that authority’s protection of their other rights.

At its most basic, then, the social contract can be defined as “an explicit or implicit agreement that society—or its representatives in the form of governmental authorities—has the legitimate right to hold members of said society accountable for violations of each other’s rights.”  Rather than every member of a society having to fend for themselves, they agree to hold each other accountable, which by necessity means accepting limitations on their own freedom to act as they please (because if their actions violate others’ rights, they’ve agreed to be held accountable).

The purpose of this article isn’t to rehash the philosophical argument for and against social contract theory.  It’s to point out that the evidence strongly demonstrates social contracts aren’t philosophy at all, but rather—much like economic markets—a fundamental aspect of human organization, a part of the complex system we call society that arose through evolutionary necessity and is by no means unique to human beings.  That without it, we would never have succeeded as a species.  And that whether you feel you’ve agreed to any social contract or not is irrelevant, because the only way to be rid of it is to do away with society entirely.  To do so, we’re going to turn to game theory and experimental economics.

In 2003, experimental economists Ernst Fehr and Urs Fischbacher of the University of Zurich published a paper they titled “The Nature of Human Altruism.”  It’s a fascinating meta-study, examining the experimental and theoretical evidence of altruistic behavior to understand why humans will often go out of their way to help others, even at personal cost.  There are many interesting conclusions in the paper, but I want to focus on one, specifically—the notion of “altruistic punishment,” that is, taking actions to punish others for perceived unfair or unacceptable behavior even when it costs the punisher something.  In various experiments for real money, with sometimes as much as three months’ income at stake, humans will hurt themselves (paying their own money or forfeiting offered money) to punish those they feel are acting unfairly.  The more unfair the action, the more willing people are to pay to punish them.  Fehr and Fischbacher sought to understand why this is the case, and their conclusion plays directly into the concept of a social contract.

 

A decisive feature of hunter-gatherer societies is that cooperation is not restricted to bilateral interactions.  Food-sharing, cooperative hunting, and warfare involve large groups of dozens or hundreds of individuals…By definition, a public good can be consumed by every group member regardless of the member’s contribution to the good.  Therefore, each member has an incentive to free-ride on the contributions of others…In public good experiments that are played only once, subjects typically contribute between 40 and 60% of their endowment, although selfish individuals are predicted to contribute nothing.  There is also strong evidence that higher expectations about others’ contributions induce individual subjects to contribute more.  Cooperation is, however, rarely stable and deteriorates to rather low levels if the game is played repeatedly (and anonymously) for ten rounds. 

The most plausible interpretation of the decay of cooperation is based on the fact that a large percentage of the subjects are strong reciprocators [i.e., they will cooperate if others cooperated in the previous round, but not cooperate if others did not cooperate in the previous round, a strategy also called “tit for tat”] but that there are also many total free-riders who never contribute anything.  Owing to the existence of strong reciprocators, the ‘average’ subject increases his contribution levels in response to expected increases in the average contribution of other group members.  Yet, owing to the existence of selfish subjects, the intercept and steepness of this relationship is insufficient to establish an equilibrium with high cooperation.  In round one, subjects typically have optimistic expectations about others’ cooperation but, given the aggregate pattern of behaviors, this expectation will necessarily be disappointed, leading to a breakdown of cooperation over time.

This breakdown of cooperation provides an important lesson…If strong reciprocators believe that no one else will cooperate, they will also not cooperate.  To maintain cooperation in [multiple person] interactions, the upholding of the belief that all or most members of the group will cooperate is thus decisive.

Any mechanism that generates such a belief has to provide cooperation incentives for the selfish individuals.  The punishment of non-cooperators in repeated interactions, or altruistic punishment [in single interactions], provide two such possibilities.  If cooperators have the opportunity to target their punishment directly towards those who defect they impose strong sanctions on the defectors.  Thus, in the presence of targeted punishment opportunities, strong reciprocators are capable of enforcing widespread cooperation by deterring potential non-cooperators.  In fact, it can be shown theoretically that even a minority of strong reciprocators suffices to discipline a majority of selfish individuals when direct punishment is possible.  (Fehr and Fischbacher, 786-7)

 

In short, groups that lack the ability to hold their members accountable for selfish behavior and breaking the rules of fair interaction will soon break down as everyone devolves to selfish behavior in response to others’ selfishness.  Only the ability to punish members for violating group standards of fairness (and conversely, to reward members for fair behavior and cooperation) keeps the group functional and productive for everyone.*  Thus, quite literally, experimental economics tells us that some form of basic social contract—the authority of members of your group to hold you accountable for your choices with regard to your treatment of other members of the group, for the benefit of all—is not just a nice thing to have, but a basic necessity for a society to form and survive.  One might even say the social contract is an inherent emergent property of complex human social interaction.
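To make that dynamic concrete, here is a minimal simulation sketch (in Python) of the kind of repeated public goods game described above.  The group size, the sanction size, the simple “match the group average” reciprocator strategy, and the assumption that free-riders cooperate only when expected sanctions outweigh what free-riding saves them are all my own illustrative choices, not Fehr and Fischbacher’s experimental design; the point is only the qualitative pattern, with cooperation decaying toward zero without punishment and holding steady once reciprocators can sanction low contributors.

```python
# A minimal sketch of a repeated public goods game with strong reciprocators
# and free-riders, run with and without the option to punish low contributors.
# All parameters and strategies are illustrative assumptions, not the paper's
# actual experimental design.  Payoffs are omitted; we only track contributions.

GROUP_SIZE = 8
ENDOWMENT = 20
ROUNDS = 10
N_RECIPROCATORS = 5   # the rest of the group are pure free-riders
SANCTION = 3          # loss a punished free-rider suffers per punishing reciprocator

def run(punishment_allowed):
    """Return the average contribution in each round."""
    norm = ENDOWMENT * 0.5   # optimistic first-round expectation about others
    averages = []
    for _ in range(ROUNDS):
        contributions = []
        for member in range(GROUP_SIZE):
            if member < N_RECIPROCATORS:
                # Strong reciprocators match what they expect the group to give.
                contributions.append(norm)
            else:
                # Free-riders contribute nothing -- unless the sanctions they
                # expect (one hit from every reciprocator) outweigh what
                # free-riding would save them.
                expected_sanction = SANCTION * N_RECIPROCATORS if punishment_allowed else 0
                contributions.append(norm if expected_sanction > norm else 0)
        avg = sum(contributions) / GROUP_SIZE
        averages.append(avg)
        norm = avg   # reciprocators condition the next round on what they just saw
    return averages

print("no punishment:  ", [round(x, 1) for x in run(False)])
print("with punishment:", [round(x, 1) for x in run(True)])
```

The particular numbers don’t matter.  What matters is that a handful of committed free-riders is enough to unravel everyone’s cooperation unless the group has some way of making defection costly.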

But it isn’t unique to humans.  There are two major forms of cooperative behavior in animals: hive/colony behavior, and social group behavior.  Insects tend to favor hives and colonies, in which individuals are very simple agents that are specialized to perform some function, and there is little to no intelligent decision making on the part of individuals at all.  Humans are social—individuals are intelligent decision makers, but we survive and thrive better in groups, cooperating with members of our group in competition with other groups.  But so are other primates—apes and monkeys have small-scale societies with leaders and accountability systems for violations of accepted behavior.  Wolf packs have leaders and accountability systems.  Lion prides have leaders and accountability systems.  Virtually every social animal you care to name has, at some level, an accountability system resembling what we call a social contract.  Without the ability to hold each other accountable, a group quickly falls apart and individuals must take care of themselves without relying on the group.

There is strong evidence that humans, like other social animals, have developed our sense of fairness and our willingness to punish unfair group members—and thus our acceptance that we ourselves can be punished for unfairness—not through philosophy, but through evolutionary necessity.  Solitary animals do not have a need for altruistic punishment.  Social animals do.  But as Fehr and Fischbacher also point out, “most animal species exhibit little division of labor and cooperation is limited to small groups.  Even in other primate societies, cooperation is orders of magnitude less developed than it is among humans, despite our close, common ancestry.”  So why is it that we’re so much more cooperative, and thus more successful, than other cooperative animals?  It is, at least in part, because we have extended our concept of altruistic punishment beyond that of other species:

 

Recent [sociobiological] models of cultural group selection or of gene-culture coevolution could provide a solution to the puzzle of strong reciprocity and large-scale human cooperation.  They are based on the idea that norms and institutions—such as food-sharing norms or monogamy—are sustained by punishment and decisively weaken the within-group selection against the altruistic trait.  If altruistic punishment is ruled out, cultural group selection is not capable of generating cooperation in large groups.  Yet, when punishment of [both] non-cooperators and non-punishers [those who let non-cooperation continue without punishment] is possible, punishment evolves and cooperation in much larger groups can be maintained.  (Fehr and Fischbacher, 789-90)

We don’t just punish non-cooperators.  We also punish those who let non-cooperators get away with it.  In large groups, that’s essential: in a series of computer simulations of multi-person prisoners’ dilemma games with group conflicts and different degrees of altruistic punishment, Fehr and Fischbacher found that no group larger than 16 individuals could sustain long-term cooperation without punishing non-cooperators.  When they allowed punishment of non-cooperators, groups of up to 32 could sustain at least 40% cooperation.  But when they allowed punishment of both non-cooperators AND non-punishers, even groups of several hundred individuals could establish high (70-80%) rates of long-term cooperation.  Thus, that’s the key to building large societies: a social contract that allows the group to punish members for failing to cooperate, and for failing to enforce the rules of cooperation.
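A quick way to see why that second layer matters: punishing defectors is itself costly, so an individual cooperator does better by letting someone else handle the enforcing, and in a large group that temptation compounds.  Unless shirking on enforcement is also sanctioned, punishment erodes and cooperation erodes with it.  The toy check below uses made-up payoff numbers of my own, not the parameters of the simulations cited above; it simply asks whether defecting, and whether refusing to punish, remains individually profitable under each regime.

```python
# Toy best-response check for the three regimes described above.  All payoff
# numbers are illustrative assumptions, not parameters from the cited simulations.

GAIN_FROM_DEFECTING = 5       # what a member pockets by not contributing
SANCTION_ON_DEFECTOR = 8      # total punishment a defector expects to receive
COST_OF_PUNISHING = 2         # what a member pays to help sanction defectors
SANCTION_ON_NON_PUNISHER = 3  # second-order punishment for shirking enforcement

def best_responses(punish_defectors, punish_non_punishers):
    """Is defecting profitable?  Is refusing to punish profitable?"""
    defecting_pays = GAIN_FROM_DEFECTING > (SANCTION_ON_DEFECTOR if punish_defectors else 0)
    # The enforcement question only arises once punishment exists at all.
    shirking_pays = None
    if punish_defectors:
        shirking_pays = COST_OF_PUNISHING > (SANCTION_ON_NON_PUNISHER if punish_non_punishers else 0)
    return {"defecting pays": defecting_pays, "shirking on enforcement pays": shirking_pays}

print("no punishment:          ", best_responses(False, False))
print("punish defectors only:  ", best_responses(True, False))
print("punish shirkers as well:", best_responses(True, True))
```

Under first-order punishment alone, defecting stops paying but shirking on enforcement still pays, which is why enforcement behaves like a second-order public good and cooperation caps out at a few dozen members in the simulations described above.  Only when non-punishers can also be sanctioned does enforcing the rules stay individually worthwhile, and only then can groups of hundreds keep cooperating.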

It doesn’t much matter if you feel the social contract is invalid because you never signed or agreed to it, any more than it matters if you feel the market is unfair because you never agreed to it.  The social contract isn’t an actual contract: it’s an emergent property of the system of human interaction, developed over millennia by evolution to sustain cooperation in large groups.  Whatever form it takes, whether it’s an association policing its own members for violating group norms, or a monarch acting as a third-party arbitrator enforcing the laws, or a democracy voting on appropriate punishment for individual members who’ve violated their agreed-upon standards of behavior, there is no long-term successful human society that does not feature some form of social contract, any more than there is a long-term successful human society that does not feature some form of trading of goods and services.  The social contract isn’t right or wrong.  It just is.  Sorry, Lysander Spooner.

*Note: none of this is to say what structure is best for enforcing group standards, nor what those group standards should be beyond the basic notion of fairness and in-group cooperation.  The merits and downsides of various governmental forms, and of various governmental interests, are an argument better left to philosophers and political theorists, and are far beyond the scope of this article.  My point is merely that SOME form of social authority to punish non-cooperators is an inherent aspect of every successful human society, and is an evolutionary necessity.

The Age of Hype

“We’re the middle children of history, man. No purpose or place. We have no Great War. No Great Depression. Our Great War’s a spiritual war… our Great Depression is our lives. We’ve all been raised on television to believe that one day we’d all be millionaires, and movie gods, and rock stars. But we won’t. And we’re slowly learning that fact. And we’re very, very pissed off.”

Chuck Palahniuk, Fight Club

Objectively, most of the major fights faced in 2017, on any front, seem trivial.

ISIS is not an existential threat to the United States, the way Nazi Germany and the Soviet Union once were. Even the Russian security state struggles to do much beyond exerting its influence in spheres it once had locked down and is now content merely to compete in.

On the front of civil rights, we’ve moved into an increasingly nebulous area of oppression vs. oppressors, where the oppression in question is… use of a bathroom? Who can use racial slurs? Perhaps the most hyped-up issue, police killings of minorities, is the most emblematic of this — the actual number of unarmed people killed by police is exceptionally low for a nation of 320 million people.

Economically, we’re told American manufacturing is dying (despite all-time-high manufacturing output), we’re told the banks control everything in a way they never have before (which must be quite mirthful to the ghost of J. P. Morgan), and we’re told that ruin and bankruptcy are imminent on all fronts.

Politically, we’re quick to portray our political opponents as traitors, enemies, sycophants of foes far worse. A quick tour of political-leaning Facebook pages will find you a great host of people content to believe that Democrats are tools of radical socialism — or that Republicans are the tools of the far right in a way that suggests an American Reich is imminent.  Blood on the streets is coming any day now, because YouTube videos of Black Bloc Anarchists mixing it up with guys in MAGA hats have told us so.

What these issues all have in common, though, is that they’re all blown way out of proportion.

This isn’t to say that none of these are legitimate problems — excepting the accusations of widespread traitors among American politicians, most of these are very real problems.

But they’re not the colossal struggle that was World War II, or the American Civil Rights movement of the 1960s.

Good luck suggesting that to folks with strong opinions on this.

The US has a long tradition of the cult of the rebel — it’s in our national DNA and our very founding was an act of rebellion. It’s therefore unsurprising that so many Americans like to cast themselves as noble rebels against an evil empire — a common thread from burnt-out hippies to anti-government militias to Alex Jones to Bill Maher. When that’s overlaid with this overplayed sense of urgency, though, there is a very real problem that is only starting to emerge.

As anyone who’s taken a driving course can tell you, overcorrection is often just as fatal as not correcting. We’re entering an age of McCarthyism — everyone is a secret enemy in some way — they’re complicit in climate change, they’re racist or sexist, they’re authoritarian, they’re out to take your money and rip you off. The palettes differ from political affiliation to political affiliation, but the underlying trend is there.

Perhaps more disturbing at the macro level, and nearly unprecedented in history, it has become difficult to differentiate between which issues are important and which are not.

Imagine, for a second, that you are a Congressional Representative. It is completely conceivable, on a daily basis, that you will receive calls, letters, and requests on, at minimum, five broad categories of issues: the economy, foreign policy, social policy, government accountability, and campaign promises. Each of these may have twenty or thirty different facets, and many tie together.

How do you prioritize? Can you prioritize? If half of your district is writing you about healthcare while the other half is writing you about their taxes being too high and you’ve got a campaign promise about bringing back the Lockheed Plant that you can only get done if your pals in Arkansas get their new Army Reserve Training Center in this year’s defense budget, how do you spend your day? And that’s to say nothing of the fear over a recent mass shooting in your state, the impending budget decisions that your party whip expects you to back even though you know that your two biggest donors are completely against several of the provisions…

It’s no surprise that Americans have a low impression of Congress. With so many narratives out there, each demanding top billing, everyone feels marginalized by the government.

The kicker is, the government is, honest to God, doing the best it humanly can given the circumstances. While this line might invite snark from libertarians and anarchists, it is worth considering that it is hard to imagine a form of government that could conceivably use the time of one Congressional session to solve the American healthcare crisis, defeat ISIS, fix immigration (either through reform or better security), make the military more efficient, expand LGBT rights while respecting religious rights, confront automation-displacement, solve economic anxiety, reduce the gap between the rich and the poor, enforce existing environmental law, enhance American education, etc., etc. It is truly a Herculean set of tasks, and empirically more than most previous governments had to oversee.

Our founders planned for a decentralized system, with many of these issues being solved closest to home. Federalism is still the best way to deal with such a problem. What’s concerning, however, is that many Americans are no longer interested in a decentralized approach, especially as it pertains to the president.

Consider that Donald Trump was elected partially on the idea that he would reduce the McCarthyist hydra that is modern political correctness — which, on its face, seems a reasonable thing to want to confront.

But how on Earth would a president be able to confront prevailing social trends? Sure, JFK may be partially responsible for America giving up the hat as a daily wear item, but Presidents generally are not trendsetters or people who adjust the social temperature of the nation. They are executives presiding over the government.

But to those who believe political correctness is an existential threat, it seems reasonable to bank as much as they can on as many different approaches as possible — elect an anti-PC president, force anti-PC legislation through Congress, whine about it on Facebook to their friends so everyone knows about the great threat of PC. But consider that any time spent jousting at this windmill is time that is not spent confronting one of the many other problems that other voters prize over this. That drags those voters’ confidence down, and the expectation that the President should fix it drags the overall national opinion of the President down. That’s not including any partisan backlash from taking one side or another.

So this odd situation presents itself, where the president and congress are attempting to do as the voters asked — but if it’s not quick enough, not executed perfectly, then fickle public opinion turns against the very thing that was requested, and before it can be repealed, the American Voter is already demanding something new (after all, he’s besieged on all sides by supposedly existential threats).

So voters get burnt out. They despair. Their problems are ignored. Their doom is imminent. They turn to drugs or alcohol. They disengage. No one, they think, understands them or cares about them.

The Palahniuk quote at the beginning summarizes their plight well.

Where I struggle is that I don’t have an answer for how to fix this, or even reduce it. I’m not sure it can be fixed. Post-modern politics looks to continue indefinitely into the future, and only get worse as more problems pile up, each hyped up to be the next World War II, the next Civil Rights movement.

In an era of choosing your own narrative with all evidence being somehow equal, it is a dark time to be an empiricist.

Note: This post was originally published at Philip S. Bolger’s Medium page.  It is reprinted with his permission.
https://medium.com/@philip.s.bolger/the-age-of-hype-48e0466d6379#.gbt334yus

Dumbocrats and Republican’ts (Part 1): The Trouble with Dogma

American politics is currently beset by a problem.  Well, many, but for right now we’re going to focus on just one: polarization.  There’s a perception, with some evidence, that American politics is currently more polarized than at any other point in recent memory—certainly since the 1960s.  And this is a problem, because polarization leads to gridlock, to civil unrest, to social breakdowns, and even, in extreme cases, to civil war.  Religious polarization in Christian Europe led to a series of conflicts known as the Wars of Religion—the most famous being the Thirty Years’ War, in which more than 8 million people died.  Polarization over slavery and trade issues between Northern and Southern states led to the American Civil War in the 1860s.  Most of us can agree that polarization is, in general, a bad thing for a society.  The question, though, is what to do about it.  And to answer that, first we have to look at what polarization is and what it is not.  Only then can we start to identify potential routes to solve the problem.

Let’s start with what it is not.  Polarization is not merely a particularly widespread and vehement disagreement.  Disagreement just means that different people have drawn different conclusions.  This, by itself, is healthy.  Societies without disagreement drive headlong into madness, fueled by groupthink and demagoguery.  Fascist and totalitarian societies suppress dissent because it slows or stops their efforts to achieve their perfect visions. Disagreement arises naturally—highly intelligent people, even those with a shared culture, can look at the same evidence, in the exact same context, and come to radically different conclusions because they weight different cultural values more highly than others, because they prioritize different goals over others, because they have different life experiences with which to color their judgements.  That’s healthy.  The discussions and debates arising from such disagreements are how groups and societies figure out how best to proceed in a manner that supports the goals and values of the group as a whole.

So if polarization isn’t just disagreement, what is it?  Polarization is a state of affairs where the fact that other groups disagree with your group becomes more important than the source of that disagreement.  Essentially, polarization is where disagreeing groups are no longer willing to discuss and debate their disagreements and come to a compromise that accounts for everyone’s concerns, but instead everyone draws their line in the sand and refuses to budge.  Polarization is what occurs when we stop recognizing that disagreement is a natural and healthy aspect of a diverse society, and we start treating our viewpoints as dogma rather than platforms.  Platforms can be adjusted in the face of new evidence and reasonable arguments.  People who subscribe to a platform can compromise with people who subscribe to other platforms, for the mutual good of all involved.  But dogma is immutable.  People who subscribe to dogma cannot compromise, no matter what evidence or arguments they encounter.  Their minds are made up, and they will not be swayed.

Polarization occurs when dogma sets in.  Because when your beliefs are dogmatic, anyone who disagrees is no longer a fellow intelligent human being who just happens to have slightly different values and experiences coloring their beliefs.  When your beliefs are dogmatic, anyone who disagrees is at best an idiot who just doesn’t understand, and at worst a heretic who must be purged for the safety of your dogma.  When your beliefs are dogmatic, there’s no longer any value in hearing what the other side has to say, and instead you turn to echo chambers that do nothing but reinforce the dogma you already believe.

Where does dogma come from?  Why do people subscribe to dogmatic beliefs when there is so much information available in the modern world?  It’s largely because critical thinking is difficult.  It’s not that people are stupid, but rather that when there IS so much information available, it’s hard to process it and separate the wheat from the chaff without a filter.  And dogmatic beliefs, distilled to simple talking points by those echo chambers like media sources and groups of friends and family, provide just such a filter with which people can try to understand a highly complex world by fitting it to their worldviews.  Dogma is comfortable.  Dogma makes sense.  Dogma tells us why we’re right, why our values are the right values and our beliefs are the right beliefs.  And that’s not to mention the draw of being part of the in-group: choosing and subscribing to a dogma lets you fit in with a crowd and gain respect at the low, low cost of merely repeating the same soundbites over and over again.  It’s self-reinforcing, especially in the world of modern 24-hour news networks, a thousand “news” websites to cater to any given belief system, and social media networks that let us surround ourselves with comfortable consensus and block those who might question our beliefs.  It’s no real mystery why people are drawn to dogmatic beliefs—the very things that could show them the error of their ways are the reasons they prefer their heads in the sand.

But most people would agree that dogma is bad, that critical thinking is good, even when they’re manifestly dogmatic themselves.  How can they be comfortable with that cognitive dissonance?  Well, quite simply, because they don’t even recognize it.  It’s much easier to identify dogmatic beliefs in others than in ourselves.  We all like to think we’ve thought through our positions and come to the right conclusions through logic and evidence, even when we quite clearly haven’t.  Hence the phenomenon of conservatives referring to “dumbocrats” and “libtards,” and liberals responding with “republican’ts” and “fascists.”  I’ve lost track of how many times I’ve seen conservatives assert liberalism is a mental disorder, and liberals say the exact same about conservatism, both sides laughing from their supposed superior mental position.  Self-reflection is actually incredibly difficult.  It takes a lot of effort.  It’s uncomfortable.  So we don’t do it.

Now that we’ve established what dogma is, where it comes from, and why people subscribe to it despite professing otherwise, in the next post in this series we’ll look at what we can do about it.