In the first installment of this series, I discussed some well-known phenomena that are emergent effects of complex systems, and gave a general definition of complexity. In this installment, we’re going to delve a little deeper and look at some common properties and characteristics of complex systems. Understanding such properties helps us understand what are the types of complex systems and what kinds of tools we have available to study complexity, which will be the topic of the third installment of the series.
There are four common properties that can be found in all complex systems:
- Simple Components (Agents)
- Nonlinear Interaction
- No Central Control
- Emergent Behavior
But what do these mean, and what do they look like? Let’s examine each in turn.
SIMPLE COMPONENTS (AGENTS):
One of the most interesting things about complex systems is that they aren't composed of complex parts. They're built from components that are relatively simple compared to the system as a whole. Human society is fantastically complex, but its individual components are single human beings, which are themselves fantastically complex compared to the cells that are their fundamental building blocks. Hurricanes are built of nothing more than air and water particles. These components are also known as agents. Among those who distinguish the two terms, agents can make decisions while components cannot; I'll treat them as interchangeable and use agents throughout the rest of this post.
Computer simulations show that even when agents can make only one or two very simple deterministic responses, with no decision-making process beyond "IF...THEN...," enough of them interacting will produce intricate complexity. We see this in nature, too: an individual ant is one of the simplest animals around, driven entirely by instincts that lead it to respond predictably to stimuli, but an ant colony is a complex system that builds cities, forms a society, and even wages war. The wonder of complex systems is that they spring not from complexity but from relative simplicity, interacting. There must be many agents, though: a single car on a road network is not a complex system, but thousands of them are, which leads us to our next property.
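To make this concrete, here's a minimal sketch (my own illustration, not from the original text) of agents whose entire behavior is one deterministic IF...THEN lookup: an elementary cellular automaton using Wolfram's Rule 110. Each cell consults only itself and its two neighbors, yet the patterns the rows generate are famously intricate:

```python
# "Agents" as one-rule IF...THEN responders: Wolfram's Rule 110 elementary
# cellular automaton. The rule number's bits encode the response to each of
# the 8 possible (left, self, right) neighborhoods.
RULE = 110

def step(cells):
    """Advance every cell one tick using only local neighbor information."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, me, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (me << 1) | right   # a number from 0 to 7
        nxt.append((RULE >> pattern) & 1)           # IF pattern THEN rule's bit
    return nxt

def run(width=31, steps=15):
    cells = [0] * width
    cells[width // 2] = 1            # one "on" agent in the middle
    rows = [cells]
    for _ in range(steps):
        cells = step(cells)
        rows.append(cells)
    return rows

for row in run():
    print("".join("#" if c else "." for c in row))
```

The triangle-strewn pattern that appears is not programmed anywhere; it is a product of many simple agents responding to their neighbors.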
NONLINEAR INTERACTION:
For complexity to arise from simple agents, there must be many of them interacting, and those interactions must be nonlinear. The nonlinearity comes not from any single interaction, but from the fact that any one interaction can (and often does) trigger a chain reaction of follow-on interactions with more agents, so a single decision or change can have wide-ranging effects.
In technical terms, nonlinear systems are those in which the change of the output is not proportional to the change of the input: when you change what goes in, what comes out does not always grow or shrink proportionately to that change. In layman's terms, the system's response to the same input might be wildly different depending on the state or context of the system at the time. Sometimes a small change has large effects. Sometimes a large change is absorbed by the system with little to no effect at all.
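As a hedged illustration (the function and values here are my own, not the article's), the logistic map f(x) = r·x·(1−x) is a textbook nonlinear system that shows this state-dependence: the same +0.01 nudge to the input produces a large increase, almost no change, or a large decrease, depending on the system's current state:

```python
# The logistic map, a classic nonlinear system (parameter r=4 is arbitrary).
def f(x, r=4.0):
    return r * x * (1.0 - x)

nudge = 0.01
for state in (0.10, 0.50, 0.90):
    change = f(state + nudge) - f(state)
    # Same nudge, three very different responses: the output change depends
    # on where the system currently sits, not just on the size of the input.
    print(f"state {state}: +0.01 nudge changes the output by {change:+.4f}")
```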
This is important to understand for two reasons. First, when dealing with complex systems, responses to actions and changes might be very different from those the actor expected or intended. Even in complex systems, changes and decisions usually have the expected result. But sometimes they don't, and when the system has a large number of interactions, the unexpected results can start to have a significant impact on the system as a whole.
The other reason this is important is that nonlinearity is the root of mathematical chaos. Chaos is defined as seemingly random behavior with sensitive dependence on initial conditions: in nonlinear systems, under the right conditions, prediction is impossible, even theoretically. One would have to know with absolute precision the starting conditions of every aspect of the system, and since the uncertainty principle makes that physically impossible, perfect prediction of a complex system is ruled out. To see what happens in a complex system of agents interacting in a nonlinear fashion, you must let it play out. Otherwise, the best you can do is an approximation that loses accuracy the further you get from the starting point. This sensitivity to initial conditions is commonly simplified as the "butterfly effect," where even small changes can have large impacts across the system as a whole.
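A quick way to see sensitive dependence for yourself (again an illustrative sketch, using the logistic map rather than a weather model): start two copies of the same system with initial conditions differing by one part in a billion, and watch the gap explode:

```python
# Two copies of the same deterministic nonlinear system, started a
# billionth apart. Early on they are indistinguishable; after enough
# steps they behave as if they were unrelated.
def f(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-9
for step in range(1, 51):
    a, b = f(a), f(b)
    if step in (1, 10, 25, 50):
        print(f"after {step:2d} steps, the trajectories differ by {abs(a - b):.2e}")
```

No randomness is involved: the divergence comes entirely from the nonlinearity amplifying the initial billionth-of-a-unit difference.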
In short, the reason the weatherman in most places can't tell you next week's weather very accurately isn't that he's bad at his job; it's that weather (except in certain climates with stable weather patterns) literally cannot be predicted very well, and it gets harder the further out you try. That's just the nature of the system forecasters are working with. It's remarkable they've managed to get as good as they have, actually, considering that meteorologists only began to understand the chaotic principles underlying weather systems when Lorenz discovered them by accident in 1961. Complex systems are inherently unpredictable, because they consist of a large number of nonlinear interactions.
NO CENTRAL CONTROL:
Complex systems do not have central control. Rather, the agents interact with each other, giving rise to a self-organized network (which in turn shapes the nonlinearity of the interactions among the agents of the network). This is a spontaneous ordering process that requires no direction or design from internal or external controllers. All complex systems are networks of connected nodes, where the nodes are the agents and the connections are their interactions, whether they're networks of interacting particles in a weather system or networks of interacting human beings in an economy.
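Self-organization without a controller can be sketched in a few lines. The toy below (my own illustration; the rule is a bare-bones version of preferential attachment, not something from the article) grows a network where each arriving node follows one local rule, yet highly connected hubs emerge that nobody designed:

```python
import random

# Each arriving node follows one local rule: link to an existing node chosen
# with probability proportional to its degree. We sample from a list holding
# every edge endpoint, so well-connected nodes appear more often in it.
def grow_network(n_nodes, seed=42):
    random.seed(seed)
    degree = {0: 1, 1: 1}     # start from a single edge between nodes 0 and 1
    endpoints = [0, 1]
    for new in range(2, n_nodes):
        target = random.choice(endpoints)   # no global view, no controller
        degree[new] = 1
        degree[target] += 1
        endpoints += [new, target]
    return degree

degrees = grow_network(200)
print("largest hub degree:", max(degrees.values()))
print("median degree:", sorted(degrees.values())[len(degrees) // 2])
```

The hubs are an ordering that appears spontaneously from local decisions, which is the essence of a self-organized network.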
The structure of the system arises from the network. Often it takes the form of nested complex systems: a society is a system of human beings, each of whom is a system of cells, and each level is itself a complex system. Mathematically, the term for this is a fractal; complex systems tend to have a fractal structure, which is a common feature of self-organized systems in general. Some complex systems are networks of simple systems; others are networks of complicated systems; many are networks of complex, complicated, and simple sub-systems all interacting together. A traffic light is a simple system; a car is a complicated system; a human driver is a complex system; and the traffic system is a network of many individual examples of all three interacting as agents. It is also entirely self-organized: the human beings who act as drivers are the same agents who plan and build the road system that guides their interactions as drivers, by means of other complex systems such as the self-organized political system in a given area.
EMERGENT BEHAVIOR:
Emergent properties, as discussed in part one of this series, are those aspects of a system that cannot be deduced merely by isolating the agents; the system is greater than the sum of its parts. An individual neuron is very simple, capable of nothing more than firing electrical signals to other neurons. But put a hundred billion of them together, and you have a brain capable of conscious thought, of decision-making, of art and math and philosophy. A single car with a single driver is easy to understand, but put thousands of them on the road network at the same time and you have traffic, with its own emergent phenomena like congestion and gridlock. Two people trading goods and services are simple to describe, but millions of them create market bubbles and crashes. This is the miracle of complexity: nonlinear networks of relatively simple agents self-organize and produce emergent phenomena that could not exist without the system itself.
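Traffic is a case where emergence can be simulated directly. The sketch below is my own toy variant of the well-known Nagel–Schreckenberg traffic model (the road length, car count, and probabilities are arbitrary choices): each driver follows three simple rules, namely speed up, don't hit the car ahead, and occasionally hesitate. "Phantom" jams, stretches of stopped cars with no accident causing them, emerge on their own:

```python
import random

def simulate(length=100, n_cars=35, vmax=5, p_slow=0.3, steps=200, seed=1):
    """Cars on a circular road; returns how many stopped-car moments occurred."""
    random.seed(seed)
    pos = sorted(random.sample(range(length), n_cars))  # distinct starting cells
    vel = [0] * n_cars
    stopped = 0
    for _ in range(steps):
        for i in range(n_cars):
            gap = (pos[(i + 1) % n_cars] - pos[i] - 1) % length
            vel[i] = min(vel[i] + 1, vmax, gap)        # speed up, keep a safe gap
            if vel[i] > 0 and random.random() < p_slow:
                vel[i] -= 1                            # occasional hesitation
        pos = [(p + v) % length for p, v in zip(pos, vel)]
        stopped += sum(1 for v in vel if v == 0)       # cars caught in a jam
    return stopped

print("stopped-car moments over the run:", simulate())
```

No single driver decides to create a jam; congestion is a property of the system, not of any agent.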
Some common emergent properties include information processing and group decision-making, nonlinear dynamics (often shaped by feedback loops that dampen or amplify the effects of individual agents' behaviors), hierarchical structures (such as families and groups which cooperate among themselves and compete with each other at various levels of a social system), and evolutionary and adaptive processes. A hurricane, for example, is an emergent property in which many water and air molecules interact under certain conditions and with certain inputs (such as heat energy from sunlight), enter a positive feedback loop that amplifies their interactions, and become far more than the sum of their parts, until the conditions change (such as hitting land and losing access to a ready supply of warm water), at which point a negative feedback loop takes over, limiting the storm's growth and eventually dictating its decline back to nonexistence. Adam Smith's "Invisible Hand" is an emergent property of the complex systems we call economies, in which individual actions within a nonlinear network of agents are moderated by feedback loops and self-organized hierarchical structures to produce common goods through self-interested behavior. Similarly, the failures of that Invisible Hand, such as speculative bubbles and market crashes, are themselves emergent behaviors of the economic system that cannot exist without the system itself.
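The hurricane's feedback story can be caricatured in a few lines. This is a toy model with made-up constants, purely illustrative: intensity compounds under a positive feedback while the storm is "over water," and a negative feedback drives it back down after "landfall":

```python
def storm(steps=60, landfall=30, growth=0.15, decay=0.25):
    """Toy intensity curve: positive feedback over water, negative after landfall."""
    intensity, history = 1.0, []
    for t in range(steps):
        if t < landfall:
            intensity *= 1 + growth   # stronger storm -> more evaporation -> stronger storm
        else:
            intensity *= 1 - decay    # friction plus no warm water: the loop reverses
        history.append(intensity)
    return history

h = storm()
print(f"peak intensity {max(h):.1f}, final intensity {h[-1]:.3f}")
```

The same multiplicative step produces explosive growth or steady decline depending only on which feedback loop is active, which is the amplify-or-dampen behavior described above.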
Now that we've established the common properties of complex systems, in the next article we'll look at a couple of different types of complex systems, how they differ, and what tools we can use to model them properly.