Nature uses only the longest threads to weave her patterns, so each small piece of her fabric reveals the organization of the entire tapestry.
—Richard Feynman, The Character of Physical Law [Feynman, 1994, p. 28]
How can structure emerge from chaos? Why isn’t everything just “noise”? And how does “complexity” differ from “chaos”?
Imagine a droplet of honey falling into a bowl of yogurt. After you stir the yogurt a few times, long circular threads of honey emerge. Stir some more, and the honey seems to be distributed more or less evenly throughout the yogurt. But the situation has not necessarily become chaotic or random: if it were possible to stir the yogurt in the opposite direction in exactly the same way, you would again end up with a single drop. And that is how nature can best be described: as a sea of waves representing an infinite series of stirrings or foldings.
To better understand the concept of chaos, let us look at a selection of examples where we intuitively expect the phenomena to be random, while they are actually based on simple repeating rules. If we discover these rules, we can explain nature’s complexity without having to fall back on ideas of randomness or supernatural causes.
When looking at trees, we see fine-grained structures. Like our hierarchy tree of concepts from Philosophy for Heroes: Knowledge, this indicates that a lot of information is hiding within those structures. We assume that their appearance must be stored somewhere in their genetic code. But storing every branch individually seems unlikely, given how much trees branch out over time.
FRACTAL · A fractal is a self-similar pattern created by repeatedly applying the same rule to its own output.
A simple example of a fractal is the “Koch snowflake,” where you start with a triangle and add a smaller triangle to each side. Then, you repeat that process with ever smaller triangles, and so on. If you repeated it a hundred times, the total perimeter would be as long as the distance between the Earth and the sun. The result looks eerily like the snowflakes we know (see Figures 3.18 and 3.19):
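The perimeter claim can be checked with a few lines of arithmetic: each iteration replaces every edge with four edges a third as long, so the perimeter grows by a factor of 4/3 per step. A minimal sketch in Python, assuming a (hypothetical) starting triangle with sides of 1 cm:

```python
# Koch snowflake: each iteration multiplies the perimeter by 4/3,
# because every edge is replaced by four edges a third as long.
side = 0.01           # starting side length in meters (1 cm, an assumption)
perimeter = 3 * side  # perimeter of the initial triangle

for _ in range(100):
    perimeter *= 4 / 3

print(f"perimeter after 100 iterations: {perimeter:.2e} m")
# For comparison, the Earth-sun distance is about 1.5e11 m;
# the result is of the same order of magnitude.
```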
Figure 3.18: A snowflake (image source: Shutterstock).
As this is the simplest way to create scalable structures, fractals can be found in many places in nature. And having scalable rules is essential for plants, which can grow only as far as nutrients and sunlight allow. With fractals, a tree does not have to encode its entire branching structure in its genetic code; it simply has to make sure that branches, upon reaching a certain size, branch further. Having such a pseudorandom pattern also increases the amount of light a plant can absorb by minimizing the shadow that individual leaves and branches cast on other leaves. Instead of storing the whole tree structure in its genetic code, the tree stores only the rule for how branching should happen and applies that rule repeatedly while growing (see Figure 3.20).
Remembering what we talked about in the first book, Philosophy for Heroes: Knowledge, this also connects to the golden ratio, a numerical relationship that looks aesthetically pleasing to our eyes. The underlying Fibonacci sequence (1, 1, 2, 3, 5, 8, 13, 21, …), whose ratios of consecutive numbers (5/3, 8/5, 13/8, 21/13, …) approximate the golden ratio, is likewise built on a repeating rule, namely adding the two previous numbers (1 + 1 = 2, 1 + 2 = 3, 2 + 3 = 5, 3 + 5 = 8, …).
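To see the repeating rule at work, here is a short Python sketch that applies “add the two previous numbers” thirty times and compares the ratio of the last two numbers with the golden ratio:

```python
# Apply the Fibonacci rule repeatedly: each number is the sum of the two before.
a, b = 1, 1
for _ in range(30):
    a, b = b, a + b

golden_ratio = (1 + 5 ** 0.5) / 2  # ≈ 1.6180339887
print(b / a, golden_ratio)         # the ratio converges to the golden ratio
```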
The Magnetic Pendulum
Imagine a pendulum and three magnets (see Figure 3.21). You release the pendulum and it swings back and forth, its path always slightly affected by the nearest magnet it swings by. Eventually, it comes to rest above one of the magnets. You keep repeating this experiment, but no matter how often you do, you never seem able to predict over which magnet the pendulum will come to rest, aside from some obvious starting positions like directly above a magnet.
If you took note of the starting position each time you released the pendulum and assigned a color to each magnet, you would end up with a picture that looks like someone had thrown buckets of paint onto a canvas. It seems to be a product of chance, not of a deterministic cause or pattern. But examining it more closely, we do see some form of organization: the picture is symmetric in three directions. It could still be buckets of different colors of paint thrown onto a canvas and then mirrored along its three (or six, if you include the mirroring of the left and right side) axes.
CHANCE · The cause of an effect when no other cause could be determined.
Despite the simple setup of only three magnets and a pendulum, the resulting image (see Figure 3.22) shows significant complexity. The system is chaotic because each swing amplifies any change in the initial starting position. The more swings the pendulum makes (the farther it starts from any magnet), the harder it is to predict the final position from a previous experiment, no matter how small the difference between the starting positions. Each swing multiplies the initial inaccuracy of placing the pendulum at the same starting point as before.
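The experiment can be imitated numerically. The following sketch is an illustration only: the magnet positions, the force law, and the spring and friction constants are all assumptions, not measurements from the figure. It integrates the pendulum’s motion step by step and reports which magnet the bob ends up closest to:

```python
import math

# Three magnets on a unit circle (positions are an assumption for illustration).
MAGNETS = [(math.cos(a), math.sin(a))
           for a in (math.pi / 2, 7 * math.pi / 6, 11 * math.pi / 6)]

def nearest_magnet(x, y, steps=100_000, dt=0.01):
    """Release the bob at rest at (x, y); return the index of the magnet
    it is closest to after the motion has been damped away."""
    vx = vy = 0.0
    for _ in range(steps):
        ax = -0.2 * x - 0.1 * vx  # spring toward the center plus friction
        ay = -0.2 * y - 0.1 * vy
        for mx, my in MAGNETS:    # attraction toward each magnet
            dx, dy = mx - x, my - y
            d3 = (dx * dx + dy * dy + 0.01) ** 1.5  # softened distance
            ax += dx / d3
            ay += dy / d3
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return min(range(3), key=lambda i: (x - MAGNETS[i][0]) ** 2
                                     + (y - MAGNETS[i][1]) ** 2)

# Two almost identical starting positions may settle over different magnets:
print(nearest_magnet(2.0, 2.0), nearest_magnet(2.001, 2.0))
```

Coloring a grid of starting positions by the returned magnet index produces exactly the kind of paint-splatter basin picture described above.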
[Chaos is when] the present determines the future, but the approximate present does not approximately determine the future.
—Edward Lorenz
CHAOS THEORY · Chaos theory states that small differences in initial conditions can yield widely diverging outcomes. For example, given enough repetitions, a butterfly flapping its wings on one side of the world might cause a hurricane on the other side. Beyond this butterfly effect, chaos theory also deals with patterns that emerge from an apparently chaotic system.
The Raisin in the Dough
A similar example would be kneading dough. In our case, the kneading process would involve rolling out the dough, then folding it in the middle, and repeating this process. Imagine putting a raisin into the dough, and then doing the rolling out and folding process ten times. Where is the raisin?
Let us say that we put the raisin at position 0.9 (0 being the left side and 1 being the right side of the dough, see step 1 in Figure 3.23). When we roll out the dough (step 2), the raisin is at 0.8 because the rolled-out dough is twice the size of the folded one. Then, we fold it in the middle (step 3). It is now at position 0.4. Rolling it out again (step 4), it lands at 0.8. Folding it again, and we are back at 0.4. Continue this process, and the raisin jumps back and forth between 0.4 and 0.8. It has reached an “attractor” position, and we can predict where it will be after any number of further repetitions.
Now, let us imagine that we made a small mistake and started instead at 0.9001. Can we reuse the result of our previous calculation to predict where this raisin will land? It will land on very similar spots for a few foldings. But after around ten foldings, the raisin will be in a very different place than it would have been had we started at 0.9000. Ultimately, only positions that are part of an attractor (see below) can be used to predict where a raisin starting at a different position will land: if we started at 0.6, 0.4, or 0.8, we can reuse the result; if we started at 0.89999 or 0.90001, after a number of foldings, we can no longer predict where the raisin will turn up without redoing our calculation.
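The roll-out-and-fold step is a classic example in chaos theory and can be modeled by the so-called tent map, which stretches the unit interval to twice its length and folds it back onto itself. The sketch below is an idealization (its intermediate bookkeeping differs slightly from the figure), but it reproduces both behaviors described above: the 0.4/0.8 attractor and the divergence of two nearly identical starting positions:

```python
def fold(x):
    """One combined roll-out-and-fold step (the tent map)."""
    return 2 * x if x < 0.5 else 2 * (1 - x)

x, y = 0.9, 0.9001  # two almost identical raisin positions
for step in range(1, 11):
    x, y = fold(x), fold(y)
    print(step, round(x, 4), round(y, 4))
# x quickly settles into the attractor 0.4, 0.8, 0.4, 0.8, ...
# while the gap between x and y roughly doubles with every folding.
```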
So, while the end result seems to be random, both the end position of the pendulum and that of the raisin can actually be predicted. But both systems are so sensitive that even the smallest change can affect the end result. In each case, it was a simple rule (the pendulum moving according to the three magnetic forces, or the folding of the dough) which, when applied many times (swings or foldings), produced a complex pattern.
When looking at the universe not as a set of particles at fixed positions, but as particles that are wiggling all the time, the “fixed points” we attribute to particles need to be replaced by attractors. Something that looks like it is staying in one place is actually moving around but ultimately returns to its original setting, only to start moving around in the same pattern again. This is what we saw with the raisin: it jumps back and forth between 0.4 and 0.8 with each folding and rolling out, so 0.4 and 0.8 are attractors. Likewise, the position right above a magnet is another (albeit simple) attractor. Contrary to what the name suggests, though, the magnets themselves are not attractors; an attractor arises from the rules that are repeatedly applied with each swing.
ATTRACTOR · Repeatedly applied rules or laws can eventually loop. In chaos theory, this kind of a loop is called an “attractor.”
In a certain way, you could look at rivers as attractors. It rains and the empty riverbed slowly fills until it reaches an equilibrium of water flowing into the riverbed and water flowing away as part of the river. The river is the stable state, the “attractor.” Likewise, rain starts out with a few droplets until it reaches a stable state of a certain amount of water falling down per minute. Similarly, you could describe the waves on the sea as a stable attractor. Sure, they move around, but the sea returns to its original state after each wave.
We see the same idea with light or sound, which we describe not by absolute positions but by their attractors, their wavelengths. For example, a note in music is not a singular event or particle; it is a pattern of air pressurizing and depressurizing back and forth. We find this idea also in computer programs that loop. Old TV sets drew the picture with a single cathode ray lighting up individual points on the screen in a certain color. The ray traced the image line by line, so fast that instead of individual lines or points, our eyes saw only a full picture.
Chaos theory explains the complexity in nature by pointing out that it is the result of repeatedly applied (simple) rules or natural laws. Stable elements within a chaotic system are called attractors. They are the result of a repeatedly applied rule that loops.
At the end of the next chapter, a picture will emerge of what thoughts are. Instead of singular states of individual neurons, you can look at them as attractors—signals swinging back and forth between a number of neurons like the magnetic pendulum hovering over one magnet.