This is an excerpt from the book series Philosophy for Heroes: Continuum.
If you’re good at finding the one right answer to life’s multiple-choice questions, you’re smart. But intelligence is what you need when contemplating the leftovers in the refrigerator, trying to figure out what might go with them. Or if trying to speak a sentence that you’ve never spoken before. As Jean Piaget used to say, intelligence is what you use when you don’t know what to do, when all the standard answers are inadequate.
—William H. Calvin, How Brains Think: Evolving Intelligence, Then And Now [Calvin, 1997, p. 1]
What is the main difference between the human brain and a computer?
The main difference between a single computer and the human brain is that if you remove a piece of a computer system, it stops working. Just as adding more neurons to the brain reduces jitter, as we explained previously, the standard solution in computer science for improving reliability is to add redundancy: simply put an identical machine beside the first, to take over if the first machine happens to malfunction. In the human brain, this kind of redundancy is found everywhere.
Imagine a team of people. Each person knows his or her field very well. But when one team member gets sick, the whole project stands still because nobody else can help out. This is the computer model: take a piece out of the computer or a software program and it will crash or not even start. In the brain model, each team member knows a bit of everything. So even if someone is not available, the team can work together to finish the project. If you ask something, an individual might not know every detail, but together with the other team members, you can get the whole picture. In the same fashion, you can lose some brain cells and the ideas they were mapped to are still available to you, although perhaps not at the same level of detail. This is the holographic and redundant model of managing information.
When making a decision, you can likewise imagine the workings of the brain as a committee. Individual members of this committee (neurons) could be sick at home and the rest of the committee could still arrive at a sensible decision because they share knowledge about the subject from previous meetings. That explains why even people who have lost 90% of their cortex still can make decisions like healthy people [cf. Calvin, 1996, p. 190].
We only “hear” the results of the committee, not the missing voices or the individual agreeing or dissenting voices. If the vote is split, introspection into our own thoughts might show us that we are constantly jumping back and forth between different ideas or memories. How often this happens depends on our psychological makeup and the concentration of our neurotransmitters. This can be compared to the “committee” changing “its mind.” It constantly “votes” on which thought should “pass” and become reality in the form of an action.
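The redundancy of the committee model can be sketched in a few lines of code. This is a toy illustration, not a model of real neurons: each “member” redundantly holds an opinion, the decision is a majority vote, and even after removing most members, a decision can still be reached, echoing the observation about patients who lost large parts of their cortex.

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

def committee_vote(members):
    """Majority vote over whichever members are present."""
    return max(set(members), key=members.count)

# 100 members; the knowledge is stored redundantly, so most share one view.
committee = ["go"] * 80 + ["stay"] * 20

full_decision = committee_vote(committee)

# Remove 90% of the members at random: the vote usually survives intact,
# because no single member was the sole carrier of the "go" opinion.
survivors = random.sample(committee, 10)
reduced_decision = committee_vote(survivors)

print(full_decision, reduced_decision)
```

The point of the sketch is only that a majority decision is robust against losing individual voters, which is exactly what a system with a single central processor cannot offer.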
Of course, it can take a long time for these “committees” in our mind to reach a conclusion. As long as there are several competing thought patterns, we still have to “make up our mind” and we are still indecisive about something or plan our next actions in our mind until we feel “ready.” On top of deciding what to do, we also have to synchronize our thoughts [cf. Calvin, 1996, p. 169f].
All learning involves a process of automatizing, i.e., of first acquiring knowledge by fully conscious, focused attention and observation, then of establishing mental connections which make that knowledge automatic (instantly available as a context), thus freeing man’s mind to pursue further, more complex knowledge.
—Ayn Rand, The Romantic Manifesto [Rand, 1971, p. 36]
For routine tasks, we do not need this committee decision process. For example, over the years, we have become “expert walkers.” Some people can easily read a book while taking a stroll. Similarly, tying a tie or knotting a hairband takes a lot of attention at the beginning. But once we have learned it, it is much easier not to think about it at all. That is also the reason why it can sometimes be difficult to explain and teach things in which we are already experts [cf. Calvin, 1996, p. 117, p. 176].
Our “committees” only get activated when something unexpected happens, that is, when what we perceive differs from what we expected. In that regard, the committees are strongly connected to anything related to re-examining our existing pathways from perception to action: learning and creativity.
A popular example of the workings of the committees and their “voting” is the “Necker cube” (see Figure 4.27), an optical illusion first published in 1832 by Louis Albert Necker. It is a simple wire-frame drawing of a cube that can be interpreted as having either the lower left corner or the upper right corner pointing toward the onlooker. Objectively, it is impossible to tell which way it points, simply because you see only the two-dimensional shadow of the object. But the brain is built in such a way that it always wants to “make sense” of the world. Two-dimensional drawings being unknown to the brain, it assumes that we are in fact looking at a three-dimensional object. Hence, your “committees” will go back and forth, with you “seeing” the Necker cube first pointing to the lower left, only to see it later pointing to the upper right again. An optical illusion is a case where your committees think that you are looking at reality, but the image does not provide enough information to give you the full picture.
Take a close look at the fork in Figure 4.28. It has two or three tines depending on where you focus your eyes. Your mind wants to build a three-dimensional object out of the illustration, but your committees are having a hard time deciding how many tines it has. Objectively, the drawing does not make sense when viewed as a three-dimensional object, but your subjective mind does not care. It keeps “voting” on which kind of three-dimensional object it is rather than considering that it might just be a drawing. We are aware of this in an abstract way, but when looking at the image, we assume it to be three-dimensional. This kind of intuition can in many cases help us process the world faster instead of bumping into obstacles, but we have to be aware that the model we create of the world is not necessarily how the world really is. The three-dimensional fork with its two or three tines does not exist in reality, only in our mind.
Did you know?
Using optical illusions is a good way of studying the difference between one’s objective perception (the ability to measure the lines on the paper to find out that it is, in fact, a two-dimensional drawing), and the subjective experience of the drawing (the creation of a three-dimensional object in our awareness). In addition, we can study how our cognitive knowledge has no influence on our subjective experience of the drawing. Sure, we know in an abstract way that it is a two-dimensional drawing, but we cannot “tell” that to our subjective experience. In Continuum, we are using the example only to point out the fact that given no objective way of deciding which way the cube points, our committees are going back and forth and deciding more or less randomly, or simply based on our expectations of how the three-dimensional object should look. In the next book, we will also examine other illusions, including non-optical ones where we are subjectively sure about something, but objectively the facts tell a different story.
→ Read more in Philosophy for Heroes: Act [Lode, 2018]
While you can imagine that for a single decision, a whole number of “committees” (you!) in your brain decides which option to choose, in reality, we are often faced with a practically unlimited number of possible decisions.
While thinking about which movie to watch, part of you also thinks about whether to grab some snacks from the kitchen, the birthday card you still need to write, the itch you become aware of only when someone else scratches his head, or the sound of dripping water from a broken faucet in the kitchen. The big question is which of those “committees” has the final say, because you can do only one thing at a time.
This question becomes even more complex when we are faced not with clear, distinct options, but with an empty sheet of paper that has to be filled with a painting or a story without clear guidance. While it is easy for the “committees” to recognize a known pattern (like the face of a person we know), it is much harder to (re)construct that pattern or image, or even to invent a new one [cf. Calvin, 1996, p. 123f].
The point of the committees is that members are in constant competition. Not only do they vote according to their own views, but they also try to convince other members of the committee and recruit them for their “cause.”
The main difference between the human brain and a computer is that the brain is based on redundancy. This also affects decision making, for which there is no central processing unit like in the computer, but a series of competing decentralized “committees.”
If every member of an idea committee votes in its own interest, all members face resistance by competition. Who determines which idea wins?
To find an answer to this question, let us take a step back. When looking at humans, we find that they show creativity and problem-solving capability unparalleled in nature. Or do they? Where is the difference between an engineer solving a physical or biochemical problem, and nature solving these problems through evolution and expressed in the DNA? How do these two seemingly totally different fields—evolution in nature and human creativity—connect?
Experiences in our life create the “landscape” of our mind. In this landscape, thought patterns compete to gain dominance. In “niches,” less “popular” thoughts wait for their time of day, defending their place from other thought patterns that did not adapt to these “experience niches.” It is like a jungle, home to many different “experience organisms” that come out only when we have an experience connected with it, like a familiar smell, sound, or image.
You can imagine the different thought patterns as members of different animal species: some hide underground, some try to survive by climbing trees, some are courageous enough to live in the open field, etc. Each thought pattern has to try to survive and adapt in its environment, which consists of other thoughts and memories. As the vocabulary suggests, what is going on in the cortex very much resembles what is going on in nature, just at a much faster pace.
On top of memories, the current state of your brain and your current sensory input influence the competition between thought patterns, too. The amount of sleep you had, drugs in your system, the neurotransmitters regulating your neurons, and your genetics in general might promote certain thought patterns. These parameters can partially be summarized as your “personality.”
If you have a thought pattern in the back of your head about today’s traffic, overhearing a conversation about the traffic will strengthen that thought pattern and you will become aware of an otherwise filtered out conversation. Another input is simply the environment you are currently in or the activity you are currently pursuing. Being in the supermarket, other things will “catch your eye” than when driving your car. Even the kind of music playing in the background can have a general effect on how your mind processes thoughts.
Did you know?
Certain intuitive people like to “keep their options open” and examine a lot of different ideas before they act. Their brain chemistry decreases the speed at which thought patterns gain dominance over the others. Likewise, certain substances (like alcohol) might reduce the “vetting process” of your thought patterns, so that as-yet-unrefined thought patterns pass the barrier and enter your awareness. Along these lines, a variety of scenarios can be imagined.
→ Read more in Philosophy for Heroes: Act [Lode, 2018]
Experiences in our life (as well as our biology and our current situation to a certain degree) create the “landscape” of our mind. This landscape influences how the committees vote and which thought pattern becomes dominant. The process is comparable to evolution in nature with competing organisms within the same geographical area.
How do these committees and this competition work on a neuronal level?
First, the members of the “committee,” the neurons themselves, do not contain concepts as such. So, you will not find, e.g., a specific “apple neuron” that lights up when you see an apple and that somehow coordinates with other neurons (representing other things) what to do with it.
For simplicity, we can imagine concepts being represented in the brain by a group of neurons firing in a certain way. The structure of the brain allows these groups of neurons to “copy” their “role” onto other groups of neurons, making them fire the same way. If you imagine this process going on for a while, more and more groups of neurons copy themselves onto more and more neurons until most of the brain’s neurons are firing the same thought pattern. This thought pattern is what “wins” the competition by sheer numbers and thus a decision is made.
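The copying process described above can be sketched as a simple simulation. This is a deliberately crude model (a Moran-like copying process of my own construction, not Calvin’s exact mechanism): each “neuron” holds one of several competing patterns, and at every step a randomly chosen neuron copies the pattern of another randomly chosen neuron. Run long enough, one pattern always wins by sheer numbers.

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

# Three competing thought patterns spread over 100 "neurons".
neurons = ["A"] * 30 + ["B"] * 30 + ["C"] * 40

steps = 0
while len(set(neurons)) > 1:          # run until one pattern dominates
    src = random.randrange(len(neurons))
    dst = random.randrange(len(neurons))
    neurons[dst] = neurons[src]       # the pattern copies itself
    steps += 1

print("dominant pattern:", neurons[0], "after", steps, "copy events")
```

Note that which pattern wins is partly a matter of chance here; in the brain, as described below, the “landscape” of memories and sensory input biases the competition.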
This idea of a process of competing thoughts was originally devised by the neurologist William Calvin and is still part of ongoing research. As mentioned above, the point is that neurons are not computers in which each one contains a single concept. Instead, concepts are always mapped onto multiple neurons. The challenge is that a group of neurons must copy itself not simply randomly around the brain, but in a way that preserves its relative structure. With RNA, self-replication was done by pairing a strand with its opposite nucleotides, creating a negative copy which could then be used to create a positive copy. With stationary neurons, the situation is different. Here, we want to copy impulse sequences to other neurons. How can this be implemented?
Let us first look at the basic problem of copying one image to another location on a piece of paper. For this, we can use a tool called the pantograph (see Figure 4.29). It consists of four sticks that are pair-wise connected at their center and at their ends. You fix the pantograph at one location, lead a pointer over the image you want to copy, while a pen at the end of the pantograph draws the copy.
With neurons in the brain, you have neither a fixed point nor a pantograph. Instead, you can rely on two neurons with identical firing patterns. Their signals create an interference pattern like the one we saw in Chapter 3 with waves. Two waves will cancel each other out or increase each other’s signal strength at certain points where they meet: an interference pattern emerges. With waves, not only the location but also the signal frequency is important. So, a group of neurons needs to send individual copies with the same group frequency. Distant groups of neurons can then decipher not only the signal frequency but also the spatial structure of the original group, just as the spatial structure of the image of an “a” is copied by the pantograph.
Neurons that happen to be in a position where the firing patterns meet can be primed to repeat this firing pattern themselves, starting to send out the same pattern and subsequently recruiting other neurons as well. This way, a few neurons with an identical signal pattern can become the dominant thought pattern in a short amount of time. This constitutes the neuronal basis for natural selection within the human brain. The paper Copying and Evolution of Neuronal Topology [Fernando et al., 2008] lays out the details of how firing patterns can self-replicate.
Because of this copying mechanism, William Calvin calls the cortex of the brain a “Darwin machine,” suggesting that an evolution of thoughts can happen (the exact implementation pathways of the “Darwin machine” are still being examined; current studies, e.g., by Kozloski, show a constant loop of the signals via a number of pathways). And indeed, it fulfills the basic ideas of evolutionary progress. Thought patterns replicate themselves, possibly with some mutations. Those thought patterns that copy themselves most efficiently will eventually dominate the others. Likewise, thought patterns that are better suited to their “environment” (other thought patterns, memories, current sensory input, etc.) are more “fit” to survive in the mental landscape and will more easily fend off invading thought patterns. Some groups of neurons even become very stable attractors with a lot of “guardian” attractors nearby that lead you back into the old thought patterns. To get out of such circling and repeating thoughts (neuroses), therapy is needed, for example in the case of obsessive-compulsive disorders.
The previous explanation suggests that thoughts are simply copied and can then become the dominant thought pattern. What actually happens is that, just as in nature, changes occur. Copying is not 100% exact, so new copies are created that might fit better (or worse) into the landscape of the brain. The common “I have to sleep on it” is simply an expression of letting those thought patterns run throughout the night to reach a better (or just more refined) decision. The longer we let this process run, the more refined our thoughts might become. For example, when composing music, we iterate over the process until we can say that we (all niches of our brain) are happy with the result. It “touches all the right points” and our neurons fire happily without complaining: they do not generate new, conflicting thought patterns. The result of our thought process feels “clean.”
Concerning the length of the process, it can be compared to the cooling of materials. If we cool a material too fast, it becomes brittle: the molecules did not have time to find their energetic minimum. If we cool it too slowly, well, it might simply take too much time. When we are tired, it becomes more difficult for neurons to keep firing in order to remain the dominant thought pattern. We are then more prone to switching between thoughts and, as a result, our concentration suffers. Likewise, if we allow a thought pattern to become dominant too quickly, the resulting ideas are “brittle” and unrefined, like metal that has been thrown into water.
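The idea of copying with mutation and gradual refinement can be sketched as a tiny evolutionary loop. Everything here is hypothetical scaffolding: a made-up fitness function stands in for how well a thought “fits the landscape,” copies are imperfect, and better-fitting variants slowly displace worse ones. The longer the loop runs (“sleeping on it”), the more refined the surviving thought becomes.

```python
import random

random.seed(7)  # fixed seed so the run is repeatable

TARGET = 0.8  # hypothetical "ideal" fit within the mental landscape

def fitness(thought):
    """Stand-in for how well a thought fits: closer to TARGET is better."""
    return -abs(thought - TARGET)

# Start with 20 rough, random ideas.
population = [random.random() for _ in range(20)]

for _ in range(200):  # "letting it run overnight"
    parent = max(population, key=fitness)      # currently dominant pattern
    child = parent + random.gauss(0, 0.05)     # imperfect copy (mutation)
    worst = min(population, key=fitness)
    if fitness(child) > fitness(worst):        # better-fitting copies survive
        population[population.index(worst)] = child

best = max(population, key=fitness)
print(round(best, 2))
```

Stopping the loop after only a handful of iterations would leave a “brittle,” unrefined result, which mirrors the cooling analogy above.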
Individual neurons do not contain concepts. Instead, it is always a group of neurons that can be considered to represent a concept. The firing pattern of a group of neurons propagates through the brain in a manner similar to a wave. A group of neurons can recruit other neurons to repeat its firing pattern by having its waves of activation create an interference pattern in other groups of neurons.
How can a neuronal network like the human brain learn?
Obviously, this copying process faces resistance, as other thought patterns want to copy themselves, too. Imagine getting thousands of “cards” each day. Which ones you react to will depend on a number of conditions, like learned behavior, other or previous signals, your hormonal situation, etc. In a completely uniform brain, without memories or any learned behavior, this would lead to nothing but noise (like the noise of old analog television sets): random firing of neurons without any pattern. This is basically how a baby experiences the world. Only over time do the neurons differentiate; they “learn” by getting positive feedback from the brainstem. Learning changes the brain permanently. It affects not just the memories we can recall, but also how we process information. The parameters in the affected neurons are changed, and they will produce different signals.
Neurons learn by what is called “back propagation,” which is comparable to a bucket brigade, where items are transported by passing them from one (stationary) person to the next. Bucket brigades were used to transport water before hand-pumped fire engines existed, and they can still be seen at disaster recovery sites where machines are not available or not usable. Learning is all about reward: you want to encourage the right behavior. When fighting a fire, only the last person of the bucket brigade actually douses the fire. Should only that person get the reward?
In neuronal learning, indeed only the last neuron gets the reward and is encouraged for its behavior. But that neuron propagates the reward back from end to start, allowing the whole “bucket brigade” of neurons to get rewarded and encouraged. This simple principle is also the core of artificial neural networks that are able to recognize and conceptualize images, or even beat human players in games such as “Go.”
Imagine you go to the grocery store, pick up a bar of chocolate, and take a bite: your brain will be happy, back propagating the reward from the bite to the act of unwrapping, picking up the chocolate, and walking to the store. But wait, it tastes salty! You will be very unhappy because it did not meet your expectations. Depending on the situation, this negative experience will now back propagate into a negative rating of the shop and of the chocolate brand. The next time you visit the shop or see the chocolate brand, you will have a negative feeling about it.
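The chocolate example can be sketched as a toy reward-passing loop. To be clear about the simplification: real backpropagation in artificial neural networks propagates error gradients through weighted connections, whereas this sketch only passes a discounted share of the final reward backwards along the chain of actions, in the bucket-brigade spirit of the text. The discount factor is an assumption for illustration.

```python
# The chain of actions that led to the final reward (the tasty bite).
chain = ["walk to store", "pick up chocolate", "unwrap", "take a bite"]

reward = 1.0    # the bite tasted good; use -1.0 for the salty surprise
discount = 0.5  # each earlier step receives half of its successor's share

credit = {}
share = reward
for step in reversed(chain):  # propagate the reward from end to start
    credit[step] = share
    share *= discount

for step in chain:
    print(f"{step}: {credit[step]}")
# take a bite gets 1.0, unwrap 0.5, pick up chocolate 0.25, walk 0.125
```

Flipping the reward to a negative value spreads blame the same way, which is how the salty bar ends up tainting both the brand and the shop in the example above.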
If an action led to a reward, the neurons that led to the action are “rewarded,” and the probability of those neurons acting in a similar manner is increased. The neurons that led other neurons to act in this way are rewarded as well, and so on. The basic principle of neuronal learning is called “back propagation.”