Competence Without Comprehension
How is it done?
From Bacteria to Bach and Back - Daniel Dennett
Philosophy and Simulation - Manuel DeLanda
Sapiens - Yuval Noah Harari
A common thread among these thinkers is how cognitive capacities like comprehension can arise. Dennett deals with the topic quite directly. DeLanda explores the kinds of digital simulations required to model the things we do, like pattern matching and decision making. Harari explores how things like morality or language could emerge in a lineage that started with neither.
To put it plainly: a neuron (say) does not comprehend, but a well-functioning brain, that is, a huge network of neurons, does comprehend. A neuron may not comprehend, but stimulation causes its state to change. And this change of state occurs in a system where the change becomes stimulation for other neurons, so a cascade of stimulation and state change propagates through the system.
This process may be chaotic but it is not random. A particular neuron will only change its state (which in turn affects other neurons) if a certain set of conditions is met. This is called 'firing'. That set of conditions is shaped by evolution and experience, but I'd call the capacity to fire when a particular set of conditions is met a competence. Within the network of neurons, the firing has meaning: its functional purpose is to signal that certain conditions have been met. Other neurons receive the signal and fire in turn if the set of signals they get meets their own conditions.
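The fire-when-conditions-are-met idea can be sketched in a few lines of code. Everything here is illustrative, not a model of any real neuron: each unit is just a threshold function, and the thresholds and wiring are made up to show how a cascade of condition-triggered state changes can propagate.

```python
# A minimal sketch of "competence without comprehension": each unit
# fires only when its input conditions are met; no unit understands
# anything, yet the network as a whole discriminates stimuli.

def make_neuron(threshold):
    """Return a unit that 'fires' (returns 1) when the sum of its
    incoming signals meets its threshold, else stays silent (0)."""
    def neuron(inputs):
        return 1 if sum(inputs) >= threshold else 0
    return neuron

# A tiny two-layer cascade: the output unit fires only if both
# first-layer units fired, i.e. only when a particular pattern of
# stimulation has propagated through the network.
a = make_neuron(1)   # fires on any stimulation
b = make_neuron(2)   # fires only on stronger stimulation
c = make_neuron(2)   # fires only if both a and b fired

def cascade(stimulus):
    layer1 = [a([stimulus]), b([stimulus])]
    return c(layer1)

print(cascade(1))  # weak stimulus: b stays silent, so c does not fire -> 0
print(cascade(2))  # strong stimulus: a and b both fire, so c fires -> 1
```

The point of the sketch is only that the output unit's firing has meaning within the system (it signals "both conditions were met") even though no individual unit comprehends anything.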
A competence is the ability to do something meaningful within a system when conditions are met. This is different from a purely physical change, like ice melting to water when it is warmed.
A neuron doesn't comprehend; it just fires when conditions are met. And yet we comprehend. And there is a level of comprehension we share with other mobile animals that is not associated with being conscious. A frog snapping up a fly comprehends the fly in a way that none of its neurons do. Yet there is no thought process of "Ahhh - a black spot - that's a fly! Must snap it up!" There is simply a comprehension of what the spot means, and the response follows. And the comprehension enables a further competence: the ability to snap up a fly.
What we are seeing here is how comprehension emerges from the competences of things that do not comprehend. The idea of emergence is a meme that has evolved into many different forms, so I should be clear about what I mean here; allow me a short digression.
Let's start with a level of physical reality like subatomic particles. They have properties that cause them to combine and associate in various ways to form what we call atoms. The atoms have properties that the subatomic particles do not have; these are emergent properties. The subatomic particles and the atoms are at different levels of abstraction. Atoms produce matter in its various forms - solid, liquid, or gas - and the various chemical elements: another emergent level of abstraction. Biology emerges from chemistry, and so on.
Consider the bicycle in the context of emergence. A bicycle is made of physical stuff. Let's call the pile of parts needed to make a bicycle a level of abstraction. If we assemble the parts properly we get a new level of abstraction: the bicycle. The bicycle has properties the pile of parts did not have. You can ride a bicycle; you could not ride the pile of parts. We can say that a new property has emerged; let's call it rideability. Rideability emerges from physical reality but is not a physical thing with mass or color or charge. Yet it is real, as real as the metal the bicycle is made of.
Returning to the frog: the ability to comprehend that a fly is present emerges from the lower level of the frog's neural network. DeLanda shows how properties like pattern matching emerge from simple neural networks. He then explores what happens when you treat those simple networks as elements from which further levels of abstraction emerge, and what happens when you let such systems compete in an evolutionary environment. He explores systems that have a level of comprehension, in a virtual reality, similar to our frog's.
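DeLanda's simulations are far richer than this, but the core point - a network that matches a pattern even though no single part of it does - can be sketched with a classic single-layer perceptron. The data, learning rate, and epoch count below are illustrative choices, not taken from DeLanda:

```python
# A toy perceptron trained on a simple pattern ("fire only when both
# features are present", i.e. logical AND). No individual weight
# recognizes the pattern, yet the trained network as a whole does.

def train(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            pred = 1 if w[0]*x[0] + w[1]*x[1] + bias > 0 else 0
            err = target - pred
            # Nudge weights toward inputs it misclassified.
            w = [w[0] + lr*err*x[0], w[1] + lr*err*x[1]]
            bias += lr*err
    return w, bias

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels  = [0, 0, 0, 1]   # fire only when both inputs are present
w, b = train(samples, labels)

for x, t in zip(samples, labels):
    pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
    print(x, pred == t)   # all True after training
```

Like the frog's neural network, the trained weights embody a competence (sorting inputs correctly) without any comprehension residing in any one component.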
Both Dennett and Harari discuss our own level of comprehension as emerging not only from our brains but also from our interactions with other individuals and with social institutions.
A practical problem with this kind of analysis is that since ancient times we have thought of competence as requiring comprehension. For instance, a competent welder comprehends what they are doing; that is, the comprehension is the lower level of abstraction from which the competence emerges. But was that ever right? Who ever learned to weld from a textbook?
What do you think?