Explanatory gap
The explanatory gap is a term used mostly by philosophers for the gulf between the known physical properties of the brain and personal mental experience. It is a possible starting place for this website as a whole, because it is a concept that is relatively easy to understand, and a primary aim of this website is to bridge this gap by providing evidence for why there is a gap and how it can be explained. Scientists know quite a lot about the low-level components of the brain, how they work and what they do; the explanatory gap refers to the difficulty of using this knowledge to explain how my thoughts, feelings and consciousness come about.
My conclusion on this website is that the gap exists because I am only aware of a model of my world in my own brain; I do not directly perceive the real world or my actual brain, and what I mean by “I” is actually the model of me within my brain. This is why it is impossible for “me” to internally understand what is really happening in my own brain. This website attempts to explain how the model and its components within my brain can be created and maintained, and how they can provide the internal experiences I have.
Contents of this page
- History of the term - a brief history of the term “explanatory gap”.
- Emergent features - examples of emergent features relating to the history.
- Levels of description - how multiple levels of description create emergent features that help plug the explanatory gap.
- Illusions - a review of illusions in three categories. These highlight different aspects of the explanatory gap and help to determine possible levels of description.
- Conclusions - the explanatory gap helps to determine useful levels of description for explaining the workings of the human brain.
- References - references and footnotes.
History
- The explanatory gap is a term coined by philosopher Joseph Levine in 1983 [1], although the idea was first put forward in a different form by Gottfried Leibniz as early as 1714, and is sometimes referred to as Leibniz’s gap or Leibniz’s Mill [2]. Levine’s first example concerned the difficulty of deciding whether statements relating to the cause of mental states are true, and his conclusion was that we cannot know.
- He compared two statements:
- Heat is the motion of molecules (statement A).
- Pain is the firing of C-fibres (statement B).
He explained that statement A is always true because we understand the underlying mechanism
of how the movement of molecules creates heat, and statement B may be true, but we cannot
say whether it is always true because we have no understanding of how the firing of C-fibres
creates the internal feeling of pain.
- His argument was that materialism (essentially the explanation of things using basic laws of physics) cannot explain qualia (the internal feeling of experiencing something) and therefore also cannot explain consciousness, and hence the so-called mind/body problem remains an unsolvable problem. The philosopher David Chalmers later (1995) described this as the hard problem of consciousness.
- Levine said that, relating to his statement A concerning heat, we have a good understanding
of how the higher level concept emerges from the lower level functionality, so he acknowledged that heat is an
emergent concept, but did not attempt to describe pain in the same way.
- In more recent times, I think the phrase “explanatory gap” has tended to be used in a
slightly more specific way than Levine intended.
- It usually means the gap between our understanding of the physical attributes and abilities of
the lowest level components of the brain (neurons, synapses, glia and neuromodulators) and the personal
experience of consciousness.
- It also implies that the gap is one that cannot be explained in scientific terms [3, 4].
- However, it is still sometimes used in relation to phenomena other than consciousness as a whole, such as qualia.
- If we assume materialism is the correct approach, then we must assume that pain is a concept
that emerges from the known low-level functioning of the brain.
But it is also obvious that a single explanatory step cannot connect the two.
The only reasonable conclusion is that there must be multiple levels of description, each with its own emergent concepts.
Emergent features
- An emergent feature, which can be a concept or behaviour, is something that emerges at a higher
level of description than the “micro” level.
- Relating to Levine’s statement A above, heat is in fact a very good example of an emergent concept.
- The concept of heat only exists at the “macro” or “higher” level where it is
being described, in this example at the level of a human feeling heat or
measuring heat using a thermometer.
- It does not exist at the “micro” or “lower” level of molecules.
- However, it is a very real and useful concept at the higher level,
and everyone knows what it is and what it feels like.
- As Levine pointed out, we have a good understanding of how the lower level components
and forces create the higher level concept.
- I discuss the example of heat as an emergent concept (as part of temperature) more fully
on my page on levels of description.
- Similarly, relating to Levine’s statement B above, pain must also be a good example of an emergent concept.
- The concept of pain only exists at the higher level where it is
being described, in this example at the level of the internal feeling of a human.
- It does not exist at the lower level of neurons
or synapses
(or, as in Levine’s example, nerve fibres, which are the axons of neurons).
- However, it is a very real and useful concept at the higher level.
- It is real because everyone knows what it is and what it feels like, and
when people feel pain it is very real to them.
- It is useful because it provides a warning signal of tissue or nerve damage.
- The only exception is people with a congenital insensitivity to pain.
- This disorder normally has a genetic cause, although it also affects some people with autism
(see Google Scholar search).
- Levine acknowledged that there is an explanatory gap, but he seemed to admit defeat in trying to
understand how it could be explained.
- He did not consider that there might be several intermediate
levels of description, what they might be,
or how to move from one level to the other.
- The main difference between the possible explanations of heat and pain as emergent concepts must be
the number of intermediate levels of description.
- For heat, there is (arguably) only one level of description: heat can be explained from the
statistical behaviour of the energy of a large number of molecules.
- To be able to explain pain starting from the low-level knowledge of neurons and synapses
will require several intermediate levels of description.
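The single explanatory step for heat can even be written down. As a rough sketch (my own illustration, not from Levine; the names and values are invented for the example), the kinetic theory relation <KE> = (3/2)·k_B·T lets a “temperature” be computed purely from the speeds of individual molecules - a quantity that exists only for the collection, never for any single molecule:

```python
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def temperature(speeds, mass):
    """Kinetic temperature of a gas from its molecular speeds.

    Uses the kinetic theory relation <KE> = (3/2) * k_B * T. No single
    molecule has a temperature; it emerges from the statistics of many.
    """
    mean_ke = sum(0.5 * mass * v * v for v in speeds) / len(speeds)
    return 2.0 * mean_ke / (3.0 * K_B)

# Approximate mass of an N2 molecule (kg) and a spread of plausible speeds.
mass_n2 = 4.65e-26
speeds = [random.gauss(500.0, 150.0) for _ in range(100_000)]
print(f"Emergent temperature: {temperature(speeds, mass_n2):.0f} K")
```

The higher-level concept (a single temperature in kelvin) is fully determined by, yet entirely absent from, the lower-level description (a list of molecular speeds) - which is exactly what makes heat a well-understood emergent concept.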
Levels of description
- My proposal on this website is that using multiple levels of description
is the only likely way to be able to bridge the explanatory gap.
- As discussed in more detail on my page on levels of description,
this is likely to be true for any system of sufficient complexity.
- The example of pain above shows why this is likely to be true for the brain.
- The question then is: how can possible useful levels be discovered, and how can they be described?
- On other pages on this website about levels of description and afferent processing, I have used evidence about how the brain functions correctly to help answer this question; the remainder of this page looks at clues revealed when the brain apparently does not function correctly.
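To make the idea of levels concrete, here is a deliberately simple sketch (my own toy example, not a brain model): binary addition built from nothing but NAND gates. “Adding” exists only at the top level of description; at the bottom there are only gates, just as pain presumably exists only at levels far above neurons and synapses:

```python
def nand(a: int, b: int) -> int:
    """Lowest level: the only primitive operation available."""
    return 0 if (a and b) else 1

# Middle level: familiar logic gates, described purely in terms of NAND.
def xor_gate(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def and_gate(a, b):
    return nand(nand(a, b), nand(a, b))

# Top level: "adding two bits" -- a concept that simply does not exist
# in the vocabulary of the NAND level.
def half_adder(a, b):
    return xor_gate(a, b), and_gate(a, b)  # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        assert a + b == 2 * carry + s  # arithmetic emerges from gate behaviour
```

Each level can be understood on its own terms, and each can be explained in terms of the one below it; bridging a large gap then becomes a sequence of small, explainable steps rather than one impossible leap.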
Illusions
- A potentially useful way to think about the explanatory gap and what the intermediate
levels and concepts may be is to consider so-called “illusions”,
which are things that my brain clearly “gets wrong”.
- When the brain gets something wrong, an explanation of why it is wrong is likely to reveal
something useful about the explanatory gap.
- So the hope is that this will give clues on the way my brain really works,
and suggest some intermediate levels of description.
- I, like many people, have always been fascinated by illusions, particularly optical illusions.
- The obvious reason is that, like any puzzle or joke, I enjoy being challenged by something
that initially does not make sense, but which is later resolved.
- Perhaps there is a deeper reason, which is that my brain is wired to be curious,
to seek out the unexpected and unexplained. That is basically how the brain helps the body to survive.
- My attention is automatically drawn to something unexpected,
and therefore I always become conscious of it and will try to make sense of it.
- There is nothing more unexpected or more surprising than something that appears impossible,
or something that clearly contradicts my view of reality.
- There could be an even deeper reason, which is that my brain does not expect its own
processing to be incorrect.
- My brain generally seems to expect every perception to fit its expectations.
- This is partly because of the large element of prediction
that my brain uses in every perception.
- It could also be a clue leading towards the idea that what I mean by “my brain”
is not my whole brain, but only a model within my brain of my own brain processes.
- I have expanded the normal meaning of the term “illusions” in the review below.
- The normal meaning describes when I perceive specially-crafted external things and can easily be
aware that there is an illusion going on, and in some cases (but not all), I can adjust my perspective
to correct the incorrect perception.
- I also include “illusions” that are happening all the time, but that I am not normally
aware of unless they are drawn to my attention in a special way, or explained in detail, and even then some
of them are difficult to appreciate.
- Finally, I also include “illusions” that are part of my very existence, things that
are almost essential for me to believe in.
- The conclusions from the examination of the various categories of illusions below are that
there must be representations of objects and concepts already in the brain before any perception takes place,
and that together these representations form a model of my world. This includes my perception of myself.
- Illusions can usefully be broken down into three categories:
- Illusions of the perception of things external to the brain are when I perceive something incorrectly.
Some of these errors can be corrected by learning or experience, but others remain stubbornly incorrect even when I consciously know exactly what I should be perceiving.
- Most relate to sight and some to hearing, but there are a few relating to other senses, and some which involve more than one sense.
- There are many known optical illusions that result in me seeing something different from what is actually there. Wikipedia has a list of optical illusions that currently contains around 150 well-studied examples; some of these are ambiguous images that do not really shed any light on the workings of my brain, although some do help with the understanding of how the processing of sense data works.
- There are also quite a few auditory illusions, some of which I still hear incorrectly even when I know exactly what I should be hearing.
- An example of an illusion that shows what can happen when the input from two senses is contradictory is the McGurk effect.
- A clip from a BBC Horizon programme on the subject that shows the effect can be seen here. The effect was first described in 1976 [6] and has been shown to work to some extent in quite young children [7], but has since been shown to give variable results in different people and in different circumstances [8].
- This is true even when the conscious brain knows what it should see -
the unconscious brain stubbornly interprets it as something else.
The brain combines information from different senses, and when it receives contradicting information,
it tries to resolve the situation so that it can present one unambiguous perception.
- There are also a few tactile illusions.
- Although the senses of taste and smell seem simpler in their processing than sight and hearing,
there are certainly “illusions” where the expectation takes precedence over the actual perception.
- There are illusions relating to the perception of the body, including the well-known rubber hand illusion and out-of-body experiences.
These illusions provide good evidence that my perception of any object external to the brain is decided before my consciousness has
any knowledge of it or influence on it, and that how the object is perceived is largely dependent on my
previous knowledge and experience related to that object, or something very much like it, and is largely predictive.
Therefore I conclude that there must be some representation of the perceived object already in my brain, created by previous encounters with the same object, but which may be updated by each new encounter.
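The way contradictory senses are merged into one unambiguous percept, as in the McGurk effect above, can be caricatured as a reliability-weighted average, in the spirit of ideal-observer models of cue integration. This is my own illustration, not a claim about the actual mechanism, and the numbers are invented:

```python
def combine(est_a: float, var_a: float, est_b: float, var_b: float) -> float:
    """Merge two noisy estimates, weighting each by its reliability (1/variance)."""
    weight_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    return weight_a * est_a + (1.0 - weight_a) * est_b

# Hearing suggests "ba" (coded 0.0), vision suggests "ga" (coded 1.0).
# If vision is judged the more reliable cue, the single resolved percept
# lands most of the way towards "ga" -- or, perceptually, on a blend ("da").
percept = combine(0.0, 0.4, 1.0, 0.1)
print(percept)
```

The point is only structural: one percept is constructed from several inputs before consciousness sees any of them, so the conflict itself is never experienced.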
- Illusions of perception itself are when my perceptions themselves are
incorrect. What I believe I perceive does not correspond with reality.
- I have a blind spot in each eye that blocks my vision over a specific area, but I am never aware of it in normal circumstances.
- My eyes make rapid movements, called saccades, several times a second (for example, when reading), and yet I am never aware of the thing I am looking at moving with respect to me, or of any blurring as my eyes move.
- My two eyes actually see two slightly different images, due to the spacing between them,
but I normally perceive a three-dimensional image.
However, when different images are shown to each eye at the same time, I do not see a merged image, but
my perception flips between the two images, apparently at random.
This is called binocular rivalry.
- I see colours, but in reality they do not exist; colour is solely a construction of the brain resulting from varying wavelengths of visible light. Visible light is just a tiny part of the spectrum of electromagnetic radiation.
- I hear sounds, but in reality they do not exist; there are only vibrations of air molecules, or sometimes vibrations of physical things.
- I smell and taste things, but all there is in the real world are volatile molecules that can be detected by my nose and tongue.
These illusions show that what I consider to be my perception of the outside world is in fact a constructed model or schema in my brain.
As discussed in perception, this follows from the proposition that we only recognise and perceive something via its symbol schema rather than directly via the senses; in other words, we access a model of the thing being perceived, not the thing itself.
- Illusions of cognoception are when I have an incorrect perception of my brain’s internal processes (cognoception is my invented name for my own understanding of my own brain processes). Wikipedia has long lists of introspection illusions and cognitive biases, but my examples here are rather more general:
- My innate feeling about my own attention is that I can choose
where to direct it and that it flows smoothly from one thing to another.
However, we know from research that this is not true.
- My innate feeling about my own memory is that I can store and
recall almost every detail about an experience; it is like a video recorder or tape recorder.
We know from research, but also we tend to realise ourselves, particularly as we get older, that this
is nowhere near the truth.
- I believe that my perception shows me a constantly-updating
view of the world around me, as if I were recording a high-definition three-dimensional video.
This, of course, is nonsense.
- I believe I have free will and can make free choices and
have full control over what actions I take. This is not completely true.
The explanatory gap here is between my “natural instincts” about how my brain works and the reality.
Drawing a parallel with my conclusion above that my perception of the outside world is via a constructed model
or schema in my brain, my conclusion from these illusions is that I perceive myself via a model or schema
of myself. In both cases, the schemas are not necessarily complete or correct, so my perceptions are not
necessarily complete or correct. Also in both cases, a large part of the perception is down to prediction.
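The role of prediction in both kinds of perception can be caricatured in a few lines (my own toy sketch, loosely in the spirit of predictive-processing models; the gain parameter and the numbers are invented, not measured):

```python
def perceive(prediction: float, sensory: float, gain: float) -> float:
    """A percept built from a prediction plus a weighted prediction error.

    gain near 1.0: the percept tracks the senses.
    gain near 0.0: the percept tracks the expectation -- the regime in
    which the illusions described above can take hold.
    """
    error = sensory - prediction
    return prediction + gain * error

expected, actual = 10.0, 4.0
print(perceive(expected, actual, gain=0.9))  # dominated by the senses
print(perceive(expected, actual, gain=0.1))  # dominated by the prediction
```

Whether the “input” is sense data about the world or internal data about my own brain processes, the same structure applies: what reaches awareness is the model’s prediction, corrected (or not) by evidence.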
Conclusions
- An examination of the explanatory gap in our knowledge of the workings of the human brain suggests that there need to be several levels of description in any useful explanation of the human brain.
- Reviewing illusions as examples of the explanatory gap suggests some ideas about how the human brain processes data and creates a perception, not only of external things, but of the self as well.
- It suggests that there must be structures in the brain that represent concepts, and that these are used predictively to recognise objects before any consciousness is involved.
- It also suggests that similar processing and prediction must be involved in the perception of the self.
- So there is a very clear argument that these representative structures should feature as an emergent concept from a level of description in any explanation of the working of the human brain.
References
[1] Materialism and qualia: the explanatory gap - Levine 1983. doi: 10.1111/j.1468-0114.1983.tb00207.x
Page 357, end of third paragraph: “...for instance, if penetration of the skin by a sharp metallic object excites certain nerve endings, which in turn excites the C-fibres, which then causes certain avoidance mechanisms to go into effect, the causal role of pain has been explained... However, there is more to our concept of pain than its causal role, there is its qualitative character, how it feels, and what is left unexplained by the discovery of C-fiber firing is why pain should feel the way it does!”
[2] Leibniz’s Mill Argument Against Mechanical Materialism Revisited - Lodge 2014. doi: 10.3998/ergo.12405314.0001.003
A detailed discussion of the thoughts of Leibniz in 1714, who proposed that it would never be possible to bridge the explanatory gap using physical explanations (materialism), although Leibniz clearly already believed that perception, sensation and thought, along with the soul, are immaterial with non-physical causes. However, the author does not actually say whether he agrees.
[3] The puzzle of conscious experience - Chalmers 1995 (updated 2002)
Page 96, first column, third paragraph: “Of course, neuroscience is not irrelevant to the study of consciousness. For one, it may be able to reveal the nature of the neural correlate of consciousness - the brain processes most directly associated with conscious experience. It may even give a detailed correspondence between specific processes in the brain and related components of experience. But until we know why these processes give rise to conscious experience at all, we will not have crossed what philosopher Joseph Levine has called the explanatory gap between physical processes and consciousness.”
[4] Consciousness explained or described? - Schurger and Graziano 2022. doi: 10.1093/nc/niac00
Page 2, second paragraph: “The idea of the explanatory gap is that, while it is possible to come up with laws of consciousness, a true scientific theory of consciousness is not possible.”
[5] Knowledge in perception and illusion - Gregory 1997. doi: 10.1098/rstb.1997.0095
See page 2 under the heading “2. The hollow face”. This paper explains this type of illusion by assuming that there is a difference in level between “perceptual knowledge” and “conceptual knowledge”. Although it acknowledges that initial perception is done unconsciously, it does not seem to make a connection between “perceptual knowledge” being subconscious and “conceptual knowledge” being conscious.
[6] Hearing lips and seeing voices - McGurk and MacDonald 1976. doi: 10.1038/264746a0
This is the paper that first reported what came to be known as the McGurk effect. From the beginning of the abstract:
“Most verbal communication occurs in contexts where the listener can see the speaker as well as hear him. However, speech perception is normally regarded as a purely auditory process. The study reported here demonstrates a previously unrecognised influence of vision upon speech perception. It stems from an observation that, on being shown a film of a young woman's talking head, in which repeated utterances of the syllable [ba] had been dubbed on to lip movements for [ga], normal adults reported hearing [da]. With the reverse dubbing process, a majority reported hearing [bagba] or [gaba]. When these subjects listened to the soundtrack from the film, without visual input, or when they watched untreated film, they reported the syllables accurately as repetitions of [ba] or [ga]. Subsequent replications confirm the reliability of these findings; they have important implications for the understanding of speech perception.”
[7] The McGurk effect in infants - Rosenblum, Schmuckler and Johnson 1997. Perception & Psychophysics, 59, 347-357
This paper shows that the McGurk effect can be seen in children as young as five months old, although the results are obviously not as clear-cut as with adults, who can describe what they hear. Since children this young do not have a fully developed sense of self, and therefore would probably not be able to report on what they heard even if they could communicate, I think these results have to be treated with care.
[8] Forty Years After Hearing Lips and Seeing Voices - the McGurk Effect Revisited - Alsius, Pare and Munhall 2017. doi: 10.1163/22134808-00002565
Second sentence of abstract on page 1: “Despite the well-established practice of using the McGurk illusion as a tool for studying the mechanisms underlying audiovisual speech integration, the magnitude of the illusion varies enormously across studies.”
Page last uploaded Sat Mar 2 02:55:43 2024 MST