Memory-enhanced coincidence detection and lateral inhibition
The title of this page may sound technical, but it is a
description that combines three relatively simple functions derived from the basic capabilities of
neurons and the synaptic connections between them.
When the three functions are combined and applied recursively and hierarchically, much more powerful
functionality emerges that can explain all of
afferent processing: the brain's processing of data,
whether that data comes from the senses or from within the brain itself.
This is my proposed model, and it seems to fit with the evidence and other opinions referenced on this page,
but it is not the only possible interpretation.
This is level 2
of seven in my proposed hierarchical
levels of description, as outlined in the
summary,
sitting immediately “above” the level of the “hardware” of the brain.
It provides the mechanism for the brain to create understanding and intelligence
and to make decisions and take action based on the priority of incoming stimuli,
and it is also, ultimately, the source of attention, free will and consciousness.
- Memory-enhanced coincidence detection and lateral inhibition is made up
of three different capabilities of neurons and
synapses, as well as the inherent structure of connections in the brain.
- Coincidence detection
- A neuron continuously adds up the incoming electrical signals
that arrive near the
axon hillock,
and produces an output signal at a given
time only if the sum of its inputs at that time exceeds a certain threshold.
- Incoming signals can come from many other neurons, and the single output signal can be passed
to many other neurons, becoming one of their input signals.
- Whether or not an output signal is created can also depend on the environment outside the
neuron, which is affected by neuromodulation.
- A neuron (marked C in this diagram) with two inputs (from neurons marked A and B)
can act as a coincidence detector: if both A and B generate signals at the same time,
or within a very short time of each other, neuron C may generate a signal and pass it on to other neurons.
- This is obviously an extremely simplified view, but to be able to explain
the workings of a complex neuron with potentially hundreds or even thousands of
inputs, I have created a very simple model neuron that I call an
ABCD neuron (a minimal code sketch of this behaviour is given after this list).
- Many of these ABCD neurons connected together in the correct configuration
can be used to model a real neuron, or, more likely, many neurons, because there tends
to be quite a lot of duplication in the brain.
- This diagram shows these ABCD neurons rather than real ones - see
diagram information.
- I have created a set of afferent processing examples
that show in detail how the simple functionality of ABCD neurons can be used to process incoming data;
this is a practical way of demonstrating how real neurons may work, using simple diagrams and descriptions.
- Not all coincidences will be between incoming sense data signals;
other signals from “higher up” the hierarchy coming from the other direction
via efferent connections
may also match with sense data, and act as predictors.
The afferent processing examples
also give details on how this might happen.
- My use of the word “coincidence” here is purely a scientific one.
- I do not mean a “coincidence” as the word is often used in everyday conversation: a strange, unlikely,
or even mystical connection between two or more things or events.
- A neuron that is receiving signals has no information about where the signals originate,
and they have no meaning in themselves; every signal is just the same as any other.
- It is useful to be able to describe the “coincidence” of a certain number of signals
that arrive in a neuron within a certain time frame because that is what causes it to generate and pass on a signal.
- The overall processing of a neuron can be described as coincidence detection, but clearly
other descriptions (mostly much more complicated) are possible as well.
- It is relatively easy to see how coincidence detection works to create a map of the body in the
brain [1].
- If my finger touches something, signals will arrive in my brain from a number of sensors
in my finger at the same time, activating several neurons simultaneously.
- After this has happened a few times, mutual connections between these neurons will be strengthened.
- This is the start of a map of my finger in my brain, and this can extend to all parts of my body (see the co-activation sketch after this list).
- A similar description can then be made to include other
senses [3]
(see also afferent processing example 3).
- Memory enhancement
- The strength of a synapse can change depending on its recent usage
and whether the two neurons it is connecting fire together.
- When a synapse is activated, it becomes stronger, so it is more likely to activate in the
future [1, 2].
- If it is not active over a period of time, it will become less likely
to activate, and may be
pruned altogether.
- When the two neurons on either side of a synapse fire together, or at least
very close together in time, the synapse will be strengthened. This functionality is called
Hebbian learning and is summed up in the phrase
“neurons that fire together wire together” [2].
- The behaviour of a neuron with synapses that have a form of memory is similar to that of a
memristor.
- The strength of a synapse can affect whether or not a signal is passed
from one neuron to another.
- The strength of a synapse can act as a memory of previous coincidences.
- This memory can increase the chances of correctly interpreting and even
predicting future coincidences.
- The resulting functionality can be described as memory-enhanced coincidence detection (see the synapse sketch after this list).
- Lateral inhibition
- If a neuron that is detecting a coincidence has synaptic links with other nearby neurons
(which are perhaps also detecting coincidences),
and those links have a negative or inhibitory effect on firing, then this is
the beginning of a competitive process.
- When this process is carried out over many hierarchical levels,
it results in a signal selection process that is crucial for higher-level processing.
- “Lateral” means “sideways”, and “inhibition”
means “prevention” or perhaps “hindrance”,
so this phrase describes the process of a neuron sending out signals to nearby neurons
(that may be fulfilling similar functions) to
discourage them from firing (a winner-take-all sketch of this is given after this list).
- Inhibition was first discovered by
Sir Charles Sherrington (1857-1952)
in investigations into reflex actions controlled by the spinal cord.
He called it “reciprocal control” because it enables the correct reflex to be
carried out strongly, and reciprocal (opposite-effect) competing ones to be inhibited at the
same time [4].
- It was subsequently discovered that inhibition is common throughout all areas
of the brain involved in processing sense data or initiating action.
It is the means by which the single most important piece of data is processed at any one time,
and also the means by which the single most important action is taken at any one
time [5, 6, 7].
- Lateral inhibition is the lowest-level description of the process of attention,
and also the lowest-level description of what has been called
Biased Competition Theory, although the latter involves signals from
above and below in the hierarchy as well as lateral signals.
- Inherent structure of connections
- It is important to note that a prerequisite for the three functions above is
that the necessary (potential) connections must be in place between
neurons. The connection structure of the neocortex
is built, from the coding of DNA before birth, to enable data from the senses to be processed
in a hierarchical and recursive (repetitive and cumulative) way: to spot patterns, compress
data and store it (a two-level sketch of this hierarchical application is given after this list).
- This structure seems to be readily available in the
columns of the neocortex,
but it may also be available, or able to be created, in other areas of the brain as well.
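
To make the ABCD neuron idea concrete, the following is a minimal sketch, in Python, of a two-input coincidence detector in the spirit of the description above. The class name, the threshold of two inputs and the one-unit coincidence window are illustrative assumptions, not properties taken from real neurons or from the diagrams.

```python
# A minimal sketch of a two-input coincidence detector, in the spirit of the
# ABCD neuron described above. The threshold and time window are illustrative
# assumptions, not measured properties of real neurons.

class ABCDNeuron:
    def __init__(self, threshold=2, window=1.0):
        self.threshold = threshold   # number of near-simultaneous inputs needed to fire
        self.window = window         # how close in time inputs must be to "coincide"
        self.recent_spikes = []      # arrival times of recent input signals

    def receive(self, t):
        """Register an input signal arriving at time t; fire if enough inputs coincide."""
        self.recent_spikes.append(t)
        # Keep only the inputs that arrived within the coincidence window.
        self.recent_spikes = [s for s in self.recent_spikes if t - s <= self.window]
        return len(self.recent_spikes) >= self.threshold   # True = output signal

# Neuron C fires only when inputs from A and B arrive close together in time.
c = ABCDNeuron()
print(c.receive(0.0))   # input from A alone -> False (no output)
print(c.receive(0.5))   # input from B within the window -> True (coincidence detected)
print(c.receive(5.0))   # a lone input much later -> False
```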
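
The finger example can be sketched in the same spirit: if the connection between two touch sensors is simply strengthened every time they are co-active, sensors on the same body part end up far more strongly inter-connected than unrelated ones, which is the beginning of a map [1]. The sensor names and touch events below are invented for illustration.

```python
# Illustrative sketch: the connection between two touch sensors is strengthened
# each time they are active together, so sensors on the same body part end up
# strongly inter-connected - the beginning of a body map. The sensor names and
# touch events are invented for illustration.

from collections import defaultdict
from itertools import combinations

connection_strength = defaultdict(int)   # (sensor, sensor) -> strength

# Each touch event activates a group of sensors at the same time.
touch_events = [
    {"finger_tip", "finger_side", "finger_base"},   # picking up a mug
    {"finger_tip", "finger_side"},                  # pressing a button
    {"toe_tip", "toe_side"},                        # wearing a shoe
    {"finger_tip", "toe_side"},                     # a rare, uncorrelated pairing
]

for active_sensors in touch_events:
    # Strengthen the mutual connection between every pair of co-active sensors.
    for a, b in combinations(sorted(active_sensors), 2):
        connection_strength[(a, b)] += 1

for pair, strength in sorted(connection_strength.items(), key=lambda x: -x[1]):
    print(pair, strength)
# Finger-finger pairs come out strongest, and finger-toe pairs stay weak, so
# neighbouring patches of skin are the ones that end up "wired together".
```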
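
The memory-enhancement rules described above (strengthen on co-firing, weaken with disuse, prune when never used) can be sketched as a single synapse whose weight acts as a memory of past coincidences. The numerical step sizes, the transmission cutoff and the method names are arbitrary assumptions chosen only to make the direction of each change visible.

```python
# Sketch of a synapse whose strength acts as a memory of past coincidences:
# it strengthens when the neurons on either side fire together (Hebbian
# learning), weakens with disuse, and is pruned if it falls to zero. All
# numeric values are arbitrary assumptions.

class Synapse:
    def __init__(self, weight=0.5):
        self.weight = weight
        self.pruned = False

    def update(self, pre_fired, post_fired):
        if self.pruned:
            return
        if pre_fired and post_fired:
            self.weight = min(1.0, self.weight + 0.1)   # "fire together, wire together"
        else:
            self.weight = max(0.0, self.weight - 0.02)  # gradual weakening with disuse
        if self.weight == 0.0:
            self.pruned = True                          # an unused synapse is removed

    def transmits(self, pre_fired):
        # A stronger synapse is more likely to pass a signal on; a simple
        # cutoff is used here rather than a probability.
        return (not self.pruned) and pre_fired and self.weight > 0.3

s = Synapse()
for _ in range(5):
    s.update(pre_fired=True, post_fired=True)    # repeated coincidences
print(round(s.weight, 2), s.transmits(True))     # strong synapse, passes signals on

for _ in range(60):
    s.update(pre_fired=False, post_fired=False)  # a long period of disuse
print(round(s.weight, 2), s.pruned)              # weight decays to zero and is pruned
```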
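
Lateral inhibition can be sketched as a winner-take-all step in which each coincidence-detecting neuron suppresses its neighbours, so that only the most strongly activated one passes its signal on. This is a deliberately simplified, assumed implementation of the competitive process described above (see [5, 6, 7]), not a model of real inhibitory circuitry; the activation values and the inhibition factor are invented.

```python
# Sketch of lateral inhibition as a winner-take-all competition: each neuron's
# activation is reduced in proportion to the activity of its neighbours, and
# only neurons still above threshold fire. The activation values and the
# inhibition factor are invented for illustration.

def lateral_inhibition(activations, inhibition=0.5, threshold=0.0):
    """Each neuron is inhibited in proportion to the summed activity of the others."""
    inhibited = []
    for a in activations:
        others = sum(activations) - a
        inhibited.append(a - inhibition * others)
    # With strong inhibition this tends towards a single "winner" representing
    # the most important signal at that moment.
    return [a if a > threshold else 0.0 for a in inhibited]

# Three nearby neurons detect coincidences of different strengths.
print(lateral_inhibition([0.9, 0.4, 0.3]))
# Only the strongest neuron keeps a positive value (about 0.55); the others are
# suppressed to zero - the lowest-level form of "attention" described above.
```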
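
Finally, the claim that these simple functions become much more powerful when applied recursively and hierarchically can be sketched by stacking them: level-1 units detect coincidences among groups of sense inputs, a crude form of lateral inhibition keeps only the strongest responses, and a level-2 unit detects a coincidence among the level-1 outputs. The grouping of inputs and all the numbers are assumptions made purely for illustration.

```python
# Sketch of the same simple functions applied hierarchically: level-1 units
# detect coincidences among groups of sense inputs, lateral inhibition keeps
# only the strongest responses, and a level-2 unit detects a coincidence among
# the surviving level-1 outputs. Groupings and numbers are invented.

def coincidence(inputs, threshold=2):
    """Fire (return 1) if at least `threshold` inputs are active together."""
    return 1 if sum(inputs) >= threshold else 0

def winner_take_all(outputs):
    """Crude lateral inhibition: only the strongest output(s) survive."""
    strongest = max(outputs)
    return [o if o == strongest and strongest > 0 else 0 for o in outputs]

# Level 1: three units, each watching a different group of sense inputs.
sense_inputs = [[1, 1, 0], [1, 0, 0], [1, 1, 1]]
level1 = [coincidence(group) for group in sense_inputs]   # -> [1, 0, 1]
level1 = winner_take_all(level1)                          # competition among the units

# Level 2: a single unit detecting a coincidence among the level-1 outputs.
level2 = coincidence(level1)
print(level1, level2)   # [1, 0, 1] 1 - a higher-level "pattern of patterns" is detected
```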
- [1] Livewired - David Eagleman, Canongate, 2020.
Page 34, second paragraph:
“If our neuron spikes, and then a connected neuron spikes just after that, the bond between them is strengthened. This rule can be summarized as 'neurons that fire together, wire together.'”
Note 13 on page 254:
“This rule, known as Hebb’s rule, was first proposed in 1949 [reference 2 below]. It often turns out to be slightly more complex: if neuron A fires just before neuron B, then the bond between them is strengthened; if A fires just after B, their bond is weakened. This is known as spike-timing-dependent plasticity.”
Page 34, fourth paragraph:
“How does this simple trick lead to a map of the body? Consider what happens as you bump, touch, hug, kick, hit, and pat things in the world. When you pick up a coffee mug, patches of skin on your fingers will tend to be active at the same time. When you wear a shoe, patches of skin on your foot will tend to be active at the same time. In contrast, touches on your ring finger and your little toe will tend to enjoy less correlation, because there are few situations in life when those are active at the same moment. The same is true all over your body: patches that are neighboring will tend to be co-active more than patches that are not neighboring. After interacting with the world for a while, areas of skin that happen to be co-active often will wire up next to one another, and those that are not correlated will tend to be far apart. The consequence of years of these co-activations is an atlas of neighboring areas: a map of the body. In other words, the brain contains a map of the body because of a simple rule that governs how individual brain cells make connections with one another: neurons that are active close in time to one another tend to make and maintain connections between themselves. That’s how a map of the body emerges in the darkness.”
Note 14 on page 254:
“There are also genetic tendencies that cause the map to form in certain ways; for example, the reason the head is on one end of the map and the feet on another has to do with the way the fibers attach from the body. But this book emphasizes the surprising ways that experience changes the wiring.”
- [2] The Organization of Behavior - Hebb, 1949. Viewable here, downloadable here, or see Google Scholar.
Page 62 under the heading “A Neurophysiological Postulate”: “Let us assume then that the persistence or repetition of a reverberatory activity (or 'trace') tends to induce lasting cellular changes that add to its stability. The assumption, can be precisely stated as follows: when an axon of cell A is near enough to excite a cell B and, repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.”
- [3] Time-locked multiregional retroactivation: a systems-level proposal for the neural substrates of recall and recognition - Damasio, 1989. doi: 10.1016/0010-0277(89)90005-X. Downloadable here or see Google Scholar.
This paper (which is quite difficult to make sense of) describes a theory of how neurons can store information about entities and events in “convergence zones”, and touches briefly on how these convergence zones might be created, for example page 43, last paragraph:
“The key to regionalization is the detection, by populations of neurons, of coincident or sequential spatial and temporal patterns of activity in the input neuron populations.”
- [4] In Search of Memory - Kandel, 2006, Norton & Company, USA. See Google Scholar.
Extracts from pages 70-2: “Sherrington discovered ... that not all nervous action is excitatory - that is, not all nerve cells use their presynaptic terminals to stimulate the next receiving cells in line to transmit information onward. Some cells are inhibitory; they use their terminals to stop the receiving cells from relaying information. Sherrington made this discovery while studying how different reflexes are coordinated to yield a coherent behavioral response.
... Sherrington immediately appreciated the importance of inhibition not only for coordinating reflex responses but also for increasing the stability of a response. Animals are often exposed to stimuli that may elicit contradictory reflexes. Inhibitory neurons bring about a stable, predictable, coordinated response to a particular stimulus by inhibiting all but one of those competing reflexes, a mechanism called reciprocal control. For example, extension of the leg is invariably accompanied by inhibition of flexion, and flexion of the leg is invariably accompanied by inhibition of extension. Through reciprocal control, inhibitory neurons select among competing reflexes and ensure that only one of two or even several possible responses is expressed as behavior.
... Sherrington saw reciprocal control as a general means of coordinating priorities to achieve the singleness of action and purpose required for behavior. His work on the spinal cord revealed principles of neuronal integration that were likely to underlie some of the brain’s higher cognitive decision making as well. Each perception and thought we have, each movement we make, is the outcome of a vast multitude of basically similar neural calculations.”
- [5] Top-down and bottom-up mechanisms in biasing competition in the human brain - Beck and Kastner, 2009. doi: 10.1016/j.visres.2008.07.012. Downloadable here or see Google Scholar.
Bottom of page 1 to page 2, under the heading “Multiple stimuli compete for neural representation in visual
cortex”: “...objects compete for neural representation in visual cortex. A large body of evidence from both single-cell physiology and neuroimaging suggests that multiple stimuli present at the same time within a neuron’s receptive field (RF) are not processed independently, but interact with each other in a mutually suppressive way.”
- [6] Rethinking Consciousness - Graziano, 2019, Norton & Company, USA. See Google Scholar.
Page 30-31: “...regardless of the type of information - visual, auditory, emotional, intellectual - the architecture of the cortex creates elimination rounds of competition. The increasingly selected information becomes ever more deeply processed and ever more likely to have an impact on behavior.”
- [7] A Theory of How Columns in the Neocortex Enable Learning the Structure of the World - Hawkins, Ahmad and Cui, 2017. doi: 10.3389/fncir.2017.00081. Downloadable here or see Google Scholar.
Page 12, under the heading “Role of Inhibitory Neurons”: “..neurons in mini-columns mutually inhibit each other. Specifically, neurons that are partially depolarized (in the predictive state) generate a first action potential slightly before cells that are not partially depolarized. Cells that spike first prevent other nearby cells from firing. This requires a very fast, winner-take-all type of inhibition among nearby cells, and suggests that such fast inhibitory neurons contain stimulus-related information, which is consistent with recent experiment findings.”
Page last uploaded
Sat Mar 2 02:55:43 2024 MST