Model of my world
The end result of the afferent processing of data,
from internal and external senses as well as from inside my brain, is a very large number of symbol schemas,
each of which represents a concept, and all the connections between them.
The total of these symbols and their connections forms a model of my world in my brain, which includes my body and my brain itself.
The model has an emergent architecture that is mathematically fractal and chaotic.
This means that the behaviour of the brain is deterministic (the future is in principle determined by the present state),
but not predictable in practice.
This concept is level 5 in my proposed
hierarchical levels
and the highest of the four afferent processing levels.
All higher-level functions in
levels 6 and 7 are dependent on this model and its contents.
Contents of this page
- Overview - a brief summary of my views.
- The science - a summary of how other writers have referred to a model of the world in the brain.
- Details - details of my proposals.
- References - references and footnotes.
Overview
- The model of my world in my brain is made up of all the individual
symbol schemas and the many connections between them.
- This includes everything I have ever come across in the external world, as long as I have
retained it as a memory, whether or not it is consciously available.
- It also includes everything in my “internal” world: my thoughts, dreams, plans, imaginations,
and the schemas of my brain processes, as well as my self symbol schema.
- It includes fictional characters and situations, even physically impossible things and events.
- The connections between symbol schemas represent relationships between the things that are
represented in the model, but specific types of relationship are themselves represented by their own schemas.
- Whenever I perceive anything, whether it is something in the outside world, something in my body,
a process in my brain, or even myself, I am actually perceiving the contents of the model, not the
reality8.
- My model is used for prediction, so that I can perceive things that are partially hidden,
and in all circumstances my brain can make a best guess before I even finish sensing something.
- I cannot perceive something if it is not in my model; if something new arrives in my senses,
the model has to be updated with it before I can be aware of it.
- Despite all this, in my moment-to-moment existence, I am not even aware that I have a model,
what it includes, or what it doesn’t include.
- The model is totally unique to me; its contents are different from anyone
else's1,
because they pertain solely to my world, but the overall architecture and functionality are the same in all adult humans.
- The model is made up of everything from
level 4 in my proposed
hierarchical levels, but as a whole it possesses
emergent architectural properties that do not exist at level 4, such as being mathematically fractal and chaotic.
- These properties emerge from the lower-level balance of excitatory and inhibitory connections
between neurons, and therefore also between symbol schemas.
- The resulting behaviour is that a very small perturbation can cause one active symbol schema
to be deactivated and another to be activated.
- The model is also a
non-linear
dynamical system and is
poised on the edge of chaos.
- In the language of chaos theory,
the activation of any symbol schema is a stable attractor,
which may be a
strange attractor
or, more likely, a strange non-chaotic attractor.
- The implication is that the behaviour of the brain is deterministic, but not necessarily predictable,
which is relevant to free will.
The science
- Many people from a number of different scientific disciplines have over many years proposed that the brain contains
a model of the world. In recent years there have been a number of attempts to describe the architecture and usage of the model
in scientific or mathematical terms. This section is a brief history of some of the relevant areas.
- Proposals have often come hand-in-hand with the science of prediction,
because any prediction by the brain requires that a model is held within the brain.
- Early theories did not provide any details of the make-up of the model or how it is built or used;
more modern theories offer more detail on the architecture and mathematical properties, but most still
give few clues about its construction or actual use.
- One of the first scientists to specifically propose that the brain contains a model of the world was
Hermann von Helmholtz,
who was also involved in making early proposals about prediction in the brain (see prediction - Helmholtz).
- In 1867, Helmholtz said in rather general terms that the brain creates models of objects
in the outside world, and that these models are symbols that represent things in the real
world2.
- As evidence for this, he gave many examples of
illusions3
that show that the brain can sometimes predict incorrectly based on what it thinks it senses,
and said that this shows that, to be able to make these predictions, the brain must already have a model.
He also pointed out that, even when we know what we should be perceiving, there are some illusions that we cannot correct,
but others that we can.
- He also suggested that the models are built from our experience of sensing
objects4.
- The British psychologist Richard Gregory
published several papers in the 1960s and 1970s that took up some of the ideas of Helmholtz concerning prediction and its use of a model of the world
(see prediction - Gregory).
- Like Helmholtz, he described a number of illusions of the senses, and described in some detail the advantages
of the brain having a model of the world5.
- In 1943, philosopher Kenneth Craik
proposed that thinking is based on an internal model of the world, and explained that a model has a similar structure of relationships to the things
in the real world that it is modelling6.
- The philosopher and psychologist Philip Johnson-Laird
picked up this idea in a book published in 1983 and pointed out that the models do not have to be complete or wholly accurate to be useful,
and that a model of the self can provide a sense of self-identity and
continuity7.
- In 1970, a formal proof was published that showed that every controller or regulator of a complex system must become a model of that
system10.
- The authors argued that this must apply to the brain: as a regulator of its body and environment, the brain must contain a model of them.
- In the man-made field of engineering this rule says that every controller or regulator of a complex system
must have (rather than must be) a model of the system, and is known as the
good regulator theorem.
- The aphorism written in 1976 by the statistician
George Box
“all models are wrong”
is very relevant to this discussion, but is a truism.
- A model, by definition, is not the real thing; it lacks some detail compared to the thing it is modelling,
perhaps in scale and/or physical form, or in other ways.
- So a model is never 100% “correct” in all respects, where “correct” means a faithful reproduction of the thing it is modelling.
- The complete quotation from George Box is “All models are wrong, but some are useful”.
This applies to the model of my world in my brain because it certainly is useful to me, but it also means that it is worthwhile
proposing a model of something as complex as the brain, because it might be useful, even if it is “wrong”.
- In the field of psychology, the concepts of models,
schemas and
isomorphisms have been used for many years.
- A schema in psychology can be any pattern in the brain relating to a specific thought or behaviour, but I use the term in
a more specific way in the phrase symbol schema.
- In mathematics, an isomorphism is a
precise, reversible mapping between two structures; in psychology, it is the relationship between a stimulus
and its representation in the brain. But since “all models are wrong” (see above),
a model in the brain is not a mathematical isomorphism of the thing it represents, because the relationship is not reversible.
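The non-reversibility point can be shown with a toy example (my own illustration, not one from the psychology literature; the stimulus names are arbitrary): a lossy, many-to-one mapping cannot be uniquely inverted, so it is not an isomorphism.

```python
# Toy illustration (hypothetical example): a many-to-one mapping from
# stimuli to representations loses information, so it cannot be reversed -
# it is not an isomorphism in the mathematical sense.

representation = {"oak": "tree", "elm": "tree", "rose": "flower"}

def invert(mapping):
    """Try to reverse the mapping; collisions make the inverse ambiguous."""
    inverse = {}
    for stimulus, rep in mapping.items():
        inverse.setdefault(rep, []).append(stimulus)
    return inverse

inverse = invert(representation)
# "tree" has two possible pre-images, so no unique stimulus can be recovered.
assert sorted(inverse["tree"]) == ["elm", "oak"]
assert inverse["flower"] == ["rose"]
```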
- In biology, the study of homeostasis,
the monitoring and maintenance of the state of the body to keep it alive, and
allostasis,
the methods an animal uses to proactively manage homeostasis, both require a model of the body to be maintained in the
brain11.
It can also be argued that a model is required for any
action12.
- In information theory,
an area of research that was largely spurred by the development of computers from the 1940s onwards,
data compression is an important area of study.
It has been proposed that the brain carries out compression, and it is compressed data that is stored in a model of the world in the brain.
A corollary of this is that a compressed version of the self can provide
self-awareness13.
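As a concrete sketch of how prediction enables compression (a minimal toy example of my own, not the scheme proposed in the cited work), a signal can be stored as the errors of a simple "predict the previous value" model: smooth inputs yield many small residuals, which compress well, and the original is recoverable exactly.

```python
# A minimal sketch of prediction-based compression: store only the errors
# of a "predict the previous value" model instead of the raw signal.

def encode(signal):
    """Delta-encode: keep the first value, then the prediction errors."""
    residuals = [signal[0]]
    for prev, cur in zip(signal, signal[1:]):
        residuals.append(cur - prev)   # error of predicting "same as before"
    return residuals

def decode(residuals):
    """Reconstruct the signal exactly from the stored errors."""
    signal = [residuals[0]]
    for r in residuals[1:]:
        signal.append(signal[-1] + r)
    return signal

smooth = [10, 11, 12, 12, 13, 14]
coded = encode(smooth)            # [10, 1, 1, 0, 1, 1]: small values, easy to compress
assert decode(coded) == smooth    # lossless: model plus errors regenerate the input
```

The point is only that a predictive model plus its stored errors is enough to regenerate the input; real neural compression would be lossy and far more elaborate.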
- Predictive Processing
is a relatively recent theory that describes the processing of the brain from the viewpoint of prediction.
- It inevitably therefore requires a model of the world in the brain in order to drive the predictions.
- It describes the model as generative, meaning that it is capable of generating or reproducing the input that produced it.
- A lot more detail is given in prediction - Predictive Processing.
- When incoming data from the senses does not match the model, a prediction error arises, and the model is
then either updated, or an action is taken to change the incoming data so that it does match.
- The proposal is that the brain aims to minimise the prediction error, and a perception only takes place
when it is minimised. This is the same as saying that a perception only takes place when the model matches the incoming data.
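The error-minimisation loop can be sketched as follows (a toy illustration of my own, not the full Predictive Processing mathematics; the learning rate and threshold are invented parameters): the model's estimate is repeatedly nudged toward the observation until the prediction error falls below a threshold, at which point a "perception" occurs.

```python
# Toy sketch of prediction-error minimisation (hypothetical parameters):
# nudge the model's estimate toward the observation until the error is
# small enough to count as a perception.

def perceive(model_estimate, observation, learning_rate=0.5, threshold=0.01):
    steps = 0
    while abs(observation - model_estimate) > threshold:
        error = observation - model_estimate        # prediction error
        model_estimate += learning_rate * error     # update the model
        steps += 1
    return model_estimate, steps

estimate, steps = perceive(model_estimate=0.0, observation=1.0)
assert abs(estimate - 1.0) <= 0.01   # the model now matches the incoming data
```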
- It has been proposed that the brain has an overall architecture that is fractal, so that the structure
looks similar at different levels, and that its functionality is on the
edge of chaos, in the mathematical sense.
- The structure of the brain has been described as fractal at many different
levels, and it has been said that chaos plays an important role in these many
levels14.
- Chaotic behaviour is a feature of complex systems, and the brain is most certainly a complex
system. Just as a weather forecast can never be 100% accurate because the forces that drive the weather are chaotic,
so the outcomes of complex interactions in the brain may be
uncertain15.
Chaos theory says that
tiny initial changes can be amplified in non-linear systems to result in very large changes.
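The amplification of tiny initial changes can be demonstrated with the logistic map, a standard textbook chaotic system (used here purely as a mathematical illustration, not as a model of neurons):

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r*x*(1-x) at r = 4.0, a classic chaotic regime.

def logistic_trajectory(x, r=4.0, steps=50):
    out = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)           # perturbed by one part in 200,000
assert abs(a[1] - b[1]) < 1e-4              # nearly identical at first...
divergence = max(abs(x - y) for x, y in zip(a, b))
assert divergence > 0.5                     # ...but wildly different later on
```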
- However, because the brain has a balance of excitation and inhibition
(see Scholarpedia: Balance of excitation and inhibition),
it has been suggested that this causes the brain to function
on the edge of
chaos16,18.
- It is not currently known how the brain maintains this balance, although there is some recent evidence that it slowly goes
out of balance during the day and is restored by
sleep19.
- A model of human thought has been portrayed as a non-linear system with many
attractors, where each
attractor relates to the synchronised firing of a group of neurons (a symbol schema in my terminology).
Because the linkages are dependent on a balance of many mutually excitatory and mutually inhibitory connections,
a very small perturbation can change the circuits that are
active17.
This is described, in the language of chaos theory, as a winnerless competition between
attractors20.
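A toy two-unit network (a hypothetical sketch of my own, far simpler than the cited models; the weights are invented) shows how self-excitation plus mutual inhibition makes the outcome hinge on a tiny perturbation: a 0.2% difference in input drive decides which "symbol schema" ends up fully active and which is silenced.

```python
# Two competing units, each with self-excitation (0.5) and mutual
# inhibition (1.0). Activity is clipped to [0, 1]. A tiny difference
# in input drive is amplified until one unit wins outright.

def clip(v):
    return min(1.0, max(0.0, v))

def compete(drive_a, drive_b, steps=200):
    a = b = 0.0
    for _ in range(steps):
        a, b = (clip(drive_a + 0.5 * a - 1.0 * b),   # unit A's update
                clip(drive_b + 0.5 * b - 1.0 * a))   # unit B's update
    return a, b

# A 0.2% difference in drive flips the winner completely.
assert compete(0.5, 0.501) == (0.0, 1.0)   # B wins, A is silenced
assert compete(0.501, 0.5) == (1.0, 0.0)   # A wins, B is silenced
```

With exactly equal drives the two units settle into a tie; any perturbation breaks the symmetry, which is the "winnerless competition" flavour of the dynamics.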
- When this balance between excitation and inhibition is lost,
it is thought that this can be a factor in causing some epileptic
fits21,
which are obviously a major event in the brain, not just a minor change in behaviour.
More specifically, an imbalance of too much excitation and too little inhibition seems strongly associated with epilepsy,
but its possible causes are many and varied.
- In a number of ways, the model in the brain is similar to a
scientific model,
a model that scientists create to try to understand a complex system, which can be mathematical or purely descriptive.
- It is built based only on evidence of the senses, sometimes extended by instruments.
- It is analysed for patterns or coincidences.
- Any patterns or coincidences are used to compress the data, to obtain generalisations,
or abstractions, of the concepts.
- These abstractions are the equivalent, for scientists, of theories, rules or laws.
- Humans are unique (as far as we know) because we also invent words to describe the concepts.
- Words are simply metaphors, or symbols, that stand for the concept (see language).
- Philip Johnson-Laird takes this same point further to reach a mind-boggling
conclusion9.
- Models are always less complete and less accurate than the real thing that they are modelling.
- These proposals on this page suggest that my brain builds a model of the world in order to perceive and understand it,
and therefore to provide a survival advantage, and this model includes the self.
- Therefore the real world and the self are both more complex than the models of them in the brain.
- These proposals are themselves a model of how the brain builds a model.
- Therefore the real way that the brain builds a model is more complex than the model proposed here.
- Assuming I understand this model and what it represents, then my understanding of the model
must be more complex than the model itself, and my brain must be more complex than the model in order to understand it.
- This has parallels with my self-awareness, which is a simplified
understanding of my self.
Details of my proposals
- The model of my world in my brain is made up of many thousands of symbol schemas,
each consisting of many hundreds or even thousands of neurons all over the brain, each joined by many thousands of synapse connections.
- Any one neuron may potentially be a part of hundreds or even thousands of different symbol schemas that make up the model.
- There are symbol schemas for all concrete and abstract concepts, including ones that represent relationships,
so the connections between symbol schemas have no special meaning in themselves; only the symbol schemas potentially have meaning.
- For example, when I think of the word “spin”, my self symbol schema
connects to my symbol schema for the concept of the word “spin”, which then connects to various visual images I have for that word,
including a frisbee spinning, perhaps a roundabout, a ballet dancer, the spelling of the word, the sound of the word as it is spoken, and so on.
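The "spin" example can be pictured as spreading activation over a graph of connections (a hypothetical sketch; the node names are simply the examples above, and real schemas would be vastly larger):

```python
# Symbol schemas as nodes in a graph; activation spreads along the
# connections from the self symbol schema outwards.

connections = {
    "self": ["spin"],
    "spin": ["frisbee", "roundabout", "ballet dancer", "spelling", "sound"],
}

def activate(start, graph):
    """Collect every schema reachable from the starting schema."""
    active, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node not in active:
            active.add(node)
            frontier.extend(graph.get(node, []))
    return active

assert "frisbee" in activate("self", connections)   # thinking of "spin" reaches the images
```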
- The model also includes symbol schemas for actions.
- Action schemas will have efferent connections to motor neurons that can carry out those actions, so that when I think of doing something,
I can either actually do it, or think of doing it without actually moving, but the two feel very similar.
- My model also includes my body, and the processes of my brain, but, as with all other symbol schemas,
these are compressed and simplified models.
- As the quotation from George Box above makes clear, the model is “wrong” in the sense that
it is nowhere near 100% accurate, but it is as useful as it needs to be.
- This does explain why I sometimes make silly mistakes: I can hit my thumb with a hammer, I can trip over a raised paving slab,
I can miscalculate, I can misunderstand, I can get my words wrong, and so on.
- However, without a model of my world, I would be incapable of making even the simplest decisions.
- Each symbol schema has many efferent connections back towards sensory neurons that are,
in effect, a historic record of how the symbol schema was created and updated, and which are used by
reinstatement to give meaning to the symbol schema.
- For representations of concrete concepts, this will involve the activation of sensory neurons;
in other cases, such as abstract concepts, it will involve connections to neurons in other symbol schemas.
- This is my interpretation of the meaning of the word “generative” as applied to a model in the
Predictive Processing theory described above.
- Not all of these connections are available to conscious attention,
that is to say they do not have direct efferent connections from my self symbol schema,
but all symbol schemas must once have had a connection from my self symbol schema, so links can usually be recreated.
- This may explain how I can sometimes recall what seems to be a long-forgotten memory.
- The model of my world in my brain has an architecture that emerges from the connections between neurons and symbol schemas.
- The way my thoughts, both conscious and unconscious, flow from one thing to another is totally dependent
on the architecture of the model of my world in my brain, and in particular the nature of the connections between symbol schemas.
- There is a balance of excitatory and inhibitory connections between neurons and between symbol schemas,
so a small perturbation is capable of stopping the activation of one symbol schema and starting the activation of another.
- I am able to keep my attention on one thing for many seconds if required, but I can
very quickly change my attention to something else, and I can easily swap rapidly between two things, or sometimes even more.
- Obviously I am not aware of my subconscious thoughts, but from what occasionally pops into my consciousness,
apparently without my influence, I suspect that subconscious thoughts move in similar ways, and I assume that only some of them
become conscious.
- My proposals on this website are that symbol schemas represent concepts, a symbol schema activation is what I know
of as a thought, and that a symbol schema only comes into my consciousness when the process of attention connects it to my self symbol schema.
- Thought, then, is the flow of one or more symbol schemas being activated at the same time, each one potentially
activating others in turn. Subconscious thoughts are fleeting and short-lived; conscious ones last longer because they consist of larger
self-sustaining oscillating networks.
- This suggests that the structure of the model of my world can be described as a set of “attractors”,
each of which is a symbol schema, and each of which is a stable state while it is active.
- But when another symbol schema becomes active, especially one that is logically closer
(meaning not physically closer, but more highly connected), the many inhibitory connections can cause the first
symbol schema to be deactivated very quickly, and the rival to take over, and perhaps become conscious.
- This description seems very similar to chaotic behaviour as described in the science section above.
- The activation of a symbol schema can be described, in the language of
chaos theory,
as the finding of an attractor position.
- It is chaotic and non-linear, but deterministic: the present state of the model determines the future state,
yet the future state is not necessarily predictable from the current state.
- Since the activation of a symbol schema can be a semi-stable configuration, and the architecture is fractal in nature,
this may mean that symbol schemas are either
strange attractors or, more likely,
strange non-chaotic attractors.
References
1. The brain from inside out - Gyorgy Buzsaki 2019, Oxford University Press
See also The brain from inside out
doi: 10.1093/oso/9780190905385.001.0001
or see GoogleScholar.
Page 83, second paragraph of chapter summary:
“Thus the brain builds a simplified, customized model of the world by encoding the relationships of events to each other. These aspects of model building are uniquely different from brain to brain.”
2. Treatise on Physiological Optics, Volume III - Hermann von Helmholtz 1867, translated from German by James P. C. Southall 1925
downloadable here.
Page 23:
“The idea of a single individual table which I carry in my mind is correct and exact, provided I can deduce from it correctly the precise sensations I shall have when my eye and my hand are brought into this or that definite relation with respect to the table. Any other sort of similarity between such an idea and the body about which the idea exists, I do not know how to conceive. One is the mental symbol of the other.”
3. Ibid. Treatise on Physiological Optics, Volume III
Examples of illusions include:
Pages 3-4 - phantom limb:
“The most remarkable and astonishing cases of illusions of this sort are those in which the peripheral area of this particular portion of the skin is actually no longer in existence, as, for example, in case of a person whose leg has been amputated. For a long time after the operation the patient frequently imagines he has vivid sensations in the foot that has been severed. He feels exactly the places that ache on one toe or the other.”;
Page 12 - a distant light:
“when a distant light, for example, is taken for a near one, or vice versa. Suddenly it dawns on us what it is, and immediately, under the influence of the correct comprehension, the correct perceptual image also is developed in its full intensity. Then we are unable to revert to the previous imperfect apperception.”;
Pages 192-193 - horizontal and vertical stripes:
“There are numerous illustrations of the same effect in everyday life. An empty room looks smaller than one that is furnished; and a wall covered with a paper-pattern looks larger than one painted uniformly in one colour. Ladies frocks with cross stripes on them make the figure look taller.”;
Pages 195-196 - Hering illusion and
Zollner illusion;
Page 283 and pages 291-2 - the moon on the horizon.
4. Ibid. Treatise on Physiological Optics, Volume III
Page 31:
“We explain the table as having existence independent of our observation, because at any moment we like, simply by assuming the proper position with respect to it, we can observe it. The essential thing in this process is just this principle of experimentation. Spontaneously and by our own power, we vary some of the conditions under which the object has been perceived. We know that the changes thus produced in the way that objects look depend solely on the movements we have executed. ...
In fact we see children also experimenting with objects in this way. They turn them constantly round and round, and touch them with the hands and the mouth, doing the same things over and over again day after day with the same objects, until their forms are impressed on them; in other words, until they get the various visual and tactile impressions made by observing and feeling the same object on various sides.”
5. Perceptual illusions and brain models - Gregory 1968
doi: 10.1098/rspb.1968.0071 downloadable
here or see
GoogleScholar.
(All papers of Richard Gregory are available at Richard Gregory - papers)
Page 6, from sixth paragraph of left-hand column:
“Perception seems, then, to be a matter of 'looking up' stored information of objects, and how they behave in various situations. Such systems have great advantages. ... Systems which control their output directly from currently available input information have serious limitations. In biological terms, these would be essentially reflex systems. Some of the advantages of using input information to select stored data for controlling behaviour, in situations which are not unique to the system, are as follows:
1. In typical situations they can achieve high performance with limited information transmission rate. It is estimated that human transmission rate is only about 15 bits/second. They gain results because perception of objects - which are redundant - requires identification of only certain key features of each object.
2. They are essentially predictive. In typical circumstances, reaction-time is cut to zero.
3. They can continue to function in the temporary absence of input; this increases reliability and allows trial selection of alternative inputs.
4. They can function appropriately to object-characteristics which are not signalled directly to the sensory system. This is generally true of vision, for the image is trivial unless used to 'read' non-optical characteristics of objects.
5. They give effective gain in signal/noise ratio, since not all aspects of the model have to be separately selected on the available data, when the model has redundancy. Provided the
model is appropriate, very little input information can serve to give adequate perception and control.
There is, however, one disadvantage of 'internal model' look-up systems, which appears inevitably when the selected stored data are out of date or otherwise inappropriate. We may with some
confidence attribute perceptual illusions to selection of an inappropriate model, or to mis-scaling of the most appropriate available model.”
6. The Nature of Explanation - Kenneth Craik, Cambridge University Press 1943 or see
GoogleScholar.
See also The Nature of Explanation (a review).
In chapter 5 entitled “Hypothesis on the nature of thought”, page 51, second paragraph, to page 52:
“By a model we thus mean any physical or chemical system, which has a similar relation-structure to that of the process it imitates. By 'relation-structure' I do not mean some obscure non-physical entity which attends the model, but the fact that it is a physical working model which works in the same way as the process it parallels, in the aspects under consideration at any moment. Thus, the model need not resemble the real object pictorially; Kelvin’s tide predictor, which consists of a number of pulleys on levers, does not resemble a tide in appearance, but it works in the same way in certain essential respects - it combines oscillations of various frequencies so as to produce an oscillation which closely resembles in amplitude at each moment the variation in tide level at any place.”
Page 57, fifth paragraph:
“My hypothesis then is that thought models, or parallels, reality - that its essential feature is not 'the mind', 'the self', 'sense-data' nor propositions but symbolism, and that this symbolism is largely of the same kind as that which is familiar to us in mechanical devices which aid thought and calculation.”
Page 61, second line:
“If the organism carries a 'small-scale model' of external reality and of its own possible actions within its head, it is able to try out various alternatives, conclude which is the best of them, react to future situations before they arise, utilise the knowledge of past events in dealing with the present and future, and in every way to react in a much fuller, safer, and more competent manner to the emergencies which face it. Most of the greatest advances of modern technology have been instruments which extended the scope of our sense-organs, our brains or our limbs. Such are telescopes and microscopes, wireless, calculating machines, typewriters, motor cars, ships and aeroplanes. Is it not possible, therefore, that our brains themselves utilise comparable mechanisms to achieve the same ends and that these mechanisms can parallel phenomena in the external world as a calculating machine can parallel the development of strains in a bridge?”
7. Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness - Philip Johnson-Laird, Cambridge University Press 1983
This book has some helpful and prescient statements about the nature of mental models, and how we can understand them, but also has a lot of less useful detail on the possible methods of processing of logic and language. The general conclusion is that brains do not contain logic or language processing modules, but that they build models of the world and manipulate them to emulate the world. It proposes that there are different types of representations for language, objects in the world and images, and that many common relational concepts are innate. It is non-committal on whether meaning is in the mind or resides in the world and has no useful information on how meaning is represented.
Page x (part of prologue), end of second paragraph to beginning of third:
“...human beings construct mental models of their world... This idea is not new. Many years ago Kenneth Craik (1943) [see reference above] proposed that thinking is the manipulation of internal representations of the world.”
Page 474, first paragraph:
“Moreover, models need be neither complete nor wholly accurate to be useful; and what our limited knowledge of our own operating system gives us is a sense of self-identity, continuity, and individuality.”
Page 4, second paragraph:
“There are no complete mental models for any empirical phenomena. What must be emphasised, however, is that one does not necessarily increase the usefulness of a model by adding information to it beyond a certain level.”
8. Ibid. Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness
Page 402 under the heading “How do mental models represent the world” third paragraph and last paragraph:
“You may say that you perceive the world directly, but in fact what you experience depends on a model of the world. ...In short, our view of the world is causally dependent both on the way the world is and on the way we are. There is an obvious but important corollary: all our knowledge of the world depends on our ability to construct models of it.”
9. Ibid. Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness
Page 10, under the heading “Mental models and criteria for explanation”, first and second paragraphs:
“At the first level, human beings understand the world by constructing working models of it in their minds. Since these models are incomplete, they are simpler that the entities they represent. In consequence, models contain elements that are merely imitations of reality - there is no working model of how their counterparts in the world operate, but only procedures that mimic their behaviour. ... At the second level, since cognitive scientists aim to understand the human mind, they, too, must construct a working model. It happens to be of a device for constructing working models. Like other models, however, its utility is not improved by embodying more than a certain amount of knowledge. The crucial aspect of mental processes is their functional organization, and hence a theoretical model of the mind need concern only such matters.”
Page 470, first paragraph under the heading “Self-awareness in automata that understand themselves”:
“...the mind must be more complicated than any theory of it: however complex the theory, a device that invented it must be still more complex. Obviously, cognitive scientists aim to understand the mind - to have a mental model of a device that makes mental models. There is a striking similarity between this goal and the achievement of self-awareness: the mind is aware of the mind. It understands itself at least to some extent, and it understands that it understands itself.”
10. Every good regulator of a system must be a model of that system - Conant and Ashby 1970
doi: 10.1080/00207727008920220
downloadable here or see
GoogleScholar.
(The downloaded version of this paper that I have has presumably been scanned and had some Optical Character Recognition software applied to it, but not completely successfully, so there are some odd misprints.)
This paper contains a formal proof that every regulator of a complex system will automatically become a model of that system.
Final paragraph under the heading “Discussion” on page 10:
“To those who study the brain, the theorem [that has been proved in this paper] founds a 'theoretical neurology'. For centuries, the study of the brain has been guided by the idea that as the brain is the organ of thinking, whatever it does is right. But this was the view held two centuries ago about the human heart as a pump; today’s hydraulic engineers know too much about pumping to follow the heart’s method slavishly: they know what the heart ought to do, and they measure its efficiency. The developing knowledge of regulation, information processing, and control is building similar criteria for the brain. Now that we know that any regulator (if it conforms to the qualifications given) must model what it regulates, we can proceed to measure how efficiently the brain carries out this process. There can no longer be question about whether the brain models its environment: it must.”
-
^
The theory of constructed emotion: an active inference account of interoception and categorization - Barrett 2017
doi: 10.1093/scan/nsw154
downloadable here or see
GoogleScholar.
Page 5, last paragraph, under the heading “How does a brain perform allostasis?”:
“For a brain to effectively regulate its body in the world, it runs an internal model of that body in the world.”
And the note relating to this sentence at the bottom of the page:
“There is a well-known principle of cybernetics: anything that regulates (i.e. acts on) a system must contain an 'internal model' of that system.”
Page 6, second paragraph:
“All animals run an internal model of their world...”
Page 11, second paragraph:
“A brain implements an internal model of the world with concepts because it is metabolically efficient to do so.”
-
^
Do we have an internal model of the outside world? - Land 2014
doi: 10.1098/rstb.2013.0045
downloadable here or see
GoogleScholar.
This paper argues that the brain must contain a model of the surroundings, and of parts of the body, that is updated with every move we make, and that this model is needed for action. Start of abstract:
“Our phenomenal world remains stationary in spite of movements of the eyes, head and body. In addition, we can point or turn to objects in the surroundings whether or not they are in the field of view. In this review, I argue that these two features of experience and behaviour are related. The ability to interact with objects we cannot see implies an internal memory model of the surroundings, available to the motor system. And, because we maintain this ability when we move around, the model must be updated, so that the locations of object memories change continuously to provide accurate directional information. The model thus contains an internal representation of both the surroundings and the motions of the head and body...”
-
^
Consciousness is Data Compression - Maguire and Maguire 2010
downloadable here or see
GoogleScholar.
Although the title of this paper overstates the case, it contains a very interesting discussion of compression, how a model of the world is built, and why that model must include the self - hence it probably is true that consciousness requires data compression.
Page 749, third paragraph:
“Algorithmic information theory reveals that compression is the only systematic means for generating predictions based on prior observations. All successful predictive systems, including animals and humans, are approximations of algorithmic induction.”
Page 750, first and second paragraphs:
“...the compression carried out by the brain has one additional ingredient which sets it apart from simpler compression systems: it compresses its observations of its own behaviour. The capacity for a system to model its own actions necessarily involves the identification of itself as an entity separate to its surroundings. As a result, self-compression entails self-awareness.
The human brain is a self-representational structure which seeks to understand its own behaviour. For example, people model their own selves in order to more accurately predict how they are going to feel and react in different situations. They build up internal models about who they think they are and use these models to inform their decisions. In addition, the human brain compresses the observed behaviour of other organisms. When we watch other individuals, we realize that there is a great deal of redundancy in their activity: rather than simply cataloguing and memorizing every action they perform, we can instead posit the more succinct hypothesis of a concise 'self' which motivates these actions. By representing this self we can then make accurate predictions as to how the people around us will behave. The idea that the actions of an organism are controlled by a singular self is merely a theoretical model which eliminates redundancy in the observed behaviour of that organism. People apply this same process to themselves: what you consider to be the essence of you is simply a model which compresses your observations of your own past behaviour.”
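The idea quoted above - that compression is the systematic means of generating predictions from prior observations - can be illustrated with a toy sketch. This is my own illustration, not code from the paper: it predicts the next symbol of a sequence by trying each candidate and keeping the one whose continuation compresses best, using compressed length as a rough stand-in for algorithmic complexity.

```python
import zlib

# Toy sketch of prediction by compression (my illustration, not from the
# paper): the candidate continuation that compresses best is the
# "simplest" hypothesis, approximating algorithmic induction.

def predict_next(sequence, alphabet):
    def cost(candidate):
        # Compressed length of the sequence extended by this candidate.
        return len(zlib.compress((sequence + candidate).encode()))
    return min(alphabet, key=cost)

pattern = "abcabcabcabcabcabcabcabcabcabc"  # "abc" repeated ten times
print(predict_next(pattern, "abc"))  # prints "a", continuing the pattern
```

Appending the pattern-continuing symbol merely extends an existing repeat, so it compresses better than any symbol that breaks the pattern.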
-
^
Fractal and chaotic dynamics in nervous systems - King 1991
doi: 10.1016/0301-0082(91)90003-J
downloadable here or see
GoogleScholar.
(The page numbers in the contents do not match the page numbers in this very technical paper.)
Second sentence of summary, page 30:
“The relation of chaos to fractal processes in the brain from the neurosystems level down to the molecule has been explored. It is found that chaos appears to play an integral, though not necessarily exclusive role in function at all levels of organization from the neurosystems to the molecular and quantum levels.”
-
^
Free Will, Physics, Biology, and the Brain - Koch 2009
Chapter 2 in Downward Causation and the Neurobiology of Free Will, ed. Murphy, Ellis and O'Connor, pub. Springer 2009
doi: 10.1007/978-3-642-03205-9_2
downloadable here or see
GoogleScholar.
Page 36, last sentence, to page 37:
“...astronomers cannot be certain whether Pluto will be on this side of the sun (relative to Earth’s position) or the other side ten million years from now! No matter how small the residue of our measurement error, it will never vanish and therefore will always limit how far we can peer into the future. If this uncertainty holds for the position of a planet-sized body in deep space, what does this portend for the predictability of a single synapse deeply embedded inside a brain, let alone the action of a nervous system of millions or billions of nerve cells, each one encrusted with thousands of synapses? Given the nonlinear and cooperative nature of such neural networks, their behavior is chaotic to a high degree. ... Any organelle, such as the nucleus of a cell or a synapse, is made out of a fantastically large number of molecules suspended in watery solution. These molecules incessantly jostle and move about in a way that can’t be precisely captured; this is called noise. Physicists are unable to track individual molecules. To tame this noise, they borrow techniques from statistics and from probability theory, calculating the average kinetic energy of the molecules or the average time between synaptic release and so on.”
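Koch's point - that a residual measurement error, however small, limits how far we can predict a chaotic system - can be demonstrated with the logistic map, a standard toy chaotic system. This sketch is my own illustration, not from the paper:

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r*x*(1-x) with r=4 (fully chaotic regime). A "measurement
# error" of 1e-10 grows until the two trajectories are uncorrelated.

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)  # tiny initial error

# Early on the trajectories agree closely; by around step 40 the
# error has grown to order one and prediction has broken down.
print(abs(a[5] - b[5]))
print(abs(a[50] - b[50]))
```

The error roughly doubles each step, so even a ten-decimal-place measurement buys only a few dozen steps of useful prediction - the same situation, Koch argues, as for a synapse embedded in a chaotic neural network.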
-
^
Is there chaos in the brain? II. Experimental evidence and related models - Korn and Faure 2003
doi: 10.1016/j.crvi.2003.09.011
downloadable here or see
GoogleScholar.
Abstract, end of second paragraph:
“Here we present the data and main arguments that support the existence of chaos at all levels from the simplest to the most complex forms of organization of the nervous system.”
-
^
Ibid. Is there chaos in the brain? II. Experimental evidence and related models
Page 824, end of first paragraph onwards:
“Thus another classical paradigm, called the 'winnerless competition model' (WLC), is advocated by G. Laurent and his collaborators. Like other nonlinear models, WLC is based on simple nonlinear equations of the Lotka-Volterra type where (i) the functional unit is the neuron or a small group of synchronized cells and (ii) the neurons interact through inhibitory connections. Several dynamics can then arise, depending for a large part on the nature of this coupling and the strength of the inhibitory connections. If the connections are symmetrical, and in some conditions of coupling, the system behaves as a Hopfield network or it has only one favored attractor if all the neurons are active. If the connections are only partly asymmetrical, one attractor (which often corresponds to the activity of one neuron) will emerge in a 'winner-takes-all' type of circuit. Finally a 'weakly chaotic' WLC arises when all the inhibitory connections are nonsymmetrical; then, the system, with N competitive neurons, has different heteroclinic orbits in the phase space. In this case, and for various values of the inhibitory strengths, the system’s activity 'bounces off' between groups of neurons: if the stimulus is changed, another orbit in the vicinity of the heteroclinic orbit becomes a global attractor.”
-
^
Broadband Criticality of Human Brain Network Synchronization - Kitzbichler, Smith, Christensen and Bullmore 2009
doi: 10.1371/journal.pcbi.1000314
downloadable here or see
GoogleScholar.
Towards end of abstract:
“These results strongly suggest that human brain functional systems exist in an endogenous state of dynamical criticality, characterized by a greater than random probability of both prolonged periods of phase-locking and occurrence of large rapid changes in the state of global synchronization, analogous to the neuronal 'avalanches' previously described in cellular systems.”
-
^
Daily Oscillation of the Excitation-Inhibition Balance in Visual Cortical Circuits - Bridi, Zong, Min, Luo, Tran, Qiu, Severin, Zhang, Wang, Zhu, He and Kirkwood 2020
doi: 10.1016/j.neuron.2019.11.011
downloadable via
GoogleScholar.
Summary, page 621:
“A balance between synaptic excitation and inhibition (E/I balance) maintained within a narrow window is widely regarded to be crucial for cortical processing. In line with this idea, the E/I balance is reportedly comparable across neighboring neurons, behavioral states, and developmental stages and altered in many neurological disorders. Motivated by these ideas, we examined whether synaptic inhibition changes over the 24-h day to compensate for the well-documented sleep-dependent changes in synaptic excitation. We found that, in pyramidal cells of visual and prefrontal cortices and hippocampal CA1, synaptic inhibition also changes over the 24-h light/dark cycle but, surprisingly, in the opposite direction of synaptic excitation. Inhibition is upregulated in the visual cortex during the light phase in a sleep-dependent manner. In the visual cortex, these changes in the E/I balance occurred in feedback, but not feedforward, circuits. These observations open new and interesting questions on the function and regulation of the E/I balance.”
-
^
Winnerless competition in clustered balanced networks: inhibitory assemblies do the trick - Rost, Deger and Nawrot 2017
doi: 10.1007/s00422-017-0737-7
downloadable here or see
GoogleScholar.
Beginning of abstract:
“Balanced networks are a frequently employed basic model for neuronal networks in the mammalian neocortex. Large numbers of excitatory and inhibitory neurons are recurrently connected so that the numerous positive and negative inputs that each neuron receives cancel out on average. Neuronal firing is therefore driven by fluctuations in the input and resembles the irregular and asynchronous activity observed in cortical in vivo data. Recently, the balanced network model has been extended to accommodate clusters of strongly interconnected excitatory neurons in order to explain persistent activity in working memory-related tasks. This clustered topology introduces multistability and winnerless competition between attractors.”
Beginning of Introduction:
“Neural responses in the mammalian neocortex are notoriously variable. Even when identical sensory stimuli are provided and animal behaviour is consistent across repetitions of experimental tasks, the neuronal responses look very different each time. This variability is found on a wide range of temporal and spatial scales. To this day, it remains a matter of discussion how the brain can cope with this variability or whether it might even be an essential part of neural computation.
It has been shown that in network models of randomly connected excitatory and inhibitory neurons a condition exists in which these neurons fire in a chaotic manner at low firing rates. This condition was termed the Balanced State and occurs if excitation and inhibition to each cell cancel each other on average so that spike emission is triggered by fluctuations in the input current rather than by elevation of the mean input current.”
Page 95, “5. Conclusions and prospects”, second sentence onwards:
“We have shown that multistability with moderate firing rates can be achieved in balanced networks with joint excitatory and inhibitory clusters. This architecture allows for robust winnerless competition dynamics without rate saturation over a wide range of cluster strengths.”
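The balanced-state condition described in this reference - excitation and inhibition cancelling on average, so that spiking is driven by fluctuations rather than by the mean input - can be illustrated with a simple sketch. This is my own illustration, not code from the paper:

```python
import random

# Toy sketch of the balanced state (my illustration): a unit receives
# many excitatory and inhibitory inputs whose weights are scaled so the
# mean summed input is zero; "spikes" occur only when a fluctuation
# happens to cross the threshold.

random.seed(1)

def balanced_input(n_exc=800, n_inh=200, p=0.1, w_exc=1.0):
    # Scale inhibitory weights so mean excitation equals mean inhibition:
    # n_exc * p * w_exc == n_inh * p * w_inh
    w_inh = w_exc * n_exc / n_inh
    exc = sum(w_exc for _ in range(n_exc) if random.random() < p)
    inh = sum(w_inh for _ in range(n_inh) if random.random() < p)
    return exc - inh

threshold = 30.0
inputs = [balanced_input() for _ in range(2000)]
mean_input = sum(inputs) / len(inputs)
spikes = sum(1 for i in inputs if i > threshold)

# The mean input is near zero (compared with ~80 active excitatory
# inputs per step), yet fluctuations still produce occasional spikes.
print(round(mean_input, 2), spikes)
```

Because the mean cancels, the unit's output depends on the statistics of the fluctuations, which is why firing in such networks is irregular and asynchronous, as the abstract quoted above describes.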
-
^
Epilepsy - when chaos fails - Sackellares, Iasemidis, Shiau, Gilmore and Roper 2000
doi: 10.1142/9789812793782_0010
downloadable here or see
GoogleScholar.
Abstract, towards end of first page, and second page:
“We have postulated that epileptic brains, being chaotic nonlinear systems, repeatedly make the abrupt transitions into and out of the ictal state [episodic paroxysmal electrical discharges] because the epileptogenic focus drives them into self-organizing phase transitions from chaos to order. ... an epileptic seizure occurs when spatiotemporal chaos in the brain fails...”
Page last uploaded
Sat Mar 2 02:55:42 2024 MST