Action
Action is the ability of the brain to cause change or movement in the body.
It is one of several high-level brain functions that make up level 6 in my
hierarchical structure of levels of description
because it depends on the existence of symbol schemas. I am not aware of the cause of many of my actions
because I am only aware of my model of action.
An action is very often driven by a perception, by the need to validate a
prediction, and the result of an action can often be a new perception.
A perception is an action in reverse: a perception is an external cue that activates a symbol schema;
an action is an external change caused by the activation of a symbol schema, and both involve prediction.
So action and perception can be seen as the drivers of a continuous action/perception cycle.
Contents of this page
- Introduction - introduction and explanation of the four categories of action.
- Reflex actions - automatic reflex actions driven by very simple circuits.
- Autonomous actions - actions that happen automatically and cannot be consciously controlled.
- Symbol schema actions - my term for actions that are initiated by the activation of symbol schemas.
- Conscious actions - symbol schema actions that require conscious attention.
- Learnt subconscious actions - symbol schema actions that are learnt by conscious practice and become unconscious.
- Prediction in action - the role of prediction in action and the connections to perception and attention.
- My model of action - my inherent understanding of my own actions.
- References - references and footnotes.
Introduction
- Most examples of a change or movement in my body are initiated in my brain by the activation of a
symbol schema that represents the action.
- This includes a wide array of activities ranging from autonomous (automatic or involuntary) functions through to complex conscious movements.
- However, there are some simple reflex actions that do not involve the brain at all.
- The activation of the symbol schema in my brain that causes an action may or may not
be caused by the process of attention.
- If it is via attention, I may or may not have had an influence over the decision, but I will become aware of it in either case.
- If it is not initiated by attention, I can sometimes become aware of it, but in many cases I will not.
- I have divided actions into four categories, and there is a section below on each of these.
This list is ordered by increasing amount of brain involvement:
- Reflex actions (little or no brain involvement).
- Autonomous actions (some brain involvement).
- Learnt subconscious actions (non-conscious brain involvement).
- Conscious actions driven by attention (conscious brain involvement).
However, because learnt subconscious actions (category 3) are learnt by practising under conscious attention (category 4),
they are easier to describe after conscious actions, so the last two are swapped in the order of the sections below.
- The neurons and connections to control reflex and autonomous actions are built during development, mostly before birth,
according to plans governed by inherited DNA;
for all other actions they are built up by the afferent processing of internal sense data.
Reflex actions
- A reflex action is a completely automatic reaction
produced by neural circuits pre-determined by DNA; it does not use symbol schemas, and there is no learning or adaptation involved.
Many are only found in babies and disappear after a few months.
- Very simple reflexes do not involve the brain at all because the circuitry only goes as far as the spinal cord.
The most well-known of these is the knee-jerk reflex,
where the lower part of the leg below the knee jerks upwards when tapped with a hammer below the knee cap
(and where the phrase “knee-jerk reaction” presumably comes from).
- Babies have a number of so-called primitive reflexes
that disappear after just a few months as the normal brain develops.
- The most obvious and clearly advantageous reflexes are the
rooting reflex and the
sucking reflex
when a baby automatically searches for a nipple and then sucks its mother’s milk.
- Another less obvious, but quite well-known reflex is the
grasp reflex
when a baby will grasp onto something with both hands. This is assumed to be a throw-back in evolutionary
history to when a baby had to cling on to its mother’s fur as she moved around, possibly high up in the trees.
- Young animals of non-human species have more complex reflex actions.
- Many new-born four-legged creatures are able to walk within minutes of
birth1.
This clearly has an evolutionary survival advantage, although it has been found that the
time between conception and learning to walk is around the same for all mammals, relative to
their brain mass2.
- New-born calves will follow their mother and will be happy close to other animals
of their own kind, but will shy away from any other animal, including humans.
This again has a clear advantage that will increase the chances of survival, but
is a complex evolved behaviour.
- The indications are that the brain of a baby is still developing when it is born, and
that it is born earlier, in relative terms, than other
animals3.
- So if (hypothetically) a human baby were to stay in the womb for a year more than normal, they may well be learning to
walk within weeks of birth. (You would have to feel sorry for the hypothetical mother though!)
- A number of reflexes
occur in humans of all ages, and some of them occur in other mammals as well.
- There is a fuzzy boundary between reflexes and autonomous actions (see section below).
- One definition of a reflex is that it is controlled by a single neural pathway called a
reflex arc, but some
actions that are generally classed as reflexes have much more complex mechanisms, such as the
pupil reflex
that controls the diameter of the pupil, and the
blink reflex
in response to anything touching the eye, or a loud noise.
Autonomous actions
- Autonomous actions are the vital functions of the body
such as heart rate, temperature regulation, digestion, breathing etc., which
are controlled from the brain by what is usually called the
autonomic nervous system.
- The part of the brain involved is deep in the back of the head, above the top of the spinal cord.
This part of the brain, the functions it controls, and the way the control is done,
are similar in all vertebrates, including all mammals, so it is an evolutionarily ancient area.
- The neurons and connections in the brain to control these functions
are built during brain development before birth from blueprints encoded in DNA, so do not involve
symbol schemas, and there is no learning or adaptation involved in order for them to work.
- Most of these functions cannot be consciously controlled or even monitored.
Some can be monitored with practice, and some can be controlled to an extent,
or indirectly affected but not completely controlled. For example:
- Heart rate
can be controlled indirectly by various methods such as relaxation, but there is no direct control.
- Breathing
is normally automatic, controlled by the
respiratory centre in the
brain stem,
but can be consciously controlled within limits.
- Temperature regulation,
the ability of the brain to keep the temperature of the body
within quite narrow limits, starts at the moment of
birth4,
and is completely automatic; we have no conscious control over
the actions that the brain causes in the body to effect changes to internal temperature,
but we are aware of the triggers and the outcomes: we know if we are feeling hot or cold,
and we know if we sweat or shiver.
In general, the brain does not allow full conscious control over any internal function that is vital for life.
The reason seems obvious: any evolved ability to do this would be bound to result in fatal mistakes being made,
so the trait would not survive.
- The method of control by the brain of most critical functions is by efferent connections from
neurons with very long axons, although all of the
endocrine system
and a number of generalised functions are controlled by the generation of
hormones,
chemical signals that are released into the blood stream and have an effect on other organs.
- The feedback to the brain to enable the required monitoring is called
interoception.
- Interoception can be regarded as an internal sense, comparable with external senses
such as sight and hearing.
- Much of the feedback is via afferent neurons with long axons into the brain,
but some is by the production of chemical hormones. Those hormones that get into the brain are known as
neuromodulators or
neurohormones.
- It seems most likely that this internal sense data is processed in a similar
way to external sense data. This afferent processing
will also strengthen some efferent connections that
can be used for prediction.
- The efferent connections are not as numerous as they are in the areas that process
data from the standard five senses, and there are very few efferent connections from my
self symbol schema in these areas. This is why I cannot
focus my conscious attention on any of these areas, and also why any
feelings about the states of these mechanisms are very unspecific,
with little reinstatement available.
- In some of these examples, there is clearly a form of modelling going on in the brain,
with prediction and feedback mechanisms, and constant fine tuning of the processes.
- This indicates that the models could be described as symbol schemas,
but in some cases without all of the functionality that is associated with them elsewhere in the brain.
- For autonomous actions that can sometimes be controlled consciously,
such as breathing, this functionality is fully implemented, so it is more like a learnt subconscious action
(see below).
- The enteric nervous system
has millions of neurons that control the operation of the gut.
- This is sometimes known as the second brain.
- There are connections from it to the brain primarily via the
vagus nerve,
which is how certain emotions can affect the operation of the gut, but it has been shown that
it can operate totally independently of the brain.
- Most of the components and structure, including neurons, glia, neurotransmitters and neuromodulators, are
very similar to the brain, and there are both afferent and efferent connections between neurons.
- So it is possible that there are structures in the gut that resemble symbol schemas in the brain,
and there could be prediction and learning going on (but this website is about the brain, not the gut).
Symbol schema actions
- I propose that the initiation of all actions except reflex and autonomous actions
is driven by the activation of symbol schemas, either consciously or unconsciously.
- Actions that are normally initiated unconsciously are discussed below under the heading Learnt subconscious actions.
- Actions that are normally initiated consciously, which can only be initiated via the process of attention,
are discussed below under the heading Conscious actions.
- There is no firm dividing line between those actions that are initiated consciously and those that are initiated unconsciously.
- Most actions that are normally initiated unconsciously can be initiated consciously, but sometimes conscious
thought about an action that is normally unconscious can cause problems.
- For example, walking is quite a complex set of actions, and is always normally initiated automatically.
- If I need to go to the other side of the room to pick something up, my legs automatically
initiate the moves to walk in the right direction.
- But I can also consciously try to think about all the moves that I need to do to make myself walk
to the other side of the room.
- However, when I try to do this, I realise that actually I do not consciously know all the moves required, and
although I can make it to the other side of the room, my walk is very strange and not normal.
- Most actions that I currently have to do consciously can become unconscious if I rehearse them sufficiently,
and this can sometimes happen without me realising.
- There are around 640 muscles in the human body, all of which can be controlled by the
brain5.
- It is generally accepted that the control system for action in the brain (usually called the
motor system)
is organised hierarchically, very much like the sensory system, but with the main flow of data going in the opposite
direction6,
8.
- The lowest levels of the hierarchy have the most detail about the exact movements required, and each
higher level has more abstracted models and more general descriptions of the movement required.
The highest level reflects the purpose of the
movement7,
and there will also be links to the expected results.
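The idea of this hierarchy can be illustrated with a small sketch (my own illustration in Python; the goal, the levels and the command names are invented for the example, not taken from the references): an abstract purpose at the top is expanded, level by level, into the concrete muscle-level commands at the bottom.

```python
# A toy illustration of a hierarchical motor system: the top level holds
# the purpose of a movement, and each lower level holds more concrete
# detail, down to individual muscle commands (all names are hypothetical).

def plan_action(goal):
    """Expand an abstract goal into progressively more concrete steps."""
    hierarchy = {
        "throw frisbee": ["grip frisbee", "swing arm", "release"],  # purpose level
        "grip frisbee":  ["close fingers"],                         # motor plan level
        "swing arm":     ["rotate shoulder", "extend elbow"],
        "release":       ["open fingers", "flick wrist"],
    }
    steps = hierarchy.get(goal)
    if steps is None:               # lowest level: a concrete muscle command
        return [goal]
    commands = []
    for step in steps:
        commands.extend(plan_action(step))   # recurse down the hierarchy
    return commands

print(plan_action("throw frisbee"))
# → ['close fingers', 'rotate shoulder', 'extend elbow', 'open fingers', 'flick wrist']
```

Only the top-level goal needs to be activated; the detail emerges from the levels below, which is the point of the hierarchical organisation described above.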
- The symbol schemas, and more particularly the
efferent connections from them that are required for coordinated,
complex actions, are built up by the afferent processing of sense data,
particularly sense data relating to the position of parts of the body, known as
proprioception.
- Symbol schemas that represent movement start to be built even before birth as a baby moves in the
womb9.
- When a baby moves their limbs around, seemingly at random, they are all the time testing their predictions
against the sense data coming back10.
- The apparently random movement of a baby, called
motor babbling,
is the activation of almost random neurons in the motor area of the brain, which then causes one or more
of those 640 muscles to move in a certain way, and to a certain extent.
- The flow of signals that comes back from the muscles, limbs, and other parts of the body, called
proprioception,
can then be matched up with the signals that were
sent11.
This can be described as coincidence detection.
- Over time, this will build up a hierarchical structure of networks that represent particular
movements, as described by my afferent processing.
- The word babbling
is also used to describe the way a small child learns to speak, which follows the same
principles12.
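The principle of motor babbling can be sketched in code (my own toy analogy, not a neural model; the commands and feedback strings are invented for the illustration): near-random commands are issued, the proprioceptive feedback is observed, and the coincidence of command and feedback is recorded as a simple forward model that can later be used for prediction.

```python
import random

def body(command):
    """Hypothetical body: each motor command produces a fixed feedback signal."""
    return {"flex": "arm bent", "extend": "arm straight", "rotate": "wrist turned"}[command]

def babble(trials_per_command=5):
    """Issue commands in random order and pair each with its feedback."""
    forward_model = {}                       # learnt: command -> predicted feedback
    commands = ["flex", "extend", "rotate"] * trials_per_command
    random.shuffle(commands)                 # babbling is unordered
    for command in commands:
        feedback = body(command)             # proprioceptive return signal
        forward_model[command] = feedback    # coincidence detection: pair them up
    return forward_model

model = babble()
# once learnt, the model predicts the sensory outcome of a command
print(model["flex"])   # prints "arm bent"
```

In the brain the mapping is of course learnt gradually through strengthened connections rather than stored in a lookup table, but the loop of act, sense, and associate is the same.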
- Afferent processing example 2 and
example 3 show how quite a complex movement
of throwing a frisbee can be represented and also associated with the symbol schema for a frisbee.
- Afferent processing example 7 then
shows how this becomes connected to the self symbol schema via the
process of attention so that it is possible for me (my self symbol schema,
because I am my self symbol schema) to have a strong influence
on initiating the action to my arms and wrist to throw a frisbee.
- These symbol schemas can easily be extended to model tools and other extensions, such as
a car, a bicycle, a piano, a hammer, a snooker cue or a tennis racket. All these things, and many other examples,
can, with practice, become almost part of our body schema13.
Conscious actions
- Those actions that are initiated by the process of attention are therefore
subject to all the requirements of that process. A few examples follow:
- As described in the page on attention, I feel as though I have direct control
over what I pay attention to, and therefore in this case what actions I initiate, but in reality this is not fully correct.
- I can certainly have an influence over what actions I take, but there are other influences from different directions.
My influences are efferent from my self symbol schema, but there are afferent influences from incoming sense data as well as
lateral influences from other activated symbol schemas.
- My innate understanding of conscious action is that there are many things I do that I
need to concentrate on, but further analysis shows that it is actually rather more complicated than this.
- Take the example of throwing a frisbee. If I have never tried to do it before, I will have to
concentrate on moving my hands and arms in the way that I think is right, either because I have watched
someone else doing it, or someone has tried to describe to me what I need to do.
- My attention will probably be on my hand and my wrist, but in doing this my attention will
not be on other things, such as how my legs move, how I breathe and how I move my fingers.
- After I release the frisbee, my attention is likely to be on the trajectory of the frisbee and
seeing where it ends up, and, during those few seconds, my attention is no longer on my hand or wrist.
- So actions that I pay attention to are very specific and for a relatively short time, and
while this is happening, there are always other actions that I am doing that I am not fully aware of at the time.
- This illustrates that anything my body does that is not specifically under my attention must
be either an autonomous action or a learnt subconscious action.
- Some actions that need a lot of attention such as threading a needle
or solving a Rubik’s cube seem to be in a different class from the described learnt actions below,
but in fact anything can become learnt, and done (to a certain extent) unconsciously,
i.e. without attention.
- Attention is a complex, multi-level competition of afferent and efferent influences that I have much less control over than I think,
but as a child I learnt to concentrate, to pay attention, to a certain task that was necessary to be done.
This helped create my model of attention, and also established the efferent connections required.
- A good example is when I see something from an unusual angle or something that has had an unexpected change made to it.
I do not recognise it unconsciously, and I have to concentrate on it, at least briefly, so that more resources are given to recognising it.
But once I have recognised it, and then categorised it as a different aspect of something I already knew, then I do not have to concentrate any more.
- Conscious action is the basis of free will, and therefore also of morality, of good over evil,
and, ultimately, the success of humans as a species.
- So are we able to control our actions (see free will)?
The answer is yes, but only to a certain extent, and it can be helpful to understand the mechanisms so that we can take advantage of them.
Conscious action, enabled via attention, is the way to make a choice to do good in the long term.
Learnt subconscious actions
- Using the example of throwing a frisbee, once I have successfully thrown a frisbee a few times,
perhaps over a few days, by using attention on my hand and wrist movements, I might start to find that the movement required
becomes semi-automatic, and eventually completely automatic, so that I do not have to pay attention to doing it.
- This sort of learning is a lot easier for children, and the younger the better, providing the child
has the physical control and the strength to do the activity.
- The saying “You can’t teach an old dog new tricks” is not entirely true, but certainly an
older person is likely to take longer than a younger person to learn any new action.
- The reason for this is presumably that there is more plasticity in the synapse connections in
the brain of a younger person, and also that an older person may have already learnt other actions that are similar
but not identical.
- Learnt subconscious actions such as walking, riding a bicycle, catching a ball
and playing the piano are non-autonomous, but can become automatic or semi-automatic once learnt.
Concentration (attention) is needed to start with, and progress can be very slow, but with practice, and consolidation
(which often requires sleep),
symbol schemas and connections are put into place which can then become autonomous.
Prediction in action
- There is no doubt that action often involves prediction.
- Most people will know the feeling of picking up a jug, say, that they were expecting to be
full, and it turns out to be empty. The action is clearly planned with the strength needed to pick
up a heavy item, but the item is actually light. The result is that the jug goes up much more quickly than
expected. This type of thing can happen with conscious or unconscious actions.
- The feeling I have when I miss a step on a short flight of stairs is also quite disturbing.
In this case the actions being carried out are clearly learnt subconscious actions, but something goes
wrong, and I end up stepping into mid-air.
- It can be argued that, for all actions that are initiated from the activation of symbol schemas,
which, as argued above, is all except very basic reflex and autonomous actions, there is an aspect of prediction
involved, because the symbol schema for the action will be linked to other symbol schemas that are involved in
triggering the action and others that represent the outcome expected.
- So, for example, the symbol schema for throwing a frisbee is only initiated in my brain once
I have succeeded in picking up the frisbee and am holding it. Once I complete the action of throwing, I expect to
see the frisbee spinning and flying in the air towards where I aimed it. If I do not, and it goes in totally
the wrong direction, or it hits the ground near my feet, my expectations are dashed and I know I have to
try to improve my technique.
- The recent theories of Predictive Processing
propose that prediction plays a much bigger part in action than the previous examples might
suggest14.
- The claim is that most, if not all, action is initiated to try to minimise prediction errors,
which is the same as trying to recognise what is being sensed, and also is claimed to be the most efficient
method of maintaining the internal model.
- This is a very interesting idea, but I think it goes a little too far in suggesting that all
actions may result from this.
- There is a lot more detail on the theories, and my comments and criticisms, on the
page on prediction.
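The core idea, that action serves to fulfil a prediction by reducing the error between predicted and sensed states, can be sketched very simply (my own simplification, not Clark's formal model; the numbers and the gain parameter are invented): the brain "predicts" that the arm is at a target position, and each step of the action reduces the prediction error.

```python
# A toy sketch of action as prediction-error minimisation: the sensed
# position is repeatedly moved so as to shrink the gap between it and
# the predicted (desired) position.

def act_to_minimise_error(position, predicted, gain=0.5, steps=20):
    """Return the trajectory of positions as action fulfils the prediction."""
    trace = [position]
    for _ in range(steps):
        error = predicted - position   # prediction error
        position += gain * error       # act so as to reduce the error
        trace.append(position)
    return trace

trace = act_to_minimise_error(position=0.0, predicted=1.0)
print(round(trace[-1], 3))   # prints 1.0 - acting has fulfilled the prediction
```

The controversial part of the theory is not this control loop, which is uncontentious, but the claim that essentially all action is initiated this way.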
My model of action
- My inherent belief is that I have full conscious control over all movements that my body makes. That is
because there is a model of action in my brain that specifies this, but the model only has data from what my consciousness is aware of.
- What I mean by “my inherent belief” is that the symbol schema that represents the process of action
(which I can notate as {action} - see symbol schema notation),
which forms part of my self symbol schema ({self}), specifies that I initiate and control
all my actions, which of course is not true.
- The reason for this is that this symbol schema that represents action has been created by the process of
cognoception, which means that it only represents the parts of the process that are conscious,
in other words, known by my self symbol schema.
- So {action} only represents actions that my {self} has successfully initiated.
My {self} is totally unaware of the cause of actions that it has not tried to initiate, and is also
generally unaware of actions that it has tried to initiate but which have not actually happened.
- Some evidence that backs up this proposal is an experiment that shows that we are not necessarily aware
whether we have caused an action or not15.
- I am aware sometimes that I consciously choose to do something, but something else causes a
higher-priority distraction, and I end up not doing it.
- All my {self} can do is to attempt to influence the
hierarchical process of attention.
- As described in my page on attention, the process is in fact a complex
competition with both afferent and efferent influences, with prediction playing a large part.
- There is no centralised control over the decision-making that causes actions;
it is actually a decentralised, multi-partite process.
- Prediction plays a much more important part in nearly all actions than I realise,
and consciousness plays a much smaller part in actions than my brain leads me to believe.
- My models of action, attention, perception and
free will are very closely connected and have some overlaps.
- The Principles of Psychology - William James 1890
viewable here,
downloadable here: Volume I and
Volume II or see
GoogleScholar.
Volume 2, Chapter XIV “Instinct”, under the heading “Special human instincts”:
“Mr Bain has tried, by describing the demeanor of new-born lambs, to show that locomotion is learned by a very rapid experience. But the observation recorded proves the faculty to be almost perfect from the first; and all others who have observed new-born calves, lambs, and pigs agree that in these animals the powers of standing and walking, and of interpreting the topographical significance of sights and sounds, are all but fully developed at birth. Often in animals who seem to be 'learning' to walk or fly the semblance is illusive. The awkwardness shown is not due to the fact that 'experience' has not yet been there to associate the successful movements and exclude the failures, but to the fact that the animal is beginning his attempts before the co-ordinating centres have quite ripened for their work.”
- A unifying model for timing of walking onset in humans and other mammals - Garwicz, Christensson and Psouni 2009
10.1073/pnas.0905777106
downloadable here or see
GoogleScholar.
Start of abstract:
“The onset of walking is a fundamental milestone in motor development of humans and other mammals, yet little is known about what factors determine its timing. Hoofed animals start walking within hours after birth, rodents and small carnivores require days or weeks, and nonhuman primates take months and humans approximately a year to achieve this locomotor skill. Here we show that a key to the explanation for these differences is that time to the onset of walking counts from conception and not from birth, indicating that mechanisms underlying motor development constitute a functional continuum from pre- to postnatal life. In a multiple-regression model encompassing 24 species representative
of 11 extant orders of placental mammals that habitually walk on the ground, including humans, adult brain mass accounted for 94% of variance in time to walking onset postconception.”
- Brain: the story of you - David Eagleman, Pantheon Books 2015
See GoogleScholar.
Chapter 1 “Who am I”, third paragraph under the heading “Born unfinished”:
“At birth we humans are helpless. We spend about a year unable to walk, about two more before we can articulate full thoughts, and many more years unable to fend for ourselves. We are totally dependent on those around us for our survival. Now compare this to many other mammals. Dolphins, for instance, are born swimming; giraffes learn to stand within hours; a baby zebra can run within forty-five minutes of birth. Across the animal kingdom, our cousins are strikingly independent soon after they’re born...
the human brain is born remarkably unfinished. Instead of arriving with everything wired up - let’s call it 'hardwired' - a human brain allows itself to be shaped by the details of life experience.”
- Fetal and neonatal thermoregulation - Asakura 2004
downloadable here or see
GoogleScholar.
This paper contains a fascinating account of the set of actions that kicks in at the moment of birth to regulate the temperature of the new-born baby.
Abstract, page 360: “...fetal temperature is maternally dependent until birth.”
- Principles of Neural Science - Sixth edition - Kandel et al., McGraw-Hill US 2021 - or see GoogleScholar.
In the introduction to Section V concerning movement, page 709:
“The immense repertoire of motions that humans are capable of stems from the activity of some 640 skeletal muscles - all under the control of the central nervous system.”
- Ibid. Principles of Neural Science - Sixth edition
In the introduction to Section V concerning movement, page 709:
“The task of the motor systems is the reverse of the task of the sensory systems. Sensory processing generates an internal representation in the brain of the outside world or of the state of the body. Motor processing begins with an internal representation: the desired purpose of movement. Critically, however, this internal representation needs to be continuously updated by internally generated information (efference copy) and external sensory information to maintain accuracy as the movement unfolds.”
- Ibid. Principles of Neural Science - Sixth edition
In the introduction to Section V concerning movement, page 710:
“Motor systems are organized in a functional hierarchy, with each level concerned with a different decision. The highest and most abstract level, likely requiring the prefrontal cortex, deals with the purpose of a movement or series of motor actions. The next level, which is concerned with the formation of a motor plan, involves interactions between the posterior parietal and premotor areas of the cerebral cortex. The premotor cortex specifies the spatiotemporal characteristics of a movement based on sensory information from the posterior parietal cortex about the environment and about the position of the body in space. The lowest level of the hierarchy coordinates the spatiotemporal details of the muscle contractions needed to execute the planned movement.”
- On Intelligence - Jeff Hawkins with Sandra Blakeslee, St. Martin’s Press, New York 2004
See GoogleScholar.
Page 46 in chapter 3 entitled “The human brain”:
“The motor system of the cortex is also [like the regions dealing with sense input] hierarchically organized.
...The hierarchy of the motor area and the hierarchies of the sensory areas look remarkably similar. They seem to be put together in the same way.
In the motor region we think of information flowing down the hierarchy toward M1 [the lowest motor area] to drive the muscles and in the sensory regions we think of information flowing up the hierarchy away from the senses. But in reality information flows both ways. What is referred to as feedback in sensory regions is the output of the motor region, and vice versa.”
- The brain from inside out - Gyorgy Buzsaki, Oxford University Press 2019
doi: 10.1093/oso/9780190905385.001.0001
or see GoogleScholar.
Page 77, third paragraph:
“These seemingly aimless movements in newborn rodents are the same as fetal movements or 'baby kicks' observed in later stages of pregnancy in humans. ... each kick helps the brain to learn about the physics of the body it controls.”
- What are you doing? How active and observational experience shape infants’ action understanding - Hunnius and Bekkering 2014
doi: 10.1098/rstb.2013.0490
downloadable here or see
GoogleScholar.
Abstract, page 1: “When infants execute actions, they form associations between motor acts and the sensory consequences of these acts.”
- Livewired - David Eagleman, Canongate 2020
See GoogleScholar.
Page 116, under the heading “Motor babbling”:
“In the same way, the brain learns how to steer its body by motor babbling. Just observe that same baby in her crib. She bites her toes, slaps her forehead, tugs on her hair, bends her fingers, and so on, learning how her motor output corresponds to the sensory feedback she receives. In this way, she learns to understand the language of her body: how her outputs map onto the next inputs. By this technique, we eventually learn to walk, bring strawberries to our mouths, stay afloat in a pool, dangle on monkey bars, and master jumping jacks.”
- Ibid. Livewired
Page 116, under the heading “Motor babbling”:
“A baby learns how to shape her mouth and her breath to produce language - not by genetics, nor by surfing Wikipedia, but instead by babbling. Sounds come out of her mouth, and her ears pick up on those sounds. Her brain can then compare how close her sound was with the utterances she’s hearing from her mother or father. Helping things along, she earns positive reactions for some utterances and not for others. In this way, the constant feedback allows her to refine her speech.”
- Ibid. Livewired
Bottom of page 116, under the heading “Motor babbling”:
“And even better, we use the same learning method to attach extensions to our bodies. Think about riding a bicycle, a machine that our genome presumably didn’t see coming. Our brains originally shaped themselves in conditions of climbing trees, carrying food, fashioning tools, and walking great distances. But successfully riding a bicycle introduces a new set of challenges, such as carefully balancing the torso, modifying direction by moving the arms, and stopping suddenly by squeezing the hand. Despite the complexities, any seven-year-old can demonstrate that the extended body plan is easily added to ... the motor cortex.”
- Surfing Uncertainty - Prediction, Action and the Embodied Mind - Clark 2016, Oxford University Press
doi: 10.1093/acprof:oso/9780190217013.001.0001
see GoogleScholar.
Bottom of page 68, in chapter 2 “Adjusting the volume”, under the heading “Gaze allocation: doing what comes naturally”:
“Precision-weighted PP [Predictive Processing] accounts are ideally placed to bring all these elements together in a single unifying story:
one that places neural prediction and the reduction of uncertainty centre-stage. This is because PP treats action, perception and attention as (in effect) forming a single mechanism for the context- and task-dependent combination of bottom-up sensory cues with top-down expectations.”
Page 111, start of chapter 4 “Prediction-action machines”, under the heading “Staying ahead of the break”:
“How does a guessing engine (a hierarchical prediction machine) turn prediction into action? ... by predicting the shape of its own motor trajectories. In accounting for action, we thus move from predicting the rolling present to predicting the near-future, in the form of the not-yet-actual trajectories of our own limbs and bodies. These trajectories, predictive processing suggests, are specified by their distinctive sensory (especially proprioceptive) consequences. ... predicting these (non-actual) sensory states actually serves to bring them about.”
- Making up the mind - Frith 2007
See GoogleScholar.
Chapter 3 “What the brain tells us about our bodies”, pages 76-77 under the heading “Who’s Doing It?”,
describes an experiment that proved that we do not necessarily even know what actions are ours. The experiment showed that if you intend to make a movement, but then it is made for you, you assume that you made it, even though you didn’t:
“Daniel Wegner has proposed that we have no direct knowledge of causing our actions. All we know is that we have the intention to act, and then, a little later, the action occurs. We infer that our intention caused the action. But Wegner didn’t just stop with this speculation. He did some experiments to test the idea. He predicted that, if an action occurred after you had the intention to act, then you would assume that you had caused the act even when it was actually caused by someone else. The experiment is quite tricky in all senses of the term. When you take part in this experiment you have a companion (who is really a stooge of the experimenter). You and your companion place your right forefingers on a special mouse. By moving this mouse around you move a pointer on a computer monitor. There are lots of objects on the screen. Through earphones you hear someone name one of the objects. You think about moving the pointer toward the object. If your companion moves the pointer toward the object at that moment (he is also instructed through earphones), then you are very likely to think that you made the movement. Of course the timing is critical. If the mouse moves just before you had the thought, then you don’t feel you caused it. If the mouse moves too long afterwards, then you don’t feel you caused it either. If the interval is about 1 and 5 seconds between having the thought and the mouse moving, then you will believe you have moved your arm even when this is not actually the case.”
Page last uploaded
Wed Feb 14 08:51:34 2024 MST