65 research outputs found
What is ‘anti’ about anti-reaches? Reference frames selectively affect reaction times and endpoint variability
Reach movement planning involves the representation of spatial target information in different reference frames. Neurons at parietal and premotor stages of the cortical sensorimotor system represent target information in eye- or hand-centered reference frames, respectively. How the different neuronal representations affect behavioral parameters of motor planning and control, i.e. which stage of neural representation is relevant for which aspect of behavior, is not obvious from the physiology. Here, we test with a behavioral experiment whether different kinematic movement parameters are affected to different degrees by an eye- or a hand-centered reference frame. We used a generalized anti-reach task to test the influence of stimulus-response compatibility (SRC) in eye- and hand-reference frames on reach reaction times, movement times, and endpoint variability. Whereas in a standard anti-reach task the SRC is identical in the eye- and hand-reference frames, our generalized task allowed us to separate the SRC for the two reference frames. We found that reaction times were influenced by the SRC in both the eye- and the hand-reference frame. In contrast, movement times were influenced only by the SRC in the hand-reference frame, and endpoint variability only by the SRC in the eye-reference frame. Since movement times and endpoint variability result from both planning and control processes, whereas reaction times reflect only the planning process, we suggest that SRC effects on reaction times are well suited to investigating the reference frames of movement planning, and that eye- and hand-reference frames have distinct effects on different phases of motor action and on different kinematic movement parameters.
The Proprioceptive Map of the Arm Is Systematic and Stable, but Idiosyncratic
Visual and somatosensory signals participate together in providing an estimate of the hand's spatial location. While the ability of subjects to identify the spatial location of their hand based on visual and proprioceptive signals has previously been characterized, relatively few studies have examined in detail the spatial structure of the proprioceptive map of the arm. Here, we reconstructed and analyzed the spatial structure of the estimation errors that resulted when subjects reported the location of their unseen hand across a 2D horizontal workspace. Hand position estimation was mapped under four conditions: with and without tactile feedback, and with the right and left hands. In the task, we moved each subject's hand to one of 100 targets in the workspace while their eyes were closed. Then, we either a) applied tactile stimulation to the fingertip by allowing the index finger to touch the target or b) as a control, hovered the fingertip 2 cm above the target. After returning the hand to a neutral position, subjects opened their eyes to verbally report where their fingertip had been. We measured and analyzed both the direction and the magnitude of the resulting estimation errors. Tactile feedback reduced the magnitude of these estimation errors but did not change their overall structure. In addition, the spatial structure of these errors was idiosyncratic: each subject had a unique pattern of errors that was stable between hands and over time. Finally, we found that at the population level the magnitude of the estimation errors had a characteristic distribution over the workspace: errors were smallest closer to the body. The stability of estimation errors across conditions and time suggests the brain constructs a proprioceptive map that is reliable, even if it is not necessarily accurate. The idiosyncrasy across subjects emphasizes that each individual constructs a map that is unique to their own experiences.
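The error analysis described in this abstract, i.e. decomposing each report into an error magnitude and direction, can be sketched as follows. The function name, the 2D coordinate convention, and the toy data are illustrative assumptions, not the authors' code.

```python
import numpy as np

def estimation_errors(actual, reported):
    """Compute proprioceptive estimation error vectors.

    actual, reported: (n, 2) arrays of fingertip positions (cm)
    in a 2D horizontal workspace (hypothetical layout).
    Returns per-target error magnitudes (cm) and directions (radians).
    """
    errors = reported - actual                     # error vector per target
    magnitude = np.linalg.norm(errors, axis=1)     # length of each error
    direction = np.arctan2(errors[:, 1], errors[:, 0])  # angle of each error
    return magnitude, direction

# Illustrative example: two targets, reports shifted 3 cm along +x.
actual = np.array([[10.0, 20.0], [15.0, 25.0]])
reported = actual + np.array([3.0, 0.0])
mag, ang = estimation_errors(actual, reported)
# mag → [3. 3.], ang → [0. 0.] (all errors point along +x)
```

Averaging such error vectors per target across repetitions would yield the kind of spatial error map the study reconstructs.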
The effects of visual control and distance in modulating peripersonal spatial representation
In the presence of vision, goal-directed motor acts can trigger spatial remapping, i.e., reference-frame transformations that allow better interaction with targets. However, it is still unclear how peripersonal space is encoded and remapped depending on the availability of visual feedback and on the target position within the individual's reachable space, and which cerebral areas subserve such processes. Here, functional magnetic resonance imaging (fMRI) was used to examine neural activity while healthy young participants performed reach-to-grasp movements with and without visual feedback and at different distances of the target from the effector (near the hand, about 15 cm from the starting position, vs. far from the hand, about 30 cm from the starting position). Brain responses in the superior parietal lobule bilaterally, in the right dorsal premotor cortex, and in the anterior part of the right inferior parietal lobule were significantly greater during visually guided grasping of targets located at the far distance than during grasping of targets located near the hand. In the absence of visual feedback, the inferior parietal lobule exhibited greater activity during grasping of targets at the near than at the far distance. The results suggest that, in the presence of visual feedback, a visuo-motor circuit integrates visuo-motor information when targets are located farther away. Conversely, in the absence of visual feedback, encoding of space may demand multisensory remapping processes, even for more proximal targets.
Gaze fixation improves the stability of expert juggling
Novice and expert jugglers employ different visuomotor strategies: whereas novices look at the balls around their zeniths, experts tend to fixate their gaze at a central location within the pattern (the so-called gaze-through strategy). A gaze-through strategy may reflect visuomotor parsimony, i.e., the use of simpler visuomotor (oculomotor and/or attentional) strategies as afforded by superior tossing accuracy and error corrections. In addition, the more stable gaze during a gaze-through strategy may result in more accurate movement planning by providing a stable base for gaze-centered neural coding of ball motion and movement plans, or for shifts in attention. To determine whether a stable gaze indeed has such beneficial effects on juggling, we examined juggling variability during 3-ball cascade juggling with and without constrained gaze fixation (at various depths) in expert performers (n = 5). Novice jugglers (n = 5) were included for comparison, even though our predictions pertained specifically to expert juggling. We indeed observed that experts, but not novices, juggled significantly less variably when fixating than during unconstrained viewing. Thus, while visuomotor parsimony might still contribute to the emergence of a gaze-through strategy, this study highlights an additional role for improved movement planning. This role may be engendered by gaze-centered coding and/or attentional control mechanisms in the brain.
The Inactivation Principle: Mathematical Solutions Minimizing the Absolute Work and Biological Implications for the Planning of Arm Movements
An important question in the motor control literature is which laws drive biological limb movements. This question has prompted numerous investigations analyzing arm movements in both humans and monkeys. Many theories assume that, among all possible movements, the one actually performed satisfies an optimality criterion. In the framework of optimal control theory, a first approach is to choose a cost function and test whether the proposed model fits the experimental data. A second approach (generally considered the more difficult) is to infer the cost function from behavioral data. The cost proposed here includes a term called the absolute work of forces, reflecting the mechanical energy expenditure. Contrary to most investigations of optimality principles of arm movements, this model has the particularity of using a cost function that is not smooth. First, a mathematical theory related to both direct and inverse optimal control approaches is presented. The first theoretical result is the Inactivation Principle, according to which minimizing a term similar to the absolute work implies simultaneous inactivation of agonistic and antagonistic muscles acting on a single joint, near the time of peak velocity. The second theoretical result is that, conversely, the presence of non-smoothness in the cost function is a necessary condition for the existence of such inactivation. Second, in an experimental study, participants were asked to perform fast vertical arm movements with one, two, and three degrees of freedom. Observed trajectories, velocity profiles, and final postures were accurately simulated by the model. Accordingly, electromyographic signals showed brief simultaneous inactivation of opposing muscles during movements. Thus, assuming that human movements are optimal with respect to a certain integral cost, the minimization of an absolute-work-like cost is supported by experimental observations. Such optimality criteria may apply to a large range of biological movements.
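A schematic form of such a non-smooth cost can be written down for illustration. The symbols below and the smooth "comfort" term are assumptions for the sketch, not necessarily the paper's exact formulation:

```latex
% Illustrative absolute-work-like cost for an n-joint arm movement:
% \tau_i = net torque at joint i, \dot{\theta}_i = joint angular velocity,
% \alpha > 0 weights a smooth comfort term (an assumption of this sketch).
C \;=\; \int_0^T \Big(
  \underbrace{\sum_{i=1}^{n} \big|\, \tau_i(t)\,\dot{\theta}_i(t) \,\big|}_{\text{absolute work (non-smooth)}}
  \;+\; \alpha \sum_{i=1}^{n} \ddot{\theta}_i(t)^2
\Big)\, dt
```

The absolute-value term is non-differentiable wherever \(\tau_i(t)\,\dot{\theta}_i(t) = 0\); as the abstract states, it is precisely this non-smoothness that permits optimal solutions containing a brief interval of simultaneous agonist and antagonist inactivation near peak velocity, when a smooth cost would instead keep torques nonzero throughout.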
Neural Predictors of Gait Stability When Walking Freely in the Real-World
Background: Gait impairments during real-world locomotion are common in neurological diseases. However, very little is currently known about the neural correlates of walking in the real world, or about which brain regions are involved in regulating gait stability and performance. As a first step toward understanding how the neural control of gait may be impaired in neurological conditions such as Parkinson's disease, we investigated how regional brain activation might predict walking performance in an urban environment, and whilst engaging with secondary tasks, in healthy subjects.
Methods: We recorded gait characteristics, including trunk acceleration, and brain activation in fourteen healthy young subjects whilst they walked freely around the university campus (single task), while conversing with the experimenter, and while texting on their smartphone. Neural power spectral density (PSD) was evaluated in three brain regions of interest, namely the pre-frontal cortex (PFC) and the bilateral posterior parietal cortex (right/left PPC). We hypothesized that specific regional neural activation would predict the trunk acceleration data obtained during the different walking conditions.
Results: Vertical trunk acceleration was predicted by gait velocity and left PPC theta (4-7 Hz) band PSD in single-task walking (R-squared = 0.725, p = 0.001) and by gait velocity and left PPC alpha (8-12 Hz) band PSD in walking while conversing (R-squared = 0.727, p = 0.001). Medio-lateral trunk acceleration was predicted by left PPC beta (15-25 Hz) band PSD when walking while texting (R-squared = 0.434, p = 0.010).
Conclusions: We suggest that the left PPC may be involved in sensorimotor integration and gait control during walking in real-world conditions. Frequency-specific coding was operative in the different dual tasks and may be developed into biomarkers of gait deficits in neurological conditions during the performance of these now commonly undertaken dual tasks.
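The regression analyses reported above can be sketched as an ordinary least-squares fit of trunk acceleration on gait velocity and a band-limited PSD predictor, with R-squared as the reported fit statistic. The synthetic data, variable names, and coefficient values below are illustrative assumptions, not the study's data or pipeline.

```python
import numpy as np

def fit_r_squared(X, y):
    """Fit y on X (with intercept) by ordinary least squares
    and return the coefficient of determination R^2."""
    A = np.column_stack([np.ones(len(X)), X])      # add intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares solution
    residuals = y - A @ beta
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative: predict vertical trunk acceleration from gait velocity
# and left-PPC theta-band PSD for 14 synthetic subjects.
rng = np.random.default_rng(0)
velocity = rng.uniform(1.0, 1.6, 14)        # m/s (made-up range)
theta_psd = rng.uniform(0.2, 0.8, 14)       # a.u. (made-up range)
accel = 0.5 * velocity - 0.3 * theta_psd + rng.normal(0.0, 0.02, 14)
r2 = fit_r_squared(np.column_stack([velocity, theta_psd]), accel)
```

A model selection step (which band, which region) would sit on top of such fits; the abstract reports the best predictors per walking condition.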
Non-predictive online spatial coding in the posterior parietal cortex when aiming ahead for catching
- …