An error-tuned model for sensorimotor learning
Current models of sensorimotor control posit that motor commands are generated by combining multiple modules, which may consist of internal models, motor primitives, or motor synergies. The mechanisms that select modules based on task requirements and modify their output during learning are therefore critical to our understanding of sensorimotor control. Here we develop a novel modular architecture for multi-dimensional tasks in which a set of fixed primitives are each able to compensate for errors in a single direction in the task space. The contribution of the primitives to the motor output is determined by both top-down contextual information and bottom-up error information. We implement this model for a task in which subjects learn to manipulate a dynamic object whose orientation can vary. First, visual information regarding the context (the orientation of the object) allows the appropriate primitives to be engaged; this top-down module selection is implemented by a Gaussian function tuned to the visual orientation of the object. Second, each module's contribution adapts across trials in proportion to its ability to decrease the current kinematic error. Specifically, adaptation is implemented by cosine tuning of primitives to the current direction of the error, which we show to be theoretically optimal for reducing error. This error-tuned model makes two novel predictions. First, interference should occur between alternating dynamics only when the kinematic errors associated with each oppose one another; in contrast, dynamics which lead to orthogonal errors should not interfere. Second, kinematic errors alone should be sufficient to engage the appropriate modules, even in the absence of the contextual information normally provided by vision. We confirm both of these predictions experimentally and show that the model can also account for data from previous experiments.
Our results suggest that two interacting processes account for module selection during sensorimotor control and learning. This work was financially supported by the Wellcome Trust (to DMW; WT097803MA, http://www.wellcome.ac.uk), the Royal Society Noreen Murray Professorship in Neurobiology (to DMW; https://royalsociety.org), the Natural Sciences and Engineering Research Council of Canada (to JRF; RGPIN/04837, http://www.nserc.ca), and the Canadian Institutes of Health Research (to JRF; 82837, http://www.cihr.ca). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
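As a loose sketch of the cosine-tuned adaptation rule described above (this is not the authors' implementation; the primitive directions, learning rate, and exact update form are illustrative assumptions):

```python
import numpy as np

def cosine_tuned_update(weights, primitive_dirs, error, lr=0.1):
    """Adapt each primitive's contribution in proportion to the cosine of
    the angle between its preferred direction and the current error.
    primitive_dirs: (n, d) array of unit vectors; error: (d,) kinematic error."""
    err_norm = np.linalg.norm(error)
    if err_norm == 0.0:
        return weights  # nothing to correct
    tuning = primitive_dirs @ (error / err_norm)  # cosine of each angle
    return weights + lr * err_norm * tuning

# four primitives tuned to the cardinal directions of a 2-D task space
dirs = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
w = cosine_tuned_update(np.zeros(4), dirs, error=np.array([1.0, 0.0]))
```

Note that an error along the x axis leaves the y-tuned primitives untouched, which mirrors the model's prediction that dynamics producing orthogonal errors should not interfere.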
Neural Modeling and Imaging of the Cortical Interactions Underlying Syllable Production
This paper describes a neural model of speech acquisition and production that accounts for a wide range of acoustic, kinematic, and neuroimaging data concerning the control of speech movements. The model is a neural network whose components correspond to regions of the cerebral cortex and cerebellum, including premotor, motor, auditory, and somatosensory cortical areas. Computer simulations of the model verify its ability to account for compensation to lip and jaw perturbations during speech. Specific anatomical locations of the model's components are estimated, and these estimates are used to simulate fMRI experiments of simple syllable production with and without jaw perturbations. National Institute on Deafness and Other Communication Disorders (R01 DC02852, R01 DC01925).
The Neural Particle Filter
The robust estimation of dynamically changing features, such as the position of prey, is one of the hallmarks of perception. On an abstract, algorithmic level, nonlinear Bayesian filtering, i.e. the estimation of temporally changing signals based on the history of observations, provides a mathematical framework for dynamic perception in real time. Since the general, nonlinear filtering problem is analytically intractable, particle filters are considered among the most powerful approaches to approximating the solution numerically. Yet, these algorithms prevalently rely on importance weights, and thus it remains an unresolved question how the brain could implement such an inference strategy with a neuronal population. Here, we propose the Neural Particle Filter (NPF), a weight-less particle filter that can be interpreted as the neuronal dynamics of a recurrently connected neural network that receives feed-forward input from sensory neurons and represents the posterior probability distribution in terms of samples. Specifically, this algorithm bridges the gap between the computational task of online state estimation and an implementation that allows networks of neurons in the brain to perform nonlinear Bayesian filtering. The model captures not only the properties of temporal and multisensory integration according to Bayesian statistics, but also allows online learning with a maximum likelihood approach. With an example from multisensory integration, we demonstrate that the numerical performance of the model is adequate to account for both filtering and identification problems. Due to the weight-less approach, our algorithm alleviates the 'curse of dimensionality' and thus outperforms conventional, weighted particle filters in higher dimensions for a limited number of particles.
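A crude discrete-time sketch of such a weight-less filter (not the paper's derivation; the gain, the toy dynamics, and all parameters below are assumptions) propagates every equally weighted particle through the prior dynamics plus an observation-driven correction:

```python
import numpy as np

rng = np.random.default_rng(0)

def npf_step(particles, y, f, g, gain, proc_std, dt=0.01):
    """One Euler step of a weight-less particle filter: each particle
    follows the prior drift f, is pulled toward the observation y through
    a feed-forward gain, and diffuses with the process noise. The equally
    weighted particle cloud itself samples the posterior."""
    drift = f(particles) * dt
    correction = gain * (y - g(particles)) * dt
    noise = proc_std * np.sqrt(dt) * rng.standard_normal(particles.shape)
    return particles + drift + correction + noise

# toy example: track a scalar Ornstein-Uhlenbeck state from a constant observation
f = lambda x: -x      # prior drift
g = lambda x: x       # identity observation model
particles = rng.standard_normal(100)
for _ in range(500):
    particles = npf_step(particles, y=1.0, f=f, g=g, gain=5.0, proc_std=0.5)
estimate = particles.mean()  # posterior mean, pulled toward the observation
```

With these toy parameters the effective drift is (5 - 6x), so the particle cloud settles around 5/6, between the prior mean (0) and the observation (1), with no resampling or importance weights involved.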
Multisensory integration in dynamical behaviors: maximum likelihood estimation across bimanual skill learning
Optimal integration of different sensory modalities weights each modality as a function of its degree of certainty (maximum likelihood). Humans rely on near-optimal integration in decision-making tasks (involving e.g., auditory, visual, and/or tactile afferents), and some support for these processes has also been provided for discrete sensorimotor tasks. Here, we tested optimal integration during the continuous execution of a motor task, using a cyclical bimanual coordination pattern in which feedback was provided by means of proprioception and augmented visual feedback (AVF, the position of both wrists being displayed as the orthogonal coordinates of a single cursor). Assuming maximum likelihood integration, the following predictions were addressed: (1) the coordination variability with both AVF and proprioception available is smaller than with only one of the two modalities, and should reach an optimal level; (2) if the AVF is artificially corrupted by noise, variability should increase but saturate toward the level without AVF; (3) if the AVF is imperceptibly phase shifted, the stabilized pattern should be partly adapted to compensate for this phase shift, whereby the amount of compensation reflects the weight assigned to AVF in the computation of the integrated signal. Whereas performance variability gradually decreased over 5 days of practice, we showed that these model-based predictions were already observed on the first day. This suggests not only that the performer integrated proprioceptive feedback and AVF online during task execution by tending to optimize the signal statistics, but also that this occurred before reaching an asymptotic performance level.
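The maximum-likelihood weighting invoked above has a standard closed form for independent Gaussian cues: each estimate is weighted by its inverse variance, and the fused variance is smaller than either cue's alone. A minimal sketch (the numbers are purely illustrative, not data from the study):

```python
import numpy as np

def ml_integrate(estimates, variances):
    """Maximum-likelihood fusion of independent noisy cues: weight each
    cue by its reliability (inverse variance). Returns the combined
    estimate and its variance, which never exceeds any single cue's."""
    w = 1.0 / np.asarray(variances, dtype=float)
    est = float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))
    var = float(1.0 / np.sum(w))
    return est, var

# hypothetical proprioceptive vs. augmented-visual estimates of the same quantity
est, var = ml_integrate([10.0, 12.0], [4.0, 1.0])
# est = (10/4 + 12/1) / (1/4 + 1/1) = 11.6; var = 1 / (1/4 + 1/1) = 0.8
```

The combined estimate sits closer to the more reliable cue, and the combined variance (0.8) is below both single-cue variances, which is exactly prediction (1) above.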
Brittany Bernal - Sensorimotor Adaptation of Speech Through a Virtually Shortened Vocal Tract
The broad objective of this line of research is to understand how auditory feedback manipulations may be used to elicit involuntary changes in speech articulation. We examine speech sensorimotor adaptation to support the development of speech rehabilitation applications that benefit from this learning phenomenon. We seek to understand how virtually manipulating participants’ perception of vowel space affects their speech movements by assessing acoustic variables such as formant frequency changes. Participants speak through a digital audio processing device that virtually alters the perceived size of their vocal tract. It is hypothesized that this modification to auditory feedback will facilitate adaptive changes in motor behavior, as indicated by acoustic changes resulting from speech articulation. This study will determine how modifying the perception of vocal tract size affects articulatory behavior, as indicated by changes in formant frequencies and in vowel space area. It will also determine if and how the size of the virtual vowel space affects the magnitude and direction of sensorimotor adaptation for speech. The ultimate aim is to determine how important it is for the virtual vowel space to mimic the talker’s real vowel space, and whether perturbing the size of the perceived vowel space may facilitate or impede involuntary adaptive learning for speech.
Sensorimotor Adaptation of Speech Through a Virtually Shortened Vocal Tract by Brittany Bernal is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
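As background for why a virtually shortened tract shifts the acoustics: for an idealized uniform tube closed at the glottis, the formants follow the quarter-wave relation F_k = (2k - 1)c / 4L, so shortening L raises every formant. A sketch under that textbook idealization (the tract lengths are illustrative, not participants' measurements):

```python
def tube_formants(length_m, n=3, c=343.0):
    """Resonances of a uniform tube closed at one end (quarter-wave
    resonator): F_k = (2k - 1) * c / (4 * L). A shorter tube, like a
    virtually shortened vocal tract, raises all formant frequencies."""
    return [(2 * k - 1) * c / (4.0 * length_m) for k in range(1, n + 1)]

f_long = tube_formants(0.17)   # roughly adult-length tract, ~504/1513/2522 Hz
f_short = tube_formants(0.15)  # virtually shortened tract, all formants higher
```

This is why formant frequencies and vowel space area are natural acoustic variables for quantifying the adaptation described above.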
Robot pain: a speculative review of its functions
Given the scarce bibliography dealing explicitly with robot pain, this chapter enriches its review with related research on robot behaviours and capacities in which pain could play a role. It is shown that all such roles, ranging from punishment to intrinsic motivation and planning knowledge, can be formulated within the unified framework of reinforcement learning. Peer Reviewed. Postprint (author's final draft).
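To make the reinforcement-learning framing concrete, one minimal reading of "pain as punishment" is a negative reward that shapes action values. The toy bandit below is our own illustration, not a model from the reviewed literature:

```python
import random

def learn_withdrawal(trials=500, alpha=0.1, eps=0.1, seed=0):
    """Toy 'pain as punishment' sketch: touching a damaging stimulus
    yields a negative reward (pain), withdrawing yields zero. A simple
    value update teaches the agent to prefer withdrawal."""
    rng = random.Random(seed)
    q = {"touch": 0.0, "withdraw": 0.0}
    for _ in range(trials):
        if rng.random() < eps:
            action = rng.choice(list(q))   # occasional exploration
        else:
            action = max(q, key=q.get)     # exploit current values
        reward = -1.0 if action == "touch" else 0.0  # pain signal
        q[action] += alpha * (reward - q[action])
    return q

q = learn_withdrawal()  # q["touch"] ends well below q["withdraw"]
```

The other roles the chapter surveys, such as intrinsic motivation or planning knowledge, would correspond to different uses of the same signal (e.g., shaping exploration or pruning predicted-painful branches of a plan) within this framework.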