
    Neural Modeling and Imaging of the Cortical Interactions Underlying Syllable Production

    This paper describes a neural model of speech acquisition and production that accounts for a wide range of acoustic, kinematic, and neuroimaging data concerning the control of speech movements. The model is a neural network whose components correspond to regions of the cerebral cortex and cerebellum, including premotor, motor, auditory, and somatosensory cortical areas. Computer simulations of the model verify its ability to account for compensation to lip and jaw perturbations during speech. Specific anatomical locations of the model's components are estimated, and these estimates are used to simulate fMRI experiments of simple syllable production with and without jaw perturbations. National Institute on Deafness and Other Communication Disorders (R01 DC02852, R01 DC01925).
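    The compensation result can be pictured with a toy feedforward-plus-feedback controller (a deliberately simplified sketch, not the model in the paper; the gains, dynamics, and perturbation below are all hypothetical):

```python
# Minimal sketch: a feedforward command plus sensory-feedback correction
# partially compensates for a sustained perturbation (e.g., a jaw load).
# Illustrative only; not the paper's model. All parameters are invented.

def simulate(steps=50, target=1.0, perturbation=-0.3, feedback_gain=0.5):
    position = 0.0
    feedforward = target  # learned feedforward command aimed at the target
    trajectory = []
    for t in range(steps):
        # Feedback term: compare current state against the sensory target
        error = target - position
        command = feedforward + feedback_gain * error
        # Sustained perturbation switched on after step 10
        external = perturbation if t >= 10 else 0.0
        # First-order plant dynamics
        position = 0.5 * position + 0.5 * (command + external)
        trajectory.append(position)
    return trajectory

traj = simulate()
# With feedback_gain = 0.5 the steady-state error under the perturbation
# shrinks from 0.3 (no feedback) to 0.2: partial, not full, compensation.
```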

    The Neural Particle Filter

    The robust estimation of dynamically changing features, such as the position of prey, is one of the hallmarks of perception. On an abstract, algorithmic level, nonlinear Bayesian filtering, i.e., the estimation of temporally changing signals based on the history of observations, provides a mathematical framework for dynamic perception in real time. Since the general, nonlinear filtering problem is analytically intractable, particle filters are considered among the most powerful approaches to approximating the solution numerically. Yet, these algorithms typically rely on importance weights, and it remains an unresolved question how the brain could implement such an inference strategy with a neuronal population. Here, we propose the Neural Particle Filter (NPF), a weight-less particle filter that can be interpreted as the neuronal dynamics of a recurrently connected neural network that receives feed-forward input from sensory neurons and represents the posterior probability distribution in terms of samples. Specifically, this algorithm bridges the gap between the computational task of online state estimation and an implementation that allows networks of neurons in the brain to perform nonlinear Bayesian filtering. The model not only captures the properties of temporal and multisensory integration according to Bayesian statistics, but also allows online learning with a maximum likelihood approach. With an example from multisensory integration, we demonstrate that the numerical performance of the model is adequate to account for both filtering and identification problems. Due to its weight-less approach, our algorithm alleviates the 'curse of dimensionality' and thus outperforms conventional, weighted particle filters in higher dimensions for a limited number of particles.
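    The core idea of a weight-less particle filter can be sketched as particles driven by the prior dynamics plus an observation-dependent feedback term, in place of importance weighting and resampling (an illustrative toy, not the paper's exact equations; the feedback gain is hand-tuned here, whereas the NPF learns such parameters, e.g., by maximum likelihood):

```python
import numpy as np

# Weight-less particle filtering in the spirit of the NPF: every particle
# follows the prior dynamics plus a term pulling it toward the observation.
# No importance weights, no resampling. Illustrative sketch only; the gain
# below is a hand-tuned constant, not a learned quantity.

rng = np.random.default_rng(0)

def npf_step(particles, y, dt=0.01, gain=5.0, sigma_x=1.0):
    """One Euler step of the particle dynamics."""
    drift = -particles                   # prior: Ornstein-Uhlenbeck process
    innovation = gain * (y - particles)  # feedback, observation model g(x)=x
    noise = sigma_x * np.sqrt(dt) * rng.standard_normal(particles.shape)
    return particles + (drift + innovation) * dt + noise

# Track a hidden Ornstein-Uhlenbeck process from noisy observations
n_particles, n_steps = 100, 2000
x = 0.0
particles = rng.standard_normal(n_particles)
sq_errors = []
for t in range(n_steps):
    x += -x * 0.01 + 0.1 * rng.standard_normal()  # hidden state
    y = x + 0.2 * rng.standard_normal()           # noisy observation
    particles = npf_step(particles, y)
    sq_errors.append((particles.mean() - x) ** 2)

mse = np.mean(sq_errors[200:])  # posterior-mean MSE after burn-in
```

The posterior mean (here, the particle average) tracks the hidden state far more tightly than the hidden state's own stationary spread, showing that the feedback term alone, without weights, suffices for filtering in this simple setting.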

    Multisensory integration in dynamical behaviors: maximum likelihood estimation across bimanual skill learning

    Optimal integration of different sensory modalities weights each modality as a function of its degree of certainty (maximum likelihood). Humans rely on near-optimal integration in decision-making tasks (involving, e.g., auditory, visual, and/or tactile afferents), and some support for these processes has also been provided for discrete sensorimotor tasks. Here, we tested optimal integration during the continuous execution of a motor task, using a cyclical bimanual coordination pattern in which feedback was provided by means of proprioception and augmented visual feedback (AVF, the position of both wrists being displayed as the orthogonal coordinates of a single cursor). Assuming maximum likelihood integration, the following predictions were addressed: (1) the coordination variability with both AVF and proprioception available is smaller than with only one of the two modalities, and should reach an optimal level; (2) if the AVF is artificially corrupted by noise, variability should increase but saturate toward the level without AVF; (3) if the AVF is imperceptibly phase shifted, the stabilized pattern should be partly adapted to compensate for this phase shift, whereby the amount of compensation reflects the weight assigned to AVF in the computation of the integrated signal. Whereas performance variability gradually decreased over 5 days of practice, we showed that these model-based predictions were already observed on the first day. This suggests not only that the performer integrated proprioceptive feedback and AVF online during task execution by tending to optimize the signal statistics, but also that this occurred before reaching an asymptotic performance level.
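    The maximum-likelihood rule underlying predictions (1)-(3) is the standard inverse-variance (precision-weighted) cue-combination formula: the fused estimate is a reliability-weighted average, and its variance is below that of either cue alone. A minimal sketch with hypothetical numbers (the variances below are invented, not the study's data):

```python
# Inverse-variance (maximum-likelihood) fusion of two cues.
# Illustrative numbers only; cue values and variances are hypothetical.

def fuse(x_a, var_a, x_b, var_b):
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)  # weight ~ cue precision
    x = w_a * x_a + (1 - w_a) * x_b              # precision-weighted average
    var = (var_a * var_b) / (var_a + var_b)      # fused variance < both
    return x, var

# Proprioception (x_a) vs. augmented visual feedback (x_b):
# the AVF is 4x more reliable, so the fused estimate lies closer to it.
x_fused, var_fused = fuse(x_a=10.0, var_a=4.0, x_b=12.0, var_b=1.0)

# Prediction (2) in miniature: corrupting the AVF with heavy noise
# (var_b -> large) makes the fused estimate saturate toward the
# proprioception-only estimate and variance.
x_corrupt, var_corrupt = fuse(x_a=10.0, var_a=4.0, x_b=12.0, var_b=1e9)
```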

    Brittany Bernal - Sensorimotor Adaptation of Speech Through a Virtually Shortened Vocal Tract

    The broad objective of this line of research is to understand how auditory feedback manipulations may be used to elicit involuntary changes in speech articulation. We examine speech sensorimotor adaptation to supplement the development of speech rehabilitation applications that benefit from this learning phenomenon. By manipulating the acoustics of one's auditory feedback, it is possible to elicit involuntary changes in speech articulation. We seek to understand how virtually manipulating participants' perception of vowel space affects their speech movements by assessing acoustic variables such as formant frequency changes. Participants speak through a digital audio processing device that virtually alters the perceived size of their vocal tract. It is hypothesized that this modification to auditory feedback will facilitate adaptive changes in motor behavior as indicated by acoustic changes resulting from speech articulation. This study will determine how modifying the perception of vocal tract size affects articulatory behavior, indicated by changes in formant frequencies and changes in vowel space area. This work will also determine if and how the size of the virtual vowel space affects the magnitude and direction of sensorimotor adaptation for speech. The ultimate aim is to determine how important it is for the virtual vowel space to mimic the talker's real vowel space, and whether or not perturbing the size of the perceived vowel space may facilitate or impede involuntary adaptive learning for speech. Sensorimotor Adaptation of Speech Through a Virtually Shortened Vocal Tract by Brittany Bernal is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
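    The expected direction of the acoustic effect follows from the textbook uniform-tube model of the vocal tract: for a tube closed at the glottis and open at the lips, resonances scale inversely with tract length, so a virtual shortening raises all formants by a common factor (back-of-the-envelope values only, not the study's stimuli or data):

```python
# Uniform-tube approximation: formants of a tube closed at one end are
# F_n = (2n - 1) * c / (4 * L). Shortening L scales every formant up by
# the same factor. Textbook illustration; values are approximations.

C = 35000.0  # speed of sound in warm, moist air, cm/s (approx.)

def formants(length_cm, n=3):
    return [(2 * k - 1) * C / (4 * length_cm) for k in range(1, n + 1)]

adult = formants(17.5)            # ~[500, 1500, 2500] Hz for a 17.5 cm tract
shortened = formants(17.5 * 0.9)  # a 10% virtual shortening
scale = shortened[0] / adult[0]   # all formants rise by the same ~1.11x
```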

    Robot pain: a speculative review of its functions

    Given the scarce bibliography dealing explicitly with robot pain, this chapter enriches its review with related research on robot behaviours and capacities in which pain could play a role. It is shown that all such roles, ranging from punishment to intrinsic motivation and planning knowledge, can be formulated within the unified framework of reinforcement learning. Peer reviewed. Postprint (author's final draft).
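    The punishment role of pain maps directly onto a negative reward in reinforcement learning; a toy illustration (entirely schematic: the environment, rewards, and parameters below are invented, not taken from the chapter):

```python
import random

# Toy Q-learning agent on a 1-D track: "pain" is a strong negative reward
# at one cell on the way to a rewarding goal. The agent learns that the
# goal is still worth the transient pain. Entirely schematic illustration.

random.seed(0)
N, GOAL, PAIN = 6, 5, 2             # states 0..5; pain lies on the path
Q = [[0.0, 0.0] for _ in range(N)]  # actions: 0 = left, 1 = right

def step(s, a):
    s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
    if s2 == GOAL:
        return s2, 10.0, True       # reaching the goal ends the episode
    if s2 == PAIN:
        return s2, -5.0, False      # pain: punishing but not terminal
    return s2, -0.1, False          # small cost per step

for _ in range(2000):               # epsilon-greedy Q-learning
    s = 0
    for _ in range(50):
        if random.random() < 0.1:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda act: Q[s][act])
        s2, r, done = step(s, a)
        Q[s][a] += 0.1 * (r + 0.9 * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break
```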