
    Exploring auditory-motor interactions in normal and disordered speech

    Auditory feedback plays an important role in speech motor learning and in the online correction of speech movements. Speakers can detect and correct auditory feedback errors at the segmental and suprasegmental levels during ongoing speech. The frontal brain regions that contribute to these corrective movements have also been shown to be more active during speech in persons who stutter (PWS) compared to fluent speakers. Further, various types of altered auditory feedback can temporarily improve the fluency of PWS, suggesting that atypical auditory-motor interactions during speech may contribute to stuttering disfluencies. To investigate this possibility, we have developed and improved Audapter, a software package that enables configurable dynamic perturbation of the spatial and temporal content of the speech auditory signal in real time. Using Audapter, we have measured the compensatory responses of PWS to static and dynamic perturbations of the formant content of auditory feedback and compared these responses with those from matched fluent controls. Our findings indicate deficient utilization of auditory feedback by PWS for short-latency online control of the spatial and temporal parameters of articulation during vowel production and during running speech. These findings provide further evidence that stuttering is associated with aberrant auditory-motor integration during speech. Published version
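
    As a hedged illustration of the kind of measure described above (not Audapter's actual interface), the sketch below quantifies a compensatory response to a formant-feedback perturbation; the function name, trial values, and 200 Hz shift are all invented for the example.

# Hypothetical illustration (not Audapter's API): quantifying compensation
# to a formant perturbation of auditory feedback.
import numpy as np

def percent_compensation(f1_baseline, f1_perturbed, shift_hz):
    """Mean compensatory change in produced F1, as a percentage of the
    applied feedback shift. Opposing the shift yields a positive value."""
    produced_change = np.mean(f1_perturbed) - np.mean(f1_baseline)
    return -100.0 * produced_change / shift_hz

# Example with made-up numbers: feedback F1 shifted up by 200 Hz,
# speaker lowers produced F1 by ~30 Hz on average.
baseline = np.array([620.0, 615.0, 625.0, 618.0])   # F1 (Hz), unperturbed trials
perturbed = np.array([592.0, 588.0, 590.0, 585.0])  # F1 (Hz), shifted-feedback trials
print(f"Compensation: {percent_compensation(baseline, perturbed, 200.0):.1f}%")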

    Reliability of single-subject neural activation patterns in speech production tasks

    Traditional group fMRI (functional magnetic resonance imaging) analyses are not designed to detect individual differences that may be crucial to better understanding speech disorders. Single-subject research could therefore provide a richer characterization of the neural substrates of speech production in development and disease. Before this line of research can be tackled, however, it is necessary to evaluate whether healthy individuals exhibit reproducible brain activation across multiple sessions during speech production tasks. In the present study, we evaluated the reliability and discriminability of cortical functional magnetic resonance imaging data from twenty neurotypical subjects who participated in two experiments involving reading aloud mono- or bisyllabic speech stimuli. Using traditional methods like the Dice and intraclass correlation coefficients, we found that most individuals displayed moderate to high reliability, with exceptions likely due to increased head motion in the scanner. Further, this level of reliability for speech production was not directly correlated with reliable patterns in the underlying average blood oxygenation level dependent signal across the brain. Finally, we found that a novel machine-learning subject classifier could identify these individuals by their speech activation patterns with 97% accuracy from among a dataset of seventy-five subjects. These results suggest that single-subject speech research would yield valid results and that investigations into the reliability of speech activation in people with speech disorders are warranted. Accepted manuscript
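
    The Dice coefficient mentioned above has a simple definition; the sketch below computes it for two thresholded activation maps, assuming boolean voxel masks, simulated data, and an arbitrary threshold (none of which come from the study itself).

# Illustrative sketch: Dice overlap between two sessions' thresholded
# activation maps (binary voxel masks). Threshold and data are made up.
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice = 2|A intersect B| / (|A| + |B|) for boolean arrays of equal shape."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else np.nan

rng = np.random.default_rng(0)
session1 = rng.normal(size=(64, 64, 32))                      # stand-in for t-statistic maps
session2 = 0.7 * session1 + 0.3 * rng.normal(size=(64, 64, 32))
print(f"Dice: {dice_coefficient(session1 > 1.5, session2 > 1.5):.2f}")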

    The neural correlates of speech motor sequence learning

    Speech is perhaps the most sophisticated example of a species-wide movement capability in the animal kingdom, requiring split-second sequencing of approximately 100 muscles in the respiratory, laryngeal, and oral movement systems. Despite the unique role speech plays in human interaction and the debilitating impact of its disruption, little is known about the neural mechanisms underlying speech motor learning. Here, we studied the behavioral and neural correlates of learning new speech motor sequences. Participants repeatedly produced novel, meaningless syllables comprising illegal consonant clusters (e.g., GVAZF) over 2 days of practice. Following practice, participants produced the sequences with fewer errors and shorter durations, indicative of motor learning. Using fMRI, we compared brain activity during production of the learned illegal sequences and novel illegal sequences. Greater activity was noted during production of novel sequences in brain regions linked to non-speech motor sequence learning, including the basal ganglia and pre-supplementary motor area (pre-SMA). Activity during novel sequence production was also greater in brain regions associated with learning and maintaining speech motor programs, including lateral premotor cortex, frontal operculum, and posterior superior temporal cortex. Measures of learning success correlated positively with activity in left frontal operculum and white matter integrity under left posterior superior temporal sulcus. These findings indicate that speech motor sequence learning relies not only on brain areas involved generally in motor sequence learning but also on those associated with feedback-based speech motor learning. Furthermore, learning success is modulated by the integrity of structural connectivity between these motor and sensory brain regions. R01 DC007683 - NIDCD NIH HHS; R01DC007683 - NIDCD NIH HHS
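
    A minimal sketch of the kind of brain-behavior correlation reported above, with invented numbers (not the study's data): a per-participant learning gain correlated with a region-of-interest activation estimate.

# Hypothetical sketch: correlating a per-participant learning measure
# (error-rate reduction across practice) with mean ROI activation.
# All values are invented for illustration.
import numpy as np
from scipy.stats import pearsonr

error_day1 = np.array([0.42, 0.35, 0.50, 0.38, 0.45, 0.30])
error_day2 = np.array([0.20, 0.22, 0.31, 0.18, 0.26, 0.15])
learning_gain = error_day1 - error_day2                      # larger = more improvement

roi_activation = np.array([0.8, 0.5, 0.3, 0.9, 0.6, 1.1])    # e.g., contrast estimates

r, p = pearsonr(learning_gain, roi_activation)
print(f"r = {r:.2f}, p = {p:.3f}")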

    Changes in the McGurk Effect Across Phonetic Contexts

    To investigate the process underlying audiovisual speech perception, the McGurk illusion was examined across a range of phonetic contexts. Two major changes were found. First, the frequency of illusory /g/ fusion percepts increased relative to the frequency of illusory /d/ fusion percepts as vowel context was shifted from /i/ to /a/ to /u/. This trend could not be explained by biases present in perception of the unimodal visual stimuli. However, the change found in the McGurk fusion effect across vowel environments did correspond systematically with changes in second formant frequency patterns across contexts. Second, the order of consonants in illusory combination percepts was found to depend on syllable type. This may be due to differences occurring across syllable contexts in the time courses of inputs from the two modalities, as delaying the auditory track of a vowel-consonant stimulus resulted in a change in the order of consonants perceived. Taken together, these results suggest that the speech perception system either fuses audiovisual inputs into a visually compatible percept with a similar second formant pattern to that of the acoustic stimulus or interleaves the information from different modalities, at a phonemic or subphonemic level, based on their relative arrival times. National Institutes of Health (R01 DC02852)

    Engaging the articulators enhances perception of concordant visible speech movements

    PURPOSE: This study aimed to test whether (and how) somatosensory feedback signals from the vocal tract affect concurrent unimodal visual speech perception. METHOD: Participants discriminated pairs of silent visual utterances of vowels under 3 experimental conditions: (a) normal (baseline) and while holding either (b) a bite block or (c) a lip tube in their mouths. To test the specificity of somatosensory-visual interactions during perception, we assessed discrimination of vowel contrasts optically distinguished based on their mandibular (English /ɛ/-/æ/) or labial (English /u/-French /u/) postures. In addition, we assessed perception of each contrast using dynamically articulating videos and static (single-frame) images of each gesture (at vowel midpoint). RESULTS: Engaging the jaw selectively facilitated perception of the dynamic gestures optically distinct in terms of jaw height, whereas engaging the lips selectively facilitated perception of the dynamic gestures optically distinct in terms of their degree of lip compression and protrusion. Thus, participants perceived visible speech movements in relation to the configuration and shape of their own vocal tract (and possibly their ability to produce covert vowel production-like movements). In contrast, engaging the articulators had no effect when the speaking faces did not move, suggesting that the somatosensory inputs affected perception of time-varying kinematic information rather than changes in target (movement end point) mouth shapes. CONCLUSIONS: These findings suggest that orofacial somatosensory inputs associated with speech production prime premotor and somatosensory brain regions involved in the sensorimotor control of speech, thereby facilitating perception of concordant visible speech movements. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.9911846. R01 DC002852 - NIDCD NIH HHS. Accepted manuscript

    An Investigation of the Effects of Categorization and Discrimination Training on Auditory Perceptual Space

    Psychophysical phenomena such as categorical perception and the perceptual magnet effect indicate that our auditory perceptual spaces are warped for some stimuli. This paper investigates the effects of two different kinds of training on auditory perceptual space. It is first shown that categorization training, in which subjects learn to identify stimuli within a particular frequency range as members of the same category, can lead to a decrease in sensitivity to stimuli in that category. This phenomenon is an example of acquired similarity and apparently has not been previously demonstrated for a category-relevant dimension. Discrimination training with the same set of stimuli was shown to have the opposite effect: subjects became more sensitive to differences in the stimuli presented during training. Further experiments investigated some of the conditions that are necessary to generate the acquired similarity found in the first experiment. The results of these experiments are used to evaluate two neural network models of the perceptual magnet effect. These models, in combination with our experimental results, are used to generate an experimentally testable hypothesis concerning changes in the brain's auditory maps under different training conditions. Alfred P. Sloan Foundation and the National Institute on Deafness and Other Communication Disorders (R29 02852); Air Force Office of Scientific Research (F49620-98-1-0108)
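
    Sensitivity changes of the kind described above are commonly quantified with the signal-detection index d'; the sketch below shows that calculation under assumed hit and false-alarm rates (the paper's exact analysis may differ).

# One common way (assumed, not necessarily the paper's exact analysis) to
# quantify discrimination sensitivity before and after training: d' from
# hit and false-alarm rates in a same/different task.
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity index: z(H) - z(FA)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Made-up example: training increases hits and reduces false alarms.
print(f"pre-training  d' = {d_prime(0.70, 0.30):.2f}")
print(f"post-training d' = {d_prime(0.85, 0.20):.2f}")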

    A search for flares and mass ejections on young late-type stars in the open cluster Blanco-1

    We present a search for stellar activity (flares and mass ejections) in a sample of 28 stars in the young open cluster Blanco-1. We use optical spectra obtained with ESO's VIMOS multi-object spectrograph installed on the VLT. From the total observing time of ~5 hours, we find four Hα flares but no distinct indication of coronal mass ejections (CMEs) on the investigated dK-dM stars. Two flares show "dips" in their light-curves right before their impulsive phases, which are similar to previous discoveries in photometric light-curves of active dMe stars. We estimate an upper limit of <4 CMEs per day per star and discuss this result with respect to a semi-empirical estimation of the CME rate of main-sequence stars. We find that we should have detected at least one CME per star with a mass of 1-15 × 10^16 g depending on the star's X-ray luminosity, but the estimated Hα fluxes associated with these masses are below the detection limit of our observations. We conclude that the parameter which mainly influences the detection of stellar CMEs using the method of Doppler-shifted emission caused by moving plasma is not the spectral resolution or velocity but the flux or mass of the CME. Comment: Accepted for publication in MNRAS, accepted 2014 June 10, received 2014 June 5, in original form 2014 March 24, 14 pages, 5 figures
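
    The detection method referred to above relies on Doppler-shifted emission from moving plasma; the sketch below converts an assumed wavelength shift of the Hα line into a line-of-sight velocity (illustrative numbers only, not the paper's measurements).

# Minimal sketch of the detection principle mentioned above: a bulk velocity
# inferred from the Doppler shift of H-alpha emission. Numbers are illustrative.
C_KM_S = 299_792.458        # speed of light (km/s)
HALPHA_REST_A = 6562.8      # H-alpha rest wavelength (Angstrom)

def doppler_velocity(observed_wavelength_a, rest_wavelength_a=HALPHA_REST_A):
    """Line-of-sight velocity (km/s) from a wavelength shift; positive = redshift."""
    return C_KM_S * (observed_wavelength_a - rest_wavelength_a) / rest_wavelength_a

# An extra emission component blueshifted by ~10 Angstrom would indicate
# plasma moving toward the observer at roughly 450 km/s.
print(f"{doppler_velocity(6552.8):.0f} km/s")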

    Anomalous morphology in left hemisphere motor and premotor cortex of children who stutter

    Stuttering is a neurodevelopmental disorder that affects the smooth flow of speech production. Stuttering onset occurs during a dynamic period of development when children first start learning to formulate sentences. Although most children grow out of stuttering naturally, ∼1% of all children develop persistent stuttering that can lead to significant psychosocial consequences throughout one’s life. To date, few studies have examined the neural bases of stuttering in children who stutter, and even fewer have examined the basis for natural recovery versus persistence of stuttering. Here we report the first study to conduct surface-based analysis of brain morphometric measures in children who stutter. We used FreeSurfer to extract cortical size and shape measures from structural MRI scans collected from the initial year of a longitudinal study involving 70 children (36 stuttering, 34 controls) in the 3–10-year range. The stuttering group was further divided into two groups: persistent and recovered, based on their later longitudinal visits that allowed determination of their eventual clinical outcome. A region of interest analysis that focused on the left hemisphere speech network and a whole-brain exploratory analysis were conducted to examine group differences and group × age interaction effects. We found that the persistent group could be differentiated from the control and recovered groups by reduced cortical thickness in left motor and lateral premotor cortical regions. The recovered group showed an age-related decrease in local gyrification in the left medial premotor cortex (supplementary motor area and pre-supplementary motor area). These results provide strong evidence of a primary deficit in the left hemisphere speech network, specifically involving lateral premotor cortex and primary motor cortex, in persistent developmental stuttering. Results further point to a possible compensatory mechanism involving left medial premotor cortex in those who recover from childhood stuttering. This study was supported by Award Numbers R01DC011277 (SC) and R01DC007683 (FG) from the National Institute on Deafness and other Communication Disorders (NIDCD). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIDCD or the National Institutes of Health. (R01DC011277 - National Institute on Deafness and other Communication Disorders (NIDCD); R01DC007683 - National Institute on Deafness and other Communication Disorders (NIDCD)). Accepted manuscript
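
    A hedged sketch of the kind of model implied by the group × age analysis described above: cortical thickness in a single region tested for group and group-by-age effects on simulated data (not the study's data or exact pipeline).

# Hypothetical sketch: a linear model with a group-by-age interaction on
# cortical thickness for one ROI. Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 70
df = pd.DataFrame({
    "group": rng.choice(["control", "persistent", "recovered"], size=n),
    "age": rng.uniform(3, 10, size=n),
})
# Simulated ROI thickness (mm), made slightly thinner in the persistent group.
df["thickness"] = (2.8 - 0.02 * df["age"]
                   - 0.12 * (df["group"] == "persistent")
                   + rng.normal(0, 0.08, size=n))

model = smf.ols("thickness ~ group * age", data=df).fit()
print(model.summary().tables[1])   # coefficients for group, age, and interaction terms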

    PyTranSpot - A tool for multiband light curve modeling of planetary transits and stellar spots

    Several studies have shown that stellar activity features, such as occulted and non-occulted starspots, can affect the measurement of transit parameters, biasing studies of transit timing variations and transmission spectra. We present PyTranSpot, which we designed to model multiband transit light curves showing starspot anomalies, inferring both transit and spot parameters. The code follows a pixellation approach to model the star with its corresponding limb darkening, spots, and transiting planet on a two-dimensional Cartesian coordinate grid. We combine PyTranSpot with an MCMC framework to study and derive exoplanet transmission spectra, which provides statistically robust values for the physical properties and uncertainties of a transiting star-planet system. We validate PyTranSpot's performance by analyzing eleven synthetic light curves of four different star-planet systems and 20 transit light curves of the well-studied WASP-41b system. We also investigate the impact of starspots on transit parameters and derive wavelength-dependent transit depth values for WASP-41b covering a range of 6200-9200 Å, indicating a flat transmission spectrum. Comment: 17 pages, 22 figures; accepted for publication in Astronomy & Astrophysics
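
    The pixellation approach described above can be illustrated with a minimal sketch (not PyTranSpot itself): a star on a 2D grid with quadratic limb darkening and one circular spot, occulted by a transiting planet; all parameter values below are invented.

# Not PyTranSpot itself: a minimal sketch of the pixellation idea, with
# quadratic limb darkening, one spot, and a planet crossing a 2D stellar grid.
import numpy as np

def pixellated_transit(planet_x, planet_y=0.2, rp=0.1, u1=0.4, u2=0.2,
                       spot=(0.3, 0.1, 0.15, 0.7), n=301):
    """Relative flux for one planet position (all lengths in stellar radii).
    spot = (x, y, radius, contrast); contrast < 1 means darker than photosphere."""
    x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    r2 = x**2 + y**2
    on_star = r2 <= 1.0
    mu = np.sqrt(np.clip(1.0 - r2, 0.0, 1.0))
    intensity = np.where(on_star, 1.0 - u1 * (1 - mu) - u2 * (1 - mu) ** 2, 0.0)

    sx, sy, sr, contrast = spot
    intensity[(x - sx) ** 2 + (y - sy) ** 2 <= sr**2] *= contrast

    unobscured = intensity.sum()
    blocked = (x - planet_x) ** 2 + (y - planet_y) ** 2 <= rp**2
    return (intensity * ~blocked).sum() / unobscured

# Light curve along the transit chord; the spot crossing shows up as a small bump.
for px in np.linspace(-1.3, 1.3, 9):
    print(f"x = {px:+.2f}  flux = {pixellated_transit(px):.5f}")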