
    Brittany Bernal - Sensorimotor Adaptation of Vowel Production in Stop Consonant Contexts

    The purpose of this research is to measure the compensatory and adaptive articulatory responses to shifted formants in auditory feedback and to compare the amount of sensorimotor learning that takes place when speakers say the words /pep/ and /tet/. These words were chosen in order to analyze the coarticulatory effects of the voiceless consonants /p/ and /t/ on sensorimotor adaptation of the vowel /e/. The formant perturbations were done using the Audapt software, which takes an input speech sample and plays it back to the speaker in real time via headphones. Formants are high-energy acoustic resonance patterns, measured in hertz, that reflect the positions of the articulators during the production of speech sounds. The two lowest-frequency formants (F1 and F2) can uniquely distinguish among the vowels of American English. For this experiment, Audapt shifted F1 down and F2 up, and speakers who adapt were expected to shift their productions in the direction opposite to the perturbation. The formant patterns and vowel boundaries were analyzed using the TF32 and S+ software, which led to conclusions about the adaptive responses. Manipulating auditory feedback by shifting formant values is hypothesized to elicit sensorimotor adaptation, a form of short-term motor learning. The amount of adaptation is expected to be greater for /pep/ than for /tet/ because there is less competition for articulatory placement of the tongue during production of bilabial consonants. This methodology could be further developed to help those with motor speech disorders remedy their speech errors with much less conscious effort than traditional therapy techniques.
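
    The abstract does not describe Audapt's internal signal processing, but the formant manipulation it refers to can be sketched with a standard LPC-based estimate of F1 and F2. Below is a minimal, illustrative Python sketch (not Audapt itself); the file name vowel.wav, the LPC order, and the 20% shift magnitudes are assumptions for illustration only.

```python
# Minimal sketch (not Audapt): estimate F1/F2 of a vowel via LPC root-solving,
# then apply the perturbation direction used in the study (F1 down, F2 up).
import numpy as np
import librosa

def estimate_formants(y, sr, order=12):
    """Return candidate formant frequencies (Hz) from the roots of an LPC fit."""
    a = librosa.lpc(y, order=order)           # LPC polynomial coefficients
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]         # keep one root per conjugate pair
    freqs = np.angle(roots) * sr / (2 * np.pi)
    return np.sort(freqs[freqs > 90])         # drop near-DC roots, low to high

y, sr = librosa.load("vowel.wav", sr=16000)   # hypothetical recording of /e/
f1, f2 = estimate_formants(y, sr)[:2]

# Perturbation direction from the study: F1 shifted down, F2 shifted up.
# The abstract gives no shift magnitudes; 20% here is purely illustrative.
f1_fb, f2_fb = f1 * 0.8, f2 * 1.2
print(f"F1 {f1:.0f} -> {f1_fb:.0f} Hz, F2 {f2:.0f} -> {f2_fb:.0f} Hz")
```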

    Listeners normalize speech for contextual speech rate even without an explicit recognition task

    Speech can be produced at different rates. Listeners take this rate variation into account by normalizing vowel duration for contextual speech rate: an ambiguous Dutch word /m?t/ is perceived as short /mAt/ when embedded in a slow context, but as long /ma:t/ in a fast context. Whilst some have argued that this rate normalization involves low-level automatic perceptual processing, there is also evidence that it arises at higher-level cognitive processing stages, such as decision making. Prior research on rate-dependent speech perception has only used explicit recognition tasks to investigate the phenomenon, involving both perceptual processing and decision making. This study tested whether speech rate normalization can be observed without explicit decision making, using a cross-modal repetition priming paradigm. Results show that a fast precursor sentence makes an embedded ambiguous prime (/m?t/) sound (implicitly) more /a:/-like, facilitating lexical access to the long target word "maat" in an (explicit) lexical decision task. This result suggests that rate normalization is automatic, taking place even in the absence of an explicit recognition task. Thus, rate normalization is placed within the realm of everyday spoken conversation, where explicit categorization of ambiguous sounds is rare.
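
    The normalization effect described here, the same physical vowel heard as short after a slow context and long after a fast context, can be made concrete with a toy duration-normalization rule. The sketch below is not the authors' model; the threshold and speech rates are invented for illustration.

```python
# Toy illustration of rate normalization: an identical vowel duration is
# categorized as long /a:/ after a fast context and short /A/ after a slow one.

def categorize_vowel(vowel_ms: float, context_syllables_per_s: float) -> str:
    """Label an ambiguous Dutch /A/-/a:/ vowel relative to contextual rate."""
    mean_syllable_ms = 1000.0 / context_syllables_per_s
    relative_duration = vowel_ms / mean_syllable_ms   # duration re: context
    # Illustrative criterion: vowels long relative to their context sound /a:/-like.
    return '/a:/ (long, "maat")' if relative_duration > 0.55 else '/A/ (short, "mat")'

ambiguous_ms = 120.0  # physically identical vowel in both contexts
print(categorize_vowel(ambiguous_ms, context_syllables_per_s=6.5))  # fast -> /a:/
print(categorize_vowel(ambiguous_ms, context_syllables_per_s=3.5))  # slow -> /A/
```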

    The Sensitivity of Auditory-Motor Representations to Subtle Changes in Auditory Feedback While Singing

    Singing requires accurate control of the fundamental frequency (F0) of the voice. This study examined trained singers' and untrained singers' (nonsingers') sensitivity to subtle manipulations in auditory feedback and the subsequent effect on the mapping between F0 feedback and vocal control. Participants produced the consonant-vowel syllable /ta/ while receiving auditory feedback that was shifted up and down in frequency. Results showed that singers and nonsingers compensated to a similar degree when presented with frequency-altered feedback (FAF); however, singers' F0 values were consistently closer to the intended pitch target. Moreover, singers initiated their compensatory responses when auditory feedback was shifted up or down by 6 cents or more, whereas nonsingers began compensating when feedback was shifted up by 26 cents or down by 22 cents. Additionally, examination of the first 50 ms of vocalization indicated that, during FAF, participants commenced subsequent vocal utterances near the F0 value produced on previous shift trials. Interestingly, nonsingers commenced F0 productions below the pitch target and increased their F0 until they matched the note. Thus, singers and nonsingers both rely on an internal model to regulate voice F0, but singers' models appear to be more sensitive to subtle discrepancies in auditory feedback.
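
    The detection thresholds above are reported in cents, i.e. hundredths of a semitone. A short sketch of the underlying conversion follows; the 220 Hz pitch target is an assumed value, not one taken from the study.

```python
# Cents <-> frequency conversion behind the reported thresholds.
# The 220 Hz target is an assumption for illustration, not from the study.
import math

def shift_hz(f0_hz: float, cents: float) -> float:
    """Frequency obtained by shifting f0_hz by a given number of cents."""
    return f0_hz * 2.0 ** (cents / 1200.0)

def cents_between(f_hz: float, f0_hz: float) -> float:
    """Interval between two frequencies, in cents."""
    return 1200.0 * math.log2(f_hz / f0_hz)

target = 220.0  # Hz (A3), assumed pitch target
for cents in (6, -6, 26, -22):  # singers' vs. nonsingers' compensation onsets
    f = shift_hz(target, cents)
    print(f"{cents:+d} cents: {target:.1f} Hz -> {f:.2f} Hz "
          f"(check: {cents_between(f, target):+.1f} cents)")
```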

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, which contains a description of the program and instructions for its use.

    The influences and outcomes of phonological awareness: a study of MA, PA and auditory processing in pre-readers with a family risk of dyslexia

    The direct influence of phonological awareness (PA) on reading outcomes has been widely demonstrated, yet PA may also exert indirect influence on reading outcomes through other cognitive variables such as morphological awareness (MA). However, PA's own development is dependent on, and influenced by, many extraneous variables such as auditory processing, which could ultimately impact reading outcomes. In a group of pre-reading children with a family risk of dyslexia and low-risk controls, this study examines PA's relationship at various grain sizes (syllable, onset/rime and phoneme) with measures of auditory processing (frequency modulation (FM) detection and an amplitude rise-time (RT) task) and with MA, independent of reading experience. Group analysis revealed significant differences between high- and low-risk children on measures of MA, and on PA at all grain sizes, while high-risk children showed a trend toward lower RT thresholds compared with controls. Correlational analysis demonstrated that MA is related to the composite PA score and to syllable awareness. Group differences on MA and PA were then re-examined with PA and MA, respectively, included as control variables. Results exposed PA as a relevant component of MA, independent of reading experience.
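
    The re-examination of group differences with PA and MA as control variables corresponds to an ANCOVA-style model. The sketch below shows one way such an analysis might look in Python with statsmodels; the data file and column names (group, MA, PA_composite) are hypothetical.

```python
# ANCOVA-style sketch of the control-variable reanalysis described above,
# run on a hypothetical data frame; file and column names are assumptions.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("prereaders.csv")  # hypothetical: group, MA, PA_composite, ...

# Does the high-/low-risk group difference in MA survive controlling for PA?
model_ma = smf.ols("MA ~ C(group) + PA_composite", data=df).fit()
print(sm.stats.anova_lm(model_ma, typ=2))

# Conversely, the group difference in PA while controlling for MA.
model_pa = smf.ols("PA_composite ~ C(group) + MA", data=df).fit()
print(sm.stats.anova_lm(model_pa, typ=2))
```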

    Speech rhythms and multiplexed oscillatory sensory coding in the human brain

    Cortical oscillations are likely candidates for the segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right auditory cortex, and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase-reset auditory cortex oscillations, thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., between brain and speech, and within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that the segmentation and coding of speech rely on a nested hierarchy of entrained cortical oscillations.
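
    One of the measures described here, phase entrainment between the speech envelope and band-limited cortical activity, can be sketched with a Hilbert-transform phase-locking value. The code below is a simplified illustration on synthetic signals, not the paper's MEG pipeline; the sampling rate, band edges, and signal names are assumptions.

```python
# Simplified sketch of speech-brain phase entrainment via a phase-locking value
# (PLV) in the theta band; synthetic signals stand in for envelope and MEG data.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter between lo and hi Hz."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def phase_locking(x, y, lo, hi, fs):
    """PLV between two signals in a given band (e.g. theta, 4-8 Hz)."""
    px = np.angle(hilbert(bandpass(x, lo, hi, fs)))
    py = np.angle(hilbert(bandpass(y, lo, hi, fs)))
    return np.abs(np.mean(np.exp(1j * (px - py))))

fs = 200                                   # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Toy 5 Hz speech envelope and a noisy, phase-shifted "auditory cortex" trace.
envelope = 0.5 + 0.5 * np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(t.size)
meg = np.sin(2 * np.pi * 5 * t + 0.4) + 0.5 * rng.standard_normal(t.size)

print("theta-band PLV:", phase_locking(envelope, meg, 4, 8, fs))
```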