
    The build-up of auditory stream segregation in adult cochlear implant users: Effect of differences in frequency and amplitude-modulation rate

    This study used an objective approach to evaluate the effects of inter-subsequence frequency difference and amplitude-modulation rate on the build-up of stream segregation in CI users. Six post-lingually deafened CI users, aged 18 to 75 years, were studied and compared to four normal-hearing (NH) listeners in the same age range. Repeated pairs of A and B noise bursts were adapted, with modifications and additional conditions, from previous work (Nie et al., 2014); the A and B bursts were narrow-band noises carrying sinusoidal amplitude modulation (AM). The A and B bursts in a stimulus sequence differed in the center frequency of the noise band, in the AM rate, or in both. Subjects identified a deviant in a rhythmic stream, and performance (d′) reflected the strength of stream segregation. The build-up effect was assessed by comparing performance on long and short sequence durations. Results reveal that both CI users and NH listeners showed evidence of a build-up effect; however, NH listeners showed stronger stream segregation abilities. When both groups were analyzed together, duration had the strongest effect in the 16-10 condition and the weakest effect in the 10-10 condition, which could indicate that frequency separation is a cue for the build-up effect. Frequency separation elicited stream segregation in both CI users and NH listeners. Any amount of frequency separation (within the given conditions) provided cues for stream segregation in NH listeners, whereas only the largest frequency separation (16-10) provided cues for stream segregation in CI listeners. This could indicate that spectral interference still occurs even with three channels of separation. Finally, AM-rate separation did not elicit stream segregation in either CI users or NH listeners. These findings contradict previous findings and raise doubts about whether temporal pitch perception can be used by CI users to separate target auditory streams from background noise.
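    Since d′ is the performance measure here, the standard signal-detection definition may help (this is the general formula, not a detail taken from the study itself):

        d′ = z(H) − z(F)

    where H is the hit rate for detecting the deviant, F is the false-alarm rate, and z(·) is the inverse of the standard normal cumulative distribution function; larger d′ indicates stronger stream segregation.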

    Changes in the McGurk Effect across Phonetic Contexts. I. Fusions

    The McGurk effect has generally been studied within a limited range of phonetic contexts. With the goal of characterizing the McGurk effect across a wider range of contexts, a parametric investigation across three different vowel contexts, /i/, /α/, and /u/, and two different syllable types, consonant-vowel (CV) and vowel-consonant (VC), was conducted. This paper discusses context-dependent changes found specifically in the McGurk fusion phenomenon (Part II addresses changes found in combination percepts). After normalizing for differences in the magnitude of the McGurk effect in different contexts, a large qualitative change in the effect across vowel contexts became apparent. In particular, the frequency of illusory /g/ percepts increased relative to the frequency of illusory /d/ percepts as the vowel context was shifted from /i/ to /α/ to /u/. This trend was seen in both syllable sets and held regardless of whether the visual stimulus used was a /g/ or /d/ articulation. This qualitative change in the McGurk fusion effect across vowel environments corresponded systematically with changes in the typical second formant frequency patterns of the syllables presented. The findings are therefore consistent with sensory-based theories of speech perception which emphasize the importance of second formant patterns as cues in multimodal speech perception. National Institute on Deafness and Other Communication Disorders (R29 02852); Alfred P. Sloan Foundation.

    Examination of the role of movement in brain-injured patients’ processing of facial information

    Bruce and Young's (1986) claims for the distinct processing routes involved in facial expression recognition, familiar face identity and unfamiliar face matching were examined in a group of brain-injured patients, using a common forced-choice procedure. Four patients were observed who showed specific dissociable impairments on one of the face processing tasks, whilst maintaining intact performance on the other two tasks. However, similar differences were observed in the performance of a group of normal subjects. It was therefore argued that if specific dissociable impairments are to provide support for the independence of each processing route, much larger impairments are needed in the patient group. During our perception of facial information under everyday circumstances, we generally perceive dynamic faces. However, it was noted that the majority of face processing tasks consist of static stimuli (commonly photographs), making their task demands somewhat unnatural. It was therefore considered important to examine the role of movement in the processing of facial information. Three agnosic patients' ability to process movement and facial expressions was assessed. One agnosic patient was able to process movement successfully and, more importantly, movement facilitated his impaired processing of static facial expressions. An extensive examination of this patient revealed that dynamic information did not facilitate his processing of identity, for which he was severely impaired, but he was able to use movement in his processing of lip-read speech as well as facial expressions. It was shown that movement is able to selectively feed into various face processing channels, with facilitative consequences for recognition. Movement can provide both a supplementary source of information to static form and a pure movement pattern from which recognition can occur in the absence of the underlying structural/configural information. There are several implications of these findings: firstly, greater emphasis needs to be placed on the design of future face processing tasks, specifically questioning the role of movement in the processing of facial information; secondly, the facilitative role of movement observed in this agnosic patient's processing of facial information has important applications for the remediation of face processing deficits.

    Asynchronous Preparation of Tonally Fused Intervals in Polyphonic Music

    An analysis of a sample of polyphonic keyboard works by J.S. Bach shows that synchronous note onsets are avoided for those harmonic intervals that most promote tonal fusion (such as unisons, fifths, and octaves). This pattern is consistent with perceptual research showing an interaction between onset synchrony and tonal fusion in the formation of auditory streams (e.g., Vos, 1995). The results provide further support for the notion that polyphonic music is organized so as to facilitate the perceptual independence of the concurrent parts.
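    A corpus analysis of this kind can be illustrated with a minimal sketch: given a flat list of note events, tally how often each interval class occurs with synchronous versus asynchronous onsets. The event encoding and the interval grouping below are illustrative assumptions, not the paper's actual method.

        from itertools import combinations

        # Each note event: (onset_time, duration, midi_pitch) -- an assumed encoding.
        # Interval classes (in semitones mod 12) that most promote tonal fusion:
        # 0 covers unisons/octaves, 7 covers fifths.
        FUSED = {0, 7}

        def synchrony_counts(notes):
            """Tally [synchronous, asynchronous] onsets for fused vs. other intervals."""
            counts = {"fused": [0, 0], "other": [0, 0]}
            for (on1, dur1, p1), (on2, dur2, p2) in combinations(notes, 2):
                # Consider only pairs of notes that actually sound together.
                if on1 + dur1 <= on2 or on2 + dur2 <= on1:
                    continue
                kind = "fused" if (p1 - p2) % 12 in FUSED else "other"
                counts[kind][0 if on1 == on2 else 1] += 1
            return counts

    If the reported pattern holds, the fused intervals should show a lower proportion of synchronous onsets than the other intervals.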

    The Resonant Dynamics of Speech Perception: Interword Integration and Duration-Dependent Backward Effects

    How do listeners integrate temporally distributed phonemic information into coherent representations of syllables and words? During fluent speech perception, variations in the durations of speech sounds and silent pauses can produce different perceived groupings. For example, increasing the silence interval between the words "gray chip" may result in the percept "great chip", whereas increasing the duration of fricative noise in "chip" may alter the percept to "great ship" (Repp et al., 1978). The ARTWORD neural model quantitatively simulates such context-sensitive speech data. In ARTWORD, sequential activation and storage of phonemic items in working memory provides bottom-up input to unitized representations, or list chunks, that group together sequences of items of variable length. The list chunks compete with each other as they dynamically integrate this bottom-up information. The winning groupings feed back to provide top-down support to their phonemic items. Feedback establishes a resonance which temporarily boosts the activation levels of selected items and chunks, thereby creating an emergent conscious percept. Because the resonance evolves more slowly than working memory activation, it can be influenced by information presented after relatively long intervening silence intervals. The same phonemic input can thereby yield different groupings depending on its arrival time. Processes of resonant transfer and competitive teaming help determine which groupings win the competition. Habituating levels of neurotransmitter along the pathways that sustain the resonant feedback lead to a resonant collapse that permits the formation of subsequent resonances. Air Force Office of Scientific Research (F49620-92-J-0225); Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0657).
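    The chunk competition described here belongs to a general family of leaky, competitive neural dynamics. The sketch below is a generic illustration of that family (leaky integrators with mutual inhibition), not the actual ARTWORD equations; all names, constants, and the input schedule are assumptions.

        import numpy as np

        def compete(inputs, steps=200, dt=0.05, decay=1.0, inhibition=2.0):
            """Generic leaky-integrator competition between two list chunks.

            inputs: function t -> array of bottom-up drive to each chunk.
            Returns chunk activations over time; the chunk with stronger
            sustained input suppresses its rival, even if it arrives later.
            """
            x = np.zeros(2)  # activations of two competing chunks, e.g. "gray" vs. "great"
            trace = []
            for t in range(steps):
                i = inputs(t)
                # Leaky integration plus lateral inhibition from the rival chunk.
                dx = -decay * x + i - inhibition * x[::-1]
                x = np.clip(x + dt * dx, 0.0, None)
                trace.append(x.copy())
            return np.array(trace)

        # Chunk 1 ("great") receives its input later but more strongly, and still
        # wins, loosely mirroring how later-arriving evidence can regroup a percept.
        drive = lambda t: np.array([1.0 if t < 80 else 0.0, 1.5 if t >= 60 else 0.0])
        print(compete(drive)[-1])  # chunk 1 ends up dominant

    The point of the sketch is only that slow, mutually inhibitory integration lets information arriving after a pause reverse an earlier grouping, which is the qualitative behavior the abstract describes.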