
    Short- and medium-term plasticity for speaker adaptation seem to be independent

    The author wishes to thank James McQueen and Elizabeth Johnson for comments on an earlier draft of this paper. In a classic paper, Ladefoged and Broadbent [1] showed that listeners adapt to speakers on the basis of short-term exposure to a single phrase. Recently, Norris, McQueen, and Cutler [2] presented evidence for a lexically conditioned medium-term adaptation to a particular speaker based on exposure to 40 critical words among 200 items. In two experiments, I investigated whether there is a connection between the two findings. To this end, a vowel-normalization paradigm (similar to [1]) was used with a carrier phrase that consisted of either words or nonwords. The range of the second formant (F2) was manipulated, and this affected the perception of a target vowel in a compensatory fashion: a low F2 range made it more likely that a target vowel was perceived as a front vowel, that is, one with an inherently high F2. Manipulating the lexical status of the carrier phrase, however, did not affect vowel normalization. In contrast, the range of vowels in the carrier phrase did influence vowel normalization: if the carrier phrase consisted of high-front vowels only, vowel categories shifted only for high-front vowels. This may indicate that the short-term and medium-term adaptations are brought about by different mechanisms.
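
    To make the compensatory pattern concrete, here is a minimal sketch (not the paper's model) of extrinsic vowel normalization in the spirit of Ladefoged and Broadbent [1]: a target's F2 is judged relative to the F2 range of the carrier phrase, so the same target sounds more front-vowel-like after a low-F2 carrier. All numbers are invented for illustration.

        # Minimal sketch: a fixed target F2 occupies a higher relative position
        # within a low carrier F2 range than within a high one.
        def normalized_f2(target_f2_hz, carrier_f2_min_hz, carrier_f2_max_hz):
            """Position of the target F2 within the carrier's F2 range (0 = low, 1 = high)."""
            return (target_f2_hz - carrier_f2_min_hz) / (carrier_f2_max_hz - carrier_f2_min_hz)

        print(normalized_f2(1800, 1200, 2000))  # 0.75 -> likely perceived as a front vowel
        print(normalized_f2(1800, 1600, 2400))  # 0.25 -> less likely perceived as front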

    Lexically-driven perceptual adjustments of vowel categories

    We investigated the plasticity of vowel categories in a perceptual learning paradigm in which listeners are encouraged to use lexical knowledge to adjust their interpretation of ambiguous speech sounds. We tested whether this kind of learning occurs for vowels, and whether it generalises to the perception of other vowels. In Experiments 1 and 2, Dutch listeners were exposed during a lexical decision task to ambiguous vowels, midway between [i] and [e], in lexical contexts biasing interpretation of those vowels either towards /i/ or towards /e/. The effect of this exposure was tested in a subsequent phonetic-categorisation task. Lexically-driven perceptual adjustments were observed: listeners exposed to the ambiguous vowels in /i/-biased contexts identified more sounds on an [i]-[e] test continuum as /i/ than those who heard the ambiguous vowels in /e/-biased contexts. Generalisation to other contrasts was weak and occurred more strongly for a distant vowel contrast (/ɑ/ vs. /ɔ/) than for a near contrast (/ɪ/ vs. /ɛ/). In Experiment 3, spectral filters based on the difference between the exposure [i] and [e] sounds were applied to test stimuli from all three of the contrasts. Identification data for these filtered stimuli suggest that generalisation of learning across vowels does not depend on overall spectral similarity between exposure and test vowel contrasts.
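
    Experiment 3's manipulation can be pictured as follows. This is a hypothetical sketch, assuming mono waveforms as NumPy arrays at a shared sample rate; the study's actual filter construction may well have differed. The idea is that the filter is the log-magnitude spectral difference between the exposure [i] and [e], imposed on a test stimulus in the frequency domain.

        import numpy as np

        def difference_filter(i_token, e_token, n_fft=2048):
            """Log-magnitude spectral difference between [i] and [e] exposure tokens."""
            spec_i = np.abs(np.fft.rfft(i_token, n_fft)) + 1e-12
            spec_e = np.abs(np.fft.rfft(e_token, n_fft)) + 1e-12
            return np.log(spec_i / spec_e)

        def apply_filter(stimulus, log_gain, n_fft=2048):
            """Shape a test stimulus's spectrum by the [i]-minus-[e] difference.
            Assumes len(stimulus) <= n_fft."""
            spectrum = np.fft.rfft(stimulus, n_fft)
            return np.fft.irfft(spectrum * np.exp(log_gain), n_fft)[: len(stimulus)]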

    Resonant Neural Dynamics of Speech Perception

    What is the neural representation of a speech code as it evolves in time? How do listeners integrate temporally distributed phonemic information across hundreds of milliseconds, even backwards in time, into coherent representations of syllables and words? What sorts of brain mechanisms encode the correct temporal order, despite such backwards effects, during speech perception? How does the brain extract rate-invariant properties of variable-rate speech? This article describes an emerging neural model that suggests answers to these questions, while quantitatively simulating challenging data about audition, speech, and word recognition. This model includes bottom-up filtering, horizontal competitive, and top-down attentional interactions between a working memory for short-term storage of phonetic items and a list categorization network for grouping sequences of items. The conscious speech and word recognition code is suggested to be a resonant wave of activation across such a network, and a percept of silence is proposed to be a temporal discontinuity in the rate with which such a resonant wave evolves. Properties of these resonant waves can be traced to the brain mechanisms whereby auditory, speech, and language representations are learned in a stable way through time. Because resonances are proposed to control stable learning, the model is called an Adaptive Resonance Theory, or ART, model. Funding: Air Force Office of Scientific Research (F49620-01-1-0397); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-01-1-0624).
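
    For readers unfamiliar with ART, the following is a minimal sketch of an ART-style recognition cycle (bottom-up activation, top-down expectation, and a vigilance-governed match test that gates resonance). It illustrates the general mechanism only, not the quantitative model simulated in the article; the data structures are invented for illustration.

        import numpy as np

        def art_match_cycle(input_vec, prototypes, vigilance=0.8):
            """Return the index of the first stored category (a list of weight
            vectors) whose top-down expectation matches the input well enough
            to support resonance, else None."""
            x = np.asarray(input_vec, dtype=float)
            # Bottom-up filtering: rank categories by input-driven activation.
            order = np.argsort([-np.dot(w, x) for w in prototypes])
            for j in order:
                expectation = np.minimum(prototypes[j], x)  # top-down template match
                if expectation.sum() / (x.sum() + 1e-12) >= vigilance:
                    return j  # match is good enough: resonance, stable learning
                # Mismatch: reset this category and let the search continue.
            return None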

    OPA1-related auditory neuropathy: site of lesion and outcome of cochlear implantation.

    Hearing impairment is the second most prevalent clinical feature after optic atrophy in Dominant Optic Atrophy associated with mutations in the OPA1 gene. In this study, we characterized the hearing dysfunction in OPA1-linked disorders and provided effective rehabilitative options to improve speech perception. We studied two groups of OPA1 subjects, one comprising 11 patients (7 males; age range 13-79 years) carrying OPA1 mutations inducing haploinsufficiency, the other comprising 10 subjects (3 males; age range 5-58 years) carrying OPA1 missense mutations. Both groups underwent audiometric assessment with pure-tone and speech perception evaluation, otoacoustic emissions, and auditory brainstem response recording. Cochlear potentials were recorded through transtympanic electrocochleography from the group of patients harboring OPA1 missense mutations and were compared to recordings obtained from 20 normally-hearing controls and from 19 subjects with cochlear hearing loss. Eight patients carrying OPA1 missense mutations underwent cochlear implantation. Speech perception measures and electrically-evoked auditory nerve and brainstem responses were obtained after one year of cochlear implant use. Nine of the 11 patients carrying OPA1 mutations inducing haploinsufficiency had normal hearing function. In contrast, all but one subject harboring OPA1 missense mutations displayed impaired speech perception, abnormal brainstem responses, and preserved otoacoustic emissions, consistent with auditory neuropathy. In electrocochleography recordings, the cochlear microphonic had enhanced amplitude, while the summating potential showed normal latency and peak amplitude, consistent with preservation of both outer and inner hair cell activity. After cancelling the cochlear microphonic, the synchronized neural response seen in both normally-hearing controls and subjects with cochlear hearing loss was replaced by a prolonged, low-amplitude negative potential that decreased in both amplitude and duration during rapid stimulation, consistent with neural generation. The use of a cochlear implant improved speech perception in all but one patient. Brainstem potentials were recorded in response to electrical stimulation in five of six subjects, whereas no compound action potential was evoked from the auditory nerve through the cochlear implant. These findings indicate that the hearing impairment in patients carrying OPA1 missense mutations reflects disordered synchrony in auditory nerve fiber activity, resulting from neural degeneration affecting the terminal dendrites. Cochlear implantation improves speech perception and synchronous activation of auditory pathways by bypassing the site of the lesion.

    Development of audiovisual comprehension skills in prelingually deaf children with cochlear implants

    Objective: The present study investigated the development of audiovisual comprehension skills in prelingually deaf children who received cochlear implants. Design: We analyzed results obtained with the Common Phrases test of sentence comprehension (Robbins et al., 1995) from 80 prelingually deaf children with cochlear implants who were enrolled in a longitudinal study, from pre-implantation to 5 years after implantation. Results: The results revealed that prelingually deaf children with cochlear implants performed better under audiovisual (AV) presentation than under auditory-alone (A-alone) or visual-alone (V-alone) conditions. AV sentence comprehension skills were strongly correlated with several clinical outcome measures of speech perception, speech intelligibility, and language. Finally, pre-implantation V-alone performance on the Common Phrases test was strongly correlated with 3-year post-implantation performance on clinical outcome measures of speech perception, speech intelligibility, and language skills. Conclusions: The results suggest that lipreading skills and AV speech perception reflect a common source of variance, associated with the development of phonological processing skills, that is shared among a wide range of speech and language outcome measures.

    Predicting Dyslexia Based on Pre-reading Auditory and Speech Perception Skills

    Purpose: This longitudinal study examines measures of temporal auditory processing in pre-reading children with a family risk of dyslexia. Specifically, it attempts to ascertain whether pre-reading auditory processing, speech perception, and phonological awareness (PA) reliably predict later literacy achievement. Additionally, this study retrospectively examines the presence of pre-reading auditory processing, speech perception, and PA impairments in children later found to be literacy impaired. Method: Forty-four pre-reading children with and without a family risk of dyslexia were assessed at three time points (kindergarten, first grade, and second grade). Auditory processing measures of rise time (RT) discrimination and frequency modulation (FM) detection, along with speech perception, PA, and various literacy tasks, were assessed. Results: Kindergarten RT uniquely contributed to growth in literacy in grades one and two, even after controlling for letter knowledge and PA. Highly significant concurrent and predictive correlations were observed, with kindergarten RT significantly predicting first-grade PA. Retrospective analysis demonstrated atypical performance in RT and PA at all three time points in children who later developed literacy impairments. Conclusions: Although significant, the contribution of kindergarten auditory processing to later literacy growth is too small for auditory processing to be considered a single-cause predictor; the results thus support a contribution of temporal processing deficits within a multiple-deficit model of dyslexia.
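
    The unique-contribution claim is, in effect, a hierarchical regression. The sketch below shows one hypothetical way to compute it; the column names and data file are invented, and the study's actual modelling may have differed.

        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("longitudinal_scores.csv")  # hypothetical: one row per child

        # Step 1: control variables only; Step 2: add kindergarten rise time (RT).
        base = smf.ols("literacy_grade2 ~ letter_knowledge + pa_kindergarten", data=df).fit()
        full = smf.ols("literacy_grade2 ~ letter_knowledge + pa_kindergarten"
                       " + rt_kindergarten", data=df).fit()

        # The unique contribution of RT is the increment in explained variance.
        print(f"Delta R^2 for rise time: {full.rsquared - base.rsquared:.3f}")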

    Speech perception abilities of adults with dyslexia: is there any evidence for a true deficit?

    PURPOSE: This study investigated whether adults with dyslexia show evidence of a consistent speech perception deficit by testing phoneme categorization and word perception in noise. METHOD: Seventeen adults with dyslexia and 20 average readers underwent a test battery including standardized reading, language, and phonological awareness tests, and tests of speech perception. Categorization of a pea/bee voicing contrast was evaluated using adaptive identification and discrimination tasks, presented in quiet and in noise, and a fixed-step discrimination task. Two further tests of word perception in noise were presented. RESULTS: There were no significant group differences for categorization in quiet or in noise, for across- and within-category discrimination as measured adaptively, or for word perception, but average readers showed better across- and within-category discrimination in the fixed-step discrimination task. Individuals did not show consistently poor performance across related tasks. CONCLUSIONS: The small number of group differences and the lack of consistently poor individual performance suggest weak support for a speech perception deficit in dyslexia. It seems likely that at least some poor performances are attributable to nonsensory factors such as attention. It may also be that some individuals with dyslexia have speech perceptual acuity at the lower end of the normal range that is exacerbated by nonsensory factors.
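
    The adaptive identification tasks referred to here are staircase procedures. The following is an illustrative sketch only, assuming a voice-onset-time (VOT) continuum and a hypothetical present_and_get_response callback; it is not the study's actual procedure or parameters.

        def adaptive_boundary(present_and_get_response, start_vot_ms=30.0,
                              step_ms=4.0, n_trials=40):
            """Simple 1-up/1-down staircase converging on the /b/-/p/ category boundary."""
            vot = start_vot_ms
            levels = []
            for _ in range(n_trials):
                heard_pea = present_and_get_response(vot)  # True if listener reports "pea"
                # Step toward the boundary: lower VOT after "pea", raise it after "bee".
                vot += -step_ms if heard_pea else step_ms
                levels.append(vot)
            return sum(levels[-10:]) / 10  # mean of final levels estimates the boundary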

    Audio-visual speech perception: a developmental ERP investigation

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory speech alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable across the child sample. These data suggest that auditory ERP modulation by visual speech reflects separable underlying cognitive processes, some of which mature earlier than others over the course of development.
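
    As a concrete illustration of the dependent measures, the sketch below extracts N1/P2 peak latency and amplitude from an averaged ERP waveform (NumPy arrays). The search windows are typical textbook values and the names are invented; the study's exact parameters may differ.

        import numpy as np

        def peak_in_window(erp_uv, times_ms, lo_ms, hi_ms, polarity):
            """Return (latency_ms, amplitude_uV) of the extremum inside a time window."""
            mask = (times_ms >= lo_ms) & (times_ms <= hi_ms)
            segment = erp_uv[mask]
            idx = np.argmin(segment) if polarity == "neg" else np.argmax(segment)
            return times_ms[mask][idx], segment[idx]

        # N1: negative deflection around 80-150 ms; P2: positive around 150-250 ms.
        # n1_lat, n1_amp = peak_in_window(erp_uv, times_ms, 80, 150, "neg")
        # p2_lat, p2_amp = peak_in_window(erp_uv, times_ms, 150, 250, "pos")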

    The Influence of Social Priming on Speech Perception

    Speech perception relies on auditory, visual, and motor cues and has historically been difficult to model, partly because of this multimodality. One current model is the Fuzzy Logic Model of Perception (FLMP), which holds that if one of these speech modes is altered, the perception of the speech signal should change in a quantifiable and predictable way. The current study used social priming to activate the schema of blindness in order to reduce reliance on the visual cues of syllables that have a visually identical pair. According to the FLMP, lowering reliance on visual cues should reduce visual confusion, allowing the visually confusable syllables to be identified more quickly. Although no main effect of priming was found, some individual syllables showed the expected facilitation while others showed inhibition. These results suggest that social priming does affect speech perception, despite the opposing reactions across syllables. Further research should use a similar kind of social priming to determine which syllables have more acoustically salient features and which have more visually salient features.
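
    The FLMP's integration rule itself is standard (Massaro): each modality provides a degree of support for an alternative, and the supports are combined multiplicatively and normalized across alternatives. A minimal two-alternative sketch, with invented support values:

        def flmp_two_alternatives(a, v):
            """P(alternative 1) given auditory support a and visual support v, both in [0, 1]."""
            return (a * v) / (a * v + (1 - a) * (1 - v))

        # A fully ambiguous visual cue (v = 0.5) leaves the decision to audition,
        # which is one way reduced reliance on vision could play out:
        print(flmp_two_alternatives(0.8, 0.5))  # 0.80  -> response tracks the auditory cue
        print(flmp_two_alternatives(0.8, 0.3))  # ~0.63 -> conflicting visual cue pulls it down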