
    Cueing in a perceptual task causes long-lasting interference that generalizes across context to affect only late perceptual learning and is remediated by the passage of time

    Perceptual learning, the improvement in sensory discrimination with practice, is also subject to stimulus-specific interference from temporal jitter within a learning session or from manipulations applied between or immediately after sessions. We demonstrate a novel form of perceptual interference in which even a brief cueing exposure to a complex speech-in-noise task produces forward interference on subsequent speech-in-noise learning. This potent interference generalizes across cueing context but specifically affects only late learning in the subsequent task, is resistant to the remediating effects of sleep, persisting across an overnight delay, and can be evoked by a single exposure one day before learning. Learning in the speech-in-noise task reflects generalized improvements in discriminating and extracting signals (speech) from noise, and we hypothesize that the forward interference disrupts improvements in access to higher-level representations during rapid perception of ecologically familiar complex signals, such as speech, in background noise.

    Critical appraisal of speech in noise tests: a systematic review and survey

    Speech-in-noise tests, which measure the perception of speech in the presence of noise, are now an important part of both the audiologic test battery and hearing research. Various tests are available to estimate the perception of speech in noise, for example, the Connected Sentence Test, the Hearing in Noise Test, Words in Noise, the Quick Speech-in-Noise test, the Bamford-Kowal-Bench Speech-in-Noise test, and Listening in Spatialized Noise-Sentences. These tests differ in target age, measure, procedure, speech material, noise, normative data, and other characteristics. Because of this variety, audiologists often select tests based on availability, ease of administration, test duration, the patient's age, hearing status, type of hearing disorder, and the type of amplification device, if one is used. A critical appraisal of these speech-in-noise tests is required for evidence-based test selection in audiology clinics. In this article, speech-in-noise tests are critically appraised for their conceptual model, measurement model, normative data, reliability, validity, responsiveness, item/instrument bias, respondent burden, and administrative burden. Selecting a standard speech-in-noise test on the basis of this appraisal will also allow easy comparison of the speech-in-noise ability of any hearing-impaired individual or group across audiology clinics and research centers. The article also describes a survey conducted to grade the speech-in-noise tests on these appraisal characteristics.

    Cepstral analysis based on the Glimpse proportion measure for improving the intelligibility of HMM-based synthetic speech in noise

    In this paper we introduce a new cepstral coefficient extraction method based on an intelligibility measure for speech in noise, the Glimpse Proportion measure. This new method aims to increase the intelligibility of speech in noise by modifying the clean speech, and has applications in scenarios such as public announcement and car navigation systems. We first explain how the Glimpse Proportion measure operates and then show how we approximated it in order to integrate it into an existing spectral envelope parameter extraction method commonly used in the HMM-based speech synthesis framework. We then demonstrate how this new method changes the modelled spectrum according to the characteristics of the noise and present results of a listening test with vocoded and HMM-based synthetic speech. The test indicates that the proposed method can significantly improve the intelligibility of synthetic speech in speech-shaped noise. Index Terms: cepstral coefficient extraction, objective measure for speech intelligibility, Lombard speech, HMM-based speech synthesis.
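    For readers unfamiliar with the measure, the sketch below illustrates the core idea of the Glimpse Proportion: the fraction of time-frequency regions in which the local speech-to-noise ratio exceeds a threshold. It is only a rough STFT-based approximation, assuming a 3 dB threshold and Hann-windowed spectrograms; the measure as originally defined uses an auditory (gammatone) front end, and the paper's own approximation for synthesis is not reproduced here.

    ```python
    # A rough STFT-based sketch of the Glimpse Proportion idea: the fraction of
    # time-frequency cells where the local speech-to-noise ratio exceeds a
    # threshold. The 3 dB threshold, Hann window, and FFT sizes are illustrative
    # assumptions; the original measure uses a gammatone filterbank front end.
    import numpy as np

    def glimpse_proportion(speech, noise, threshold_db=3.0, n_fft=512, hop=256):
        """Approximate glimpse proportion from separate clean-speech and noise signals."""
        def power_spectrogram(x):
            n_frames = 1 + (len(x) - n_fft) // hop
            frames = np.stack([x[i * hop:i * hop + n_fft] for i in range(n_frames)])
            frames = frames * np.hanning(n_fft)
            return np.abs(np.fft.rfft(frames, axis=1)) ** 2

        n = min(len(speech), len(noise))
        s_pow = power_spectrogram(np.asarray(speech[:n], dtype=float))
        n_pow = power_spectrogram(np.asarray(noise[:n], dtype=float))
        local_snr_db = 10.0 * np.log10((s_pow + 1e-12) / (n_pow + 1e-12))
        return float(np.mean(local_snr_db > threshold_db))
    ```

    A higher proportion of such "glimpses" predicts higher intelligibility, which is why the synthesis method modifies the spectral envelope to raise this proportion for a given noise.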

    Developmental links between speech perception in noise, singing, and cortical processing of music in children with cochlear implants

    The perception of speech in noise is challenging for children with cochlear implants (CIs). Singing and musical instrument playing have been associated with improved auditory skills in normal-hearing (NH) children. Therefore, we assessed how children with CIs who sing informally develop in the perception of speech in noise compared to those who do not. We also sought evidence of links between speech perception in noise and the mismatch negativity (MMN) and P3a brain responses to musical sounds, and studied effects of age and changes over a 14-17-month period in the speech-in-noise performance of children with CIs. Compared to the NH group, the CI group as a whole was less tolerant of noise in speech perception, but both groups improved similarly. The CI singing group showed better speech-in-noise perception than the CI non-singing group. The perception of speech in noise in children with CIs was associated with the amplitude of the MMN to a change of sound from piano to cymbal and, in the CI singing group only, with an earlier P3a for changes in timbre. While our results cannot address causality, they suggest that singing and musical instrument playing may have the potential to enhance the perception of speech in noise in children with CIs.

    Vibro-Tactile Enhancement of Speech Intelligibility in Multi-talker Noise for Simulated Cochlear Implant Listening.

    Many cochlear implant (CI) users achieve excellent speech understanding in acoustically quiet conditions but most perform poorly in the presence of background noise. An important contributor to this poor speech-in-noise performance is the limited transmission of low-frequency sound information through CIs. Recent work has suggested that tactile presentation of this low-frequency sound information could be used to improve speech-in-noise performance for CI users. Building on this work, we investigated whether vibro-tactile stimulation can improve speech intelligibility in multi-talker noise. The signal used for tactile stimulation was derived from the speech-in-noise using a computationally inexpensive algorithm. Eight normal-hearing participants listened to CI-simulated speech-in-noise both with and without concurrent tactile stimulation of their fingertip. Participants' speech recognition performance was assessed before and after a training regime, which took place over 3 consecutive days and totaled around 30 minutes of exposure to CI-simulated speech-in-noise with concurrent tactile stimulation. Tactile stimulation was found to improve the intelligibility of speech in multi-talker noise, and this improvement increased in size after training. Such tactile stimulation could be delivered by a compact, portable device, offering an inexpensive and noninvasive means of improving speech-in-noise performance in CI users.
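    The abstract does not spell out the envelope-extraction algorithm, so the sketch below shows only one plausible, computationally cheap way to derive a tactile drive signal from noisy speech: rectify, low-pass filter to obtain the amplitude envelope, and use the envelope to modulate a fixed-frequency vibration carrier. The 30 Hz envelope cutoff and 250 Hz carrier are illustrative assumptions, not parameters taken from the study.

    ```python
    # A hedged sketch of one computationally cheap way to derive a tactile drive
    # signal from noisy speech: full-wave rectify, low-pass filter to get the
    # amplitude envelope, and use it to modulate a vibration carrier.
    # The envelope cutoff (30 Hz) and carrier frequency (250 Hz) are illustrative
    # assumptions, not values reported in the abstract.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def tactile_drive(noisy_speech, fs, env_cutoff_hz=30.0, carrier_hz=250.0):
        """Return a vibro-tactile waveform whose amplitude follows the speech envelope."""
        # Amplitude envelope: rectification followed by low-pass filtering.
        b, a = butter(2, env_cutoff_hz / (fs / 2), btype="low")
        envelope = filtfilt(b, a, np.abs(np.asarray(noisy_speech, dtype=float)))
        envelope = np.clip(envelope, 0.0, None)

        # Modulate a sinusoidal carrier in the fingertip's vibro-tactile range.
        t = np.arange(len(envelope)) / fs
        return envelope * np.sin(2 * np.pi * carrier_hz * t)
    ```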

    A commentary on the text Coneixences de les monedes from the Memoriales of Pere Miquel Carbonell

    Published studies assessing the association between cognitive performance and speech-in-noise perception examine different aspects of each, test different listeners, and often report quite variable associations. By examining the published evidence base using a systematic approach, we aim to identify robust patterns across studies and highlight any remaining gaps in knowledge. We limit our assessment to adult non-hearing-aid users with audiometric profiles ranging from normal hearing to moderate hearing loss. A total of 253 articles were independently assessed by two researchers, with 25 meeting the criteria for inclusion. Included articles assessed cognitive measures of attention, memory, executive function, IQ, and processing speed. Speech-in-noise measures varied by target (phonemes/syllables, words, sentences) and masker type (unmodulated noise, modulated noise, multi (n > 2) talker babble, and n < 2 talker babble). The overall association between cognitive performance and speech-in-noise perception was r = 0.31. For the component cognitive domains, the associations with (pooled) speech-in-noise perception were: processing speed (r = 0.39), inhibitory control (r = 0.34), working memory (r = 0.28), episodic memory (r = 0.26), and crystallized IQ (r = 0.18). Similar associations were shown for the different speech target and masker types. This review suggests a general association of r ≈ 0.3 between cognitive performance and speech perception, although the association varied somewhat with the cognitive domain and the speech-in-noise target or masker assessed. Where assessed, degree of unaided hearing loss did not play a major moderating role. We identify a number of cognitive performance and speech-in-noise perception combinations that have not been tested, and whose future investigation would enable finer-grained analyses of these relationships.
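    The abstract reports pooled correlations without stating the pooling method, so the sketch below simply shows the textbook Fisher r-to-z approach commonly used when averaging correlations across studies; the sample sizes in the usage line are made-up illustrations, not data from the review.

    ```python
    # A minimal sketch of how per-study correlations are commonly pooled:
    # Fisher r-to-z transform, weight by (n - 3), average, then back-transform.
    # This is the generic textbook approach, not necessarily the review's method.
    import math

    def pooled_correlation(studies):
        """studies: list of (r, n) pairs; returns the weighted pooled correlation."""
        num = 0.0
        den = 0.0
        for r, n in studies:
            z = math.atanh(r)          # Fisher r-to-z
            w = n - 3                  # inverse-variance weight for z
            num += w * z
            den += w
        return math.tanh(num / den)    # back-transform to r

    # Illustrative (made-up) numbers only:
    print(pooled_correlation([(0.39, 120), (0.28, 80), (0.26, 150)]))
    ```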

    No Link Between Speech-in-Noise Perception and Auditory Sensory Memory - Evidence From a Large Cohort of Older and Younger Listeners

    A growing literature is demonstrating a link between working memory (WM) and speech-in-noise (SiN) perception. However, the nature of this correlation, and which components of WM might underlie it, are still being debated. We investigated how SiN reception links with auditory sensory memory (aSM), the low-level processes that support the short-term maintenance of temporally unfolding sounds. A large sample of older (N = 199, 60-79 years old) and younger (N = 149, 20-35 years old) participants was recruited online and performed a coordinate response measure-based speech-in-babble task that taps listeners' ability to track a speech target in background noise. We used two tasks to investigate implicit and explicit aSM. Both were based on tone patterns overlapping in processing time scales with speech (presentation rate of 20 Hz for tones and 2 Hz for patterns). We hypothesised that a link between SiN and aSM might be particularly apparent in older listeners because of age-related reductions in both SiN reception and aSM. We confirmed impaired SiN reception in the older cohort and demonstrated reduced aSM performance in those listeners. However, SiN and aSM did not share variability. Across the two age groups, SiN performance was predicted by a binaural processing test and by age. The results suggest that previously observed links between WM and SiN may relate to the executive components and other cognitive demands of the tasks used. This finding helps to constrain the search for the perceptual and cognitive factors that explain individual variability in SiN performance.
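    To make the quoted rates concrete, the sketch below generates a repeating tone pattern at those time scales: tone pips presented at 20 Hz (50 ms each) with the pattern repeating at 2 Hz (10 pips per cycle). The frequency range, the fixed random pattern, and the 5 ms onset/offset ramps are illustrative assumptions; the study's actual stimulus construction is not described in this abstract.

    ```python
    # A hedged sketch of tone-pattern stimuli at the rates quoted in the abstract:
    # tone pips at 20 Hz (50 ms each) forming a pattern that repeats at 2 Hz
    # (10 pips per cycle). Frequency pool and ramps are illustrative assumptions.
    import numpy as np

    def tone_pattern(fs=44100, n_cycles=4, pip_rate_hz=20, pattern_rate_hz=2, seed=0):
        rng = np.random.default_rng(seed)
        pips_per_cycle = pip_rate_hz // pattern_rate_hz          # 10 pips per pattern
        pip_dur = 1.0 / pip_rate_hz                              # 50 ms pips
        freqs = rng.uniform(300.0, 4000.0, size=pips_per_cycle)  # one fixed pattern

        t = np.arange(int(fs * pip_dur)) / fs
        ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.005)   # 5 ms on/off ramps
        pips = [ramp * np.sin(2 * np.pi * f * t) for f in freqs]
        return np.concatenate(pips * n_cycles)                   # repeat the pattern
    ```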

    A computer model of auditory efferent suppression: Implications for the recognition of speech in noise

    The neural mechanisms underlying the ability of human listeners to recognize speech in the presence of background noise are still imperfectly understood. However, there is mounting evidence that the medial olivocochlear system plays an important role, via efferents that exert a suppressive effect on the response of the basilar membrane. The current paper presents a computer modeling study that investigates the possible role of this activity in speech intelligibility in noise. A model of auditory efferent processing [Ferry, R. T., and Meddis, R. (2007). J. Acoust. Soc. Am. 122, 3519-3526] is used to provide acoustic features for a statistical automatic speech recognition system, thus allowing the effects of efferent activity on speech intelligibility to be quantified. Performance of the "basic" model (without efferent activity) on a connected digit recognition task is good when the speech is uncorrupted by noise but falls when noise is present. However, recognition performance is much improved when efferent activity is applied. Furthermore, optimal performance is obtained when the amount of efferent activity is proportional to the noise level. The results obtained are consistent with the suggestion that efferent suppression causes a "release from adaptation" in the auditory-nerve response to noisy speech, which enhances its intelligibility.
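    As a schematic illustration of the finding that efferent activity should scale with noise level, the sketch below applies a per-channel attenuation whose size grows with an estimate of the background noise level. It is not the Ferry and Meddis model; the envelope-percentile noise estimate and the 0.5 dB-per-dB scaling are assumptions made purely for illustration.

    ```python
    # A schematic sketch (not the Ferry & Meddis model) of the key idea: apply an
    # efferent-like attenuation to each cochlear channel, with the amount of
    # attenuation proportional to an estimate of the background noise level.
    # The 0.5 dB-per-dB scaling and the noise-floor estimate are assumptions.
    import numpy as np

    def efferent_attenuation_db(channel_env, atten_per_db=0.5, floor_db=-60.0):
        """channel_env: per-channel envelope (frames x channels) of the noisy input."""
        # Crude noise-level estimate: a low percentile of each channel's envelope.
        noise_db = 20 * np.log10(np.percentile(channel_env, 10, axis=0) + 1e-12)
        noise_db = np.maximum(noise_db, floor_db)
        # More noise in a channel -> more efferent suppression applied to it.
        return atten_per_db * (noise_db - floor_db)

    def apply_efferent(channel_env, atten_db):
        return channel_env * 10 ** (-atten_db / 20)
    ```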