
    Auditory Rehabilitation after Stroke: Treatment of Auditory Processing Disorders in Stroke Patients with Personal Frequency-Modulated (FM) Systems

    Purpose: Auditory disability due to impaired auditory processing (AP) despite normal pure-tone thresholds is common after stroke, and it leads to isolation, reduced quality of life and physical decline. There are currently no proven remedial interventions for AP deficits in stroke patients. This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. Methods: Fifty stroke patients had baseline audiological assessments and AP tests, and completed the (modified) Amsterdam Inventory for Auditory Disability (AIAD) and Hearing Handicap Inventory for the Elderly (HHIE) questionnaires. Nine of these fifty patients were diagnosed with disordered AP on the basis of severe deficits in understanding speech in background noise despite normal pure-tone thresholds. These nine patients underwent spatial speech-in-noise testing in a sound-attenuating chamber (the “crescent of sound”) with and without FM systems. Results: The signal-to-noise ratio (SNR) for 50% correct speech recognition performance was measured with speech presented from 0° azimuth and competing babble from ±90° azimuth. Spatial release from masking (SRM) was defined as the difference between SNRs measured with co-located speech and babble and SNRs measured with spatially separated speech and babble. SRM for spatially separated speech and babble improved significantly when the patients wore the FM systems compared with when they did not. Conclusions: Personal FM systems may substantially improve speech-in-noise deficits in stroke patients who are not eligible for conventional hearing aids. FM systems are feasible in stroke patients and show promise for addressing impaired AP after stroke.
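
    To make the spatial release from masking (SRM) measure concrete, below is a minimal sketch of the calculation as defined in this abstract (co-located SNR minus spatially separated SNR, both at 50% correct). The SNR values and variable names are illustrative placeholders, not data or code from the study.

    ```python
    # Illustrative sketch of the SRM calculation described above.
    # SNR values are placeholders, not data from the study.

    def spatial_release_from_masking(snr_colocated_db, snr_separated_db):
        """SRM = SNR at 50% correct with co-located speech and babble
        minus SNR at 50% correct with spatially separated speech and babble.
        A larger (more positive) value means greater benefit from spatial separation."""
        return snr_colocated_db - snr_separated_db

    # Hypothetical speech-reception thresholds (dB SNR for 50% correct):
    srm_without_fm = spatial_release_from_masking(snr_colocated_db=2.0, snr_separated_db=-1.0)
    srm_with_fm = spatial_release_from_masking(snr_colocated_db=2.0, snr_separated_db=-6.0)
    print(f"SRM without FM: {srm_without_fm:.1f} dB, with FM: {srm_with_fm:.1f} dB")
    ```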

    An investigation into vocal expressions of emotions: the roles of valence, culture, and acoustic factors.

    This PhD thesis is an investigation of vocal expressions of emotions, focusing mainly on non-verbal sounds such as laughter, cries and sighs. The research examines the roles of categorical and dimensional factors, the contributions of a number of acoustic cues, and the influence of culture. A series of studies established that naive listeners can reliably identify non-verbal vocalisations of positive and negative emotions in forced-choice and rating tasks. Some evidence for underlying dimensions of arousal and valence was found, although each emotion had a discrete expression. The role of the acoustic characteristics of the sounds was investigated experimentally and analytically. This work shows that the cues used to identify different emotions vary, although pitch and pitch variation play a central role. The cues used to identify emotions in non-verbal vocalisations differ from the cues used when comprehending speech. An additional set of studies using emotional speech stimuli demonstrates that these sounds can also be reliably identified and rely on similar acoustic cues. A series of studies with a pre-literate Namibian tribe shows that non-verbal vocalisations can be recognised across cultures. An fMRI study investigating the neural processing of non-verbal vocalisations of emotions is also presented. The results show activation in pre-motor regions arising from passive listening to non-verbal emotional vocalisations, suggesting neural auditory-motor interactions in the perception of these sounds. In sum, this thesis demonstrates that non-verbal vocalisations of emotions are reliably identifiable tokens of information that belong to discrete categories. These vocalisations are recognisable across vastly different cultures and thus, like facial expressions of emotions, seem to comprise human universals. Listeners rely mainly on pitch and pitch variation to identify emotions in non-verbal vocalisations, which differs from the cues used to comprehend speech. When listening to others' emotional vocalisations, a neural system of preparatory motor activation is engaged.
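
    The central acoustic cues identified here are pitch and pitch variation. The sketch below shows one conventional way such cues might be quantified from a recorded vocalisation (mean and standard deviation of F0, using librosa's pYIN tracker); it is a generic illustration under those assumptions, not the analysis pipeline used in the thesis, and the file name is hypothetical.

    ```python
    # Sketch of quantifying pitch (F0) and pitch variation from a non-verbal
    # vocalisation; a generic approach, not the thesis's exact pipeline.
    import numpy as np
    import librosa

    def pitch_summary(path):
        y, sr = librosa.load(path, sr=None)                          # audio at native sample rate
        f0, voiced, _ = librosa.pyin(y, fmin=75, fmax=600, sr=sr)    # pYIN F0 track (NaN = unvoiced)
        f0 = f0[~np.isnan(f0)]                                       # keep voiced frames only
        return {
            "mean_f0_hz": float(np.mean(f0)),   # overall pitch
            "f0_sd_hz": float(np.std(f0)),      # pitch variation
        }

    # Example (hypothetical file name):
    # print(pitch_summary("laughter_01.wav"))
    ```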

    Design choices in imaging speech comprehension: An Activation Likelihood Estimation (ALE) meta-analysis

    The localisation of spoken language comprehension is debated extensively: is processing located anteriorly or posteriorly in the left temporal lobe, and is it left- or bilaterally organised? An Activation Likelihood Estimation (ALE) analysis was conducted on functional MRI and PET studies investigating speech comprehension to identify the neural network involved in comprehension processing. Furthermore, the analysis aimed to establish the effect of four design choices (scanning paradigm, non-speech baseline, the presence of a task, and the type of stimulus material) on this comprehension network. The analysis included 57 experiments contrasting intelligible with less intelligible or unintelligible stimuli. A large comprehension network was found bilaterally across the Superior Temporal Sulcus (STS), Middle Temporal Gyrus (MTG) and Superior Temporal Gyrus (STG), and in the left Inferior Frontal Gyrus (IFG), left Precentral Gyrus, and the Supplementary Motor Area (SMA) and pre-SMA. The core network for post-lexical processing was restricted to the temporal lobes bilaterally, with the highest ALE values located anterior to Heschl's Gyrus. Activations in the ALE comprehension network outside the temporal lobes (left IFG, SMA/pre-SMA, and Precentral Gyrus) were driven by the use of sentences instead of words, the scanning paradigm, or the type of non-speech baseline.
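
    ALE pools reported activation peaks across experiments by modelling each peak as a 3-D Gaussian probability blob and probabilistically combining the resulting modelled-activation maps. The sketch below illustrates that core computation on toy coordinates; it is a simplified illustration under arbitrary grid and kernel assumptions, not the implementation used in the meta-analysis (which, among other things, uses sample-size-dependent kernel widths and permutation-based thresholding).

    ```python
    # Toy illustration of the core ALE computation: each experiment's reported
    # peaks become a modelled-activation (MA) map, and ALE is their probabilistic
    # union. Coordinates, grid size and kernel width are arbitrary placeholders.
    import numpy as np

    GRID = 40      # toy 40x40x40 voxel grid
    SIGMA = 2.0    # Gaussian kernel width in voxels (placeholder)

    def ma_map(foci):
        """Modelled-activation map: union of Gaussian blobs at one experiment's peaks."""
        x, y, z = np.mgrid[0:GRID, 0:GRID, 0:GRID]
        ma = np.zeros((GRID, GRID, GRID))
        for fx, fy, fz in foci:
            d2 = (x - fx) ** 2 + (y - fy) ** 2 + (z - fz) ** 2
            blob = np.exp(-d2 / (2 * SIGMA ** 2))
            ma = 1 - (1 - ma) * (1 - blob)      # probabilistic union within the experiment
        return ma

    def ale_map(experiments):
        """Combine per-experiment MA maps: ALE = 1 - prod(1 - MA_i)."""
        ale = np.zeros((GRID, GRID, GRID))
        for foci in experiments:
            ale = 1 - (1 - ale) * (1 - ma_map(foci))
        return ale

    # Two hypothetical experiments, each with one or two peak coordinates:
    ale = ale_map([[(10, 12, 20), (25, 18, 15)], [(11, 13, 19)]])
    print("max ALE value:", float(ale.max()))
    ```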

    A computer based analysis of the effects of rhythm modification on the intelligibility of the speech of hearing and deaf subjects

    The speech of profoundly deaf persons often exhibits acquired unnatural rhythms, or a random pattern of rhythms. Inappropriate pause-time and speech-time durations are common in their speech. Specific rhythm deficiencies include an abnormal rate of syllable utterance, improper grouping, poor timing and phrasing of syllables, and unnatural stress for accent and emphasis. Given that temporal features are fundamental to the naturalness of spoken language, these abnormal timing patterns detract from the speech and may even be important factors in its decreased intelligibility. This thesis explores the significance of temporal cues in the rhythmic patterns of speech. An analysis-synthesis approach was employed, based on the encoding and decoding of speech by a tandem chain of digital computer operations. Rhythm as a factor in the speech intelligibility of deaf and normal-hearing subjects was investigated. The results of this study support the general hypothesis that rhythm and rhythmic intuition are important to the perception of speech.
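
    The analysis-synthesis approach described here rests on modifying pause-time and speech-time durations before resynthesis. The sketch below shows a crude modern version of one such rhythm modification (rescaling the silent pauses between labelled speech segments); the segment times and scale factor are hypothetical, and this is not the tandem computer chain used in the original study.

    ```python
    # Crude sketch of rhythm modification: lengthen or shorten the silent pauses
    # between labelled speech segments while leaving the speech portions intact.
    import numpy as np

    def rescale_pauses(signal, sr, speech_segments, pause_scale):
        """speech_segments: list of (start_s, end_s) times containing speech.
        Silence between segments is stretched/compressed by pause_scale."""
        out = []
        prev_end = 0.0
        for start, end in speech_segments:
            pause_len = int((start - prev_end) * sr)
            out.append(np.zeros(int(pause_len * pause_scale), dtype=signal.dtype))  # scaled pause
            out.append(signal[int(start * sr):int(end * sr)])                       # speech kept unchanged
            prev_end = end
        out.append(signal[int(prev_end * sr):])                                     # trailing audio
        return np.concatenate(out)

    # Example with a dummy 3-second signal at 16 kHz and two hypothetical segments:
    sr = 16000
    dummy = np.random.randn(3 * sr).astype(np.float32)
    modified = rescale_pauses(dummy, sr, [(0.2, 1.0), (1.8, 2.6)], pause_scale=0.5)
    ```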

    Individual and environment-related acoustic-phonetic strategies for communicating in adverse conditions

    In many situations it is necessary to produce speech in ‘adverse conditions’: that is, conditions that make speech communication difficult. Research has demonstrated that speaker strategies, as described by a range of acoustic-phonetic measures, can vary both at the individual level and according to the environment, and are argued to facilitate communication. There has been debate as to the environmental specificity of these adaptations and their effectiveness in overcoming communication difficulty. Furthermore, the manner and extent to which adaptation strategies differ between individuals is not yet well understood. This thesis presents three studies that explore the acoustic-phonetic adaptations of speakers in noisy and degraded communication conditions and their relationship with intelligibility. Study 1 investigated the effects of temporally fluctuating maskers on global acoustic-phonetic measures associated with speech in noise (Lombard speech). The results replicated findings of increased power in the modulation spectrum in Lombard speech, but showed little evidence of adaptation to masker fluctuations via the temporal envelope. Study 2 collected a larger corpus of semi-spontaneous communicative speech in noise and in other degradations perturbing specific acoustic dimensions. Speakers showed different adaptations across the environments, which appeared suited to overcoming steady and temporally fluctuating noise, spectral and pitch information restricted by a noise-excited vocoder, and a simulated sensorineural hearing loss. Analyses of inter-speaker variation in Studies 1 and 2 showed that behaviour was highly variable, although some recurring strategy combinations were identified. Study 3 investigated the intelligibility of strategies ‘tailored’ to specific environments and the relationship between intelligibility and speaker acoustics, finding a benefit of tailored speech adaptations and discussing the potential roles of speaker flexibility, adaptation level, and intrinsic intelligibility. The overall results are discussed in relation to models of communication in adverse conditions, and a model accounting for individual variability in these conditions is proposed.
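
    Study 1 analyses power in the temporal-envelope modulation spectrum. A minimal sketch of one common way to compute that kind of measure (Hilbert envelope followed by a Fourier transform) is shown below; it is a generic illustration on a dummy signal, not the exact analysis pipeline used in the thesis.

    ```python
    # Minimal sketch of a temporal-envelope modulation spectrum: extract the
    # amplitude envelope with the Hilbert transform, then take its power spectrum.
    import numpy as np
    from scipy.signal import hilbert

    def modulation_spectrum(signal, sr, max_mod_hz=32):
        envelope = np.abs(hilbert(signal))              # amplitude envelope
        envelope = envelope - envelope.mean()           # remove DC component
        power = np.abs(np.fft.rfft(envelope)) ** 2      # modulation power
        freqs = np.fft.rfftfreq(len(envelope), d=1.0 / sr)
        keep = freqs <= max_mod_hz                      # low modulation rates carry speech rhythm
        return freqs[keep], power[keep]

    # Example on a dummy 1-second signal at 16 kHz:
    sr = 16000
    dummy = np.random.randn(sr).astype(np.float32)
    freqs, power = modulation_spectrum(dummy, sr)
    ```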

    Characterisation of disordered auditory processing in adults who present to audiology with hearing difficulties in presence of normal hearing thresholds: Correlation between auditory tests and symptoms

    The diagnosis of auditory processing disorder (APD) remains controversial. Quantifying symptoms in individuals with APD by using validated questionnaires may help to better understand the disorder and inform appropriate diagnostic evaluation. Aims: This study aimed to characterise the symptoms of APD and correlate them with the results of auditory processing (AP) tests. Methods: Phase 1: Normative data for a speech-in-babble test, to be used as part of the APD test battery, were collected from 69 normal volunteers aged 20–57 years. Phase 2: Sixty adult subjects with hearing difficulties and a normal audiogram and 38 healthy age-matched controls completed three validated questionnaires (Amsterdam Inventory for Auditory Disability; Speech, Spatial and Qualities of Hearing Scale; hyperacusis questionnaire) and underwent AP tests, including dichotic digits, frequency and duration pattern, gaps-in-noise, speech-in-babble, and suppression of otoacoustic emissions by contralateral noise. The subjects were categorised into a clinical APD group or a clinical non-APD group depending on whether they met the criterion of two failed tests. The questionnaire scores in the three groups were compared. Phase 3: The questionnaire scores were correlated with the AP test results in 58/60 clinical subjects and 38 of the normal subjects. Results: Phase 1: Normative data for the speech-in-babble test afforded an upper cut-off mean value of 4.4 dB for both ears. Phase 2: Adults with APD presented with hearing difficulties in quiet and in noise, difficulties in localising, recognising and detecting sounds, and hyperacusis, with significantly poorer scores than clinical non-APD subjects and normal controls. Phase 3: Weak to moderate correlations were noted between the scores of the three questionnaires and the AP tests. Correlations were strongest for the gaps-in-noise, speech-in-babble and dichotic digits tests with all three questionnaires. Conclusions: The three validated questionnaires may help identify adults with normal hearing who need referral for APD assessment.
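
    Phase 3 reports weak to moderate correlations between questionnaire scores and AP test results. The sketch below shows how one such correlation might be computed (Spearman's rank correlation via SciPy) on made-up scores; the values and the pairing of measures are hypothetical and do not reproduce the study's data or statistical code.

    ```python
    # Illustrative questionnaire-vs-test correlation of the kind reported in Phase 3.
    # Scores below are made-up placeholders, not study data.
    from scipy.stats import spearmanr

    questionnaire_scores = [12, 18, 25, 30, 22, 15, 28, 35]              # e.g. hyperacusis questionnaire
    gaps_in_noise_scores = [6.0, 7.5, 9.0, 11.0, 8.0, 6.5, 10.0, 12.5]   # e.g. gaps-in-noise thresholds (ms)

    rho, p_value = spearmanr(questionnaire_scores, gaps_in_noise_scores)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
    ```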

    Hearing Loss and the Voice

    The voice varies according to the context of speech and to the physical and psychological condition of the human being, and there is always a normal standard for vocal output. Hearing loss can impair voice production, causing social, educational and speech limitations, with specific deviations of communication related to speech and voice. Usually the voice is not the main focus of speech-language pathology therapy with individuals with hearing loss, but its deviations can have such a negative impact on this population that they interfere with speech intelligibility and crucially compromise the social integration of the individual. The literature extensively explores the acoustic and perceptual characteristics of children and adults with hearing loss. Voice problems in individuals with this impairment are directly related to its type and severity, age, gender, and the type of hearing device used. While individuals with mild or moderate hearing loss may present only with resonance problems, severely impaired individuals may lack intensity and frequency control, among other alterations. Commonly found vocal deviations include strain, breathiness, roughness, monotone, absence of rhythm, unpleasant quality, hoarseness, vocal fatigue, high pitch, reduced volume, loudness with excessive variation, unbalanced resonance, altered breathing pattern, brusque vocal attack, and imprecise articulation. These characteristics reflect the inability of deaf speakers to control their vocal performance, owing to the lack of auditory monitoring of their own voice caused by the hearing loss. Hence, the development of intelligible speech with good voice quality in the hearing impaired remains a challenge, despite the sophisticated technological advances in hearing aids, cochlear implants and other implantable devices. The purpose of this chapter is therefore to present an extensive review of the literature and to describe our experience regarding the evaluation, diagnosis and treatment of voice disorders in individuals with hearing loss.