
    Communicating Emotion: Vocal Expression of Linguistic and Emotional Prosody in Children With Mild to Profound Hearing Loss Compared With That of Normal Hearing Peers

    Objectives: Emotional prosody is known to play an important role in social communication. Research has shown that children with cochlear implants (CCIs) may face challenges in their ability to express prosody, as their expressions may have less distinct acoustic contrasts and therefore may be judged less accurately. The prosody of children with milder degrees of hearing loss, who wear hearing aids, has rarely been investigated. A better understanding of prosodic expression by children with hearing loss, hearing aid users in particular, could raise awareness among healthcare professionals and parents of limitations in social communication, which in turn may lead to more targeted rehabilitation. This study aimed to compare the prosodic expression potential of children wearing hearing aids (CHA) with that of CCIs and children with normal hearing (CNH). Design: In this prospective experimental study, utterances of pediatric hearing aid users, cochlear implant users, and CNH containing emotional expressions (happy, sad, and angry) were recorded during a reading task. Three acoustic properties were calculated for each utterance: fundamental frequency (F0), variation in fundamental frequency (SD of F0), and intensity. Acoustic properties of the utterances were compared within subjects and between groups. Results: A total of 75 children were included (CHA: 26, CCI: 23, and CNH: 26). Participants were between 7 and 13 years of age. The 15 CCIs with congenital hearing loss had received their cochlear implants at a median age of 8 months. The acoustic patterns of emotions uttered by CHA were similar to those of CCI and CNH. Only in CCI did we find no difference in F0 variation between happiness and anger, although an intensity difference was present. In addition, CCI and CHA produced poorer happy-sad contrasts than did CNH. Conclusions: The findings of this study suggest that, on a fundamental acoustic level, both CHA and CCI have a prosodic expression potential that is almost on par with that of their normal-hearing peers. However, since some minor limitations were observed in the prosodic expression of these children, it is important to determine whether these differences are perceptible to listeners and could affect social communication. This study sets the groundwork for further research that will help us fully understand the implications of these findings and how they may affect the communication abilities of these children. With a clearer understanding of these factors, we can develop effective ways to help improve their communication skills.
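
    The three acoustic measures named in the Design (mean F0, SD of F0, and intensity) are standard prosodic features that can be extracted from recordings with off-the-shelf tools. The study does not state its analysis software, so the following is a minimal illustrative Python sketch using librosa; the file name and pitch-search range are assumptions.

```python
# pip install librosa numpy
import librosa
import numpy as np

def prosodic_features(wav_path, fmin=65.0, fmax=600.0):
    """Estimate mean F0, F0 variation (SD), and mean intensity for one utterance."""
    y, sr = librosa.load(wav_path, sr=None)  # keep the file's native sample rate

    # F0 track via probabilistic YIN; unvoiced frames come back as NaN.
    f0, voiced_flag, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    f0_voiced = f0[~np.isnan(f0)]

    # Frame-level RMS energy, converted to dB as a rough intensity measure.
    rms = librosa.feature.rms(y=y)[0]
    intensity_db = 20 * np.log10(np.maximum(rms, 1e-10))

    return {
        "mean_f0_hz": float(np.mean(f0_voiced)),
        "sd_f0_hz": float(np.std(f0_voiced)),
        "mean_intensity_db": float(np.mean(intensity_db)),
    }

print(prosodic_features("utterance.wav"))  # hypothetical file name
```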

    Perception of speech, music and emotion by hearing-impaired listeners

    The everyday tasks of perceiving speech, music and emotional expression via both of these media are made much more difficult in the case of hearing impairment. Chiefly, this is because relevant acoustic cues are less clearly audible, owing both to the hearing loss itself and to the limitations of available hearing prostheses. This thesis focusses specifically on two such devices, the cochlear implant (CI) and the hearing aid (HA), and asks two overarching questions: how do users approach music and speech perception tasks, and how can performance be improved? The first part of the thesis considered auditory perception of emotion by CI users. In particular, the underlying mechanisms by which this population perform such tasks are poorly understood. This topic was addressed by a series of emotion discrimination experiments, featuring both normal-hearing (CI-simulated) participants and real CI users, in which listeners heard stimuli with processing designed to systematically attenuate different acoustic features. Additionally, a computational modelling approach was utilised to estimate participants' listening strategies and whether or not these were optimal. It was shown that the acoustic features attended to by participants were a compromise between those generally better preserved by the CI and those particularly salient for each stimulus. In the latter half of the thesis, the nature of assessment of music perception by hearing-impaired listeners was considered. Speech perception has typically taken precedence in this domain, which, it is argued, has left assessment of music perception relatively underdeveloped. This problem was addressed by the creation of a novel psychoacoustical testing procedure, similar to those typically used with speech. This paradigm was evaluated via listening experiments with both HA users and CI-simulated listeners. In general, the results indicated that the measure produced both valid and reliable results, suggesting the suitability of the procedure as both a clinical and an experimental tool. Lastly, the thesis considered the consequences of the various findings for both research and clinical practice, contextualising the results with reference to the primary research questions addressed, and thereby highlighting what remains to be discovered.
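
    The computational modelling of listening strategies is described only at a high level here. One common way to estimate cue weights is to regress listeners' trial-by-trial choices onto the acoustic cue differences between stimuli and read the fitted coefficients as relative reliance on each cue. Below is a hedged sketch of that general idea with synthetic stand-in data; it is not the thesis's actual model, and all variable names are invented.

```python
# pip install numpy scikit-learn
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data: per trial, the difference between two stimuli on
# three acoustic cues (F0, intensity, speech rate), z-scored, plus a simulated
# listener's binary choice driven mostly by F0.
n_trials = 500
cue_diffs = rng.normal(size=(n_trials, 3))      # columns: dF0, dIntensity, dRate
true_weights = np.array([1.5, 0.5, 0.2])        # a listener who relies mostly on F0
p_choose = 1.0 / (1.0 + np.exp(-cue_diffs @ true_weights))
choices = rng.random(n_trials) < p_choose

# Fit a logistic model; normalised coefficients estimate relative cue weights.
model = LogisticRegression().fit(cue_diffs, choices)
weights = model.coef_[0] / np.abs(model.coef_[0]).sum()
print(dict(zip(["F0", "intensity", "rate"], np.round(weights, 2))))
```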

    Electro-Haptic Stimulation: A New Approach for Improving Cochlear-Implant Listening

    Cochlear implants (CIs) have been remarkably successful at restoring speech perception for severely to profoundly deaf individuals. Despite this success, several limitations remain, particularly in CI users' ability to understand speech in noisy environments, locate sound sources, and enjoy music. A new multimodal approach has been proposed that uses haptic stimulation to provide sound information that is poorly transmitted by the implant. This augmentation of the electrical CI signal with haptic stimulation (electro-haptic stimulation; EHS) has been shown to improve speech-in-noise performance and sound localization in CI users, and there is evidence that it could also enhance music perception. We review the evidence that EHS can enhance CI listening and discuss key areas where further research is required, including the neural basis of EHS enhancement, its effectiveness across different clinical populations, and the optimization of signal-processing strategies. We also discuss the significant potential for a new generation of haptic neuroprosthetic devices to aid those who cannot access hearing-assistive technology because of biomedical or healthcare-access issues. While significant further research and development are required, we conclude that EHS represents a promising new approach that could, in the near future, offer a non-invasive, inexpensive means of substantially improving clinical outcomes for hearing-impaired individuals.
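
    EHS systems generally present amplitude-envelope information from the audio through the skin, for example by modulating a vibrotactile carrier near the skin's sensitivity peak around 250 Hz. The sketch below illustrates that general idea only; it is not any published EHS signal-processing strategy, and the envelope cutoff and carrier frequency are assumptions.

```python
# pip install numpy scipy
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def audio_to_haptic(audio, sr, env_cutoff_hz=50.0, carrier_hz=250.0):
    """Turn an audio signal into a vibrotactile drive signal:
    envelope extraction followed by amplitude modulation of a tactile carrier."""
    # Amplitude envelope via the Hilbert transform, then low-pass smoothed.
    envelope = np.abs(hilbert(audio))
    b, a = butter(4, env_cutoff_hz / (sr / 2), btype="low")
    envelope = filtfilt(b, a, envelope)

    # Modulate a carrier in the vibrotactile range with the extracted envelope.
    t = np.arange(len(audio)) / sr
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# Example: a 1 s synthetic test tone in place of recorded speech.
sr = 16000
audio = np.sin(2 * np.pi * 150 * np.arange(sr) / sr) * np.hanning(sr)
haptic = audio_to_haptic(audio, sr)
```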

    Recognition and cortical haemodynamics of vocal emotions – an fNIRS perspective

    Normal-hearing (NH) listeners rely heavily on variations in the fundamental frequency (F0) of speech to identify vocal emotions. Without reliable F0 cues, as is the case for cochlear implant users, listeners' ability to extract emotional meaning from speech is reduced. This thesis describes the development of an objective measure of vocal emotion recognition. The program of three experiments investigates: 1) NH listeners' abilities to use F0, intensity, and speech-rate cues to recognise emotions; 2) cortical activity associated with individual vocal emotions, assessed using functional near-infrared spectroscopy (fNIRS); 3) cortical activity evoked by vocal emotions in natural speech and in speech with uninformative F0, also assessed using fNIRS.
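
    "Speech with uninformative F0" is typically produced by resynthesising each utterance with its pitch contour flattened. The thesis abstract does not state which tool was used; the sketch below shows one common recipe using the WORLD vocoder via the pyworld package (an assumption, not the thesis's method), fixing every voiced frame at the utterance's median F0.

```python
# pip install numpy pyworld soundfile
import numpy as np
import pyworld as pw
import soundfile as sf

def flatten_f0(wav_in, wav_out):
    """Resynthesise an utterance with a flat (uninformative) pitch contour."""
    x, fs = sf.read(wav_in)
    if x.ndim > 1:
        x = x.mean(axis=1)                         # mix down to mono
    x = np.ascontiguousarray(x, dtype=np.float64)  # WORLD expects float64

    f0, spectral_env, aperiodicity = pw.wav2world(x, fs)

    # Keep the voicing decisions (zeros stay zero) but fix every voiced frame
    # at the utterance's median F0, removing the emotion-bearing contour.
    voiced = f0 > 0
    flat_f0 = np.where(voiced, np.median(f0[voiced]), 0.0)

    sf.write(wav_out, pw.synthesize(flat_f0, spectral_env, aperiodicity, fs), fs)

flatten_f0("utterance.wav", "utterance_flat_f0.wav")  # hypothetical file names
```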

    Decoding emotions through music and voice

    The aim of this thesis is to compare the fundamental mechanisms underlying vocal and musical emotion perception. This objective is supported by many reports and theories advancing the idea of common neural substrates for the processing of vocal and musical emotions. It has been proposed that music, in order to make us perceive emotions, recruits/recycles the emotional circuits that evolved primarily for the processing of biologically important vocalisations (e.g. cries, screams). Although some studies have found strong similarities between these two timbres (voice, music) from the cerebral (emotional processing) and acoustic (emotional expression) points of view, some acoustic and neural differences specific to each timbre have also been reported. It is possible that the reported differences are not specific to timbre but arise from factors specific to the stimuli used, such as their complexity and length. Here, we propose to circumvent these problems of stimulus comparability by using the simplest emotional expressions in both domains. To achieve the overall objective of the thesis, the work was carried out in two stages. First, a battery of musical emotional stimuli comparable to the vocal stimuli already available (the Montreal Affective Voices) was developed. Stimuli (Musical Emotional Bursts) expressing four emotions (happiness, fear, sadness, neutrality), performed on the violin and the clarinet, were recorded and validated. These Musical Emotional Bursts obtained a high recognition rate (M = 80.4%) and received arousal and valence judgments corresponding to the emotions they represented. In a second stage, we used these newly validated stimuli and the Montreal Affective Voices in two experimental comparison studies. First, functional magnetic resonance imaging was used to compare the neural circuits engaged in processing these two types of emotional expression. Independently of their vocal or musical nature, emotion-specific activity was observed in the auditory cortex (centred on the superior temporal gyrus) and in limbic regions (amygdala/parahippocampal gyrus), whereas no activity specific to vocal or musical stimuli was observed. Subsequently, we compared the perception of vocal and musical emotions under cochlear implant simulation. Because this simulation severely degrades the pitch cues that are important for emotional discrimination, it allowed us to determine which secondary acoustic cues support emotional perception in cochlear implant users. Examination of the acoustic characteristics and emotional judgments showed that certain timbral characteristics (brightness, energy, and roughness) common to voice and music are used to make emotional judgments under cochlear implant simulation in both domains. The care taken in stimulus selection allowed us to highlight the strong similarities (acoustic, neural) involved in the perception of vocal and musical emotions. This convergence of evidence provides important support for the hypothesis of common fundamental neural circuits for the processing of vocal and musical emotions.
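
    Cochlear implant simulations of this kind are usually implemented as noise-excited envelope vocoders: the signal is split into a small number of frequency bands, each band's amplitude envelope is extracted, and the envelopes re-modulate band-limited noise, discarding the fine structure that carries pitch. Below is a minimal sketch of that standard technique; the channel count and band edges are assumptions, not the thesis's exact parameters.

```python
# pip install numpy scipy
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(audio, sr, n_channels=8, lo=100.0, hi=7000.0):
    """Noise-excited envelope vocoder, a common acoustic simulation of CI hearing."""
    edges = np.geomspace(lo, hi, n_channels + 1)    # log-spaced analysis bands
    noise = np.random.default_rng(0).standard_normal(len(audio))
    out = np.zeros_like(audio)
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="band", fs=sr, output="sos")
        band = sosfiltfilt(sos, audio)
        envelope = np.abs(hilbert(band))            # per-band amplitude envelope
        carrier = sosfiltfilt(sos, noise)           # noise limited to the same band
        out += envelope * carrier
    return out / np.max(np.abs(out))                # normalise to avoid clipping
```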

    Strategies of auditory categorisation in cochlear implant users and normal hearing listeners

    Auditory categorisation is a process essential for coping with the large number of sounds encountered in the real world. However, it is affected by the use of a cochlear implant (CI) device. Whilst CI users may attain high levels of speech performance, their ability to perceive other kinds of sounds is impaired in comparison to normal-hearing listeners (NHL). The current project therefore proposes a new approach, looking at the perception of sounds at the level of categories rather than individual sounds. In the first study, CI users and NHL were tested to see how accurately they categorised a series of vocal, environmental and musical sounds. Results showed that CI users with the longest duration of implantation, and therefore the most listening experience with the device, demonstrated results more similar to those of NHL. A second study involving only vocal sounds showed that information pertaining to the emotion and age of a speaker was used to categorise different speakers, and that gender was not strongly perceived. A third study looked at how different environmental sounds were categorised and whether the auditory context (i.e. location) helped the categorisation and identification of vocoded sounds. Although context information did not appear to aid listeners, the results showed the robustness of certain information for sound perception, such as the perception of the sound-producing action and material, even when listeners could not identify the sounds. The research domain of auditory categorisation is not as developed as that of visual categorisation, and this project therefore contributes to the further understanding of how sounds are categorised and what categories are commonly used by listeners. For example, the results revealed a category grouping sounds by their producing action and material, as well as categories of sounds corresponding to human vocalisations, human actions, nature, mechanical sounds and musical sounds, which agrees with previously conducted studies. Concerning the results of CI users, it appears that experienced listeners may have fewer problems perceiving auditory categories than identifying individual sounds. This is the first study to test CI users in a free-sorting task and, together with the few studies that have used auditory categorisation tasks with CI users, it suggests that categorical perception may be an appropriate and effective way to test and rehabilitate CI users' perception of different kinds of sounds.
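
    Free-sorting data of this kind are commonly analysed by counting how often each pair of sounds is placed in the same group, converting those co-occurrence counts to dissimilarities, and applying hierarchical clustering to recover the shared categories. The following is a sketch of that standard analysis with invented toy data, not the project's actual pipeline.

```python
# pip install numpy scipy
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy free-sorting data: each row is one participant's partition of 6 sounds,
# where equal labels mean "placed in the same pile".
sorts = np.array([
    [0, 0, 1, 1, 2, 2],
    [0, 0, 1, 1, 1, 2],
    [0, 1, 1, 1, 2, 2],
])
n_items = sorts.shape[1]

# Co-occurrence matrix: proportion of participants grouping each pair together.
co = np.zeros((n_items, n_items))
for labels in sorts:
    co += labels[:, None] == labels[None, :]
co /= len(sorts)

# Convert similarity to dissimilarity and cluster with average linkage.
dissimilarity = 1.0 - co
np.fill_diagonal(dissimilarity, 0.0)
tree = linkage(squareform(dissimilarity), method="average")
print(fcluster(tree, t=3, criterion="maxclust"))  # e.g. 3 recovered categories
```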

    Physiology, Psychoacoustics and Cognition in Normal and Impaired Hearing

    otorhinolaryngology; neurosciences; hearing
