171 research outputs found

    Outlook Magazine, Spring 2015


    Temporal integration in cochlear implants and the effect of high pulse rates

    Although cochlear implants (CIs) have proven to be an invaluable help for many people afflicted with severe hearing loss, there are still many hurdles left before a full restoration of hearing. A better understanding of how individual stimuli in a pulse train interact temporally to form a conjoined percept, and of what effects the stimulation rate has on the percept of loudness, will be beneficial for further improvements in the development of new coding strategies and thus in the quality of life of CI wearers. The two experiments presented here deal with the topic of temporal integration in CIs and raise the question of the effects of the high stimulation rates made possible by the broad spread of stimulation. To this end, curves of equal loudness were measured as a function of pulse-train length for different stimulation characteristics. In the first, exploratory experiment, threshold and maximum acceptable loudness (MAL) were measured, and the existence and behaviour of the critical duration of integration in cochlear implants is discussed. In the second experiment, the effect of level was further investigated by including MAL measurements at shorter durations, as well as a line of equal loudness at a comfortable level. It is found that the amount of temporal integration (the slope of integration as a function of duration) is greatly decreased in electrical hearing compared to acoustic hearing. Higher stimulation rates seem to have a compensating effect, increasing the slope with increasing rate. The highest rates investigated here lead to slopes that are comparable to those found in persons with normal hearing and in persons with hearing impairment. Rate also increases the dynamic range, which is generally taken to be a correlate of good performance. The values presented here point towards larger effects of rate on dynamic range than what has been found so far in the literature for more moderate rate ranges.
While rate effects on threshold, dynamic range and integration slope seem to act uniformly across the different test subjects, the critical duration of integration varies strongly but inconsistently, possibly reflecting more central, individual-specific effects. Additionally, measurements of the voltage spread in human CI wearers are presented and used to validate a 3D computational model of the human cochlea developed in our group. The theoretical model falls squarely within the distribution of measurements. A single, implant-dependent voltage offset seems to adequately explain most of the variability.


    Experiential Perspectives on Sound and Music for Virtual Reality Technologies

    This thesis examines the intersection of sound, music, and virtuality within current and next-generation virtual reality technologies, with a specific focus on exploring the experiential perspectives of users and participants within virtual experiences. The first half of the thesis constructs a new theoretical model for examining intersections of sound and virtual experience. In Chapter 1, a new framework for virtual experience is constructed, consisting of three key elements: virtual hardware (e.g., displays, speakers); virtual software (e.g., rules and systems of interaction); and virtual externalities (i.e., physical spaces used for engaging in virtual experiences). By applying this new model, methodical examinations of complex virtual experiences become possible. Chapter 2 examines the second axis of the thesis by constructing an understanding of how sound is designed, implemented, and received within virtual reality. The concept of soundscapes is explored in the context of experiential perspectives, serving as a useful approach for describing received auditory phenomena. Auditory environments are proposed as a new model for exploring how auditory phenomena can be broadcast to audiences. Chapter 3 explores how inauthenticity within sound can impact users in virtual experience and uses authenticity to critically examine challenges surrounding sound in virtual reality. Constructions of authenticity in music performance are used to illustrate how authenticity is constructed within virtual experience. Chapter 4 integrates music into the understanding of auditory phenomena constructed throughout the thesis: music is rarely part of the created world in a virtual experience. Rather, it is typically something that only the audience, as external observers of the created world, can hear.
Therefore, music within immersive virtual reality may be challenging, as the audience is placed within the created world. The second half of this thesis uses this theoretical model to consider contemporary and future approaches to virtual experiences. Chapter 5 constructs a series of case studies to demonstrate the use of the framework as a trans-medial and intra/inter-contextual tool of analysis. Through use of the framework, varying approaches to the implementation of sound and music in virtual reality technologies are considered, revealing trans-medial commonalities of immersion and engagement with virtual experiences through sound. Chapter 6 examines near-future technologies, including brain-computer interfaces and other full-immersion technologies, to identify key issues in the design and implementation of future virtual experiences and to suggest how interdisciplinary collaboration may help to develop solutions to these issues. Chapter 7 considers how the proposed model for virtuality might allow for methodical examination of similar issues within other fields, such as acoustics and architecture, and examines the ethical considerations that may become relevant as virtual technology develops within the 21st century. This research explores and rationalises theoretical models of virtuality and sound, permitting designers and developers to improve the implementation of sound and music in virtual experiences for the purpose of improving user outcomes.

    “Extimate” Technologies and Techno-Cultural Discontent

    According to a chorus of authors, the human life-world is currently invaded by an avalanche of high-tech devices referred to as “emerging”, “intimate”, or “NBIC” technologies: a new type of contrivances or gadgets designed to optimize cognitive or sensory performance and/or to enable mood management. Rather than manipulating objects in the outside world, they are designed to influence human bodies and brains more directly, and on a molecular scale. In this paper, these devices will be framed as ‘extimate’ technologies, a concept borrowed from Jacques Lacan. Although Lacan is not commonly regarded as a philosopher of technology, the dialectical relationship between human desire and technological artefacts runs as an important thread through his work. Moreover, he was remarkably prescient concerning the blending of life science and computer science, which is such a distinctive feature of the current techno-scientific turn. Building on a series of Lacanian concepts, my aim is to develop a psychoanalytical diagnostic of the technological present. Finally, I will indicate how such an analysis may inform our understanding of human life and embodiment as such.

    Journal of Integrative Research & Reflection

    Journal articles and additional content compiled into one document.

    Décoder les émotions à travers la musique et la voix

    The aim of this thesis is to compare the fundamental mechanisms underlying vocal and musical emotion perception. This objective is supported by many reports and theories advancing the idea of common neural substrates for the processing of vocal and musical emotions. It is proposed that music, in order to make us perceive emotions, recruits/recycles the emotional circuits that evolved primarily for the processing of biologically important vocalisations (e.g. cries, screams). Although some studies have found great similarities between these two timbres (voice, music) from the cerebral (emotional processing) and acoustic (emotional expression) points of view, some acoustic and neural differences specific to each timbre have also been reported. It is possible that the differences described are not specific to timbre but instead reflect factors specific to the stimuli used, such as their complexity and length. Here, it is proposed to circumvent these problems of stimulus comparability by using the simplest emotional expressions in both domains. To achieve the overall objective of the thesis, the work was carried out in two stages. First, a battery of musical emotional stimuli comparable to the vocal stimuli already available (the Montreal Affective Voices) was developed. Stimuli (Musical Emotional Bursts) expressing four emotions (happiness, fear, sadness, neutrality) performed on the violin and the clarinet were recorded and validated. These Musical Emotional Bursts obtained a high recognition rate (M = 80.4%) and received arousal and valence judgments corresponding to the emotion they represented.
Second, we used these newly validated stimuli and the Montreal Affective Voices to carry out two experimental comparison studies. First, functional magnetic resonance imaging was used to compare the neural circuits engaged in processing these two types of emotional expressions. Independently of their vocal or musical nature, emotion-specific activity was observed in the auditory cortex (centred on the superior temporal gyrus) and in limbic regions (parahippocampal gyrus/amygdala), whereas no activity specific to vocal or musical stimuli was observed. Subsequently, we compared the perception of vocal and musical emotions under cochlear implant simulation. Because this simulation greatly degrades the perception of acoustic cues related to pitch (important for emotional discrimination), it allowed us to determine which secondary acoustic cues are important for emotional perception in cochlear implant users. Examination of the acoustic characteristics and emotional judgments showed that certain timbral characteristics (brightness, energy, and roughness) common to voice and music are used to make emotional judgments under cochlear implant simulation in both domains. The careful attention paid to stimulus selection allowed us to bring forward the great similarities (acoustic, neural) involved in the perception of vocal and musical emotions. This convergence of evidence provides important support for the hypothesis of common fundamental neural circuits for the processing of vocal and musical emotions.
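    Cochlear implant simulations of the kind described in this abstract are commonly implemented as channel vocoders, which discard fine pitch structure while preserving slowly varying band envelopes. As a rough illustration only (not the thesis's actual stimulus-processing pipeline; channel count, band edges, and envelope cutoff below are arbitrary assumptions), a minimal NumPy noise-vocoder sketch might look like this:

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=8, fmin=100.0, fmax=8000.0, env_cutoff=50.0):
    """Crude noise-vocoder sketch of a cochlear implant simulation.

    Splits the input into log-spaced frequency bands, extracts each band's
    slowly varying amplitude envelope, and uses it to modulate band-limited
    noise. Fine spectral (pitch) cues are discarded; only envelope and
    coarse spectral cues survive.
    """
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(signal)
    edges = np.geomspace(fmin, fmax, n_channels + 1)  # log-spaced band edges

    rng = np.random.default_rng(0)
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spec * band_mask, n)                  # band-pass the input
        env = np.abs(band)                                        # rectify
        env = np.fft.irfft(np.fft.rfft(env) * (freqs <= env_cutoff), n)  # smooth envelope
        carrier = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * band_mask, n)
        out += np.clip(env, 0, None) * carrier                    # envelope-modulated noise
    return out
```

Under such processing, voiced pitch contours become inaudible while the timbral cues the abstract identifies (brightness, energy, roughness) remain partly recoverable from the band envelopes.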