Outlook Magazine, Spring 2015
Temporal integration in cochlear implants and the effect of high pulse rates
Although cochlear implants (CIs) have proven an invaluable help for many people with severe hearing loss, many hurdles remain before hearing can be fully restored. A better understanding of how the individual stimuli in a pulse train interact temporally to form a unified percept, and of how the stimulation rate affects perceived loudness, will benefit the development of new coding strategies and thus the quality of life of CI wearers.
The two experiments presented here deal with temporal integration in CIs and raise the question of what effects the high stimulation rates made possible by modern devices have on loudness. To this end, curves of equal loudness were measured as a function of pulse-train duration for different stimulation characteristics.
In the first, exploratory experiment, threshold and maximum acceptable loudness (MAL) were measured, and the existence and behaviour of a critical duration of integration in cochlear implants are discussed. In the second experiment, the effect of level was investigated further by including MAL measurements at shorter durations, as well as an equal-loudness line at a comfortable level.
The amount of temporal integration (the slope of integration as a function of duration) is found to be greatly reduced in electrical hearing compared with acoustic hearing. Higher stimulation rates seem to have a compensating effect, the slope increasing with increasing rate. The highest rates investigated here yield slopes comparable even to those found in normal-hearing and hearing-impaired listeners.
Rate also increases the dynamic range, which is generally taken to be a correlate of good performance. The values presented here point towards larger effects of rate on dynamic range than what has been found so far in the literature for more moderate rate ranges. While the effects of rate on threshold, dynamic range and integration slope seem to act uniformly across test subjects, the critical duration of integration varies strongly and inconsistently between subjects, possibly reflecting more central, individual-specific effects.
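The two quantities discussed above, the integration slope (level change per decade of duration) and the dynamic range (MAL minus threshold, in dB), can be illustrated with a minimal sketch. All numbers below are invented placeholders for illustration, not data from the experiments:

```python
import numpy as np

# Hypothetical levels (dB) for one electrode; values are invented.
durations_ms = np.array([20, 50, 100, 200, 500])         # pulse-train durations
threshold_db = np.array([52.0, 50.5, 49.2, 48.1, 46.9])  # detection threshold
mal_db       = np.array([68.0, 67.2, 66.5, 66.0, 65.5])  # maximum acceptable loudness

# Integration slope: linear fit of level against log10(duration),
# i.e. dB of level change per decade of duration (negative = integration).
slope_thr = np.polyfit(np.log10(durations_ms), threshold_db, 1)[0]

# Dynamic range at each duration: MAL minus threshold, in dB.
dynamic_range = mal_db - threshold_db

print(f"threshold integration slope: {slope_thr:.2f} dB/decade")
print(f"dynamic range at 100 ms: {dynamic_range[2]:.1f} dB")
```

With the invented values above, the fitted slope is around -3.7 dB per decade; the abstract's finding is that such slopes are shallower in electrical hearing than in acoustic hearing, and steepen at higher pulse rates.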
Additionally, measurements of the voltage spread in human CI wearers are presented and used to validate a 3D computational model of the human cochlea developed in our group. The theoretical model falls squarely within the distribution of measurements, and a single, implant-dependent voltage offset seems to adequately explain most of the variability.
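The idea of a single per-implant offset explaining most of the variability can be sketched as a one-parameter least-squares fit: for a fixed additive offset, the squared error between measurement and model is minimised by the mean residual. The model and measurement values below are invented placeholders, not the group's actual data:

```python
import numpy as np

# Hypothetical voltage-spread profiles (mV) along the electrode array:
# 3D-model prediction vs. one subject's measurement. Numbers are invented.
model_mv    = np.array([1.80, 1.35, 1.02, 0.80, 0.65])
measured_mv = np.array([2.05, 1.62, 1.30, 1.06, 0.91])

# sum((measured - model - c)^2) is minimised at c = mean(measured - model):
# one fitted number per implant.
offset = np.mean(measured_mv - model_mv)
residual = measured_mv - (model_mv + offset)

print(f"fitted offset: {offset:.3f} mV")
print(f"rms error after offset: {np.sqrt(np.mean(residual**2)):.3f} mV")
```

If the remaining rms error is small compared with the raw model-measurement difference, the single offset accounts for most of the between-implant variability, which is the claim made above.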
Experiential Perspectives on Sound and Music for Virtual Reality Technologies
This thesis examines the intersection of sound, music, and virtuality within current and next-generation virtual reality technologies, with a specific focus on the experiential perspectives of users and participants within virtual experiences. The first half of the thesis constructs a new theoretical model for examining intersections of sound and virtual experience. In Chapter 1, a new framework for virtual experience is constructed from three key elements: virtual hardware (e.g., displays, speakers); virtual software (e.g., rules and systems of interaction); and virtual externalities (i.e., physical spaces used for engaging in virtual experiences). Applying this model makes methodical examination of complex virtual experiences possible. Chapter 2 examines the second axis of the thesis by constructing an understanding of how sound is designed, implemented, and received within virtual reality. The concept of soundscapes is explored in the context of experiential perspectives, serving as a useful approach for describing received auditory phenomena. Auditory environments are proposed as a new model for exploring how auditory phenomena can be broadcast to audiences. Chapter 3 explores how inauthenticity within sound can affect users in virtual experience and uses authenticity to critically examine challenges surrounding sound in virtual reality. Constructions of authenticity in music performance are used to illustrate how authenticity is constructed within virtual experience. Chapter 4 integrates music into the understanding of auditory phenomena constructed throughout the thesis: music is rarely part of the created world in a virtual experience. Rather, it is typically something that only the audience, as external observers of the created world, can hear.
Music within immersive virtual reality may therefore be challenging, as the audience is placed within the created world. The second half of this thesis uses this theoretical model to consider contemporary and future approaches to virtual experiences. Chapter 5 constructs a series of case studies to demonstrate the use of the framework as a trans-medial and intra/inter-contextual tool of analysis. Through use of the framework, varying approaches to the implementation of sound and music in virtual reality technologies are considered, revealing trans-medial commonalities of immersion and engagement with virtual experiences through sound. Chapter 6 examines near-future technologies, including brain-computer interfaces and other full-immersion technologies, to identify key issues in the design and implementation of future virtual experiences and to suggest how interdisciplinary collaboration may help develop solutions to these issues. Chapter 7 considers how the proposed model of virtuality might allow methodical examination of similar issues in other fields, such as acoustics and architecture, and examines the ethical considerations that may become relevant as virtual technology develops over the 21st century. This research explores and rationalises theoretical models of virtuality and sound, permitting designers and developers to improve the implementation of sound and music in virtual experiences and thereby user outcomes.
“Extimate” Technologies and Techno-Cultural Discontent
According to a chorus of authors, the human life-world is currently being invaded by an avalanche of high-tech devices referred to as “emerging,” “intimate,” or “NBIC” technologies: a new type of contrivance or gadget designed to optimize cognitive or sensory performance and/or to enable mood management. Rather than manipulating objects in the outside world, they are designed to influence human bodies and brains more directly, and on a molecular scale. In this paper, these devices will be framed as “extimate” technologies, a concept borrowed from Jacques Lacan. Although Lacan is not commonly regarded as a philosopher of technology, the dialectical relationship between human desire and technological artefacts runs as an important thread through his work. Moreover, he was remarkably prescient concerning the blending of life science and computer science that is such a distinctive feature of the current techno-scientific turn. Building on a series of Lacanian concepts, my aim is to develop a psychoanalytical diagnostic of the technological present. Finally, I will indicate how such an analysis may inform our understanding of human life and embodiment as such.
Journal of Integrative Research & Reflection
Journal articles and additional content compiled into one document
Music and the transhuman ear: Ultrasonics, material bodies, and the limits of sensation
Amid recent moves toward sound as vibrational force, this article argues that hearing has a special role in determining our natural sensory limits, and that recent attempts to push against these limits foreground the question of what status the biological body holds in music perception and performance in the technological age.
Between 1876 and 1894, prominent German acousticians, including Helmholtz, argued that humans could hear vibrations as high as 40,960 Hz. While this claim was ultimately discredited, recent post-tonal works have notated pitches that explicitly play with, or exceed, the ordinary range of human hearing (cf. Schoenberg, Per Nørgård, and Salvatore Sciarrino). In the context of existing ecological approaches to listening, this article asks what kind of listener such works imply. Specifically, it investigates the musical relevance of the Umwelt theory of the Baltic German biologist Jakob von Uexküll, in which individuals “create” the bubble of their perceivable environment through a reciprocal interchange between limited sense capacity and mental habit. I contrast Uexküll’s acceptance of human limits with a transhumanist worldview that anticipates the enhancement of biological sense capacities through technology. Such “morphological freedom”, the right to modify and enhance one’s body (Bostrom 2009), putatively includes augmentation of the auditory system. Finally, by tracing the genealogy of human prosthesis back to the founder of the philosophy of technology (Kapp 1877), I critique the potential of technologies in clinical audiology to grant access to ultrasonic frequencies, and assess the implications of augmented, prosthetic hearing for non-impaired listeners.
The discourse of transhumanism poses questions for musical listening as soon as the body becomes an assemblage subject to variation. It raises the question of how identity, ours as well as that of musical works, might be affected by “morphological freedom”; of the extent to which self-identity becomes the lost referential when agency is distributed between biological and non-biological parts; and of what value the new intellectual vistas hold that emerge when musical experience is conceived in material terms as communication between bodies.
Décoder les émotions à travers la musique et la voix (Decoding Emotions through Music and Voice)
The aim of this thesis is to compare the fundamental mechanisms underlying vocal and musical emotion perception. This objective is supported by numerous reports and theories advancing the idea of common neural substrates for the processing of vocal and musical emotions. It is proposed that music, in order to make us perceive emotions, recruits/recycles the emotional circuits that evolved mainly for the processing of biologically important vocalisations (e.g. cries, screams). Although some studies have found great similarities between these two timbres (voice, music) from the cerebral (emotional processing) and acoustic (emotional expression) points of view, some acoustic and neural differences specific to each timbre have also been reported. It is possible that the reported differences are not specific to timbre but arise from factors specific to the stimuli used, such as their complexity and length. Here, it is proposed to circumvent the problem of stimulus comparability by using the simplest emotional expressions in both domains.
To achieve the overall objective of the thesis, the work was carried out in two stages. First, a battery of musical emotional stimuli comparable to the vocal stimuli already available (the Montreal Affective Voices) was developed. Stimuli (the Musical Emotional Bursts) expressing four emotions (happiness, fear, sadness, neutrality), performed on violin and clarinet, were recorded and validated. These Musical Emotional Bursts obtained a high recognition rate (M = 80.4%) and received arousal and valence judgments corresponding to the emotions they represented. In the second stage, we used these newly validated stimuli and the Montreal Affective Voices in two experimental comparison studies. First, functional magnetic resonance imaging was used to compare the neural circuits engaged in processing these two types of emotional expression. Independently of their vocal or musical nature, emotion-specific activity was observed in the auditory cortex (centred on the superior temporal gyrus) and in limbic regions (amygdala/parahippocampal gyrus), whereas no activity specific to vocal or musical stimuli was observed. Subsequently, we compared the perception of vocal and musical emotions under cochlear implant simulation. Because this simulation greatly degrades the acoustic cues related to pitch (important for emotional discrimination), it allowed us to determine which secondary acoustic cues are important for emotion perception in cochlear implant users. Examination of the acoustic characteristics and emotional judgments showed that certain timbral characteristics (brightness, energy, and roughness) common to voice and music are used to make emotional judgments under cochlear implant simulation in both domains.
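Cochlear implant simulations of the kind described above are commonly implemented as noise vocoders: the signal is split into a few frequency bands, each band's temporal envelope is extracted and used to modulate band-limited noise, discarding the fine pitch structure while preserving timbral cues such as brightness and energy. A minimal sketch follows; the channel count, filter settings, and carrier type are illustrative assumptions, not the parameters used in the thesis:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=4, f_lo=100.0, f_hi=7000.0):
    """Crude n-channel noise vocoder: log-spaced bands, Hilbert envelopes."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # band edges (Hz)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)                     # analysis band
        env = np.abs(hilbert(band))                    # envelope (Hilbert magnitude)
        noise = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * noise                             # envelope modulates band noise
    return out

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)                     # toy input signal
voc = noise_vocode(tone, fs)
print(voc.shape)
```

Because only per-band envelopes survive, the 440 Hz tone's pitch is destroyed but its energy distribution over time is kept, which is why such simulations force listeners onto the secondary timbral cues identified above.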
The careful attention paid to stimulus selection allowed us to highlight the strong similarities (acoustic, neural) involved in the perception of vocal and musical emotions. This convergence of evidence provides important support for the hypothesis of fundamental common neural circuits for the processing of vocal and musical emotions.