
    The Influence of Binaural Room Impulse Responses on Externalization in Virtual Reality Scenarios

    A headphone-based virtual sound image cannot be perceived as perfectly externalized if the acoustics of the synthesized room do not match those of the real listening environment. This effect has been well explored and is known as the room divergence effect (RDE). The RDE is important for the perceived externalization of virtual sounds if listeners are aware of the room-related auditory information provided by the listening environment. In virtual reality (VR) applications, users get a visual impression of the virtual room but may not be aware of its auditory information. It is unknown whether the acoustic congruence between the synthesized (binaurally rendered) room and the visual-only virtual listening environment is important for externalization. VR-based psychoacoustic experiments were performed, and the results reveal that the perceived externalization of virtual sounds depends on listeners’ expectations of the acoustics of the visual-only virtual room. Virtual sound images can be perceived as externalized even when there is an acoustic divergence between the binaurally synthesized room and the visual-only virtual listening environment. However, the “correct” room information in binaural sounds may lead to degraded externalization if the acoustic properties of the room do not match listeners’ expectations.

    Aminoglycoside-Induced Phosphatidylserine Externalization in Sensory Hair Cells Is Regionally Restricted, Rapid, and Reversible

    The aminophospholipid phosphatidylserine (PS) is normally restricted to the inner leaflet of the plasma membrane. During certain cellular processes, including apoptosis, PS translocates to the outer leaflet and can be labeled with externally applied annexin V, a calcium-dependent PS-binding protein. In mouse cochlear cultures, annexin V labeling reveals that the aminoglycoside antibiotic neomycin induces rapid PS externalization, specifically on the apical surface of hair cells. PS externalization is observed within ~75 s of neomycin perfusion, first on the hair bundle and then on membrane blebs forming around the apical surface. Whole-cell capacitance also increases significantly within minutes of neomycin application, indicating that blebbing is accompanied by membrane addition to the hair cell surface. PS externalization and membrane blebbing can, nonetheless, occur independently. Pretreating hair cells with calcium chelators, a procedure that blocks mechanotransduction, or overexpressing a phosphatidylinositol 4,5-bisphosphate (PIP2)-binding pleckstrin homology domain, can reduce neomycin-induced PS externalization, suggesting that neomycin enters hair cells via transduction channels, clusters PIP2, and thereby activates lipid scrambling. The effects of short-term neomycin treatment are reversible. After neomycin washout, PS is no longer detected on the apical surface, apical membrane blebs disappear, and surface-bound annexin V is internalized, distributing throughout the supranuclear cytoplasm of the hair cell. Hair cells can therefore repair, and recover from, neomycin-induced surface damage. Hair cells lacking myosin VI, a minus-end directed actin-based motor implicated in endocytosis, can also recover from brief neomycin treatment. Internalized annexin V, however, remains below the apical surface, thereby pinpointing a critical role for myosin VI in the transport of endocytosed material away from the periphery of the hair cell.

    Tissue-conducted spatial sound fields

    We describe experiments using multiple cranial transducers to achieve auditory spatial perceptual impressions via bone conduction (BC) and tissue conduction (TC), bypassing the peripheral hearing apparatus. This could be useful in cases of peripheral hearing damage or where ear occlusion is undesirable. Previous work (e.g., Stanley and Walker, 2006; MacDonald and Letowski, 2006) indicated that robust lateralization is feasible via tissue conduction. We have utilized discrete signals, stereo, and first-order ambisonics to investigate control of externalization, range, direction in azimuth and elevation, movement, and spaciousness. Early results indicate robust and coherent effects. Current technological implementations are presented and potential development paths discussed.
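    The first-order ambisonic pathway mentioned in the abstract can be sketched in a few lines. The encoder/decoder below is a minimal illustration only: the FuMa-style B-format convention, the simple sampling ("projection") decoder, and the four horizontal transducer directions are all assumptions, not details from the paper.

    ```python
    import numpy as np

    def encode_foa(signal, az, el):
        """FuMa-style first-order B-format encoding of a mono source at
        azimuth az and elevation el (radians)."""
        w = signal / np.sqrt(2)
        x = signal * np.cos(el) * np.cos(az)
        y = signal * np.cos(el) * np.sin(az)
        z = signal * np.sin(el)
        return w, x, y, z

    def decode_foa(w, x, y, z, directions):
        """Basic sampling decoder: project B-format onto each transducer
        direction to obtain one feed per transducer."""
        feeds = []
        for az, el in directions:
            gx = np.cos(el) * np.cos(az)
            gy = np.cos(el) * np.sin(az)
            gz = np.sin(el)
            feeds.append(0.5 * (np.sqrt(2) * w + gx * x + gy * y + gz * z))
        return np.stack(feeds)

    # Four hypothetical cranial transducers on the horizontal plane:
    # front, left, back, right.
    dirs = [(0.0, 0.0), (np.pi / 2, 0.0), (np.pi, 0.0), (-np.pi / 2, 0.0)]
    sig = np.ones(8)
    feeds = decode_foa(*encode_foa(sig, 0.0, 0.0), dirs)
    print(np.argmax(feeds[:, 0]))  # 0: the front transducer dominates for a frontal source
    ```

    With this decoder, a frontal source yields a feed of 1.0 at the front transducer, 0.5 at the sides, and 0 at the back, which is the panning behavior the abstract's lateralization experiments rely on.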

    Spatial Hearing with Incongruent Visual or Auditory Room Cues

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and the listener’s anatomy to be captured at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces the perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli.

    Computational models for listener-specific predictions of spatial audio quality

    Millions of people use headphones every day for listening to music, watching movies, or communicating with others. Nevertheless, sounds presented via headphones are usually perceived inside the head instead of being localized at a naturally external position. Besides externalization and localization, spatial hearing also involves perceptual attributes like apparent source width, listener envelopment, and the ability to segregate sounds. The acoustic basis for spatial hearing is described by the listener-specific head-related transfer functions (HRTFs; Møller et al., 1995). Binaural virtual acoustics based on listener-specific HRTFs can create sounds presented via headphones that are indistinguishable from natural sounds (Langendijk and Bronkhorst, 2000). In this talk, we focus on the dimensions of sound localization that are particularly sensitive to listener-specific HRTFs, that is, along sagittal planes (i.e., vertical planes orthogonal to the interaural axis) and at near distances (sound externalization/internalization). We discuss recent findings from binaural virtual acoustics and models aiming to predict sound externalization (Hassager et al., 2016) and localization in sagittal planes (Baumgartner et al., 2014) based on the listener’s HRTFs. Sagittal-plane localization seems to be well understood, and its model can already reliably predict localization performance in many listening situations (e.g., Marelli et al., 2015; Baumgartner and Majdak, 2015). In contrast, more investigation is required to better understand and build a valid model of sound externalization (Baumgartner et al., 2017). We aim to shed light on the diversity of cues causing degraded sound externalization under spectral distortions by conducting a model-based meta-analysis of psychoacoustic studies. As potential cues we consider monaural and interaural spectral shapes, spectral and temporal fluctuations of interaural level differences, interaural coherence, and broadband inconsistencies between interaural time and level differences, all within a highly comparable template-based modeling framework. Mere differences in sound pressure level between target and reference stimuli were used as a control cue. Our investigations revealed that monaural spectral shapes and the strength of time-intensity trading are potent cues to explain previous results under anechoic conditions. However, future experiments will be required to unveil the actual essence of these cues.

    References:
    Baumgartner, R., and Majdak, P. (2015). “Modeling localization of amplitude-panned virtual sources in sagittal planes,” J Audio Eng Soc 63, 562–569.
    Baumgartner, R., Majdak, P., and Laback, B. (2014). “Modeling sound-source localization in sagittal planes for human listeners,” J Acoust Soc Am 136, 791–802.
    Baumgartner, R., Reed, D. K., Tóth, B., Best, V., Majdak, P., Colburn, H. S., and Shinn-Cunningham, B. (2017). “Asymmetries in behavioral and neural responses to spectral cues demonstrate the generality of auditory looming bias,” Proc Natl Acad Sci USA 114, 9743–9748.
    Hassager, H. G., Gran, F., and Dau, T. (2016). “The role of spectral detail in the binaural transfer function on perceived externalization in a reverberant environment,” J Acoust Soc Am 139, 2992–3000.
    Langendijk, E. H., and Bronkhorst, A. W. (2000). “Fidelity of three-dimensional-sound reproduction using a virtual auditory display,” J Acoust Soc Am 107, 528–537.
    Marelli, D., Baumgartner, R., and Majdak, P. (2015). “Efficient approximation of head-related transfer functions in subbands for accurate sound localization,” IEEE Trans Audio Speech Lang Process 23, 1130–1143.
    Møller, H., Sørensen, M. F., Hammershøi, D., and Jensen, C. B. (1995). “Head-related transfer functions of human subjects,” J Audio Eng Soc 43, 300–321.
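    The binaural virtual acoustics underlying these models amounts to convolving a mono source with the listener-specific head-related impulse responses (HRIRs, the time-domain counterpart of HRTFs) for each ear. The sketch below is illustrative only: the single-impulse "HRIRs" encode just a toy interaural time and level difference, not measured listener data.

    ```python
    import numpy as np

    def render_binaural(mono, hrir_left, hrir_right):
        """Convolve a mono signal with a left/right HRIR pair to produce
        a two-channel binaural signal for headphone playback."""
        left = np.convolve(mono, hrir_left)
        right = np.convolve(mono, hrir_right)
        return np.stack([left, right], axis=-1)

    # Toy HRIRs: the right ear is delayed by 30 samples (~0.6 ms at 48 kHz,
    # an ITD lateralizing the source to the left) and attenuated by 6 dB (ILD).
    fs = 48000
    mono = np.random.randn(fs // 10)        # 100 ms of noise
    hrir_l = np.zeros(64); hrir_l[0] = 1.0  # direct path, unit gain
    hrir_r = np.zeros(64); hrir_r[30] = 0.5

    binaural = render_binaural(mono, hrir_l, hrir_r)
    print(binaural.shape)  # (4863, 2): len(mono) + len(hrir) - 1 samples
    ```

    Template-based externalization models of the kind discussed in the talk compare the spectral and interaural features of such rendered signals against internal templates derived from the listener's own HRTFs.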

    Rendering Binaural Room Impulse Responses from Spherical Microphone Array Recordings Using Timbre Correction

    The technique of rendering binaural room impulse responses from spatial data captured by spherical microphone arrays has recently been proposed and investigated perceptually. The finite spatial resolution imposed by the microphone configuration restricts the available frequency bandwidth and, accordingly, modifies the perceived timbre of the played-back material. This paper presents a feasibility study investigating the use of filters to correct such spectral artifacts. Listening tests are employed to gain a better understanding of how equalization affects externalization, source focus, and timbre. Preliminary results suggest that timbre correction filters improve both timbral and spatial perception.
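    The abstract does not specify how the correction filters are designed. One common way to build such an equalizer, sketched here purely as an assumption, is a regularized inverse of the smoothed magnitude response of the rendered impulse response, so that broad coloration is flattened without excessively boosting narrow spectral notches.

    ```python
    import numpy as np

    def timbre_correction_filter(brir, n_fft=1024, reg=0.1):
        """Illustrative timbre correction: regularized zero-phase inverse
        of the smoothed magnitude response of a rendered BRIR.
        (A sketch only; not the filter design used in the paper.)"""
        mag = np.abs(np.fft.rfft(brir, n_fft))
        # Crude spectral smoothing to target broad coloration only.
        mag_smooth = np.convolve(mag, np.ones(9) / 9, mode="same")
        # Regularized inverse: floor the response before inverting.
        inv = 1.0 / np.maximum(mag_smooth, reg * mag_smooth.max())
        inv /= inv.mean()  # normalize overall gain
        # Real (zero-phase) spectrum -> symmetric FIR correction filter.
        return np.fft.irfft(inv, n_fft)

    # Toy exponentially decaying "BRIR" standing in for measured data.
    rng = np.random.default_rng(0)
    brir = rng.standard_normal(512) * np.exp(-np.arange(512) / 100)
    eq = timbre_correction_filter(brir)
    corrected = np.convolve(brir, eq)
    ```

    The regularization parameter trades off spectral flatness against excessive gain at notch frequencies, which is exactly the timbre-versus-artifact balance the listening tests probe.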

    Explaining Schizophrenia: Auditory Verbal Hallucination and Self‐Monitoring

    Do self‐monitoring accounts, a dominant account of the positive symptoms of schizophrenia, explain auditory verbal hallucination? In this essay, I argue that the account fails to answer crucial questions any explanation of auditory verbal hallucination must address. Where the account provides a plausible answer, I make the case for an alternative explanation: auditory verbal hallucination is not the result of a failed control mechanism, namely failed self‐monitoring, but, rather, of the persistent automaticity of auditory experience of a voice. My argument emphasizes the importance of careful examination of phenomenology as providing substantive constraints on causal models of the positive symptoms in schizophrenia.