
    Hearing in three dimensions: Sound localization

    The ability to localize a source of sound in space is a fundamental component of the three-dimensional character of auditory experience. For over a century scientists have been trying to understand the physical and psychological processes and physiological mechanisms that subserve sound localization. This research has shown that important information about sound source position is provided by interaural differences in time of arrival, interaural differences in intensity, and direction-dependent filtering provided by the pinnae. Progress has been slow, primarily because experiments on localization are technically demanding. Control of stimulus parameters and quantification of the subjective experience are quite difficult problems. Recent advances, such as the ability to simulate a three-dimensional sound field over headphones, seem to offer potential for rapid progress. Research using the new techniques has already produced new information. It now seems that interaural time differences are a much more salient and dominant localization cue than previously believed.
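The interaural-time-difference cue mentioned above can be illustrated with a minimal sketch. This is not a method from the abstract; it assumes the classic Woodworth spherical-head approximation and a head radius of 8.75 cm (both common textbook assumptions, not values from this work):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C
HEAD_RADIUS = 0.0875    # m; an often-assumed average head radius

def woodworth_itd(azimuth_deg: float) -> float:
    """Approximate interaural time difference (seconds) for a far-field
    source, using Woodworth's formula ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source at 90 degrees azimuth (opposite one ear) gives the maximal ITD,
# on the order of a few hundred microseconds:
print(round(woodworth_itd(90.0) * 1e6))  # prints 656
```

The sub-millisecond scale of this quantity is one reason the experiments are described as technically demanding: stimulus timing must be controlled to a few microseconds.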

    Psychophysical Evaluation of Three-Dimensional Auditory Displays

    This report describes the progress made during the second year of a three-year Cooperative Research Agreement. The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources and 'echoic' sources. The results of our research on one of these topics, the localization of multiple sources, were reported in the most recent Semi-Annual Progress Report (Appendix A). That same progress report described work on two related topics, the influence of a listener's a-priori knowledge of source characteristics and the discriminability of real and virtual sources. In the period since the last Progress Report we have conducted several new studies to evaluate the effectiveness of a new and simpler method for measuring the HRTFs that are used to synthesize virtual sources, and have expanded our studies of multiple sources. The results of this research are described below.

    Psychophysical Evaluation of Three-Dimensional Auditory Displays

    This report describes the progress made during the first year of a three-year Cooperative Research Agreement (CRA NCC2-542). The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources and 'echoic' sources. The results of our research on two of these topics, the role of head movements and the role of echoes and reflections, were reported in the most recent Semi-Annual Progress Report (Appendix A). In the period since the last Progress Report we have been studying a third topic, the localizability of moving sources. The results of this research are described. The fidelity of a virtual auditory display is critically dependent on precise measurement of the listener's Head-Related Transfer Functions (HRTFs), which are used to produce the virtual auditory images. We continue to explore methods for improving our HRTF measurement technique. During this reporting period we compared HRTFs measured using our standard open-canal probe tube technique and HRTFs measured with the closed-canal insert microphones from the Crystal River Engineering Snapshot system.

    Auditory Spatial Layout

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

    Structure-Function Study of Mammalian Munc18-1 and C. elegans UNC-18 Implicates Domain 3b in the Regulation of Exocytosis

    Munc18-1 is an essential synaptic protein functioning during multiple stages of the exocytotic process including vesicle recruitment, docking and fusion. These functions require a number of distinct syntaxin-dependent interactions; however, Munc18-1 also regulates vesicle fusion via syntaxin-independent interactions with other exocytotic proteins. Although the structural regions of the Munc18-1 protein involved in closed-conformation syntaxin binding have been thoroughly examined, regions of the protein involved in other interactions are poorly characterised. To investigate this we performed a random transposon mutagenesis, identifying domain 3b of Munc18-1 as a functionally important region of the protein. Transposon insertion in an exposed loop within this domain specifically disrupted Mint1 binding despite leaving affinity for closed-conformation syntaxin and binding to the SNARE complex unaffected. The insertion mutation significantly reduced total amounts of exocytosis as measured by carbon fiber amperometry in chromaffin cells. Introduction of the equivalent mutation in UNC-18 in Caenorhabditis elegans also reduced neurotransmitter release as assessed by aldicarb sensitivity. Correlation between the two experimental methods for recording changes in the number of exocytotic events was verified using a previously identified gain-of-function Munc18-1 mutation, E466K (increased exocytosis in chromaffin cells and aldicarb hypersensitivity of C. elegans). These data implicate a novel role for an exposed loop in domain 3b of Munc18-1 in transducing regulation of vesicle fusion independent of closed-conformation syntaxin binding.

    The importance of head movements for localizing virtual auditory display objects

    Presented at 2nd International Conference on Auditory Display (ICAD), Santa Fe, New Mexico, November 7-9, 1994. In most of our research we produce virtual sound sources by filtering stimuli with head-related transfer functions (HRTFs) measured from discrete source positions and present the stimuli to listeners via headphones. With this synthesis procedure head movements create no change in the acoustical stimulus at the two ears, in contrast with what happens in natural listening conditions. To compare the localizability of virtual and real sources under these conditions, we require that listeners not move their heads, even when localizing real sources. Some listeners make large numbers of localization errors known as "front-back confusions" (a report of an apparent position in the front hemifield given a rear hemifield stimulus, and vice versa). Head movements can, in theory, provide the cues needed to resolve front-back ambiguities. The experiment described here seeks to clarify the issue by measuring both the nature and consequences of head movements during a sound localization task.
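The synthesis procedure described above, filtering a mono stimulus with a measured HRTF pair, reduces to a convolution per ear. A minimal sketch (not the authors' implementation; the HRIR values below are made up for illustration) shows why the rendered stimulus is static under head movement: the filters are fixed at synthesis time, so turning the head changes nothing at the eardrums.

```python
def convolve(signal, impulse_response):
    """Direct-form FIR convolution in pure Python, for illustration only."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

def synthesize_virtual_source(mono, hrir_left, hrir_right):
    """Render a mono signal at the spatial position encoded by a
    head-related impulse response (HRIR) pair, for headphone playback."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Toy HRIR pair (hypothetical values): the right ear receives a delayed,
# attenuated copy, mimicking a source on the listener's left.
left, right = synthesize_virtual_source([1.0, 0.5], [1.0], [0.0, 0.0, 0.6])
```

Because `hrir_left` and `hrir_right` are chosen once per source position, a dynamic display would have to re-select or interpolate filters as the head turns, which is exactly the cue these experiments withhold.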

    Sound localization in varying virtual acoustic environments

    Presented at 2nd International Conference on Auditory Display (ICAD), Santa Fe, New Mexico, November 7-9, 1994. Localization performance was examined in three types of headphone-presented virtual acoustic environments: an anechoic virtual environment, an echoic virtual environment, and an echoic virtual environment for which the directional information conveyed by the reflections was randomized. Virtual acoustic environments were generated utilizing individualized head-related transfer functions and a three-dimensional image model of rectangular room acoustics: a medium-sized rectangular room (8m x 8m x 3m) with moderately reflective boundaries (absorption coefficient
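The image model referred to above treats each reflection as sound from a mirror-image copy of the source reflected across a room boundary. A minimal first-order sketch for the shoebox geometry (the source and listener positions here are hypothetical, not taken from the study):

```python
import math

def first_order_images(source, room):
    """First-order image sources for a shoebox room with walls at
    x=0, x=Lx, y=0, y=Ly, z=0, z=Lz: reflect the source across each
    of the six boundaries."""
    images = []
    for axis, length in enumerate(room):
        for wall in (0.0, length):
            img = list(source)
            img[axis] = 2.0 * wall - img[axis]
            images.append(tuple(img))
    return images

def reflection_delays(source, listener, room, c=343.0):
    """Sorted arrival delays (seconds) of the six first-order reflections."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sorted(dist(img, listener) / c for img in first_order_images(source, room))

# The 8m x 8m x 3m room described above, with hypothetical positions:
delays = reflection_delays((2.0, 4.0, 1.5), (6.0, 4.0, 1.5), (8.0, 8.0, 3.0))
```

Randomizing the directional information of the reflections, as in the third condition, would correspond to keeping these delays and levels but scrambling the directions from which the image sources are rendered.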