289 research outputs found

    Hearing in three dimensions: Sound localization

    The ability to localize a source of sound in space is a fundamental component of the three-dimensional character of auditory experience. For over a century scientists have been trying to understand the physical and psychological processes and physiological mechanisms that subserve sound localization. This research has shown that important information about sound source position is provided by interaural differences in time of arrival, interaural differences in intensity, and direction-dependent filtering provided by the pinnae. Progress has been slow, primarily because experiments on localization are technically demanding: control of stimulus parameters and quantification of the subjective experience are quite difficult problems. Recent advances, such as the ability to simulate a three-dimensional sound field over headphones, seem to offer potential for rapid progress. Research using the new techniques has already produced new information. It now seems that interaural time differences are a much more salient and dominant localization cue than previously believed.
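    The interaural-time-difference cue mentioned in this abstract can be illustrated with a short sketch: estimating the ITD between two ear signals from the peak of their cross-correlation. The signals, sample rate, and delay below are invented for illustration and are not the stimuli used in the research.

    ```python
    import numpy as np

    FS = 44_100  # sample rate in Hz (an assumed value for this sketch)

    def itd_by_cross_correlation(left, right, fs=FS):
        """Estimate the interaural time difference (seconds) between two
        ear signals by locating the peak of their cross-correlation.
        Positive values mean the left-ear signal lags the right."""
        corr = np.correlate(left, right, mode="full")
        lag = np.argmax(corr) - (len(right) - 1)  # peak lag in samples
        return lag / fs

    # Simulate a click that reaches the right ear 0.5 ms before the left,
    # as it would for a source off to the listener's right.
    delay_samples = int(0.0005 * FS)   # ~22 samples at 44.1 kHz
    click = np.zeros(1024)
    click[100] = 1.0
    right = click
    left = np.roll(click, delay_samples)  # delayed copy at the far ear

    print(f"Estimated ITD: {itd_by_cross_correlation(left, right) * 1e3:.2f} ms")
    ```

    Real listeners exploit ITDs on the order of tens to hundreds of microseconds, which is why the sub-millisecond resolution of a cross-correlation estimate matters.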

    Auditory Spatial Layout

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher-level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the chapter takes up the definition of an auditory object, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

    Structure-Function Study of Mammalian Munc18-1 and C. elegans UNC-18 Implicates Domain 3b in the Regulation of Exocytosis

    Munc18-1 is an essential synaptic protein functioning during multiple stages of the exocytotic process, including vesicle recruitment, docking and fusion. These functions require a number of distinct syntaxin-dependent interactions; however, Munc18-1 also regulates vesicle fusion via syntaxin-independent interactions with other exocytotic proteins. Although the structural regions of the Munc18-1 protein involved in closed-conformation syntaxin binding have been thoroughly examined, regions of the protein involved in other interactions are poorly characterised. To investigate this we performed a random transposon mutagenesis, identifying domain 3b of Munc18-1 as a functionally important region of the protein. Transposon insertion in an exposed loop within this domain specifically disrupted Mint1 binding despite leaving affinity for closed-conformation syntaxin and binding to the SNARE complex unaffected. The insertion mutation significantly reduced total amounts of exocytosis as measured by carbon fiber amperometry in chromaffin cells. Introduction of the equivalent mutation in UNC-18 in Caenorhabditis elegans also reduced neurotransmitter release as assessed by aldicarb sensitivity. Correlation between the two experimental methods for recording changes in the number of exocytotic events was verified using a previously identified gain-of-function Munc18-1 mutation, E466K (increased exocytosis in chromaffin cells and aldicarb hypersensitivity of C. elegans). These data implicate a novel role for an exposed loop in domain 3b of Munc18-1 in transducing regulation of vesicle fusion independent of closed-conformation syntaxin binding.

    The importance of head movements for localizing virtual auditory display objects

    Presented at the 2nd International Conference on Auditory Display (ICAD), Santa Fe, New Mexico, November 7-9, 1994. In most of our research we produce virtual sound sources by filtering stimuli with head-related transfer functions (HRTFs) measured from discrete source positions and present the stimuli to listeners via headphones. With this synthesis procedure head movements create no change in the acoustical stimulus at the two ears, in contrast with what happens in natural listening conditions. To compare the localizability of virtual and real sources under these conditions, we require that listeners not move their heads, even when localizing real sources. Some listeners make large numbers of localization errors known as "front-back confusions" (a report of an apparent position in the front hemifield given a rear-hemifield stimulus, and vice versa). Head movements can, in theory, provide the cues needed to resolve front-back ambiguities. The experiment described here seeks to clarify the issue by measuring both the nature and consequences of head movements during a sound localization task.
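    The synthesis procedure described in this abstract (filtering a stimulus with position-specific HRTFs) amounts to a pair of convolutions, one per ear. The impulse responses below are crude placeholders, not measured head-related impulse responses:

    ```python
    import numpy as np

    def binaural_render(mono, hrir_left, hrir_right):
        """Render a mono source for headphone presentation by convolving it
        with left- and right-ear head-related impulse responses (HRIRs)."""
        return (np.convolve(mono, hrir_left),
                np.convolve(mono, hrir_right))

    # Toy HRIR pair: the right ear receives the sound earlier and louder,
    # crudely mimicking a source off to the listener's right.
    hrir_r = np.zeros(64); hrir_r[0] = 1.0    # direct, unattenuated
    hrir_l = np.zeros(64); hrir_l[10] = 0.6   # delayed and attenuated

    rng = np.random.default_rng(0)
    source = rng.standard_normal(4096)        # broadband noise burst
    left, right = binaural_render(source, hrir_l, hrir_r)
    ```

    Because the filter pair is fixed for a single measured position, turning the head changes nothing in the rendered signals; this is exactly the mismatch with natural listening conditions that the experiment examines.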

    Sound localization in varying virtual acoustic environments

    Presented at the 2nd International Conference on Auditory Display (ICAD), Santa Fe, New Mexico, November 7-9, 1994. Localization performance was examined in three types of headphone-presented virtual acoustic environments: an anechoic virtual environment, an echoic virtual environment, and an echoic virtual environment in which the directional information conveyed by the reflections was randomized. Virtual acoustic environments were generated using individualized head-related transfer functions and a three-dimensional image model of rectangular-room acoustics: a medium-sized rectangular room (8 m x 8 m x 3 m) with moderately reflective boundaries (absorption coefficient
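    The image model referred to in this abstract replaces each wall reflection with a mirror-image copy of the source. A minimal first-order sketch for a rectangular room with one corner at the origin follows; the room dimensions match the abstract, but the source and listener positions are invented for illustration:

    ```python
    import math

    SPEED_OF_SOUND = 343.0  # m/s

    def first_order_images(src, room):
        """First-order image sources for a rectangular room with one corner
        at the origin: reflect the source across each of the six walls."""
        images = []
        for axis in range(3):
            lo = list(src); lo[axis] = -src[axis]                  # wall at 0
            hi = list(src); hi[axis] = 2 * room[axis] - src[axis]  # far wall
            images += [tuple(lo), tuple(hi)]
        return images

    def delay_ms(a, b):
        """Propagation delay in milliseconds between two points."""
        return 1e3 * math.dist(a, b) / SPEED_OF_SOUND

    room = (8.0, 8.0, 3.0)   # the 8 m x 8 m x 3 m room from the abstract
    src, listener = (2.0, 3.0, 1.5), (5.0, 4.0, 1.5)

    for img in first_order_images(src, room):
        print(f"image at {img}: reflection arrives {delay_ms(img, listener):.2f} ms after emission")
    ```

    Higher-order reflections are generated by reflecting the images themselves, and each image's amplitude is scaled by the boundary absorption; randomizing the image directions while keeping the delays is one way to realize the third (randomized-reflection) condition.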

    Individual differences and age effects in a dichotic informational masking paradigm

    Sixty normally hearing listeners, ages 5 to 61 years, participated in a monaural speech understanding task designed to assess the impact of a single-talker speech masker presented to the opposite ear. The speech targets were masked by ipsilateral speech-spectrum noise. Masker level was fixed and target level was varied to estimate psychometric functions. The target/masker ratio that led to 51% correct performance in this task was taken as the baseline threshold. The impact of a modulated speech-spectrum noise, a male talker, or a female talker presented at a fixed level to the contralateral ear was quantified by the change in the baseline threshold and was assumed to reflect informational masking. The modulated-noise masker produced no informational masking across the entire age range. Speech maskers produced as much as 20 dB of informational masking for children aged 5–8 years and only 4 dB for adults. In contrast with previous studies using ipsilateral speech maskers, the male and female contralateral speech maskers produced comparable informational masking. Analyses of the developmental rate of change for informational masking and of the patterns of individual differences suggest that the informational masking produced by contralateral and ipsilateral maskers may be mediated by different mechanisms or processes.
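    The threshold procedure described in this abstract (fixed masker level, varied target level, psychometric-function fit, threshold taken at 51% correct) can be sketched as a maximum-likelihood logistic fit. The trial counts, guess and lapse rates, and logistic form below are illustrative assumptions, not the study's actual method or data:

    ```python
    import numpy as np

    def logistic(x, midpoint, slope, guess=0.02, lapse=0.02):
        """Psychometric function: proportion correct vs. target/masker ratio (dB),
        with assumed guess and lapse rates."""
        core = 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))
        return guess + (1.0 - guess - lapse) * core

    def fit_threshold(levels, n_correct, n_trials, target_p=0.51):
        """Grid-search maximum-likelihood fit of the logistic, then invert the
        fitted function to get the level yielding target_p correct."""
        x = np.asarray(levels, float)
        k = np.asarray(n_correct, float)
        best, best_ll = (0.0, 1.0), -np.inf
        for m in np.linspace(x.min() - 5, x.max() + 5, 201):
            for s in np.linspace(0.05, 2.0, 100):
                p = np.clip(logistic(x, m, s), 1e-6, 1 - 1e-6)
                ll = np.sum(k * np.log(p) + (n_trials - k) * np.log(1 - p))
                if ll > best_ll:
                    best_ll, best = ll, (m, s)
        m, s = best
        guess, lapse = 0.02, 0.02
        core = (target_p - guess) / (1.0 - guess - lapse)
        return m + np.log(core / (1.0 - core)) / s  # invert the logistic

    # Hypothetical data: 20 trials per target/masker ratio (dB).
    levels = [-20, -15, -10, -5, 0, 5]
    correct = [1, 3, 8, 14, 18, 20]
    print(f"51%-correct threshold: {fit_threshold(levels, correct, 20):.1f} dB T/M")
    ```

    Informational masking would then be quantified as the shift in this fitted threshold when the contralateral masker is added, relative to the baseline condition.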