
    Sound Source Separation

    This is the author's accepted pre-print of the article, first published as G. Evangelista, S. Marchand, M. D. Plumbley and E. Vincent. Sound source separation. In U. Zölzer (ed.), DAFX: Digital Audio Effects, 2nd edition, Chapter 14, pp. 551-588. John Wiley & Sons, March 2011. ISBN 9781119991298. DOI: 10.1002/9781119991298.ch14

    Perceptual and electrophysiological masking of the auditory brainstem response : a thesis presented in partial fulfilment of the requirements for the degree of Master of Arts in Psychology at Massey University

    Effective masking levels of the auditory brainstem response (ABR) to tonepips were established in 10 normal-hearing subjects at 500, 1000, 2000 and 4000 Hz, using white noise. Effective masking levels of perceptual responses to the same stimuli were also established, for both single (1/second) and repeated (41.7/second) tonepip presentations. Perceptual masking levels for repeated tonepips were significantly higher than those for single tonepips, indicating temporal summation effects. Levels which effectively masked the ABR did not differ significantly from perceptual masking levels at either presentation rate. A signal-to-noise ratio of -5 to -10 dB was found to provide effective masking for all conditions. For the stimulus and recording parameters used in the present study, a behavioural method of determining effective masking levels is considered appropriate. Behavioural thresholds determined for single tonepips were higher than thresholds for repeated tonepips, demonstrating the dependence of nHL behavioural references for ABR thresholds on stimulus repetition rate. The effective masking levels determined in the present study may be applied to the use of tonepip ABRs as an objective, frequency-specific measure of hearing in infants.
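
    As a rough illustration of the signal-to-noise relation reported above: if effective masking occurs at a signal-to-noise ratio of -5 to -10 dB, the white-noise masker must sit 5-10 dB above the tonepip level. The short Python sketch below applies that relation to an assumed 60 dB tonepip; the level and the function name are illustrative, not values taken from the thesis:

```python
def effective_masker_level(tonepip_level_db, snr_db):
    """Noise level (dB) needed to just mask a tonepip presented at
    `tonepip_level_db`, given the signal-to-noise ratio (signal minus
    noise, in dB) at which masking becomes effective."""
    return tonepip_level_db - snr_db

# Effective masking was reported at signal-to-noise ratios of -5 to -10 dB,
# i.e. the masker must exceed the tonepip by 5-10 dB.
for snr_db in (-5, -10):
    print(f"60 dB tonepip, SNR {snr_db} dB -> masker at "
          f"{effective_masker_level(60, snr_db)} dB")
```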

    Decoding neural responses to temporal cues for sound localization

    The activity of sensory neural populations carries information about the environment, which may be extracted from neural activity using different strategies. In the auditory brainstem, a recent theory proposes that sound location in the horizontal plane is decoded from the relative summed activity of two populations in each hemisphere, whereas earlier theories hypothesized that the location was decoded from the identity of the most active cells. We tested the performance of various decoders of neural responses in increasingly complex acoustical situations, including spectrum variations, noise, and sound diffraction. We demonstrate that there is insufficient information in the pooled activity of each hemisphere to estimate sound direction in a way that is reliable and consistent with behavior, whereas robust estimates can be obtained from neural activity by taking into account the heterogeneous tuning of cells. These estimates can still be obtained when only contralateral neural responses are used, consistent with unilateral lesion studies. DOI: http://dx.doi.org/10.7554/eLife.01312.001
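
    The two decoding strategies contrasted above can be sketched in a few lines of Python: a pooled ("hemispheric sum") decoder reduces each trial to two summed rates, whereas a pattern decoder matches the full response vector against templates built from the cells' heterogeneous tuning curves. Everything below (Gaussian tuning, Poisson noise, the template-matching rule and all parameters) is an illustrative assumption, not the decoders or data of the study:

```python
import numpy as np

rng = np.random.default_rng(0)
dirs = np.linspace(-90, 90, 181)          # candidate azimuths (degrees)

# Heterogeneous tuning: each cell has its own preferred direction and width.
n_cells = 40
pref = rng.uniform(-90, 90, n_cells)
width = rng.uniform(20, 60, n_cells)

def rates(azimuth):
    """Mean firing rates of all cells for a sound at `azimuth` (Gaussian tuning)."""
    return 50 * np.exp(-0.5 * ((azimuth - pref) / width) ** 2)

true_az = 30.0
resp = rng.poisson(rates(true_az))        # noisy single-trial response

# 1) Pooled decoder: keep only the left/right summed rates and their difference.
left, right = resp[pref < 0].sum(), resp[pref >= 0].sum()
diff_template = np.array([rates(a)[pref >= 0].sum() - rates(a)[pref < 0].sum()
                          for a in dirs])
pooled_est = dirs[np.argmin(np.abs(diff_template - (right - left)))]

# 2) Pattern decoder: compare the full response vector with each direction's template.
templates = np.array([rates(a) for a in dirs])
pattern_est = dirs[np.argmin(np.sum((templates - resp) ** 2, axis=1))]

print(f"true {true_az:.0f} deg, pooled estimate {pooled_est:.0f} deg, "
      f"pattern estimate {pattern_est:.0f} deg")
```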

    Acoustical Ranging Techniques in Embedded Wireless Sensor Networked Devices

    Location sensing provides endless opportunities for a wide range of applications in GPS-obstructed environments, where there is typically a need for a higher degree of accuracy. In this article, we focus on robust range estimation, an important prerequisite for fine-grained localization. Motivated by the promise of acoustics for delivering high ranging accuracy, we present the design, implementation and evaluation of acoustic (both ultrasound and audible) ranging systems. We distill the limitations of acoustic ranging and present efficient signal designs and detection algorithms to overcome the challenges of coverage, range, accuracy/resolution, tolerance to the Doppler effect, and audible intensity. We evaluate our proposed techniques experimentally on TWEET, a low-power platform purpose-built for acoustic ranging applications. Our experiments demonstrate an operational range of 20 m (outdoors) and an average accuracy of 2 cm in the ultrasound domain. Finally, we present the design of an audible-range acoustic tracking service that combines the benefits of a near-inaudible acoustic broadband chirp with an approximately twofold increase in Doppler tolerance to achieve better performance.
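
    The detection step in chirp-based acoustic ranging typically amounts to a matched filter: cross-correlate the received audio with the known probe chirp, take the lag of the correlation peak as the time of flight, and convert to distance using the speed of sound. The sketch below illustrates that pipeline on a simulated echo; the 4-8 kHz chirp, 48 kHz sample rate and noise level are assumptions made for the example, not the TWEET signal design:

```python
import numpy as np
from scipy.signal import chirp, correlate

FS = 48_000            # sample rate (Hz), illustrative
C = 343.0              # speed of sound (m/s) at room temperature

# Reference broadband chirp (the probe signal the transmitter emits).
t = np.arange(0, 0.02, 1 / FS)
probe = chirp(t, f0=4_000, f1=8_000, t1=t[-1], method="linear")

# Simulated received signal: the chirp delayed by a 5 m flight plus noise.
delay_samples = int(round(5.0 / C * FS))
rx = np.zeros(delay_samples + len(probe) + 2_000)
rx[delay_samples:delay_samples + len(probe)] += probe
rx += 0.2 * np.random.default_rng(0).standard_normal(len(rx))

# Matched filter: the lag of the correlation peak gives the time of flight.
corr = correlate(rx, probe, mode="valid")
tof = np.argmax(np.abs(corr)) / FS
print(f"estimated range: {tof * C:.2f} m")
```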

    The Auditory Nerve Overlapped Waveform (ANOW) detects small endolymphatic manipulations that may go undetected by conventional measurements

    Electrocochleography (ECochG) has been used to assess Ménière's disease, a pathology associated with endolymphatic hydrops and low-frequency sensorineural hearing loss. However, current ECochG techniques are limited to high frequencies (≥1 kHz) and cannot be used to assess and understand the low-frequency sensorineural hearing loss in ears with Ménière's disease. In the current study, we use a relatively new ECochG technique, the Auditory Nerve Overlapped Waveform (ANOW), to make measurements that originate from afferent auditory nerve fibers in the apical half of the cochlear spiral and to assess the effects of endolymphatic hydrops in guinea pig ears. Hydrops was induced with artificial endolymph injections, iontophoretic application of Ca2+ to endolymph, and exposure to 200 Hz tones. The manipulations used in this study were far smaller than those used in previous investigations of hydrops. In response to all hydropic manipulations, ANOW amplitude to moderate-level stimuli was markedly reduced, but conventional ECochG measurements of compound action potential thresholds were unaffected (i.e., less than a 2 dB threshold shift). Given the origin of the ANOW, changes in ANOW amplitude likely reflect acute volume disturbances that accumulate in the distensible cochlear apex. These results suggest that the ANOW could advance our ability to identify initial stages of dysfunction in ears with Ménière's disease before the pathology progresses to an extent that can be detected with conventional measures.
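
    The contrast drawn here, a response amplitude that drops sharply at moderate stimulus levels while the threshold barely moves, can be made concrete with a small input-output-function analysis. The sketch below uses invented amplitude values and a simple fixed-amplitude threshold criterion; none of it is data from the study:

```python
import numpy as np

def threshold_from_io(levels_db, amplitudes_uv, criterion_uv=0.5):
    """Interpolate the stimulus level at which the response first reaches a
    fixed amplitude criterion -- one simple way to read a threshold off an
    input-output function."""
    return float(np.interp(criterion_uv, amplitudes_uv, levels_db))

levels = np.array([20, 30, 40, 50, 60, 70, 80])          # dB SPL, illustrative
pre    = np.array([0.2, 0.6, 1.1, 1.7, 2.4, 3.1, 3.8])   # baseline amplitudes (uV)
post   = np.array([0.2, 0.55, 0.8, 1.0, 1.2, 1.4, 1.6])  # after a hydropic manipulation

shift = threshold_from_io(levels, post) - threshold_from_io(levels, pre)
at_60 = levels == 60
print(f"threshold shift: {shift:.1f} dB")                 # small (< 2 dB)
print(f"amplitude at 60 dB: {pre[at_60][0]:.1f} -> {post[at_60][0]:.1f} uV")  # halved
```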

    A convolutional neural-network model of human cochlear mechanics and filter tuning for real-time applications

    Auditory models are commonly used as feature extractors for automatic speech-recognition systems or as front-ends for robotics, machine-hearing and hearing-aid applications. Although auditory models can capture the biophysical and nonlinear properties of human hearing in great detail, such biophysical models are computationally expensive and cannot be used in real-time applications. We present a hybrid approach in which convolutional neural networks are combined with computational neuroscience to yield a real-time, end-to-end model of human cochlear mechanics, including level-dependent filter tuning (CoNNear). The CoNNear model was trained on acoustic speech material, and its performance and applicability were evaluated using (unseen) sound stimuli commonly employed in cochlear mechanics research. The CoNNear model accurately simulates human cochlear frequency selectivity and its dependence on sound intensity, an essential quality for robust speech intelligibility at negative speech-to-background-noise ratios. The CoNNear architecture is based on parallel and differentiable computations and has the power to achieve human-level performance in real time. These unique CoNNear features will enable the next generation of human-like machine-hearing applications.
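
    As an illustration of the kind of architecture described, a fully convolutional, differentiable mapping from an audio waveform to a bank of cochlear channel outputs, here is a minimal 1-D convolutional encoder-decoder in PyTorch. The layer sizes, channel count, activations and the choice of PyTorch are assumptions made for this sketch; they are not the published CoNNear topology:

```python
import torch
import torch.nn as nn

class CochlearCNN(nn.Module):
    """Illustrative encoder-decoder: waveform in, multi-channel cochlear-like
    output (e.g. one time series per frequency channel) out. Strided
    convolutions downsample; transposed convolutions restore the original
    time resolution, so the whole model stays differentiable end to end."""
    def __init__(self, n_channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=16, stride=2, padding=7), nn.Tanh(),
            nn.Conv1d(32, 64, kernel_size=16, stride=2, padding=7), nn.Tanh(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(64, 64, kernel_size=16, stride=2, padding=7), nn.Tanh(),
            nn.ConvTranspose1d(64, n_channels, kernel_size=16, stride=2, padding=7), nn.Tanh(),
        )

    def forward(self, waveform):                 # (batch, 1, time)
        return self.decoder(self.encoder(waveform))

model = CochlearCNN()
audio = torch.randn(1, 1, 2048)                  # one 2048-sample audio frame
print(model(audio).shape)                        # -> torch.Size([1, 64, 2048])
```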