3 research outputs found

    Delay-Performance Tradeoffs in Causal Microphone Array Processing

    In real-time listening enhancement applications, such as hearing aid signal processing, sounds must be processed with no more than a few milliseconds of delay to sound natural to the listener. Listening devices can achieve better performance with lower delay by using microphone arrays to filter acoustic signals in both space and time. Here, we analyze the tradeoff between delay and squared-error performance of causal multichannel Wiener filters for microphone array noise reduction. We compute exact expressions for the delay-error curves in two special cases and present experimental results from real-world microphone array recordings. We find that delay-performance characteristics are determined by both the spatial and temporal correlation structures of the signals.
    Comment: To appear at the International Workshop on Acoustic Signal Enhancement (IWAENC 2018)
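The delay-error tradeoff this abstract analyzes can be illustrated with a minimal single-channel sketch: a causal FIR Wiener filter estimating a delayed target sees more "future" samples relative to that target as the permitted delay grows, so its squared error is lower. The signal model, filter length, and delays below are made-up illustration values, not the paper's multichannel setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical signal model (not the paper's data): a short-memory
# "speech-like" correlated signal observed in additive white noise.
N = 10000
s = np.convolve(rng.standard_normal(N), [1.0, 0.7, 0.3], mode="same")
x = s + 0.5 * rng.standard_normal(N)

def wiener_fir_mse(x, s, L, delay):
    """MSE of a length-L causal FIR Wiener filter that estimates the
    delayed target s[t - delay] from x[t], x[t-1], ..., x[t-L+1]."""
    n = len(x)
    # Rows are time steps t = L-1 .. n-1; column k holds x[t - k].
    X = np.stack([x[L - 1 - k : n - k] for k in range(L)], axis=1)
    d = s[L - 1 - delay : n - delay]          # delayed target samples
    w, *_ = np.linalg.lstsq(X, d, rcond=None)  # least-squares Wiener fit
    return np.mean((X @ w - d) ** 2)

# Error as a function of permitted delay: more delay means the causal
# window extends past the target sample, approaching noncausal smoothing.
errors = [wiener_fir_mse(x, s, L=16, delay=d) for d in range(8)]
```

With this model the error at a delay of a few samples is visibly below the zero-delay error, matching the qualitative tradeoff the abstract describes; the exact curve depends on the signal's temporal correlation, as the paper notes.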

    Cooperative Audio Source Separation and Enhancement Using Distributed Microphone Arrays and Wearable Devices

    Augmented listening devices such as hearing aids often perform poorly in noisy and reverberant environments with many competing sound sources. Large distributed microphone arrays can improve performance, but data from remote microphones often cannot be used for delay-constrained real-time processing. We present a cooperative audio source separation and enhancement system that leverages wearable listening devices and other microphone arrays spread around a room. The full distributed array is used to separate sound sources and estimate their statistics. Each listening device uses these statistics to design real-time binaural audio enhancement filters using its own local microphones. The system is demonstrated experimentally using 10 speech sources and 160 microphones in a large, reverberant room.
    Comment: To appear at CAMSAP 201
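The two-stage idea in this abstract can be sketched narrowband: a full distributed array estimates per-source spatial statistics (a delay-tolerant task), and a listening device that owns only a few of the microphones designs its real-time filter from the corresponding sub-blocks of those statistics. The array size, steering vectors, and noise level below are invented for illustration.

```python
import numpy as np

# Hypothetical setup: a 12-mic distributed array; the listening device
# owns only mics 0-3 and must filter in real time with those alone.
M, local = 12, slice(0, 4)
a_target = np.cos(0.5 * np.arange(M))   # made-up target steering vector
a_interf = np.cos(1.3 * np.arange(M))   # made-up interferer steering vector

# Stage 1 (full array, delay-tolerant): per-source spatial covariances.
Phi_t = np.outer(a_target, a_target)
Phi_i = np.outer(a_interf, a_interf)

# Stage 2 (local mics only, real time): a local Wiener filter built from
# the local sub-blocks of the full-array statistics, plus sensor noise.
sigma2 = 0.05
Rx_local = Phi_t[local, local] + Phi_i[local, local] + sigma2 * np.eye(4)
w = np.linalg.solve(Rx_local, Phi_t[local, local][:, 0])  # target at mic 0

gain_t = w @ a_target[local]   # filter response to the target source
gain_i = w @ a_interf[local]   # filter response to the interferer
```

Even with only four local microphones, the filter passes the target nearly unchanged while strongly attenuating the interferer, because the statistics it needs were estimated by the full array rather than locally.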

    Binaural Audio Source Remixing with Microphone Array Listening Devices

    Augmented listening devices, such as hearing aids and augmented reality headsets, enhance human perception by changing the sounds that we hear. Microphone arrays can improve the performance of listening systems in noisy environments, but most array-based listening systems are designed to isolate a single sound source from a mixture. This work considers a source-remixing filter that alters the relative level of each source independently. Remixing rather than separating sounds can help to improve perceptual transparency: it causes less distortion to the signal spectrum and especially to the interaural cues that humans use to localize sounds in space.
    Comment: To appear at ICASSP 202
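The remixing idea can be sketched as a narrowband Wiener filter whose target is not one isolated source but a gain-weighted sum of all sources as heard at a reference microphone. The steering vectors, source powers, and gains below are made-up illustration values, not the paper's design.

```python
import numpy as np

# Hypothetical narrowband model: M mics, K sources, made-up steering
# vectors as columns of A, with unit source powers and white sensor noise.
M, K = 4, 3
A = np.cos(np.outer(np.arange(M), [0.4, 1.1, 2.0]))  # column k = steering of source k
p = np.ones(K)                 # source powers
g = np.array([1.0, 0.5, 0.1])  # desired per-source output gains (remix levels)
sigma2, ref = 0.01, 0          # sensor noise power, reference mic index

# Mixture covariance and cross-correlation with the remixed target
# d = sum_k g_k * (source k as received at the reference microphone).
Rx = A @ np.diag(p) @ A.T + sigma2 * np.eye(M)
r_xd = A @ (g * p * A[ref, :])
w = np.linalg.solve(Rx, r_xd)  # Wiener solution for the remixing filter

# Effective gain the filter applies to each source, relative to mic `ref`:
eff = (w @ A) / A[ref, :]
```

With more microphones than sources and modest noise, `eff` lands close to the requested gains `g`: each source is scaled independently rather than removed, which is the property the abstract credits for better spectral and interaural-cue preservation.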