
    Structured Sparsity Models for Multiparty Speech Recovery from Reverberant Recordings

    We tackle the multiparty speech recovery problem by modeling the acoustics of reverberant chambers. Our approach exploits structured sparsity models to perform room modeling and speech recovery. We propose a scheme for characterizing the room acoustics from the unknown competing speech sources, relying on localization of the early images of the speakers by sparse approximation of the spatial spectra of the virtual sources in a free-space model. The images are then clustered by exploiting the low-rank structure of the spectro-temporal components belonging to each source. This enables us to identify the early support of the room impulse response function and its unique map to the room geometry. To further resolve the ambiguity of the reflection ratios, we propose a novel formulation of the reverberation model and estimate the absorption coefficients through a convex optimization exploiting a joint sparsity model formulated upon the spatio-spectral sparsity of the concurrent speech representation. The acoustic parameters are then incorporated for separating the individual speech signals through either structured sparse recovery or inverse filtering of the acoustic channels. Experiments conducted on real data recordings demonstrate the effectiveness of the proposed approach for multiparty speech recovery and recognition.
    Comment: 31 pages
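
    The early-image localization step can be illustrated compactly. The following is a minimal, hypothetical Python sketch of sparse approximation of the spatial spectrum over a free-space grid, using plain orthogonal matching pursuit in place of the authors' solver; all names, shapes, and the greedy recovery choice are assumptions, not the paper's implementation.

    import numpy as np

    def steering_matrix(mic_pos, grid, freq, c=343.0):
        # Free-space (direct-path) steering vectors for candidate image positions.
        # mic_pos: (M, 3) microphone coordinates; grid: (G, 3) candidate positions.
        d = np.linalg.norm(grid[None, :, :] - mic_pos[:, None, :], axis=-1)  # (M, G)
        return np.exp(-2j * np.pi * freq * d / c) / d  # spherical spreading

    def locate_images(X, A, n_images):
        # Greedy sparse approximation (orthogonal matching pursuit) of the
        # narrow-band snapshots X (M, T) in the steering dictionary A (M, G).
        residual, support = X.copy(), []
        for _ in range(n_images):
            corr = np.abs(A.conj().T @ residual).sum(axis=1)  # spatial spectrum
            corr[support] = 0.0                               # forbid repeats
            support.append(int(np.argmax(corr)))
            coef, *_ = np.linalg.lstsq(A[:, support], X, rcond=None)
            residual = X - A[:, support] @ coef
        return support  # grid indices of the estimated early images

    The recovered grid positions would then feed the clustering and geometry-mapping steps that the abstract describes, which this sketch omits.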

    Speech Separation Using Partially Asynchronous Microphone Arrays Without Resampling

    We consider the problem of separating speech sources captured by multiple spatially separated devices, each of which has multiple microphones and samples its signals at a slightly different rate. Most asynchronous array processing methods rely on sample rate offset estimation and resampling, but these offsets can be difficult to estimate if the sources or microphones are moving. We propose a source separation method that requires neither offset estimation nor signal resampling. Instead, we divide the distributed array into several synchronous subarrays. All subarrays are used jointly to estimate the time-varying signal statistics, and those statistics are used to design a separate time-varying spatial filter for each subarray. We demonstrate the method on speech mixtures recorded with both stationary and moving microphone arrays.
    Comment: To appear at the International Workshop on Acoustic Signal Enhancement (IWAENC 2018).
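
    To make the "joint statistics, local filters" idea concrete, here is a hypothetical numpy sketch: each synchronous subarray tracks its own time-varying spatial covariance and designs its own multichannel Wiener filter, and only source-activity information (not samples) would be shared across subarrays, so no resampling is needed. The forgetting factor, the Wiener formulation, and all names are assumptions rather than the paper's exact method.

    import numpy as np

    def track_covariance(snapshots, alpha=0.95):
        # Recursive estimate of the time-varying spatial covariance of one
        # synchronous subarray; snapshots: sequence of (M,) STFT vectors
        # for a fixed frequency bin.
        R = 1e-6 * np.eye(len(snapshots[0]), dtype=complex)
        history = []
        for x in snapshots:
            R = alpha * R + (1 - alpha) * np.outer(x, x.conj())
            history.append(R.copy())
        return history

    def mwf_weights(R_mix, R_noise, ref=0):
        # Multichannel Wiener filter for a reference microphone of one
        # subarray; R_mix and R_noise come from frames labeled by a shared
        # (network-wide) estimate of when each source is active.
        R_target = R_mix - R_noise
        return np.linalg.solve(R_mix, R_target[:, ref])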

    Towards End-to-End Acoustic Localization using Deep Learning: from Audio Signal to Source Position Coordinates

    This paper presents a novel approach for indoor acoustic source localization using microphone arrays, based on a convolutional neural network (CNN). The proposed solution is, to the best of our knowledge, the first published work in which a CNN is designed to directly estimate the three-dimensional position of an acoustic source from the raw audio signal, avoiding hand-crafted audio features. Given the limited amount of available localization data, we propose a two-step training strategy. We first train the network on semi-synthetic data generated from close-talk speech recordings, simulating the time delays and distortion suffered by the signal as it propagates from the source to the microphone array. We then fine-tune this network on a small amount of real data. Our experimental results show that this strategy produces networks that significantly improve on existing localization methods based on SRP-PHAT strategies. In addition, our experiments show that the CNN method is more robust to varying speaker gender and to different window sizes than the other methods.
    Comment: 18 pages, 3 figures, 8 tables
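
    A minimal sketch of such an end-to-end regressor is given below, assuming PyTorch; the channel widths, kernel sizes, and microphone count are illustrative guesses, not the architecture reported in the paper.

    import torch
    import torch.nn as nn

    class RawAudioLocNet(nn.Module):
        # Maps raw multichannel audio directly to (x, y, z) coordinates.
        def __init__(self, n_mics=12):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_mics, 96, kernel_size=7, stride=2), nn.ReLU(),
                nn.Conv1d(96, 96, kernel_size=7, stride=2), nn.ReLU(),
                nn.Conv1d(96, 128, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.head = nn.Linear(128, 3)  # source position in metres

        def forward(self, x):  # x: (batch, n_mics, n_samples)
            return self.head(self.features(x).squeeze(-1))

    # Two-step training as described in the abstract: pretrain with an MSE
    # loss on semi-synthetic data (simulated delays and distortion), then
    # fine-tune on the small real set, typically at a lower learning rate.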

    Space Time MUSIC: Consistent Signal Subspace Estimation for Wide-band Sensor Arrays

    Wide-band direction-of-arrival (DOA) estimation with sensor arrays is an essential task in sonar, radar, acoustics, biomedical, and multimedia applications. Many state-of-the-art wide-band DOA estimators coherently process frequency-binned array outputs by approximate Maximum Likelihood, Weighted Subspace Fitting, or focusing techniques. This paper shows that bin signals obtained by filter-bank approaches do not obey the finite-rank narrow-band array model, because spectral leakage and the change of the array response with frequency within the bin create ghost sources dependent on the particular realization of the source process. Therefore, existing DOA estimators based on binning cannot claim consistency even with perfect knowledge of the array response. In this work, a more realistic array model with finite-length sensor impulse responses is assumed, which still has finite rank under a space-time formulation. It is shown that signal subspaces at arbitrary frequencies can be consistently recovered under mild conditions by applying MUSIC-type (ST-MUSIC) estimators to the dominant eigenvectors of the wide-band space-time sensor cross-correlation matrix. A novel Maximum Likelihood based ST-MUSIC subspace estimate is developed in order to recover consistency. The number of sources active at each frequency is estimated by information-theoretic criteria. The sample ST-MUSIC subspaces can be fed to any subspace fitting DOA estimator at single or multiple frequencies. Simulations confirm that the new technique clearly outperforms binning approaches at sufficiently high signal-to-noise ratio, when model mismatches exceed the noise floor.
    Comment: 15 pages, 10 figures. Accepted in a revised form by the IEEE Transactions on Signal Processing on 12 February 2018.
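
    The space-time formulation can be sketched in a few lines. The hypothetical numpy fragment below builds the wide-band space-time cross-correlation matrix by stacking delayed snapshots and evaluates a MUSIC-type pseudo-spectrum from its noise subspace; it is a bare-bones illustration, not the consistent Maximum Likelihood ST-MUSIC estimator developed in the paper, and all names are assumptions.

    import numpy as np

    def space_time_covariance(x, L):
        # x: (M, N) wide-band sensor signals. Stacking L consecutive samples
        # per sensor yields space-time snapshots of size M*L, which retain a
        # finite-rank signal structure.
        M, N = x.shape
        Y = np.vstack([x[:, l:N - L + 1 + l] for l in range(L)])  # (M*L, N-L+1)
        return (Y @ Y.conj().T) / Y.shape[1]

    def music_pseudospectrum(R, steering, n_src):
        # MUSIC-type pseudo-spectrum over a grid of space-time steering
        # vectors, steering: (M*L, G); peaks indicate candidate DOAs.
        _, V = np.linalg.eigh(R)               # eigenvalues ascending
        En = V[:, : R.shape[0] - n_src]        # noise subspace
        num = (np.abs(steering) ** 2).sum(axis=0)
        den = (np.abs(En.conj().T @ steering) ** 2).sum(axis=0)
        return num / den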

    Self-Calibration Methods for Uncontrolled Environments in Sensor Networks: A Reference Survey

    Growing progress in sensor technology has constantly expanded the number and range of low-cost, small, and portable sensors on the market, increasing the number and type of physical phenomena that can be measured with wirelessly connected sensors. Large-scale deployments of wireless sensor networks (WSNs) involving hundreds or thousands of devices and limited budgets often constrain the choice of sensing hardware, which generally has reduced accuracy, precision, and reliability. It is therefore challenging to achieve good data quality and maintain error-free measurements over the whole system lifetime. Self-calibration or recalibration in ad hoc sensor networks to preserve data quality is essential yet challenging, for several reasons, such as the presence of random noise and the absence of suitable general models. Calibration performed in the field, without accurate and controlled instrumentation, is said to take place in an uncontrolled environment. This paper surveys current and fundamental self-calibration approaches and models for wireless sensor networks in uncontrolled environments.
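
    For a flavour of the simplest class of schemes such surveys cover, here is a hypothetical sketch of collaborative recalibration in the field: each sensor fits a gain and offset against a neighbourhood consensus, under the (strong) assumption that co-located sensors observe the same phenomenon. It is only an illustration; the survey discusses far more general noise and drift models.

    import numpy as np

    def recalibrate(readings):
        # readings: (S, T) array of S co-located sensors over T time steps.
        # Returns per-sensor (gain, offset) mapping raw values toward the
        # consensus signal, used here as a proxy for ground truth.
        consensus = readings.mean(axis=0)
        params = []
        for r in readings:
            A = np.vstack([r, np.ones_like(r)]).T       # linear model
            gain, offset = np.linalg.lstsq(A, consensus, rcond=None)[0]
            params.append((gain, offset))
        return params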

    Source Localization Using Acoustic Vector Sensors: A MUSIC Approach

    Traditionally, a large array of microphones is used to localize multiple far-field sources in acoustics. We present a sound source localization technique that requires far fewer channels and measurement locations (reducing data channels, setup time, and cabling issues). This is achieved by using an acoustic vector sensor (AVS) in air that consists of four collocated sensors: three orthogonally placed acoustic particle velocity sensors and an omnidirectional sound pressure transducer. Experimental evidence is presented demonstrating that a single 4-channel AVS-based approach accurately localizes two uncorrelated sources. The method is extended to multiple AVSs, increasing the number of sources that can be identified. Theory and measurement results are presented, with attention to the theoretical possibilities and limitations of this approach, as well as the signal processing techniques based on the MUSIC method.
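
    The AVS manifold makes for a compact MUSIC example. Below is a hypothetical numpy sketch assuming the usual far-field AVS steering vector (pressure plus the three Cartesian components of the source direction); it illustrates the principle rather than the authors' processing chain.

    import numpy as np

    def avs_steering(az, el):
        # 4-element AVS manifold: [pressure, vx, vy, vz] for a far-field
        # source at azimuth az and elevation el (radians).
        return np.array([1.0,
                         np.cos(el) * np.cos(az),
                         np.cos(el) * np.sin(az),
                         np.sin(el)])

    def avs_music(R, n_src, az_grid, el=0.0):
        # MUSIC pseudo-spectrum over azimuth from the 4x4 AVS covariance R.
        _, V = np.linalg.eigh(R)       # eigenvalues ascending
        En = V[:, : 4 - n_src]         # noise subspace
        spec = [1.0 / np.linalg.norm(En.conj().T @ avs_steering(az, el)) ** 2
                for az in az_grid]
        return np.array(spec)          # peaks at the source azimuths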

    SoundCompass: a distributed MEMS microphone array-based sensor for sound source localization

    Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 microelectromechanical-systems (MEMS) microphones, an inertial measurement unit, and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass's hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25 m² anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000 m² open field.
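
    As an illustration of the directionality scan such a node performs, here is a hypothetical steered-response-power sketch for a planar microphone array; on the real device this runs in FPGA firmware, so the numpy version below, including all names and the frequency-domain formulation, is purely an assumption for exposition.

    import numpy as np

    def srp_scan(frames, mic_pos, fs, az_grid, c=343.0):
        # Steered delay-and-sum power per azimuth for a planar array.
        # frames: (M, N) time-domain snapshot; mic_pos: (M, 2) coordinates.
        M, N = frames.shape
        F = np.fft.rfft(frames, axis=1)
        freqs = np.fft.rfftfreq(N, 1.0 / fs)
        powers = []
        for az in az_grid:
            u = np.array([np.cos(az), np.sin(az)])      # look direction
            tau = mic_pos @ u / c                        # per-mic delay
            shift = np.exp(2j * np.pi * freqs[None, :] * tau[:, None])
            aligned = (F * shift).sum(axis=0)            # steer and sum
            powers.append((np.abs(aligned) ** 2).sum())
        return np.array(powers)  # the peak indicates the source bearing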