
    Online DOA estimation using real eigenbeam ESPRIT with propagation vector matching

    The eigenbeam estimation of signal parameters via rotational invariance technique (EB-ESPRIT) [1] is a method to estimate multiple directions-of-arrival (DOAs) of sound sources from a spherical microphone array recording in the spherical harmonics domain (SHD). The method first constructs a signal subspace from the SHD signal and then exploits the fact that, for plane-wave sources, the signal subspace is spanned by the (complex conjugate) spherical harmonic vectors at the source directions. The DOAs are then estimated from the signal subspace using recurrence relations of spherical harmonics. In recent publications, the singularity and ambiguity problems of the original EB-ESPRIT have been solved by jointly combining several types of recurrence relations. The state-of-the-art EB-ESPRIT, denoted DOA-vector EB-ESPRIT, is based on three recurrence relations [2,3]. This variant estimates the source DOAs with significantly higher accuracy than the other EB-ESPRIT variants [3]. However, a permutation problem arises, which can be solved by using, for example, a joint diagonalization method [3]. For parametric spatial audio signal processing in the short-time Fourier transform (STFT) domain, DOA estimates are usually needed per time frame and frequency bin. In principle, the DOA-vector EB-ESPRIT method can estimate the source DOAs per time-frequency bin in an online manner. However, due to the eigendecomposition of the PSD matrix and the joint diagonalization procedure, the computational cost may be too large for many real-time applications. In this work, we propose a computationally more efficient version of the DOA-vector EB-ESPRIT based on real spherical harmonics recurrence relations. First, we separate the real and imaginary parts of the real SHD signal in the STFT domain and construct a real signal subspace thereof, which can be recursively estimated using the deflated projection approximation subspace tracking (PASTd) method [4]. For the case of one source per time-frequency bin, joint diagonalization is unnecessary and the EB-ESPRIT equations simplify. For the case of two sources, the plane-wave propagation vectors can be estimated directly from the signal subspace eigenvectors by exploiting properties of the propagation vectors. This method can be seen as a higher-order ambisonics extension of the robust B-format DOA estimation in [5]. The proposed method for estimating two DOAs can be summarized as follows:
    1. Separate the real and imaginary parts of the real SHD signal in the STFT domain.
    2. Recursively estimate the signal subspace eigenvectors using PASTd.
    3. Estimate the two plane-wave propagation vectors from the signal subspace eigenvectors, using the fact that they span the same subspace together with properties of the propagation vectors (subspace-propagation vector matching).
    4. Estimate the DOAs using three types of real spherical harmonics recurrence relations.
    Alternatively, one can estimate the DOAs analogously to the complex DOA-vector EB-ESPRIT using the joint diagonalization method proposed in [3]. For the evaluation, we simulate SHD signals up to third order with one and two speech sources in reverberant and noisy environments. For the one-source scenarios, we compare the real DOA-vector EB-ESPRIT with subspace estimation based on singular value decomposition (SVD) against PASTd. For the two-source scenarios, we compare the real DOA-vector EB-ESPRIT with joint diagonalization against subspace-propagation vector matching and the robust B-format DOA estimation method. We analyze the angular distributions of the DOA estimates and find that DOA estimation using PASTd for the signal subspace estimation is slightly less accurate than the SVD-based method but computationally much more efficient. For the estimation of two DOAs, the EB-ESPRIT-based methods outperform the robust B-format estimation method when higher SHD orders are considered. The joint diagonalization method is more accurate than the subspace-propagation vector matching method; however, the latter is computationally more efficient.
    References:
    [1] H. Teutsch and W. Kellermann, "Detection and localization of multiple wideband acoustic sources based on wavefield decomposition using spherical apertures," in Proc. IEEE Intl. Conf. Acoust., Speech Signal Proc. (ICASSP), Mar. 2008, pp. 5276–5279.
    [2] B. Jo and J. W. Choi, "Nonsingular EB-ESPRIT for the localization of early reflections in a room," J. Acoust. Soc. Am., vol. 144, no. 3, p. 1882, Sep. 2018.
    [3] A. Herzog and E. A. P. Habets, "Eigenbeam-ESPRIT for DOA-vector estimation," IEEE Signal Process. Lett., vol. 26, no. 4, pp. 572–576, Apr. 2019.
    [4] B. Yang, "Projection approximation subspace tracking," IEEE Trans. Signal Process., vol. 43, no. 1, Jan. 1995.
    [5] O. Thiergart and E. A. P. Habets, "Robust direction-of-arrival estimation of two simultaneous plane waves from a B-format signal," in Proc. IEEE 27th Conv. of Electrical and Electronics Engineers in Israel, Nov. 2012.
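    Step 2 of the summarized method relies on the deflation-based PASTd recursion of [4]. The following is a minimal NumPy sketch of one PASTd update; the function name, forgetting factor, and array shapes are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def pastd_update(W, d, x, beta=0.97):
    """One deflation-based PASTd step (after Yang, 1995).

    Tracks the r dominant eigenvectors of the exponentially
    weighted covariance of the input snapshots.
    W : (n, r) current eigenvector estimates (updated in place)
    d : (r,)   running eigenvalue (power) estimates
    x : (n,)   new snapshot
    """
    x = x.astype(complex).copy()
    for i in range(W.shape[1]):
        y = np.vdot(W[:, i], x)            # projection coefficient w_i^H x
        d[i] = beta * d[i] + abs(y) ** 2   # update deflated power estimate
        W[:, i] += (x - W[:, i] * y) * np.conj(y) / d[i]
        x = x - W[:, i] * y                # deflate: remove tracked component
    return W, d
```

    Feeding the per-bin SHD snapshots through such a recursion costs O(nr) per snapshot, instead of a full eigendecomposition of the PSD matrix per frame, which is the source of the efficiency gain discussed above.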

    Convolutive Blind Source Separation Methods

    In this chapter, we provide an overview of existing algorithms for the blind source separation of convolutive audio mixtures. We present a taxonomy into which many of the existing algorithms can be organized, and we summarize published results from those algorithms that have been applied to real-world audio separation tasks.
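    The convolutive mixing model that these algorithms invert can be illustrated with a minimal sketch; the function name, filter lengths, and mixing filters below are arbitrary illustrations, not from the chapter:

```python
import numpy as np

def convolutive_mix(sources, filters):
    """Mix source signals through FIR mixing filters.

    sources : (n_src, n_samples) source signals s_j(t)
    filters : (n_mic, n_src, L)  room impulse responses h_ij(k)
    returns : (n_mic, n_samples + L - 1) observations
              x_i(t) = sum_j sum_k h_ij(k) * s_j(t - k)
    """
    n_mic, n_src, L = filters.shape
    n = sources.shape[1]
    x = np.zeros((n_mic, n + L - 1))
    for i in range(n_mic):
        for j in range(n_src):
            x[i] += np.convolve(sources[j], filters[i, j])
    return x
```

    Frequency-domain methods turn this time-domain convolution into approximately instantaneous per-bin mixtures via the STFT, a structure that many of the surveyed algorithms exploit.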

    Proceedings of the EAA Spatial Audio Signal Processing symposium: SASP 2019


    Source Separation for Hearing Aid Applications


    Efficient Multiband Algorithms for Blind Source Separation

    The problem of blind source separation refers to recovering the original signals, called source signals, from the mixed signals, called observation signals, recorded in a reverberant environment. The mixture is a function of a sequence of original speech signals mixed in a reverberant room. The objective is to separate the mixed signals and recover the original signals without degradation and without prior information about the features of the sources. The strategy used to achieve this objective is to use multiple bands that work at a lower rate, have a lower computational cost and converge more quickly than the conventional scheme. Our motivation is the competitive convergence speed reported for unequal-passbands scheme applications. The objective of this research is to improve unequal-passbands schemes by increasing the speed of convergence and reducing the computational cost.
    The first proposed work is a novel maximally decimated unequal-passbands scheme. This scheme uses multiple bands, which makes it work at a reduced sampling rate and with a low computational cost. An adaptation approach is derived with an adaptation step that improves the convergence speed. The performance of the proposed scheme was measured in several ways. First, the mean square errors of the various bands are measured and compared to those of a maximally decimated equal-passbands scheme, currently the best-performing method. The results show that the proposed scheme has a faster convergence rate than the maximally decimated equal-passbands scheme. Second, when the scheme is tested on white and coloured inputs with a low number of bands, it does not yield good results; when the number of bands is increased, the speed of convergence improves. Third, the scheme is tested under quick changes, where its performance is similar to that of the equal-passbands scheme. Fourth, the scheme is also tested in a stationary state, and the experimental results confirm the theoretical work.
    For more challenging scenarios, an unequal-passbands scheme with over-sampled decimation is proposed; the greater the number of bands, the more efficient the separation. First, the results are compared to the currently best-performing method. Second, an experimental comparison is made between the proposed multiband scheme and the conventional scheme. The results show that the convergence speed and the signal-to-interference ratio of the proposed scheme are higher than those of the conventional scheme, while its computational cost is lower.
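    At the core of each band sits an adaptive filter updated at the decimated rate. A minimal sketch of such a per-band building block, a normalized LMS (NLMS) identification step, follows; the filter length, step size, and test system are illustrative assumptions, not the thesis's adaptation rule:

```python
import numpy as np

def nlms_identify(x, d, L, mu=0.5, eps=1e-8):
    """Adapt an L-tap filter w so its output w^T u(n) tracks the
    desired signal d(n). A multiband scheme runs one such short
    adapter per decimated band signal instead of one long
    fullband filter."""
    w = np.zeros(L)
    for n in range(L, len(x)):
        u = x[n - L + 1:n + 1][::-1]     # most recent L inputs, newest first
        e = d[n] - w @ u                 # a-priori error
        w += mu * e * u / (eps + u @ u)  # normalized gradient step
    return w
```

    Running M such adapters of length L/M at rate fs/M costs roughly L/M multiply-adds per fullband input sample, versus L for a single fullband filter: this M-fold saving, together with the per-band step-size freedom, is what the multiband schemes exploit.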

    Speech dereverberation and speaker separation using microphone arrays in realistic environments

    This thesis concentrates on comparing novel and existing dereverberation and speaker separation techniques using multiple corpora, including a new corpus collected using a microphone array. Many corpora currently used for these techniques are recorded using head-mounted microphones in anechoic chambers; this novel corpus contains recordings with noise and reverberation made in office and workshop environments. The novel algorithms present a different way of approximating the reverberation, producing results that are competitive with existing algorithms. Dereverberation is evaluated using seven correlation-based algorithms applied to two different corpora. Three of these are novel algorithms (Hs NTF, Cauchy WPE and Cauchy MIMO WPE). Both non-learning and learning algorithms are tested, with the learning algorithms performing better. For single- and multi-channel speaker separation, unsupervised non-negative matrix factorization (NMF) algorithms are compared using three cost functions combined with sparsity, convolution and direction of arrival. The results show that the choice of cost function is important for improving the separation result. Furthermore, six different supervised deep learning algorithms are applied to single-channel speaker separation, where incorporating historic information improves the results. When comparing NMF to deep learning, NMF converges to a solution faster and provides a better result for the corpora used in this thesis.
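    The unsupervised NMF separation compared above factorizes a magnitude spectrogram V as V ≈ WH under a nonnegativity constraint. A minimal sketch using the classic multiplicative updates for the KL divergence cost follows; this is one of several possible cost functions, and the function name, rank, and iteration count are illustrative assumptions:

```python
import numpy as np

def nmf_kl(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Factorize a nonnegative matrix V (e.g. a magnitude
    spectrogram, freq x time) as V ~ W @ H using the standard
    multiplicative updates for the KL divergence cost."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(n_iter):
        R = V / (W @ H + eps)                            # elementwise ratio
        H *= (W.T @ R) / (W.T @ np.ones((F, T)) + eps)   # update activations
        R = V / (W @ H + eps)
        W *= (R @ H.T) / (np.ones((F, T)) @ H.T + eps)   # update basis spectra
    return W, H
```

    For separation, the columns of W are grouped per source and each source is reconstructed from V with a Wiener-style mask built from its partial reconstruction.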

    Audio source separation for music in low-latency and high-latency scenarios

    This thesis proposes specific methods to address the limitations of current music source separation methods in low-latency and high-latency scenarios. First, we focus on methods with low computational cost and low latency. We propose the use of Tikhonov regularization as a method for spectrum decomposition in the low-latency context. We compare it to existing techniques in pitch estimation and tracking tasks, crucial steps in many separation methods. We then use the proposed spectrum decomposition method in low-latency separation tasks targeting singing voice, bass and drums. Second, we propose several high-latency methods that improve the separation of singing voice by modeling components that are often not accounted for, such as breathiness and consonants. Finally, we explore using temporal correlations and human annotations to enhance the separation of drums and complex polyphonic music signals.
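    The low-latency Tikhonov decomposition can be sketched as a regularized least-squares projection of each spectrum frame onto a basis of spectral templates; the function name, basis, and regularization weight below are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def tikhonov_decompose(B, x, lam=0.1):
    """Decompose a spectrum frame x onto basis templates (the
    columns of B) with Tikhonov regularization:

        g = argmin ||x - B g||^2 + lam * ||g||^2
          = (B^T B + lam I)^{-1} B^T x

    Unlike iterative NMF, this closed form has a fixed, small cost
    per frame, which suits the low-latency setting."""
    G = B.T @ B + lam * np.eye(B.shape[1])
    return np.linalg.solve(G, B.T @ x)
```

    Note that, unlike NMF activations, the resulting gains g may be negative; the comparison against existing pitch estimation and tracking techniques mentioned above evaluates this trade-off.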