Differential fast fixed-point algorithms for underdetermined instantaneous and convolutive partial blind source separation
This paper concerns underdetermined linear instantaneous and convolutive
blind source separation (BSS), i.e., the case when the number of observed mixed
signals is lower than the number of sources. We propose partial BSS methods,
which separate supposedly nonstationary sources of interest (while keeping
residual components for the other, supposedly stationary, "noise" sources).
These methods are based on the general differential BSS concept that we
introduced before. In the instantaneous case, the approach proposed in this
paper consists of a differential extension of the FastICA method (which does
not apply to underdetermined mixtures). In the convolutive case, we extend our
recent time-domain fast fixed-point C-FICA algorithm to underdetermined
mixtures. Both proposed approaches thus keep the attractive features of the
FastICA and C-FICA methods. Our approaches are based on differential sphering
processes, followed by the optimization of the differential nonnormalized
kurtosis that we introduce in this paper. Experimental tests show that these
differential algorithms are much more robust to noise sources than the standard
FastICA and C-FICA algorithms.
Comment: this paper describes our differential FastICA-like algorithms for linear instantaneous and convolutive underdetermined mixtures.
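The fixed-point machinery the abstract builds on can be illustrated with the standard kurtosis-based FastICA iteration on a determined instantaneous mixture. This is a minimal sketch, not the differential variant introduced in the paper: the 2x2 mixing matrix, the Laplace-distributed sources, and the sample count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Two independent super-Gaussian sources and a hypothetical mixing matrix
S = rng.laplace(size=(2, n))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S                                  # observed instantaneous mixtures

# Sphering (whitening): decorrelate the mixtures and normalize variances
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = (E / np.sqrt(d)) @ E.T @ X             # cov(Z) is approximately identity

# FastICA fixed-point update with the kurtosis nonlinearity:
# w <- E[z (w^T z)^3] - 3 w, followed by renormalization
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(100):
    y = w @ Z
    w_new = (Z * y**3).mean(axis=1) - 3 * w
    w_new /= np.linalg.norm(w_new)
    if abs(abs(w_new @ w) - 1) < 1e-10:    # converged up to a sign flip
        w = w_new
        break
    w = w_new

y = w @ Z                                  # one recovered source (up to sign/scale)
corr = max(abs(np.corrcoef(y, S[0])[0, 1]),
           abs(np.corrcoef(y, S[1])[0, 1]))
```

The differential algorithms of the paper replace the plain kurtosis in this update with the differential nonnormalized kurtosis, computed over two time windows, so that the stationary "noise" sources cancel out of the statistics.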
Convolutive Blind Source Separation Methods
In this chapter, we provide an overview of existing algorithms for blind source separation of convolutive audio mixtures. We provide a taxonomy wherein many of the existing algorithms can be organized, and we present published results from those algorithms that have been applied to real-world audio separation tasks.
Blind separation of convolutive mixtures based on second order and third order statistics
This paper addresses the problem of blind separation of linear convolutive mixtures. We first reformulate the problem into a blind separation of linear instantaneous mixtures, and then a statistical approach is applied to solve the reformulated problem. From the statistics of the mixtures, two kinds of matrix pencils are constructed to estimate the mixing matrix. The original sources are then separated with the estimated mixing matrix. For computational efficiency and robustness, one matrix in the pencil is constructed from the second-order statistics, and the other is constructed from the third-order statistics. The proposed methods do not require exact knowledge of the channel order. Simulation results show that the methods are robust and have good performance.
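The matrix-pencil idea can be sketched in its simplest form: build two statistics matrices that are jointly diagonalized by the mixing matrix, and recover the separating directions as generalized eigenvectors of the pencil. The sketch below uses two second-order matrices (zero-lag and lag-1 covariances, as in AMUSE-style methods) on an instantaneous mixture; the paper's construction pairing second- and third-order statistics on reformulated convolutive mixtures is more elaborate, and the AR source models and mixing matrix here are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eig
from scipy.signal import lfilter

rng = np.random.default_rng(1)
n = 50000

def ar1(a, n):
    """AR(1) source with pole a; the burn-in removes the filter transient."""
    e = rng.standard_normal(n + 100)
    return lfilter([1.0], [1.0, -a], e)[100:]

# Two sources with distinct temporal colorations, hypothetical mixing matrix
S = np.vstack([ar1(0.9, n), ar1(-0.5, n)])
A = np.array([[1.0, 0.5], [0.3, 1.0]])
X = A @ S

# Matrix pencil from two second-order statistics:
# R0 = E[x(t) x(t)^T] and a symmetrized lag-1 covariance R1
R0 = X @ X.T / n
X1 = X[:, 1:] @ X[:, :-1].T / (n - 1)
R1 = (X1 + X1.T) / 2

# Generalized eigenvectors of the pencil (R1, R0) give separating directions,
# provided the sources have distinct lag-1 autocorrelations
vals, W = eig(R1, R0)
Y = W.real.T @ X                        # recovered sources, up to scale/permutation

# |correlation| between each recovered component and each true source
C = np.abs(np.corrcoef(np.vstack([Y, S])))[:2, 2:]
```

The pencil is identifiable here because the two eigenvalues (the sources' lag-1 autocorrelations) differ; mixing a second-order with a third-order matrix, as in the paper, relaxes the assumptions each statistic must satisfy on its own.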
Multichannel Speech Separation and Enhancement Using the Convolutive Transfer Function
This paper addresses the problem of speech separation and enhancement from
multichannel convolutive and noisy mixtures, \emph{assuming known mixing
filters}. We propose to perform the speech separation and enhancement task in
the short-time Fourier transform domain, using the convolutive transfer
function (CTF) approximation. Compared to time-domain filters, the CTF has far
fewer taps; consequently, it has fewer near-common zeros among channels and
lower computational complexity. The work proposes three speech-source recovery
methods, namely: i) the multichannel inverse filtering method, i.e. the
multiple input/output inverse theorem (MINT), exploited in the CTF domain;
ii) for the multi-source case, a beamforming-like multichannel inverse
filtering method applying single-source MINT and using power minimization,
which is suitable whenever the source CTFs are not all known; and iii) a
constrained Lasso method, where the sources are recovered by minimizing the
ℓ1-norm to impose their spectral sparsity, under the constraint that the
ℓ2-norm fitting cost, between the microphone signals and the mixing model
involving the unknown source signals, is less than a tolerance. The noise can
be reduced by setting a tolerance onto the noise power. Experiments under
various acoustic conditions are carried out to evaluate the three proposed
methods. The comparison between them as well as with the baseline methods is
presented.
Comment: Submitted to IEEE/ACM Transactions on Audio, Speech and Language Processing.
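The constrained Lasso recovery in method iii) can be sketched on a toy per-frequency-bin model: a known mixing matrix, more unknown source coefficients than microphones, and a sparsity-promoting ℓ1 penalty. This sketch solves the Lagrangian form of the problem with plain ISTA rather than the paper's constrained formulation over CTF-filtered signals, and the problem sizes, noise level, and regularization weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy narrowband model: x = H s + noise, with a sparse coefficient vector s
m, k = 16, 32                            # mics, candidate coefficients (hypothetical)
H = rng.standard_normal((m, k)) / np.sqrt(m)
s_true = np.zeros(k)
s_true[rng.choice(k, 3, replace=False)] = [3.0, -2.0, 1.5]
x = H @ s_true + 0.01 * rng.standard_normal(m)

# ISTA: minimize 0.5 * ||x - H s||_2^2 + lam * ||s||_1
# (Lagrangian counterpart of "minimize ||s||_1 s.t. ||x - H s||_2 <= tolerance")
lam = 0.05
L = np.linalg.norm(H, 2) ** 2            # Lipschitz constant of the gradient
s = np.zeros(k)
for _ in range(2000):
    g = H.T @ (H @ s - x)                # gradient of the quadratic fitting term
    z = s - g / L
    s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

residual = np.linalg.norm(x - H @ s)     # ℓ2 fitting cost at the solution
err = np.linalg.norm(s - s_true) / np.linalg.norm(s_true)
```

In the constrained form used by the paper, the tolerance on the ℓ2 residual is set from the noise power, which plays the role of the fixed penalty weight lam here.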