Probabilistic Modeling Paradigms for Audio Source Separation
This is the author's final version of the article, first published as E. Vincent, M. G. Jafari, S. A. Abdallah, M. D. Plumbley and M. E. Davies. Probabilistic Modeling Paradigms for Audio Source Separation. In W. Wang (Ed.), Machine Audition: Principles, Algorithms and Systems. Chapter 7, pp. 162-185. IGI Global, 2011. ISBN 978-1-61520-919-4. DOI: 10.4018/978-1-61520-919-4.ch007

Most sound scenes result from the superposition of several sources, which can be separately perceived and analyzed by human listeners. Source separation aims to provide machine listeners with similar skills by extracting the sounds of individual sources from a given scene. Existing separation systems operate either by emulating the human auditory system or by inferring the parameters of probabilistic sound models. In this chapter, the authors focus on the latter approach and provide a joint overview of established and recent models, including independent component analysis, local time-frequency models and spectral template-based models. They show that most models are instances of one of two general paradigms: linear modeling or variance modeling. They compare the merits of each paradigm and report objective performance figures, and conclude by discussing promising combinations of probabilistic priors and inference algorithms that could form the basis of future state-of-the-art systems.
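As a concrete illustration of the linear modeling paradigm mentioned in this abstract, the following minimal sketch (not the chapter's code; the mixing matrix, seed, source distributions and kurtosis nonlinearity are all illustrative choices) separates a toy two-source instantaneous mixture with FastICA-style fixed-point iterations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of the linear modeling paradigm: x = A s with independent,
# non-Gaussian sources (all values here are invented for illustration).
n = 5000
s = np.vstack([
    rng.laplace(size=n),        # super-Gaussian source
    rng.uniform(-1, 1, n),      # sub-Gaussian source
])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])      # hypothetical mixing matrix
x = A @ s

# Whitening: decorrelate the mixtures and equalise their variance.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x

# One-unit FastICA with deflation, kurtosis nonlinearity g(u) = u^3:
# w <- E[z g(w.z)] - E[g'(w.z)] w, then re-orthogonalise and renormalise.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(100):
        w_new = (z * (w @ z) ** 3).mean(axis=1) - 3.0 * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # deflate against earlier rows
        w_new /= np.linalg.norm(w_new)
        done = abs(w @ w_new) > 1 - 1e-10    # converged up to sign
        w = w_new
        if done:
            break
    W[i] = w

s_est = W @ z  # recovered sources, up to permutation, sign and scaling
```

The recovered rows of `s_est` match the true sources up to the usual ICA ambiguities of order, sign and scale.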
An adaptive stereo basis method for convolutive blind audio source separation
NOTICE: this is the author's version of a work that was accepted for publication in Neurocomputing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Neurocomputing, 71(10-12), June 2008. DOI: neucom.2007.08.02
Audio Source Separation Using Sparse Representations
This is the author's final version of the article, first published as A. Nesbit, M. G. Jafari, E. Vincent and M. D. Plumbley. Audio Source Separation Using Sparse Representations. In W. Wang (Ed.), Machine Audition: Principles, Algorithms and Systems. Chapter 10, pp. 246-264. IGI Global, 2011. ISBN 978-1-61520-919-4. DOI: 10.4018/978-1-61520-919-4.ch010

The authors address the problem of audio source separation, namely, the recovery of audio signals from recordings of mixtures of those signals. The sparse component analysis framework is a powerful method for achieving this. Sparse orthogonal transforms, in which only a few transform coefficients differ significantly from zero, are developed; once the signal has been transformed, energy is apportioned from each transform coefficient to each estimated source, and, finally, the signal is reconstructed using the inverse transform. The overriding aim of this chapter is to demonstrate how this framework, as exemplified here by two different decomposition methods which adapt to the signal to represent it sparsely, can be used to solve different problems in different mixing scenarios. To address the instantaneous (neither delays nor echoes) and underdetermined (more sources than mixtures) mixing model, a lapped orthogonal transform is adapted to the signal by selecting a basis from a library of predetermined bases. This method is closely related to the windowing methods used in the MPEG audio coding framework. In the anechoic (delays but no echoes) and determined (equal number of sources and mixtures) mixing case, a greedy adaptive transform is used, based on orthogonal basis functions that are learned from the observed data instead of being selected from a predetermined library of bases. This transform is found to encode the signal characteristics by introducing a feedback system between the bases and the observed data. Experiments on mixtures of speech and music signals demonstrate that these methods give good signal approximations and separation performance, and indicate promising directions for future research.
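The transform / apportion / invert pipeline described in this abstract can be sketched as follows. This is a toy example, not the chapter's method: all signals, supports and the mixing matrix are invented, and a plain orthonormal DCT stands in for the adapted lapped and learned transforms; each coefficient is apportioned wholly to the best-aligned mixing direction under the assumption that at most one source is active per coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256

# Orthonormal DCT-II matrix (rows are basis functions), standing in for the
# signal-adapted orthogonal transforms discussed in the chapter.
j, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
D = np.sqrt(2.0 / n) * np.cos(np.pi * (i + 0.5) * j / n)
D[0] /= np.sqrt(2.0)

# Three sources that are sparse, with disjoint supports, in the DCT domain.
C = np.zeros((3, n))
for p, sl in enumerate([slice(5, 15), slice(60, 70), slice(120, 130)]):
    C[p, sl] = 5.0 * rng.standard_normal(10)
s = C @ D                          # time-domain sources, one per row

# Underdetermined instantaneous mixing: 2 mixtures of 3 sources.
A = np.array([[1.0, 0.0, 0.7],
              [0.0, 1.0, 0.7]])
A = A / np.linalg.norm(A, axis=0)  # unit-norm mixing directions
x = A @ s

# 1) Transform the mixtures; by linearity, X = A @ C in the sparse domain.
X = x @ D.T

# 2) Apportion each coefficient to the best-aligned mixing direction.
winner = np.abs(A.T @ X).argmax(axis=0)
C_est = np.zeros((3, n))
for p in range(3):
    cols = winner == p
    C_est[p, cols] = A[:, p] @ X[:, cols]

# 3) Reconstruct the estimated sources with the inverse transform.
s_est = C_est @ D
```

Because the supports are exactly disjoint here, the reconstruction is essentially exact; with real audio the supports overlap, which is why the chapter adapts the transform to the signal to make the representation as sparse as possible.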
Differential fast fixed-point algorithms for underdetermined instantaneous and convolutive partial blind source separation
This paper concerns underdetermined linear instantaneous and convolutive blind source separation (BSS), i.e., the case when the number of observed mixed signals is lower than the number of sources. We propose partial BSS methods, which separate supposedly nonstationary sources of interest (while keeping residual components for the other, supposedly stationary, "noise" sources). These methods are based on the general differential BSS concept that we introduced before. In the instantaneous case, the approach proposed in this paper consists of a differential extension of the FastICA method (which does not apply to underdetermined mixtures). In the convolutive case, we extend our recent time-domain fast fixed-point C-FICA algorithm to underdetermined mixtures. Both proposed approaches thus keep the attractive features of the FastICA and C-FICA methods. Our approaches are based on differential sphering processes, followed by the optimization of the differential nonnormalized kurtosis that we introduce in this paper. Experimental tests show that these differential algorithms are much more robust to noise sources than the standard FastICA and C-FICA algorithms.

Comment: this paper describes our differential FastICA-like algorithms for linear instantaneous and convolutive underdetermined mixtures.
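A minimal sketch of the differential idea described in this abstract, under strong simplifying assumptions (a single stationary Gaussian noise source, two fixed time windows, and a grid search over directions in place of the paper's fixed-point iteration; all signals and the 2x3 mixing matrix are invented): statistics computed over two windows are subtracted, so the stationary source cancels and only the nonstationary sources of interest remain.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
h = n // 2

# Two nonstationary sources of interest (louder in the first half) plus one
# stationary Gaussian "noise" source: 3 sources, 2 mixtures (underdetermined).
env1 = np.where(np.arange(n) < h, 2.0, 0.5)
env2 = np.where(np.arange(n) < h, 1.5, 0.4)
s = np.vstack([
    env1 * rng.laplace(size=n),
    env2 * rng.laplace(size=n),
    0.3 * rng.standard_normal(n),          # stationary noise source
])
A = np.array([[1.0, 0.5, 0.8],
              [0.3, 1.0, 0.6]])            # hypothetical 2x3 mixing matrix
x = A @ s

# Differential covariance: the stationary source contributes identically to
# both windows, so its contribution cancels in the difference.
Cd = np.cov(x[:, :h]) - np.cov(x[:, h:])   # positive definite by construction

# Differential sphering: whiten with Cd instead of the plain covariance.
d, E = np.linalg.eigh(Cd)
z = E @ np.diag(d ** -0.5) @ E.T @ x

# Differential nonnormalized kurtosis of y = w.z, maximised over unit vectors
# w = (cos t, sin t); its extrema point at the nonstationary sources.
def diff_kurt(y):
    k = lambda u: np.mean(u ** 4) - 3.0 * np.mean(u ** 2) ** 2
    return k(y[:h]) - k(y[h:])

ts = np.linspace(0.0, np.pi, 360, endpoint=False)
scores = [diff_kurt(np.cos(t) * z[0] + np.sin(t) * z[1]) for t in ts]
t1 = ts[int(np.argmax(np.abs(scores)))]
w1 = np.array([np.cos(t1), np.sin(t1)])
w2 = np.array([-w1[1], w1[0]])             # orthogonal in the sphered space
y = np.vstack([w1 @ z, w2 @ z])            # estimated sources of interest
```

The two rows of `y` match the two nonstationary sources up to scale and sign, while the stationary source is left as a small residual, which is the "partial BSS" behaviour the abstract describes.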
Exploitation of source nonstationarity in underdetermined blind source separation with advanced clustering techniques
The problem of blind source separation (BSS) is investigated. Following the assumption that the time-frequency (TF) distributions of the input sources do not overlap, quadratic TF representation is used to exploit the sparsity of the statistically nonstationary sources. However, separation performance is shown to be limited by the selection of a certain threshold in classifying the eigenvectors of the TF matrices drawn from the observation mixtures. Two methods are, therefore, proposed based on recently introduced advanced clustering techniques, namely Gap statistics and self-splitting competitive learning (SSCL), to mitigate the problem of eigenvector classification. The novel integration of these two approaches successfully overcomes the problem of artificial sources induced by insufficient knowledge of the threshold and enables automatic determination of the number of active sources over the observation. The separation performance is thereby greatly improved. Practical consequences of violating the TF orthogonality assumption in the current approach are also studied, which motivates the proposal of a new solution robust to violation of orthogonality. In this new method, the TF plane is partitioned into appropriate blocks and source separation is thereby carried out in a block-by-block manner. Numerical experiments with linear chirp signals and Gaussian minimum shift keying (GMSK) signals are included which support the improved performance of the proposed approaches.
The LOST Algorithm: finding lines and separating speech mixtures
Robust clustering of data into linear subspaces is a frequently encountered problem. Here, we treat clustering of one-dimensional subspaces that cross the origin. This problem arises in blind source separation, where the subspaces correspond directly to columns of a mixing matrix. We propose the LOST algorithm, which identifies such subspaces using a procedure similar in spirit to EM.
This line finding procedure, combined with a transformation into a sparse domain and an L1-norm minimisation, constitutes a blind source separation algorithm for the separation of instantaneous mixtures with an arbitrary number of mixtures and sources. We perform an extensive investigation of the general separation performance of the LOST algorithm using randomly generated mixtures, and empirically estimate the performance of the algorithm in the presence of noise. Furthermore, we implement a simple scheme whereby the number of sources present in the mixtures can be detected automatically.
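The line-finding step described in this abstract can be sketched as follows; this is a simplified hard-assignment variant with an invented mixing matrix, using a farthest-point initialisation and a plain principal-axis refit in place of the paper's EM-style procedure:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 4000, 3

# Sparse sources: most samples have a single dominant source, so mixture
# scatter points concentrate along lines through the origin (columns of A).
s = rng.standard_normal((p, n)) * (rng.random((p, n)) < 0.15)
angles = np.array([0.2, 1.0, 2.2])                 # hypothetical directions
A = np.vstack([np.cos(angles), np.sin(angles)])    # 2 x 3 mixing matrix
x = A @ s

pts = x[:, np.abs(x).sum(axis=0) > 0]              # drop all-zero samples
u = pts / np.linalg.norm(pts, axis=0)

# Farthest-point initialisation: start from one data direction, then keep
# adding the data direction least aligned with those already chosen.
V = u[:, [0]]
for _ in range(p - 1):
    align = np.abs(V.T @ u).max(axis=0)
    V = np.hstack([V, u[:, [align.argmin()]]])

# EM-like alternation: assign each point to its nearest line, then refit
# each line as the principal axis of its assigned points.
for _ in range(30):
    assign = np.abs(V.T @ u).argmax(axis=0)
    for k in range(p):
        P = pts[:, assign == k]
        if P.shape[1] > 0:
            _, vecs = np.linalg.eigh(P @ P.T)
            V[:, k] = vecs[:, -1]                  # dominant eigenvector
```

After convergence, the columns of `V` estimate the columns of `A` up to sign and permutation; in the full algorithm these recovered directions then feed the sparse-domain L1 separation stage.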