Probabilistic Modeling Paradigms for Audio Source Separation
This is the author's final version of the article, first published as: E. Vincent, M. G. Jafari, S. A. Abdallah, M. D. Plumbley, M. E. Davies. Probabilistic Modeling Paradigms for Audio Source Separation. In W. Wang (Ed.), Machine Audition: Principles, Algorithms and Systems, Chapter 7, pp. 162-185. IGI Global, 2011. ISBN 978-1-61520-919-4. DOI: 10.4018/978-1-61520-919-4.ch007

Most sound scenes result from the superposition of several sources, which can be separately perceived and analyzed by human listeners. Source separation aims to provide machine listeners with similar skills by extracting the sounds of individual sources from a given scene. Existing separation systems operate either by emulating the human auditory system or by inferring the parameters of probabilistic sound models. In this chapter, the authors focus on the latter approach and provide a joint overview of established and recent models, including independent component analysis, local time-frequency models and spectral template-based models. They show that most models are instances of one of two general paradigms: linear modeling or variance modeling. They compare the merits of each paradigm, report objective performance figures, and conclude by discussing promising combinations of probabilistic priors and inference algorithms that could form the basis of future state-of-the-art systems.
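As a concrete illustration of the variance-modeling paradigm discussed in the chapter, here is a minimal sketch (in Python, with illustrative names) of single-channel Wiener filtering: each source's time-frequency coefficients are treated as zero-mean Gaussian with their own time-varying variance, and the minimum mean-square error estimate of each source is a variance-ratio gain applied to the mixture. The variances are given here rather than inferred, which is where the chapter's probabilistic models and inference algorithms would come in.

```python
import numpy as np

def wiener_separate(mixture_tf, source_variances, eps=1e-12):
    """Variance-model separation of a single-channel TF mixture.

    If each source's TF coefficient is zero-mean Gaussian with variance
    v_j[f, t], the MMSE estimate of source j is the Wiener gain
    v_j / sum_k v_k applied to the mixture coefficient.

    mixture_tf       : array (F, T), mixture TF coefficients
    source_variances : array (J, F, T), nonnegative variances
    """
    gains = source_variances / (source_variances.sum(axis=0) + eps)
    return gains * mixture_tf  # array (J, F, T) of source estimates

# Toy check: two Gaussian sources drawn from their assumed variance model.
rng = np.random.default_rng(0)
v = rng.random((2, 64, 100)) ** 2 + 1e-3     # illustrative variances
s = np.sqrt(v) * rng.normal(size=v.shape)    # sources drawn from the model
estimates = wiener_separate(s.sum(axis=0), v)
print(estimates.shape)                       # (2, 64, 100)
```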
Bayesian orthogonal component analysis for sparse representation
This paper addresses the problem of identifying a lower dimensional space
where observed data can be sparsely represented. This under-complete dictionary
learning task can be formulated as a blind separation problem of sparse sources
linearly mixed with an unknown orthogonal mixing matrix. The problem is
addressed in a Bayesian framework. First, the unknown sparse sources are
modeled as Bernoulli-Gaussian processes. To promote sparsity, a weighted
mixture of an atom at zero and a Gaussian distribution is proposed as the prior
distribution for the unobserved sources. A non-informative prior distribution
defined on an appropriate Stiefel manifold is selected for the mixing matrix.
The Bayesian inference on the unknown parameters is conducted using a Markov
chain Monte Carlo (MCMC) method. A partially collapsed Gibbs sampler is
designed to generate samples asymptotically distributed according to the joint
posterior distribution of the unknown model parameters and hyperparameters.
These samples are then used to approximate the joint maximum a posteriori
estimator of the sources and mixing matrix. Simulations conducted on synthetic
data are reported to illustrate the performance of the method for recovering
sparse representations. An application to sparse coding on an under-complete
dictionary is finally investigated.

Comment: Revised version. Accepted to IEEE Trans. Signal Processing.
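To make the spike-and-slab structure concrete, here is a minimal sketch (Python, illustrative names) of the Bernoulli-Gaussian prior described above: each source sample is exactly zero with some probability and Gaussian otherwise. The snippet only draws from the prior; the paper's partially collapsed Gibbs sampler, which targets the joint posterior, is not reproduced here.

```python
import numpy as np

def sample_bernoulli_gaussian(n, sparsity, sigma, rng):
    """Draw n samples from a Bernoulli-Gaussian prior: with probability
    `sparsity` a sample is N(0, sigma^2), otherwise it is exactly 0.
    This spike-and-slab mixture promotes sparsity in the sources."""
    support = rng.random(n) < sparsity       # Bernoulli support indicator
    values = rng.normal(0.0, sigma, size=n)  # Gaussian "slab"
    return np.where(support, values, 0.0)

rng = np.random.default_rng(1)
s = sample_bernoulli_gaussian(10_000, sparsity=0.1, sigma=1.0, rng=rng)
print(f"fraction nonzero: {np.mean(s != 0):.3f}")  # close to 0.1
```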
Blind source separation using statistical nonnegative matrix factorization
PhD Thesis. Blind Source Separation (BSS) attempts to automatically extract and track a signal of interest in real-world scenarios where other signals are present. BSS addresses the problem of recovering the original signals from an observed mixture without relying on training knowledge. This research studied three novel approaches to the BSS problem, based on extensions of the non-negative matrix factorization model and on sparsity regularization methods.
1) A framework combining pruning with Bayesian-regularized cluster nonnegative tensor factorization under the Itakura-Saito divergence, for separating sources mixed in a stereo channel format: the sparse regularization term was adaptively tuned using a hierarchical Bayesian approach to yield the desired sparse decomposition. A modified Gaussian prior was formulated to express the correlation between different basis vectors. This algorithm automatically detected the optimal number of latent components of each individual source. (A minimal sketch of the Itakura-Saito factorization that underlies all three approaches follows this abstract.)
2) A factorization for single-channel BSS which decomposes an information-bearing matrix into a product of complex-valued factor matrices representing the spectral dictionary and temporal codes: a variational Bayesian approach was developed for computing the sparsity parameters that optimize the matrix factorization. This approach combined the advantages of complex matrix factorization (CMF) and variational sparse analysis.
3) An imitated-stereo mixture model created by weighting and time-shifting the original single-channel mixture, with the source signals modelled as AR processes: the proposed mixture is analogous to a stereo signal captured by two microphones, one real and one virtual. The imitated-stereo mixture was separated using nonnegative tensor factorization, and its separability analysis was derived using Wiener masking.
All algorithms were tested with real audio signals. Separation performance was assessed by measuring the distortion between each original source and its estimate, according to the signal-to-distortion ratio (SDR). The experimental results demonstrate that the proposed uninformed audio separation algorithms outperform conventional BSS methods (IS-cNTF, SNMF and CMF), with average SDR improvements ranging from 2.6 dB to 6.4 dB per source.
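As referenced in approach 1, the following is a minimal sketch (Python, illustrative parameters) of the Itakura-Saito NMF building block that the thesis extends. It applies the standard multiplicative updates for the IS divergence to a power spectrogram V ≈ WH, without the Bayesian regularization, pruning, or tensor structure of the proposed methods.

```python
import numpy as np

def is_nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Plain NMF under the Itakura-Saito divergence via multiplicative
    updates. V is a nonnegative power spectrogram of shape (F, T).
    Returns W (F, rank), a spectral dictionary, and H (rank, T), its
    temporal activations."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH**2)) / (W.T @ (1.0 / WH))  # IS update for H
        WH = W @ H + eps
        W *= ((V / WH**2) @ H.T) / ((1.0 / WH) @ H.T)  # IS update for W
    return W, H

# Toy nonnegative "spectrogram".
V = np.random.default_rng(2).random((64, 50)) + 0.1
W, H = is_nmf(V, rank=2)
print(W.shape, H.shape)  # (64, 2) (2, 50)
```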
Wavelet Domain Image Separation
In this paper, we consider the problem of blind signal and image separation
using a sparse representation of the images in the wavelet domain. We consider
the problem in a Bayesian estimation framework using the fact that the
distribution of the wavelet coefficients of real world images can naturally be
modeled by an exponential power probability density function. The Bayesian
approach, which has been used with success in blind source separation, also
makes it possible to include any prior information we may have on the mixing
matrix elements as well as on the hyperparameters (the parameters of the prior
laws of the noise and the sources). We consider two cases: first, the case
where the wavelet coefficients are assumed to be i.i.d., and second, the case
where the correlation between the coefficients of two adjacent scales is
modeled by a first-order Markov chain. This paper reports only on the first
case; results for the second case will be reported in the near future. The
estimation computations are done via a Markov chain Monte Carlo (MCMC)
procedure. Simulations illustrate the performance of the proposed method.

Keywords: blind source separation, wavelets, Bayesian estimation, MCMC, Hastings-Metropolis algorithm.

Comment: Presented at MaxEnt2002, the 22nd International Workshop on Bayesian and Maximum Entropy Methods (Aug. 3-9, 2002, Moscow, Idaho, USA). To appear in Proceedings of the American Institute of Physics.
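For intuition, here is a minimal sketch (Python, illustrative parameters) of the two ingredients named above: the exponential power (generalized Gaussian) log-density used as a wavelet-coefficient prior, and a random-walk Metropolis-Hastings chain of the kind mentioned in the keywords, here targeting just that prior rather than the full joint posterior over sources, mixing matrix, and hyperparameters.

```python
import numpy as np

def ep_logpdf(w, alpha, beta):
    """Unnormalized log-density of the exponential power (generalized
    Gaussian) prior: log p(w) = -|w / alpha| ** beta + const."""
    return -np.abs(w / alpha) ** beta

def metropolis_ep(n_samples, alpha=1.0, beta=1.2, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings chain targeting the EP prior.
    beta < 2 gives heavier-than-Gaussian tails, matching the observed
    sparsity of wavelet coefficients of real-world images."""
    rng = np.random.default_rng(seed)
    w, out = 0.0, np.empty(n_samples)
    for i in range(n_samples):
        prop = w + step * rng.normal()  # symmetric random-walk proposal
        if np.log(rng.random()) < ep_logpdf(prop, alpha, beta) - ep_logpdf(w, alpha, beta):
            w = prop                    # accept; otherwise keep current w
        out[i] = w
    return out

samples = metropolis_ep(20_000)
# Kurtosis proxy well above 3 indicates the heavy-tailed, sparse prior.
print(f"kurtosis proxy: {np.mean(samples**4) / np.mean(samples**2)**2:.2f}")
```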
SZ and CMB reconstruction using Generalized Morphological Component Analysis
In the last decade, the study of cosmic microwave background (CMB) data has
become one of the most powerful tools to study and understand the Universe.
More precisely, measuring the CMB power spectrum leads to the estimation of
most cosmological parameters. Nevertheless, accessing such precious physical
information requires extracting several different astrophysical components from
the data. Recovering those astrophysical sources (CMB, Sunyaev-Zel'dovich
clusters, galactic dust) thus amounts to a component separation problem which
has already led to an intense activity in the field of CMB studies. In this
paper, we introduce a new sparsity-based component separation method coined
Generalized Morphological Component Analysis (GMCA). The GMCA approach is
formulated in a Bayesian maximum a posteriori (MAP) framework. Numerical
results show that this new source recovery technique performs well compared to
state-of-the-art component separation methods already applied to CMB data.

Comment: 11 pages - Statistical Methodology - Special Issue on Astrostatistics - in press.
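GMCA itself is formulated in a MAP framework over multichannel wavelet coefficients; the toy sketch below (Python, with an illustrative threshold schedule and names) only caricatures its core alternating scheme: estimate sparse sources by thresholding, re-estimate the mixing matrix by least squares, and decrease the threshold across iterations.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 sparsity penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def gmca_like(X, n_sources, n_iter=100, seed=0):
    """Alternate sparse source estimation and least-squares mixing-matrix
    updates with a decreasing threshold, in the spirit of GMCA. X is
    (channels, samples), assumed already expressed in a sparsifying domain."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(X.shape[0], n_sources))
    A /= np.linalg.norm(A, axis=0)
    for it in range(n_iter):
        thr = 1.0 * (1 - it / n_iter) + 1e-3      # decreasing threshold
        S = soft(np.linalg.pinv(A) @ X, thr)      # sparse source estimate
        A = X @ np.linalg.pinv(S)                 # least-squares mixing update
        A /= np.linalg.norm(A, axis=0) + 1e-12    # renormalize columns
    return A, S

X = np.random.default_rng(3).normal(size=(6, 512))
A, S = gmca_like(X, n_sources=3)
print(A.shape, S.shape)  # (6, 3) (3, 512)
```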
Multi-modal dictionary learning for image separation with application in art investigation
In support of art investigation, we propose a new source separation method
that unmixes a single X-ray scan acquired from double-sided paintings. In this
problem, the X-ray signals to be separated have similar morphological
characteristics, which brings previous source separation methods to their
limits. Our solution is to use photographs taken from the front and back sides
of the panel to drive the separation process. The crux of our approach relies
on the coupling of the two imaging modalities (photographs and X-rays) using a
novel coupled dictionary learning framework able to capture both common and
disparate features across the modalities using parsimonious representations;
the common component models features shared by the multi-modal images, whereas
the innovation component captures modality-specific information. As such, our
model enables the formulation of appropriately regularized convex optimization
procedures that lead to the accurate separation of the X-rays. Our dictionary
learning framework can be tailored both to a single- and a multi-scale
framework, with the latter leading to a significant performance improvement.
Moreover, to improve further on the visual quality of the separated images, we
propose to train coupled dictionaries that ignore certain parts of the painting
corresponding to craquelure. Experimentation on synthetic and real data - taken
from digital acquisition of the Ghent Altarpiece (1432) - confirms the
superiority of our method against the state-of-the-art morphological component
analysis technique that uses either fixed or trained dictionaries to perform
image separation.

Comment: Submitted to IEEE Transactions on Image Processing.
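The following toy sketch (Python, with illustrative sizes and random dictionaries, not the learned ones) illustrates the generative picture behind the coupled model: each side's photograph is explained by a sparse common code, the same side's X-ray adds a sparse innovation component, and the observed scan is the sum of the two sides. The actual method learns the coupled dictionaries and recovers the codes via regularized convex optimization, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
dim, n_atoms, k = 64, 128, 5          # patch size, dictionary size, sparsity

def sparse_code(n_atoms, k, rng):
    """A k-sparse code vector with Gaussian nonzero entries."""
    z = np.zeros(n_atoms)
    z[rng.choice(n_atoms, k, replace=False)] = rng.normal(size=k)
    return z

D_common = rng.normal(size=(dim, n_atoms))  # structure shared by photo and X-ray
D_innov = rng.normal(size=(dim, n_atoms))   # X-ray-specific structure

photos, xray_sides = [], []
for _side in range(2):                      # front and back of the panel
    z_c, z_i = sparse_code(n_atoms, k, rng), sparse_code(n_atoms, k, rng)
    photos.append(D_common @ z_c)           # photograph: common part only
    xray_sides.append(D_common @ z_c + D_innov @ z_i)  # common + innovation

observed_xray = xray_sides[0] + xray_sides[1]  # the single mixed X-ray scan
print(len(photos), observed_xray.shape)        # 2 (64,)
```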
Bayesian separation of spectral sources under non-negativity and full additivity constraints
This paper addresses the problem of separating spectral sources which are
linearly mixed with unknown proportions. The main difficulty of the problem is
to ensure the full additivity (sum-to-one) of the mixing coefficients and
non-negativity of sources and mixing coefficients. A Bayesian estimation
approach based on Gamma priors was recently proposed to handle the
non-negativity constraints in a linear mixture model. However, incorporating
the full additivity constraint requires further developments. This paper
studies a new hierarchical Bayesian model appropriate to the non-negativity and
sum-to-one constraints associated with the regressors and regression coefficients
of linear mixtures. The estimation of the unknown parameters of this model is
performed using samples generated by an appropriate Gibbs sampler. The
performance of the proposed algorithm is evaluated through simulation results
conducted on synthetic mixture models. The proposed approach is also applied to
the processing of multicomponent chemical mixtures resulting from Raman
spectroscopy.

Comment: v4: minor grammatical changes; Signal Processing, 2009.
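The constraint set at the heart of this model is the probability simplex: mixing coefficients that are non-negative and sum to one. The paper enforces these constraints inside a hierarchical Bayesian model sampled with a Gibbs sampler; purely as an illustration of the constraint set itself (not of the paper's sampler), the sketch below (Python) computes the Euclidean projection of an arbitrary vector onto the simplex.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {c : c >= 0, sum(c) = 1}, i.e. the non-negativity and full-additivity
    constraints placed on the mixing coefficients."""
    u = np.sort(v)[::-1]                  # sort descending
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * j > css - 1)[0][-1]       # last index kept positive
    theta = (css[rho] - 1.0) / (rho + 1)           # uniform shift
    return np.maximum(v - theta, 0.0)

c = project_simplex(np.array([0.8, 0.6, -0.2]))
print(c, c.sum())  # [0.6 0.4 0. ] 1.0 -- non-negative and sums to one
```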
Adaptive Langevin Sampler for Separation of t-Distribution Modelled Astrophysical Maps
We propose to model the image differentials of astrophysical source maps by
a Student's t-distribution and to use them as priors in the Bayesian source
separation method. We introduce an efficient Markov chain Monte Carlo (MCMC)
sampling scheme to unmix the astrophysical sources and describe the derivation
details. In this scheme, we use the Langevin stochastic equation for
transitions, which enables parallel drawing of random samples from the
posterior, and reduces the computation time significantly (by two orders of
magnitude). In addition, Student's t-distribution parameters are updated
throughout the iterations. The results on astrophysical source separation are
assessed with two performance criteria defined in the pixel and the frequency
domains.

Comment: 12 pages, 6 figures.
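For intuition, here is a minimal sketch (Python, with an illustrative step size and degrees of freedom) of the two ingredients named above: the gradient of a Student's t log-density and an unadjusted Langevin iteration driven by it. Updating many chains in one vectorized step mirrors the parallel sampling the authors exploit; their scheme additionally updates the t-distribution parameters during the iterations, which is omitted here.

```python
import numpy as np

def grad_log_t(x, nu):
    """Gradient of the log-density of a standard Student's t distribution
    with nu degrees of freedom: d/dx log p(x) = -(nu + 1) x / (nu + x^2)."""
    return -(nu + 1.0) * x / (nu + x**2)

def langevin_sample(n_steps, n_chains, nu=4.0, eps=0.05, seed=0):
    """Unadjusted Langevin iterations targeting a Student's t density:
    x <- x + (eps / 2) * grad log p(x) + sqrt(eps) * noise.
    All chains advance in one vectorized step, which is what makes
    Langevin-based schemes easy to parallelize across pixels."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_chains)
    for _ in range(n_steps):
        x = x + 0.5 * eps * grad_log_t(x, nu) + np.sqrt(eps) * rng.normal(size=n_chains)
    return x

samples = langevin_sample(5_000, n_chains=10_000)
# The t distribution with nu = 4 has variance nu / (nu - 2) = 2.
print(f"empirical variance: {samples.var():.2f}")
```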