Bayesian separation of spectral sources under non-negativity and full additivity constraints
This paper addresses the problem of separating spectral sources which are
linearly mixed with unknown proportions. The main difficulty of the problem is
to ensure the full additivity (sum-to-one) of the mixing coefficients and
non-negativity of sources and mixing coefficients. A Bayesian estimation
approach based on Gamma priors was recently proposed to handle the
non-negativity constraints in a linear mixture model. However, incorporating
the full additivity constraint requires further developments. This paper
studies a new hierarchical Bayesian model appropriate to the non-negativity and
sum-to-one constraints associated to the regressors and regression coefficients
of linear mixtures. The estimation of the unknown parameters of this model is
performed using samples generated using an appropriate Gibbs sampler. The
performance of the proposed algorithm is evaluated through simulation results
conducted on synthetic mixture models. The proposed approach is also applied to
the processing of multicomponent chemical mixtures resulting from Raman
spectroscopy.
Comment: v4: minor grammatical changes; Signal Processing, 200
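The two constraints this abstract centers on, non-negativity and full additivity of the mixing coefficients, can be satisfied by construction: normalizing independent Gamma draws yields a Dirichlet-distributed vector on the simplex, which is the reparameterization idea behind extending Gamma priors to the sum-to-one case. A minimal sketch in that spirit (illustrative only; the paper's hierarchical model and Gibbs sampler are considerably more involved):

```python
import random

def sample_simplex_coefficients(k, alpha=1.0, rng=None):
    """Draw k mixing coefficients that are non-negative and sum to one.

    Normalizing independent Gamma(alpha, 1) draws yields a
    Dirichlet(alpha, ..., alpha) sample, so both constraints hold by
    construction rather than by post-hoc projection.
    """
    rng = rng or random.Random()
    g = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
    total = sum(g)
    return [x / total for x in g]

coeffs = sample_simplex_coefficients(3, rng=random.Random(0))
```

Within a Gibbs sampler, a draw like this would serve as a proposal or prior sample for the abundance vector of one mixed spectrum.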
An Extension of Generalized Linear Models to Finite Mixture Outcome Distributions
Finite mixture distributions arise in sampling a heterogeneous population.
Data drawn from such a population will exhibit extra variability relative to
any single subpopulation. Statistical models based on finite mixtures can
assist in the analysis of categorical and count outcomes when standard
generalized linear models (GLMs) cannot adequately account for variability
observed in the data. We propose an extension of GLM where the response is
assumed to follow a finite mixture distribution, while the regression of
interest is linked to the mixture's mean. This approach may be preferred over a
finite mixture of regressions when the population mean is the quantity of
interest; here, only a single regression function must be specified and
interpreted in the analysis. A technical challenge is that the mean of a finite
mixture is a composite parameter which does not appear explicitly in the
density. The proposed model is completely likelihood-based and maintains the
link to the regression through a certain random effects structure. We consider
typical GLM cases where means are either real-valued, constrained to be
positive, or constrained to be on the unit interval. The resulting model is
applied to two example datasets through a Bayesian analysis: one with
success/failure outcomes and one with count outcomes. Supporting the extra
variation is seen to improve residual plots and to appropriately widen
prediction intervals.
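The extra variability that motivates this extension is easy to exhibit for count outcomes: by the law of total variance, a finite mixture of Poisson components always has variance exceeding its mean, whereas a single Poisson GLM forces the two to be equal. A small numeric sketch (the component rates and weights here are made-up toy values, not from the paper):

```python
def poisson_mixture_moments(weights, rates):
    """Mean and variance of a finite mixture of Poisson distributions.

    Law of total variance: Var(Y) = E[Var(Y|Z)] + Var(E[Y|Z]).
    A Poisson component's variance equals its rate, so the mixture
    variance is the mixture mean plus the between-component spread.
    """
    mean = sum(w * lam for w, lam in zip(weights, rates))
    var = mean + sum(w * (lam - mean) ** 2 for w, lam in zip(weights, rates))
    return mean, var

m, v = poisson_mixture_moments([0.3, 0.7], [2.0, 10.0])
```

Here the mixture mean is 7.6 but the variance is 21.04, the overdispersion a single-Poisson GLM cannot absorb.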
Hyper-Spectral Image Analysis with Partially-Latent Regression and Spatial Markov Dependencies
Hyper-spectral data can be analyzed to recover physical properties at large
planetary scales. This involves resolving inverse problems which can be
addressed within machine learning, with the advantage that, once a relationship
between physical parameters and spectra has been established in a data-driven
fashion, the learned relationship can be used to estimate physical parameters
for new hyper-spectral observations. Within this framework, we propose a
spatially-constrained and partially-latent regression method which maps
high-dimensional inputs (hyper-spectral images) onto low-dimensional responses
(physical parameters such as the local chemical composition of the soil). The
proposed regression model comprises two key features. Firstly, it combines a
Gaussian mixture of locally-linear mappings (GLLiM) with a partially-latent
response model. While the former makes high-dimensional regression tractable,
the latter makes it possible to deal with physical parameters that cannot be observed or,
more generally, with data contaminated by experimental artifacts that cannot be
explained with noise models. Secondly, spatial constraints are introduced in
the model through a Markov random field (MRF) prior which provides a spatial
structure to the Gaussian-mixture hidden variables. Experiments conducted on a
database of remotely sensed observations of Mars collected by the Mars
Express orbiter demonstrate the effectiveness of the proposed model.
Comment: 12 pages, 4 figures, 3 table
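The core of a Gaussian mixture of locally-linear mappings is that each mixture component gates an affine map, and predictions blend these maps by posterior responsibility. A scalar toy version of that idea (my own one-dimensional simplification; GLLiM itself operates on high-dimensional spectra with a partially-latent response):

```python
import math

def locally_linear_predict(x, experts):
    """Predict y from x with a Gaussian mixture of locally-linear maps.

    Each expert is (weight, center, scale, slope, intercept): a Gaussian
    gate N(x; center, scale^2) and an affine map slope*x + intercept.
    The prediction is the responsibility-weighted blend of the affine
    maps, so the overall x -> y relationship is nonlinear even though
    each component is linear.
    """
    gates = [w * math.exp(-0.5 * ((x - c) / s) ** 2) / s
             for w, c, s, a, b in experts]
    total = sum(gates)
    return sum((g / total) * (a * x + b)
               for g, (w, c, s, a, b) in zip(gates, experts))

experts = [(0.5, -2.0, 1.0, 1.0, 0.0),   # behaves like y = x near x = -2
           (0.5, 2.0, 1.0, -1.0, 0.0)]   # behaves like y = -x near x = 2
y_left = locally_linear_predict(-2.0, experts)
y_right = locally_linear_predict(2.0, experts)
```

Near each gate's center the corresponding affine map dominates, so both test points predict close to -2 despite opposite local slopes.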
Nonlinear Models Using Dirichlet Process Mixtures
We introduce a new nonlinear model for classification, in which we model the
joint distribution of response variable, y, and covariates, x,
non-parametrically using Dirichlet process mixtures. We keep the relationship
between y and x linear within each component of the mixture. The overall
relationship becomes nonlinear if the mixture contains more than one component.
We use simulated data to compare the performance of this new approach to a
simple multinomial logit (MNL) model, an MNL model with quadratic terms, and a
decision tree model. We also evaluate our approach on a protein fold
classification problem, and find that our model provides substantial
improvement over previous methods, which were based on Neural Networks (NN) and
Support Vector Machines (SVM). Folding classes of proteins have a hierarchical
structure. We extend our method to classification problems where a class
hierarchy is available. We find that using the prior information regarding the
hierarchical structure of protein folds can result in higher predictive
accuracy.
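A Dirichlet process mixture places a prior over partitions without fixing the number of components in advance; marginalizing out the component weights gives the familiar Chinese restaurant process, in which an item joins an existing cluster in proportion to its size or opens a new one in proportion to the concentration parameter. A sketch of that prior over partitions (the paper's full model additionally fits a linear y-x relationship within each cluster):

```python
import random

def crp_partition(n, alpha, rng):
    """Assign n items to clusters via the Chinese restaurant process.

    Item i joins existing cluster k with probability counts[k]/(i+alpha)
    and starts a new cluster with probability alpha/(i+alpha).
    Cluster labels come out contiguous: 0, 1, ..., K-1.
    """
    counts, labels = [], []
    for i in range(n):
        r = rng.random() * (i + alpha)
        for k, c in enumerate(counts):
            if r < c:
                labels.append(k)
                counts[k] += 1
                break
            r -= c
        else:                      # no existing cluster chosen
            labels.append(len(counts))
            counts.append(1)
    return labels

labels = crp_partition(50, alpha=1.0, rng=random.Random(1))
```

The number of occupied clusters grows only logarithmically in n for fixed alpha, which is what lets the model adapt its effective complexity to the data.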
Robust EM algorithm for model-based curve clustering
Model-based clustering approaches concern the paradigm of exploratory data
analysis relying on the finite mixture model to automatically find a latent
structure governing observed data. They are one of the most popular and
successful approaches in cluster analysis. The mixture density estimation is
generally performed by maximizing the observed-data log-likelihood by using the
expectation-maximization (EM) algorithm. However, it is well-known that the EM
algorithm initialization is crucial. In addition, the standard EM algorithm
requires the number of clusters to be known a priori. Some solutions have been
provided in [31, 12] for model-based clustering with Gaussian mixture models
for multivariate data. In this paper we focus on model-based curve clustering
approaches, when the data are curves rather than vectorial data, based on
regression mixtures. We propose a new robust EM algorithm for clustering
curves. We extend the model-based clustering approach presented in [31] for
Gaussian mixture models, to the case of curve clustering by regression
mixtures, including polynomial regression mixtures as well as spline or
B-spline regressions mixtures. Our approach both handles the problem of
initialization and the one of choosing the optimal number of clusters as the EM
learning proceeds, rather than in a two-fold scheme. This is achieved by
optimizing a penalized log-likelihood criterion. A simulation study confirms
the potential benefit of the proposed algorithm in terms of robustness
regarding initialization and funding the actual number of clusters.Comment: In Proceedings of the 2013 International Joint Conference on Neural
Networks (IJCNN), 2013, Dallas, TX, US
Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Imaging spectrometers measure electromagnetic energy scattered in their
instantaneous field of view in hundreds or thousands of spectral channels with
higher spectral resolution than multispectral cameras. Imaging spectrometers
are therefore often referred to as hyperspectral cameras (HSCs). Higher
spectral resolution enables material identification via spectroscopic analysis,
which facilitates countless applications that require identifying materials in
scenarios unsuitable for classical spectroscopic analysis. Due to low spatial
resolution of HSCs, microscopic material mixing, and multiple scattering,
spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus,
accurate estimation requires unmixing. Pixels are assumed to be mixtures of a
few materials, called endmembers. Unmixing involves estimating all or some of:
the number of endmembers, their spectral signatures, and their abundances at
each pixel. Unmixing is a challenging, ill-posed inverse problem because of
model inaccuracies, observation noise, environmental conditions, endmember
variability, and data set size. Researchers have devised and investigated many
models searching for robust, stable, tractable, and accurate unmixing
algorithms. This paper presents an overview of unmixing methods from the time
of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models
are first discussed. Signal-subspace, geometrical, statistical, sparsity-based,
and spatial-contextual unmixing algorithms are described. Mathematical problems
and potential solutions are described. Algorithm characteristics are
illustrated experimentally.
Comment: This work has been accepted for publication in IEEE Journal of
Selected Topics in Applied Earth Observations and Remote Sensin
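The linear mixing model at the heart of this survey writes each pixel spectrum as an abundance-weighted sum of endmember spectra, and in the simplest noise-free two-endmember case the abundance has a closed-form least-squares solution along the segment joining the endmembers. A sketch under those assumptions (the three-band spectra below are hypothetical; practical unmixing handles many endmembers, noise, and often must estimate the endmembers themselves):

```python
def unmix_two_endmembers(pixel, e1, e2):
    """Recover the abundance a of endmember e1 under the linear mixing
    model pixel = a*e1 + (1 - a)*e2.

    Projects the pixel onto the segment between the two endmember
    spectra, then clips to [0, 1] so the abundances of e1 and e2
    stay non-negative and sum to one.
    """
    d = [u - v for u, v in zip(e1, e2)]
    num = sum((p - v) * di for p, v, di in zip(pixel, e2, d))
    den = sum(di * di for di in d)
    return min(1.0, max(0.0, num / den))

e1 = [0.9, 0.1, 0.2]   # hypothetical endmember spectrum, 3 bands
e2 = [0.1, 0.8, 0.3]
pixel = [0.7 * x + 0.3 * y for x, y in zip(e1, e2)]  # 70/30 mixture
a = unmix_two_endmembers(pixel, e1, e2)
```

With noise-free data the abundance is recovered exactly; the survey's geometrical, statistical, and sparse-regression families are all ways of making this inversion robust when those idealizations fail.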