Post-Nonlinear Mixtures and Beyond
Although sources in general nonlinear mixtures are not separable using only statistical
independence, a special and realistic case of nonlinear mixtures, the post-nonlinear
(PNL) mixture, is separable by choosing a suitable separating system. A natural approach is
then based on the estimation of the separating system parameters by minimizing an independence
criterion, like estimated mutual information. This class of methods requires higher
(than 2) order statistics, and cannot separate Gaussian sources. However, use of (weak) priors,
like source temporal correlation or nonstationarity, leads to other source separation algorithms,
which are able to separate Gaussian sources, and a few of them can even work
with second-order statistics. Recently, modeling time-correlated sources by Markov models,
we proposed very efficient algorithms based on minimization of the conditional mutual information.
Currently, using the prior of temporally correlated sources, we investigate the feasibility
of inverting PNL mixtures with non-bijective nonlinearities, like quadratic functions. In this
paper, we review the main ICA and BSS results for nonlinear mixtures, present PNL models
and algorithms, and finish with advanced results using temporally correlated sources.
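The PNL structure described above can be sketched in a few lines: the observation is a component-wise invertible nonlinearity applied after a linear mixing, x = f(As), and the separating system mirrors it with component-wise functions followed by a demixing matrix. This is a minimal illustration with hypothetical choices (a cube nonlinearity and its exact inverse), not the estimation procedure of the paper, which would fit both stages by minimizing a mutual-information criterion.

```python
import numpy as np

# Minimal sketch of a post-nonlinear (PNL) mixture with two
# independent non-Gaussian sources. The nonlinearity f(u) = u**3
# and its inverse are illustrative assumptions, not the paper's.
rng = np.random.default_rng(0)
s = rng.laplace(size=(2, 1000))         # independent sources
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])              # linear mixing stage
x = (A @ s) ** 3                        # observation: x = f(A s), component-wise f

# The PNL separating system mirrors the model: component-wise
# inverse functions g, then a separating matrix B. Here we use
# the exact inverses; in practice both are estimated by
# minimizing an independence criterion.
g_x = np.cbrt(x)                        # g = f^{-1}, applied component-wise
y = np.linalg.inv(A) @ g_x              # recovered sources

print(np.allclose(y, s, atol=1e-8))
```

With the exact inverses the sources are recovered perfectly; an estimated separating system would recover them only up to the usual scale and permutation indeterminacies of ICA.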
Robust Linear Spectral Unmixing using Anomaly Detection
This paper presents a Bayesian algorithm for linear spectral unmixing of
hyperspectral images that accounts for anomalies present in the data. The model
proposed assumes that the pixel reflectances are linear mixtures of unknown
endmembers, corrupted by an additional nonlinear term modelling anomalies and
additive Gaussian noise. A Markov random field is used for anomaly detection
based on the spatial and spectral structures of the anomalies. This allows
outliers to be identified in particular regions and wavelengths of the data
cube. A Bayesian algorithm is proposed to estimate the parameters involved in
the model yielding a joint linear unmixing and anomaly detection algorithm.
Simulations conducted with synthetic and real hyperspectral images demonstrate
the accuracy of the proposed unmixing and outlier detection strategy for the
analysis of hyperspectral images.
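The observation model described above can be written per pixel as y = Ma + r + n: a linear mixture of endmember spectra M with abundances a, plus a sparse anomaly term r and Gaussian noise n. The following is an illustrative sketch of this model on synthetic data (not the authors' Bayesian algorithm); the residual check at the end only shows how anomalies concentrate in particular wavelengths once the linear part is known, whereas the paper estimates all quantities jointly.

```python
import numpy as np

# Synthetic pixel following the robust linear mixing model
#   y = M a + r + n
# with an anomaly r confined to a few wavelengths.
rng = np.random.default_rng(1)
L, R = 50, 3                        # number of bands, number of endmembers
M = rng.uniform(0, 1, (L, R))       # endmember spectra (one per column)
a = rng.dirichlet(np.ones(R))       # abundances: non-negative, sum to one
r = np.zeros(L)
r[10:15] = 0.5                      # nonlinear/anomaly term in bands 10-14
n = 0.01 * rng.standard_normal(L)   # additive Gaussian noise
y = M @ a + r + n                   # observed pixel reflectance

# With the linear part known, the residual exposes the anomalous
# bands; the threshold 0.1 is an illustrative choice.
residual = y - M @ a
outlier_bands = np.where(np.abs(residual) > 0.1)[0]
print(outlier_bands)
```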
Hybrid spectral unmixing: using artificial neural networks for linear/nonlinear switching
Spectral unmixing is a key process in identifying the spectral signatures of materials and quantifying their spatial distribution over an image. The linear model is expected to provide acceptable results when two assumptions are satisfied: (1) the mixing process occurs at a macroscopic level, and (2) photons interact with a single material before reaching the sensor. However, these assumptions do not always hold, and more complex nonlinear models are required. This study proposes a new hybrid method for switching between linear and nonlinear spectral unmixing of hyperspectral data based on artificial neural networks. The neural network was trained with parameters computed within a window around the pixel under consideration. These parameters represent the diversity of the neighboring pixels and are based on the Spectral Angular Distance, covariance, and a nonlinearity parameter. The endmembers were extracted using Vertex Component Analysis, while the abundances were estimated using the method identified by the neural network (Vertex Component Analysis, Fully Constrained Least Squares, the Polynomial Post-Nonlinear Mixing Model, or the Generalized Bilinear Model). Results show that the hybrid method performs better than each of the individual techniques, with high overall accuracy and an abundance estimation error significantly lower than that obtained with the individual methods. Experiments on both synthetic datasets and real hyperspectral images demonstrate that the proposed hybrid switch method is efficient for spectral unmixing of hyperspectral images compared to the individual algorithms.
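The window-based features driving the linear/nonlinear switch can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the function names, window size, and the choice to summarize the neighborhood by its mean pairwise spectral angle and mean band covariance are assumptions; the paper additionally uses a nonlinearity parameter.

```python
import numpy as np

def spectral_angle(u, v):
    """Spectral Angular Distance between two spectra, in radians."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def window_features(window):
    """Summarize a (n_pixels, n_bands) neighborhood by two features:
    mean pairwise spectral angle and mean absolute band covariance.
    These feed the classifier that picks linear vs nonlinear unmixing."""
    n = window.shape[0]
    angles = [spectral_angle(window[i], window[j])
              for i in range(n) for j in range(i + 1, n)]
    cov = np.cov(window.T)              # band-by-band covariance
    return np.array([np.mean(angles), np.mean(np.abs(cov))])

rng = np.random.default_rng(2)
window = rng.uniform(0, 1, (9, 20))     # e.g. a 3x3 window with 20 bands
feats = window_features(window)
print(feats.shape)
```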
The Minimal Modal Interpretation of Quantum Theory
We introduce a realist, unextravagant interpretation of quantum theory that
builds on the existing physical structure of the theory and allows experiments
to have definite outcomes, but leaves the theory's basic dynamical content
essentially intact. Much as classical systems have specific states that evolve
along definite trajectories through configuration spaces, the traditional
formulation of quantum theory asserts that closed quantum systems have specific
states that evolve unitarily along definite trajectories through Hilbert
spaces, and our interpretation extends this intuitive picture of states and
Hilbert-space trajectories to the case of open quantum systems as well. We
provide independent justification for the partial-trace operation for density
matrices, reformulate wave-function collapse in terms of an underlying
interpolating dynamics, derive the Born rule from deeper principles, resolve
several open questions regarding ontological stability and dynamics, address a
number of familiar no-go theorems, and argue that our interpretation is
ultimately compatible with Lorentz invariance. Along the way, we also
investigate a number of unexplored features of quantum theory, including an
interesting geometrical structure---which we call subsystem space---that we
believe merits further study. We include an appendix that briefly reviews the
traditional Copenhagen interpretation and the measurement problem of quantum
theory, as well as the instrumentalist approach and a collection of
foundational theorems not otherwise discussed in the main text.
Comment: 73 pages + references, 9 figures; cosmetic changes, added figure, updated references, generalized conditional probabilities with attendant changes to the sections on the EPR-Bohm thought experiment and Lorentz invariance; for a concise summary, see the companion letter at arXiv:1405.675
Dimension reduction for systems with slow relaxation
We develop reduced, stochastic models for high dimensional, dissipative
dynamical systems that relax very slowly to equilibrium and can encode long
term memory. We present a variety of empirical and first principles approaches
for model reduction, and build a mathematical framework for analyzing the
reduced models. We introduce the notions of universal and asymptotic filters to
characterize `optimal' model reductions for sloppy linear models. We illustrate
our methods by applying them to the practically important problem of modeling
evaporation in oil spills.
Comment: 48 pages, 13 figures. Paper dedicated to the memory of Leo Kadanoff
The Incomplete Rosetta Stone Problem: Identifiability Results for Multi-View Nonlinear ICA
We consider the problem of recovering a common latent source with independent
components from multiple views. This applies to settings in which a variable is
measured with multiple experimental modalities, and where the goal is to
synthesize the disparate measurements into a single unified representation. We
consider the case that the observed views are a nonlinear mixing of
component-wise corruptions of the sources. When the views are considered
separately, this reduces to nonlinear Independent Component Analysis (ICA) for
which it is provably impossible to undo the mixing. We present novel
identifiability proofs that this is possible when the multiple views are
considered jointly, showing that the mixing can theoretically be undone using
function approximators such as deep neural networks. In contrast to known
identifiability results for nonlinear ICA, we prove that independent latent
sources with arbitrary mixing can be recovered as long as multiple,
sufficiently different noisy views are available.
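The multi-view generative setting above can be sketched as follows: each view is a nonlinear mixing of a component-wise corruption of the same latent sources. All concrete choices here (tanh mixing, additive Gaussian corruption, two views) are illustrative assumptions; the paper's identifiability results state that the shared sources, unrecoverable from either view alone, become recoverable when sufficiently different views are used jointly.

```python
import numpy as np

# Shared independent latent sources observed through two views.
rng = np.random.default_rng(3)
s = rng.uniform(-1, 1, (2, 500))

def view(s, A, noise_scale, seed):
    """One view: component-wise corruption of s, then a
    view-specific nonlinear mixing f_v (here tanh of a linear map)."""
    rng_v = np.random.default_rng(seed)
    corrupted = s + noise_scale * rng_v.standard_normal(s.shape)
    return np.tanh(A @ corrupted)

A1 = np.array([[1.0, 0.5], [0.3, 1.0]])
A2 = np.array([[1.0, -0.4], [0.7, 1.0]])
x1 = view(s, A1, 0.1, 10)   # view 1
x2 = view(s, A2, 0.1, 11)   # view 2: same sources, different mixing and noise

# Each view alone is an unidentifiable nonlinear ICA problem;
# identifiability is claimed only for the joint use of such views.
print(x1.shape, x2.shape)
```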