CS Decomposition Based Bayesian Subspace Estimation
In numerous applications, it is required to estimate the principal subspace of the data, possibly from a very limited number of samples. Additionally, it often occurs that some rough knowledge about this subspace is available and could be used to improve estimation accuracy. This is the problem we address herein and, in order to solve it, a Bayesian approach is proposed. The main idea consists of using the CS decomposition of the semi-orthogonal matrix whose columns span the subspace of interest. This parametrization is intuitively appealing and allows for noninformative prior distributions of the matrices involved in the CS decomposition and very mild assumptions about the angles between the actual subspace and the prior subspace. The posterior distributions are derived and a Gibbs sampling scheme is presented to obtain the minimum mean-square distance estimator of the subspace of interest. Numerical simulations and an application to real hyperspectral data assess the validity and performance of the estimator.
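The paper's Gibbs sampler is not reproduced here, but the quantities it works with can be illustrated in a few lines: a semi-orthogonal basis estimated from a handful of samples, and the principal angles between that estimate and a reference subspace (exactly the angles that the CS decomposition parametrizes). A minimal NumPy/SciPy sketch, with all dimensions chosen arbitrarily:

    import numpy as np
    from scipy.linalg import subspace_angles

    rng = np.random.default_rng(0)
    n, p, N = 20, 3, 10                      # ambient dim, subspace dim, samples

    # Ground-truth subspace basis: a semi-orthogonal n x p matrix
    U_true, _ = np.linalg.qr(rng.standard_normal((n, p)))

    # A few noisy samples lying near that subspace
    X = U_true @ rng.standard_normal((p, N)) + 0.1 * rng.standard_normal((n, N))

    # Classical estimate: the p dominant left singular vectors of the data
    U_hat = np.linalg.svd(X, full_matrices=False)[0][:, :p]

    # Principal angles between estimated and true subspaces; in the Bayesian
    # approach, prior knowledge enters through mild assumptions on such angles
    print("principal angles (rad):", subspace_angles(U_hat, U_true))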
Bayesian Nonparametric Unmixing of Hyperspectral Images
Hyperspectral imaging is an important tool in remote sensing, allowing for
accurate analysis of vast areas. Due to a low spatial resolution, a pixel of a
hyperspectral image rarely represents a single material, but rather a mixture
of different spectra. Hyperspectral unmixing (HSU) aims at estimating the
pure spectra present in the scene of interest, referred to as endmembers, and
their fractions in each
pixel, referred to as abundances. Today, many HSU algorithms have been
proposed, based either on a geometrical or a statistical model. While most
methods assume that the number of endmembers present in the scene is known,
little work has addressed estimating this number from the observed data.
In this work, we propose a Bayesian nonparametric framework that jointly
estimates the number of endmembers, the endmembers themselves, and their
abundances, by making use of the Indian Buffet Process as a prior for the
endmembers. Simulation results and experiments on real data demonstrate the
effectiveness of the proposed algorithm, yielding results comparable with
state-of-the-art methods while being able to reliably infer the number of
endmembers. In scenarios with strong noise, where other algorithms provide only
poor results, the proposed approach tends to overestimate the number of
endmembers slightly. The additional endmembers, however, often simply represent
noisy replicas of present endmembers and could easily be merged in a
post-processing step.
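For readers new to unmixing, the linear mixing model that HSU algorithms invert can be sketched in a few lines of NumPy. The band count, endmember count, and noise level below are arbitrary illustrative choices; the paper's Indian Buffet Process prior and its inference scheme are not reproduced:

    import numpy as np

    rng = np.random.default_rng(1)
    bands, n_end, n_pix = 100, 4, 500        # spectral bands, endmembers, pixels

    E = rng.uniform(0.0, 1.0, (bands, n_end))    # endmember spectra (columns)
    A = rng.dirichlet(np.ones(n_end), n_pix).T   # abundances: nonnegative, sum to 1
    noise = 0.01 * rng.standard_normal((bands, n_pix))

    # Linear mixing model: each pixel is a convex combination of endmembers
    Y = E @ A + noise

    # Unmixing is the inverse problem: recover E and A (and, in this paper,
    # even the number of columns of E) from the observed Y alone
    print(Y.shape, A.sum(axis=0)[:3])            # columns of A sum to ~1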
The Ensemble Kalman Filter: A Signal Processing Perspective
The ensemble Kalman filter (EnKF) is a Monte Carlo-based implementation of
the Kalman filter (KF) for extremely high-dimensional, possibly nonlinear and
non-Gaussian state estimation problems. Its ability to handle state dimensions
on the order of millions has made the EnKF a popular algorithm in different
geoscientific disciplines. Despite a similarly vital need for scalable
algorithms in signal processing, e.g., to make sense of the ever-increasing
amount of sensor data, the EnKF is hardly discussed in our field.
This self-contained review paper is aimed at signal processing researchers
and provides all the knowledge to get started with the EnKF. The algorithm is
derived in a KF framework, without the often encountered geoscientific
terminology. Algorithmic challenges and required extensions of the EnKF are
provided, as well as relations to sigma-point KF and particle filters. The
relevant EnKF literature is summarized in an extensive survey and unique
simulation examples, including popular benchmark problems, complement the
theory with practical insights. The signal processing perspective highlights
new directions of research and facilitates the exchange of potentially
beneficial ideas, both for the EnKF and high-dimensional nonlinear and
non-Gaussian filtering in general.
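To make the algorithm concrete, here is a minimal NumPy sketch of one stochastic (perturbed-observation) EnKF analysis step with a linear observation model. The dimensions, ensemble size, and observation operator are illustrative assumptions, not taken from the paper:

    import numpy as np

    rng = np.random.default_rng(2)
    n, m, N = 40, 10, 30                     # state dim, obs dim, ensemble size

    H = np.zeros((m, n))                     # linear observation operator:
    H[np.arange(m), np.arange(m) * (n // m)] = 1.0   # observe every 4th state
    R = 0.5 * np.eye(m)                      # observation noise covariance

    X = rng.standard_normal((n, N))          # forecast ensemble, one member per column
    y = rng.standard_normal(m)               # incoming observation

    # Ensemble anomalies; the full n x n state covariance is never formed,
    # which is what lets the EnKF scale to very high state dimensions
    A = X - X.mean(axis=1, keepdims=True)
    HA = H @ A
    P_yy = (HA @ HA.T) / (N - 1) + R         # innovation covariance (m x m)
    P_xy = (A @ HA.T) / (N - 1)              # state-observation cross covariance

    # Kalman gain and update of each member with perturbed observations
    K = np.linalg.solve(P_yy, P_xy.T).T
    Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(m), R, N).T
    X_analysis = X + K @ (Y_pert - H @ X)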
Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis.
The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
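The two-stage structure described above (random sampling to find a subspace capturing the action of the matrix, then deterministic factorization of the reduced matrix) can be sketched in a few lines of NumPy. The oversampling parameter p below is an illustrative choice, not a prescription from the paper:

    import numpy as np

    def randomized_svd(A, k, p=10, seed=3):
        """Rank-k SVD approximation via randomized range finding."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        # Stage A: sample the range of A with a Gaussian test matrix and
        # orthonormalize, giving a basis Q that captures most of A's action
        Omega = rng.standard_normal((n, k + p))
        Q, _ = np.linalg.qr(A @ Omega)                # (m, k + p)
        # Stage B: compress A to the subspace and factor the small matrix
        B = Q.T @ A                                   # (k + p, n)
        U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ U_small)[:, :k], s[:k], Vt[:k]

    # Usage: recover the dominant singular triplets of a nearly rank-1 matrix
    rng = np.random.default_rng(4)
    A = np.outer(rng.standard_normal(200), rng.standard_normal(100))
    A += 0.01 * rng.standard_normal((200, 100))
    U, s, Vt = randomized_svd(A, k=5)
    print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # small residual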