Comparing sets of data sets on the Grassmann and flag manifolds with applications to data analysis in high and low dimensions
Dissertation, Summer 2020. This dissertation develops numerical algorithms for comparing sets of data sets using the shape and orientation of data clouds. The two key components of "comparing" are a distance measure between data sets and the corresponding geodesic path between them. Both components play a core role connecting the two parts of this dissertation: data analysis on the Grassmann manifold and on the flag manifold. In the first part, we build on the well-known geometric framework for analyzing and optimizing over data on the Grassmann manifold. Specifically, we extend classical self-organizing mappings to the Grassmann manifold to visualize sets of high-dimensional data sets in 2D space, and we propose an optimization problem on the Grassmannian to recover missing data. In the second part, we extend the geometric framework to the flag manifold to encode the variability of nested subspaces. There we propose a numerical algorithm for computing a geodesic path and distance between nested subspaces, and we prove theorems showing how to reduce the dimension of the algorithm for practical computations. The approach is shown to have advantages for analyzing data when the number of data points is larger than the number of features.
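The standard Grassmannian distance between two subspaces is built from principal angles, which can be computed from the singular values of the product of orthonormal bases. As a minimal sketch (this is the textbook geodesic distance, not necessarily the exact variant used in the dissertation):

```python
import numpy as np

def grassmann_distance(X, Y):
    """Geodesic distance between the column spaces of X and Y on the
    Grassmann manifold, via principal angles."""
    # Orthonormal bases for the two subspaces
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # Singular values of Qx^T Qy are the cosines of the principal angles
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))
    # Geodesic distance is the 2-norm of the principal-angle vector
    return np.linalg.norm(theta)

# Identical subspaces are at distance ~0; orthogonal lines at pi/2
A = np.array([[1.0], [0.0], [0.0]])
B = np.array([[0.0], [1.0], [0.0]])
print(grassmann_distance(A, A))
print(grassmann_distance(A, B))
```

This distance is what a Grassmannian self-organizing map would minimize when matching data-cloud subspaces to prototype subspaces.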
Bayesian Nonparametric Adaptive Control using Gaussian Processes
This technical report is a preprint of an article submitted to a journal. Most current Model Reference Adaptive Control (MRAC) methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element is fixed a priori, often through expert judgment. An example of such an adaptive element is the Radial Basis Function Network (RBFN), with RBF centers pre-allocated based on the expected operating domain. If the system operates outside of the expected operating domain, this adaptive element can become ineffective in capturing and canceling the uncertainty, rendering the adaptive controller only semi-global in nature. This paper investigates a Gaussian Process (GP) based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty. GP-MRAC does not require the centers to be pre-allocated, can inherently handle measurement noise, and enables MRAC to handle a broader set of uncertainties, including those defined as distributions over functions. We use stochastic stability arguments to show that GP-MRAC guarantees good closed-loop performance with no prior domain knowledge of the uncertainty. Online-implementable GP inference methods are compared in numerical simulations against RBFN-MRAC with pre-allocated centers and are shown to provide better tracking and improved long-term learning.
This research was supported in part by ONR MURI Grant N000141110688 and NSF grant ECS #0846750.
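The nonparametric uncertainty model at the heart of GP-MRAC is standard GP regression: the posterior mean serves as the adaptive element, and the posterior variance quantifies model confidence. A minimal sketch with an RBF kernel (the kernel and hyperparameters here are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def gp_posterior(X_train, y_train, X_test, length=1.0, sigma_n=0.1):
    """Posterior mean and variance of a zero-mean GP with an RBF kernel,
    the kind of nonparametric uncertainty model GP-MRAC builds on."""
    def k(A, B):
        # Squared-exponential (RBF) kernel on 1-D inputs
        d = A[:, None] - B[None, :]
        return np.exp(-0.5 * (d / length) ** 2)

    # Training covariance with observation-noise variance on the diagonal
    K = k(X_train, X_train) + sigma_n**2 * np.eye(len(X_train))
    Ks = k(X_test, X_train)
    # Posterior mean: Ks K^{-1} y
    mean = Ks @ np.linalg.solve(K, y_train)
    # Posterior variance: k(x*,x*) - Ks K^{-1} Ks^T (diagonal only)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var
```

In an online control setting, exact inference like this becomes too expensive as data accumulates, which is why the paper compares online-implementable (budgeted/sparse) GP inference methods.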
Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels, with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models searching for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally.
Comment: This work has been accepted for publication in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
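Under the linear mixing model, each pixel spectrum is a combination of endmember signatures weighted by their abundances. A minimal sketch of abundance estimation by least squares (the signatures here are synthetic; real unmixing additionally enforces nonnegativity and sum-to-one constraints on the abundances):

```python
import numpy as np

# Linear mixing model: pixel = E @ a, where the columns of E are
# endmember spectral signatures and a holds the abundances.
rng = np.random.default_rng(0)
n_bands, n_end = 50, 3
E = rng.random((n_bands, n_end))        # synthetic endmember signatures
a_true = np.array([0.6, 0.3, 0.1])      # abundances (sum to one)
pixel = E @ a_true                      # noiseless mixed pixel

# Unconstrained least-squares abundance estimate; with noiseless data
# and full-column-rank E this recovers the true abundances exactly.
a_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(np.round(a_hat, 3))
```

With observation noise, endmember variability, or unknown endmembers, this inversion becomes the ill-posed problem the survey addresses, and the geometrical, statistical, and sparse-regression families differ in how they regularize it.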
Robustness in Dimensionality Reduction
Dimensionality reduction is widely used in many statistical applications, such as image analysis, microarray analysis, or text mining. This thesis focuses on three problems that relate to the robustness in dimension reduction.
The first topic is performance analysis in dimension reduction, that is, quantitatively assessing the performance of an algorithm on a given dataset. A criterion for success is established from the geometric point of view to address this issue. A family of goodness measures, called "local rank correlation," is developed to assess the performance of dimensionality reduction methods. The potential application of the local rank correlation in selecting tuning parameters of dimension reduction algorithms is also explored.
The second topic is sensitivity analysis in dimension reduction. Two types of influence functions are developed as measures of robustness, based on which we develop graphical display strategies for visualizing the robustness of a dimension reduction method and flagging potential outliers.
In the third part of the thesis, a novel robust PCA framework, called "Performance-Weighted Bagging PCA," is proposed from the perspective of model averaging. It obtains a robust linear subspace by a weighted average of a collection of subspaces produced by subsamples. Robustness against outliers is achieved by a proper weighting scheme, and possible choices of weighting scheme are investigated.
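The bagging-and-averaging idea in the third part can be sketched as follows: fit PCA on subsamples, score each subsample subspace on the full data, and average the weighted projection matrices. The inverse-reconstruction-error weighting below is an assumed illustrative choice; the thesis investigates the actual weighting schemes.

```python
import numpy as np

def bagging_pca(X, k=1, n_bags=20, seed=0):
    """Illustrative performance-weighted bagging PCA: average subsample
    projection matrices, weighted by fit quality on the full data, then
    take the top-k eigenvectors of the averaged projection."""
    rng = np.random.default_rng(seed)
    n = len(X)
    Xc = X - X.mean(axis=0)
    P_avg = np.zeros((X.shape[1], X.shape[1]))
    total_w = 0.0
    for _ in range(n_bags):
        idx = rng.choice(n, size=n // 2, replace=False)
        _, _, Vt = np.linalg.svd(Xc[idx], full_matrices=False)
        V = Vt[:k].T                       # subsample principal subspace
        P = V @ V.T                        # projection onto that subspace
        err = np.linalg.norm(Xc - Xc @ P)  # performance on the full data
        w = 1.0 / (err + 1e-12)            # downweight poorly fitting bags
        P_avg += w * P
        total_w += w
    P_avg /= total_w
    # Top-k eigenvectors of the averaged projection give the robust subspace
    vals, vecs = np.linalg.eigh(P_avg)
    return vecs[:, -k:]
```

On clean data this recovers the ordinary leading principal directions; its value appears when outlier-contaminated subsamples receive small weights.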