Scheduling Dimension Reduction of LPV Models -- A Deep Neural Network Approach
In this paper, the existing Scheduling Dimension Reduction (SDR) methods for
Linear Parameter-Varying (LPV) models are reviewed and a Deep Neural Network
(DNN) approach is developed that achieves higher model accuracy under
scheduling dimension reduction. The proposed DNN method and existing SDR
methods are compared on a two-link robotic manipulator, both in terms of model
accuracy and performance of controllers synthesized with the reduced models.
The methods compared include SDR for state-space models using Principal
Component Analysis (PCA), Kernel PCA (KPCA) and Autoencoders (AE). On the
robotic manipulator example, the DNN method achieves improved representation of
the matrix variations of the original LPV model in terms of the Frobenius norm
compared to the current methods. Moreover, when the resulting model is used
for controller synthesis, improved closed-loop performance is obtained
compared to the existing methods.

Comment: Accepted to American Control Conference (ACC) 2020, Denver
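The PCA baseline compared in the paper reduces, in its simplest form, to a truncated SVD of the centered scheduling trajectory. A minimal sketch of that baseline follows; all data, dimensions, and variable names here are illustrative stand-ins, not taken from the paper:

```python
import numpy as np

# Illustrative scheduling trajectory: N samples of an n-dimensional
# scheduling variable rho that lies near an m-dimensional subspace.
rng = np.random.default_rng(0)
N, n, m = 1000, 6, 2
rho = rng.standard_normal((N, m)) @ rng.standard_normal((m, n))
rho += 0.01 * rng.standard_normal((N, n))   # small off-subspace deviation

# PCA-based SDR: center the trajectory, keep the top-m principal directions.
mu = rho.mean(axis=0)
_, _, Vt = np.linalg.svd(rho - mu, full_matrices=False)
V = Vt[:m].T                  # n x m basis of the reduced scheduling space

phi = (rho - mu) @ V          # reduced scheduling variable, N x m
rho_hat = mu + phi @ V.T      # reconstruction used to judge accuracy

# Relative Frobenius-norm error, the accuracy measure the abstract cites.
print(np.linalg.norm(rho - rho_hat) / np.linalg.norm(rho))
```

The KPCA, autoencoder, and DNN variants replace the linear basis V with a nonlinear mapping trained against a similar reconstruction criterion.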
On dimension reduction in Gaussian filters
A priori dimension reduction is a widely adopted technique for reducing the
computational complexity of stationary inverse problems. In this setting, the
solution of an inverse problem is parameterized by a low-dimensional basis that
is often obtained from the truncated Karhunen-Loeve expansion of the prior
distribution. For high-dimensional inverse problems equipped with smoothing
priors, this technique can lead to drastic reductions in parameter dimension
and significant computational savings.
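As a rough illustration of this parameterization, a truncated Karhunen-Loeve expansion of a Gaussian smoothing prior can be sketched as follows; the squared-exponential covariance, grid, and all dimensions are assumptions made for the example:

```python
import numpy as np

# Truncated Karhunen-Loeve expansion of a Gaussian smoothing prior
# (illustrative squared-exponential covariance on a 1-D grid).
rng = np.random.default_rng(0)
d, r = 500, 20                      # full and reduced parameter dimensions
s = np.linspace(0.0, 1.0, d)
C = np.exp(-0.5 * (s[:, None] - s[None, :]) ** 2 / 0.05 ** 2)

# Eigendecomposition of the prior covariance; keep the top-r modes.
evals, evecs = np.linalg.eigh(C)
evals, evecs = evals[::-1][:r], evecs[:, ::-1][:, :r]

# A prior sample is now parameterized by only r coefficients.
xi = rng.standard_normal(r)
u = evecs @ (np.sqrt(np.maximum(evals, 0.0)) * xi)
print(u.shape)   # (500,): full-dimensional draw from r KL coefficients
```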
In this paper, we extend the concept of a priori dimension reduction to
non-stationary inverse problems, in which the goal is to sequentially infer the
state of a dynamical system. Our approach proceeds in an offline-online
fashion. We first identify a low-dimensional subspace in the state space before
solving the inverse problem (the offline phase), using either the method of
"snapshots" or regularized covariance estimation. Then this subspace is used to
reduce the computational complexity of various filtering algorithms - including
the Kalman filter, extended Kalman filter, and ensemble Kalman filter - within
a novel subspace-constrained Bayesian prediction-and-update procedure (the
online phase). We demonstrate the performance of our new dimension reduction
approach on various numerical examples. In some test cases, our approach
reduces the dimensionality of the original problem by orders of magnitude and
yields up to two orders of magnitude in computational savings.
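A minimal sketch of the offline-online idea, using the plain Kalman filter in reduced coordinates; the linear dynamics, observation operator, noise levels, and snapshot data below are illustrative stand-ins, not the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(1)
d, K, r, p = 200, 50, 10, 5     # state dim, snapshots, subspace dim, obs dim

# Offline phase: reduced basis from state "snapshots" via truncated SVD.
X = rng.standard_normal((d, d)) @ rng.standard_normal((d, K))
U, _, _ = np.linalg.svd(X, full_matrices=False)
V = U[:, :r]                    # d x r subspace basis

# Online phase: Kalman recursion in the reduced coordinates z = V^T x,
# for a hypothetical linear-Gaussian model (A, H, Q, R).
A = 0.95 * np.eye(d)
H = rng.standard_normal((p, d))
Ar, Hr = V.T @ A @ V, H @ V     # project the model onto the subspace
Q, R = 0.01 * np.eye(r), 0.1 * np.eye(p)

z, P = np.zeros(r), np.eye(r)
for y in rng.standard_normal((20, p)):          # synthetic measurements
    z, P = Ar @ z, Ar @ P @ Ar.T + Q            # predict
    S = Hr @ P @ Hr.T + R                       # innovation covariance
    G = np.linalg.solve(S, Hr @ P).T            # Kalman gain P Hr^T S^-1
    z = z + G @ (y - Hr @ z)                    # update
    P = (np.eye(r) - G @ Hr) @ P
x_est = V @ z                   # lift the final estimate back to R^d
```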
Modeling Dynamic Functional Connectivity with Latent Factor Gaussian Processes
Dynamic functional connectivity, as measured by the time-varying covariance
of neurological signals, is believed to play an important role in many aspects
of cognition. While many methods have been proposed, reliably establishing the
presence and characteristics of brain connectivity is challenging due to the
high dimensionality and noisiness of neuroimaging data. We present a latent
factor Gaussian process model which addresses these challenges by learning a
parsimonious representation of connectivity dynamics. The proposed model
naturally allows for inference and visualization of time-varying connectivity.
As an illustration of the scientific utility of the model, application to a
data set of rat local field potential activity recorded during a complex
non-spatial memory task provides evidence of stimuli differentiation.
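The generative side of such a model can be sketched directly: draw the entries of a time-varying low-rank loading matrix from smooth Gaussian processes, so that the induced covariance evolves slowly in time. The kernel, dimensions, and noise level below are illustrative choices, not the paper's:

```python
import numpy as np

# Latent factor model for time-varying covariance:
# Sigma(t) = L(t) L(t)^T + sigma^2 I, with the entries of the d x k
# loading matrix L(t) drawn from smooth Gaussian processes over time.
rng = np.random.default_rng(2)
T, d, k, sigma = 100, 8, 2, 0.1
t = np.linspace(0.0, 1.0, T)

# Squared-exponential GP kernel over time (length scale is illustrative).
Kt = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / 0.1 ** 2)
Lchol = np.linalg.cholesky(Kt + 1e-8 * np.eye(T))

# Each loading entry is an independent GP sample path: shape (T, d, k).
loadings = np.einsum("ts,sdk->tdk", Lchol, rng.standard_normal((T, d, k)))

# Time-varying covariance and a synthetic signal drawn from it.
Sigma = np.einsum("tdk,tek->tde", loadings, loadings) + sigma**2 * np.eye(d)
y = np.stack([rng.multivariate_normal(np.zeros(d), S) for S in Sigma])
print(y.shape)   # (T, d): one d-dimensional observation per time point
```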
Validation of nonlinear PCA
Linear principal component analysis (PCA) can be extended to a nonlinear PCA
by using artificial neural networks. However, the benefit of curved components
requires careful control of the model complexity. Moreover, standard
techniques for model selection, including cross-validation and more generally
the use of an independent test set, fail when applied to nonlinear PCA because
of its inherent unsupervised characteristics. This paper presents a new
approach for validating the complexity of nonlinear PCA models by using the
error in missing data estimation as a criterion for model selection. It is
motivated by the idea that only the model of optimal complexity is able to
predict missing values with the highest accuracy. While standard test set
validation usually favours over-fitted nonlinear PCA models, the proposed model
validation approach correctly selects the optimal model complexity.

Comment: 12 pages, 5 figures
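The selection criterion itself is easy to sketch. The toy below uses linear PCA with iterative imputation as a stand-in for the autoencoder-based nonlinear PCA, since the criterion, error on deliberately held-out entries, is the same; the data, sizes, and function name are illustrative:

```python
import numpy as np

def missing_data_error(X, n_components, mask, n_iter=50):
    """Impute masked entries with a rank-n_components PCA model and
    return the relative error on those entries (linear PCA stands in
    for nonlinear PCA here; the selection criterion is the same)."""
    Xi = np.where(mask, X.mean(axis=0), X)      # initialize missing entries
    for _ in range(n_iter):                     # EM-style alternation
        mu = Xi.mean(axis=0)
        U, S, Vt = np.linalg.svd(Xi - mu, full_matrices=False)
        Xhat = mu + U[:, :n_components] * S[:n_components] @ Vt[:n_components]
        Xi = np.where(mask, Xhat, X)            # refill only missing entries
    return np.linalg.norm((Xhat - X)[mask]) / np.linalg.norm(X[mask])

# Select model complexity by the missing-data criterion (toy data).
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 10))
mask = rng.random(X.shape) < 0.1                # hold out ~10% of entries
errors = {r: missing_data_error(X, r, mask) for r in range(1, 6)}
best = min(errors, key=errors.get)
print(best, errors[best])   # complexity with the lowest held-out error
```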