Statistical Shape Analysis using Kernel PCA
©2006 SPIE--The International Society for Optical Engineering. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.
The electronic version of this article is the complete one and can be found online at: http://dx.doi.org/10.1117/12.641417 (DOI: 10.1117/12.641417). Presented at Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning, 16-18 January 2006, San Jose, California, USA.
Mercer kernels are used for a wide range of image and signal processing tasks such as de-noising, clustering, and discriminant analysis. These algorithms construct their solutions as expansions in a high-dimensional feature space F. However, many applications such as kernel PCA (principal component analysis) can be used more effectively if a pre-image of the projection in the feature space is available. In this paper, we propose a novel method to reconstruct a unique approximate pre-image of a feature vector and apply it to statistical shape analysis. We provide experimental results demonstrating the advantages of kernel PCA over linear PCA for shape learning, including the ability to learn and distinguish multiple shape geometries and robustness to occlusions.
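As a rough illustration of the pre-image problem discussed above (this is not the paper's novel method): scikit-learn's KernelPCA can learn an approximate inverse map via kernel ridge regression when `fit_inverse_transform=True`, mapping feature-space projections back to input space. The toy "shape" data and parameters below are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
# Toy "shapes": noisy points on a circle, treated as 2-D input vectors.
theta = rng.uniform(0, 2 * np.pi, size=(200, 1))
X = np.hstack([np.cos(theta), np.sin(theta)]) + 0.05 * rng.normal(size=(200, 2))

# Kernel PCA with an RBF kernel; alpha is the ridge penalty used to
# fit the approximate pre-image (inverse) map.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0,
                 fit_inverse_transform=True, alpha=1e-3)
Z = kpca.fit_transform(X)          # coordinates of the feature-space projection
X_pre = kpca.inverse_transform(Z)  # approximate pre-image back in input space

print(X.shape, Z.shape, X_pre.shape)
```

The pre-images act as de-noised reconstructions of the inputs, which is why a reliable pre-image map matters for shape denoising and analysis.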
Kernel methods for detecting coherent structures in dynamical data
We illustrate relationships between classical kernel-based dimensionality
reduction techniques and eigendecompositions of empirical estimates of
reproducing kernel Hilbert space (RKHS) operators associated with dynamical
systems. In particular, we show that kernel canonical correlation analysis
(CCA) can be interpreted in terms of kernel transfer operators and that it can
be obtained by optimizing the variational approach for Markov processes (VAMP)
score. As a result, we show that coherent sets of particle trajectories can be
computed by kernel CCA. We demonstrate the efficiency of this approach with
several examples, namely the well-known Bickley jet, ocean drifter data, and a
molecular dynamics problem with a time-dependent potential. Finally, we propose
a straightforward generalization of dynamic mode decomposition (DMD) called
coherent mode decomposition (CMD). Our results provide a generic machine
learning approach to the computation of coherent sets with an objective score
that can be used for cross-validation and the comparison of different methods.
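For context on the method this abstract generalizes: a minimal NumPy sketch of standard (exact) DMD on snapshot pairs is shown below; the kernel CCA and coherent mode decomposition machinery of the paper itself is not reproduced here, and the linear system used is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
# Linear dynamical system x_{t+1} = A x_t, observed for 50 steps.
A = np.array([[0.9, -0.2], [0.2, 0.9]])
X = np.empty((2, 51))
X[:, 0] = rng.normal(size=2)
for t in range(50):
    X[:, t + 1] = A @ X[:, t]

X0, X1 = X[:, :-1], X[:, 1:]                   # time-shifted snapshot pairs
U, s, Vh = np.linalg.svd(X0, full_matrices=False)
A_tilde = U.T @ X1 @ Vh.T @ np.diag(1.0 / s)   # operator projected onto POD modes
eigvals, W = np.linalg.eig(A_tilde)            # DMD eigenvalues
modes = X1 @ Vh.T @ np.diag(1.0 / s) @ W       # exact DMD modes

print(np.abs(eigvals))
```

For this noiseless linear system, the DMD eigenvalues recover the eigenvalues of A (a complex pair with modulus sqrt(det A) = sqrt(0.85)).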
Face Recognition Methodologies Using Component Analysis: The Contemporary Affirmation of The Recent Literature
This paper surveys the recent literature on face recognition systems, a review motivated by contradictory claims in that literature. It shows how the relative performance of recent methodologies such as PCA and ICA depends on the task statement. It then explores the model space claimed in the recent literature. In the process, the paper verifies the results of many face recognition models in the literature and relates them to each other and to this work.
Approximate Kernel PCA Using Random Features: Computational vs. Statistical Trade-off
Kernel methods are powerful learning methodologies that provide a simple way
to construct nonlinear algorithms from linear ones. Despite their popularity,
they suffer from poor scalability in big data scenarios. Various approximation
methods, including random feature approximation have been proposed to alleviate
the problem. However, the statistical consistency of most of these approximate
kernel methods is not well understood except for kernel ridge regression
wherein it has been shown that the random feature approximation is not only
computationally efficient but also statistically consistent with a minimax
optimal rate of convergence. In this paper, we investigate the efficacy of
random feature approximation in the context of kernel principal component
analysis (KPCA) by studying the trade-off between computational and statistical
behaviors of approximate KPCA. We show that the approximate KPCA is both
computationally and statistically efficient compared to KPCA in terms of the
error associated with reconstructing a kernel function based on its projection
onto the corresponding eigenspaces. Depending on the eigenvalue decay behavior
of the covariance operator, we characterize how many random features are needed
to match the statistical performance of KPCA, with exponential decay requiring
fewer features than polynomial decay. We also investigate their statistical
behavior in terms of the convergence of the corresponding eigenspaces, where we
show that a certain minimum number of features is required to match the
performance of KPCA; if fewer features are used, approximate KPCA has worse
statistical behavior than KPCA.
Comment: 46 pages
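A hedged sketch of the random-feature idea studied in this abstract: approximate a Gaussian kernel with random Fourier features (in the style of Rahimi and Recht) and run ordinary linear PCA on the resulting features. The dimensions and kernel bandwidth below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m = 300, 5, 100   # samples, input dimension, number of random features
X = rng.normal(size=(n, d))
gamma = 0.5             # Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)

# Random Fourier features: w ~ N(0, 2*gamma*I), b ~ Uniform(0, 2*pi), so that
# E[phi(x) . phi(y)] = exp(-gamma * ||x - y||^2).
W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, m))
b = rng.uniform(0, 2 * np.pi, size=m)
Phi = np.sqrt(2.0 / m) * np.cos(X @ W + b)     # n x m feature matrix

# Linear PCA on the centered features = approximate kernel PCA.
Phi_c = Phi - Phi.mean(axis=0)
cov = Phi_c.T @ Phi_c / n
eigvals, eigvecs = np.linalg.eigh(cov)
top = eigvecs[:, ::-1][:, :2]                  # top-2 principal directions
Z = Phi_c @ top                                # approximate KPCA scores

print(Phi.shape, Z.shape)
```

The computational saving is that PCA runs on an n x m feature matrix instead of an n x n kernel matrix; the trade-off analyzed in the paper is how large m must be before this approximation matches exact KPCA statistically.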