Deep vs. Diverse Architectures for Classification Problems
This study compares various superlearner and deep learning architectures
(machine-learning-based and neural-network-based) for classification problems
across several simulated and industrial datasets to assess performance and
computational efficiency, as both methods have favorable theoretical convergence
properties. Superlearner formulations outperform other methods at small to
moderate sample sizes (500-2500) on nonlinear and mixed linear/nonlinear
predictor relationship datasets, while deep neural networks perform well on
linear predictor relationship datasets of all sizes. These results suggest that
superlearners converge faster than deep neural network architectures on the
messy classification problems typical of real-world data.
Superlearners also yield interpretable models, allowing users to examine
important signals in the data; in addition, they offer flexible formulation,
where users can retain good performance with low-computational-cost base
algorithms.
K-nearest-neighbor (KNN) regression also benefits from the superlearner
framework: KNN superlearners consistently outperform both deep architectures
and plain KNN regression, suggesting that superlearners are better able to
capture local and global geometric features by using a variety of algorithms to
probe the data space.
Comment: Paper done as part of R&D project at Kaplan University, submitted to
GCAI 201
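To make the superlearner idea concrete, here is a minimal sketch of a stacked ensemble for classification using scikit-learn's `StackingClassifier`. The base learners (KNN and a shallow tree), the logistic-regression meta-learner, and the synthetic dataset are illustrative assumptions, not the study's actual configuration:

```python
# Minimal superlearner (stacked ensemble) sketch; base learners, meta-learner,
# and data are illustrative choices, not the study's configuration.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Low-computational-cost base algorithms; their cross-validated predictions
# become the inputs to the meta-learner.
base = [
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
]
superlearner = StackingClassifier(
    estimators=base,
    final_estimator=LogisticRegression(),
    cv=5,
)
superlearner.fit(X_tr, y_tr)
acc = superlearner.score(X_te, y_te)
```

The meta-learner's coefficients over the base predictions give the kind of interpretability the abstract mentions: one can inspect which base algorithm carries the signal.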
Locality preserving projection on SPD matrix Lie group: algorithm and analysis
Symmetric positive definite (SPD) matrices used as feature descriptors in
image recognition are usually high dimensional. Traditional manifold learning
is only applicable for reducing the dimension of high-dimensional vector-form
data. For high-dimensional SPD matrices, directly using manifold learning
algorithms to reduce the dimension of matrix-form data is impossible. The SPD
matrix must first be transformed into a long vector, and then the dimension of
this vector must be reduced. However, this approach breaks the spatial
structure of the SPD matrix space. To overcome this limitation, we propose a
new dimension reduction algorithm on SPD matrix space to transform
high-dimensional SPD matrices into low-dimensional SPD matrices. Our work is
based on the fact that the set of all SPD matrices with the same size has a Lie
group structure, and we aim to transfer manifold learning to the SPD matrix
Lie group. We use the basic idea of the manifold learning algorithm
called locality preserving projection (LPP) to construct the corresponding
Laplacian matrix on the SPD matrix Lie group. Thus, we call our approach
Lie-LPP to emphasize its Lie group character. We present a detailed algorithm
analysis and show through experiments that Lie-LPP achieves effective results
on human action recognition and human face recognition.
Comment: 15 pages, 3 tables
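As a rough illustration of the LPP-style construction on SPD matrices, the sketch below builds a graph Laplacian L = D − W whose affinities come from the log-Euclidean distance d(A, B) = ‖log(A) − log(B)‖_F. The heat-kernel weights, the fully connected graph, and the bandwidth `sigma` are illustrative assumptions; this is not the paper's full Lie-LPP algorithm:

```python
# Sketch: LPP-style graph Laplacian over SPD matrices with log-Euclidean
# affinities. Illustrative only, not the paper's Lie-LPP algorithm.
import numpy as np

def spd_log(A):
    """Matrix logarithm of an SPD matrix via eigendecomposition (real-valued)."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T  # V @ diag(log w) @ V.T

def spd_laplacian(mats, sigma=1.0):
    """Graph Laplacian L = D - W with heat-kernel weights on log-Euclidean
    distances between SPD matrices."""
    logs = [spd_log(M) for M in mats]
    n = len(mats)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(logs[i] - logs[j])  # log-Euclidean distance
            W[i, j] = W[j, i] = np.exp(-d**2 / sigma**2)
    D = np.diag(W.sum(axis=1))
    return D - W

# Quick check on random SPD matrices (M @ M.T + 3I is SPD).
rng = np.random.default_rng(0)
mats = [
    (lambda M: M @ M.T + 3 * np.eye(3))(rng.standard_normal((3, 3)))
    for _ in range(5)
]
L = spd_laplacian(mats)
```

Working in the log domain keeps the computation on real symmetric matrices, which is what lets the construction respect the SPD structure instead of flattening each matrix into a long vector.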