Manifold Elastic Net: A Unified Framework for Sparse Dimension Reduction
It is difficult to find the optimal sparse solution of a manifold-learning-based
dimensionality reduction algorithm. The lasso- or elastic-net-penalized
formulation of such an algorithm is not directly a lasso-penalized least squares
problem, so least angle regression (LARS) (Efron et al.), one of the most
popular algorithms in sparse learning, cannot be applied. Most current
approaches therefore take indirect routes or impose strict settings, which can
be inconvenient in applications. In this paper, we propose the manifold elastic
net (MEN). MEN
incorporates the merits of both the manifold learning based dimensionality
reduction and the sparse learning based dimensionality reduction. By using a
series of equivalent transformations, we show that MEN is equivalent to a
lasso-penalized least squares problem, and thus LARS can be adopted to obtain
the optimal sparse solution of MEN. In particular, MEN has the following advantages for
subsequent classification: 1) the local geometry of samples is well preserved
for low dimensional data representation, 2) both the margin maximization and
the classification error minimization are considered for sparse projection
calculation, 3) the sparse projection matrix of MEN makes subsequent
computation more parsimonious, 4) the elastic net penalty reduces over-fitting, and
5) the projection matrix of MEN can be interpreted psychologically and
physiologically. Experimental evidence on face recognition over several popular
datasets suggests that MEN outperforms leading dimensionality reduction
algorithms.
Comment: 33 pages, 12 figures
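The abstract's key step — rewriting an elastic-net problem as a lasso problem so that LARS applies — can be illustrated with the standard data-augmentation trick for ordinary elastic-net regression (this is a generic sketch of that trick, not the MEN transformation itself; the data and variable names are illustrative):

```python
import numpy as np
from sklearn.linear_model import lars_path

# Toy data: a sparse linear model (illustrative, not the MEN objective).
rng = np.random.default_rng(0)
n, d = 50, 10
X = rng.standard_normal((n, d))
beta_true = np.zeros(d)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Augmentation trick: the elastic-net problem
#   min ||y - X b||^2 + lam2 ||b||^2 + lam1 ||b||_1
# equals a lasso problem on augmented data, so a LARS-type solver applies.
lam2 = 0.5
X_aug = np.vstack([X, np.sqrt(lam2) * np.eye(d)])
y_aug = np.concatenate([y, np.zeros(d)])

# Full lasso regularization path on the augmented problem via LARS.
alphas, active, coefs = lars_path(X_aug, y_aug, method="lasso")

# The end of the path (lam1 -> 0) recovers the ridge solution, which here
# is close to beta_true because the l2 penalty is small relative to X'X.
```

The point of such a reduction is that the entire solution path comes at roughly the cost of a single least squares fit, which is what makes LARS attractive for sparse projection learning.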
A locally adaptive normal distribution
The multivariate normal density is a monotonic function of the distance to
the mean, and its ellipsoidal shape is due to the underlying Euclidean metric.
We suggest replacing this metric with a locally adaptive, smoothly changing
(Riemannian) metric that favors regions of high local density. The resulting
locally adaptive normal distribution (LAND) is a generalization of the normal
distribution to the "manifold" setting, where data are assumed to lie near a
potentially low-dimensional manifold embedded in the ambient Euclidean space.
The LAND is
parametric, depending only on a mean and a covariance, and is the maximum
entropy distribution under the given metric. The underlying metric is, however,
non-parametric. We develop a maximum likelihood algorithm to infer the
distribution parameters that relies on a combination of gradient descent and
Monte Carlo integration. We further extend the LAND to mixture models, and
provide the corresponding EM algorithm. We demonstrate the ability of the LAND
to fit non-trivial probability distributions over both synthetic data and EEG
measurements of human sleep.
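The two computational ingredients the abstract names — gradient-based maximum likelihood for the mean and covariance, and Monte Carlo integration for a normalization constant that has no closed form under a learned metric — can be sketched in a much simpler setting. The following is a 1-D Euclidean toy, not the Riemannian LAND algorithm; all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.5, size=200)

# Gradient ascent on the average log-likelihood of a 1-D normal,
# parameterized by (mu, log_sigma) so sigma stays positive.
mu, log_sigma = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    sigma = np.exp(log_sigma)
    z = (data - mu) / sigma
    mu += lr * np.mean(z) / sigma            # d/d mu of mean log-likelihood
    log_sigma += lr * (np.mean(z**2) - 1.0)  # d/d log_sigma

sigma = np.exp(log_sigma)

# Monte Carlo estimate of the normalization constant
#   Z = integral exp(-(x - mu)^2 / (2 sigma^2)) dx
# via uniform sampling over an interval covering the mass. Under a
# data-dependent metric this integral is intractable, which is why LAND
# resorts to Monte Carlo; here it should match sqrt(2 pi) * sigma.
lo, hi = mu - 8 * sigma, mu + 8 * sigma
u = rng.uniform(lo, hi, size=100_000)
Z_mc = (hi - lo) * np.mean(np.exp(-((u - mu) ** 2) / (2 * sigma**2)))
```

In the actual LAND, the analogous integral runs over geodesic distances under the learned Riemannian metric, so the Monte Carlo step is essential rather than merely illustrative.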
Discriminant feature extraction: exploiting structures within each sample and across samples.
Zhang, Wei. Thesis (M.Phil.), Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 95-109). Abstract also in Chinese.
Contents:
Abstract
Acknowledgement
Chapter 1  Introduction
  1.1  Area of Machine Learning
    1.1.1  Types of Algorithms
    1.1.2  Modeling Assumptions
  1.2  Dimensionality Reduction
  1.3  Structure of the Thesis
Chapter 2  Dimensionality Reduction
  2.1  Feature Extraction
    2.1.1  Linear Feature Extraction
    2.1.2  Nonlinear Feature Extraction
    2.1.3  Sparse Feature Extraction
    2.1.4  Nonnegative Feature Extraction
    2.1.5  Incremental Feature Extraction
  2.2  Feature Selection
    2.2.1  Viewpoint of Feature Extraction
    2.2.2  Feature-Level Score
    2.2.3  Subset-Level Score
Chapter 3  Various Views of Feature Extraction
  3.1  Probabilistic Models
  3.2  Matrix Factorization
  3.3  Graph Embedding
  3.4  Manifold Learning
  3.5  Distance Metric Learning
Chapter 4  Tensor Linear Laplacian Discrimination
  4.1  Motivation
  4.2  Tensor Linear Laplacian Discrimination
    4.2.1  Preliminaries of Tensor Operations
    4.2.2  Discriminant Scatters
    4.2.3  Solving for Projection Matrices
  4.3  Definition of Weights
    4.3.1  Contextual Distance
    4.3.2  Tensor Coding Length
  4.4  Experimental Results
    4.4.1  Face Recognition
    4.4.2  Texture Classification
    4.4.3  Handwritten Digit Recognition
  4.5  Conclusions
Chapter 5  Semi-Supervised Semi-Riemannian Metric Map
  5.1  Introduction
  5.2  Semi-Riemannian Spaces
  5.3  Semi-Supervised Semi-Riemannian Metric Map
    5.3.1  The Discrepancy Criterion
    5.3.2  Semi-Riemannian Geometry Based Feature Extraction Framework
    5.3.3  Semi-Supervised Learning of Semi-Riemannian Metrics
  5.4  Discussion
    5.4.1  A General Framework for Semi-Supervised Dimensionality Reduction
    5.4.2  Comparison to SRDA
    5.4.3  Advantages over Semi-supervised Discriminant Analysis
  5.5  Experiments
    5.5.1  Experimental Setup
    5.5.2  Face Recognition
    5.5.3  Handwritten Digit Classification
  5.6  Conclusion
Chapter 6  Summary
Appendix A  The Relationship between LDA and LLD
Appendix B  Coding Length
Appendix C  Connection between SRDA and ANMM
Appendix D  From S3RMM to Graph-Based Approaches
Bibliography
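The thesis's own methods (tensor linear Laplacian discrimination, S3RMM) are not reproduced here, but the "linear feature extraction" family it surveys — with LDA as the canonical member, revisited in the appendix on LDA and LLD — can be illustrated in a few lines on handwritten digits, one of the benchmark tasks listed above (a minimal scikit-learn sketch, not the thesis's algorithms):

```python
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# 8x8 handwritten digits: 1797 samples, 64 pixel features, 10 classes.
X, y = load_digits(return_X_y=True)

# LDA seeks projections that separate class means relative to the
# within-class scatter; at most (n_classes - 1) = 9 components exist.
lda = LinearDiscriminantAnalysis(n_components=2)
Z = lda.fit_transform(X, y)  # supervised projection from 64-D to 2-D
```

Methods such as TLLD generalize exactly this recipe: the same scatter-ratio objective, but with tensor-valued samples and data-dependent weights in place of plain class means.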