
    Nonlinear Dimension Reduction for Micro-array Data (Small n and Large p)


    Manifold Elastic Net: A Unified Framework for Sparse Dimension Reduction

    It is difficult to find the optimal sparse solution of a manifold learning based dimensionality reduction algorithm. Lasso or elastic net penalized manifold learning based dimensionality reduction is not directly a lasso penalized least squares problem, so least angle regression (LARS) (Efron et al.), one of the most popular algorithms in sparse learning, cannot be applied. Most existing approaches therefore take indirect routes or impose restrictive settings, which can be inconvenient in applications. In this paper, we propose the manifold elastic net (MEN). MEN combines the merits of manifold learning based and sparse learning based dimensionality reduction. Through a series of equivalent transformations, we show that MEN is equivalent to a lasso penalized least squares problem, so LARS can be adopted to obtain the optimal sparse solution of MEN. In particular, MEN has the following advantages for subsequent classification: 1) the local geometry of samples is well preserved in the low dimensional data representation, 2) both margin maximization and classification error minimization are considered when computing the sparse projection, 3) the projection matrix of MEN makes subsequent computation more parsimonious, 4) the elastic net penalty reduces over-fitting, and 5) the projection matrix of MEN admits psychological and physiological interpretations. Experimental evidence on face recognition over several popular datasets suggests that MEN is superior to leading dimensionality reduction algorithms.
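    The pivotal step above, rewriting an elastic net penalty as a plain lasso problem so that LARS applies, follows the well-known augmented-data reduction of Zou and Hastie (2005). Below is a minimal sketch of that reduction only; it is not the authors' MEN implementation, which additionally absorbs the manifold regularizer through further equivalent transformations, and the function name elastic_net_via_lars is hypothetical.

```python
# Sketch: solve an elastic net problem with LARS by reducing it to a
# lasso problem on augmented data (Zou & Hastie, 2005). Illustrative
# only; NOT the paper's MEN code.
import numpy as np
from sklearn.linear_model import lars_path

def elastic_net_via_lars(X, y, lam2, alpha_min=0.0):
    """min ||y - X b||^2 + lam1*||b||_1 + lam2*||b||_2^2, solved as a
    lasso problem: stacking sqrt(lam2)*I under X turns the ridge term
    into extra least-squares rows."""
    n, p = X.shape
    c = 1.0 / np.sqrt(1.0 + lam2)
    X_aug = c * np.vstack([X, np.sqrt(lam2) * np.eye(p)])
    y_aug = np.concatenate([y, np.zeros(p)])
    alphas, _, coefs = lars_path(X_aug, y_aug, method="lasso",
                                 alpha_min=alpha_min)
    return alphas, c * coefs  # rescale back to the original problem

# Toy usage in the "small n, large p" regime the listing mentions.
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 100))
beta = np.zeros(100)
beta[:5] = 2.0
y = X @ beta + 0.1 * rng.standard_normal(30)
alphas, coefs = elastic_net_via_lars(X, y, lam2=0.5)
print("nonzeros at end of path:", np.flatnonzero(coefs[:, -1]).size)
```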

    Geometric Structures on Spaces of Weighted Submanifolds

    In this paper we use a diffeo-geometric framework based on manifolds that are locally modeled on "convenient" vector spaces to study the geometry of some infinite dimensional spaces. Given a finite dimensional symplectic manifold $(M,\omega)$, we construct a weak symplectic structure on each leaf $\mathbf{I}_w$ of a foliation of the space of compact oriented isotropic submanifolds in $M$ equipped with top degree forms of total measure 1. These forms are called weightings and such manifolds are said to be weighted. We show that this symplectic structure on the particular leaves consisting of weighted Lagrangian submanifolds is equivalent to a heuristic weak symplectic structure of Weinstein [Adv. Math. 82 (1990), 133-159]. When the weightings are positive, these symplectic spaces are symplectomorphic to reductions of a weak symplectic structure of Donaldson [Asian J. Math. 3 (1999), 1-15] on the space of embeddings of a fixed compact oriented manifold into $M$. When $M$ is compact, by generalizing a moment map of Weinstein we construct a symplectomorphism of each leaf $\mathbf{I}_w$ consisting of positive weighted isotropic submanifolds onto a coadjoint orbit of the group of Hamiltonian symplectomorphisms of $M$ equipped with the Kirillov-Kostant-Souriau symplectic structure. After defining notions of Poisson algebras and Poisson manifolds, we prove that each space $\mathbf{I}_w$ can also be identified with a symplectic leaf of a Poisson structure. Finally, we discuss a kinematic description of spaces of weighted submanifolds.
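    For orientation, the Kirillov-Kostant-Souriau structure invoked above has the standard form on a coadjoint orbit; the formula below is the familiar finite-dimensional model, given only for intuition, since the paper itself works with the infinite-dimensional group of Hamiltonian symplectomorphisms in the convenient setting.

```latex
% Standard Kirillov-Kostant-Souriau (KKS) form on a coadjoint orbit
% \mathcal{O}_\mu \subset \mathfrak{g}^*; here \mathfrak{g} stands in
% for the Lie algebra of Hamiltonian symplectomorphisms of M.
\[
  \omega_{\mathrm{KKS}}\big|_{\nu}\!\left(\operatorname{ad}^{*}_{X}\nu,\;
  \operatorname{ad}^{*}_{Y}\nu\right)
  = \langle \nu, [X,Y] \rangle,
  \qquad \nu \in \mathcal{O}_{\mu},\ X,Y \in \mathfrak{g}.
\]
```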

    Dimensionality Reduction Mappings

    A wealth of powerful dimensionality reduction methods has been established which can be used for data visualization and preprocessing. These are accompanied by formal evaluation schemes, which allow a quantitative evaluation along general principles and which even give rise to further visualization schemes based on these objectives. Most methods, however, map only a finite set of points given a priori, requiring additional steps for out-of-sample extension. We propose a general view of dimensionality reduction based on the concept of cost functions and, from this general principle, extend dimensionality reduction to explicit mappings of the data manifold. This offers simple out-of-sample extensions. Further, it opens the way towards a theory of data visualization that takes the perspective of generalization ability to new data points. We demonstrate the approach with a simple global linear mapping as well as prototype-based local linear mappings.
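    As a toy illustration of the simplest case mentioned, an explicit global linear mapping, the sketch below fits a linear map to a precomputed embedding by least squares and then applies it to unseen points with no extra optimization. The function name and the PCA stand-in embedding are assumptions for illustration, not the authors' cost-function framework.

```python
# Sketch: an explicit dimensionality reduction mapping x -> x @ W,
# fit to a precomputed embedding Y, giving a trivial out-of-sample
# extension. Illustrative only; the paper also covers local linear maps.
import numpy as np

def fit_linear_dr_map(X, Y):
    """Least-squares W minimizing ||X W - Y||_F^2 (X: n x d, Y: n x m)."""
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W  # d x m projection matrix

rng = np.random.default_rng(0)
X_train = rng.standard_normal((200, 10))
# Stand-in embedding: top-2 PCA coordinates of the training data.
X_c = X_train - X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_c, full_matrices=False)
Y_train = X_c @ Vt[:2].T

W = fit_linear_dr_map(X_c, Y_train)
X_new = rng.standard_normal((5, 10)) - X_train.mean(axis=0)
print(X_new @ W)  # unseen points mapped directly by the explicit mapping
```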