
    A unified framework for subspace based face recognition.

    Wang Xiaogang. Thesis (M.Phil.), Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 88-91). Abstracts in English and Chinese.

    Table of contents:
    Chapter 1: Introduction
      1.1 Face recognition
      1.2 Subspace based face recognition technique
      1.3 Unified framework for subspace based face recognition
      1.4 Discriminant analysis in dual intrapersonal subspaces
      1.5 Face sketch recognition and hallucination
      1.6 Organization of this thesis
    Chapter 2: Review of Subspace Methods
      2.1 PCA
      2.2 LDA
      2.3 Bayesian algorithm
    Chapter 3: A Unified Framework
      3.1 PCA eigenspace
      3.2 Intrapersonal and extrapersonal subspaces
      3.3 LDA subspace
      3.4 Comparison of the three subspaces
      3.5 L-ary versus binary classification
      3.6 Unified subspace analysis
      3.7 Discussion
    Chapter 4: Experiments on Unified Subspace Analysis
      4.1 Experiments on the FERET database
        4.1.1 PCA experiment
        4.1.2 Bayesian experiment
        4.1.3 Bayesian analysis in reduced PCA subspace
        4.1.4 Extracting discriminant features from the intrapersonal subspace
        4.1.5 Subspace analysis using different training sets
      4.2 Experiments on the AR face database
        4.2.1 Experiments on PCA, LDA and Bayes
        4.2.2 Evaluating the Bayesian algorithm for different transformations
    Chapter 5: Discriminant Analysis in Dual Subspaces
      5.1 Review of LDA in the null space of S_w and direct LDA
        5.1.1 LDA in the null space of S_w
        5.1.2 Direct LDA
        5.1.3 Discussion
      5.2 Discriminant analysis in dual intrapersonal subspaces
      5.3 Experiments
        5.3.1 Experiment on the FERET face database
        5.3.2 Experiment on the XM2VTS database
    Chapter 6: Eigentransformation: Subspace Transform
      6.1 Face sketch recognition
        6.1.1 Eigentransformation
        6.1.2 Sketch synthesis
        6.1.3 Face sketch recognition
        6.1.4 Experiment
      6.2 Face hallucination
        6.2.1 Multiresolution analysis
        6.2.2 Eigentransformation for hallucination
        6.2.3 Discussion
        6.2.4 Experiment
      6.3 Discussion
    Chapter 7: Conclusion
    Publication List of This Thesis
    Bibliography
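    Chapter 2 of the thesis reviews PCA as the baseline subspace method. As a rough illustration of what an eigenfaces-style PCA pipeline looks like (a minimal sketch with synthetic stand-in data; the shapes and random "faces" are illustrative assumptions, not the thesis's experimental setup):

    ```python
    # Minimal eigenfaces-style sketch of PCA subspace face recognition:
    # project face vectors onto the leading principal components and match
    # by nearest neighbour in that subspace.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k = 40, 1024, 16                 # images, pixels (32x32), subspace dim
    faces = rng.standard_normal((n, d))    # stand-in for vectorized face images

    mean = faces.mean(0)
    centered = faces - mean
    # principal components = top right singular vectors of the centered data
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = Vt[:k]                    # (k, d) basis of the PCA subspace

    gallery = centered @ eigenfaces.T      # project gallery into the subspace
    probe = (faces[7] - mean) @ eigenfaces.T
    match = np.argmin(np.linalg.norm(gallery - probe, axis=1))
    print("nearest gallery face:", match)  # recovers index 7 for a noise-free probe
    ```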

    Manifold Elastic Net: A Unified Framework for Sparse Dimension Reduction

    It is difficult to find the optimal sparse solution of a manifold learning based dimensionality reduction algorithm. Lasso or elastic net penalized manifold learning based dimensionality reduction is not directly a lasso penalized least squares problem, so least angle regression (LARS; Efron et al.), one of the most popular algorithms in sparse learning, cannot be applied. Most current approaches therefore take indirect routes or impose strict settings, which can be inconvenient in applications. In this paper, we propose the manifold elastic net (MEN). MEN combines the merits of manifold learning based and sparse learning based dimensionality reduction. Through a series of equivalent transformations, we show that MEN is equivalent to a lasso penalized least squares problem, so LARS can be adopted to obtain its optimal sparse solution. In particular, MEN has the following advantages for subsequent classification: 1) the local geometry of samples is well preserved in the low dimensional representation; 2) both margin maximization and classification error minimization are considered when computing the sparse projection; 3) the projection matrix of MEN is computationally parsimonious; 4) the elastic net penalty reduces over-fitting; and 5) the projection matrix of MEN can be interpreted psychologically and physiologically. Experiments on face recognition over several popular datasets suggest that MEN outperforms leading dimensionality reduction algorithms. Comment: 33 pages, 12 figures
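    The paper's central step is showing that MEN reduces to a lasso penalized least squares problem, at which point LARS yields the sparse solution path. A minimal sketch of that final step using scikit-learn's LassoLars; the design matrix X and response y here are random placeholders standing in for the transformed MEN problem, not the paper's actual construction:

    ```python
    # Once the objective has been rewritten as lasso penalized least squares,
    # LARS recovers a sparse coefficient vector. X and y are placeholders.
    import numpy as np
    from sklearn.linear_model import LassoLars

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 50))   # stand-in for the transformed design matrix
    y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(100)

    model = LassoLars(alpha=0.05)        # alpha controls the lasso penalty strength
    model.fit(X, y)
    print("nonzero coefficients:", np.count_nonzero(model.coef_))
    ```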

    Joint & Progressive Learning from High-Dimensional Data for Multi-Label Classification

    Although nonlinear subspace learning techniques (e.g. manifold learning) have been successfully applied to data representation, there is still room for improvement in explainability (explicit mapping), generalization (out-of-sample extension), and cost-effectiveness (linearization). To this end, we develop a novel linearized subspace learning technique in a joint and progressive way, called the joint and progressive learning strategy (J-Play), and apply it to multi-label classification. J-Play learns a high-level, semantically meaningful feature representation from high-dimensional data by 1) jointly performing multiple subspace learning and classification to find a latent subspace where samples are expected to be better classified; 2) progressively learning multi-coupled projections that linearly approach the optimal mapping bridging the original space and the most discriminative subspace; and 3) locally embedding manifold structure in each learnable latent subspace. Extensive experiments demonstrate the superiority and effectiveness of the proposed method in comparison with previous state-of-the-art methods. Comment: accepted in ECCV 2018
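    As a toy illustration of the "joint" ingredient above, one can alternate between a closed-form ridge update of a classifier and a gradient step on a linear projection. This is only a hedged sketch: the shapes, step size, and single projection are assumptions, and it omits J-Play's multi-coupled projections and manifold (locality) regularization.

    ```python
    # Toy alternating scheme: update projection P and classifier W so that
    # samples are classified in the learned subspace.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k, c = 200, 60, 10, 3           # samples, input dim, subspace dim, classes
    X = rng.standard_normal((d, n))
    labels = rng.integers(0, c, n)
    Y = np.eye(c)[labels].T               # one-hot targets, shape (c, n)

    P = rng.standard_normal((k, d)) * 0.1 # projection to the latent subspace
    lam, lr = 1e-2, 1e-3
    for _ in range(100):
        Z = P @ X                          # project samples into the subspace
        # closed-form ridge update of the classifier W given the projection
        W = Y @ Z.T @ np.linalg.inv(Z @ Z.T + lam * np.eye(k))
        # gradient step on the projection P given the classifier
        R = W @ P @ X - Y                  # residual of the joint objective
        P -= lr * (W.T @ R @ X.T + lam * P)

    print("training accuracy:", np.mean((W @ P @ X).argmax(0) == labels))
    ```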

    A Unified Framework for Compositional Fitting of Active Appearance Models

    Active Appearance Models (AAMs) are one of the most popular and well-established techniques for modeling deformable objects in computer vision. In this paper, we study the problem of fitting AAMs using Compositional Gradient Descent (CGD) algorithms. We present a unified and complete view of these algorithms and classify them with respect to three main characteristics: i) cost function; ii) type of composition; and iii) optimization method. Furthermore, we extend the previous view by: a) proposing a novel Bayesian cost function that can be interpreted as a general probabilistic formulation of the well-known project-out loss; b) introducing two new types of composition, asymmetric and bidirectional, that combine the gradients of both the image and the appearance model to derive CGD algorithms with better convergence and robustness; and c) providing new valuable insights into existing CGD algorithms by reinterpreting them as direct applications of the Schur complement and the Wiberg method. Finally, to encourage open research and facilitate future comparisons with our work, we make the implementation of the algorithms studied in this paper publicly available as part of the Menpo Project. Comment: 39 pages
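    To make the gradient-descent core of such fitting concrete, here is a minimal Lucas-Kanade-style Gauss-Newton sketch that aligns a template to an image under a pure translation warp. It is an assumption-laden toy, not the paper's CGD algorithms: real AAM fitting composes warps and folds in the appearance model (see the Menpo Project for the authors' implementation).

    ```python
    # Estimate a 2D translation aligning a template to an image by
    # Gauss-Newton on the sum-of-squared-differences cost.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    image = ndimage.gaussian_filter(rng.standard_normal((64, 64)), 3)
    true_shift = np.array([2.3, -1.7])
    template = ndimage.shift(image, -true_shift)   # template = image moved by the shift

    p = np.zeros(2)                                 # translation parameters (dy, dx)
    for _ in range(20):
        warped = ndimage.shift(image, -p)           # warp image with current params
        error = (template - warped).ravel()
        gy, gx = np.gradient(warped)                # image gradients = Jacobian columns
        J = np.stack([gy.ravel(), gx.ravel()], 1)
        dp = np.linalg.lstsq(J, error, rcond=None)[0]
        p += dp                                     # additive update (not compositional)
        if np.linalg.norm(dp) < 1e-4:
            break

    print("estimated shift:", p, "true shift:", true_shift)
    ```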