7 research outputs found

    Discriminant feature extraction: exploiting structures within each sample and across samples.

    Zhang, Wei. Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 95-109). Abstract also in Chinese.
    Contents:
        Abstract
        Acknowledgement
        Chapter 1 --- Introduction
            Chapter 1.1 --- Area of Machine Learning
                Chapter 1.1.1 --- Types of Algorithms
                Chapter 1.1.2 --- Modeling Assumptions
            Chapter 1.2 --- Dimensionality Reduction
            Chapter 1.3 --- Structure of the Thesis
        Chapter 2 --- Dimensionality Reduction
            Chapter 2.1 --- Feature Extraction
                Chapter 2.1.1 --- Linear Feature Extraction
                Chapter 2.1.2 --- Nonlinear Feature Extraction
                Chapter 2.1.3 --- Sparse Feature Extraction
                Chapter 2.1.4 --- Nonnegative Feature Extraction
                Chapter 2.1.5 --- Incremental Feature Extraction
            Chapter 2.2 --- Feature Selection
                Chapter 2.2.1 --- Viewpoint of Feature Extraction
                Chapter 2.2.2 --- Feature-Level Score
                Chapter 2.2.3 --- Subset-Level Score
        Chapter 3 --- Various Views of Feature Extraction
            Chapter 3.1 --- Probabilistic Models
            Chapter 3.2 --- Matrix Factorization
            Chapter 3.3 --- Graph Embedding
            Chapter 3.4 --- Manifold Learning
            Chapter 3.5 --- Distance Metric Learning
        Chapter 4 --- Tensor Linear Laplacian Discrimination
            Chapter 4.1 --- Motivation
            Chapter 4.2 --- Tensor Linear Laplacian Discrimination
                Chapter 4.2.1 --- Preliminaries of Tensor Operations
                Chapter 4.2.2 --- Discriminant Scatters
                Chapter 4.2.3 --- Solving for Projection Matrices
            Chapter 4.3 --- Definition of Weights
                Chapter 4.3.1 --- Contextual Distance
                Chapter 4.3.2 --- Tensor Coding Length
            Chapter 4.4 --- Experimental Results
                Chapter 4.4.1 --- Face Recognition
                Chapter 4.4.2 --- Texture Classification
                Chapter 4.4.3 --- Handwritten Digit Recognition
            Chapter 4.5 --- Conclusions
        Chapter 5 --- Semi-Supervised Semi-Riemannian Metric Map
            Chapter 5.1 --- Introduction
            Chapter 5.2 --- Semi-Riemannian Spaces
            Chapter 5.3 --- Semi-Supervised Semi-Riemannian Metric Map
                Chapter 5.3.1 --- The Discrepancy Criterion
                Chapter 5.3.2 --- Semi-Riemannian Geometry Based Feature Extraction Framework
                Chapter 5.3.3 --- Semi-Supervised Learning of Semi-Riemannian Metrics
            Chapter 5.4 --- Discussion
                Chapter 5.4.1 --- A General Framework for Semi-Supervised Dimensionality Reduction
                Chapter 5.4.2 --- Comparison to SRDA
                Chapter 5.4.3 --- Advantages over Semi-supervised Discriminant Analysis
            Chapter 5.5 --- Experiments
                Chapter 5.5.1 --- Experimental Setup
                Chapter 5.5.2 --- Face Recognition
                Chapter 5.5.3 --- Handwritten Digit Classification
            Chapter 5.6 --- Conclusion
        Chapter 6 --- Summary
        Chapter A --- The Relationship between LDA and LLD
        Chapter B --- Coding Length
        Chapter C --- Connection between SRDA and ANMM
        Chapter D --- From S3RMM to Graph-Based Approaches
        Bibliography

    Automated Extraction of Biomarkers for Alzheimer's Disease from Brain Magnetic Resonance Images

    In this work, different techniques for the automated extraction of biomarkers for Alzheimer's disease (AD) from brain magnetic resonance imaging (MRI) are proposed. The described work forms part of PredictAD (www.predictad.eu), a joint European research project aiming at the identification of a unified biomarker for AD that combines different clinical and imaging measurements. Two different approaches towards the extraction of MRI-based biomarkers are followed in this thesis: (I) the extraction of traditional morphological biomarkers based on neuroanatomical structures and (II) the extraction of data-driven biomarkers using machine-learning techniques. A novel method for a unified and automated estimation of structural volumes and volume changes is proposed. Furthermore, a new technique that allows the low-dimensional representation of a high-dimensional image population for data analysis and visualization is described. All presented methods are evaluated on images from the Alzheimer's Disease Neuroimaging Initiative (ADNI), which provides a large and diverse clinical database. A rigorous evaluation of the power of all identified biomarkers to discriminate between clinical subject groups is presented. In addition, the agreement of automatically derived volumes with reference labels, as well as the power of the proposed method to measure changes in a subject's atrophy rate, is assessed. The proposed methods compare favorably to state-of-the-art techniques in neuroimaging in terms of accuracy, robustness and run-time.
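    To make the data-driven direction more concrete, here is a minimal sketch of a low-dimensional embedding of an image population via a Laplacian-eigenmap-style spectral embedding. The function name, the Gaussian affinity, and the bandwidth heuristic are illustrative assumptions rather than the pipeline actually proposed in the thesis; the input is assumed to be flattened, co-registered scans.

        import numpy as np

        def embed_image_population(images, n_dims=2, sigma=None):
            # `images`: (n_subjects, n_voxels) array of flattened, co-registered scans.
            # Illustrative sketch only; names and parameter choices are assumptions.
            X = np.asarray(images, dtype=float)
            # Pairwise squared Euclidean distances between subjects.
            sq = np.sum(X ** 2, axis=1)
            d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
            if sigma is None:
                sigma = np.sqrt(np.median(d2[d2 > 0]))  # simple bandwidth heuristic
            # Gaussian affinities and symmetric normalized graph Laplacian.
            W = np.exp(-d2 / (2.0 * sigma ** 2))
            np.fill_diagonal(W, 0.0)
            d = W.sum(axis=1)
            d_inv_sqrt = 1.0 / np.sqrt(d)
            L_sym = np.eye(len(d)) - (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
            # Smallest non-trivial eigenvectors give the embedding coordinates.
            evals, evecs = np.linalg.eigh(L_sym)
            return (evecs * d_inv_sqrt[:, None])[:, 1:n_dims + 1]

    Each row of the returned array could then be plotted, for example colored by diagnostic group, to inspect how subjects spread along the main modes of anatomical variation.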

    Contributions to High-Dimensional Pattern Recognition

    This thesis gathers some contributions to statistical pattern recognition, particularly targeted at problems in which the feature vectors are high-dimensional. Three pattern recognition scenarios are addressed, namely pattern classification, regression analysis and score fusion. For each of these, an algorithm for learning a statistical model is presented. To address the difficulty encountered when the feature vectors are high-dimensional, adequate models and objective functions are defined. The strategy of simultaneously learning a dimensionality reduction function and the pattern recognition model parameters is shown to be quite effective, making it possible to learn the model without discarding any discriminative information. Another topic addressed in the thesis is the use of tangent vectors as a way to take better advantage of the available training data; using this idea, two popular discriminative dimensionality reduction techniques are shown to be effectively improved. For each of the algorithms proposed throughout the thesis, several data sets are used to illustrate the properties and performance of the approaches. The empirical results show that the proposed techniques perform considerably well and that the learned models tend to be very computationally efficient.
    Villegas Santamaría, M. (2011). Contributions to High-Dimensional Pattern Recognition [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/10939
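    As a rough illustration of the joint-learning strategy (not the specific algorithms developed in the thesis), the sketch below fits a linear projection B and a softmax classifier W at the same time by gradient descent on a cross-entropy objective; the names, the softmax model and plain gradient descent are all assumptions made for the example.

        import numpy as np

        def joint_projection_and_classifier(X, y, target_dim, n_classes,
                                            lr=0.1, n_iters=500, seed=0):
            # X: (n_samples, n_features), y: integer labels in [0, n_classes).
            # Toy sketch of learning a projection and a classifier jointly.
            rng = np.random.default_rng(seed)
            n, d = X.shape
            B = rng.normal(scale=0.01, size=(d, target_dim))          # projection
            W = rng.normal(scale=0.01, size=(target_dim, n_classes))  # classifier
            Y = np.eye(n_classes)[y]                                  # one-hot targets
            for _ in range(n_iters):
                Z = X @ B                          # low-dimensional representation
                logits = Z @ W
                logits -= logits.max(axis=1, keepdims=True)
                P = np.exp(logits)
                P /= P.sum(axis=1, keepdims=True)
                G = (P - Y) / n                    # gradient of mean cross-entropy w.r.t. logits
                grad_W = Z.T @ G
                grad_B = X.T @ (G @ W.T)           # chain rule through Z = X B
                W -= lr * grad_W
                B -= lr * grad_B
            return B, W

    Because the gradient flows through Z = X B, the projection is shaped by the classification objective rather than fixed beforehand, which is the point the abstract makes about not discarding discriminative information.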

    Linear Laplacian Discrimination for Feature Extraction

    Discriminant feature extraction plays a fundamental role in pattern recognition. In this paper, we propose the Linear Laplacian Discrimination (LLD) algorithm for discriminant feature extraction. LLD is an extension of Linear Discriminant Analysis (LDA). Our motivation is to address the issue that LDA cannot work well in cases where sample spaces are non-Euclidean. Specifically, we define the within-class scatter and the between-class scatter using similarities which are based on pairwise distances in sample spaces. Thus the structural information of classes is contained in the within-class and the between-class Laplacian matrices, which are free from metrics of sample spaces. The optimal discriminant subspace can be derived by controlling the structural evolution of Laplacian matrices. Experiments are performed on the facial database for FRGC version 2. Experimental results show that LLD is effective in extracting discriminant features.
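    As a minimal sketch of the general idea (similarity-weighted within- and between-class scatters followed by a generalized eigenproblem), the code below uses Gaussian weights computed from distances to class and global centers; this particular weighting and the small regularization term are assumptions for illustration, not the exact definitions given in the paper.

        import numpy as np
        from scipy.linalg import eigh

        def lld_like_projection(X, y, n_components, t=1.0):
            # X: (n_samples, n_features), y: class labels.
            # Sketch of a Laplacian-weighted discriminant projection.
            classes = np.unique(y)
            d = X.shape[1]
            Sw = np.zeros((d, d))
            Sb = np.zeros((d, d))
            global_mean = X.mean(axis=0)
            for c in classes:
                Xc = X[y == c]
                mu_c = Xc.mean(axis=0)
                # Within-class scatter: samples weighted by similarity to their class center.
                dists = np.sum((Xc - mu_c) ** 2, axis=1)
                w = np.exp(-dists / t)
                diff = Xc - mu_c
                Sw += (diff * w[:, None]).T @ diff
                # Between-class scatter: class centers weighted by similarity to the global center.
                wb = np.exp(-np.sum((mu_c - global_mean) ** 2) / t)
                m = (mu_c - global_mean)[:, None]
                Sb += wb * len(Xc) * (m @ m.T)
            # Generalized eigenproblem: maximize between- vs. within-class scatter.
            evals, evecs = eigh(Sb, Sw + 1e-6 * np.eye(d))
            order = np.argsort(evals)[::-1]
            return evecs[:, order[:n_components]]

    The columns of the returned matrix span the discriminant subspace; new samples are projected onto it by right-multiplying with this matrix.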