9 research outputs found

    Margin Maximizing Discriminant Analysis

    Get PDF
    We propose a new feature extraction method called Margin Maximizing Discriminant Analysis (MMDA), which seeks to extract features suitable for classification tasks. MMDA is based on the principle that an ideal feature should convey the maximum information about the class labels and should depend only on the geometry of the optimal decision boundary, not on those parts of the input distribution that do not participate in shaping this boundary. Furthermore, distinct feature components should convey unrelated information about the data. Two feature extraction methods are proposed for calculating the parameters of such a projection, and they are shown to yield equivalent results. The kernel mapping idea is used to derive non-linear versions. Experiments with several real-world, publicly available data sets demonstrate that the new method yields competitive results.
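
    A minimal sketch of the idea, assuming a deflation scheme: each feature direction is taken as the normal vector of a maximum-margin linear classifier, and the data are then deflated so that successive directions are orthogonal and convey unrelated information. The solver (scikit-learn's LinearSVC) and the deflation step are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from sklearn.svm import LinearSVC

def margin_features(X, y, n_features=2, C=1.0):
    """Extract orthogonal margin-maximizing directions from (X, y); y is binary."""
    X_defl = X.copy()
    directions = []
    for _ in range(n_features):
        clf = LinearSVC(C=C).fit(X_defl, y)
        w = clf.coef_.ravel()            # normal vector of the max-margin boundary
        w /= np.linalg.norm(w)
        directions.append(w)
        # Deflate: remove the component along w so the next direction is
        # sought in the orthogonal complement.
        X_defl = X_defl - np.outer(X_defl @ w, w)
    return np.array(directions)          # shape (n_features, n_dims)

# Usage: project data onto the learned directions before classification,
# e.g. Z = X @ margin_features(X_train, y_train, n_features=2).T
```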

    Laplacian Support Vector Analysis for Subspace Discriminative Learning

    Get PDF
    In this paper we propose a novel dimensionality reduction method that is based on successive Laplacian SVM projections in orthogonal deflated subspaces. The proposed method, called Laplacian Support Vector Analysis, produces projection vectors that capture the discriminant information lying in the subspaces orthogonal to the standard Laplacian SVMs. We show that the optimal vectors in these deflated subspaces can be computed by successively training a standard SVM with specially designed deflation kernels. The resulting normal vectors contain discriminative information that can be used for feature extraction. In our analysis, we derive an explicit form for the deflation matrix of the mapped features in both the initial space and the Hilbert space by using the kernel trick, and thus we can handle linear and non-linear deflation transformations. Experimental results on several benchmark datasets illustrate the strength of the proposed algorithm.
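
    A rough sketch of the deflation step in kernel form, under assumed notation: if the previous SVM's normal vector is w = sum_i alpha_i * phi(x_i), then removing the component along w from the mapped features turns the kernel matrix K into K - (K alpha)(K alpha)^T / (alpha^T K alpha). This reconstruction follows the kernel trick mentioned above but is not the paper's exact deflation-kernel definition.

```python
import numpy as np

def deflate_kernel(K, alpha):
    """Deflate kernel matrix K along the direction w = sum_i alpha_i * phi(x_i)."""
    Ka = K @ alpha                                  # (K alpha)_i = <phi(x_i), w>
    return K - np.outer(Ka, Ka) / (alpha @ Ka)      # remove the component along w

# Successive projections: train an SVM on K, deflate K with the resulting dual
# coefficients, and repeat to obtain mutually orthogonal normal vectors.
```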

    Gait Recognition from Motion Capture Data

    Full text link
    Gait recognition from motion capture data, as a pattern classification discipline, can be improved by the use of machine learning. This paper contributes to the state of the art with a statistical approach for extracting robust gait features directly from raw data by a modification of Linear Discriminant Analysis with the Maximum Margin Criterion. Experiments on the CMU MoCap database show that the suggested method outperforms thirteen relevant methods based on geometric features, as well as a method that learns the features by a combination of Principal Component Analysis and Linear Discriminant Analysis. The methods are evaluated in terms of the distribution of biometric templates in their respective feature spaces, expressed through a number of class separability coefficients and classification metrics. Results also indicate a high portability of the learned features; that is, we can learn what aspects of walking people generally differ in and extract those as general gait features. Recognizing people without needing group-specific features is convenient, as particular people might not always provide annotated learning data. As a contribution to reproducible research, our evaluation framework and database have been made publicly available. This research makes motion capture technology directly applicable to human recognition.
    Comment: Preprint. Full paper accepted at the ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), special issue on Representation, Analysis and Recognition of 3D Humans. 18 pages. arXiv admin note: substantial text overlap with arXiv:1701.00995, arXiv:1609.04392, arXiv:1609.0693
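
    For reference, a minimal sketch of Linear Discriminant Analysis under the Maximum Margin Criterion named above: the projection maximizes tr(W^T (S_b - S_w) W), so W consists of the leading eigenvectors of S_b - S_w. This is the textbook MMC formulation; the gait-specific modification proposed in the paper is not reproduced here.

```python
import numpy as np

def mmc_projection(X, y, n_components):
    """Return a (d x n_components) projection maximizing tr(W^T (S_b - S_w) W)."""
    mean_total = X.mean(axis=0)
    d = X.shape[1]
    S_b = np.zeros((d, d))
    S_w = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        diff = (mean_c - mean_total)[:, None]
        S_b += Xc.shape[0] * (diff @ diff.T)       # between-class scatter
        S_w += (Xc - mean_c).T @ (Xc - mean_c)     # within-class scatter
    # S_b - S_w is symmetric, so eigh returns real eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(S_b - S_w)
    return eigvecs[:, ::-1][:, :n_components]      # leading eigenvectors

# Gait templates are then compared in the projected space, e.g. Z = X @ W.
```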

    Walker-Independent Features for Gait Recognition from Motion Capture Data

    Get PDF
    MoCap-based human identification, as a pattern recognition discipline, can be optimized using a machine learning approach. Yet in some applications, such as video surveillance, new identities can appear on the fly, and labeled data for all encountered people may not always be available. This work introduces the concept of learning walker-independent gait features directly from raw joint coordinates by a modification of Fisher's Linear Discriminant Analysis with the Maximum Margin Criterion. Our new approach shows not only that these features can discriminate people other than those they were learned on, but also that the number of learning identities can be much smaller than the number of walkers encountered in real operation.
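
    A small illustrative sketch of the walker-independent protocol described above: a projection is learned on one set of identities and then used to identify a disjoint set of walkers by nearest-neighbor template matching in the projected space. The split and the matching scheme are assumptions, not the paper's exact evaluation setup; mmc_projection refers to the sketch shown for the previous entry.

```python
import numpy as np

def identify(gallery_Z, gallery_ids, probe_Z):
    """Assign each probe the identity of its nearest gallery template."""
    dists = np.linalg.norm(probe_Z[:, None, :] - gallery_Z[None, :, :], axis=-1)
    return np.asarray(gallery_ids)[np.argmin(dists, axis=1)]

# W = mmc_projection(X_learning, y_learning, n_components=16)   # learned identities
# preds = identify(X_gallery @ W, y_gallery, X_probe @ W)       # unseen identities
```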

    Metric Learning as a Service with Covariance Embedding

    Full text link
    With the emergence of deep learning, metric learning has gained significant popularity in numerous machine learning tasks dealing with complex and large-scale datasets, such as information retrieval, object recognition and recommendation systems. Metric learning aims to maximize intra-class similarity and minimize inter-class similarity. However, existing models mainly rely on distance measures to obtain a separable embedding space, implicitly maximizing the intra-class similarity while neglecting the inter-class relationship. We argue that to enable metric learning as a service for high-performance deep learning applications, we should also deal wisely with inter-class relationships to obtain a more advanced and meaningful embedding space representation. In this paper, a novel metric-learning-as-a-service methodology is presented that incorporates covariance to signify the direction of the linear relationship between data points in an embedding space. Unlike conventional metric learning, our covariance-embedding-enhanced approach enables metric learning as a service to be more expressive for computing similar or dissimilar measures and can capture positive, negative, or neutral relationships. Extensive experiments conducted on various benchmark datasets, including natural, biomedical, and facial images, demonstrate that the proposed model as a service with covariance-embedding optimizations obtains higher-quality, more separable, and more expressive embedding representations than existing models.
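
    A minimal sketch of the covariance idea described above, under assumed notation: the similarity between two embeddings is the covariance of their components, whose sign distinguishes positive, negative, and near-neutral relationships, unlike a plain distance. The pairwise measure below is an illustrative reading of the abstract, not the paper's exact loss or service API.

```python
import numpy as np

def covariance_similarity(a, b):
    """Covariance between two embedding vectors; the sign encodes the relationship."""
    a_c = a - a.mean()
    b_c = b - b.mean()
    return float(a_c @ b_c) / (len(a) - 1)

# covariance_similarity(z_i, z_j) > 0 suggests the embeddings vary together
# (a positive relationship); < 0 an opposing one; near 0 a neutral relationship.
```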