
    Discriminant Projection Representation-based Classification for Vision Recognition

    Representation-based classification methods such as sparse representation-based classification (SRC) and linear regression classification (LRC) have attracted a lot of attention. In order to obtain a better representation, a novel method called projection representation-based classification (PRC) is proposed for image recognition in this paper. PRC is based on a new mathematical model, which states that the 'ideal projection' of a sample point x onto the hyperspace H can be obtained by iteratively computing the projection of x onto a line of the hyperspace H with a proper strategy. Therefore, PRC is able to iteratively approximate the 'ideal representation' of each subject for classification. Moreover, discriminant PRC (DPRC) is further proposed, which obtains discriminant information by maximizing the ratio of the between-class reconstruction error over the within-class reconstruction error. Experimental results on five typical databases show that the proposed PRC and DPRC are effective and outperform other state-of-the-art methods on several vision recognition tasks. Comment: Accepted by the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18).
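
    A minimal sketch of the line-by-line projection idea described in the abstract: coordinate-descent-style projections of the residual onto one column of a class dictionary at a time converge to the orthogonal projection onto that class's subspace. The function names, the sweep schedule, and the use of reconstruction error for the final decision are assumptions for illustration, not the authors' exact strategy.

```python
import numpy as np

def iterative_projection(A, x, n_iter=50):
    """Approximate the projection of x onto span(A) by repeatedly projecting
    the current residual onto one column (line) of A at a time."""
    A = np.asarray(A, dtype=float)
    coef = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for j in range(A.shape[1]):
            a = A[:, j]
            r = x - A @ coef              # current residual
            coef[j] += a @ r / (a @ a)    # 1-D least-squares step along column j
    return A @ coef                       # approximate 'ideal projection' of x

def classify(x, class_dicts):
    """Assign x to the class whose approximate projection reconstructs it best."""
    errors = {c: np.linalg.norm(x - iterative_projection(A, x))
              for c, A in class_dicts.items()}
    return min(errors, key=errors.get)
```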

    Multiple Manifolds Metric Learning with Application to Image Set Classification

    In image set classification, considerable advances have been made by modeling the original image sets by second-order statistics or linear subspaces, which typically lie on Riemannian manifolds. Specifically, these are the Symmetric Positive Definite (SPD) manifold and the Grassmann manifold, respectively, and some algorithms have been developed on them for classification tasks. Motivated by the inability of existing methods to extract discriminative features for data on Riemannian manifolds, we propose a novel algorithm which combines multiple manifolds as the features of the original image sets. In order to fuse these manifolds, the well-studied Riemannian kernels are utilized to map the original Riemannian spaces into high-dimensional Hilbert spaces. A metric learning method is then devised to embed these kernel spaces into a lower-dimensional common subspace for classification. The state-of-the-art results achieved on three datasets corresponding to two different classification tasks, namely face recognition and object categorization, demonstrate the effectiveness of the proposed method. Comment: 6 pages, 4 figures, ICPR 2018 (accepted).
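
    For context, a brief sketch of the two Riemannian representations and kernels the abstract refers to (a covariance/SPD matrix with a log-Euclidean kernel, and a linear subspace with the projection kernel). The paper's fusion and metric-learning steps are not reproduced here, and the regularization constant and subspace dimension below are arbitrary choices.

```python
import numpy as np

def spd_representation(image_set):
    """Second-order statistic of an image set: a regularized covariance (SPD) matrix."""
    X = np.asarray(image_set, dtype=float)            # (n_images, n_features)
    return np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])

def grassmann_representation(image_set, p=5):
    """Linear-subspace statistic: top-p left singular vectors, a point on a Grassmann manifold."""
    X = np.asarray(image_set, dtype=float).T          # (n_features, n_images)
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :p]

def spd_log(C):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.log(w)) @ V.T

def log_euclidean_kernel(C1, C2):
    """Log-Euclidean kernel between two SPD matrices: tr(log(C1) log(C2))."""
    return np.trace(spd_log(C1) @ spd_log(C2))

def projection_kernel(U1, U2):
    """Projection (Grassmann) kernel between two orthonormal bases: ||U1^T U2||_F^2."""
    return np.linalg.norm(U1.T @ U2, 'fro') ** 2
```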

    Face Recognition: From Traditional to Deep Learning Methods

    Starting in the seventies, face recognition has become one of the most researched topics in computer vision and biometrics. Traditional methods based on hand-crafted features and traditional machine learning techniques have recently been superseded by deep neural networks trained with very large datasets. In this paper, we provide a comprehensive and up-to-date literature review of popular face recognition methods, including both traditional (geometry-based, holistic, feature-based, and hybrid) methods and deep learning methods.

    Multilinear Class-Specific Discriminant Analysis

    There has been a great effort to transfer linear discriminant techniques that operate on vector data to high-order data, generally referred to as Multilinear Discriminant Analysis (MDA) techniques. Many existing works focus on maximizing the ratio of inter-class variance to intra-class variance defined on tensor data representations. However, there has not been any attempt to employ class-specific discrimination criteria for tensor data. In this paper, we propose a multilinear subspace learning technique suitable for applications requiring class-specific tensor models. The method maximizes the discrimination of each individual class in the feature space while retaining the spatial structure of the input. We evaluate the efficiency of the proposed method on two problems, i.e., facial image analysis and stock price prediction based on limit order book data. Comment: accepted in PR
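
    The class-specific criterion can be illustrated on plain vector data: for one target class, maximize out-of-class scatter over in-class scatter, both measured around that class's mean. The paper works with tensors and multilinear projections; this vector-space sketch, with assumed names and regularizer, only shows the underlying criterion.

```python
import numpy as np
from scipy.linalg import eigh

def class_specific_projection(X, y, target_class, n_dims=2):
    """Class-specific discriminant projection for one class (vector data, for clarity):
    maximize out-of-class scatter over in-class scatter around the class mean."""
    X = np.asarray(X, dtype=float)
    mask = (y == target_class)
    mu = X[mask].mean(axis=0)
    Din = X[mask] - mu                              # in-class deviations
    Dout = X[~mask] - mu                            # out-of-class deviations
    S_in = Din.T @ Din + 1e-6 * np.eye(X.shape[1])  # regularized in-class scatter
    S_out = Dout.T @ Dout
    # Generalized eigenproblem S_out w = lambda S_in w; keep the leading eigenvectors.
    vals, vecs = eigh(S_out, S_in)
    return vecs[:, np.argsort(vals)[::-1][:n_dims]]
```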

    Gradient-orientation-based PCA subspace for novel face recognition

    This article has been made available through the Brunel Open Access Publishing Fund. Face recognition is an interesting and challenging problem that has been widely studied in the fields of pattern recognition and computer vision. It has many applications, such as biometric authentication and video surveillance. In the past decade, several methods for face recognition have been proposed; however, these methods suffer from pose and illumination variations. In order to address these problems, this paper proposes a novel methodology for recognizing face images. Since image gradients are invariant to illumination and pose variations, the proposed approach uses gradient orientation to handle these effects. The Schur decomposition is used for matrix decomposition, and the Schurvalues and Schurvectors are then extracted for subspace projection. We call this subspace projection of face features Schurfaces; it is numerically stable and able to handle defective matrices. The Hausdorff distance is used with the nearest neighbor classifier to measure the similarity between different faces. Experiments are conducted on the Yale and ORL face databases. The results show that the proposed approach is highly discriminative and achieves promising accuracy for face recognition compared with state-of-the-art approaches.
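
    A rough sketch of the main ingredients named in the abstract, gradient-orientation features, a Schur-based subspace, and a Hausdorff-distance matcher, using NumPy/SciPy. How the gallery and probe feature sets are formed and how many Schur vectors to keep are assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.linalg import schur
from scipy.spatial.distance import directed_hausdorff

def gradient_orientation(image):
    """Per-pixel gradient orientation, largely insensitive to smooth illumination changes."""
    gy, gx = np.gradient(image.astype(float))
    return np.arctan2(gy, gx + 1e-8)

def schur_subspace(feature_matrix, n_vectors=20):
    """Schur decomposition of the (square, symmetric) feature covariance;
    the leading Schur vectors play the role of the 'Schurfaces'."""
    C = np.cov(feature_matrix, rowvar=False)
    T, Z = schur(C)                                  # C = Z T Z^T
    order = np.argsort(np.abs(np.diag(T)))[::-1]
    return Z[:, order[:n_vectors]]

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between two sets of projected features."""
    return max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])
```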

    Multi-Subregion Based Correlation Filter Bank for Robust Face Recognition

    In this paper, we propose an effective feature extraction algorithm, called Multi-Subregion based Correlation Filter Bank (MS-CFB), for robust face recognition. MS-CFB combines the benefits of global-based and local-based feature extraction algorithms, where multiple correlation filters corresponding to different face subregions are jointly designed to optimize the overall correlation outputs. Furthermore, we reduce the computational complexity of MS-CFB by designing the correlation filter bank in the spatial domain and improve its generalization capability by capitalizing on the unconstrained form during the filter bank design process. MS-CFB not only takes the differences among face subregions into account, but also effectively exploits the discriminative information in face subregions. Experimental results on various public face databases demonstrate that the proposed algorithm provides a better feature representation for classification and achieves higher recognition rates compared with several state-of-the-art algorithms.
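
    To illustrate how per-subregion correlation outputs might be combined, a hedged sketch that splits a face into a grid, correlates each subregion with its filter in the frequency domain, and sums the correlation peaks. The actual MS-CFB filter design (spatial-domain, unconstrained, jointly optimized) is not shown, and the grid size and scoring rule are assumptions.

```python
import numpy as np

def correlation_response(subregion, filt):
    """Cross-correlation of a face subregion with its filter, computed in the
    frequency domain; the correlation peak is used as the matching score."""
    F = np.fft.fft2(subregion, s=subregion.shape)
    H = np.fft.fft2(filt, s=subregion.shape)
    response = np.real(np.fft.ifft2(F * np.conj(H)))
    return response.max()

def filter_bank_score(face, filters, grid=(2, 2)):
    """Split a face into a grid of subregions and sum the correlation peaks
    produced by the per-subregion filters."""
    rows = np.array_split(face, grid[0], axis=0)
    subregions = [s for r in rows for s in np.array_split(r, grid[1], axis=1)]
    return sum(correlation_response(s, f) for s, f in zip(subregions, filters))
```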

    Feature Selection and Feature Extraction in Pattern Analysis: A Literature Review

    Pattern analysis often requires a pre-processing stage for extracting or selecting features in order to help the classification, prediction, or clustering stage discriminate or represent the data in a better way. The reason for this requirement is that the raw data are complex and difficult to process without extracting or selecting appropriate features beforehand. This paper reviews the theory and motivation of different common methods of feature selection and extraction and introduces some of their applications. Some numerical implementations are also shown for these methods. Finally, the methods in feature selection and extraction are compared. Comment: 14 pages, 1 figure, 2 tables, survey (literature review) paper
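
    To make the selection-versus-extraction distinction concrete, a small sketch contrasting a filter-style selector (keep the highest-variance original features) with PCA-style extraction (build new features as principal components). The criteria and dimensionalities are arbitrary examples, not ones prescribed by the survey.

```python
import numpy as np

def select_by_variance(X, k):
    """Feature *selection*: keep the k original features with the largest variance."""
    idx = np.argsort(X.var(axis=0))[::-1][:k]
    return X[:, idx], idx

def extract_by_pca(X, k):
    """Feature *extraction*: build k new features as principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]
```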

    Disturbance Grassmann Kernels for Subspace-Based Learning

    In this paper, we focus on subspace-based learning problems, where data elements are linear subspaces instead of vectors. To handle this kind of data, Grassmann kernels were proposed to measure the space structure and are used with classifiers, e.g., Support Vector Machines (SVMs). However, existing discriminative algorithms mostly ignore the instability of subspaces, which can cause classifiers to be misled by disturbed instances. Thus, we propose considering all potential disturbances of subspaces in the learning process to obtain more robust classifiers. First, we derive the dual optimization of linear classifiers with disturbance subject to a known distribution, resulting in a new kernel, the Disturbance Grassmann (DG) kernel. Second, we investigate two kinds of disturbance, relevant to the subspace matrix and to the singular values of the bases, with which we extend the Projection kernel on Grassmann manifolds to two new kernels. Experiments on action data indicate that the proposed kernels perform better compared to state-of-the-art subspace-based methods, even in a worse environment. Comment: This paper includes 3 figures and 10 pages, and has been accepted to SIGKDD'18.
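
    The Projection kernel mentioned in the abstract is k(U1, U2) = ||U1^T U2||_F^2 for orthonormal bases U1, U2. The sketch below also averages this kernel over a few random perturbations of one basis purely to illustrate the 'disturbance' idea; the paper derives closed-form DG kernels rather than this Monte-Carlo stand-in, and the noise model here is an assumption.

```python
import numpy as np

def projection_kernel(U1, U2):
    """Projection (Grassmann) kernel: k(U1, U2) = ||U1^T U2||_F^2."""
    return np.linalg.norm(U1.T @ U2, 'fro') ** 2

def disturbed_projection_kernel(U1, U2, sigma2=0.01, n_samples=20, seed=0):
    """Average the projection kernel over random perturbations of U1
    (a Monte-Carlo illustration of the disturbance idea, not the DG closed form)."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_samples):
        noise = np.sqrt(sigma2) * rng.standard_normal(U1.shape)
        Q, _ = np.linalg.qr(U1 + noise)      # re-orthonormalize the disturbed basis
        vals.append(projection_kernel(Q, U2))
    return float(np.mean(vals))
```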

    Enhancing Person Re-identification in a Self-trained Subspace

    Despite the promising progress made in recent years, person re-identification (re-ID) remains a challenging task due to the complex variations in human appearance across different camera views. For this challenging problem, a large variety of algorithms have been developed in the fully-supervised setting, requiring access to a large amount of labeled training data. However, the main bottleneck for fully-supervised re-ID is the limited availability of labeled training samples. To address this problem, in this paper we propose a self-trained subspace learning paradigm for person re-ID which effectively utilizes both labeled and unlabeled data to learn a discriminative subspace in which person images across disjoint camera views can be easily matched. The proposed approach first constructs pseudo pairwise relationships among unlabeled persons using the k-nearest neighbors algorithm. Then, with the pseudo pairwise relationships, the unlabeled samples can be easily combined with the labeled samples to learn a discriminative projection by solving an eigenvalue problem. In addition, we refine the pseudo pairwise relationships iteratively, which further improves the learning performance. A multi-kernel embedding strategy is also incorporated into the proposed approach to cope with the non-linearity in person appearance and to exploit the complementarity of multiple kernels. In this way, the performance of person re-ID can be greatly enhanced when training data are insufficient. Experimental results on six widely-used datasets demonstrate the effectiveness of our approach, and its performance is comparable to the reported results of most state-of-the-art fully-supervised methods while using far less labeled data. Comment: Accepted by ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM).
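
    A compact sketch of the two steps the abstract spells out: pseudo pairwise relationships from k-nearest neighbours, and a discriminative projection obtained from a generalized eigenvalue problem. The paper's exact objective, the iterative refinement of the pseudo pairs, and the multi-kernel embedding are omitted; all names and the regularizer are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from scipy.linalg import eigh

def pseudo_pairs(X_unlabeled, k=5):
    """Pseudo positive pairs: each unlabeled sample paired with its k nearest neighbours."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_unlabeled)
    _, idx = nn.kneighbors(X_unlabeled)
    return [(i, j) for i, row in enumerate(idx) for j in row[1:]]

def learn_projection(X, pos_pairs, neg_pairs, n_dims=32):
    """Projection that pulls (pseudo-)positive pairs together and pushes
    negative pairs apart, via a generalized eigenvalue problem."""
    d = X.shape[1]
    Sw = 1e-6 * np.eye(d)       # within (positive-pair) scatter, regularized
    Sb = np.zeros((d, d))       # between (negative-pair) scatter
    for i, j in pos_pairs:
        diff = (X[i] - X[j])[:, None]
        Sw += diff @ diff.T
    for i, j in neg_pairs:
        diff = (X[i] - X[j])[:, None]
        Sb += diff @ diff.T
    vals, vecs = eigh(Sb, Sw)
    return vecs[:, np.argsort(vals)[::-1][:n_dims]]
```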

    Tensor Representation in High-Frequency Financial Data for Price Change Prediction

    Nowadays, with the massive amount of trade data collected, the dynamics of the financial markets pose both a challenge and an opportunity for high-frequency traders. In order to take advantage of the rapid, subtle movements of assets in High Frequency Trading (HFT), an automatic algorithm to analyze and detect patterns of price change based on transaction records must be available. The multichannel, time-series representation of financial data naturally suggests tensor-based learning algorithms. In this work, we investigate the effectiveness of two multilinear methods for the mid-price prediction problem against other existing methods. Experiments on a large-scale dataset containing more than 4 million limit orders show that, by utilizing the tensor representation, multilinear models outperform vector-based approaches and other competing methods. Comment: accepted in SSCI 2017, typos fixed
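
    A small sketch of the tensor viewpoint: stacking consecutive limit-order-book snapshots into (time window) x (price level) x (feature) samples, plus the mode-n product that multilinear models are built from. The window length and feature layout are assumptions, not the paper's configuration.

```python
import numpy as np

def order_book_tensor(snapshots, window=10):
    """Stack consecutive limit-order-book snapshots into 3rd-order samples of
    shape (time window, price level, features such as price and volume)."""
    S = np.asarray(snapshots, dtype=float)           # shape (T, levels, features)
    return np.stack([S[t:t + window] for t in range(len(S) - window)])

def mode_n_product(T, M, mode):
    """Mode-n product used by multilinear models: multiply tensor T by matrix M
    along the given mode."""
    T = np.moveaxis(T, mode, 0)
    out = np.tensordot(M, T, axes=(1, 0))
    return np.moveaxis(out, 0, mode)
```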