340 research outputs found

    Quadratic Projection Based Feature Extraction with Its Application to Biometric Recognition

    This paper presents a novel quadratic projection based feature extraction framework, in which a set of quadratic matrices is learned to distinguish each class from all other classes. We formulate quadratic matrix learning (QML) as a standard semidefinite programming (SDP) problem. However, conventional interior-point SDP solvers do not scale well to QML for high-dimensional data. To address this scalability issue, we develop an efficient algorithm, termed DualQML, based on Lagrange duality theory, to extract nonlinear features. To evaluate the feasibility and effectiveness of the proposed framework, we conduct extensive experiments on biometric recognition. Experimental results on three representative biometric recognition tasks, including face, palmprint, and ear recognition, demonstrate the superiority of the DualQML-based feature extraction algorithm over current state-of-the-art algorithms.
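    As a rough illustration of the underlying idea (not the authors' DualQML solver, which works on the Lagrange dual for scalability), learning one quadratic matrix per class can be cast as a toy SDP. All function and variable names below are hypothetical, and the hinge objective is a simplified stand-in for the paper's formulation:

    ```python
    # Toy SDP sketch of per-class quadratic matrix learning (illustrative only;
    # the paper's DualQML solves the dual problem for high-dimensional data).
    import numpy as np
    import cvxpy as cp

    def learn_quadratic_matrix(X_pos, X_neg, reg=1.0):
        """Learn a PSD matrix Q so that x^T Q x stays small for the target
        class and is pushed above a unit margin for all other classes."""
        d = X_pos.shape[1]
        Q = cp.Variable((d, d), PSD=True)                # semidefinite constraint
        pos = sum(x @ Q @ x for x in X_pos)              # same-class scores: small
        neg = sum(cp.pos(1 - x @ Q @ x) for x in X_neg)  # others: hinge above 1
        cp.Problem(cp.Minimize(pos + reg * neg)).solve()
        return Q.value

    # Toy usage: class samples near the origin vs. shifted off-class samples.
    # Classification would assign x to the class whose Q_c scores it best.
    rng = np.random.default_rng(0)
    Q = learn_quadratic_matrix(rng.standard_normal((20, 5)) * 0.5,
                               rng.standard_normal((20, 5)) + 2.0)
    ```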

    Extrinsic Methods for Coding and Dictionary Learning on Grassmann Manifolds

    Sparsity-based representations have recently led to notable results in various visual recognition tasks. In a separate line of research, Riemannian manifolds have been shown to be useful for dealing with features and models that do not lie in Euclidean spaces. With the aim of building a bridge between the two realms, we address the problem of sparse coding and dictionary learning over the space of linear subspaces, which form Riemannian structures known as Grassmann manifolds. To this end, we propose to embed Grassmann manifolds into the space of symmetric matrices by an isometric mapping. This in turn enables us to extend two sparse coding schemes to Grassmann manifolds. Furthermore, we propose closed-form solutions for learning a Grassmann dictionary, atom by atom. Lastly, to handle non-linearity in data, we extend the proposed Grassmann sparse coding and dictionary learning algorithms through embedding into Hilbert spaces. Experiments on several classification tasks (gender recognition, gesture classification, scene analysis, face recognition, action recognition and dynamic texture classification) show that the proposed approaches achieve considerable improvements in discrimination accuracy compared to state-of-the-art methods such as the kernelized Affine Hull Method and graph-embedding Grassmann discriminant analysis. (Appearing in the International Journal of Computer Vision.)
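    A minimal sketch of the embedding such methods rely on, assuming the standard projection mapping that sends an orthonormal basis Y of a subspace to the symmetric matrix Y Y^T (helper names below are ours, not the paper's API):

    ```python
    # Embed Grassmann points (subspaces) into the space of symmetric matrices
    # via Y -> Y Y^T, then measure distances there with the Frobenius norm.
    import numpy as np

    def grassmann_embed(Y):
        """Map an orthonormal basis Y (n x p) to its projection matrix Y Y^T."""
        return Y @ Y.T

    def chordal_distance(Y1, Y2):
        """Distance between two subspaces via their symmetric-matrix images."""
        return np.linalg.norm(grassmann_embed(Y1) - grassmann_embed(Y2), "fro")

    # Toy usage: two random 5-dimensional subspaces of R^20.
    rng = np.random.default_rng(0)
    Y1, _ = np.linalg.qr(rng.standard_normal((20, 5)))
    Y2, _ = np.linalg.qr(rng.standard_normal((20, 5)))
    print(chordal_distance(Y1, Y2))
    ```

    Once subspaces live in this flat matrix space, Euclidean sparse coding and dictionary learning machinery can be applied to the embedded points, which is the bridge the abstract describes.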

    GII Representation-Based Cross-View Gait Recognition by Discriminative Projection With List-Wise Constraints

    Remote person identification by gait is one of the most important topics in the field of computer vision and pattern recognition. However, gait recognition suffers severely from the appearance variance caused by view change. Gait recognition typically performs well when the view is fixed, but performance drops sharply when the view variance becomes significant. Existing approaches have tried strategies such as tensor analysis and view transformation models to mitigate this performance decrease, but they still leave room for improvement. In this paper, a discriminative projection with list-wise constraints (DPLC) is proposed to deal with view variance in cross-view gait recognition, further refined by a rectification term that automatically captures the principal discriminative information. The DPLC with rectification (DPLCR) embeds list-wise relative similarity measurements among intra-class and inter-class individuals, which allows it to learn a more discriminative and robust projection. Based on the original DPLCR, we introduce the kernel trick to exploit nonlinear cross-view correlations and extend DPLCR to the problem of multi-view gait recognition. Moreover, a simple yet efficient gait representation, namely the gait individuality image (GII), based on the gait energy image is proposed, which better captures the discriminative information for cross-view gait recognition. Experiments have been conducted on the CASIA-B database, and the results demonstrate the outstanding performance of both the DPLCR framework and the new GII representation. The DPLCR-based cross-view gait recognition outperforms state-of-the-art approaches in almost all cases under large view variance, and combining the GII representation with the DPLCR further enhances performance, setting a new benchmark for cross-view gait recognition.
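    To make the list-wise idea concrete, here is an illustrative hinge-style objective (not the authors' exact DPLCR formulation; all names are ours): after projection, each probe's nearest same-identity gallery sample in another view should be closer than every different-identity sample by a margin.

    ```python
    # Illustrative list-wise hinge loss for a cross-view projection W.
    # probes/gallery: (N, d) feature arrays from two views; labels are
    # numpy arrays of identity labels.
    import numpy as np

    def listwise_hinge_loss(W, probes, gallery, labels_p, labels_g, margin=1.0):
        P, G = probes @ W, gallery @ W              # project both views
        loss = 0.0
        for i, p in enumerate(P):
            d = np.linalg.norm(G - p, axis=1)       # distances to all gallery items
            pos = d[labels_g == labels_p[i]].min()  # closest same-identity sample
            neg = d[labels_g != labels_p[i]]        # all different-identity samples
            loss += np.maximum(0.0, margin + pos - neg).sum()
        return loss
    ```

    Minimizing such a loss over W (e.g., by gradient descent) yields a projection under which identities rank consistently across views, which is the intuition behind the list-wise constraints above.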

    Robust arbitrary-view gait recognition based on 3D partial similarity matching

    Existing view-invariant gait recognition methods encounter difficulties due to the limited number of available gait views and varying conditions during training. This paper proposes gait partial similarity matching, which assumes that a 3-dimensional (3D) object shares common view surfaces across significantly different views. Detecting such surfaces aids the extraction of gait features from multiple views. 3D parametric body models are morphed by pose and shape deformation from a template model, using 2-dimensional (2D) gait silhouettes as observations. The gait pose is estimated by a level-set energy cost function from silhouettes, including incomplete ones. Body shape deformation is achieved via a Laplacian deformation energy function associated with inpainting gait silhouettes. Partial gait silhouettes in different views are extracted by selecting gait partial region-of-interest elements and are re-projected onto 2D space to construct partial gait energy images. A synthetic database with destination views and a multi-linear subspace classifier fused with majority voting are used to achieve arbitrary-view gait recognition that is robust to varying conditions. Experimental results on the CMU, CASIA B, TUM-IITKGP, AVAMVG and KY4D datasets show the efficacy of the proposed method.
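    The partial gait energy images mentioned above follow the usual gait-energy-image recipe, pixel-wise averaging of aligned binary silhouettes over one gait cycle, here restricted to the view-shared region. A minimal sketch under that assumption (function and argument names are illustrative):

    ```python
    # Sketch of a (partial) gait energy image: the mean of aligned binary
    # silhouettes over one gait cycle, optionally masked to the region
    # shared between views (the "partial" part described above).
    import numpy as np

    def partial_gait_energy_image(silhouettes, mask=None):
        """silhouettes: (T, H, W) binary frames covering one gait cycle.
        mask: (H, W) boolean view-shared region, or None for a full GEI."""
        gei = np.mean(silhouettes.astype(np.float32), axis=0)
        if mask is not None:
            gei = gei * mask      # zero out pixels not visible in both views
        return gei
    ```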

    GAIT RECOGNITION PROGRESS IN RECOGNIZING IMAGE CHARACTERISTICS

    We present a human identification system based on walking characteristics; this problem is also known as acoustic gait recognition. The objective of the scheme is to analyze the sounds emitted by walking persons (largely the step sounds) and to identify those people. A cyclic model topology is employed to represent individual gait cycles. This topology permits modeling and detecting individual steps, leading to very promising identification rates.
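    A cyclic model topology of the kind described can be pictured as a Markov-chain transition matrix whose states advance left to right and whose last state loops back to the first, so each traversal of the cycle models one gait cycle and individual steps appear as state passages. A toy sketch (structure only; the probabilities are illustrative, not taken from the paper):

    ```python
    # Build a cyclic left-to-right transition matrix: each state either stays
    # put or advances, and the final state wraps around to the first.
    import numpy as np

    def cyclic_transition_matrix(n_states, p_stay=0.6):
        A = np.zeros((n_states, n_states))
        for s in range(n_states):
            A[s, s] = p_stay                         # remain in current state
            A[s, (s + 1) % n_states] = 1.0 - p_stay  # advance; last wraps to first
        return A

    print(cyclic_transition_matrix(4))
    ```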

    Efficient Human Activity Recognition in Large Image and Video Databases

    Vision-based human action recognition has attracted considerable interest in recent research for its applications to video surveillance, content-based search, healthcare, and interactive games. Most existing research deals with building informative feature descriptors, designing efficient and robust algorithms, proposing versatile and challenging datasets, and fusing multiple modalities. Often, these approaches build on certain conventions such as the use of motion cues to determine video descriptors, application of off-the-shelf classifiers, and single-factor classification of videos. In this thesis, we deal with important but overlooked issues such as efficiency, simplicity, and scalability of human activity recognition in different application scenarios: controlled video environments (e.g., indoor surveillance), unconstrained videos (e.g., YouTube), depth or skeletal data (e.g., captured by Kinect), and person images (e.g., Flickr). In particular, we are interested in answering questions like: (a) is it possible to efficiently recognize human actions in controlled videos without temporal cues? (b) given that large-scale unconstrained video data are often of a high-dimension, low-sample-size (HDLSS) nature, how can human actions be efficiently recognized in such data? (c) considering the rich 3D motion information available from depth or motion capture sensors, is it possible to recognize both the actions and the actors using only the motion dynamics of the underlying activities? and (d) can motion information from monocular videos be used to automatically determine saliency regions for recognizing actions in still images?