
    Multi-View Face Recognition From Single RGBD Models of the Faces

    This work takes important steps towards solving the following problem of current interest: Assuming that each individual in a population can be modeled by a single frontal RGBD face image, is it possible to carry out face recognition for such a population using multiple 2D images captured from arbitrary viewpoints? Although the general problem as stated above is extremely challenging, it encompasses subproblems that can be addressed today. The subproblems addressed in this work relate to: (1) generating a large set of viewpoint-dependent face images from a single RGBD frontal image for each individual; (2) using hierarchical approaches based on view-partitioned subspaces to represent the training data; and (3) based on these hierarchical approaches, using a weighted voting algorithm to integrate the evidence collected from multiple images of the same face as recorded from different viewpoints. We evaluate our methods on three datasets: a dataset of 10 people that we created and two publicly available datasets which include a total of 48 people. In addition to providing important insights into the nature of this problem, our results show that we are able to recognize faces with accuracies of 95% or higher, outperforming existing state-of-the-art face recognition approaches based on deep convolutional neural networks.
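
    The last step above, integrating evidence from several probe viewpoints, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical weighted-voting fusion over per-view similarity scores; the function name, the min-max score normalization, and the view weights are assumptions chosen for illustration, not the paper's implementation.

    ```python
    import numpy as np

    def weighted_vote(view_scores, view_weights):
        """Fuse per-view identity scores into a single decision by weighted voting."""
        num_subjects = len(next(iter(view_scores.values())))
        fused = np.zeros(num_subjects)
        for view, scores in view_scores.items():
            w = view_weights.get(view, 1.0)                  # trust placed in this viewpoint
            s = np.asarray(scores, dtype=float)
            s = (s - s.min()) / (s.max() - s.min() + 1e-12)  # per-view min-max normalization
            fused += w * s                                   # accumulate weighted votes
        return int(np.argmax(fused)), fused

    # Example: three probe images of the same face captured from different viewpoints.
    scores = {
        "frontal":  [0.2, 0.9, 0.1],   # similarity of this probe to each enrolled subject
        "left_30":  [0.3, 0.7, 0.2],
        "right_45": [0.1, 0.6, 0.4],
    }
    weights = {"frontal": 1.0, "left_30": 0.8, "right_45": 0.6}
    identity, fused = weighted_vote(scores, weights)
    print("predicted subject index:", identity)
    ```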

    Gait recognition with shifted energy image and structural feature extraction

    In this paper, we present a novel and efficient gait recognition system. The proposed system uses two novel gait representations, i.e., the shifted energy image and the gait structural profile, which have increased robustness to some classes of structural variations. Furthermore, we introduce a novel method for the simulation of walking conditions and the generation of artificial subjects that are used for the application of linear discriminant analysis. In the decision stage, the two representations are fused. Thorough experimental evaluation, conducted using one traditional and two new databases, demonstrates the advantages of the proposed system in comparison with current state-of-the-art systems.
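
    To make the energy-image idea concrete, here is a minimal sketch in which each binary silhouette of a gait cycle is horizontally re-centred ("shifted") before the frames are averaged. It is a generic illustration under assumed inputs, not the paper's shifted energy image or gait structural profile.

    ```python
    import numpy as np

    def shifted_energy_image(silhouettes):
        """Average horizontally re-centred binary silhouettes from one gait cycle."""
        _, w = silhouettes[0].shape
        aligned = []
        for sil in silhouettes:
            cols = np.where(sil.any(axis=0))[0]          # columns containing the silhouette
            if cols.size == 0:
                continue
            centroid = int(round(cols.mean()))
            shift = w // 2 - centroid                    # move silhouette centre to image centre
            aligned.append(np.roll(sil, shift, axis=1))
        return np.mean(aligned, axis=0)                  # pixel-wise average ("energy")

    # Example call with synthetic frames, just to show the expected shapes.
    frames = [(np.random.rand(128, 88) > 0.7).astype(np.uint8) for _ in range(20)]
    sei = shifted_energy_image(frames)
    print(sei.shape)  # (128, 88)
    ```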

    ID

    The five sculptors in ID challenge the conventions of representational self-portraiture. In their selective and often abstract use of figuration, these artists engage the identification of self as it is situated socially and institutionally—one’s “I.D.”—as well as the psychoanalytic dimensions of the “id.” The exhibition’s title introduces a kind of paradoxical conflict between public identification, found in various bureaucratic forms of I.D. (passports, drivers’ licenses, and Social Security numbers, for example), and the id, a Freudian classification for the most basic and unconscious physical drives (sex, food, aggression). All of these artists respond to the seeming incongruities of I.D. and id by exhibiting subtle awareness of the complicated construction of identity. Abandoning the tradition of simply mirroring one’s outward appearance, they not only reconsider what it means to represent oneself as an art object, but also question the literal and figurative boundaries of the human form in sculpture. [excerpt]

    2.5D multi-view gait recognition based on point cloud registration

    This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) is proposed to generate multi-view training galleries. The concept of a density- and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space, enabling data dimension reduction by the discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.
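
    The dimension-reduction chain mentioned above, a 2D discrete cosine transform followed by 2D PCA, can be sketched as follows. The block size, component count, and function names are assumptions chosen for illustration; this is not the paper's implementation of the Color Gait Curvature Image pipeline.

    ```python
    import numpy as np
    from scipy.fft import dctn

    def dct_lowfreq(image, keep=32):
        """2D DCT of an image, retaining only the top-left keep x keep (low-frequency) block."""
        coeffs = dctn(image, norm="ortho")
        return coeffs[:keep, :keep]

    def fit_2dpca(images, num_components=8):
        """2D PCA: leading eigenvectors of the image covariance matrix G = E[(A - mean)^T (A - mean)]."""
        mean = np.mean(images, axis=0)
        g = np.zeros((images[0].shape[1], images[0].shape[1]))
        for a in images:
            d = a - mean
            g += d.T @ d
        g /= len(images)
        eigvals, eigvecs = np.linalg.eigh(g)           # eigenvalues in ascending order
        return eigvecs[:, ::-1][:, :num_components]    # top eigenvectors as projection columns

    # Example: reduce a set of synthetic gait images to compact feature matrices.
    images = [np.random.rand(64, 64) for _ in range(50)]
    blocks = [dct_lowfreq(im, keep=32) for im in images]
    proj = fit_2dpca(blocks, num_components=8)
    features = [b @ proj for b in blocks]              # each feature matrix is 32 x 8
    print(features[0].shape)
    ```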