
    A Comparison of Hand-Geometry Recognition Methods Based on Low- and High-Level Features

    This paper compares the performance of hand-geometry recognition based on high-level features with that based on low-level features. The difference between high- and low-level features is that the former are based on interpreting the biometric data, e.g. by locating a finger and measuring its dimensions, whereas the latter are not. The low-level features used here are landmarks on the contour of the hand. The high-level features are a standard set of geometrical measurements, such as finger widths, lengths, and angles, taken at preselected locations.
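    The distinction between the two feature levels can be illustrated with a minimal sketch. The landmark coordinates, finger indices, and the `finger_width` helper below are hypothetical, not taken from the paper; the point is only that a low-level representation keeps raw contour points, while a high-level one interprets them into measurements:

    ```python
    import numpy as np

    def finger_width(contour, left_idx, right_idx):
        """High-level feature: Euclidean distance between two preselected
        contour landmarks on opposite sides of a finger."""
        return float(np.linalg.norm(contour[left_idx] - contour[right_idx]))

    # Hypothetical hand-contour landmarks as (x, y) points.
    contour = np.array([[10.0, 0.0], [12.0, 40.0], [18.0, 40.0], [20.0, 0.0]])

    # Low-level representation: the raw landmark coordinates themselves.
    low_level = contour.flatten()

    # High-level representation: an interpreted measurement at a preselected location.
    width = finger_width(contour, 1, 2)
    ```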

    3D Face Tracking and Texture Fusion in the Wild

    We present a fully automatic approach to real-time 3D face reconstruction from monocular in-the-wild videos. Using cascaded-regressor-based face tracking and 3D Morphable Face Model shape fitting, we obtain a semi-dense 3D face shape. We further use the texture information from multiple frames to build a holistic 3D face representation from the video frames. Our system is able to capture facial expressions and does not require any person-specific training. We demonstrate the robustness of our approach on the challenging 300 Videos in the Wild (300-VW) dataset. Our real-time fitting framework is available as an open-source library at http://4dface.org
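    The core of 3D Morphable Model shape fitting is solving for coefficients of a linear shape basis so that the mean shape plus a weighted combination of basis vectors matches the observed face. The toy dimensions, regulariser, and `fit_coefficients` helper below are illustrative assumptions, not the paper's actual fitting procedure:

    ```python
    import numpy as np

    # A 3D Morphable Model represents a face as a mean shape plus a linear
    # combination of shape basis vectors (toy dimensions for illustration).
    n_vertices, n_components = 5, 3
    rng = np.random.default_rng(42)
    mean_shape = rng.normal(size=3 * n_vertices)
    basis = rng.normal(size=(3 * n_vertices, n_components))

    def fit_coefficients(observed, mean_shape, basis, reg=1e-2):
        """Regularised least-squares fit of shape coefficients; the Tikhonov
        term keeps the reconstructed face close to the model mean."""
        A = basis.T @ basis + reg * np.eye(basis.shape[1])
        b = basis.T @ (observed - mean_shape)
        return np.linalg.solve(A, b)

    coeffs_true = np.array([1.0, -0.5, 0.25])
    observed = mean_shape + basis @ coeffs_true
    coeffs = fit_coefficients(observed, mean_shape, basis)
    shape = mean_shape + basis @ coeffs
    ```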

    Image processing for plastic surgery planning

    This thesis presents image processing tools for plastic surgery planning. In particular, it presents a novel method that combines local and global context in a probabilistic relaxation framework to identify cephalometric landmarks used in maxillofacial plastic surgery. It also presents a method that uses global and local symmetry to identify abnormalities in frontal CT images of the human body. The proposed methodologies are evaluated on clinical data supplied by collaborating plastic surgeons.

    Multispectral Palmprint Encoding and Recognition

    Palmprints are emerging as a new entity in multi-modal biometrics for human identification and verification. Multispectral palmprint images captured in the visible and infrared spectrum not only contain the wrinkles and ridge structure of a palm, but also the underlying pattern of veins, making them a highly discriminating biometric identifier. In this paper, we propose a feature encoding scheme for robust and highly accurate representation and matching of multispectral palmprints. To facilitate compact storage of the feature, we design a binary hash table structure that allows for efficient matching in large databases. Comprehensive experiments for both identification and verification scenarios are performed on two public datasets -- one captured with a contact-based sensor (PolyU dataset), and the other with a contact-free sensor (CASIA dataset). Recognition results in various experimental setups show that the proposed method consistently outperforms existing state-of-the-art methods. Error rates achieved by our method (0.003% on PolyU and 0.2% on CASIA) are the lowest reported in the literature on both datasets and clearly indicate the viability of the palmprint as a reliable and promising biometric. All source code is publicly available.

    Comment: A preliminary version of this manuscript was published in ICCV 2011: Z. Khan, A. Mian and Y. Hu, "Contour Code: Robust and Efficient Multispectral Palmprint Encoding for Human Recognition", International Conference on Computer Vision, 2011. MATLAB code available: https://sites.google.com/site/zohaibnet/Home/code
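    The abstract does not detail the encoding itself, but matching packed binary codes efficiently is typically done with XOR plus a bit count (Hamming distance). The toy 16-bit codes and the `match` helper below are assumptions for illustration, not the paper's Contour Code:

    ```python
    import numpy as np

    def hamming_distance(a, b):
        """Number of differing bits between two packed binary codes."""
        return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

    def match(probe, gallery):
        """Return the gallery index with the smallest Hamming distance."""
        dists = [hamming_distance(probe, g) for g in gallery]
        return int(np.argmin(dists)), dists

    # Hypothetical 16-bit palmprint codes, packed into uint8 bytes.
    gallery = [np.array([0b10101010, 0b11110000], dtype=np.uint8),
               np.array([0b10101011, 0b11110000], dtype=np.uint8)]
    probe = np.array([0b10101011, 0b11110001], dtype=np.uint8)

    best, dists = match(probe, gallery)
    ```

    A hash table over code prefixes, as the paper proposes, would narrow the gallery before this exhaustive distance computation.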

    Matching hand radiographs

    Biometric verification and identification methods for medical images can be used to find possible inconsistencies in patient records. Such methods may also be useful for forensic research. In this work we present a method for identifying patients by their hand radiographs. We use active appearance model representations presented before [1] to extract 64 shape features per bone from the metacarpals, the proximal, and the middle phalanges. The number of features was reduced to 20 by applying principal component analysis. Subsequently, a likelihood-ratio classifier [2] determines whether an image potentially belongs to another patient in the data set. Firstly, to study the symmetry between both hands, we used the likelihood-ratio classifier to match 45 left hand images to a database of 44 (matching) right hand images and vice versa. We found an average equal error probability of 6.4%, which indicates that both hand shapes are highly symmetrical. Therefore, to increase the number of samples per patient, the distinction between left and right hands was omitted. Secondly, we performed multiple experiments with randomly selected training images from 24 patients. For several patients there were multiple image pairs available. Test sets were created by using the images of three different patients and 10 other images from patients who were in the training set. We estimated the equal error rate at 0.05%. Our experiments suggest that the shapes of the hand bones contain biometric information that can be used to identify persons.
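    The pipeline described above, PCA dimensionality reduction followed by likelihood-ratio scoring, can be sketched as follows. The isotropic-Gaussian score, the variance values, and the random stand-in features are simplifying assumptions; the paper's classifier [2] is more elaborate:

    ```python
    import numpy as np

    def pca_reduce(X, n_components=20):
        """Project shape features onto the top principal components via SVD."""
        mean = X.mean(axis=0)
        Xc = X - mean
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        W = Vt[:n_components].T
        return Xc @ W, mean, W

    def log_likelihood_ratio(x, y, sigma_within, sigma_total):
        """Score a pair of feature vectors: log p(x - y | same person) minus
        log p(x - y | different people), assuming isotropic Gaussians."""
        d = x - y
        ll_same = -0.5 * np.sum(d**2) / sigma_within**2 - len(d) * np.log(sigma_within)
        ll_diff = -0.5 * np.sum(d**2) / sigma_total**2 - len(d) * np.log(sigma_total)
        return ll_same - ll_diff

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 64))   # stand-in: 50 images x 64 shape features
    Z, mean, W = pca_reduce(X, 20)
    score = log_likelihood_ratio(Z[0], Z[1], sigma_within=0.5, sigma_total=1.0)
    ```

    A pair is accepted as the same patient when the score exceeds a threshold tuned to the desired equal error rate.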