11 research outputs found

    Automatic age estimation based on facial aging patterns

    Full text link
    While recognition of most facial variations, such as identity, expression, and gender, has been extensively studied, automatic age estimation has rarely been explored. In contrast to other facial variations, aging variation presents several unique characteristics which make age estimation a challenging task. This paper proposes an automatic age estimation method named AGES (AGing pattErn Subspace). The basic idea is to model the aging pattern, which is defined as the sequence of a particular individual\u27s face images sorted in time order, by constructing a representative subspace. The proper aging pattern for a previously unseen face image is determined by the projection in the subspace that can reconstruct the face image with minimum reconstruction error, while the position of the face image in that aging pattern will then indicate its age. In the experiments, AGES and its variants are compared with the limited existing age estimation methods (WAS and AAS) and some well-established classification methods (kNN, BP, C4.5, and SVM). Moreover, a comparison with human perception ability on age is conducted. It is interesting to note that the performance of AGES is not only significantly better than that of all the other algorithms, but also comparable to that of the human observers.<br /
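    The core AGES idea — place an unseen face at each candidate age slot of a learned aging-pattern subspace and keep the slot with least reconstruction error — can be sketched as follows. This is a simplified illustration on synthetic data: the paper uses an EM variant of PCA that tolerates missing faces in a pattern, while the sketch below uses ordinary PCA on complete patterns; all array sizes and names are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    D, AGES_N, N = 64, 5, 40          # pixels per face, age slots per pattern, training patterns
    K = 10                            # subspace dimension

    # Synthetic stand-in data: each row is one person's aging pattern,
    # i.e. AGES_N face vectors concatenated in time order.
    patterns = rng.normal(size=(N, AGES_N * D))

    # Build the aging-pattern subspace with plain PCA (the paper's method
    # also handles patterns with missing ages; this sketch does not).
    mean = patterns.mean(axis=0)
    U, S, Vt = np.linalg.svd(patterns - mean, full_matrices=False)
    B = Vt[:K].T                      # (AGES_N*D, K) basis of the subspace

    def estimate_age(face):
        """Try the face at every age slot; the slot whose rows of the basis
        reconstruct it with least error gives the predicted age index."""
        best_age, best_err = 0, np.inf
        for a in range(AGES_N):
            rows = slice(a * D, (a + 1) * D)
            Ba, ma = B[rows], mean[rows]
            # Least-squares projection using only the observed slot's rows.
            coef, *_ = np.linalg.lstsq(Ba, face - ma, rcond=None)
            err = np.linalg.norm(Ba @ coef + ma - face)
            if err < best_err:
                best_age, best_err = a, err
        return best_age

    age = estimate_age(patterns[0, 2 * D:3 * D])  # a face taken from age slot 2
    ```

    The key design point survives the simplification: age is never regressed directly; it falls out of which position in the pattern explains the face best.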

    Principles and methods for face recognition and face modelling

    Get PDF
    This chapter focuses on the principles behind methods currently used for face recognition, which have a wide variety of uses from biometrics, surveillance and forensics. After a brief description of how faces can be detected in images, we describe 2D feature extraction methods that operate on all the image pixels in the face detected region: Eigenfaces and Fisherfaces first proposed in the early 1990s. Although Eigenfaces can be made to work reasonably well for faces captured in controlled conditions, such as frontal faces under the same illumination, recognition rates are poor. We discuss how greater accuracy can be achieved by extracting features from the boundaries of the faces by using Active Shape Models and, the skin textures, using Active Appearance Models, originally proposed by Cootes and Talyor. The remainder of the chapter on face recognition is dedicated such shape models, their implementation and use and their extension to 3D. We show that if multiple cameras are used the the 3D geometry of the captured faces can be recovered without the use of range scanning or structured light. 3D face models make recognition systems better at dealiing with pose and lighting variatio

    Illumination Processing in Face Recognition

    Get PDF

    3D Face Recognition

    Get PDF

    A Real-Time Angle- and Illumination-Aware Face Recognition System Based on Artificial Neural Network

    Get PDF
    Automatic authentication systems, using biometric technology, are becoming increasingly important with the increased need for person verification in our daily life. A few years back, fingerprint verification was done only in criminal investigations. Now fingerprints and face images are widely used in bank tellers, airports, and building entrances. Face images are easy to obtain, but successful recognition depends on proper orientation and illumination of the image, compared to the one taken at registration time. Facial features heavily change with illumination and orientation angle, leading to increased false rejection as well as false acceptance. Registering face images for all possible angles and illumination is impossible. In this work, we proposed a memory efficient way to register (store) multiple angle and changing illumination face image data, and a computationally efficient authentication technique, using multilayer perceptron (MLP). Though MLP is trained using a few registered images with different orientation, due to generalization property of MLP, interpolation of features for intermediate orientation angles was possible. The algorithm is further extended to include illumination robust authentication system. Results of extensive experiments verify the effectiveness of the proposed algorithm

    Light field image processing: an overview

    Get PDF
    Light field imaging has emerged as a technology allowing to capture richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene integrating the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, material classification, etc. On the other hand, the high-dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data

    Patch-based models for visual object classes

    Get PDF
    This thesis concerns models for visual object classes that exhibit a reasonable amount of regularity, such as faces, pedestrians, cells and human brains. Such models are useful for making “within-object” inferences such as determining their individual characteristics and establishing their identity. For example, the model could be used to predict the identity of a face, the pose of a pedestrian or the phenotype of a cell, or to segment parts of a human brain. Existing object modelling techniques have several limitations. First, most current methods have targeted the above tasks individually using object-specific representations; therefore, they cannot be applied to other problems without major alterations. Second, most methods have been designed to work with small databases which do not contain the variations in pose, illumination, occlusion and background clutter seen in ‘real world’ images. Consequently, many existing algorithms fail when tested on unconstrained databases. Finally, the complexity of the training procedure in these methods makes it impractical to use large datasets. In this thesis, we investigate patch-based models for object classes. Our models are capable of exploiting very large databases of objects captured in uncontrolled environments. We represent the test image with a regular grid of patches from a library of images of the same object. All the domain-specific information is held in this library: we use one set of images of the object to help draw inferences about others. In each experimental chapter we investigate a different within-object inference task. In particular we develop models for classification, regression, semantic segmentation and identity recognition. In each task, we achieve results that are comparable to or better than the state of the art. We conclude that patch-based representation can be successfully used for the above tasks and shows promise for other applications such as generation and localization.
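    The central representational move — cover the test image with a regular grid of patches and let a library of labelled patches carry all domain knowledge — can be sketched as a nearest-patch label transfer. This is a deliberately bare illustration: the thesis builds probabilistic models over the patch assignments, whereas the sketch below just copies the label of the closest library patch; sizes and the three-class labelling are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    P, N_LIB = 4, 200                 # patch side length, library size

    # Hypothetical patch library: each PxP patch is paired with a label
    # (e.g. a segmentation class). The library holds all the
    # domain-specific knowledge; nothing else is learned here.
    library = rng.random((N_LIB, P * P))
    labels = rng.integers(0, 3, size=N_LIB)

    def infer(test_image):
        """Cover the test image with a regular grid of PxP patches and
        transfer each patch's label from its nearest library patch."""
        H, W = test_image.shape
        out = np.empty((H // P, W // P), dtype=int)
        for i in range(H // P):
            for j in range(W // P):
                patch = test_image[i*P:(i+1)*P, j*P:(j+1)*P].ravel()
                d = np.linalg.norm(library - patch, axis=1)
                out[i, j] = labels[int(np.argmin(d))]
        return out

    test = rng.random((16, 16))
    grid_labels = infer(test)         # one transferred label per grid cell
    ```

    Because inference is a lookup against the library rather than a trained object-specific model, the same machinery serves classification, regression and segmentation by changing only what the library patches are labelled with — which is the flexibility the abstract claims over task-specific representations.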