
    Face Detection Technique Based on Skin Color and Facial Features

    Face detection is an essential first step in face recognition systems, with the purpose of localizing and extracting the face region from the background. Apart from increasing the efficiency of face recognition systems, face detection also opens up opportunities in application areas such as content-based image retrieval, video encoding, video conferencing, crowd surveillance and intelligent human-computer interfaces. This thesis presents the design of a face detection approach capable of detecting human faces against complex backgrounds. A skin color modeling process is adopted for face segmentation. Image enhancement is then used to improve each face candidate before it is fed to a face object classifier based on the Modified Hausdorff distance. The results indicate that the system is able to detect human faces with reasonable accuracy.
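    The abstract names the Modified Hausdorff distance as the basis of its face classifier but gives no implementation details; the following is a minimal, generic sketch of that distance (the Dubuisson-Jain variant) between two 2D point sets, such as the edge points of a face candidate and of a face template. It is an illustration, not the thesis's actual code.

```python
import numpy as np

def modified_hausdorff(a, b):
    """Modified Hausdorff distance between point sets a (N x 2) and b (M x 2).

    Replaces the max in the classical Hausdorff distance with the mean of
    nearest-neighbour distances in each direction, which is less sensitive
    to outlier points when matching a face candidate against a template.
    """
    # pairwise Euclidean distances between every point in a and every point in b
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1).mean()  # average distance from points of a to nearest point of b
    d_ba = d.min(axis=0).mean()  # average distance from points of b to nearest point of a
    return max(d_ab, d_ba)

# Usage: a candidate whose distance to the face template falls below a chosen
# threshold would be accepted as a face.
```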

    Composite Feature-Based Face Detection Using Skin Color Modeling and SVM Classification

    This report proposes a face detection algorithm based on skin color modeling and support vector machine (SVM) classification. The classification uses various facial features to detect specific faces in an input color image. The YCbCr color space is used to filter the skin color pixels from the input color image. Template matching is then applied to the result, using templates of various window sizes created from the ORL face database. The candidates obtained above are then classified by SVM classifiers using histogram of oriented gradients, eigen features, edge ratio, and edge statistics features.
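    As a rough illustration of the final stage described above, the sketch below trains an SVM on HOG descriptors of face/non-face patches and applies it to candidate windows produced by the skin-color and template-matching stages. It is a generic sketch under assumed tooling (scikit-image and scikit-learn); the HOG parameters and kernel choice are illustrative, not values from the report.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(gray_patches):
    """HOG descriptors for a batch of equally sized grayscale patches."""
    return np.array([
        hog(p, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for p in gray_patches
    ])

# Training on labelled patches (1 = face, 0 = non-face), e.g. crops from the ORL database:
#   clf = SVC(kernel="rbf").fit(hog_features(train_patches), train_labels)
# Classifying skin-color / template-matching candidates:
#   predictions = clf.predict(hog_features(candidate_patches))
```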

    An efficient color compensation scheme for skin color segmentation


    Evaluating Novel Mask-RCNN Architectures for Ear Mask Segmentation

    The human ear is generally universal, collectible, distinct, and permanent. Ear-based biometric recognition is a niche and recent approach that is being explored. For any ear-based biometric algorithm to perform well, ear detection and segmentation need to be performed accurately. While significant work has been done in the existing literature on bounding boxes, few approaches output a segmentation mask for ears. This paper trains and compares three newer models against the state-of-the-art Mask R-CNN (ResNet-101 + FPN) model across four different datasets. The reported Average Precision (AP) scores show that the newer models outperform the state-of-the-art, but no single model performs best across multiple datasets. Comment: Accepted into ICCBS 202
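    For orientation, the following is a minimal sketch of running an off-the-shelf Mask R-CNN instance-segmentation model and thresholding its soft masks, in the spirit of the baseline compared against in the paper. It assumes torchvision's pretrained COCO model with a ResNet-50 + FPN backbone; reproducing the paper's ResNet-101 + FPN baseline, or segmenting ears specifically, would require a custom backbone and fine-tuning on ear data.

```python
import torch
import torchvision

# Pretrained COCO Mask R-CNN (ResNet-50 + FPN backbone).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def predict_masks(image_tensor, score_threshold=0.5):
    """Instance segmentation for a single CHW float image with values in [0, 1]."""
    with torch.no_grad():
        output = model([image_tensor])[0]
    keep = output["scores"] > score_threshold
    # output["masks"] holds (N, 1, H, W) soft masks; threshold to get binary masks
    return output["masks"][keep] > 0.5, output["boxes"][keep], output["scores"][keep]
```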

    Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz

    The reconstruction of dense 3D models of face geometry and appearance from a single image is highly challenging and ill-posed. To constrain the problem, many approaches rely on strong priors, such as parametric face models learned from limited 3D scan data. However, such prior models restrict generalization to the true diversity of facial geometry, skin reflectance and illumination. To alleviate this problem, we present the first approach that jointly learns 1) a regressor for face shape, expression, reflectance and illumination on the basis of 2) a concurrently learned parametric face model. Our multi-level face model combines the advantage of 3D Morphable Models for regularization with the out-of-space generalization of a learned corrective space. We train end-to-end on in-the-wild images without dense annotations by fusing a convolutional encoder with a differentiable expert-designed renderer and a self-supervised training loss, both defined at multiple detail levels. Our approach compares favorably to the state-of-the-art in terms of reconstruction quality, better generalizes to real-world faces, and runs at over 250 Hz. Comment: CVPR 2018 (Oral). Project webpage: https://gvv.mpi-inf.mpg.de/projects/FML
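    The abstract describes a self-supervised training objective combining a differentiable renderer with losses defined at multiple detail levels. The snippet below is not the paper's actual objective; it is a heavily simplified, hypothetical sketch of what a multi-level self-supervised loss of this general shape can look like in PyTorch (photometric term, sparse landmark term, and a regularizer on the regressed model parameters), with made-up weights.

```python
import torch

def self_supervised_loss(levels, params, w_photo=1.0, w_lmk=0.1, w_reg=1e-4):
    """Illustrative multi-level objective (not the paper's formulation).

    `levels`: list of dicts, one per detail level, each holding the rendered
    image, the input image, projected model landmarks and detected 2D landmarks.
    `params`: regressed face model coefficients.
    """
    loss = w_reg * (params ** 2).mean()  # keep coefficients close to the prior
    for lvl in levels:
        photo = (lvl["rendered"] - lvl["image"]).abs().mean()          # photometric term
        lmk = ((lvl["pred_lmk"] - lvl["gt_lmk"]) ** 2).sum(-1).mean()  # landmark term
        loss = loss + w_photo * photo + w_lmk * lmk
    return loss
```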

    MODELING AND ANALYSIS OF WRINKLES ON AGING HUMAN FACES

    The analysis and modeling of aging human faces has been extensively studied in the past decade. Most of this work is based on machine learning techniques focused on the appearance of faces at different ages, incorporating facial features such as face shape/geometry and patch-based texture features. However, little work has been done on the analysis of facial wrinkles, either in general or specific to a person. The goal of this dissertation is to analyse and model facial wrinkles for different applications. Facial wrinkles are challenging low-level image features to analyse. In general, skin texture has drastically varying appearance due to its characteristic physical properties: a skin patch looks very different when viewed or illuminated from different angles. This makes subtle skin features like facial wrinkles difficult to detect in images acquired in uncontrolled imaging settings. In this dissertation, we examine the image properties of wrinkles, i.e. intensity gradients and geometric properties, and use them for several applications including low-level image processing for automatic detection/localization of wrinkles, soft biometrics, and removal of wrinkles using digital inpainting.
    First, we present results of detection/localization of wrinkles in images using a Marked Point Process (MPP). Wrinkles are modeled as sequences of line segments in a Bayesian framework which incorporates a prior probability model based on the likely geometric properties of wrinkles and a data likelihood term based on image intensity gradients. Wrinkles are localized by sampling the posterior probability using a Reversible Jump Markov Chain Monte Carlo (RJMCMC) algorithm. We also present an evaluation algorithm to quantitatively evaluate the detection and false alarm rates of our algorithm, and conduct experiments with images taken in uncontrolled settings. The MPP model, despite its promising localization results, requires a large number of RJMCMC iterations to reach the global minimum, resulting in considerable computation time. This motivated us to adopt a deterministic approach based on image morphology for fast localization of facial wrinkles. We propose image features based on Gabor filter banks to highlight subtle curvilinear discontinuities in skin texture caused by wrinkles. Image morphology is then used to incorporate geometric constraints, localizing the curvilinear shapes of wrinkles at image sites of large Gabor filter responses (see the sketch after this abstract). We conduct experiments on two sets of low- and high-resolution images to demonstrate faster and visually better localization results compared to those obtained by MPP modeling.
    As a next application, we investigate user-drawn and automatically detected wrinkle patterns for their discriminative power as a soft biometric, recognizing subjects from their wrinkle patterns only. A set of facial wrinkles from an image is treated as a curve pattern and used for subject recognition. Given the wrinkle patterns from query and gallery images, several distance measures are calculated between the two patterns to quantify their similarity. This is done by finding possible correspondences between curves from the two patterns using a simple bipartite graph matching algorithm; several metrics, based on the Hausdorff distance and curve-to-curve correspondences, are then used to calculate the similarity between the two wrinkle patterns. We conduct experiments on data sets of both hand-drawn and automatically detected wrinkles.
    Finally, we apply digital inpainting to automatically remove wrinkles from facial images. Digital image inpainting refers to filling in holes of arbitrary shapes in images so that they seem to be part of the original image. Inpainting methods target either the structure or the texture of an image, or both. There are two limitations of existing inpainting methods for the removal of wrinkles. First, the differing attributes of structure and texture require different inpainting methods, and facial wrinkles do not fall strictly into either category; they can be considered somewhere in between. Second, almost all image inpainting techniques are supervised, i.e. the area/gap to be filled is provided by user interaction and the algorithm attempts to find suitable image content automatically. We present an unsupervised image inpainting method where facial regions with wrinkles are detected automatically using their characteristic intensity gradients and removed by filling the regions with the surrounding skin texture.
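    The fast, deterministic localization step referenced above (Gabor filter banks followed by morphological cleanup) can be sketched roughly as follows. This is a generic illustration, not the dissertation's implementation; the filter-bank size, Gabor parameters and threshold are assumptions chosen only to show the idea.

```python
import cv2
import numpy as np

def wrinkle_response(gray, n_orientations=8, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
    """Maximum Gabor filter response over orientations, highlighting curvilinear skin discontinuities."""
    gray = gray.astype(np.float32) / 255.0
    response = np.zeros_like(gray)
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=0)
        response = np.maximum(response, cv2.filter2D(gray, cv2.CV_32F, kernel))
    return response

def localize_wrinkles(gray, threshold=0.3):
    """Threshold strong Gabor responses and keep thin elongated structures via morphology."""
    resp = wrinkle_response(gray)
    mask = (resp > threshold * resp.max()).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove isolated specks
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # bridge small gaps along wrinkles
    return mask
```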

    An Efficient Dorsal Hand Vein Recognition Based on Firefly Algorithm

    Biometric technology is an efficient personal authentication and identification technique. As one of its main-stream branches, dorsal hand vein recognition has recently attracted the attention of researchers. It is preferable to other types of biometrics because the patterns are impossible to steal or counterfeit, and the pattern of the vessels on the back of the hand is fixed and unique, with repeatable biometric features. However, recent research has not yet achieved a definitive recognition rate, because of noise in the imaged patterns, the impossibility of dimensionality reduction due to the simplicity of the models, and the need to prove the correctness of identification. Therefore, in this paper, images of the blood vessels on the back of people's hands are first analysed; after pre-processing of the images and feature extraction (at the intersections between the vessels), people are identified using a firefly clustering algorithm. This identification is based on the distance patterns between crossing vessels and their matching locations. Identification is performed by classifying each part of the NCUT data set, which consists of 2040 dorsal hand vein images. High speed in pattern recognition and low computational cost are the advantages of this method. The recognition rate of this method is more accurate, with an error of less than one percent. Finally, the correctness percentage of this method (CLU-D-F-A) for identification is compared with various other algorithms, and the superiority of the proposed method is demonstrated. DOI: http://dx.doi.org/10.11591/ijece.v3i1.176
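    The paper does not spell out the firefly clustering step, so the following is a hypothetical, minimal sketch of firefly-algorithm-based clustering applied to feature vectors (for example, distance patterns between vessel crossing points). Every name and parameter value here is an assumption made for illustration, not taken from the paper.

```python
import numpy as np

def wcss(centers, X):
    """Within-cluster sum of squares for cluster centers (k x d) over data X (n x d)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    return (d.min(axis=1) ** 2).sum()

def firefly_clustering(X, k=3, n_fireflies=20, iters=100,
                       alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    """Cluster rows of X by letting each firefly encode k candidate cluster centers."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # each firefly: k centers sampled from the data, flattened to a single vector
    flies = X[rng.integers(0, n, size=(n_fireflies, k))].reshape(n_fireflies, k * d).astype(float)
    cost = np.array([wcss(f.reshape(k, d), X) for f in flies])
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:  # firefly j is brighter (lower cost): move i toward j
                    r2 = np.sum((flies[i] - flies[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    flies[i] = flies[i] + beta * (flies[j] - flies[i]) \
                               + alpha * (rng.random(k * d) - 0.5)
                    cost[i] = wcss(flies[i].reshape(k, d), X)
    best = flies[cost.argmin()].reshape(k, d)
    labels = np.linalg.norm(X[:, None, :] - best[None, :, :], axis=-1).argmin(axis=1)
    return best, labels
```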

    Reconstruction of three-dimensional facial geometric features related to fetal alcohol syndrome using adult surrogates

    Fetal alcohol syndrome (FAS) is a condition caused by prenatal alcohol exposure. The diagnosis of FAS is based on the presence of central nervous system impairments, evidence of growth abnormalities and abnormal facial features. Direct anthropometry has traditionally been used to obtain facial data to assess the FAS facial features. Research efforts have focused on indirect anthropometry, such as 3D surface imaging systems, to collect facial data for facial analysis. However, 3D surface imaging systems are costly. As an alternative, approaches for 3D reconstruction from a single 2D image of the face using a 3D morphable model (3DMM) were explored in this research study. The research project was accomplished in several steps. 3D facial data were obtained from the publicly available BU-3DFE database, developed by the State University of New York. The 3D face scans in the training set were landmarked by different observers. The reliability and precision in selecting 3D landmarks were evaluated. The intraclass correlation coefficients for intra- and inter-observer reliability were greater than 0.95. The average intra-observer error was 0.26 mm and the average inter-observer error was 0.89 mm. A rigid registration was performed on the 3D face scans in the training set. Following rigid registration, a dense point-to-point correspondence across the set of aligned face scans was computed using the Gaussian process model fitting approach. A 3DMM of the face was constructed from the fully registered 3D face scans. The constructed 3DMM of the face was evaluated based on generalization, specificity, and compactness. The quantitative evaluations show that the constructed 3DMM achieves reliable results. 3D face reconstructions from single 2D images were estimated based on the 3DMM. The Metropolis-Hastings algorithm was used to fit the 3DMM features to 2D image features to generate the 3D face reconstruction. Finally, the geometric accuracy of the reconstructed 3D faces was evaluated against ground-truth 3D face scans. The average root mean square error for the surface-to-surface comparisons between the reconstructed faces and the ground-truth face scans was 2.99 mm. In conclusion, a framework to estimate 3D face reconstructions from single 2D facial images was developed and the reconstruction errors were evaluated. The geometric accuracy of the 3D face reconstructions was comparable to that found in the literature. However, future work should consider minimizing reconstruction errors to acceptable clinical standards in order for the framework to be useful for 3D-from-2D reconstruction in general, and also for developing FAS applications. Finally, future work should consider estimating a 3D face using multi-view 2D images to increase the information available for 3D-from-2D reconstruction.
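    To make the model-building step above concrete, here is a minimal sketch of constructing a PCA shape model from a set of fully registered face scans and instantiating new faces from it. It is a generic illustration, not the thesis's pipeline (which built the model within the Gaussian process model fitting framework); the 98% variance threshold and array shapes are assumptions.

```python
import numpy as np

def build_3dmm(registered_scans, variance_kept=0.98):
    """Build a simple PCA morphable model from fully registered scans.

    `registered_scans` has shape (n_scans, n_vertices, 3) with dense
    point-to-point correspondence, e.g. after rigid registration and
    dense correspondence estimation.
    """
    n_scans = registered_scans.shape[0]
    data = registered_scans.reshape(n_scans, -1)          # one row per scan
    mean_shape = data.mean(axis=0)
    centered = data - mean_shape
    # SVD of the centered data gives the principal shape components
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = (s ** 2) / (n_scans - 1)
    n_comp = int(np.searchsorted(np.cumsum(var) / var.sum(), variance_kept) + 1)
    return mean_shape, vt[:n_comp], np.sqrt(var[:n_comp])

def sample_face(mean_shape, components, stddevs, coeffs):
    """Instantiate a face as the mean plus coefficient-weighted principal components."""
    shape = mean_shape + (coeffs * stddevs) @ components
    return shape.reshape(-1, 3)
```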