
    Eyes extraction from facial images using edge density

    This paper proposes a novel method for eye extraction from facial images using edge density information. The method is based on the observation that, irrespective of skin colour, colour variation is greatest in the eye region. In the proposed method, edges are detected in the input facial image, morphological dilation is applied twice, and holes in the connected regions are filled. This makes regions of high edge density appear as blobs. Shape and geometric rules are then applied to these blobs to extract the eyes. The method was tested on images from the PICS facial images database. The accuracies of the initial blob extraction and the final eye extraction were 95% and 72% respectively.
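
    A minimal sketch of the pipeline described above, assuming OpenCV on a grayscale face crop; the Canny thresholds, kernel size, and blob-filtering rules are illustrative placeholders, not the paper's values.

    import cv2
    import numpy as np

    def eye_blob_candidates(face_gray):
        # 1. Detect edges in the facial image.
        edges = cv2.Canny(face_gray, 50, 150)
        # 2. Dilate twice so regions of high edge density merge into blobs.
        kernel = np.ones((3, 3), np.uint8)
        blobs = cv2.dilate(edges, kernel, iterations=2)
        # 3. Fill holes inside connected regions: flood-fill the background
        #    from (0, 0) (assumed background), then OR back the inverse.
        filled = blobs.copy()
        mask = np.zeros((blobs.shape[0] + 2, blobs.shape[1] + 2), np.uint8)
        cv2.floodFill(filled, mask, (0, 0), 255)
        blobs = blobs | cv2.bitwise_not(filled)
        # 4. Keep blobs whose shape plausibly matches an eye (placeholder
        #    rules: wider than tall, minimum area).
        n, _, stats, _ = cv2.connectedComponentsWithStats(blobs)
        candidates = []
        for i in range(1, n):
            x, y, w, h, area = stats[i]
            if 1.2 < w / max(h, 1) < 4.0 and area > 30:
                candidates.append((x, y, w, h))
        return candidates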

    A simple and efficient eye detection method in color images

    In this paper we propose a simple and efficient eye detection method for face detection tasks in color images. The algorithm first detects face regions in the image using a skin color model in the normalized RGB color space. Eye candidates are then extracted within these regions. Finally, using the anthropological characteristics of human eyes, pairs of eye regions are selected. The proposed method is simple and fast, since it needs no template matching step for face verification, and it is robust because it can deal with face rotation. Experimental results show the validity of our approach: a correct eye detection rate of 98.4% is achieved on a subset of the AR face database.
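
    A hedged sketch of the first stage, skin segmentation in normalized RGB, where r = R/(R+G+B) and g = G/(R+G+B). The box thresholds below are illustrative assumptions, not the paper's skin model.

    import numpy as np

    def skin_mask(bgr):
        # bgr: H x W x 3 uint8 image (OpenCV channel order).
        f = bgr.astype(np.float32)
        b, g, r = f[..., 0], f[..., 1], f[..., 2]
        s = b + g + r + 1e-6                 # avoid division by zero
        rn, gn = r / s, g / s                # normalized chromaticities
        # Placeholder box in (rn, gn) chromaticity space for skin pixels.
        skin = (rn > 0.36) & (rn < 0.47) & (gn > 0.28) & (gn < 0.37)
        return skin.astype(np.uint8) * 255

    Because the chromaticities divide out overall brightness, a box rule like this is largely insensitive to illumination intensity, which is the usual motivation for working in normalized RGB.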

    A generic face processing framework: technologies, analyses and applications.

    Jang Kim-fung. Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 108-124). Abstracts in English and Chinese.

    Contents:
    Chapter 1 - Introduction
        1.1 Background
        1.2 Introduction about Face Processing Framework
            1.2.1 Basic architecture
            1.2.2 Face detection
            1.2.3 Face tracking
            1.2.4 Face recognition
        1.3 The scope and contributions of the thesis
        1.4 The outline of the thesis
    Chapter 2 - Facial Feature Representation
        2.1 Facial feature analysis
            2.1.1 Pixel information
            2.1.2 Geometry information
        2.2 Extracting and coding of facial feature
            2.2.1 Face recognition
            2.2.2 Facial expression classification
            2.2.3 Other related work
        2.3 Discussion about facial feature
            2.3.1 Performance evaluation for face recognition
            2.3.2 Evolution of the face recognition
            2.3.3 Evaluation of two state-of-the-art face recognition methods
        2.4 Problem for current situation
    Chapter 3 - Face Detection Algorithms and Committee Machine
        3.1 Introduction about face detection
        3.2 Face Detection Committee Machine
            3.2.1 Review of three approaches for committee machine
            3.2.2 The approach of FDCM
        3.3 Evaluation
    Chapter 4 - Facial Feature Localization
        4.1 Algorithm for gray-scale image: template matching and separability filter
            4.1.1 Position of face and eye region
            4.1.2 Position of irises
            4.1.3 Position of lip
        4.2 Algorithm for color image: eyemap and separability filter
            4.2.1 Position of eye candidates
            4.2.2 Position of mouth candidates
            4.2.3 Selection of face candidates by cost function
        4.3 Evaluation
            4.3.1 Algorithm for gray-scale image
            4.3.2 Algorithm for color image
    Chapter 5 - Face Processing System
        5.1 System architecture and limitations
        5.2 Pre-processing module
            5.2.1 Ellipse color model
        5.3 Face detection module
            5.3.1 Choosing the classifier
            5.3.2 Verifying the candidate region
        5.4 Face tracking module
            5.4.1 Condensation algorithm
            5.4.2 Tracking the region using Hue color model
        5.5 Face recognition module
            5.5.1 Normalization
            5.5.2 Recognition
        5.6 Applications
    Chapter 6 - Conclusion
    Bibliography

    Automatic method for detection of characteristic areas in thermal face images

    The use of thermal images of a selected area of the head in screening systems, which perform fast and accurate analysis of the temperature distribution across individual areas, requires profiled image analysis methods. Methods for automated face analysis already exist and are used at airports and train stations to detect people with fever, but they do not enable automatic separation of specific areas of the face. This paper presents an image analysis algorithm that localizes characteristic areas of the face in thermograms. The algorithm is robust to inter-subject variability as well as to changes in the position and orientation of the head. In addition, an attempt was made to eliminate the impact of the background and of interference caused by hair and the hairline. The algorithm automatically adjusts its operating parameters to the prevailing room conditions. Compared to previous studies (Marzec et al., J Med Inform Tech 16:151–159, 2010), the set of thermal images was expanded by 34 images, so the research material comprised thermograms of 125 patients acquired in the Department of Pediatrics and Child and Adolescent Neurology in Katowice, Poland. The images were taken interchangeably with several thermal cameras: an AGEMA 590 PAL (sensitivity 0.1 °C), a ThermaCam S65 (sensitivity 0.08 °C), an A310 (sensitivity 0.05 °C) and a T335 (sensitivity 0.05 °C), each with a 320×240-pixel detector resolution, observing the principles of medical thermographic imaging. Relative to Marzec et al. (2010), the approach presented there has been extended and modified. A comparison with other methods in the literature shows that this method is more comprehensive, as it determines the approximate areas of selected parts of the face while taking anthropometry into account, and it achieves better localization accuracy for the centers of the eye sockets and nostrils: 87% for the eyes and 93% for the nostrils.
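
    As an illustration of parameter self-adjustment, here is a minimal sketch of one plausible first step, separating the warm head from the cooler background; this is an assumed approach for illustration, not the authors' published algorithm. Otsu's method picks the threshold from the image histogram itself, so the split adapts to ambient room temperature.

    import cv2
    import numpy as np

    def head_mask(thermogram_8bit):
        # Automatically chosen (Otsu) threshold separates warm face pixels
        # from the cooler background, whatever the room conditions.
        _, mask = cv2.threshold(thermogram_8bit, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # Keep only the largest warm component as the head region.
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        if n <= 1:
            return mask
        largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
        return np.where(labels == largest, 255, 0).astype(np.uint8)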

    HUMAN FACE RECOGNITION BASED ON FRACTAL IMAGE CODING

    Human face recognition is an important area in the field of biometrics. It has been an active area of research for several decades, but remains challenging because of the complexity of the human face. In this thesis we describe fully automatic solutions that can locate faces and then perform identification and verification.

    We present a solution for face localisation using eye locations. We derive an efficient representation for the decision hyperplane of linear and nonlinear Support Vector Machines (SVMs), introducing the novel concept of ρ and η prototypes. The standard formulation of the decision hyperplane is reformulated and expressed in terms of these two prototypes. Different kernels are treated separately to achieve further classification efficiency and to facilitate adaptation to the fast Fourier transform for fast eye detection. Using the eye locations, we extract and normalise the face for size and in-plane rotation. Our method produces a more efficient representation of the SVM decision hyperplane than the well-known reduced set methods; as a result, our eye detection subsystem is faster and more accurate.

    The use of fractals and fractal image coding for object recognition has been proposed and used by others. Fractal codes have been used as features for recognition, but this requires accounting for the distance between codes and ensuring the continuity of the code parameters. We instead use a method based on fractal image coding which we call the Fractal Neighbour Distance (FND). The FND relies on the Euclidean metric and on the uniqueness of the attractor of a fractal code. An advantage of the FND over fractal codes as features is that we need not worry about the uniqueness of, and distance between, codes; we only require the uniqueness of the attractor, which is already an implied property of a properly generated fractal code. Similar methods to the FND have been proposed by others, but what distinguishes our work is that we investigate the FND in greater detail and use our findings to improve the recognition rate.

    Our investigations reveal that the FND has some inherent invariance to translation, scale, rotation and changes in illumination. These invariances are image dependent and are affected by the fractal encoding parameters. The parameters with the greatest effect on recognition accuracy are the contrast scaling factor, the luminance shift factor and the type of range block partitioning. The contrast scaling factor affects the convergence, and the eventual convergence rate, of the fractal decoding process. We propose a novel method of controlling the convergence rate by altering the contrast scaling factor in a controlled manner, which has not been possible before. This improved the recognition rate because, under certain conditions, better results are achievable with a slower rate of convergence. We also investigate the effects of varying the luminance shift factor, and examine three types of range block partitioning: quad-tree, HV and uniform. We performed experiments on various face datasets, and the results show that our method performs better than many accepted methods such as eigenfaces. The experiments also show that the FND-based classifier increases the separation between classes.

    The standard FND is further improved by incorporating localised weights. A local search algorithm is introduced to find the best-matching local feature using this locally weighted FND. The scores from a set of these locally weighted FND operations are combined into a global score, which serves as a measure of the similarity between two face images. Each local FND operation possesses the distortion-invariant properties described above; combined with the search procedure, the method has the potential to be invariant to a larger class of non-linear distortions. We also present a set of locally weighted FNDs concentrated around the upper part of the face, encompassing the eyes and nose; this design was motivated by the fact that the region around the eyes carries more information for discrimination. Better performance is achieved by using different sets of weights for identification and verification.

    For facial verification, performance is further improved by using normalised scores and client-specific thresholding. In this case our results are competitive with current state-of-the-art methods, and in some cases outperform all those to which they were compared. For facial identification, the weighted FND performs better than the standard FND under some conditions; however, it still has shortcomings on some datasets, where its performance is not much better than the standard FND. To alleviate this problem we introduce a voting scheme that operates on normalised versions of the weighted FND. Although this method brings no improvement at lower matching ranks, it yields significant improvements at larger matching ranks.

    Our methods offer advantages over some well-accepted approaches such as eigenfaces, neural networks and methods based on statistical learning theory: new faces can be enrolled without re-training on the whole database; faces can be removed from the database without re-training; there are inherent invariances to face distortions; the method is relatively simple to implement; and it is not model-based, so there are no model parameters to tweak.
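
    A compact sketch of the FND idea under stated assumptions: a gallery image is fractal-encoded as per-range-block affine maps (best domain block plus contrast scale s and luminance shift o), and the distance from gallery to probe is how much a single application of the gallery's transform changes the probe, since an image near the attractor changes little. The block size, domain search grid, and the clamp on s are illustrative choices, not the thesis' parameters, and image sides are assumed to be multiples of the block size.

    import numpy as np

    B = 8  # range block size; domain blocks are 2B x 2B, downsampled to B x B

    def downsample(block):
        # Average 2x2 cells so a 2B x 2B domain block matches a B x B range block.
        return block.reshape(B, 2, B, 2).mean(axis=(1, 3))

    def encode(img):
        # For each B x B range block, find the domain block and affine map
        # (s, o) that best reproduce it in the least-squares sense.
        h, w = img.shape
        img = img.astype(np.float64)
        domains = [(y, x) for y in range(0, h - 2 * B + 1, B)
                          for x in range(0, w - 2 * B + 1, B)]
        code = []
        for ry in range(0, h, B):
            for rx in range(0, w, B):
                r = img[ry:ry + B, rx:rx + B]
                best = None
                for dy, dx in domains:
                    d = downsample(img[dy:dy + 2 * B, dx:dx + 2 * B])
                    var = d.var()
                    s = 0.0 if var == 0 else ((d - d.mean()) * (r - r.mean())).mean() / var
                    s = float(np.clip(s, -0.9, 0.9))  # keep the map contractive
                    o = r.mean() - s * d.mean()
                    err = ((s * d + o - r) ** 2).sum()
                    if best is None or err < best[0]:
                        best = (err, dy, dx, s, o)
                code.append((ry, rx) + best[1:])
        return code

    def apply_code(code, img):
        # One decoding iteration: apply every block map of `code` to `img`.
        img = img.astype(np.float64)
        out = np.empty_like(img)
        for ry, rx, dy, dx, s, o in code:
            out[ry:ry + B, rx:rx + B] = s * downsample(img[dy:dy + 2 * B, dx:dx + 2 * B]) + o
        return out

    def fnd(gallery_code, probe):
        # A probe close to the gallery's attractor is barely changed by one
        # application of the gallery's fractal transform (Euclidean metric).
        return np.linalg.norm(apply_code(gallery_code, probe) - probe)

    For recognition, a probe would be compared against the stored fractal code of each enrolled face and assigned to the nearest one, which is consistent with the enrolment-without-retraining advantage claimed above: adding or removing a face only adds or removes a stored code.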

    Facial Feature Extraction Using a 4D Stereo Camera System

    Facial feature recognition has received much attention among researchers in computer vision. This paper presents a new approach to facial feature extraction. The work comprises two stages: face acquisition and feature extraction. Face acquisition is performed with a 4D stereo camera system from Dimensional Imaging, with the data available in 'obj' files generated by the camera system. The second stage extracts the important facial features. The algorithm developed for this purpose is inspired by the natural biological shape and structure of the human face. The accuracy of identifying the facial points is demonstrated through simulation results. The algorithm is able to identify the tip of the nose, the point where the nose meets the forehead, and the near (inner) corners of both eyes in the faces acquired by the camera system.
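
    An illustrative sketch, not the paper's algorithm: reading vertex positions from the camera system's 'obj' output and taking the most protruding vertex along the viewing axis as a first guess at the nose tip. The file name and the assumption that +z points toward the camera are hypothetical.

    import numpy as np

    def load_obj_vertices(path):
        # Collect 'v x y z' geometric vertex records from a Wavefront .obj file.
        verts = []
        with open(path) as f:
            for line in f:
                if line.startswith("v "):
                    verts.append([float(t) for t in line.split()[1:4]])
        return np.array(verts)

    verts = load_obj_vertices("face_scan.obj")  # hypothetical file name
    nose_tip = verts[np.argmax(verts[:, 2])]    # most protruding vertex, +z assumed toward camera
    print("estimated nose tip:", nose_tip)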

    Drunk Selfie Detection

    The goal of this project was to extract key features from photographs of faces and use machine learning to classify subjects as either sober or drunk. To do this, we analyzed photographs of 53 subjects taken after they drank wine and extracted key features that we used to classify drunkenness. Using random forest machine learning we achieved 81% accuracy. We also built an Android application that uses our classifiers to estimate a subject's drunkenness from a selfie.
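
    A minimal sketch of the classification stage, assuming the facial features have already been extracted into a numeric matrix; the file names are hypothetical, and scikit-learn's RandomForestClassifier stands in for the project's random forest.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X = np.load("face_features.npy")  # hypothetical: one row of extracted features per photo
    y = np.load("labels.npy")         # hypothetical: 0 = sober, 1 = drunk

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.0%}")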