7 research outputs found

    Development of a Face Recognition System Using a Hybrid Classification Method Based on Radial Basis Function Networks and Inductive Decision Trees

    Face recognition is a difficult task, largely because of the inherent variability of the image formation process, which ranges from settings in which the position/cropping of the face and its environment (distance and illumination) are totally controlled, to those involving little or no control over background and viewpoint, and which allows major changes in facial appearance due to factors such as expression, aging, and accessories such as glasses or changes in hairstyle. A solution is proposed using hybrid classification architectures that combine the robustness-via-consensus provided by ensembles of Radial Basis Function (RBF) networks with categorical classification using decision trees. The specific approach considers an ensemble of RBF networks for its ability to cope with variability in the image formation. Experiments were carried out on images drawn randomly from 50 unique subjects, totalling 500 facial images with ±5° rotation, encoded in greyscale. The faces are then normalized to account for geometric and illumination changes using information about the eye locations. The true-positive rate of Ensemble RBF1 (ERBF1) increased by approximately 13.86% relative to a single RBF network, and that of ERBF2 by approximately 15.93%; conversely, the false-negative rate decreased by approximately 5.8% for ERBF1 and slightly less, approximately 5.6%, for ERBF2. When the connectionist ERBF model is coupled with an inductive decision tree (C4.5), performance improves further over using the connectionist ERBF module alone.
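
    A minimal sketch of the hybrid idea in this abstract is given below, assuming NumPy and scikit-learn are available. K-means centre selection and a least-squares output layer stand in for the paper's RBF training, the ensemble is built by varying the kernel width, and an entropy-split CART tree replaces C4.5 (scikit-learn does not ship C4.5). All class and function names are illustrative, not the authors' implementation.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.tree import DecisionTreeClassifier

        class RBFNetwork:
            """Minimal RBF network: k-means centres, Gaussian hidden layer, least-squares output."""

            def __init__(self, n_centers=20, width=1.0):
                self.n_centers, self.width = n_centers, width

            def _hidden(self, X):
                # Gaussian activation of every sample with respect to every centre
                d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(axis=2)
                return np.exp(-d2 / (2.0 * self.width ** 2))

            def fit(self, X, y):
                self.classes_, y_idx = np.unique(y, return_inverse=True)
                self.centers_ = KMeans(n_clusters=self.n_centers, n_init=10).fit(X).cluster_centers_
                targets = np.eye(len(self.classes_))[y_idx]          # one-hot class targets
                self.W_, *_ = np.linalg.lstsq(self._hidden(X), targets, rcond=None)
                return self

            def scores(self, X):
                return self._hidden(X) @ self.W_

        def fit_hybrid(X, y, widths=(0.5, 1.0, 2.0)):
            # Ensemble of RBF networks with different kernel widths (an "ERBF" stand-in),
            # coupled with a decision tree that makes the final call from the stacked scores.
            nets = [RBFNetwork(n_centers=20, width=w).fit(X, y) for w in widths]
            stacked = np.hstack([net.scores(X) for net in nets])
            tree = DecisionTreeClassifier(criterion="entropy").fit(stacked, y)
            return nets, tree

        def predict_hybrid(nets, tree, X):
            stacked = np.hstack([net.scores(X) for net in nets])
            return tree.predict(stacked)

    Feeding the ensemble's class scores into the tree is one plausible way to "couple" the connectionist and inductive modules; the paper's exact coupling scheme may differ.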

    Skin texture features for face recognition

    Face recognition has been deployed in a wide range of important applications, including surveillance and forensic identification. However, it remains a challenging problem, as performance degrades severely under illumination, pose and expression variations, as well as with occlusions and aging. In this thesis, we investigate the use of local facial skin data as a source of biometric information to improve human recognition. Skin texture features are exploited in three major tasks: (i) improving the performance of conventional face recognition systems, (ii) building an adaptive skin-based face recognition system, and (iii) dealing with circumstances in which a full view of the face may not be available. Additionally, a fully automated scheme is presented for localizing the eyes and mouth and segmenting four facial regions: forehead, right cheek, left cheek and chin. These four regions are divided into non-overlapping patches of equal size. A novel skin/non-skin classifier is proposed for detecting patches containing only skin texture and thereby identifying the pure-skin regions. Experiments using the XM2VTS database indicate that the forehead region carries the most significant biometric information. The use of forehead texture features improves the rank-1 identification rate of the Eigenfaces system from 77.63% to 84.07%, and the rank-1 identification rate reaches 93.56% when this region is fused with the Kernel Direct Discriminant Analysis algorithm.
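
    The patch-extraction step described above can be illustrated with a short sketch. The fixed patch size and the variance threshold are assumptions; the threshold is only a crude placeholder for the thesis's dedicated skin/non-skin classifier.

        import numpy as np

        def extract_patches(region, patch=16):
            """Split a greyscale facial region (H x W array) into non-overlapping patch x patch blocks."""
            h, w = region.shape
            blocks = []
            for r in range(0, h - patch + 1, patch):
                for c in range(0, w - patch + 1, patch):
                    blocks.append(region[r:r + patch, c:c + patch])
            return blocks

        def pure_skin_patches(blocks, max_var=300.0):
            # Heuristic stand-in: smooth, low-variance blocks are kept as "pure skin";
            # blocks covering hair, eyebrows or glasses tend to show higher variance.
            return [b for b in blocks if b.var() < max_var]

        # Example usage on a hypothetical forehead crop:
        # forehead = face_image[20:60, 30:110]
        # skin_blocks = pure_skin_patches(extract_patches(forehead))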

    An Investigation Into the Use Of Partial-Faces for Face Recognition

    Even though numerous techniques for face recognition have been explored over the years, most research has focussed primarily on identification from full frontal/profile facial images. This paper presents a first systematic study assessing recognition performance when partial faces are used for identification. Our approach considers an ensemble of Radial Basis Function (RBF) networks; a specific advantage of using an ensemble is its ability to cope with the inherent variability in the image formation and data acquisition process. Our database consists of imagery corresponding to 150 unique subjects, totalling 3,000 facial images with ±5° rotation. Based on our experimental results, we observe that the average cross-validation performance is essentially the same when only half of the face image is used instead of the full face image: specifically, we obtain 96% with partial faces and 97% with full faces.
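
    A hedged sketch of the full-face versus half-face comparison follows. The image-array layout and the 1-nearest-neighbour baseline are assumptions chosen to keep the example self-contained; the paper itself uses an ensemble of RBF networks.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        def half_face(images):
            """Keep only the left half of each face image in an (N, H, W) array."""
            return images[:, :, : images.shape[2] // 2]

        def compare_full_vs_partial(images, labels, folds=5):
            # Cross-validated accuracy on full faces versus left half-faces,
            # using a simple 1-NN classifier on flattened pixel vectors.
            clf = KNeighborsClassifier(n_neighbors=1)
            full = images.reshape(len(images), -1)
            half = half_face(images).reshape(len(images), -1)
            acc_full = cross_val_score(clf, full, labels, cv=folds).mean()
            acc_half = cross_val_score(clf, half, labels, cv=folds).mean()
            return acc_full, acc_half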