
    Human face recognition under degraded conditions

    This work presents comparative studies of state-of-the-art feature extraction and classification techniques for human face recognition under low resolution, and evaluates the effect of resolution enhancement using interpolation techniques. A gradient-based, illumination-insensitive preprocessing technique is proposed that uses the ratio between the gradient magnitude and the current intensity level of the image, which remains stable even under severe lighting effects. A combination of multi-scale Weber analysis and enhanced DD-DT-CWT is also shown to be noticeably stable under illumination variation, and applying illumination-insensitive image descriptors to the preprocessed image yields further robustness to lighting effects. The proposed block-based face analysis reduces the effect of occlusion by assigning different weights to image subblocks, according to their discrimination power, in score- or decision-level fusion. In addition, a hierarchical structure of global and block-based techniques is proposed to improve recognition accuracy when several image degradations occur together; the complementary performance of global and local techniques leads to a considerable improvement in face recognition accuracy. The effectiveness of the proposed algorithms is evaluated on the Extended Yale B, AR, CMU Multi-PIE, LFW, FERET and FRGC databases, using a large number of images under different degradation conditions. The experimental results show improved performance on poorly illuminated, expressive and occluded images.
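    The gradient-to-intensity ratio described above can be sketched in a few lines. This is a minimal illustration of the general idea, not the authors' implementation: under a multiplicative lighting model I = R * L with slowly varying L, dividing the gradient magnitude by the local intensity approximately cancels L.

```python
import numpy as np

def gradient_ratio_preprocess(img, eps=1e-6):
    """Illumination-insensitive map: gradient magnitude divided by the
    local intensity. The small eps (an assumption of this sketch)
    guards against division by zero in dark regions."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)          # central differences
    magnitude = np.hypot(gx, gy)
    return magnitude / (img + eps)

# Scaling the illumination by a global gain leaves the descriptor
# (nearly) unchanged, which is the insensitivity property claimed above:
rng = np.random.default_rng(0)
face = rng.uniform(0.2, 1.0, size=(8, 8))
d1 = gradient_ratio_preprocess(face)
d2 = gradient_ratio_preprocess(2.0 * face)
print(np.allclose(d1, d2))  # → True: the ratio cancels the gain
```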

    Illumination tolerance in facial recognition

    In this research, five preprocessing techniques were evaluated with two classifiers to find the best preprocessor + classifier combination for building an illumination-tolerant face recognition system. A face recognition system is proposed based on illumination normalization techniques and linear subspace models, using two distance metrics on three challenging databases: the CAS-PEAL database, the Extended Yale B database, and the AT&T database. The research takes the form of experimentation and analysis in which five illumination normalization techniques were compared using two distance metrics; the performance and execution time of each technique were recorded to measure accuracy and efficiency. The illumination normalization techniques were Gamma Intensity Correction (GIC), Discrete Cosine Transform (DCT), Histogram Remapping with the Normal distribution (HRN), Histogram Remapping with the Log-normal distribution (HRL), and Anisotropic Smoothing (AS). The linear subspace models were Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), and the two distance metrics were the Euclidean and cosine distances. The results showed that for databases with both illumination (shadow) and lighting (over-exposure) variations, such as CAS-PEAL, histogram remapping with the normal distribution performed best when the cosine distance was used as the classifier, achieving a 65% recognition rate at 15.8 ms/image. For databases with pure illumination variation, such as the Extended Yale B database, Gamma Intensity Correction combined with the Euclidean distance metric gave the most accurate result, with 95.4% recognition accuracy at 1 ms/image.
    The experiments further showed that the cosine distance produces more accurate results than the Euclidean distance metric, whereas the Euclidean distance was faster in all experiments conducted.
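    A minimal sketch of Gamma Intensity Correction as described above: find the gamma that best maps the input toward a canonically illuminated reference, then apply it globally. The grid of candidate gammas and the mean-squared-error criterion are illustrative assumptions of this sketch, not details from the study.

```python
import numpy as np

def gamma_intensity_correction(img, reference, gammas=np.linspace(0.2, 5.0, 97)):
    """Search a grid of gammas for the one minimizing the MSE between
    img**(1/gamma) and the reference, then apply I' = I**(1/gamma).
    Pixel values are assumed normalized to (0, 1]."""
    img = np.clip(np.asarray(img, dtype=np.float64), 1e-6, 1.0)
    best_gamma, best_err = 1.0, np.inf
    for g in gammas:
        err = np.mean((img ** (1.0 / g) - reference) ** 2)
        if err < best_err:
            best_gamma, best_err = g, err
    return img ** (1.0 / best_gamma), best_gamma

# Synthetic "under-exposed" probe: the reference raised to the power 2.
rng = np.random.default_rng(0)
reference = rng.uniform(0.1, 0.9, size=(16, 16))
probe = reference ** 2.0               # gamma-distorted version
corrected, gamma = gamma_intensity_correction(probe, reference)
print(round(gamma, 2))  # → 2.0: the distorting gamma is recovered
```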

    Unfamiliar facial identity registration and recognition performance enhancement

    The work in this thesis studies the robustness of a face recognition system, with specific attention to handling image variation complexity and the inherently limited Unique Characteristic Information (UCI) available in an unfamiliar-identity recognition setting. These issues form the main themes in developing a mutual understanding of extraction and classification strategies, and are carried out as two interdependent blocks of research. The complexity of the image variation problem arises from factors including viewing geometry, illumination, occlusion and other intrinsic and extrinsic image variations. Ideally, recognition performance increases whenever variation is reduced and/or the UCI is increased. However, reducing variation in 2D facial images may discard important clues or UCI for a particular face, while increasing the UCI may also increase the image variation. To limit the loss of information while reducing or compensating for variation complexity, a hybrid technique is proposed in this thesis, derived from three conventional approaches to variation compensation and feature extraction. In the first research block, transformation, modelling and compensation approaches are combined to deal with the variation complexity. The aim of this combination is to represent (transformation) the UCI without losing important features, and to model and discard (compensation) the variation so as to reduce its complexity in a given face image. Experimental results show that discarding certain obvious variations enhances the desired information without sacrificing the UCI of interest; the modelling and compensation stages benefit both variation reduction and UCI enhancement.
    Colour, gray-level and edge information are used to manipulate the UCI through analysis of skin colour, facial texture and feature measurements, respectively. The Derivative Linear Binary Transformation (DLBT) technique is proposed to make the feature measurements consistent. Prior knowledge of the input image, its symmetry properties, informative regions and the consistency of some features, is fully exploited to preserve the UCI. As a result, similarity and dissimilarity representations for identity parameters or classes are obtained from the selected UCI representation, which involves derivative feature size and distance measurements, facial texture and skin colour. These representations support the strategy for unfamiliar-identity classification in the second block of the research. Since all faces share a similar structure, the classification technique should increase similarity within a class while increasing dissimilarity between classes; moreover, smaller classes place less burden on the identification or recognition process. The collateral classification strategy introduced in this thesis exploits the available collateral UCI to classify the identity parameters of regional appearance, gender and age. The collateral UCIs are registered so as to collect as much identity information as possible. As a result, the performance of unfamiliar-identity recognition is improved through the class-specific UCI and the smaller class sizes. The experiments used data from our own database and an open database comprising three regional appearances, two age groups and two genders, incorporating pose and illumination image variations.

    Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), making face recognition more difficult than in the normal scenario. In contemporary face recognition systems, the original gray-level face image is usually used as input to the Gabor descriptor, which encodes texture properties of the face image. This texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery, owing to the surgically induced intra-subject variations. Based on the proposition that the shapes of significant facial components such as the eyes, nose, eyebrows, and mouth remain unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation for the recognition of surgically altered face images. The edge information, which depends on the shapes of the significant facial components, is used to address the texture-variation problems induced by plastic surgery. To ensure that the significant facial components yield useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is then applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and Labeled Faces in the Wild (LFW) databases, which exhibit illumination and expression problems, and on a plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation is robust against plastic surgery-induced face variations amidst expression and illumination problems, and outperforms the existing plastic surgery face recognition methods reported in the literature.
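    The edge-then-Gabor pipeline can be sketched with standard building blocks. This is a hedged illustration of the general approach (Sobel edge extraction followed by Gabor filtering); kernel sizes, the Gabor parameters and the toy image are all assumptions, not the paper's settings.

```python
import numpy as np

def conv2_same(img, kernel):
    """Direct 2-D convolution with edge padding ('same' output size)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode='edge')
    flipped = kernel[::-1, ::-1]
    out = np.empty(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

def sobel_edges(img):
    """Edge image: gradient magnitude from Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    return np.hypot(conv2_same(img, kx), conv2_same(img, kx.T))

def gabor_real(ksize=15, sigma=3.0, theta=0.0, lam=8.0):
    """Real part of a Gabor kernel at orientation theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

# Toy "face" with one vertical luminance step; the edge image depends on
# shape rather than absolute gray level, which is the point of the approach.
img = np.zeros((32, 32)); img[:, 16:] = 1.0
edges = sobel_edges(img)
resp_v = conv2_same(edges, gabor_real(theta=0.0))        # tuned to vertical structure
resp_h = conv2_same(edges, gabor_real(theta=np.pi / 2))  # tuned to horizontal structure
print(np.abs(resp_v).mean() > np.abs(resp_h).mean())  # → True
```

    In a full system, responses at several orientations and scales would be concatenated into a feature vector per face; the sketch shows only why the vertically oriented filter dominates on a vertical edge.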

    Face Recognition Under Varying Illumination

    Thesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 2006. This thesis proposes a novel approach to building a face recognition system robust to illumination variation, for the case where only one image per person is available during training. Given the known superiority of Linear Discriminant Analysis (LDA) over Principal Component Analysis (PCA) under variable illumination, LDA is used to improve the system's performance. To solve the Small Sample Size (SSS) problem associated with class-based discriminant approaches, an image synthesis method based on the successful Quotient Image technique is applied to create an image space for each input image. Furthermore, an iterative algorithm is used to restore frontal illumination to a face illuminated from an arbitrary angle.
    Experimental results on the YaleB database show that this approach achieves a top recognition rate compared with existing methods.
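    The illumination-cancelling idea behind the Quotient Image can be illustrated with its simpler self-quotient variant: dividing the image by a smooth estimate of its illumination. The Gaussian-blur illumination estimate below is an assumption of this sketch; the original Quotient Image method instead uses a bootstrap set of basis images, which this sketch does not model.

```python
import numpy as np

def self_quotient_image(img, sigma=3.0, eps=1e-6):
    """Divide the image by a Gaussian-blurred copy of itself, so that a
    smooth, slowly varying illumination field approximately cancels."""
    half = int(3 * sigma)
    x = np.arange(-half, half + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    # separable blur: filter rows, then columns, with edge padding
    blur = lambda a: np.convolve(np.pad(a, half, mode='edge'), g, 'valid')
    blurred = np.apply_along_axis(blur, 1, img)
    blurred = np.apply_along_axis(blur, 0, blurred)
    return img / (blurred + eps)

# Scaling the lighting by a constant leaves the quotient (nearly) unchanged:
rng = np.random.default_rng(0)
face = rng.uniform(0.2, 1.0, size=(24, 24))
q1 = self_quotient_image(face)
q2 = self_quotient_image(3.0 * face)
print(np.allclose(q1, q2, atol=1e-4))  # → True
```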

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.

    Edge- and region-based processes of 2nd-order vision

    The human visual system is sensitive to 2nd-order image properties (often called texture properties). Spatial gradients in certain 2nd-order properties are edge-based, in that contours are effortlessly perceived through a rapid segmentation process. Others, however, are region-based, in that they require regional integration in order to be discriminated. The five studies reported in this thesis explore these mechanisms of 2nd-order vision, referred to respectively as segmentation and discrimination. Study one compares the segmentation and discrimination of 2nd-order stimuli and uses flicker-defined-form to demonstrate that the former may be subserved by phase-insensitive mechanisms. In study two, through testing of a neuropsychological patient, it is shown that 2nd-order segmentation is achieved relatively early in the visual system and, contrary to some claims, does not require the region termed human “V4”. Study three demonstrates, through selective adaptation aftereffects, that orientation variance (a 2nd-order regional property) is encoded by a dedicated mechanism tuned broadly to high and low variance and insensitive to low-level pattern information. Furthermore, the finding that the variance-specific aftereffect is limited to a retinotopic (not spatiotopic) reference frame, and that a neuropsychological patient with mid- to high-level visual cortical damage retains some sensitivity to variance, suggests that this regional property may be encoded at an earlier cortical site than previously assumed. Study four examines how cues from different 2nd-order channels are temporally integrated to allow cue-invariant segmentation. Results from testing a patient with bilateral lateral occipital damage and from selective visual field testing in normal observers suggest that this is achieved prior to the level of lateral occipital complex, but at least at the level of V2. 
    The final study demonstrates that objects segmented rapidly by 2nd-order channels are processed at a sufficiently high cortical level to allow object-based attention without those objects ever reaching awareness.
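    The 1st-order/2nd-order distinction at the heart of these studies can be made concrete by constructing the two kinds of stimuli. In this hedged sketch (all parameters illustrative), a 1st-order grating is defined by luminance differences, while a 2nd-order grating modulates the local contrast of a noise carrier and leaves mean luminance flat, so only a contrast-sensitive mechanism can see its stripes.

```python
import numpy as np

rng = np.random.default_rng(1)
size = 128
phase = np.linspace(0, 4 * np.pi, size)

# 1st-order: luminance-defined grating (mean luminance varies across x)
first_order = 0.5 + 0.25 * np.sin(phase)[None, :] * np.ones((size, 1))

# 2nd-order: contrast-defined grating. A broadband noise carrier is
# multiplied by a sinusoidal contrast envelope; mean luminance stays
# near 0.5 everywhere, so the stripes are invisible to a purely
# luminance-based (1st-order) mechanism.
carrier = rng.uniform(-1.0, 1.0, size=(size, size))
envelope = 0.5 * (1 + np.sin(phase))[None, :]
second_order = 0.5 + 0.4 * envelope * carrier

hi_c = int(np.argmax(envelope[0]))   # column of maximal contrast
lo_c = int(np.argmin(envelope[0]))   # column of minimal contrast
print(second_order[:, hi_c].std() > second_order[:, lo_c].std())  # → True
```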

    Spaceborne synthetic-aperture imaging radars: Applications, techniques, and technology

    In the last four years, the first two Earth-orbiting, spaceborne synthetic-aperture imaging radars (SAR) were successfully developed and operated, a major achievement in the development of spaceborne radar sensors and ground processors. The data acquired with these sensors extended the capability of Earth-resources and ocean-surface observation into a new region of the electromagnetic spectrum. This paper reviews the different aspects of spaceborne imaging radars, including: 1) the unique characteristics of spaceborne SAR systems; 2) the state of the art in spaceborne SAR hardware and in SAR optical and digital processors; 3) the different data-handling techniques; and 4) the different applications of spaceborne SAR data.
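    The core operation inside the digital SAR processors mentioned above is matched-filter compression of a linear-FM (chirp) pulse. The following is an illustrative sketch only; the pulse parameters and target geometry are made up for the demo and are not taken from any spaceborne system.

```python
import numpy as np

fs = 1.0e6                     # complex sample rate, Hz (illustrative)
T = 1.0e-3                     # pulse duration, s
K = 4.0e8                      # chirp rate, Hz/s (bandwidth K*T = 400 kHz)
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * K * t ** 2)      # transmitted linear-FM pulse

echo = np.zeros(4096, dtype=complex)
echo[1200:1200 + chirp.size] += chirp         # one point target, delay = 1200 samples

# Matched filter: cross-correlate the echo with the pulse via the FFT.
n = echo.size + chirp.size - 1
spectrum = np.fft.fft(echo, n) * np.conj(np.fft.fft(chirp, n))
compressed = np.fft.ifft(spectrum)
peak = int(np.argmax(np.abs(compressed)))
print(peak)  # → 1200: the 1000-sample pulse collapses to the target delay
```

    This is why SAR can use long, low-peak-power pulses: the energy spread over the pulse duration is concentrated back into a narrow peak at the target's delay during processing.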