
    Automatic Recognition of African Bust using Modified Principal Component Analysis (MPCA)

    This study identified and analysed the pattern-recognition features of African busts. It also developed and evaluated a Modified Principal Component Analysis (MPCA) for recognizing those features, with a view to providing a robust approach to the recognition of African busts. The developed MPCA used a varying number of eigenvectors in creating the bust space. The characteristics of the busts in terms of facial dimensions, types of marks, and the structure of facial components such as the eyes, mouth, and chin were analysed for identification. The bust images were resized for proper reshaping and cropped to adjust their backgrounds using Microsoft Office Picture Manager. The system code was developed and run on Matrix Laboratory software (MATLAB 7.0). The use of varying numbers of eigenvectors yielded positive results in the system evaluation. For instance, a sensitivity test revealed that thirteen of the seventeen bust images were recognized when only the vectors with the highest eigenvalues were selected, while all the test images were recognized once some vectors of low energy level were included. That is, the modification made to the conventional PCA (i.e., the eigenface algorithm) gave rise to an increase of about twenty-five percent (25%) in recognition of the test images. The study concluded that the modification made to the conventional PCA showed very good performance with respect to the parameters involved. The performance of the MPCA was justified by the identification of all the test images; that is, the MPCA proved more efficient than the conventional PCA technique, especially for the recognition of features of African busts. Keywords: Eigenvectors, Bust recognition, Modified Principal Component Analysis Technique (MPCA), African Bust
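    The abstract's central idea — an eigenface-style recognizer whose retained eigenvector count is varied — can be sketched as follows. This is a minimal illustration, not the paper's MATLAB implementation: the random arrays stand in for the 17 bust images, and the noise level, image size, and function names are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for the bust image set: 17 flattened grayscale
    # images (the paper's actual data and MATLAB code are not available here).
    n_images, n_pixels = 17, 64 * 64
    X = rng.random((n_images, n_pixels))

    # Eigenface-style PCA: centre the data and obtain the basis of the
    # "bust space" via SVD. Rows of Vt are the eigenvectors, ordered by
    # decreasing eigenvalue (energy).
    mean_face = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean_face, full_matrices=False)

    def project(images, k):
        """Project images onto the first k eigenvectors."""
        return (images - mean_face) @ Vt[:k].T

    def recognise(probe, gallery, k):
        """Nearest-neighbour match in the k-dimensional bust space."""
        d = np.linalg.norm(project(gallery, k) - project(probe[None, :], k), axis=1)
        return int(np.argmin(d))

    # The modification studied in the paper amounts to varying k: a small k
    # keeps only the highest-eigenvalue vectors, while a larger k also
    # includes the low-energy vectors.
    probes = X + 0.05 * rng.standard_normal(X.shape)  # noisy test images
    for k in (5, len(S)):
        hits = sum(recognise(probes[i], X, k) == i for i in range(n_images))
        print(f"k={k:2d}: {hits}/{n_images} matched")
    ```

    Varying `k` trades compactness against discriminative power, which is the knob the sensitivity test in the study turns.
    
    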

    Facial feature representation and recognition

    Facial expression provides an important behavioral measure for studies of emotion, cognitive processes, and social interaction. Facial expression representation and recognition have become a promising research area in recent years, with applications including human-computer interfaces, human emotion analysis, and medical care and cure. In this dissertation, the fundamental techniques are first reviewed, and the development of novel algorithms and theorems is presented afterwards. The objective of the proposed algorithm is to provide a reliable, fast, and integrated procedure to recognize either the seven prototypical, emotion-specified expressions (happy, neutral, angry, disgust, fear, sad, and surprise in the JAFFE database) or the action units in the Cohn-Kanade AU-coded facial expression image database. A new application area developed by the Infant COPE project is the recognition of neonatal facial expressions of pain (air puff, cry, friction, pain, and rest in the Infant COPE database). It has been reported in the medical literature that health care professionals have difficulty distinguishing a newborn's facial expressions of pain from facial reactions to other stimuli. Since pain is a major indicator of medical problems and the quality of patient care depends on the quality of pain management, it is vital that the methods to be developed accurately distinguish an infant's signal of pain from a host of minor distress signals. The evaluation protocol used in the Infant COPE project considers two conditions: person-dependent and person-independent. Person-dependent means that some data of a subject are used for training and the remaining data of that subject for testing. Person-independent means that the data of all subjects except one are used for training and the left-out subject is used for testing. In this dissertation, both evaluation protocols are tested experimentally.
The Infant COPE research on neonatal pain classification is a first attempt at applying state-of-the-art face recognition technologies to an actual medical problem. The objective of the Infant COPE project is to bypass these observational problems by developing a machine classification system to diagnose neonatal facial expressions of pain. Since assessment of pain by machine is based on pixel states, a machine classification system for pain remains objective and exploits the full spectrum of information available in a neonate's facial expressions. Furthermore, it is capable of monitoring a neonate's facial expressions when he or she is left unattended. Experimental results using the Infant COPE database and evaluation protocols indicate that the application of face classification techniques to pain assessment and management is a promising area of investigation. One of the challenging problems in building an automatic facial expression recognition system is how to automatically locate the principal facial parts, since most existing algorithms capture the necessary face parts by cropping images manually. In this dissertation, two systems are developed to detect facial features, especially the eyes. The purpose is to develop a fast and reliable system to detect facial features automatically and correctly. By incorporating the proposed facial feature detection, the facial expression and neonatal pain recognition systems can be made robust and efficient.
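    The person-independent protocol the abstract describes is a leave-one-subject-out split. A minimal sketch of that splitting logic is shown below; the array sizes, labels, and function name are illustrative assumptions, not the Infant COPE data.

    ```python
    import numpy as np

    # Hypothetical labelled dataset: feature vectors, expression labels,
    # and a subject id per sample (sizes are illustrative only).
    rng = np.random.default_rng(1)
    features = rng.random((30, 8))
    labels = rng.integers(0, 5, size=30)      # e.g. 5 expression classes
    subjects = np.repeat(np.arange(6), 5)     # 6 subjects, 5 samples each

    def person_independent_splits(subjects):
        """Leave-one-subject-out: train on all subjects but one and test on
        the held-out subject (the 'person-independent' condition)."""
        for s in np.unique(subjects):
            test_mask = subjects == s
            yield np.flatnonzero(~test_mask), np.flatnonzero(test_mask)

    n_folds = 0
    for train_idx, test_idx in person_independent_splits(subjects):
        # No subject ever appears in both the train and the test set.
        assert not set(subjects[train_idx]) & set(subjects[test_idx])
        # ...fit a classifier on features[train_idx], evaluate on test_idx...
        n_folds += 1
    print(n_folds, "person-independent folds")  # one fold per subject
    ```

    The person-dependent condition, by contrast, would split within each subject's samples rather than across subjects.
    
    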

    Impact evaluation of skin color, gender, and hair on the performance of eigenface, ICA, and CNN methods

    Project work presented as a partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. Although face recognition has made remarkable progress in the past decades, it is still a challenging area. In addition to traditional flaws (such as illumination, pose, and occlusion of part of the face image), the low performance of such systems on dark-skin images and female faces raises questions that challenge the transparency and accountability of these systems. Recent work has suggested that the available datasets are causing this issue, but little work has been done with other face recognition methods. Likewise, little work has been done on facial features such as hair as a key feature in face recognition systems. To address these gaps, this thesis examines the performance of three face recognition methods (eigenface, Independent Component Analysis (ICA), and Convolutional Neural Network (CNN)) with respect to skin-color changes in two different face modes, "only face" and "face with hair". The following work is reported in this study: first, an approximate PPB dataset was rebuilt based on the work of Joy Adowaa Buolamwini in her thesis entitled "Gender Shades"; second, new classifier tools were developed and the approximate PPB dataset was classified into 12 classes using the new methods; third, the three methods were assessed with the approximate PPB dataset in the two face modes. The evaluation of the three methods revealed an interesting result: in this work, the eigenface method performed better than ICA and CNN. Moreover, the results show a strong positive correlation between the number of training samples and the results, which supports the previous finding about the lack of images with dark skin. More interestingly, despite the claims, the models performed well at identifying female faces: although the female group makes up only 21% of the population in the top two skin-type groups, it accounts for 44% of the top-3 recall.
Also, the results confirm that adding hair to images boosts the results by up to 9% on average. The work concludes with a discussion of the results and recommends the impact of classes on each other for future study.

    The Face inversion Effect and Perceptual Learning: Features and Configurations

    This thesis explores the causes of the face inversion effect, a substantial decrement in performance when recognising facial stimuli presented upside down (Yin, 1969). I provide results from both behavioural and electrophysiological (EEG) experiments to aid in the analysis of this effect. Over the course of six chapters I summarise my work during the four years of my PhD and propose an explanation of the face inversion effect based on the general mechanisms for learning that we share with other animals. In Chapter 1 I describe and discuss some of the main theories of face inversion. Chapter 2 used behavioural and EEG techniques to test one of the most popular explanations of the face inversion effect, proposed by Diamond and Carey (1986): that it is the disruption of the expertise needed to exploit configural information that leads to the inversion effect. The experiments reported in Chapter 2 were published in the Proceedings of the 34th Annual Conference of the Cognitive Science Society. In Chapter 3 I explore other potential causes of the inversion effect, confirming that not only is configural information involved, but single-feature orientation information also plays an important part. All the experiments included in Chapter 3 are part of a paper accepted for publication in the Quarterly Journal of Experimental Psychology. Chapter 4 attempts to answer the question of whether configural information is really necessary to obtain an inversion effect. All the experiments presented in Chapter 4 are part of a manuscript in preparation for submission to the Quarterly Journal of Experimental Psychology. Chapter 5 includes some of the most innovative experiments from my PhD work; in particular, it offers behavioural and electrophysiological evidence showing that it is possible to apply an associative approach to face inversion.
Chapter 5 is a key component of this thesis because, on the one hand, it explains the face inversion effect using general mechanisms of perceptual learning (the MKM model); on the other hand, it shows that something extra seems to be needed to explain face recognition entirely. All the experiments included in Chapter 5 were reported in a paper submitted to the Journal of Experimental Psychology: Animal Behavior Processes. Finally, in Chapter 6 I summarise the implications of this work for explanations of the face inversion effect and some of the general processes involved in face perception. EGF scholarship.

    Towards Addressing Key Visual Processing Challenges in Social Media Computing

    Visual processing in social media platforms is a key step in gathering and understanding information in the era of the Internet and big data. Online data is rich in content, but its processing faces many challenges, including varying scales for objects of interest, unreliable and/or missing labels, the inadequacy of single-modal data, and the difficulty of analyzing high-dimensional data. Towards facilitating the processing and understanding of online data, this dissertation primarily focuses on three challenges of great practical importance: handling scale differences in computer vision tasks, such as facial component detection and face retrieval; developing efficient classifiers using partially labeled data and noisy data; and employing multi-modal models and feature selection to improve multi-view data analysis. For the first challenge, I propose a scale-insensitive algorithm to expedite and accurately detect facial landmarks. For the second challenge, I propose two algorithms that can be used to learn from partially labeled data and noisy data, respectively. For the third challenge, I propose a new framework that incorporates feature selection modules into LDA models. Doctoral Dissertation, Computer Science, 201

    Inhibitory Processing of Sad Facial Expressions and Depression Vulnerability

    Depression vulnerability has been frequently linked to selective attention biases, but these biases may partly result from an inhibitory deficit for processing depressive information (Joormann, 2004). Reduced inhibition when encountering sad interpersonal information (e.g., faces) could lead to greater associative processing, deeper encoding among related depressive content in memory, and increased rumination, and could perhaps promote depressive episodes. Inhibition and selective attention can be examined through behavioral and psychophysiological indicators, including the N200, P300a, and P300b ERP components. The present study examined whether groups traditionally at risk of depression would show inhibitory deficits for depressive facial expressions as compared to a low-risk group. A 2 x 2 design yielded four groups with two levels of current dysphoria status (yes/no) and history of depression (yes/no), enabling comparisons of relative risk. Each participant completed two visual oddball tasks. In the experimental task, participants responded or inhibited a response to infrequently presented sad or happy target faces in the context of frequently presented neutral faces. In the non-affective control task, participants responded only to faces that fit into one of three broad age groupings. Behavioral (e.g., reaction times, response errors), psychophysiological (ERP components), and self-report (e.g., rumination) measures relevant to selective attention and inhibition were analyzed. Between- and within-groups contrasts were conducted to reveal whether at-risk groups exhibit attentional bias and inhibitory deficiency specific to depressive information. The study also examined whether different operationalizations of depression risk evince common or distinct mechanisms of vulnerability. Across the full sample, previous depression was associated with greater P3b amplitude for sad target faces than happy target faces, in contrast with the depression-naïve group.
However, in males, only the combination of previous depression and current dysphoria was linked to elevated P3s following sad targets. Evidence for a sad-affect inhibition deficit was limited to dysphoric females' increased errors of commission following sad distracter faces. Results suggest that specific operationalizations of risk may be characterized by an attentional bias toward depressive facial affect in the social environment, which could promote additional depressogenic cognition and social behavior. Theoretical ramifications regarding gender and state versus trait vulnerability are also discussed.

    Biometric Systems

    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from their design and development to qualification and final application. The major systems discussed in this book include fingerprint identification, face recognition, iris segmentation and classification, signature verification, and other miscellaneous systems covering management policies of biometrics, reliability measures, pressure-based typing and signature verification, bio-chemical systems, and behavioral characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems and at the same time includes state-of-the-art approaches to their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.

    Automatic Landmarking for Non-cooperative 3D Face Recognition

    This thesis describes a new framework for 3D surface landmarking and evaluates its performance for feature localisation on human faces. The framework has two main parts that can be designed and optimised independently. The first is a keypoint detection system that returns positions of interest for a given mesh surface by using a learnt dictionary of local shapes. The second is a labelling system, using model-fitting approaches, that establishes a one-to-one correspondence between the set of unlabelled input points and a learnt representation of the class of object to detect. Our keypoint detection system returns local maxima over score maps that are generated from an arbitrarily large set of local shape descriptors. The distributions of these descriptors (scalars or histograms) are learnt for known landmark positions on a training dataset in order to generate a model. The similarity between the input descriptor value for a given vertex and a model shape is used as a descriptor-related score. Our labelling system can make use of both hypergraph matching techniques and rigid registration techniques to reduce the ambiguity attached to unlabelled input keypoints for which a list of model landmark candidates has been seeded. The soft matching techniques use multi-attributed hyperedges to reduce ambiguity, while the registration techniques use a scale-adapted rigid transformation computed from 3 or more points in order to obtain one-to-one correspondences. Our final system achieves results better than or comparable to the state of the art (depending on the metric) while being more generic. It does not require pre-processing such as cropping, spike removal, and hole filling, and is more robust to occlusion of salient local regions, such as those near the nose tip and inner eye corners. It is also fully pose-invariant and can be used with objects other than faces, provided that labelled training data is available.
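    The scale-adapted rigid transformation from 3 or more point correspondences that the labelling system relies on can be estimated in closed form (an Umeyama/Kabsch-style least-squares fit). The sketch below is an illustrative implementation of that generic technique, not the thesis's code; the function name and the synthetic landmark points are assumptions.

    ```python
    import numpy as np

    def similarity_transform(src, dst):
        """Least-squares scale + rotation + translation mapping src -> dst.
        Requires at least 3 non-degenerate 3D point correspondences."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        A, B = src - mu_s, dst - mu_d
        U, S, Vt = np.linalg.svd(B.T @ A)          # cross-covariance SVD
        d = np.sign(np.linalg.det(U @ Vt))         # guard against reflections
        D = np.diag([1.0, 1.0, d])
        R = U @ D @ Vt                             # optimal rotation
        scale = np.trace(np.diag(S) @ D) / (A ** 2).sum()
        t = mu_d - scale * R @ mu_s
        return scale, R, t

    # Illustrative check: recover a known transform from 4 landmark candidates.
    rng = np.random.default_rng(2)
    pts = rng.random((4, 3))
    true_R, _ = np.linalg.qr(rng.random((3, 3)))
    true_R *= np.sign(np.linalg.det(true_R))       # force a proper rotation
    moved = 1.5 * pts @ true_R.T + np.array([0.1, -0.2, 0.3])
    s, R, t = similarity_transform(pts, moved)
    print(np.allclose(s * pts @ R.T + t, moved))   # transformed src matches dst
    ```

    Once such a transform is fitted to a few seeded keypoint/landmark pairs, the remaining candidates can be disambiguated by how well they agree with it.
    
    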