    Deeply Smile Detection Based on Discriminative Features with Modified LeNet-5 Network

    Facial expressions are caused by specific movements of the face muscles; they are regarded as a visible manifestation of a person's inner thought processes, internal emotional states, and intentions. A smile is a facial expression that often indicates happiness, satisfaction, or agreement. Many applications use smile detection, such as automatic image capture, distance learning systems, interactive systems, video conferencing, patient monitoring, and product rating. A smile detection system is divided into two stages, feature extraction and classification, so the accuracy of smile detection depends on both. In recent years, numerous researchers have proposed various approaches to smile detection; however, their accuracy is still below the desired level. To this end, we propose an effective Convolutional Neural Network (CNN) architecture based on a modified LeNet-5 network (MLeNet-5) for detecting smiles in images. The proposed system generates low-level face identifiers and detects smiles using a strong binary classifier. In our experiments, the proposed MLeNet-5 system used the SMILEsmilesD and GENKI-4K databases, on which the proposed method improves accuracy by 2% (SMILEsmilesD) and 5% (GENKI-4K) relative to a LeNet-5-based CNN. In addition, the proposed system reduces the number of parameters compared to the LeNet-5-based CNN and most existing models while maintaining the robustness and effectiveness of the results.
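    The abstract does not spell out the exact architectural modifications, so as a rough illustration only, here is a LeNet-5-style binary smile classifier in PyTorch; the input size (1×32×32 grayscale crops), channel widths, and classifier head are all assumptions, not the published MLeNet-5 configuration:

```python
# Minimal LeNet-5-style binary smile classifier (illustrative sketch only;
# layer sizes are assumptions, not the published MLeNet-5 configuration).
import torch
import torch.nn as nn

class SmileNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 1x32x32 -> 6x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                  # -> 6x14x14
            nn.Conv2d(6, 16, kernel_size=5),  # -> 16x10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                  # -> 16x5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 84),
            nn.ReLU(),
            nn.Linear(84, 1),                 # single logit: smile vs. non-smile
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmileNet()
logits = model(torch.randn(8, 1, 32, 32))     # batch of 8 face crops
probs = torch.sigmoid(logits)                 # smile probability per image
```

    Training such a model with a binary cross-entropy loss on the logits (e.g. `BCEWithLogitsLoss`) would match the strong binary classifier described above; parameter savings of the kind the paper reports would come from shrinking or removing layers in a template like this.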

    Automatic Kinship Verification in Unconstrained Faces using Deep Learning

    Kinship verification has a number of applications, such as organizing large collections of images and recognizing resemblances among humans. Identifying kinship relations has also garnered interest due to several potential applications in security and surveillance and in organizing and tagging the enormous number of videos being uploaded to the Internet. This dissertation makes a five-fold contribution. First, a study is conducted to gain insight into the kinship verification process used by humans. Besides this, two separate deep learning based methods are proposed to solve kinship verification in images and videos. Other contributions of this research include interlinking face verification with kinship verification and the creation of two kinship databases to facilitate research in this field. The WVU Kinship Database, which consists of multiple images per subject, is created to facilitate kinship verification research. Next, the kinship video (KIVI) database of more than 500 individuals, with variations due to illumination, pose, occlusion, ethnicity, and expression, is collected for this research. It comprises a total of 355 true kin video pairs with over 250,000 still frames. In this dissertation, a human study is conducted to understand the capabilities of the human mind and to identify the discriminative regions of a face that provide kinship cues. The visual stimuli presented to the participants assess their ability to recognize kin relationships using the whole face as well as specific facial regions. The effect of participant gender, age, and the kin-relation pair of the stimulus is analyzed using quantitative measures such as accuracy, discriminability index d′, and perceptual information entropy. Next, utilizing the information obtained from the human study, a hierarchical Kinship Verification via Representation Learning (KVRL) framework is utilized to learn the representation of different face regions in an unsupervised manner. We propose a novel approach for feature representation termed filtered contractive deep belief networks (fcDBN). The proposed feature representation encodes relational information present in images using filters and a contractive regularization penalty. A compact representation of facial images of kin is extracted as the output of the learned model, and a multi-layer neural network is utilized to verify kinship accurately. The results show that the proposed deep learning framework (KVRL-fcDBN) yields state-of-the-art kinship verification accuracy on the WVU Kinship Database and on four existing benchmark datasets. Additionally, we propose a new deep learning framework for kinship verification in unconstrained videos using a novel Supervised Mixed Norm regularization Autoencoder (SMNAE). This new autoencoder formulation introduces class-specific sparsity in the weight matrix. The proposed three-stage SMNAE-based kinship verification framework utilizes the learned spatio-temporal representation of the video frames for verifying kinship in a pair of videos. The effectiveness of the proposed framework is demonstrated on the KIVI database and six existing kinship databases. On the KIVI database, SMNAE yields a video-based kinship verification accuracy of 83.18%, which is at least 3.2% better than existing algorithms. The algorithm is also evaluated on six publicly available kinship databases and compared with the best reported results. It is observed that the proposed SMNAE consistently yields the best results on all the databases.
    Finally, we discuss the connections between face verification and kinship verification research. We explore the area of self-kinship, i.e., age-invariant face recognition. Further, kinship information is used as a soft biometric modality to boost the performance of face verification via product-of-likelihood-ratio and support vector machine based approaches. Using the proposed KVRL-fcDBN framework, an improvement of over 20% is observed in the performance of face verification. By addressing the problem of limited samples per kinship dataset, introducing real-world variations in unconstrained databases, and designing two deep learning frameworks, this dissertation improves the understanding of kinship verification in humans and the performance of automated systems. The algorithms proposed in this research have been shown to outperform existing algorithms across six different kinship databases and, to date, hold the best reported results in this field.
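    The abstract only names the SMNAE formulation, so the sketch below illustrates just the generic underlying idea of mixed-norm weight regularization in an autoencoder: an ℓ2,1 penalty on the encoder weights encourages row-wise (group) sparsity. The layer sizes, learning rate, and penalty strength are assumptions, and the class-specific version described in the dissertation is not reproduced here:

```python
# Illustrative autoencoder with a mixed-norm (l2,1) penalty on the encoder
# weights; a generic group-sparsity sketch, not the exact class-specific
# SMNAE formulation from the dissertation.
import torch
import torch.nn as nn

enc = nn.Linear(1024, 256)        # layer sizes are assumptions
dec = nn.Linear(256, 1024)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
lam = 1e-4                        # regularization strength (assumed)

x = torch.randn(32, 1024)         # stand-in for vectorized face frames
for _ in range(100):
    recon = dec(torch.relu(enc(x)))
    # l2 norm over each row, then l1 (sum) across rows -> group sparsity
    mixed_norm = enc.weight.norm(p=2, dim=1).sum()
    loss = nn.functional.mse_loss(recon, x) + lam * mixed_norm
    opt.zero_grad()
    loss.backward()
    opt.step()
```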

    Real-Time Smile Detection using Deep Learning

    Real-time smile detection from facial images is useful in many real-world applications, such as automatic photo capture in mobile phone cameras or interactive distance learning. In this paper, we study different architectures of object detection deep networks for solving the real-time smile detection problem. We then propose a combination of a lightweight convolutional neural network architecture (BKNet) with an efficient object detection framework (RetinaNet). The evaluation on two datasets (GENKI-4K, UCF Selfie) with a mid-range hardware device (GTX TITAN Black) shows that our proposed method improves both the accuracy and the inference time of the original RetinaNet to reach real-time performance. In comparison with the state-of-the-art object detection framework YOLO, our method has a higher inference time but still reaches real-time performance and obtains higher smile detection accuracy on both datasets.
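    Whether a detector is "real-time" comes down to per-frame latency on the target hardware. One common way to check this is to time warmed-up forward passes and convert to frames per second, as in the sketch below; the model, frame size, and pass counts are placeholders, not the BKNet/RetinaNet pipeline from the paper:

```python
# Rough per-frame latency check for a detector; the model below is a
# placeholder, not the paper's BKNet/RetinaNet combination.
import time
import torch

model = torch.nn.Conv2d(3, 8, 3)          # stand-in for a real detector
model.eval()
frame = torch.randn(1, 3, 480, 640)       # one video frame (assumed size)

with torch.no_grad():
    for _ in range(10):                   # warm-up passes
        model(frame)
    start = time.perf_counter()
    n = 100
    for _ in range(n):                    # timed passes
        model(frame)
    ms = (time.perf_counter() - start) / n * 1000

print(f"{ms:.1f} ms/frame -> {1000 / ms:.1f} FPS")
```

    At roughly 25-30 FPS or above, a detector is usually considered real-time for video.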

    Facial expression recognition in the wild: from individual to group

    The progress in computing technology has increased the demand for smart systems capable of understanding human affect and emotional manifestations. One of the crucial factors in designing systems equipped with such intelligence is having accurate automatic Facial Expression Recognition (FER) methods. In computer vision, automatic facial expression analysis has been an active field of research for over two decades, yet many questions remain unanswered. The research presented in this thesis attempts to address some of the key issues of FER in challenging conditions, namely: 1) creating a facial expressions database representing real-world conditions; 2) devising Head Pose Normalisation (HPN) methods that are independent of facial part locations; 3) creating automatic methods for analysing the mood of a group of people. The central hypothesis of the thesis is that extracting close-to-real-world data from movies and performing facial expression analysis on it is a stepping stone towards moving the analysis of faces to real-world, unconstrained conditions. A temporal facial expressions database, Acted Facial Expressions in the Wild (AFEW), is proposed. The database is constructed and labelled using a semi-automatic process based on closed-caption subtitle keyword search. Currently, AFEW is the largest facial expressions database representing challenging conditions available to the research community. To provide a common platform for researchers to evaluate and extend their state-of-the-art FER methods, the first Emotion Recognition in the Wild (EmotiW) challenge, based on AFEW, is proposed. An image-only facial expressions database, Static Facial Expressions in the Wild (SFEW), extracted from AFEW, is also proposed. Furthermore, the thesis focuses on HPN for real-world images. Earlier methods were based on fiducial points; however, as fiducial point detection is an open problem for real-world images, such HPN can be error-prone. An HPN method based on response maps generated from part detectors is proposed. The proposed shape-constrained method requires neither fiducial points nor head pose information, which makes it suitable for real-world images. Data from movies and the internet, representing real-world conditions, poses another major challenge to the research community: the presence of multiple subjects. This defines another focus of this thesis, where a novel approach for modelling the perception of the mood of a group of people in an image is presented. A new database is constructed from Flickr based on keywords related to social events. Three models are proposed: an averaging-based Group Expression Model (GEM), a Weighted Group Expression Model (GEM_w), and an Augmented Group Expression Model (GEM_LDA). GEM_w is based on social contextual attributes, which are used as weights on each person's contribution towards the overall group mood; a bare-bones version of this weighting is sketched below. Further, GEM_LDA is based on a topic model and feature augmentation. The proposed framework is applied to group candid shot selection and event summarisation. The application of the Structural SIMilarity (SSIM) index metric is explored for finding similar facial expressions, and the resulting framework is applied to creating image albums based on facial expressions and to finding corresponding expressions for training facial performance transfer algorithms.
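    As a toy illustration of the weighted model described above (not the thesis's actual attribute set), the group mood can be computed as a weighted mean of per-face expression scores, with the weights standing in for social-context attributes; all inputs here are hypothetical:

```python
# Bare-bones weighted Group Expression Model (GEM_w-style) sketch: group mood
# as a weighted mean of per-face expression scores, with weights standing in
# for social-context attributes. All inputs are hypothetical.
def group_mood(face_scores, context_weights):
    """face_scores: per-person happiness intensities in [0, 1].
    context_weights: nonnegative contextual weights (e.g. derived from
    face size or distance from the group centroid)."""
    total = sum(context_weights)
    if total == 0:
        raise ValueError("at least one face must have nonzero weight")
    return sum(s * w for s, w in zip(face_scores, context_weights)) / total

# Example: three detected faces; larger / more central faces weigh more.
print(group_mood([0.9, 0.4, 0.7], [1.0, 0.3, 0.6]))   # ~0.76
```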

    Facial Expression Analysis under Partial Occlusion: A Survey

    Automatic machine-based Facial Expression Analysis (FEA) has made substantial progress in the past few decades, driven by its importance for applications in psychology, security, health, entertainment, and human-computer interaction. The vast majority of existing FEA studies are based on non-occluded faces collected in controlled laboratory environments. Automatic expression recognition tolerant to partial occlusion remains less understood, particularly in real-world scenarios. In recent years, efforts to handle partial occlusion in FEA have increased, and the context is right for a comprehensive account of these developments and the state of the art. This survey provides such a review of recent advances in dataset creation, algorithm development, and investigations of the effects of occlusion critical for robust performance in FEA systems. It outlines existing challenges in overcoming partial occlusion and discusses possible opportunities for advancing the technology. To the best of our knowledge, it is the first FEA survey dedicated to occlusion and aimed at promoting better informed and benchmarked future work.
    Comment: Authors' pre-print of the article accepted for publication in ACM Computing Surveys (accepted on 02-Nov-2017).