
    Using Bezier Curve analysis in context of Expression Analysis

    Affective computing is an area of research under increasing demand in the field of computer vision. Expression analysis, in particular, is a topic that has been researched for many years. In this paper, an algorithm for expression determination and analysis is presented for the detection of seven expressions: sadness, anger, happiness, neutral, fear, disgust and surprise. First, the 68 landmarks of the face are detected and the face is realigned and warped to obtain a new image. Next, feature extraction is performed using Local Phase Quantization (LPQ). We then use a dimensionality reduction algorithm followed by a dual RBF-SVM and Adaboost classification algorithm to find the interest points in the extracted features. We then fit Bezier curves on the regions of interest obtained. The curves are then used as the input to a CNN, which determines the facial expression. The results showed the algorithm to be extremely successful
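    The Bezier-curve step above can be sketched with de Casteljau's algorithm, which evaluates a curve of any degree by repeated linear interpolation of its control points. The control points below are purely illustrative stand-ins for a detected facial region of interest; the paper's actual fitting procedure is not specified here.

    ```python
    import numpy as np

    def bezier_point(control_points, t):
        """Evaluate a Bezier curve at parameter t via de Casteljau's algorithm."""
        pts = np.asarray(control_points, dtype=float)
        while len(pts) > 1:
            # Repeatedly interpolate between consecutive control points.
            pts = (1.0 - t) * pts[:-1] + t * pts[1:]
        return pts[0]

    def bezier_curve(control_points, n_samples=50):
        """Sample the curve densely, e.g. to trace a lip or eyebrow contour."""
        return np.array([bezier_point(control_points, t)
                         for t in np.linspace(0.0, 1.0, n_samples)])

    # Hypothetical control points around a mouth-corner region of interest.
    curve = bezier_curve([(0, 0), (1, 2), (3, 2), (4, 0)], n_samples=100)
    ```

    The sampled curve (here a 100 × 2 array of points) is the kind of geometric trace that could then be rasterized and fed to a CNN as described in the abstract.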

    Recognizing Emotions Conveyed through Facial Expressions

    Emotional communication is a key element of habilitation care for persons with dementia. It is therefore highly preferable for assistive robots that are used to supplement the human care provided to persons with dementia to possess the ability to recognize and respond to the emotions expressed by those being cared for. Facial expressions are one of the key modalities through which emotions are conveyed. This work focuses on computer vision-based recognition of facial expressions of emotions conveyed by the elderly. Although there has been much work on automatic facial expression recognition, the algorithms have been experimentally validated primarily on young faces; facial expressions on older faces have been almost entirely excluded. This is because the facial expression databases that have been available and used in facial expression recognition research so far do not contain images of facial expressions of people above the age of 65 years. To overcome this problem, we adopt a recently published database, namely the FACES database, which was developed to address exactly the same problem in the area of human behavioural research. The FACES database contains 2052 images of six different facial expressions, with almost identical and systematic representation of the young, middle-aged and older age-groups. In this work, we evaluate and compare the performance of two existing image-based approaches to facial expression recognition over a broad spectrum of ages, ranging from 19 to 80 years. The evaluated systems use Gabor filters and uniform local binary patterns (LBP) for feature extraction, and AdaBoost.MH with a multi-threshold stump learner for expression classification.
We have experimentally validated the hypotheses that facial expression recognition systems trained only on young faces perform poorly on middle-aged and older faces, and that such systems confuse ageing-related facial features on neutral faces with other expressions of emotions. We also identified that, among the three age-groups, the middle-aged group provides the best generalization performance across the entire age spectrum. The performance of the systems was also compared to that of humans in recognizing facial expressions of emotions. Some similarities were observed, such as difficulty in recognizing the expressions on older faces and difficulty in recognizing the expression of sadness. The findings of our work establish the need for developing approaches to facial expression recognition that are robust to the effects of ageing on the face. The scientific results of our work can be used as a basis to guide future research in this direction.
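The Gabor-filter feature extraction mentioned above can be sketched as follows. This is a minimal, generic Gabor filter bank with illustrative parameter values and simple mean-magnitude pooling, not the evaluated systems' exact configuration (which the abstract does not specify).

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5):
    """Real part of a 2-D Gabor filter; parameter values are illustrative."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by the filter orientation theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + gamma ** 2 * y_t ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_t / lambd)
    return envelope * carrier

def gabor_features(image, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Filter the image at several orientations and pool response magnitudes."""
    feats = []
    for theta in thetas:
        response = fftconvolve(image, gabor_kernel(theta=theta), mode='same')
        feats.append(np.abs(response).mean())  # simple mean-magnitude pooling
    return np.array(feats)
```

In a full system, responses like these (typically at multiple scales as well as orientations) would be concatenated per face region and handed to the boosted classifier.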

    Face alignment using a three layer predictor

    Face alignment is an important step for most algorithms that work with facial images, such as expression analysis, face recognition and face detection. Moreover, some images lose information due to factors such as occlusion and lighting, and it is important to recover those lost features. This paper proposes an innovative method for automatic face alignment using deep learning. First, we use second-order Gaussian derivatives along with an RBF-SVM and Adaboost to classify a first layer of landmark points. Next, we use branching-based cascaded regression to obtain a second layer of points, which is further used as input to a parallel, multi-scale CNN that gives us the complete output. Results showed the algorithm performs excellently in comparison to state-of-the-art algorithms.
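    The second-order Gaussian derivative features used in the first layer can be sketched as below: each pixel gets the three second-order responses (Ixx, Iyy, Ixy) computed at a Gaussian scale sigma. The scale value and the stacking layout are assumptions for illustration; the paper may use several scales.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def second_order_gaussian_features(image, sigma=2.0):
        """Per-pixel second-order Gaussian derivative responses (Ixx, Iyy, Ixy).

        scipy's gaussian_filter takes a per-axis derivative order, so
        order=(0, 2) differentiates twice along the column (x) axis.
        """
        ixx = gaussian_filter(image, sigma, order=(0, 2))  # d^2/dx^2
        iyy = gaussian_filter(image, sigma, order=(2, 0))  # d^2/dy^2
        ixy = gaussian_filter(image, sigma, order=(1, 1))  # d^2/(dx dy)
        return np.stack([ixx, iyy, ixy], axis=-1)

    # Each pixel now carries a 3-vector that a classifier such as an
    # RBF-SVM could consume when scoring candidate landmark locations.
    feats = second_order_gaussian_features(np.random.rand(48, 48))
    ```

    Second-order derivatives respond strongly to ridge- and blob-like structures (eye corners, nostrils, lip edges), which is why they are a common choice for landmark candidate detection.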

    A review of content-based video retrieval techniques for person identification

    The rise of technology spurs advancement in the surveillance field. Many commercial spaces have reduced patrol guards in favor of Closed-Circuit Television (CCTV) installation, and some countries already use surveillance drones, which have greater mobility. In recent years, CCTV footage has also been used for crime investigation by law enforcement, such as in the 2013 Boston Marathon bombing incident. However, this has led to huge, unmanageable footage collections, a common issue of the Big Data era. While there is more information with which to identify a potential suspect, manually going through such a massive amount of data is a very laborious task. Therefore, some researchers have proposed using Content-Based Video Retrieval (CBVR) methods to enable querying for a specific feature of an object or a human. Due to limitations such as the visibility and quality of video footage, only certain features are selected for recognition, based on Chicago Police Department guidelines. This paper presents a comprehensive review of the CBVR techniques used for clothing, gender and ethnic recognition of a person of interest, and of how they can be applied in crime investigation. From the findings, the three recognition types can be combined to create a Content-Based Video Retrieval system for person identification.

    Recognition of Facial Expressions using Local Mean Binary Pattern

    In this paper, we propose a novel appearance-based local feature extraction technique called Local Mean Binary Pattern (LMBP), which efficiently encodes the local texture and global shape of the face. The LMBP code is produced by weighting the thresholded neighbor intensity values with respect to the mean of the 3 × 3 patch. LMBP produces a highly discriminative code compared to other state-of-the-art methods. Because the micro-pattern is derived using the mean of the patch, it is robust against illumination and noise variations. An image is divided into M × N regions, and the feature descriptor is derived by concatenating the LMBP distribution of each region. We also propose a novel template matching strategy called Histogram Normalized Absolute Difference (HNAD) for comparing LMBP histograms. Rigorous experiments prove the effectiveness and robustness of the LMBP operator. Experiments also prove the superiority of the HNAD measure over well-known template matching measures such as the L2 norm and the Chi-Square measure. We also investigated LMBP for facial expression recognition at low resolution. The performance of the proposed approach is tested on the well-known CK, JAFFE and TFEID datasets.
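    A minimal sketch of the LMBP idea follows: unlike plain LBP, which thresholds the eight neighbors against the centre pixel, here they are thresholded against the patch mean. The neighbor ordering, and the HNAD formulation as a sum of absolute differences over normalized histograms, are assumptions made for illustration; the paper's exact definitions may differ.

    ```python
    import numpy as np

    def lmbp_code(patch):
        """LMBP code for one 3x3 patch: the eight neighbors are thresholded
        against the patch mean (not the centre pixel, as in plain LBP) and
        weighted into an 8-bit code."""
        patch = np.asarray(patch, dtype=float)
        mean = patch.mean()
        # Neighbor order (clockwise from top-left) is an assumption here.
        neighbors = patch.flatten()[[0, 1, 2, 5, 8, 7, 6, 3]]
        bits = (neighbors >= mean).astype(int)
        return int((bits * (1 << np.arange(8))).sum())

    def lmbp_histogram(image):
        """256-bin LMBP distribution over all interior 3x3 patches."""
        h, w = image.shape
        codes = [lmbp_code(image[i - 1:i + 2, j - 1:j + 2])
                 for i in range(1, h - 1) for j in range(1, w - 1)]
        return np.bincount(codes, minlength=256) / len(codes)

    def hnad(h1, h2):
        """Histogram Normalized Absolute Difference, sketched as the sum of
        absolute bin differences over normalized histograms (assumed form)."""
        return np.abs(h1 / h1.sum() - h2 / h2.sum()).sum()
    ```

    Using the patch mean as the threshold means a single noisy centre pixel cannot flip all eight bits at once, which is the intuition behind LMBP's claimed noise robustness.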

    Facial Point Detection using Boosted Regression and Graph Models

    Finding fiducial facial points in any frame of a video showing rich, naturalistic facial behaviour is an unsolved problem. Yet this is a crucial step for geometric-feature-based facial expression analysis, and for methods that use appearance-based features extracted at fiducial facial point locations. In this paper we present a method based on a combination of Support Vector Regression and Markov Random Fields that drastically reduces the time needed to search for a point's location and increases the accuracy and robustness of the algorithm. Using Markov Random Fields allows us to constrain the search space by exploiting the constellations that facial points can form. The regressors, on the other hand, learn a mapping between the appearance of the area surrounding a point and the position of that point, which makes detection of the points very fast and can make the algorithm robust to variations in appearance due to facial expression and moderate changes in head pose. The proposed point detection algorithm was tested on 1855 images, and the results showed that we outperform current state-of-the-art point detectors.
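    The constellation constraint can be illustrated with a simple pairwise score: each inter-point distance in a candidate layout is compared against a learned mean and standard deviation under a Gaussian log-likelihood. This is an illustrative stand-in for the paper's Markov Random Field, not its actual potential functions.

    ```python
    import numpy as np

    def constellation_score(points, mean_dists, std_dists):
        """Score a candidate facial-point constellation with pairwise
        (MRF-style) distance terms; higher (closer to 0) is more plausible.

        mean_dists[i, j] / std_dists[i, j] hold the learned statistics of
        the distance between points i and j (upper triangle used).
        """
        score = 0.0
        n = len(points)
        for i in range(n):
            for j in range(i + 1, n):
                d = np.linalg.norm(np.asarray(points[i]) - np.asarray(points[j]))
                # Gaussian log-likelihood of the observed distance (up to a constant).
                score += -0.5 * ((d - mean_dists[i, j]) / std_dists[i, j]) ** 2
        return score
    ```

    In a search, candidate locations proposed by the per-point regressors would be re-ranked by a score like this, pruning constellations with anatomically implausible spacing.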