4 research outputs found

    Facial expression recognition using histogram variances faces

    In human facial expression recognition, the representation of expression features is essential for recognition accuracy. In this work we propose a novel approach for extracting dynamic expression features from facial expression videos. Rather than utilising statistical models, e.g. Hidden Markov Models (HMMs), our approach integrates expression dynamic features into a static image, the Histogram Variances Face (HVF), by fusing histogram variances among the frames in a video. HVFs can be obtained automatically from videos with different frame rates and are immune to illumination interference. In our experiments, videos picturing the same facial expression, e.g. surprise, happiness or sadness, yield similar HVFs even when the performers and frame rates differ. Static facial recognition approaches can therefore be utilised for dynamic expression recognition. We have applied this approach to the well-known Cohn-Kanade AU-Coded Facial Expression database, classified the HVFs using PCA and Support Vector Machines (SVMs), and found the accuracy of HVF classification very encouraging. © 2009 IEEE
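
    As a rough illustration of the idea described above (collapsing a video's dynamics into one static face image, then classifying it with PCA and an SVM), here is a minimal Python sketch. The per-pixel variance fusion and the equalisation step are plausible assumptions for illustration only, not the paper's exact HVF construction, and the classification stage uses scikit-learn rather than the authors' toolchain.

        import cv2
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        def variance_face(video_path):
            """Collapse a video into one static image (hypothetical HVF-style fusion).

            Each frame is histogram-equalised (one way to gain some illumination
            robustness), then the per-pixel variance across frames is taken.
            """
            cap = cv2.VideoCapture(video_path)
            frames = []
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                frames.append(cv2.equalizeHist(gray))  # per-frame illumination normalisation
            cap.release()
            stack = np.stack(frames).astype(np.float32)  # shape: (n_frames, H, W)
            hvf = stack.var(axis=0)                      # per-pixel variance over time
            # Rescale to 0..255 so the result can be treated as an ordinary face image.
            return cv2.normalize(hvf, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

        # Static-image classification stage named in the abstract: PCA features + SVM.
        # X: one flattened variance face per video; y: expression labels.
        classifier = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))

    Because each video, whatever its frame rate, collapses to a single image, any static face-recognition pipeline can consume it, which is the point the abstract makes.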

    Helping Autistic Children Understand Their Emotions Using Facial Expression Recognition and Mobile Technologies

    One of the main challenges for autistic children is to identify and express emotions. Many emotion-learning apps are available for smartphones and tablets to assist autistic children and their carers. However, they do not use the full potential offered by mobile technology, such as facial expression recognition and wireless biosensors to recognise and sense emotions. To fill this gap we developed CaptureMyEmotion, an Android app that uses wireless sensors to capture physiological data together with facial expression recognition, providing a very personalised way to help autistic children learn about their emotions. The app enables children to capture photos, videos or sounds, and simultaneously attach emotion data and a self-portrait photo. The material can then be reviewed and discussed with a carer at a later stage. CaptureMyEmotion has the potential to help autistic children integrate better into society by providing a new way for them to understand their emotions.

    Design of emotion-aware mobile apps for autistic children

    Sensor technologies and facial expression recognition are now widely used by mobile devices to sense our environment and our own physical and mental state. With these technologies we have the ability to sense emotions and create emotion-aware apps. One target group that would benefit from emotion-aware apps is autistic children, as they have difficulty understanding and expressing emotions and are keen mobile device users. However, current mobile apps aimed at autistic children are not emotion-aware. This led our team to design a suite of apps, called CaptureMyEmotion, that uses wireless sensors to capture physiological data together with facial expression recognition, providing a very personalised way to help autistic children and their carers understand and manage their emotions. This paper describes how we designed CaptureMyEmotion and discusses our experience using sensors and facial expression recognition to detect emotions. It presents in more detail the first app we developed for Android phones and tablets, called MyMedia. MyMedia enables children to take photos, videos or sounds, and simultaneously attach emotion data to them. The photos can then be reviewed together with a carer, giving children a new way to understand emotions and discuss their daily activities. © 2013 IUPESM and Springer-Verlag

    The Application of Evolutionary Algorithms to the Classification of Emotion from Facial Expressions

    Emotions are an integral part of human daily life, as they can influence behaviour. A reliable emotion detection system may help people in varied areas such as social contact, health care and gaming. Emotions can often be identified from facial expressions, but this can be difficult to achieve reliably because people differ and a person can mask or suppress an expression. For these reasons, analysing the motion of an expression as it occurs plays a more important role than analysing a static image. The work described in this thesis considers an automated and objective approach to the recognition of facial expressions using extracted optical flow, which may be a reliable alternative to human interpretation. Farneback's fast estimation has been used for dense optical flow extraction. Evolutionary algorithms, inspired by Darwinian evolution, have been shown to perform well on complex, nonlinear datasets and are considered as the basis of this automated approach. Specifically, Cartesian Genetic Programming (CGP), which can find computer programs that approach user-defined tasks through the evolution of solutions, is implemented and modified to work as a classifier for the analysis of the extracted flow data. Its performance is compared with that of the Support Vector Machine (SVM), which has been widely used for expression recognition, on a range of pre-recorded facial expressions obtained from two separate databases (MMI and FG-NET). CGP proved flexible to optimise in the experiments: the imbalanced-data classification problem is sharply reduced by applying an Area Under the Curve (AUC) based fitness function. The results presented suggest that CGP is capable of achieving better performance than SVM. An automatic expression recognition system has also been implemented based on the method described in the thesis. Future work will investigate an ensemble classifier implementing both CGP and SVM.
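
    The dense-flow step named above can be sketched with OpenCV's Farneback implementation. The parameter values and the magnitude-weighted orientation histogram used to summarise each frame pair are illustrative assumptions, not the thesis's settings or feature representation.

        import cv2
        import numpy as np

        def dense_flow_features(video_path, bins=8):
            """Per frame pair: dense Farneback flow -> orientation histogram."""
            cap = cv2.VideoCapture(video_path)
            ok, frame = cap.read()
            prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            features = []
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                flow = cv2.calcOpticalFlowFarneback(
                    prev, curr, None,
                    0.5,   # pyr_scale: scale between pyramid levels
                    3,     # levels: pyramid depth
                    15,    # winsize: averaging window
                    3,     # iterations per level
                    5,     # poly_n: neighbourhood for polynomial expansion
                    1.2,   # poly_sigma: Gaussian std of the expansion
                    0)     # flags
                mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
                # Magnitude-weighted histogram over flow directions for this pair.
                hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
                features.append(hist / (hist.sum() + 1e-9))  # normalise each pair
                prev = curr
            cap.release()
            return np.asarray(features)  # shape: (n_frame_pairs, bins)

    Feature vectors of this kind could then be scored by any classifier; for the class-imbalance issue the abstract mentions, an AUC-based fitness could be computed with, e.g., sklearn.metrics.roc_auc_score over a candidate classifier's decision values.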