33 research outputs found

    A comparison of the performance of humans and computational models in the classification of facial expression

    Recognizing expressions is a key part of human social interaction, and processing of facial expression information is largely automatic for humans, but it is a non-trivial task for a computational system. In the first part of the experiment, we develop computational models capable of differentiating between two human facial expressions. We perform pre-processing with Gabor filters and dimensionality reduction using two methods: Principal Component Analysis and Curvilinear Component Analysis. The faces are then classified using a Support Vector Machine. We also asked human subjects to classify these images and compared the performance of the humans and the computational models. The main result is that, for the Gabor pre-processed model, the probability that an individual face was assigned to the given class by the computational model is inversely proportional to the reaction time of the human subjects.
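The pipeline this abstract describes (Gabor filtering, dimensionality reduction, SVM classification) can be sketched roughly as below. This is a minimal illustration, not the paper's actual setup: the synthetic "faces", filter parameters and pooling scheme are all assumptions, and PCA stands in for both reduction methods.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def gabor_kernel(size=9, theta=0.0, lam=4.0, sigma=2.0):
    # real part of a Gabor filter at orientation `theta`
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    # circular convolution via FFT, then pool the response energy per orientation
    feats = []
    for th in thetas:
        k = gabor_kernel(theta=th)
        pad = np.zeros_like(img)
        pad[:k.shape[0], :k.shape[1]] = k
        resp = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)).real
        feats.append(np.abs(resp).reshape(4, 8, 4, 8).mean(axis=(1, 3)).ravel())
    return np.concatenate(feats)

def make_face(label):
    # synthetic stand-ins for two expression classes, with different dominant orientation
    img = rng.normal(0, 0.3, (32, 32))
    if label == 0:
        img[::4, :] += 1.0   # horizontal structure
    else:
        img[:, ::4] += 1.0   # vertical structure
    return img

X = np.array([gabor_features(make_face(i % 2)) for i in range(80)])
y = np.array([i % 2 for i in range(80)])

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
pca = PCA(n_components=10).fit(Xtr)
clf = SVC(kernel="linear").fit(pca.transform(Xtr), ytr)
accuracy = clf.score(pca.transform(Xte), yte)
```

On this easy synthetic task the two classes separate cleanly; the interesting part of the paper is comparing such model confidences against human reaction times.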

    A Classifier Model based on the Features Quantitative Analysis for Facial Expression Recognition

    In recent decades, computer technology has developed considerably in the use of intelligent systems for classification. The development of HCI systems depends heavily on an accurate understanding of emotions. However, facial expressions are difficult to classify with mathematical models because of their natural quality. In this paper, quantitative analysis is used to find the most effective feature movements between the selected facial feature points. The features are therefore extracted not only on the basis of psychological studies but also with quantitative methods, in order to raise recognition accuracy. In this model, fuzzy logic and a genetic algorithm are also used to classify facial expressions. The genetic algorithm is a distinctive attribute of the proposed model, used for tuning the membership functions and increasing the accuracy.
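The combination described here, fuzzy membership functions tuned by a genetic algorithm, can be illustrated with a toy sketch. Everything below is hypothetical: a one-dimensional "feature movement" magnitude, two triangular membership functions, and a minimal GA that tunes only their crossover point; the paper's actual feature set and rule base are richer.

```python
import random

random.seed(0)

def tri(x, a, b, c):
    # triangular membership function with support [a, c] and peak at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# hypothetical feature-movement magnitudes: small -> "neutral", large -> "smile"
samples = [(random.gauss(0.2, 0.05), "neutral") for _ in range(50)] + \
          [(random.gauss(0.8, 0.05), "smile") for _ in range(50)]

def classify(x, split):
    # fuzzy rule: highest membership wins; `split` is the tunable crossover point
    mu_small = tri(x, -0.5, 0.0, split)
    mu_large = tri(x, split, 1.0, 1.5)
    return "neutral" if mu_small >= mu_large else "smile"

def fitness(split):
    return sum(classify(x, lbl_split) == lbl
               for (x, lbl), lbl_split in ((s, split) for s in samples)) / len(samples)

def fitness(split):
    return sum(classify(x, split) == lbl for x, lbl in samples) / len(samples)

# a minimal genetic algorithm: selection of the fittest plus Gaussian mutation
pop = [random.uniform(0.05, 0.95) for _ in range(20)]
for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = [min(0.99, max(0.01, random.choice(parents) + random.gauss(0, 0.05)))
                for _ in range(10)]
    pop = parents + children

best = max(pop, key=fitness)
best_accuracy = fitness(best)
```

With well-separated classes the GA quickly finds a crossover point near the middle, which is the same role it plays in the paper: shaping membership functions so the fuzzy classifier's accuracy rises.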

    Facial Emotion Recognition with Sparse Coding Descriptor

    With the Corona Virus Disease 2019 (COVID-19) global pandemic ravaging the world, all sectors of life were affected, including education. This led many schools to take up distance learning through the use of computers as a safer option. Facial emotion matters greatly to a teacher's assessment of his performance and his relation to his students. Researchers have been working on improving face monitoring and the human–machine interface. In this paper we present different types of face recognition methods, including Principal Component Analysis (PCA), Speeded Up Robust Features (SURF), Local Binary Pattern (LBP), the Gray-Level Co-occurrence Matrix (GLCM) and Group Sparse Coding (GSC), and propose a fusion of LBP, PCA, SURF and GLCM with GSC. A linear-kernel Support Vector Machine (LSVM) classifier outperformed the polynomial, RBF and sigmoid kernel SVMs in the emotion classification. Results obtained from experiments indicate that the new fusion method is capable of differentiating different types of facial emotions with higher accuracy compared with the state-of-the-art methods currently available.
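Two ingredients of this abstract, LBP descriptors and the comparison of SVM kernels, can be sketched together. This is only an illustrative sketch: the hand-rolled 8-neighbour LBP and the synthetic texture classes below are assumptions, and whether the linear kernel wins depends on the data, as the paper reports for its own experiments.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def lbp_histogram(img):
    # 8-neighbour local binary pattern, histogrammed into 256 bins and normalised
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()

def make_patch(label):
    # hypothetical texture classes: smoothed (coarse) vs. raw high-frequency noise
    base = rng.normal(0, 1, (24, 24))
    if label == 0:
        k = np.ones((5, 5)) / 25.0
        base = np.fft.ifft2(np.fft.fft2(base) * np.fft.fft2(k, s=base.shape)).real
    return base

X = np.array([lbp_histogram(make_patch(i % 2)) for i in range(60)])
y = np.array([i % 2 for i in range(60)])

# cross-validated accuracy for each SVM kernel, as in the paper's kernel comparison
scores = {k: cross_val_score(SVC(kernel=k), X, y, cv=5).mean()
          for k in ("linear", "rbf", "poly", "sigmoid")}
```

The paper's fusion step would concatenate several such descriptors (LBP, PCA, SURF, GLCM) before the sparse-coding stage; here only LBP is shown to keep the sketch short.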

    Facial muscle reactions to pain expressions in another person's face

    The aim of this study was to record facial electromyograms (EMG) while subjects were viewing facial expressions of different pain levels (no pain, medium pain and very painful) and to find objective criteria for measuring the pain expressed in a human face. The study involved 18 students aged 21 years. Virtual facial expressions taken from a pain scale were presented in random order, and the subjects responded in two ways: (1) expressing with their own facial muscles the pain state matching the expression seen, or (2) expressing an imagined medium-pain expression regardless of the virtual expression shown. The magnitude of the EMG response of m. corrugator supercilii depended on the voluntarily performed facial pain expression of the subjects. EMG responses of voluntarily performed facial pain expressions to mirrored pain reactions were detected in two time intervals: 200–300 ms after stimulation in m. zygomaticus major, and 400–500 ms after stimulation in m. corrugator supercilii. These differences disappear after 1300 ms. In the second time interval, differences in the EMG responses of both muscle groups occur 1600 ms after stimulus presentation but disappear at different times: 3100 ms after stimulation in m. zygomaticus major and 4000 ms in m. corrugator supercilii. Constantly responding with a "medium pain" expression when recognizing faces of different pain expressions has an effect on the voluntary EMG responses of individual subjects. Images with the emotional expression "no pain" reduce m. corrugator supercilii activity and increase m. zygomaticus major activity in those observers.
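The core measurement in such a study, comparing rectified EMG amplitude in post-stimulus time windows against a baseline, can be sketched as follows. The sampling rate, window boundaries and synthetic burst below are illustrative assumptions, not the study's recording parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1000  # sampling rate in Hz (assumed)

# synthetic single-trial EMG: baseline noise with a burst 200–300 ms after the
# stimulus, mimicking the early m. zygomaticus major response window reported above
t = np.arange(0, 1.0, 1 / fs)
emg = rng.normal(0, 1.0, t.size)
burst = (t >= 0.2) & (t < 0.3)
emg[burst] += rng.normal(0, 4.0, burst.sum())

def window_amplitude(signal, start_ms, end_ms, fs=1000):
    # mean rectified amplitude in a post-stimulus window
    seg = signal[int(start_ms / 1000 * fs):int(end_ms / 1000 * fs)]
    return np.abs(seg).mean()

baseline = window_amplitude(emg, 0, 100)      # pre-response window
response = window_amplitude(emg, 200, 300)    # early response window
ratio = response / baseline
```

A response/baseline ratio well above 1 in a given window is the kind of objective, quantitative criterion the study seeks; the real analysis would of course average over trials and subjects per muscle.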

    Developmental changes in the critical information used for facial expression processing

    Facial expression recognition skills are known to improve across childhood and adolescence, but the mechanisms driving the development of these important social abilities remain unclear. This study investigates directly whether there are qualitative differences between child and adult processing strategies for these emotional stimuli. With a novel adaptation of the Bubbles reverse-correlation paradigm (Gosselin & Schyns, 2001), we added noise to expressive face stimuli and presented subsets of randomly sampled information from each image, at different locations and spatial frequency bands, across experimental trials. Results from our large developmental sample of 71 young children (6–9 years), 69 older children (10–13 years) and 54 adults uniquely reveal flexible profiles of strategic information use for categorisations of fear, sadness, happiness and anger at all ages. All three groups relied upon a distinct set of key facial features for each of these expressions, with fine-tuning of this diagnostic information (features and spatial frequency) observed across developmental time. The reported variability in the developmental trajectories for different emotional expressions is consistent with the notion of functional links between the refinement of information use and processing ability.
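The stimulus-generation side of the Bubbles paradigm, revealing an image only through randomly placed Gaussian apertures, can be sketched briefly. The aperture count, size and single-scale masking below are simplifying assumptions; the actual paradigm also samples across spatial frequency bands.

```python
import numpy as np

rng = np.random.default_rng(3)

def bubbles_mask(shape, n_bubbles, sigma):
    # sum of randomly placed Gaussian apertures, clipped to [0, 1]
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

face = rng.uniform(0, 1, (64, 64))   # stand-in for an expressive face image
mask = bubbles_mask(face.shape, n_bubbles=10, sigma=4.0)
stimulus = face * mask               # only the sampled regions remain visible

revealed_fraction = (mask > 0.5).mean()
```

Reverse correlation then relates each trial's mask to the observer's response, so regions whose visibility predicts correct categorisation emerge as the diagnostic features the study maps across age groups.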

    Software architecture for smart emotion recognition and regulation of the ageing adult

    This paper introduces the architecture of an emotion-aware, ambient-intelligent gerontechnological project named “Improvement of the Elderly Quality of Life and Care through Smart Emotion Regulation”. The objective of the proposal is to find solutions for improving the quality of life and care of elderly people who can or want to continue living at home, by using emotion regulation techniques. A series of sensors is used for monitoring the elderly person’s facial and gestural expression, activity and behaviour, as well as relevant physiological data. In this way the older person’s emotions are inferred and recognized. Music, colour and light are the stimulating means used to regulate their emotions towards a positive and pleasant mood. The paper then proposes a gerontechnological software architecture that enables real-time, continuous monitoring of the elderly and provides the best-tailored reactions of the ambience in order to regulate the older person’s emotions towards a positive mood. After describing the benefits of the approach for emotion recognition and regulation in the elderly, the eight levels that compose the architecture are described. This work was partially supported by the Spanish Ministerio de Economía y Competitividad/FEDER under grant TIN2013-47074-C2-1-R. José Carlos Castillo was partially supported by a grant from Iceland, Liechtenstein and Norway through the EEA Financial Mechanism, operated by Universidad Complutense de Madrid.
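The sense–infer–regulate loop at the heart of such an architecture can be sketched in a few lines. All names, thresholds and the rule table below are hypothetical stand-ins; the paper's eight-level architecture fuses far more channels than this.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    facial_valence: float   # -1 (negative) .. 1 (positive), from expression analysis
    heart_rate: int         # one physiological channel (hypothetical)

def infer_emotion(r: Reading) -> str:
    # toy fusion rule standing in for the architecture's recognition levels
    if r.facial_valence < -0.3 or r.heart_rate > 100:
        return "distressed"
    if r.facial_valence > 0.3:
        return "pleasant"
    return "neutral"

# regulation: map the inferred emotion to ambient stimuli (music, colour, light)
REGULATION = {
    "distressed": {"music": "calm", "colour": "soft blue", "light": "dimmed"},
    "neutral":    {"music": "ambient", "colour": "warm white", "light": "normal"},
    "pleasant":   {"music": "none", "colour": "unchanged", "light": "unchanged"},
}

def regulate(r: Reading) -> dict:
    return REGULATION[infer_emotion(r)]

action = regulate(Reading(facial_valence=-0.6, heart_rate=88))
```

In the real system this loop would run continuously, with the regulation table replaced by the architecture's tailored, per-person reaction selection.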