
    Ensemble of Hankel Matrices for Face Emotion Recognition

    In this paper, a facial emotion is considered the result of the composition of multiple concurrent signals, each corresponding to the movements of a specific facial muscle. These concurrent signals are represented by a set of multi-scale appearance features, each of which may be correlated with one or more of the concurrent signals. Extracting these appearance features from a sequence of face images yields a set of time series. This paper proposes to use the dynamics governing each appearance-feature time series to discriminate among different facial emotions. To this purpose, an ensemble of Hankel matrices corresponding to the extracted time series is used for emotion classification within a framework that combines a nearest-neighbor classifier and a majority-vote scheme. Experimental results on a publicly available dataset show that the adopted representation is promising and yields state-of-the-art accuracy in emotion classification.
    Comment: Paper to appear in Proc. of ICIAP 2015. arXiv admin note: text overlap with arXiv:1506.0500
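The pipeline sketched in the abstract (per-feature Hankel matrices, nearest-neighbor matching, majority vote) can be illustrated as follows. This is a minimal reconstruction, not the authors' implementation: the number of Hankel rows, the Frobenius normalization, and the Gram-matrix distance are all assumed choices.

```python
import numpy as np

def hankel_matrix(series, num_rows=4):
    """Embed the dynamics of a 1-D time series in a Hankel matrix
    (constant anti-diagonals), normalized for magnitude invariance."""
    series = np.asarray(series, dtype=float)
    num_cols = len(series) - num_rows + 1
    if num_cols < 1:
        raise ValueError("series too short for the requested number of rows")
    H = np.empty((num_rows, num_cols))
    for i in range(num_rows):
        H[i] = series[i:i + num_cols]
    norm = np.linalg.norm(H)
    return H / norm if norm > 0 else H

def hankel_distance(Ha, Hb):
    """Dissimilarity between two normalized Hankel matrices
    (assumed choice: Frobenius distance between H H^T Gram matrices)."""
    return np.linalg.norm(Ha @ Ha.T - Hb @ Hb.T)

def classify_by_majority_vote(query_series_set, train_data):
    """One nearest-neighbor vote per appearance-feature time series;
    the ensemble label is the majority over all per-series votes."""
    votes = []
    for k, series in enumerate(query_series_set):
        Hq = hankel_matrix(series)
        best_label, best_d = None, np.inf
        for label, series_set in train_data:
            d = hankel_distance(Hq, hankel_matrix(series_set[k]))
            if d < best_d:
                best_label, best_d = label, d
        votes.append(best_label)
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]
```

Because each matrix is normalized, a rescaled copy of a training sequence maps to the same Hankel representation, which is what makes the per-feature nearest-neighbor votes robust to signal magnitude.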

    Facial expression detection on partial and full face images using machine learning methods

    Facial expressions are an important part of interpersonal communication and also play an important role in human-machine interaction. Facial expression detection is used for decision-making in applications such as criminal identification, driver attention monitoring, and patient monitoring. For this reason, automatic facial expression recognition is a popular machine learning research area. This thesis presents facial expression classification studies, which can be grouped under two headings: analysis of partial face images with classical machine learning methods, and analysis of whole face images with deep learning methods. In the first application, unlike previous studies in the literature, classification of facial expressions was performed using only the regions containing the eyes and eyebrows, and high accuracy was achieved. With this approach, expression detection is unaffected by lower-face occlusions or mouth movements during speech, and through the selection of robust features the system can run with fewer features on devices with limited resources. Comparative experiments further showed that the generalization ability of the proposed system is high.
The remaining facial expression classification studies in the thesis were carried out on whole face images using deep learning methods. One of the proposed approaches is segmentation of facial parts with a CNN. In the resulting segmented images, the features relevant to facial expression are preserved while no personal data are retained, so personal privacy is also protected. Furthermore, combining the segmented image with the original face image increased recognition accuracy by focusing on the eyebrow, eye, and mouth regions that are critical for facial expression. The proposed CNN classification architecture thus forces the earlier layers of the network to learn to detect and localize these facial regions, providing decoupled and guided training.
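The partial-face idea from the first application can be sketched in a few lines. The crop fraction, the HOG-like orientation descriptor, and the nearest-centroid classifier below are illustrative stand-ins; the abstract does not specify the exact features or classifier used in the thesis.

```python
import numpy as np

def eye_region(face, fraction=0.45):
    """Crop the upper part of a grayscale face image, which contains
    the eyes and eyebrows (fraction is an assumed, tunable value)."""
    rows = int(face.shape[0] * fraction)
    return face[:rows, :]

def orientation_histogram(patch, bins=8):
    """A compact HOG-like descriptor: histogram of gradient
    orientations weighted by gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi            # orientations in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

class NearestCentroid:
    """Minimal classical classifier: one centroid per expression label."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = {c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
                           for c in self.labels_}
        return self
    def predict(self, x):
        return min(self.labels_,
                   key=lambda c: np.linalg.norm(x - self.centroids_[c]))
```

Keeping the descriptor small, as here, reflects the abstract's point that fewer, robust features make the classifier suitable for devices with limited resources.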

    Using Hankel matrices for dynamics-based facial emotion recognition and pain detection

    This paper proposes a new approach to model the temporal dynamics of a sequence of facial expressions. To this purpose, a sequence of Face Image Descriptors (FID) is regarded as the output of a Linear Time Invariant (LTI) system. The temporal dynamics of such a sequence of descriptors are represented by means of a Hankel matrix. The paper presents different strategies to compute dynamics-based representations of a sequence of FID, and reports the classification accuracy of the proposed representations within different standard classification frameworks. The representations have been validated in two very challenging application domains: emotion recognition and pain detection. Experiments on two publicly available benchmarks and comparison with state-of-the-art approaches demonstrate that the dynamics-based FID representation attains competitive performance when off-the-shelf classification tools are adopted.
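The core construction, a block-Hankel matrix built from a sequence of descriptor vectors, can be sketched as follows; the column-wise stacking and Frobenius normalization are illustrative choices, not the paper's exact recipe. A useful sanity check of the LTI view is that when the sequence really is the output of a low-order LTI system, the Hankel matrix has correspondingly low rank.

```python
import numpy as np

def block_hankel(descriptors, block_rows):
    """Stack a sequence of descriptor vectors (one FID per frame) into a
    block-Hankel matrix: column j holds descriptors j..j+block_rows-1
    concatenated, so column-to-column shifts encode the LTI dynamics."""
    D = np.asarray(descriptors, dtype=float)      # shape (T, d)
    T, d = D.shape
    num_cols = T - block_rows + 1
    if num_cols < 1:
        raise ValueError("sequence too short for the requested block rows")
    H = np.empty((block_rows * d, num_cols))
    for j in range(num_cols):
        H[:, j] = D[j:j + block_rows].ravel()
    norm = np.linalg.norm(H)
    return H / norm if norm else H
```

For a 2-state autonomous system x_{t+1} = A x_t observed directly, every column lies in the span of two vectors, so the block-Hankel matrix has rank 2 regardless of its size; this rank bound is what ties the representation to the order of the generating LTI system.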

    Automatic Monitoring of Physical Activity Related Affective States for Chronic Pain Rehabilitation

    Chronic pain is a prevalent disorder that affects engagement in valued activities. This is a consequence of cognitive and affective barriers to physical functioning, particularly low self-efficacy and emotional distress (i.e. fear/anxiety and depressed mood). Although clinicians intervene to reduce these barriers, their support is limited to clinical settings, and its effects do not easily transfer to everyday functioning, which is key to self-management for the person with pain. Analysis carried out in parallel with this thesis points to untapped opportunities for technology to support pain self-management or improved function in everyday activity settings. With this long-term goal for technology in mind, this thesis investigates the possibility of building systems that can automatically detect relevant psychological states from movement behaviour, making three main contributions. First, extended annotation of an existing dataset of participants with and without chronic pain performing physical exercises is used to develop a new model of chronic disabling pain in which anxiety acts as a mediator between pain and self-efficacy, emotional distress, and movement behaviour. Unlike previous models, which are largely theoretical and draw from broad measures of these variables, the proposed model uses event-specific data that better characterise the influence of pain and related states on engagement in physical activities. The model further shows that the relationship between these states and guarding during movement (the behaviour specified in the pain behaviour literature) is complex, and that behaviour descriptions of a lower level of granularity are needed for automatic classification of the states. The model also suggests that some of the states may be expressed via other movement behaviour types.
Second, addressing this using the aforementioned dataset with the additional labels, and through an in-depth analysis of movement, this thesis provides an extended taxonomy of bodily cues for the automatic classification of pain, self-efficacy and emotional distress. In particular, the thesis provides understanding of novel cues of these states and deeper understanding of known cues of pain and emotional distress. Using machine learning algorithms, average F1 scores (mean across movement types) of 0.90, 0.87, and 0.86 were obtained for automatic detection of three levels of pain and self-efficacy and of two levels of emotional distress respectively, based on the bodily cues described, thus supporting the discriminative value of the proposed taxonomy. Third, building on this, the thesis contributes a new dataset of both functional and exercise movements of people with chronic pain, collected with low-cost wearable sensors designed for this thesis and informed by the previous studies. The modelling results, an average F1 score of 0.78 for two-level detection of both pain and self-efficacy, point to the possibility of automatic monitoring of these states in everyday functioning. With these contributions, the thesis provides understanding and tools necessary to advance the area of pain-related affective computing and groundbreaking insight that is critical to the understanding of chronic pain. Finally, the contributions lay the groundwork for physical rehabilitation technology to facilitate everyday functioning of people with chronic pain.
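A minimal sketch of the evaluation side of such a system follows. The movement features (range of motion, mean absolute jerk from a joint-angle signal) are hypothetical stand-ins for the thesis's bodily-cue taxonomy, and the binary F1 score is the standard metric reported above, written out explicitly.

```python
import numpy as np

def window_features(angles, fs=50):
    """Per-window movement features from a joint-angle signal.
    Hypothetical choices: range of motion and mean absolute jerk
    (third derivative magnitude), at sampling rate fs in Hz."""
    a = np.asarray(angles, dtype=float)
    vel = np.diff(a) * fs                  # angular velocity
    jerk = np.diff(vel, n=2) * fs * fs     # third derivative of angle
    return np.array([a.max() - a.min(), np.abs(jerk).mean()])

def f1_score_binary(y_true, y_pred):
    """F1 = 2PR / (P + R) for binary labels in {0, 1}."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Averaging such per-window F1 scores across movement types is how summary figures like the 0.78 two-level detection result would be reported.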