
    "What is hidden behind the mask?" Facial emotion recognition at the time of COVID-19 pandemic in cognitively normal multiple sclerosis patients

    Social cognition deficits have been described in people with multiple sclerosis (PwMS), even in the absence of global cognitive impairment, and predominantly affect the ability to adequately process emotions from human faces. The COVID-19 pandemic has forced people to wear face masks, which may interfere with facial emotion recognition. In the present study, we therefore investigated the ability of PwMS to recognize emotions from faces wearing masks. We enrolled 42 cognitively normal relapsing-remitting PwMS and a matched group of 20 healthy controls (HCs). Participants underwent a facial emotion recognition task in which they had to identify which of the six basic emotions (happiness, anger, fear, sadness, surprise, disgust) was presented on faces with and without surgical masks. Results showed that face masks negatively affected emotion recognition in all participants (p < 0.001); in particular, PwMS showed globally worse accuracy than HCs (p = 0.005), driven more by the "no mask" (p = 0.021) than by the "masked" (p = 0.064) condition. Considering individual emotions, PwMS showed a selective impairment in the recognition of fear compared with HCs in both conditions investigated ("masked": p = 0.023; "no mask": p = 0.016). Face masks also negatively affected response times (p < 0.001); in particular, PwMS were globally faster than HCs (p = 0.024), especially in the "masked" condition (p = 0.013). Furthermore, a detailed characterization of the performance of PwMS and HCs in terms of accuracy and response speed is provided. These results demonstrate the effect of face masks on the ability of PwMS to process facial emotions, compared with HCs. Healthcare professionals working with PwMS during the COVID-19 outbreak should take this effect into consideration in their clinical practice. Implications for the everyday life of PwMS are also discussed.

    Cross-pose Facial Expression Recognition

    In real-world facial expression recognition (FER) applications, it is not practical for a user to enroll his/her facial expressions under different pose angles. A desirable property of a FER system is therefore to let the user enroll his/her facial expressions under a single pose, for example frontal, and still recognize them under different pose angles. In this paper, we address this problem and present a method to recognize six prototypic facial expressions of an individual across different pose angles. We use Partial Least Squares (PLS) to map the expressions from different poses into a common subspace in which the covariance between them is maximized. We show that PLS can be used effectively for facial expression recognition across poses by training on coupled expressions of the same identity from two different poses. Training this way lets the learned bases model the differences between expressions of different poses while excluding the effect of identity. We evaluated the proposed approach on the BU3DFE database [1], experimenting with intensity values and Gabor filters for local face representation. The two representations perform similarly when the frontal view is the input pose, but Gabor outperforms intensity for the other pose pairs. We also perform a detailed analysis of the parameters used in the experiments. We show that it is possible to successfully recognize expressions of an individual from arbitrary viewpoints given only his/her expressions from a single pose, the frontal pose being the most practical case. In particular, if the difference in view angle is relatively small (less than 30 degrees), accuracy is over 90%, and the correct recognition rate is often around 99% when there is only a 15-degree difference between the view angles of the matched faces. Overall, we achieved an average recognition rate of 87.6% when using frontal images as gallery and 86.6% when considering all pose pairs.
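
    As a rough illustration of the core idea, the following minimal sketch maps paired features from two poses into a common subspace with scikit-learn's PLSCanonical and compares them there. The data, dimensions, and similarity measure are illustrative assumptions, not the authors' implementation or the BU3DFE features.

        # Minimal sketch: cross-pose matching via Partial Least Squares.
        # Synthetic stand-ins for coupled feature vectors (same identity and
        # expression seen from two poses); not the authors' code or data.
        import numpy as np
        from sklearn.cross_decomposition import PLSCanonical

        rng = np.random.default_rng(0)
        n_pairs, n_features = 200, 512            # e.g., Gabor responses per face

        latent = rng.normal(size=(n_pairs, 8))    # shared expression factors (toy)
        frontal = latent @ rng.normal(size=(8, n_features))   # pose A features
        profile = latent @ rng.normal(size=(8, n_features))   # pose B features

        # Learn projections that maximize covariance between the two pose spaces.
        pls = PLSCanonical(n_components=8).fit(frontal, profile)

        # Project a frontal gallery face and a non-frontal probe into the common
        # subspace and compare them there, e.g. with cosine similarity.
        g, p = pls.transform(frontal[:1], profile[:1])
        cos = (g @ p.T) / (np.linalg.norm(g) * np.linalg.norm(p))
        print("similarity in common subspace:", cos.item())

    In the paper, expression classification would then operate on such subspace scores; the sketch shows only the projection and matching step.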

    A PCA approach to the object constancy for faces using view-based models of the face

    The analysis of object and face recognition by humans attracts a great deal of interest, mainly because of its many applications in fields including psychology, security, computer technology, medicine and computer graphics. The aim of this work is to investigate whether a PCA-based mapping approach can offer a new perspective on models of object constancy for faces in human vision. An existing system for facial motion capture and animation, developed for performance-driven animation of avatars, is adapted, improved and repurposed to study face representation in the context of viewpoint and lighting invariance. The main goal of the thesis is to develop and evaluate a new, view-based approach to viewpoint invariance that maps facial variation between different views to construct a multi-view representation of the face. The thesis describes a computer implementation of a model that uses PCA to generate example-based models of the face. The work explores the joint encoding of expression and viewpoint using PCA and the mapping between view-specific PCA spaces. Simultaneous, synchronised video recording of six views of the face was used to construct multi-view representations, which helped to investigate how well multiple views could be recovered from a single view via the content-addressable memory property of PCA. A similar approach was taken to lighting invariance. Finally, the possibility of constructing a multi-view representation from asynchronous view-based data was explored. The results of this thesis have implications for a continuing research problem in computer vision: recognising faces and objects from different perspectives and under different lighting. The work also provides a new approach to understanding viewpoint invariance and lighting invariance in human observers.
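
    The central mapping idea can be sketched in a few lines: fit one PCA model per view, then learn a linear map between the two coefficient spaces so that a second view can be predicted from a single input view. The sketch below uses synthetic data and illustrative dimensions under those assumptions; it is not the thesis implementation.

        # Minimal sketch: mapping between view-specific PCA spaces. One PCA is
        # fitted per view; a linear least-squares map between the coefficient
        # spaces then predicts the profile view from a frontal image alone.
        # All data and dimensions are synthetic stand-ins.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        n_faces, n_pixels = 300, 64 * 64
        Wf = rng.normal(size=(20, n_pixels))       # toy frontal-view generator
        Wp = rng.normal(size=(20, n_pixels))       # toy profile-view generator
        shared = rng.normal(size=(n_faces, 20))    # per-face factors
        frontal, profile = shared @ Wf, shared @ Wp    # paired training views

        pca_f = PCA(n_components=20).fit(frontal)  # view-specific model, frontal
        pca_p = PCA(n_components=20).fit(profile)  # view-specific model, profile

        # Least-squares map M between the coefficient spaces: C_f @ M ~ C_p.
        M, *_ = np.linalg.lstsq(pca_f.transform(frontal),
                                pca_p.transform(profile), rcond=None)

        # Recover the profile view of an unseen face from its frontal image.
        new = rng.normal(size=(1, 20))
        predicted = pca_p.inverse_transform(pca_f.transform(new @ Wf) @ M)
        err = np.linalg.norm(predicted - new @ Wp) / np.linalg.norm(new @ Wp)
        print(f"relative error of the recovered view: {err:.3f}")

    With the paired, synchronised training views assumed here, the map between coefficient spaces is essentially linear, which is what makes the simple least-squares fit sufficient for the sketch.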

    Technical Support for People with Dementia: Symposium, 30 September – 1 October 2013

    How should technical systems for supporting people with dementia be designed? What do patients, relatives, caregivers, and physicians want? And what can technical assistance systems actually accomplish? A symposium on these questions took place at KIT in October 2013. Experts from various disciplines came together to discuss the current state of their respective fields. This volume gives an overview of the findings from these different perspectives.

    Contextual Person Identification in Multimedia Data

    We propose methods to improve automatic person identification, regardless of whether a face is visible, by integrating multiple cues, including multiple modalities and contextual information. We propose a joint learning approach that uses contextual information from videos to improve learned face models, and we further integrate additional modalities in a global fusion framework. We evaluate our approaches on a novel TV series data set consisting of over 100 000 annotated faces.
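
    One simple way to picture a global fusion of modalities is a weighted score-level combination over candidate identities; the modalities, weights, and scores below are illustrative assumptions, and the paper's actual fusion framework may differ.

        # Minimal sketch: score-level fusion of per-modality identification
        # scores for one video track. Modalities, weights, and scores are
        # illustrative assumptions, not the paper's fusion framework.
        import numpy as np

        candidates = ["alice", "bob", "carol"]
        scores = {                                     # one score per candidate
            "face":     np.array([0.70, 0.20, 0.10]),
            "speaker":  np.array([0.55, 0.30, 0.15]),
            "clothing": np.array([0.40, 0.35, 0.25]),  # contextual cue
        }
        weights = {"face": 0.6, "speaker": 0.3, "clothing": 0.1}

        # Normalize each modality's scores, then take a weighted sum.
        fused = sum(w * scores[m] / scores[m].sum() for m, w in weights.items())
        print("identified as:", candidates[int(np.argmax(fused))])  # -> alice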