
    Automatic analysis of facilitated taste-liking

    This paper focuses on: (i) automatic recognition of taste-liking from facial videos by comparatively training and evaluating models with engineered features and state-of-the-art deep learning architectures, and (ii) analysis of the classification results along the aspects of facilitator type and the gender, ethnicity, and personality of the participants. To this aim, a new beverage tasting dataset acquired under different conditions (human vs. robot facilitator and priming vs. non-priming facilitation) is utilised. The experimental results show that: (i) the deep spatiotemporal architectures provide better classification results than the engineered-feature models; (ii) the classification results for all three classes of liking, neutral, and disliking reach F1 scores in the range of 71%-91%; (iii) the personality-aware network that fuses participants’ personality information with facial reaction features provides improved classification performance; and (iv) classification results vary across participant gender, but not across facilitator type or participant ethnicity.
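    The per-class F1 scores reported above can be computed as the harmonic mean of per-class precision and recall. A minimal sketch, with illustrative label names and made-up predictions (not data from the paper's dataset):

```python
# Hypothetical sketch: per-class F1 for a 3-class taste-liking task
# (liking / neutral / disliking). Labels and predictions are illustrative.

def per_class_f1(y_true, y_pred, classes):
    scores = {}
    for c in classes:
        # Count true positives, false positives and false negatives for class c.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        # F1 is the harmonic mean of precision and recall.
        scores[c] = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
    return scores

y_true = ["liking", "neutral", "disliking", "liking", "neutral", "disliking"]
y_pred = ["liking", "neutral", "liking", "liking", "disliking", "disliking"]
print(per_class_f1(y_true, y_pred, ["liking", "neutral", "disliking"]))
```

    Reporting F1 per class, rather than overall accuracy, matters here because the three liking classes need not be balanced in a tasting dataset.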

    Eigenface algorithm-based facial expression recognition in conversations - an experimental study

    Recognising facial expressions is important in many fields, such as human-computer interfaces. Although various approaches have been widely used in facial expression recognition systems, many practical problems remain in achieving the best implementation outcomes. Most systems are tested on lab-based facial expressions, which may be unnatural; in particular, many systems have problems recognising the facial expressions used during conversation. This paper conducts an experimental study of Eigenface algorithm-based facial expression recognition. It primarily aims to investigate performance on both lab-based facial expressions and facial expressions used during conversation, and to probe the problems arising from recognising facial expressions in conversations. The study is carried out using the author’s facial expressions as the basis for the lab-based expressions, and the facial expressions of one elderly person during conversation. The experiment showed good results on lab-based facial expressions, but some issues were observed in the case of facial expressions obtained in conversation. By analysing the experimental results, future research directions are highlighted: investigating how to recognise special emotions such as a wry smile, and how to deal with interference in the lower part of the face when speaking.
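    The Eigenface approach underlying the study is essentially PCA on flattened face images followed by nearest-neighbour matching in the reduced eigenspace. A minimal sketch with synthetic stand-in "images" (numpy only; the 64-pixel arrays and expression labels are placeholders, not the study's data):

```python
# Eigenface-style sketch: PCA via SVD on flattened images, then
# nearest-neighbour classification in eigenface space.
import numpy as np

rng = np.random.default_rng(0)

def fit_eigenfaces(train, k):
    """train: (n_samples, n_pixels) flattened images; returns mean + top-k components."""
    mean = train.mean(axis=0)
    # SVD of the centered data yields the principal components ("eigenfaces").
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:k]

def project(x, mean, eigenfaces):
    # Coordinates of x in the eigenface subspace.
    return (x - mean) @ eigenfaces.T

# Two synthetic "expression" clusters standing in for labelled face crops.
happy = rng.normal(0.0, 0.1, (10, 64)) + np.linspace(0, 1, 64)
neutral = rng.normal(0.0, 0.1, (10, 64)) + np.linspace(1, 0, 64)
train = np.vstack([happy, neutral])
labels = ["happy"] * 10 + ["neutral"] * 10

mean, eigfaces = fit_eigenfaces(train, k=5)
coords = project(train, mean, eigfaces)

# Classify a probe image by its nearest neighbour in eigenface space.
probe = rng.normal(0.0, 0.1, 64) + np.linspace(0, 1, 64)
dists = np.linalg.norm(coords - project(probe, mean, eigfaces), axis=1)
print(labels[int(np.argmin(dists))])
```

    The study's observed weakness in conversational settings is consistent with this design: mouth movement during speech perturbs exactly the pixel regions the eigenspace projection relies on.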

    A survey of the state-of-the-art techniques for cognitive impairment detection in the elderly

    With a growing number of elderly people in the UK, more and more of them suffer from various kinds of cognitive impairment. Cognitive impairment can be divided into different stages, such as mild cognitive impairment (MCI) and severe cognitive impairment such as dementia. Its early detection can be of great importance; however, it is challenging to detect cognitive impairment in the early stage with high accuracy and low cost, when most of the symptoms may not yet be fully expressed. This survey paper reviews the state-of-the-art techniques for the early detection of cognitive impairment and compares their advantages and weaknesses. With the goal of building an effective and low-cost automatic system for detecting and monitoring cognitive impairment in a wide range of elderly people, the paper highlights applications of computer vision techniques for early detection through monitoring facial expressions, body movements and eye movements. In addition to the technique review, the main research challenges for early detection with high accuracy and low cost are analysed in depth. Through carefully comparing and contrasting the currently popular techniques, some important research directions are pointed out and highlighted from the viewpoints of the authors alone.

    Comparing methods for assessment of facial dynamics in patients with major neurocognitive disorders

    Assessing facial dynamics in patients with major neurocognitive disorders, and specifically with Alzheimer’s disease (AD), has proven highly challenging. Classically, such assessment is performed by clinical staff, who evaluate the verbal and non-verbal language of AD patients, since these patients have lost a substantial amount of their cognitive capacity, and hence communication ability. In addition, patients need to communicate important messages, such as discomfort or pain. Automated methods would support the current healthcare system by allowing for telemedicine, i.e., less costly and logistically less burdensome examination. In this work we compare methods for assessing facial dynamics such as talking, singing, neutral and smiling in AD patients, captured during music mnemotherapy sessions. Specifically, we compare 3D ConvNets, Very Deep Neural Network based Two-Stream ConvNets, and Improved Dense Trajectories. We have adapted these methods from prominent action recognition methods, and our promising results suggest that they generalise well to the context of facial dynamics. The Two-Stream ConvNets in combination with ResNet-152 obtain the best performance on our dataset, capturing even minor facial dynamics well, and have thus sparked high interest in the medical community.
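    A two-stream architecture of the kind compared above classifies video by combining an appearance (RGB-frame) stream with a motion (optical-flow) stream, typically by late fusion of their class scores. A minimal numpy sketch of that fusion step, with illustrative logit values rather than real network outputs:

```python
# Hypothetical late-fusion sketch of a two-stream setup: each stream
# produces class logits for {talking, singing, neutral, smiling}; the
# fused prediction averages the two softmax distributions.
import numpy as np

CLASSES = ["talking", "singing", "neutral", "smiling"]

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def fuse(spatial_logits, temporal_logits, w_spatial=0.5):
    """Weighted average of the two streams' class distributions."""
    p = (w_spatial * softmax(spatial_logits)
         + (1.0 - w_spatial) * softmax(temporal_logits))
    return CLASSES[int(np.argmax(p))], p

# The appearance stream is unsure; the motion stream favours "smiling".
spatial = np.array([1.0, 0.2, 1.1, 1.0])
temporal = np.array([0.1, 0.0, 0.3, 2.5])
label, probs = fuse(spatial, temporal)
print(label)
```

    Equal stream weights are a common default; in practice the weighting (and the backbone feeding each stream, e.g. ResNet-152) is tuned on validation data.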

    Investigating Bias and Fairness in Facial Expression Recognition.

    No full text
    Recognition of expressions of emotions and affect from facial images is a well-studied research problem in the fields of affective computing and computer vision, with a large number of datasets available containing facial images and corresponding expression labels. However, virtually none of these datasets have been acquired with consideration of fair distribution across the human population. Therefore, in this work we undertake a systematic investigation of bias and fairness in facial expression recognition by comparing three different approaches, namely a baseline, an attribute-aware and a disentangled approach, on two well-known datasets, RAF-DB and CelebA. Our results indicate that: (i) data augmentation improves the accuracy of the baseline model, but this alone is unable to mitigate the bias effect; (ii) both the attribute-aware and the disentangled approaches equipped with data augmentation perform better than the baseline approach in terms of accuracy and fairness; (iii) the disentangled approach is the best for mitigating demographic bias; and (iv) the bias mitigation strategies are more suitable in the presence of uneven attribute distribution or an imbalanced number of subgroup data.
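    The kind of fairness audit described above starts from per-subgroup accuracy and the worst-case gap between subgroups. A minimal sketch with made-up attribute names and records (not results from the paper):

```python
# Illustrative fairness audit: accuracy per demographic subgroup plus
# the largest accuracy gap between any two subgroups.

def subgroup_accuracy(records):
    """records: list of (attribute_value, prediction_correct: bool) pairs."""
    totals, hits = {}, {}
    for attr, correct in records:
        totals[attr] = totals.get(attr, 0) + 1
        hits[attr] = hits.get(attr, 0) + int(correct)
    return {a: hits[a] / totals[a] for a in totals}

# Hypothetical evaluation records for two demographic subgroups.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
acc = subgroup_accuracy(records)
gap = max(acc.values()) - min(acc.values())
print(acc, round(gap, 3))
```

    A model can then be compared against a baseline both on overall accuracy and on this gap, which is the sense in which the disentangled approach above "mitigates demographic bias".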