8 research outputs found

    Cross-database representation and transfer learning of facial expressions

    Get PDF
    The face is a key modality for conveying emotions and inferring intention, which makes face analysis an important factor in understanding the underlying mechanisms of interaction. Automatic facial expression recognition promises to deliver a significant fraction of the currently missing non-verbal component of human-machine interaction, enabling experiences that more closely model interpersonal communication. This thesis presents three major contributions aimed at overcoming issues that currently prevent modern face analysis solutions from being applied in practice. The problem of reliable automatic detection of facial actions is first considered from the perspective of hand-crafted features, exploring ways to highlight features that capture interpersonal commonalities in facial expression appearance while disregarding those caused by environmental conditions and subjective differences. It is then approached from the Multi-Task and Transfer Learning perspective, presenting solutions for cost- and performance-efficient training of facial expression detection algorithms. Finally, a novel multi-database heterogeneous data representation is proposed, providing an environment for training and evaluating face analysis solutions that generalise better.

    Learning to transfer: transferring latent task structures and its application to person-specific facial action unit detection

    Get PDF
    In this article we explore the problem of constructing person-specific models for the detection of facial Action Units (AUs), addressing the problem from the point of view of Transfer Learning and Multi-Task Learning. Our starting point is the fact that some expressions, such as smiles, are very easily elicited, annotated, and automatically detected, while others are much harder to elicit and to annotate. We thus consider a novel problem: all AU models for the target subject are to be learnt using person-specific annotated data for a reference AU (AU12 in our case), and little or no data regarding the target AU. To design such a model, we propose a novel Multi-Task Learning and associated Transfer Learning framework in which we consider relations both across subjects and across AUs, that is, a tensor structure among the tasks. Our approach hinges on learning the latent relations among tasks using a single reference AU, and then transferring these latent relations to other AUs. We show that we can effectively make use of the annotated data for AU12 when learning other person-specific AU models, even in the absence of data for the target task. Finally, we show the excellent performance of our method when small amounts of annotated data for the target tasks are made available.
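    The core idea, learning a latent task structure from a well-annotated reference AU and reusing it for data-poor target AUs, can be illustrated with a short sketch. The code below is not the authors' exact method; it is a simplified linear stand-in, written in Python with numpy and scikit-learn, in which person-specific AU12 classifiers are factorised into a shared latent basis and per-subject coefficients, and target-AU classifiers are then constrained to that basis. The function names, the rank k, and the use of logistic regression are illustrative assumptions.

    # Hedged sketch only: a linear stand-in for transferring a latent task
    # structure learnt on a reference AU (e.g. AU12) to a target AU.
    # X is a list of per-subject feature matrices; y_ref holds per-subject
    # binary AU12 labels; y_tgt holds (possibly very few) target-AU labels.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fit_reference_basis(X, y_ref, k=5):
        """Fit one linear AU12 detector per subject, then factorise the
        stacked weight matrix (subjects x features) into per-subject latent
        coefficients A and a shared latent basis B via a truncated SVD."""
        W = []
        for Xs, ys in zip(X, y_ref):
            clf = LogisticRegression(max_iter=1000).fit(Xs, ys)
            W.append(clf.coef_.ravel())
        U, S, Vt = np.linalg.svd(np.vstack(W), full_matrices=False)
        A = U[:, :k] * S[:k]      # per-subject latent coefficients
        B = Vt[:k]                # shared latent basis (the transferred part)
        return A, B

    def fit_target_au(Xs, ys, B):
        """Learn a person-specific target-AU detector inside the latent
        space spanned by B, using whatever little target data exists."""
        Z = Xs @ B.T              # project features onto the latent basis
        clf = LogisticRegression(max_iter=1000).fit(Z, ys)
        return clf.coef_.ravel() @ B   # weights back in feature space

    def zero_shot_target(a_subject, B):
        """With no target data at all, reuse the subject's reference-AU
        latent coefficients as a crude person-specific prior."""
        return a_subject @ B

    Constraining the target detector to the shared basis is what lets a small (or empty) target set borrow statistical strength from AU12; the paper's tensor formulation couples subjects and AUs more tightly than this sketch does.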

    Distribution-based iterative pairwise classification of emotions in the wild using LGBP-TOP

    No full text
    Automatic facial expression analysis promises to be a game-changer in many application areas. But before this promise can be fulfilled, it has to move from the laboratory into the wild. The Emotion Recognition in the Wild challenge provides an opportunity to develop approaches in this direction. We propose a novel Distribution-based Pairwise Iterative Classification scheme, which outperforms standard multi-class classification on this challenge data. We also verify that the recently proposed dynamic appearance descriptor, Local Gabor Binary Patterns on Three Orthogonal Planes (LGBP-TOP), performs well on this real-world data, indicating that it is robust to the type of facial misalignments that can be expected in such scenarios. Finally, we provide details of ACTC, our affective computing tools on the cloud, which is a new resource for researchers in the field of affective computing.
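    For illustration only, the snippet below shows a generic one-vs-one pairwise voting scheme over emotion classes in Python with scikit-learn; it is not the paper's distribution-based iterative variant, and it assumes the LGBP-TOP descriptors have already been extracted into a feature matrix.

    # Generic pairwise (one-vs-one) emotion classification as a hedged
    # stand-in for the distribution-based iterative scheme described above.
    # X_train is assumed to hold pre-computed LGBP-TOP feature vectors.
    from sklearn.multiclass import OneVsOneClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    def train_pairwise_emotion_classifier(X_train, y_train):
        """Train one binary SVM per pair of emotion labels; prediction is
        decided by a majority vote across all pairwise classifiers."""
        model = OneVsOneClassifier(
            make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000)))
        return model.fit(X_train, y_train)

    # Hypothetical usage:
    #   clf = train_pairwise_emotion_classifier(X_train, y_train)
    #   predictions = clf.predict(X_test)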

    FERA 2015 - second Facial Expression Recognition and Analysis challenge

    No full text
    Despite efforts towards evaluation standards in facial expression analysis (e.g. FERA 2011), there is a need for up-to-date standardised evaluation procedures, focusing in particular on current challenges in the field. One of the challenges that is actively being addressed is the automatic estimation of expression intensities. To continue to provide a standardisation platform and to help the field progress beyond its current limitations, the FG 2015 Facial Expression Recognition and Analysis challenge (FERA 2015) will challenge participants to estimate FACS Action Unit (AU) intensity as well as AU occurrence on a common benchmark dataset with reliable manual annotations. Evaluation will be done using a clear and well-defined protocol. In this paper we present the second such challenge in automatic recognition of facial expressions, to be held in conjunction with the 11th IEEE conference on Face and Gesture Recognition, May 2015, in Ljubljana, Slovenia. Three sub-challenges are defined: the detection of AU occurrence, the estimation of AU intensity for pre-segmented data, and fully automatic AU intensity estimation. In this work we outline the evaluation protocol, the data used, and the results of a baseline method for the three sub-challenges.
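    The abstract does not spell out the scoring functions, so the Python sketch below only illustrates, under assumption, how the sub-challenges could be scored: binary F1 per AU for the occurrence task and an intra-class correlation, ICC(3,1), for the intensity tasks; the authoritative protocol is the one defined in the paper.

    # Illustrative scoring only; the metric choices (F1 for occurrence,
    # ICC(3,1) for intensity) are assumptions, not the official protocol.
    import numpy as np
    from sklearn.metrics import f1_score

    def score_occurrence(y_true, y_pred):
        """AU occurrence sub-challenge: per-AU binary F1 score."""
        return f1_score(y_true, y_pred)

    def icc_3_1(labels, preds):
        """ICC(3,1): treat ground-truth and predicted intensities as two
        'raters' scoring the same frames and measure their consistency."""
        data = np.column_stack([labels, preds]).astype(float)   # n x 2
        n, k = data.shape
        grand_mean = data.mean()
        ss_targets = k * ((data.mean(axis=1) - grand_mean) ** 2).sum()
        ss_raters = n * ((data.mean(axis=0) - grand_mean) ** 2).sum()
        ss_error = ((data - grand_mean) ** 2).sum() - ss_targets - ss_raters
        bms = ss_targets / (n - 1)
        ems = ss_error / ((n - 1) * (k - 1))
        return (bms - ems) / (bms + (k - 1) * ems)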
