
    Modeling Neurodegenerative Disorders Through Affective Systems

    The goal of this undergraduate thesis is to study how deep learning based on affective domains can help with different tasks related to facial expression analysis, one of them being the detection of the neurodegenerative Parkinson's disease. We began by reviewing the state of the art of the main topics involved: facial analysis, affective domains, and Parkinson's disease. The related literature indicates that older adults with Parkinson's disease show reduced facial expressiveness, known as hypomimia. To detect hypomimia and be able to classify healthy subjects versus patients with the disease, we propose a series of experiments based on deep learning models for facial expression analysis. The experiments are divided into two phases. First, two affective databases (AffectNet and CFEE) and neural networks pre-trained for face recognition (VGG and ResNet) are used. These models are adapted to the affective domain through the proposed databases and the popular transfer learning techniques. Once the results are obtained, the model that best fits the Parkinson's scenario is selected. Leveraging the features learned by the model, transfer learning is applied again, this time to move from the affective domain to the Parkinson's domain, keeping all layers of the model except the last and adding a two-output classifier. With this new model we carry out the second phase: classifying a database of healthy subjects and patients with Parkinson's disease. Through this second experiment the model learns features related to patients with Parkinson's disease. Finally, we draw conclusions about what the resulting model can contribute to medicine, and we propose several directions for future work on this research.
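The domain-transfer step described above (keep every layer except the last, attach a two-output healthy-vs-Parkinson's classifier, freeze the affective features) can be sketched in PyTorch. The small sequential network below is a hypothetical stand-in for the fine-tuned VGG/ResNet backbone; layer sizes are illustrative, not the thesis's actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a VGG/ResNet backbone already adapted to an
# affective dataset (AffectNet/CFEE); in practice this would be loaded
# from torchvision with pretrained weights.
affective_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64 * 3, 128),  # feature-extractor layers
    nn.ReLU(),
    nn.Linear(128, 7),            # original head: 7 basic-emotion classes
)

# Transfer to the Parkinson's task: keep every layer except the last,
# then attach a new two-output classifier (healthy vs. Parkinson's).
backbone = nn.Sequential(*list(affective_model.children())[:-1])
for p in backbone.parameters():
    p.requires_grad = False       # freeze the learned affective features
pd_classifier = nn.Sequential(backbone, nn.Linear(128, 2))

x = torch.randn(4, 3, 64, 64)     # a dummy batch of face crops
logits = pd_classifier(x)
print(logits.shape)               # torch.Size([4, 2])
```

Only the new two-output head is trainable here; fine-tuning deeper layers is a separate design choice.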

    The Many Moods of Emotion

    This paper presents a novel approach to the facial expression generation problem. Building on the assumption, common in the psychological community, that emotion is intrinsically continuous, we first design our own continuous emotion representation: a 3-dimensional latent space derived from a neural network trained on discrete emotion classification. The resulting representation can be used to annotate large in-the-wild datasets, which are later used to train a Generative Adversarial Network. We first show that our model can map back to discrete emotion classes with objectively and subjectively better image quality than the usual discrete approaches, and also that it can cover the larger space of possible facial expressions, generating the many moods of emotion. Moreover, two axes in this space can be found that generate expression changes similar to those of traditional continuous representations such as arousal-valence. Finally, we show through visual interpretation that the third remaining dimension is closely related to the well-known dominance dimension from psychology.
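The core mechanism, a generator conditioned on a 3-dimensional continuous emotion code, can be illustrated with a minimal conditional-generator sketch. The architecture, sizes, and axis interpretations below are assumptions for illustration, not the paper's actual network.

```python
import torch
import torch.nn as nn

# Sketch of a generator conditioned on a 3-dimensional continuous emotion
# code (the learned space whose axes loosely match arousal, valence and
# dominance). All sizes here are illustrative.
class EmotionGenerator(nn.Module):
    def __init__(self, noise_dim=16, emo_dim=3, img_pixels=32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + emo_dim, 128),
            nn.ReLU(),
            nn.Linear(128, img_pixels),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z, emotion):
        # Concatenate noise with the continuous emotion code, as in a
        # conditional GAN; sweeping the code traverses the "moods".
        return self.net(torch.cat([z, emotion], dim=1))

gen = EmotionGenerator()
z = torch.randn(2, 16)
emotion = torch.tensor([[0.8, -0.2, 0.1],   # e.g. high arousal, slightly negative
                        [-0.5, 0.6, 0.0]])  # low arousal, positive
imgs = gen(z, emotion)
print(imgs.shape)  # torch.Size([2, 1024])
```

Interpolating the 3-D code between two points would generate the smooth expression transitions the paper exploits.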

    In-the-wild Facial Expression Recognition in Extreme Poses

    Facial expression recognition is an active research problem in computer vision. In recent years, the work has moved from lab environments to in-the-wild settings, which are challenging, especially under extreme head poses. Current expression detection systems typically try to factor out pose effects in order to gain general applicability. In this work, we take the opposite approach: we consider head poses explicitly and detect expressions within specific head poses. Our method has two parts: detect the head pose and assign it to one of several pre-defined head pose classes, then perform facial expression recognition within each pose class. Our experiments show that recognition with pose-class grouping performs much better than direct recognition without considering poses. We combine hand-crafted features (SIFT, LBP, and geometric features) with deep learning features to represent the expressions; the hand-crafted features are fed into the deep learning framework alongside the high-level deep features. For comparison, we implement SVM and random forest as prediction models. To train and test our method, we labeled a face dataset with the 6 basic expressions. Comment: Published at ICGIP201
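The "group by pose, then recognize within the group" pipeline can be sketched with one classifier per pose class. The features below are synthetic stand-ins for the combined SIFT/LBP/geometric/deep representation; the data and sizes are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for combined hand-crafted (SIFT/LBP/geometry) and
# deep features: two expression classes, 20 samples each, 8-D features,
# shifted per pose class so each pose has its own feature distribution.
def make_split(pose_shift):
    X0 = rng.normal(loc=pose_shift + 0.0, size=(20, 8))
    X1 = rng.normal(loc=pose_shift + 3.0, size=(20, 8))
    return np.vstack([X0, X1]), np.array([0] * 20 + [1] * 20)

# One classifier per pre-defined head-pose class.
pose_models = {}
for pose in range(3):
    X, y = make_split(pose_shift=pose * 10.0)
    pose_models[pose] = SVC(kernel="linear").fit(X, y)

# At test time: first estimate the pose class, then dispatch to its model.
X_test, y_test = make_split(pose_shift=10.0)   # pretend pose class 1
acc = pose_models[1].score(X_test, y_test)
```

A random forest could be swapped in for the SVC to mirror the paper's second prediction model.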

    ARE EMOTIONAL DISPLAYS AN EVOLUTIONARY PRECURSOR TO COMPOSITIONALITY IN LANGUAGE?

    Compositionality is a basic property of language, spoken and signed, according to which the meaning of a complex structure is determined by the meanings of its constituents and the way they combine (e.g., Jackendoff, 2011 for spoken language; Sandler, 2012 for constituents conveyed by face and body signals in sign language; Kirby & Smith, 2012 for the emergence of compositionality). Here we seek the foundations of this property in a more basic, and presumably prior, form of communication: the spontaneous expression of emotion. To this end, we ask whether features of facial expressions and body postures are combined and recombined to convey different complex meanings in extreme displays of emotion. There is evidence that facial expressions are processed in a compositional fashion (Chen & Chen, 2010). In addition, facial components such as nose wrinkles or eye opening elicit systematic confusion when decoding facial expressions of disgust versus anger and fear versus surprise, respectively (Jack et al., 2014), suggesting that other co-occurring signals contribute to their interpretation. In spontaneous emotional displays of athletes, the body, and not the face, better predicts participants' correct assessments of victory and loss pictures as conveying positive or negative emotions (Aviezer et al., 2012), suggesting at least that face and body make different contributions to the interpretation of the displays. Taken together, such studies lead to the hypothesis that emotional displays are compositional: each signal component, or possibly specific clusters of components (Du et al., 2014), may have its own interpretation and contribute to the complex meaning of the whole. On the assumption that emotional displays are older than language in evolution, our research program aims to determine whether the crucial property of compositionality is indeed present in communicative displays of emotion.

    Profile of the Musculi Facialis in Facial Expressions and Emotions Using the Facial Action Coding System on Presidential Candidate Prabowo

    The limbic system consists of several subsystems, each with its own role in supporting human emotion. Human emotion can be observed through facial expression, which is controlled by the musculi facialis. One of the tools used to determine basic human emotions from facial expression is the Facial Action Coding System (FACS) and its action units (AUs). This study aimed to identify the musculi facialis most often and most rarely used by Prabowo, and his emotions, during the first session of the 2014 presidential election debate. This was a retrospective descriptive study. The samples were 30 photos of Prabowo's emotional expressions. The observation was performed using FACS. The results showed that the most commonly used AU was AU 4 (26.92%), while the most rarely used AUs were AU 9 and AU 29, both at 0.96%. The observed emotional expressions were happy (6.67%), sad (6.67%), fear (6.67%), angry (46.67%), surprised (3.33%), and disgusted (3.33%). Conclusion: the most commonly used musculus facialis was the corrugator supercilii, whereas the most rarely used were the levator labii superioris alaeque nasi and the masseter. The emotional expressions, from the most to the least commonly observed, were angry; happy, sad, and fear; then surprised and disgusted.
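The frequency analysis behind percentages like "AU 4 (26.92%)" is a simple tally of observed action units across the photo set. The counts below are invented for illustration; only the tallying method mirrors the study.

```python
from collections import Counter

# Toy tally of FACS action units (AUs) observed across a set of photos,
# mimicking the study's frequency analysis (counts here are made up).
observations = ["AU4"] * 28 + ["AU12"] * 10 + ["AU9"] * 1 + ["AU29"] * 1

counts = Counter(observations)
total = sum(counts.values())
percentages = {au: round(100 * n / total, 2) for au, n in counts.items()}
print(percentages["AU4"])   # 70.0
```

The same tally, applied to coded emotion labels instead of AUs, yields the study's emotion percentages.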

    Describing Common Human Visual Actions in Images

    Which common human actions and interactions are recognizable in monocular still images? Which involve objects and/or other people? How many actions is a person performing at a time? We address these questions by exploring the actions and interactions that are detectable in the images of the MS COCO dataset. We make two main contributions. First, a list of 140 common 'visual actions', obtained by analyzing the largest on-line verb lexicon currently available for English (VerbNet) and the human sentences used to describe images in MS COCO. Second, a complete set of annotations for those 'visual actions', composed of subject-object pairs and the associated verb, which we call COCO-a (a for 'actions'). COCO-a is larger than existing action datasets in terms of the number of actions and instances of these actions, and is unique because it is data-driven rather than experimenter-biased. Other unique features are that it is exhaustive and that all subjects and objects are localized. A statistical analysis of the accuracy of our annotations and of each action, interaction, and subject-object combination is provided.
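The subject-verb-object annotation structure described above can be illustrated with a few toy records. The field names and values below are illustrative, not the dataset's actual schema.

```python
# Minimal illustration of COCO-a-style annotations: each record localizes
# a subject, an optional object, and the visual action (verb) linking them.
annotations = [
    {"subject": "person_1", "verb": "ride", "object": "bicycle_1"},
    {"subject": "person_1", "verb": "hold", "object": "phone_1"},
    {"subject": "person_2", "verb": "walk", "object": None},
]

# "How many actions is a person performing at a time?" — count per subject.
per_subject = {}
for ann in annotations:
    per_subject[ann["subject"]] = per_subject.get(ann["subject"], 0) + 1
print(per_subject)  # {'person_1': 2, 'person_2': 1}
```

Grouping the same records by verb or by subject-object pair supports the statistical analysis the paper reports.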

    An Open Source Assistant for Human Emotion Analytics and Control for Incidental Investigations

    This article presents a new approach to developing a system that assists in analyzing human emotions based on recorded histories of emotional attitudes, moods, and types. Histories of human behavior collected from incidental investigations, represented as open-source JSON text data files across various nodes, are searched using an elastic search algorithm whose parameters are defined by a rule-based engine for human emotion recognition. The system offers assistance or suggestions for an investigation based on statistical and predictive metrics generated from the search results. The paper also describes various applications in which the system could be implemented.
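The pipeline described above (JSON behavior records filtered by rule-engine predicates, then summarized statistically) can be sketched with the standard library. The records, rule predicates, and metric below are invented for illustration and stand in for the article's search backend.

```python
import json

# Behavior records stored as JSON text, as in the article's data files.
records_json = """[
  {"mood": "angry", "intensity": 0.9},
  {"mood": "calm",  "intensity": 0.2},
  {"mood": "angry", "intensity": 0.6}
]"""
records = json.loads(records_json)

# A "rule" from the rule-based engine is modeled here as a predicate;
# all rules must hold for a record to match the search.
rules = [
    lambda r: r["mood"] == "angry",
    lambda r: r["intensity"] >= 0.5,
]

matches = [r for r in records if all(rule(r) for rule in rules)]

# A simple statistical metric over the matches, feeding the system's
# suggestions for the investigation.
avg_intensity = sum(r["intensity"] for r in matches) / len(matches)
print(len(matches), round(avg_intensity, 2))  # 2 0.75
```

A production system would delegate the matching to a search cluster; the rule-to-query translation is the part sketched here.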