8 research outputs found

    T-Life: a programme for training emotional recognition ability in people with autism spectrum disorders

    Individuals with interpersonal relationship deficits, specifically with Autism Spectrum Disorder (ASD), have difficulty recognising emotions in themselves and in others, showing clear impairment in social functioning and in everyday interpersonal tasks. Facial Expression Recognition (FER) has been studied in an attempt to understand these deficits in emotional recognition. Numerous studies try to promote facial emotion recognition in individuals with ASD, but not through real-time face synthesis. New technologies, namely Virtual Reality, appear very promising for work with ASD (Eynon, 1997, cit. in Beardon et al., 2001; Strickland et al., 1996), as they meet the needs and characteristics of these individuals by allowing: control over stimulus presentation; the introduction of gradual modifications to support generalisation; safe learning situations; individualised and customised intervention; and interaction with computers, a motivating factor that enables learning in a non-anxiety-inducing way (Strickland, 1997). The approach we are developing and testing reflects all of these assumptions and takes the form of a game. What a feeling is a video game whose aim is to improve the ability of socially and emotionally impaired individuals to recognise emotions from facial expressions. Through a set of exercises, the game allows anyone of any age to interact with 3D models and learn about facial expressions. The game is based on real-time facial synthesis. In this paper we describe the mechanics of our learning methodology and present some guidelines for future work arising from the studies carried out with the tested prototypes.
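    The abstract states that the game is built on real-time facial synthesis with 3D models but gives no implementation details. Below is a minimal, hypothetical sketch of one common technique that could drive such synthesis, blendshape (morph-target) interpolation; the mesh, expression names, and weights are invented for illustration and are not taken from the paper.

        import numpy as np

        # Hedged sketch only: blendshape interpolation over a toy face mesh,
        # not the authors' implementation.
        NEUTRAL = np.zeros((4, 3))  # four vertices of a toy face mesh
        BLENDSHAPES = {
            # per-vertex offsets that deform the neutral mesh into an expression
            "smile": np.array([[0, 1, 0], [0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float),
            "brow_raise": np.array([[0, 0, 0], [0, 0, 0], [0, 2, 0], [0, 2, 0]], dtype=float),
        }

        def synthesise(weights):
            """Blend the neutral mesh towards each expression by its weight in [0, 1]."""
            mesh = NEUTRAL.copy()
            for name, w in weights.items():
                mesh = mesh + w * BLENDSHAPES[name]
            return mesh

        # A game loop would update the weights every frame to animate the expression.
        for t in np.linspace(0.0, 1.0, 5):
            frame = synthesise({"smile": t, "brow_raise": 0.3 * t})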

    Does My Face FIT?: A Face Image Task Reveals Structure and Distortions of Facial Feature Representation.

    Despite extensive research on face perception, few studies have investigated individuals' knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual's features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one's own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one's own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.
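    For illustration, here is a minimal sketch of the kind of analysis the abstract describes, using synthetic data only: per-feature localisation errors relative to the nose-tip anchor, split into horizontal and vertical components, followed by a factor analysis over the errors. The feature count, units, and two-factor choice are assumptions, not the study's actual design.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        # Synthetic stand-in data, not the study's: indicated and true feature
        # positions, both expressed relative to the nose tip as (x, y) in mm.
        rng = np.random.default_rng(1)
        n_participants, n_features = 50, 8   # assumed feature set (eyes, brows, mouth corners, chin)
        indicated = rng.normal(0.0, 5.0, size=(n_participants, n_features, 2))
        true_pos = rng.normal(0.0, 5.0, size=(n_participants, n_features, 2))

        errors = indicated - true_pos
        horizontal_err = errors[:, :, 0]     # analysed separately, as in the abstract
        vertical_err = errors[:, :, 1]

        # Overall vertical bias: a negative mean would indicate underestimated face height.
        print("mean vertical error:", vertical_err.mean())

        # Factor analysis to find subconfigurations of features with correlated errors.
        fa = FactorAnalysis(n_components=2, random_state=0)
        loadings = fa.fit(vertical_err).components_   # shape: (2 factors, n_features)
        print("factor loadings:\n", loadings.round(2))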

    Facial expression animation through action units transfer in latent space

    Get PDF
    Automatic animation synthesis has attracted much attention from the community. Because most existing methods operate on a small number of discrete expressions rather than continuous ones, the completeness and realism of the synthesised facial expressions are often compromised. In addition, easy manipulation from simple inputs and unsupervised processing, although important for automatic facial expression animation applications, have received relatively little attention. To address these issues, we propose an unsupervised, continuous, automatic facial expression animation approach based on action unit (AU) transfer in the latent space of generative adversarial networks. The expression descriptor, represented as an AU vector, is transferred onto the input image without the need for labelled image pairs, expression annotations, or further network training. We also propose a new approach to quickly generate the input image's latent code and to cluster the boundaries of different AU attributes from their latent codes. Two latent-code operators, vector addition and continuous interpolation, are leveraged to simulate facial expression animation along these boundaries in the latent space. Experiments show that the proposed approach is effective for facial expression translation and animation synthesis.
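    A minimal sketch of the two latent-code operators the abstract names, vector addition and continuous interpolation, assuming a pretrained GAN generator and a learned per-AU boundary direction in latent space. The generator, latent dimensionality, and boundary vectors below are placeholders, not the authors' released code or model.

        import numpy as np

        # Hedged sketch: latent-code editing operators assuming a generator
        # G(z) -> image and a unit normal of an AU decision boundary; both are
        # hypothetical stand-ins here, not the paper's actual components.

        def add_au_direction(z, boundary_normal, strength):
            """Vector addition: push a latent code across an AU boundary."""
            n = boundary_normal / np.linalg.norm(boundary_normal)
            return z + strength * n

        def interpolate_codes(z_src, z_dst, alpha):
            """Continuous interpolation between two latent codes (alpha in [0, 1])."""
            return (1.0 - alpha) * z_src + alpha * z_dst

        rng = np.random.default_rng(0)
        latent_dim = 512                                 # typical GAN latent size (assumption)
        z = rng.standard_normal(latent_dim)              # latent code of the input face
        au12_normal = rng.standard_normal(latent_dim)    # stand-in for a learned AU-12 (smile) boundary

        # Animate a smile by increasing the edit strength; each code would be decoded by G.
        frames = [add_au_direction(z, au12_normal, s) for s in np.linspace(0.0, 3.0, 10)]
        print(len(frames), frames[0].shape)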

    The Community-Level Interventions for Pre-eclampsia (CLIP) cluster randomised trials in Mozambique, Pakistan, and India: an individual participant-level meta-analysis

    No full text