8 research outputs found
T-Life: a training program for emotion recognition ability in people with autism spectrum disorders
Individuals with interpersonal relationship deficits, specifically Autism Spectrum Disorder (ASD), have difficulty recognising emotions in themselves and in others, showing clear impairment in social functioning and in everyday interpersonal tasks. Facial Expression Recognition (FER) has been studied in an attempt to understand deficits in emotion recognition. Numerous works attempt to promote facial emotion recognition in individuals with ASD, but not through real-time face synthesis. New technologies, namely Virtual Reality, seem very promising for work with ASD (Eynon, 1997, cit. in Beardon et al., 2001; Strickland et al., 1996), as they meet the needs and characteristics of these individuals by allowing: control over stimulus presentation; the introduction of gradual modifications to support generalisation; guaranteed safe learning situations; individualised, customised intervention; and interaction with computers, a motivating factor that enables learning in a non-anxiety-inducing way (Strickland, 1997). The approach we are developing and testing reflects all of these assumptions and takes the form of a game. What a feeling is a video game whose goal is to improve the ability of socially and emotionally impaired individuals to recognise emotions through facial expression. Through a set of exercises, the game allows anyone of any age to interact with 3D models and learn about facial expressions. The game is based on real-time facial synthesis. In this paper we describe the mechanics of our learning methodology and present some guidelines for future work resulting from the studies conducted with the prototypes tested.
Does My Face FIT?: A Face Image Task Reveals Structure and Distortions of Facial Feature Representation.
Despite extensive research on face perception, few studies have investigated individuals' knowledge of the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true locations of the same individual's features in a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in the representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one's own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one's own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.
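The error analysis described above can be sketched numerically. This is a minimal illustration, not the authors' analysis pipeline: the coordinates and feature set are invented, and only the vertical-bias computation is shown.

```python
import numpy as np

# Hypothetical vertical positions (mm) of three features relative to the
# nose-tip anchor: eyes above (positive), mouth below (negative).
indicated_y = np.array([28.0, 27.0, -30.0])  # where participants placed them
true_y      = np.array([35.0, 34.0, -38.0])  # measured from the photograph

# Signed vertical localisation error for each feature.
v_err = indicated_y - true_y

# Bias in the vertical *distance* from the anchor: a negative value means
# features were placed closer to the nose tip than they really are, i.e.
# the face representation is compressed in height.
vertical_bias = np.mean(np.abs(indicated_y) - np.abs(true_y))
```

With these toy numbers every feature is placed too close to the nose tip, so `vertical_bias` comes out negative, matching the reduced-face-height distortion the study reports.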
Investigating Macroexpressions and Microexpressions in Computer Graphics Animated Faces
Facial expression animation through action units transfer in latent space
Automatic animation synthesis has attracted much attention from the community. Because most existing methods handle a small number of discrete expressions rather than continuous ones, the integrity and realism of the synthesised facial expressions are often compromised. In addition, easy manipulation with simple inputs and unsupervised processing, although important for automatic facial expression animation applications, has received relatively little attention. To address these issues, we propose an unsupervised, continuous, automatic facial expression animation approach based on action unit (AU) transfer in the latent space of generative adversarial networks. The expression descriptor, represented as an AU vector, is transferred onto the input image without the need for labelled image pairs, expression annotations, or further network training. We also propose a new approach to quickly generate an input image's latent code and to cluster the boundaries of different AU attributes using those latent codes. Two latent-code operators, vector addition and continuous interpolation, are leveraged for facial expression animation, guided by the attribute boundaries in the latent space. Experiments have shown that the proposed approach is effective for facial expression translation and animation synthesis.
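The two latent-code operators the abstract names can be sketched as follows. This is a hedged illustration, not the paper's implementation: the function names, the 512-dimensional latent space, and the way the AU boundary direction is obtained are all assumptions.

```python
import numpy as np

def transfer_au(latent_code: np.ndarray,
                au_direction: np.ndarray,
                intensity: float) -> np.ndarray:
    """Vector addition: push a latent code along an AU attribute direction.

    `au_direction` stands in for the normal of the boundary separating, e.g.,
    'AU active' from 'AU inactive' codes in the GAN's latent space.
    """
    return latent_code + intensity * au_direction

def interpolate_expression(code_a: np.ndarray,
                           code_b: np.ndarray,
                           n_frames: int) -> np.ndarray:
    """Continuous interpolation: a smooth animation path between two codes."""
    t = np.linspace(0.0, 1.0, n_frames)[:, None]
    return (1.0 - t) * code_a[None, :] + t * code_b[None, :]

# Toy usage; the 512-D latent dimension is an assumption for illustration.
rng = np.random.default_rng(0)
z = rng.standard_normal(512)               # latent code of the input image
direction = rng.standard_normal(512)       # hypothetical AU boundary normal
direction /= np.linalg.norm(direction)

z_target = transfer_au(z, direction, intensity=3.0)
frames = interpolate_expression(z, z_target, n_frames=30)
```

Each row of `frames` would be decoded by the generator into one animation frame, so the interpolation yields a continuous transition rather than a jump between discrete expressions.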