EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment
Face performance capture and reenactment techniques use multiple cameras and sensors positioned at a distance from the face or mounted on heavy wearable devices, which limits their applications in mobile and outdoor environments. We present EgoFace, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera. Our lightweight setup allows operation in uncontrolled environments and lends itself to telepresence applications such as video-conferencing from dynamic environments. The input image is projected into a low-dimensional latent space of facial expression parameters. Through careful adversarial training of the parameter-space synthetic rendering, a videorealistic animation is produced. Our problem is challenging because the human visual system is sensitive to the smallest facial irregularities that could occur in the final results, and this sensitivity is even stronger for video. Our solution is trained in a pre-processing stage in a supervised manner, without manual annotations. EgoFace captures a wide variety of facial expressions, including mouth movements and asymmetrical expressions. It works under varying illumination, backgrounds, and movements, handles people of different ethnicities, and can operate in real time.
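The pipeline described above can be sketched schematically: an encoder projects an egocentric frame into a low-dimensional space of expression parameters, a renderer synthesizes a front-view frame from those parameters, and the renderer is trained against a discriminator loss. A minimal NumPy sketch follows; the dimensions, the linear encoder/renderer stand-ins, and the discriminator weights are all illustrative assumptions, not the paper's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): a flattened 64x64 RGB
# egocentric frame projected into a 16-D latent space of expression
# parameters.
IMG_DIM, LATENT_DIM = 64 * 64 * 3, 16

# Linear stand-ins for the learned encoder and synthetic renderer.
W_enc = rng.standard_normal((LATENT_DIM, IMG_DIM)) * 0.01
W_dec = rng.standard_normal((IMG_DIM, LATENT_DIM)) * 0.01


def encode(frame):
    """Project an egocentric frame to low-dimensional expression parameters."""
    return W_enc @ frame


def render(params):
    """Synthesize a front-view frame from expression parameters."""
    return np.tanh(W_dec @ params)


def discriminator_loss(real, fake, w_d):
    """Standard GAN discriminator loss with a linear discriminator stand-in."""
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))
    d_real = sigmoid(w_d @ real)   # discriminator score on a real frame
    d_fake = sigmoid(w_d @ fake)   # discriminator score on a rendered frame
    return -np.log(d_real + 1e-8) - np.log(1.0 - d_fake + 1e-8)


frame = rng.standard_normal(IMG_DIM)   # stand-in egocentric input frame
params = encode(frame)                 # 16-D expression code
synthetic = render(params)             # rendered front-view frame (stand-in)
w_d = rng.standard_normal(IMG_DIM) * 0.01
loss = discriminator_loss(frame, synthetic, w_d)
```

In the actual system the encoder and renderer would be deep networks and the loss would be minimized over video sequences, but the data flow (frame → parameters → rendered frame → adversarial loss) is the same.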
Serious Games Application for Memory Training Using Egocentric Images
Mild cognitive impairment is the early stage of several neurodegenerative diseases, such as Alzheimer's. In this work, we address the use of lifelogging as a tool to obtain pictures of a patient's daily life from an egocentric point of view. We propose to use them in combination with serious games as a way to provide a non-pharmacological treatment that improves quality of life. To do so, we introduce a novel computer vision technique that classifies rich and non-rich egocentric images and uses them in serious games. We present results on a dataset of 10,997 images recorded by 7 different users, achieving an F1-score of 79%. Our model is the first method for automatic egocentric image selection applicable to serious games.
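The reported metric for the binary rich vs. non-rich classification is the F1-score, the harmonic mean of precision and recall. A minimal sketch of how it is computed; the labels below are made-up examples, not the paper's data.

```python
def f1_score(y_true, y_pred):
    """F1 = 2 * precision * recall / (precision + recall) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Illustrative labels: 1 = rich egocentric image, 0 = non-rich.
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(round(f1_score(y_true, y_pred), 2))  # → 0.75
```

F1 is a sensible choice here because rich images are the positive class of interest and may be under-represented, making raw accuracy misleading.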