37,434 research outputs found

    EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

    No full text
    Face performance capture and reenactment techniques use multiple cameras and sensors, positioned at a distance from the face or mounted on heavy wearable devices. This limits their applications in mobile and outdoor environments. We present EgoFace, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera. Our lightweight setup allows operation in uncontrolled environments and lends itself to telepresence applications such as video-conferencing from dynamic environments. The input image is projected into a low-dimensional latent space of facial expression parameters. Through careful adversarial training of the parameter-space synthetic rendering, a videorealistic animation is produced. Our problem is challenging, as the human visual system is sensitive to the smallest facial irregularities that could occur in the final result, and this sensitivity is even stronger for video. Our solution is trained in a pre-processing stage, in a supervised manner and without manual annotations. EgoFace captures a wide variety of facial expressions, including mouth movements and asymmetrical expressions. It works under varying illumination, backgrounds, and movements, handles people of different ethnicities, and operates in real time.
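    As a rough illustration of the pipeline described above (not the authors' implementation), the sketch below pairs an encoder that maps an egocentric RGB frame to a low-dimensional vector of expression parameters with a conditional generator that would be trained adversarially to render front-view frames; all layer sizes and the 64-dimensional parameter space are assumptions.

```python
# Minimal sketch (assumptions, not EgoFace's code): an encoder maps an egocentric
# RGB frame to a low-dimensional expression parameter vector, and a conditional
# generator renders a front-view face from those parameters. Adversarial training
# would pair the generator with a discriminator on rendered vs. real frames.
import torch.nn as nn


class ExpressionEncoder(nn.Module):
    """Egocentric RGB frame -> low-dimensional expression parameters."""
    def __init__(self, n_params=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, n_params)

    def forward(self, frame):
        return self.head(self.features(frame).flatten(1))


class FaceGenerator(nn.Module):
    """Expression parameters -> 64x64 front-view face image."""
    def __init__(self, n_params=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, params):
        return self.net(params)
```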

    Pain Level Detection From Facial Image Captured by Smartphone

    Get PDF
    Accurate, regular symptom reporting from cancer patients is a major concern for medical service providers, as it informs clinical decisions such as medication adjustment. Since patients are limited in their ability to self-report symptoms, we investigated how a mobile phone application can help in this situation. We used facial images captured by smartphone to detect pain level accurately. The pain detection process relies on existing algorithms and infrastructure to keep the solution low-cost and user-friendly for cancer patients. To the best of our knowledge, this is the first mobile-based pain management study of its kind. The proposed algorithm classifies faces represented as weighted combinations of eigenfaces, using angular distance and support vector machines (SVMs) for classification. Longitudinal data were collected over six months in Bangladesh, and cross-sectional pain images were collected from three countries: Bangladesh, Nepal, and the United States. We found that a personalized model performs better for automatic pain assessment, and that the training set should contain varying levels of pain in each group: low, medium, and high.
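    A minimal sketch of the eigenface-plus-SVM pipeline described above, assuming flattened grayscale face crops and three pain levels; the paper also uses angular distance, which this sketch replaces with a plain linear SVM on the eigenface weights, and the component count and image size are illustrative.

```python
# Minimal sketch (assumed pipeline, not the paper's implementation): faces are
# projected onto eigenfaces via PCA and a linear SVM classifies the pain level.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC


def train_pain_classifier(face_vectors, pain_levels, n_eigenfaces=50):
    """face_vectors: (n_samples, n_pixels) flattened grayscale face crops.
    pain_levels: labels such as 0 = low, 1 = medium, 2 = high."""
    model = make_pipeline(
        PCA(n_components=n_eigenfaces, whiten=True),  # eigenface projection
        SVC(kernel="linear"),                         # pain-level classifier
    )
    model.fit(face_vectors, pain_levels)
    return model


# Example with random stand-in data (90 faces of 64x64 pixels):
X = np.random.rand(90, 64 * 64)
y = np.repeat([0, 1, 2], 30)
clf = train_pain_classifier(X, y)
print(clf.predict(X[:3]))
```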

    Facial Expression Recognition

    Get PDF

    Understanding face and eye visibility in front-facing cameras of smartphones used in the wild

    Get PDF
    Commodity mobile devices are now equipped with high-resolution front-facing cameras, allowing applications in biometrics (e.g., FaceID in the iPhone X), facial expression analysis, or gaze interaction. However, it is unknown how often users hold devices in a way that allows capturing their face or eyes, and how this impacts detection accuracy. We collected 25,726 in-the-wild photos taken from the front-facing cameras of smartphones, along with associated application usage logs. We found that the full face is visible about 29% of the time, and that in most cases the face is only partially visible. Furthermore, we identified an influence of users' current activity; for example, when watching videos, the eyes but not the entire face are visible 75% of the time in our dataset. We found that a state-of-the-art face detection algorithm performs poorly on photos taken from front-facing cameras. We discuss how these findings impact mobile applications that leverage face and eye detection, and derive practical implications to address the state of the art's limitations.
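    The poor-detector finding can be probed in spirit with a sketch like the one below, which simply counts how often a stock OpenCV Haar-cascade detector finds any face in a folder of photos; the detector choice and directory name are assumptions, not the study's actual pipeline.

```python
# Minimal sketch (an assumption, not the study's method): estimate how often a
# face is detected at all in a folder of front-facing-camera photos.
import glob

import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def face_visibility_rate(photo_dir):
    total, with_face = 0, 0
    for path in glob.glob(f"{photo_dir}/*.jpg"):
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        total += 1
        with_face += int(len(faces) > 0)
    return with_face / total if total else 0.0


# print(face_visibility_rate("front_camera_photos"))  # hypothetical directory
```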

    Distinguishing Posed and Spontaneous Smiles by Facial Dynamics

    Full text link
    The smile is one of the key elements in identifying emotions and the present state of mind of an individual. In this work, we propose a cluster of approaches to classify posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow, and histograms of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used for micro-expression smile amplification, along with three normalization procedures for distinguishing posed and spontaneous smiles. Although the deep CNN face model is trained on a large number of face images, HOG features outperform it on the overall smile classification task. Using EVM to amplify micro-expressions did not have a significant impact on classification accuracy, while normalizing the facial features improved it. Unlike many manual or semi-automatic methodologies, our approach aims to automatically classify all smiles as either 'spontaneous' or 'posed' using support vector machines (SVMs). Experimental results on the large UvA-NEMO smile database are promising compared to other relevant methods. Comment: 16 pages, 8 figures, ACCV 2016, Second Workshop on Spontaneous Facial Behavior Analysis
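    A minimal sketch of the HOG-plus-SVM branch described above, assuming aligned 64x64 grayscale face crops; the HOG parameters and stand-in data are illustrative and this is not the authors' code.

```python
# Minimal sketch (assumed setup): HOG descriptors from aligned face crops feed a
# linear SVM that labels smiles as posed (0) or spontaneous (1).
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC


def hog_features(face_crops):
    """face_crops: iterable of 64x64 grayscale images with values in [0, 1]."""
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in face_crops
    ])


# Random stand-in data: 20 posed and 20 spontaneous smile crops.
crops = np.random.rand(40, 64, 64)
labels = np.repeat([0, 1], 20)
svm = SVC(kernel="linear").fit(hog_features(crops), labels)
print(svm.predict(hog_features(crops[:2])))
```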

    Can people guess what happened to others from their reactions?

    Get PDF
    Are we able to infer what happened to a person from a brief sample of his/her behaviour? It has been proposed that mentalising skills can be used to retrodict as well as predict behaviour, that is, to determine what mental states of a target have already occurred. The current study aimed to develop a paradigm to explore these processes, which takes into account the intricacies of real-life situations in which reasoning about mental states, as embodied in behaviour, may be utilised. A novel task was devised which involved observing subtle and naturalistic reactions of others in order to determine the event that had previously taken place. Thirty-five participants viewed videos of real individuals reacting to the researcher behaving in one of four possible ways, and were asked to judge which of the four ‘scenarios’ they thought the individual was responding to. Their eye movements were recorded to establish the visual strategies used. Participants were able to deduce successfully from a small sample of behaviour which scenario had previously occurred. Surprisingly, looking at the eye region was associated with poorer identification of the scenarios, and eye movement strategy varied depending on the event experienced by the person in the video. This suggests people flexibly deploy their attention using a retrodictive mindreading process to infer events

    HeadOn: Real-time Reenactment of Human Portrait Videos

    Get PDF
    We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, facial expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, and creates photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show that it enables much greater flexibility in creating realistic reenacted output videos. Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'18
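    The view- and pose-dependent texturing step can be illustrated, under heavy simplification, by the sketch below: several captured textures are blended with weights that grow as each texture's capture view direction approaches the novel view direction. The function name, sharpness parameter, and data shapes are assumptions, not HeadOn's renderer.

```python
# Minimal sketch (an assumption, not HeadOn's renderer): view-dependent texture
# blending weights each captured texture by how closely its capture direction
# matches the requested novel view direction.
import numpy as np


def view_dependent_blend(textures, capture_dirs, novel_dir, sharpness=8.0):
    """textures: (k, H, W, 3) images; capture_dirs: (k, 3) unit view directions;
    novel_dir: (3,) unit direction of the novel view."""
    sims = np.clip(capture_dirs @ novel_dir, 0.0, None)  # cosine similarity per texture
    weights = sims ** sharpness                          # sharpen preference for close views
    weights /= weights.sum() + 1e-8                      # normalize blending weights
    return np.tensordot(weights, textures, axes=1)       # weighted texture blend


# Example with random stand-in data:
tex = np.random.rand(3, 32, 32, 3)
dirs = np.array([[0.0, 0.0, 1.0], [0.2, 0.0, 0.98], [-0.2, 0.0, 0.98]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(view_dependent_blend(tex, dirs, np.array([0.1, 0.0, 0.995])).shape)
```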