1,316 research outputs found

    EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

    Face performance capture and reenactment techniques use multiple cameras and sensors, positioned at a distance from the face or mounted on heavy wearable devices. This limits their applications in mobile and outdoor environments. We present EgoFace, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera. Our lightweight setup allows operation in uncontrolled environments and lends itself to telepresence applications such as video-conferencing from dynamic environments. The input image is projected into a low-dimensional latent space of facial expression parameters. Through careful adversarial training, the synthetic rendering driven by these parameters is translated into a videorealistic animation. Our problem is challenging because the human visual system is sensitive to the smallest face irregularities that could occur in the final results, and this sensitivity is even stronger for video results. Our solution is trained in a pre-processing stage, in a supervised manner and without manual annotations. EgoFace captures a wide variety of facial expressions, including mouth movements and asymmetrical expressions. It works under varying illumination, backgrounds and movements, handles people of different ethnicities, and operates in real time.
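
    As a rough illustration of the pipeline described above, the following minimal PyTorch sketch shows an encoder that projects an egocentric RGB frame into a low-dimensional vector of expression parameters, together with a discriminator that could supply the adversarial signal when translating the parameter-driven rendering into a videorealistic frame. This is not the authors' implementation; the layer sizes, the 64-dimensional parameter space, and the 256x256 input resolution are illustrative assumptions.

```python
# Minimal sketch (not the EgoFace code): encoder to expression parameters plus
# a frame discriminator for adversarial training. Sizes are assumptions.
import torch
import torch.nn as nn

class ExpressionEncoder(nn.Module):
    """Projects a 256x256 egocentric RGB frame to expression parameters."""
    def __init__(self, n_params: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 128x128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 64x64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 32x32
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, n_params)

    def forward(self, frame):
        return self.head(self.features(frame).flatten(1))

class FrameDiscriminator(nn.Module):
    """Scores how realistic a (re-)rendered face frame looks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, frame):
        return self.net(frame)

# Shapes only: one egocentric frame in, 64 expression parameters out.
params = ExpressionEncoder()(torch.randn(1, 3, 256, 256))
print(params.shape)  # torch.Size([1, 64])
```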

    FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality

    We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. In addition to these face reconstruction components, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances. For instance, we can alter facial expressions, change gaze directions, or remove the VR goggles in realistic re-renderings. In a live setup with a source and a target actor, we apply these newly introduced algorithmic components. We assume that the source actor is wearing a VR device, and we capture their facial expressions and eye movements in real time. For the target video, we apply a similar tracking process; however, we use the source input to drive the animation of the target video, thus enabling gaze-aware facial reenactment. To render the modified target video on a stereo display, we augment our capture and reconstruction process with stereo data. In the end, FaceVR produces compelling results for a variety of applications, such as gaze-aware facial reenactment, reenactment in virtual reality, removal of VR goggles, and re-targeting of somebody's gaze direction in a video conferencing call.
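
    The reenactment step described above reduces to a parameter transfer: per-frame expression and gaze estimates from the source actor replace those of the tracked target, while the target's identity and head pose are kept. The sketch below illustrates only that idea; the parameter names, dimensions, and the FaceState container are assumptions, not the FaceVR data structures.

```python
# Minimal sketch (not the FaceVR implementation): drive a target frame with the
# source actor's expression and gaze, keeping the target's own identity/pose.
from dataclasses import dataclass, replace
import numpy as np

@dataclass
class FaceState:
    identity: np.ndarray    # per-person shape/albedo coefficients (fixed)
    pose: np.ndarray        # rigid head pose of this frame
    expression: np.ndarray  # blendshape-style expression coefficients
    gaze: np.ndarray        # eye gaze direction(s)

def reenact(source: FaceState, target: FaceState) -> FaceState:
    """Transfer source expression and gaze onto the target frame."""
    return replace(target, expression=source.expression.copy(),
                   gaze=source.gaze.copy())

# Toy usage with random parameters standing in for tracked values.
rng = np.random.default_rng(0)
src = FaceState(rng.normal(size=80), rng.normal(size=6),
                rng.normal(size=76), rng.normal(size=2))
tgt = FaceState(rng.normal(size=80), rng.normal(size=6),
                rng.normal(size=76), rng.normal(size=2))
driven = reenact(src, tgt)
assert np.allclose(driven.expression, src.expression)  # expression follows source
assert np.allclose(driven.identity, tgt.identity)      # identity stays with target
```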

    The Weird Giggle: Attending to Affect in Virtual Reality

    Virtual Reality (VR) is once again causing a stir, with conflicting assertions over its potential to usher in a glorious posthuman phase of freedom or to immerse bodies wearing headsets in pure and meaningless violence. This paper integrates philosophies of affect and affective experiences in VR by means of a practical application of phenomenological reflection. The combination of phenomenology and affect is valuable for articulating the lived experience of something unprecedented or disorienting, and for expanding the language of critique. The practical affective experiences of VR are drawn from one particular VR artwork: MAN A VR by Gibson / Martelli, which uses captured data from dancers performing the dance improvisation form Skinner Releasing Technique (SRT) to animate the figures in the VR world. SRT is also the movement practice facilitating philosophical reflections on the experience of being in the VR world. In this paper, passages describing moments of experience in MAN A VR, extracted directly from research journals, act as affective counterpoints to the theoretical discussion. The result is an expansion of the somatic register of VR, at the same time as a grounding of concepts from affect theory within contemporary digital culture.

    xR-EgoPose: Egocentric 3D Human Pose from an HMD Camera

    We present a new solution to egocentric 3D body pose estimation from monocular images captured by a downward-looking fish-eye camera installed on the rim of a head-mounted virtual reality device. This unusual viewpoint, just 2 cm away from the user's face, leads to images with a unique visual appearance, characterized by severe self-occlusions and strong perspective distortions that result in a drastic difference in resolution between the lower and upper body. Our contribution is two-fold. Firstly, we propose a new encoder-decoder architecture with a novel dual-branch decoder designed specifically to account for the varying uncertainty in the 2D joint locations. Our quantitative evaluation, both on synthetic and real-world datasets, shows that our strategy leads to substantial improvements in accuracy over state-of-the-art egocentric pose estimation approaches. Our second contribution is a new large-scale photorealistic synthetic dataset - xR-EgoPose - offering 383K frames of high-quality renderings of people with a diversity of skin tones, body shapes and clothing, in a variety of backgrounds and lighting conditions, performing a range of actions. Our experiments show that the high variability in our new synthetic training corpus leads to good generalization to real-world footage and to state-of-the-art results on real-world datasets with ground truth. Moreover, an evaluation on the Human3.6M benchmark shows that the performance of our method is on par with top-performing approaches on the more classic problem of 3D human pose from a third-person viewpoint. Comment: ICCV 2019.
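
    A minimal sketch of how a dual-branch decoder of this kind could look, assuming (as the abstract suggests) that a latent code is decoded both into 3D joint positions and back into reconstructed 2D heatmaps, so that per-joint location uncertainty carried by the heatmaps is preserved. Layer widths, the 16-joint skeleton, and the 48x48 heatmap resolution are placeholders, not the authors' configuration.

```python
# Minimal sketch (assumption-laden, not the xR-EgoPose code): encoder from 2D
# heatmaps to a latent code, dual-branch decoder to 3D pose and heatmaps.
import torch
import torch.nn as nn

class DualBranchPoseDecoder(nn.Module):
    def __init__(self, n_joints: int = 16, heatmap_res: int = 48, latent: int = 128):
        super().__init__()
        hm_dim = n_joints * heatmap_res * heatmap_res
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(hm_dim, 512), nn.ReLU(), nn.Linear(512, latent))
        self.pose_branch = nn.Sequential(      # latent -> 3D joint positions
            nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, n_joints * 3))
        self.heatmap_branch = nn.Sequential(   # latent -> reconstructed heatmaps
            nn.Linear(latent, 512), nn.ReLU(), nn.Linear(512, hm_dim))
        self.n_joints, self.res = n_joints, heatmap_res

    def forward(self, heatmaps):
        z = self.encoder(heatmaps)
        pose3d = self.pose_branch(z).view(-1, self.n_joints, 3)
        hm_rec = self.heatmap_branch(z).view(-1, self.n_joints, self.res, self.res)
        return pose3d, hm_rec

# Shapes only: a batch of 2D joint heatmaps in, 3D pose and reconstruction out.
model = DualBranchPoseDecoder()
pose3d, hm_rec = model(torch.randn(2, 16, 48, 48))
print(pose3d.shape, hm_rec.shape)  # (2, 16, 3) (2, 16, 48, 48)
```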

    AR-MoCap: Using augmented reality to support motion capture acting

    Technology is disrupting the way films involving visual effects are produced. Chroma-key, LED walls, motion capture (mocap), 3D visual storyboards, and simulcams are only a few examples of the many changes introduced in the cinema industry over the last few years. Although these technologies are becoming commonplace, they present new, unexplored challenges to the actors. In particular, when mocap is used to record the actors' movements with the aim of animating digital character models, an increased workload can easily be expected for the people on stage. In fact, actors have to rely largely on their imagination to understand what the digitally created characters will actually be seeing and feeling. This paper focuses on this specific domain and aims to demonstrate how Augmented Reality (AR) can help actors when shooting mocap scenes. To this purpose, we devised a system named AR-MoCap that can be used by actors for rehearsing the scene in AR on the real set before actually shooting it. Through an Optical See-Through Head-Mounted Display (OST-HMD), an actor can see, e.g., the digital characters of other actors wearing mocap suits overlaid in real time onto their bodies. Experimental results showed that, compared to the traditional approach based on physical props and other cues, the devised system can help the actors to position themselves and direct their gaze while shooting the scene, while also improving spatial and social presence, as well as perceived effectiveness.
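
    One small but essential piece of such an overlay is per-frame registration: mocap joint positions streamed in the stage (mocap) coordinate frame must be re-expressed in the headset frame before the digital character can be drawn on top of the co-actor. The snippet below sketches only that coordinate transform in NumPy; the 4x4 calibration matrices and the two-joint example skeleton are hypothetical placeholders, not part of AR-MoCap.

```python
# Illustrative sketch (not the AR-MoCap system): re-express streamed mocap
# joints from the stage frame into the OST-HMD frame for overlay rendering.
import numpy as np

def to_homogeneous(points_xyz: np.ndarray) -> np.ndarray:
    """Append a 1 to each 3D point: (N, 3) -> (N, 4)."""
    return np.concatenate([points_xyz, np.ones((len(points_xyz), 1))], axis=1)

def mocap_to_hmd(joints_stage: np.ndarray,
                 stage_to_world: np.ndarray,
                 world_to_hmd: np.ndarray) -> np.ndarray:
    """Map mocap joint positions (N, 3) from the stage frame into the HMD frame."""
    pts = to_homogeneous(joints_stage)                 # (N, 4)
    pts_hmd = (world_to_hmd @ stage_to_world @ pts.T).T
    return pts_hmd[:, :3]

# Toy frame: identity stage calibration, headset one metre behind the origin.
stage_to_world = np.eye(4)
world_to_hmd = np.eye(4)
world_to_hmd[2, 3] = -1.0
joints = np.array([[0.0, 1.7, 0.0],    # head of the co-actor
                   [0.0, 1.0, 0.0]])   # pelvis
print(mocap_to_hmd(joints, stage_to_world, world_to_hmd))
```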