40 research outputs found

    EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

    No full text
    Face performance capture and reenactment techniques use multiple cameras and sensors, positioned at a distance from the face or mounted on heavy wearable devices. This limits their applications in mobile and outdoor environments. We present EgoFace, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera. Our lightweight setup allows operation in uncontrolled environments and lends itself to telepresence applications such as video-conferencing from dynamic environments. The input image is projected into a low-dimensional latent space of facial expression parameters. Through careful adversarial training, a videorealistic animation is synthesized from this parameter-space rendering. Our problem is challenging, as the human visual system is sensitive to the smallest face irregularities that could occur in the final results, and this sensitivity is even stronger for video results. Our solution is trained in a pre-processing stage in a supervised manner, without manual annotations. EgoFace captures a wide variety of facial expressions, including mouth movements and asymmetrical expressions. It works under varying illumination, backgrounds, and movements, handles people of different ethnicities, and operates in real time.
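    The abstract above describes a two-stage idea: project the egocentric frame into a low-dimensional space of expression parameters, then synthesize a front-view frame conditioned on those parameters. The sketch below is a minimal, assumption-laden illustration of that pipeline structure (module names, image resolution, and parameter count are invented here; the discriminator and adversarial training loop are omitted). It is not the authors' implementation.

```python
# Minimal sketch (not the EgoFace code): an encoder maps an egocentric RGB
# frame to a low-dimensional vector of expression parameters, and a generator
# conditioned on those parameters synthesizes a front-view frame.
# All module names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ExpressionEncoder(nn.Module):
    """Projects a 128x128 egocentric frame to K expression parameters."""
    def __init__(self, k_params=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64x64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, k_params),
        )

    def forward(self, ego_frame):
        return self.net(ego_frame)

class FrontalGenerator(nn.Module):
    """Decodes expression parameters into a front-view 128x128 RGB image."""
    def __init__(self, k_params=64):
        super().__init__()
        self.fc = nn.Linear(k_params, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32x32
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64x64
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 128x128
        )

    def forward(self, params):
        x = self.fc(params).view(-1, 128, 16, 16)
        return self.net(x)

if __name__ == "__main__":
    ego = torch.rand(1, 3, 128, 128)      # dummy egocentric frame
    params = ExpressionEncoder()(ego)     # low-dimensional expression code
    frontal = FrontalGenerator()(params)  # front-view frame (untrained here)
    print(params.shape, frontal.shape)    # [1, 64] and [1, 3, 128, 128]
```

    In the setting the abstract describes, a generator like this would be trained adversarially against real front-view footage, since viewers notice even small face irregularities, especially in video.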

    Sketch-based skeleton-driven 2D animation and motion capture.

    Get PDF
    This research is concerned with the development of a set of novel sketch-based skeleton-driven 2D animation techniques, which allow the user to produce realistic 2D character animation efficiently. The work consists of three parts: sketch-based skeleton-driven 2D animation production, 2D motion capture, and a cartoon animation filter. For 2D animation production, the traditional approach is for experienced animators to draw the key-frames manually, a laborious and time-consuming process. With the proposed techniques, the user only inputs one image of a character and sketches a skeleton for each subsequent key-frame. The system then deforms the character according to the sketches and produces the animation automatically. To perform 2D shape deformation, a variable-length needle model is developed, which divides the deformation into two stages: skeleton-driven deformation and nonlinear deformation in joint areas. This approach preserves the local geometric features and global area during animation. Compared with existing 2D shape deformation algorithms, it reduces the computational complexity while still yielding plausible deformation results. To capture the motion of a character from existing 2D image sequences, a 2D motion capture technique is presented. Since this technique is skeleton-driven, the motion of a 2D character is captured by tracking the joint positions. Using both geometric and visual features, this problem can be solved by optimization, which handles self-occlusion and feature disappearance. After tracking, the motion data are retargeted to a new character using the deformation algorithm proposed in the first part. This facilitates the reuse of the characteristics of motion contained in existing moving images, making the process of cartoon generation easy for artists and novices alike. Subsequent to the 2D animation production and motion capture, a "Cartoon Animation Filter" is implemented and applied. Following the animation principles, this filter processes two types of cartoon input: a single frame of a cartoon character and motion capture data from an image sequence. It adds anticipation and follow-through to the motion, with related squash and stretch effects.
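    The third part, the cartoon animation filter, lends itself to a compact illustration. A well-known formulation of such a filter exaggerates a motion signal by subtracting a smoothed, scaled second derivative, which produces a dip (anticipation) before a movement and an overshoot (follow-through) after it. The sketch below follows that general idea under stated assumptions (signal layout, filter width, and exaggeration amount are illustrative); it is not the thesis' implementation.

```python
# Minimal sketch (illustrative, not the thesis code): subtract a smoothed,
# scaled second derivative from a 1D motion signal. The lobe before a motion
# reads as anticipation, the lobe after it as follow-through; applied per
# vertex, the same filter also yields squash-and-stretch-like distortion.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def cartoon_filter(signal, sigma=3.0, amount=1.5):
    """Exaggerate a joint trajectory or vertex track (shape: [frames])."""
    smoothed_accel = gaussian_filter1d(signal, sigma=sigma, order=2)
    return signal - amount * smoothed_accel

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 120)
    step = np.clip((t - 0.4) / 0.2, 0.0, 1.0)    # a joint moving from 0 to 1
    exaggerated = cartoon_filter(step)
    print(exaggerated.min(), exaggerated.max())  # dips below 0, overshoots 1
```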

    Visually Plausible Human-Object Interaction Capture from Wearable Sensors

    Get PDF
    In everyday life, humans naturally modify the surrounding environment through interactions, e.g., moving a chair to sit on it. To reproduce such interactions in virtual spaces (e.g., the metaverse), we need to be able to capture and model them, including changes in the scene geometry, ideally from ego-centric input alone (head camera and body-worn inertial sensors). This is an extremely hard problem, especially since the object/scene might not be visible from the head camera (e.g., a human not looking at a chair while sitting down, or not looking at the door handle while opening a door). In this paper, we present HOPS, the first method to capture interactions such as dragging objects and opening doors from ego-centric data alone. Central to our method is reasoning about human-object interactions, allowing us to track objects even when they are not visible from the head camera. HOPS localizes and registers both the human and the dynamic object in a pre-scanned static scene. HOPS is an important first step towards advanced AR/VR applications based on immersive virtual universes, and can provide human-centric training data to teach machines to interact with their surroundings. The supplementary video, data, and code will be available on our project page at http://virtualhumans.mpi-inf.mpg.de/hops/
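    A minimal illustration of the kind of interaction reasoning the abstract mentions, under strong simplifying assumptions: while a hand-object contact is active, the object can be kept rigidly attached to the tracked hand, so its pose stays known even when the head camera does not see it. The function names and the rigid-attachment rule below are hypothetical simplifications, not the HOPS method.

```python
# Minimal sketch (an assumption-laden illustration, not the HOPS pipeline):
# while a hand-object contact is active, keep the object rigidly attached to
# the tracked hand. Poses are 4x4 homogeneous matrices in world coordinates.
import numpy as np

def attach_offset(hand_T, object_T):
    """Relative transform from hand to object, computed once at grab time."""
    return np.linalg.inv(hand_T) @ object_T

def track_object(hand_T_sequence, grab_offset):
    """Propagate the object pose from the hand pose while the contact holds."""
    return [hand_T @ grab_offset for hand_T in hand_T_sequence]

if __name__ == "__main__":
    hand0 = np.eye(4); hand0[:3, 3] = [0.0, 1.0, 0.5]  # hand pose at grab time
    obj0 = np.eye(4);  obj0[:3, 3] = [0.1, 1.0, 0.5]   # object pose at grab time
    offset = attach_offset(hand0, obj0)
    hand1 = np.eye(4); hand1[:3, 3] = [0.5, 1.2, 0.5]  # hand pose after dragging
    obj1 = track_object([hand1], offset)[0]
    print(obj1[:3, 3])                                 # object followed the drag
```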

    Sketch2Pose: estimating a 3D character pose from a bitmap sketch

    Full text link
    Artists frequently capture character poses via raster sketches, then use these drawings as a reference while posing a 3D character in specialized 3D software, a time-consuming process requiring specialized 3D training and mental effort. We tackle this challenge by proposing the first system for automatically inferring a 3D character pose from a single bitmap sketch, producing poses consistent with viewer expectations. Algorithmically interpreting bitmap sketches is challenging, as they contain significantly distorted proportions and foreshortening. We address this by predicting three key elements of a drawing that are necessary to disambiguate the drawn poses: 2D bone tangents, self-contacts, and bone foreshortening. These elements are then leveraged in an optimization that infers the 3D character pose consistent with the artist's intent. Our optimization balances cues derived from artistic literature and perception research to compensate for distorted character proportions. We demonstrate a gallery of results on sketches of numerous styles, and validate our method via numerical evaluations, user studies, and comparisons to manually posed characters and previous work.
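    To make the role of the predicted bone foreshortening concrete, the sketch below lifts a 2D kinematic chain to 3D under simplifying assumptions: given known 3D bone lengths and a per-bone depth sign, the foreshortening of each drawn bone fixes the magnitude of its depth change. This is only an illustration of that single geometric cue; the paper's full optimization also balances bone tangents, self-contacts, and perception-derived terms.

```python
# Minimal sketch (illustrative assumptions, not the Sketch2Pose optimization):
# for each bone, |dz| = sqrt(L^2 - l2d^2) relates the known bone length L to
# the drawn 2D length l2d; a per-bone sign (towards/away from the viewer)
# then lifts the chain joint by joint.
import numpy as np

def lift_chain(joints_2d, bone_lengths, depth_signs):
    """Lift a chain of 2D joints (shape [N, 2]) to 3D points (shape [N, 3])."""
    joints_3d = [np.array([*joints_2d[0], 0.0])]
    for i, (length, sign) in enumerate(zip(bone_lengths, depth_signs)):
        d2d = joints_2d[i + 1] - joints_2d[i]
        l2d = np.linalg.norm(d2d)
        dz = sign * np.sqrt(max(length ** 2 - l2d ** 2, 0.0))  # foreshortening -> depth
        joints_3d.append(joints_3d[-1] + np.array([*d2d, dz]))
    return np.array(joints_3d)

if __name__ == "__main__":
    sketch_joints = np.array([[0.0, 0.0], [0.3, 0.0], [0.5, 0.2]])  # drawn hip, knee, ankle
    lengths = [0.5, 0.5]                                            # known bone lengths
    print(lift_chain(sketch_joints, lengths, depth_signs=[+1, -1]))
```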

    Egocentric Videoconferencing

    Get PDF

    A Survey on Video-based Graphics and Video Visualization

    Get PDF