25 research outputs found

    MoSculp: Interactive Visualization of Shape and Time

    Full text link
    We present a system that allows users to visualize complex human motion via 3D motion sculptures, a representation that conveys the 3D structure swept by a human body as it moves through space. Given an input video, our system computes the motion sculpture and provides a user interface for rendering it in different styles, including options to insert the sculpture back into the original video, render it in a synthetic scene, or physically print it. To provide this end-to-end workflow, we introduce an algorithm that estimates the human's 3D geometry over time from a set of 2D images, and we develop a 3D-aware image-based rendering approach that embeds the sculpture back into the scene. By automating the process, our system takes motion sculpture creation out of the realm of professional artists and makes it applicable to a wide range of existing video material. By providing viewers with 3D information, motion sculptures reveal space-time motion information that is difficult to perceive with the naked eye, and they allow viewers to interpret how different parts of the object interact over time. We validate the effectiveness of this approach with user studies, finding that our motion sculpture visualizations are significantly more informative about motion than existing stroboscopic and space-time visualization methods. Comment: UIST 2018. Project page: http://mosculp.csail.mit.edu
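
    The listing includes no code; as a rough illustration of the sweep-accumulation idea behind motion sculptures, the Python sketch below unions per-frame body meshes into one swept shape. The helper estimate_human_mesh is a hypothetical stand-in for the paper's per-frame 3D geometry estimation; it is an assumption, not the authors' implementation.

    import trimesh

    def build_motion_sculpture(frames, estimate_human_mesh, stride=3):
        # estimate_human_mesh is a hypothetical callable returning a
        # trimesh.Trimesh for the person in one video frame; the real
        # system estimates this geometry from the 2D images over time.
        meshes = [estimate_human_mesh(f) for f in frames[::stride]]
        # The boolean union of the sampled body meshes approximates the
        # volume swept by the body as it moves through space (requires a
        # boolean backend such as Blender or manifold3d to be installed).
        return trimesh.boolean.union(meshes)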

    Depth-aware neural style transfer for videos

    Get PDF
    Temporal consistency and content preservation are the prominent challenges in artistic video style transfer. To address these challenges, we present a technique that utilizes depth data, and we demonstrate it on real-world videos from the web as well as on a standard video dataset of three-dimensional computer-generated content. Our algorithm employs an image-transformation network combined with a depth encoder network for stylizing video sequences. For improved global structure preservation and temporal stability, the depth encoder network encodes ground-truth depth information, which is fused into the stylization network. To further enforce temporal coherence, we employ ConvLSTM layers in the encoder and a loss function based on depth information calculated for the output frames. We show that our approach is capable of producing stylized videos with improved temporal consistency compared to state-of-the-art methods while also successfully transferring the artistic style of a target painting.
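
    A minimal PyTorch sketch of the described depth fusion follows. The layer sizes are illustrative assumptions, not the paper's architecture, and the ConvLSTM layers and depth-based loss are omitted for brevity; only the idea of concatenating depth-encoder features into the stylization network is shown.

    import torch
    import torch.nn as nn

    class DepthAwareStylizer(nn.Module):
        # Toy image-transformation network fused with a depth encoder.
        def __init__(self):
            super().__init__()
            # Encodes the RGB frame.
            self.image_enc = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
            # Encodes the (ground-truth) depth map.
            self.depth_enc = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
            # Decodes the fused features back to a stylized frame.
            self.decoder = nn.Sequential(
                nn.Conv2d(64 + 32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1))

        def forward(self, frame, depth):
            fused = torch.cat(
                [self.image_enc(frame), self.depth_enc(depth)], dim=1)
            return self.decoder(fused)

    stylizer = DepthAwareStylizer()
    out = stylizer(torch.randn(1, 3, 256, 256), torch.randn(1, 1, 256, 256))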

    HairBrush for Immersive Data-Driven Hair Modeling

    Get PDF
    While hair is an essential component of virtual humans, it is also one of the most challenging digital assets to create. Existing automatic techniques lack the generality and flexibility to create rich hair variations, while manual authoring interfaces often require considerable artistic skill and effort, especially for intricate 3D hair structures that can be difficult to navigate. We propose an interactive hair modeling system that can help create complex hairstyles in minutes or hours that would otherwise take much longer with existing tools. Modelers, including novice users, can focus on the overall hairstyle and local hair deformations, as our system intelligently suggests the desired hair parts. Our method combines the flexibility of manual authoring with the convenience of data-driven automation. Since hair contains intricate 3D structures such as buns, knots, and strands, it is inherently challenging to create using traditional 2D interfaces. Our system provides a new 3D hair authoring interface for immersive interaction in virtual reality (VR). Users can draw high-level guide strips, from which our system predicts the most plausible hairstyles via a deep neural network trained on a professionally curated dataset. Each hairstyle in our dataset is composed of multiple variations, serving as blend-shapes to fit the user drawings via global blending and local deformation. The fitted hair models are visualized as interactive suggestions that the user can select, modify, or ignore. We conducted a user study confirming that our system can significantly reduce manual labor while improving output quality for modeling a variety of head and facial hairstyles that are challenging to create via existing techniques.
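
    As a rough numerical analogue of the global-blending step, the numpy sketch below solves for blend weights over hairstyle variations by least squares. The variation matrix, the flattened target strips, and the convexity heuristic are all assumptions for illustration, not the paper's actual solver.

    import numpy as np

    def fit_blend_weights(variations, target):
        # variations: (k, n) array, each row a flattened hairstyle
        # variation (e.g., guide-strip control points) acting as a
        # blend-shape; target: (n,) flattened user-drawn guide strips.
        w, *_ = np.linalg.lstsq(variations.T, target, rcond=None)
        # Clip and normalize so the weights form a convex combination
        # (a simple heuristic assumed here, not the paper's method).
        w = np.clip(w, 0.0, None)
        return w / max(w.sum(), 1e-8)

    variations = np.random.rand(4, 300)   # 4 variations of one hairstyle
    target = np.random.rand(300)          # user drawing, flattened
    blended = fit_blend_weights(variations, target) @ variations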

    State of the Art on Stylized Fabrication

    Get PDF
    Digital fabrication devices are powerful tools for creating tangible reproductions of 3D digital models. Most available printing technologies aim at producing an accurate copy of a three-dimensional shape. However, fabrication technologies can also be used to create a stylistic representation of a digital shape. We refer to this class of methods as ‘stylized fabrication methods’. These methods abstract geometric and physical features of a given shape to create an unconventional representation, to produce an optical illusion, or to devise a particular interaction with the fabricated model. In this state-of-the-art report, we classify and overview this broad and emerging class of approaches and also propose possible directions for future research.

    Single-view hair modeling using a hairstyle database

    Full text link

A Mobile, Multi-Camera Setup for 3D Full-Body Imaging in Combination with Post-Mortem Computed Tomography Procedures

    Full text link
    Three-dimensional (3D) models of deceased and injured people, in combination with 3D scans of injury-causing objects, can assist forensic investigations in reconstructing event scenes. Medical imaging techniques such as post-mortem computed tomography (PMCT) and post-mortem magnetic resonance imaging (PMMR) have been successfully applied to forensic investigations and can add beneficial value to standard autopsy examinations. These imaging modalities can be helpful for 3D reconstructions, especially when internal findings, such as bone fractures, organ damage, and internal bleeding, are relevant for the investigation. However, none of these techniques can adequately visualize pattern injuries, such as boot prints and bite marks, or any type of blunt force trauma that forms distinct discolorations on the body’s surface. This is why 3D surface imaging techniques have been introduced to the forensic community. Unfortunately, many commercially available optical scanning systems are cost-intensive and time-consuming, and can only be applied before or after a CT scan has been performed. In this article, we present a mobile, multi-camera rig based on close-range photogrammetry that is inexpensive, fast in acquisition time, and can be combined with automated CT scanning protocols. The multi-camera setup comprises seven digital single-lens reflex (DSLR) cameras mounted on a mobile frame. Each camera is equipped with a remote control that can trigger the shutter release of all cameras simultaneously. In combination with a medical CT scanner, image acquisition with the multi-camera setup can be included in an automated CT scanning procedure. In our preliminary study, textured 3D models of one side of the body were created in less than 15 minutes: the photo acquisition combined with the modified CT scanning protocols took 3:34 minutes, and the subsequent computation of a textured 3D model based on a low-resolution mesh took 10:55 minutes. The mobile, multi-camera setup can also be used manually in combination with examination couches, lifting carts, and autopsy tables. Finally, the system is not limited to post-mortem investigations; it can also be applied to living people and may be used in clinical settings.
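
    The article uses a hardware remote to fire all shutters; a software analogue, sketched below in Python around the gphoto2 command-line tool, triggers several USB-tethered cameras as nearly simultaneously as threads allow. The port identifiers are placeholder assumptions; real values come from running `gphoto2 --auto-detect` on the capture machine.

    import subprocess
    import threading

    # Example USB port identifiers (assumed); list the real ones with
    # `gphoto2 --auto-detect`.
    PORTS = ["usb:001,004", "usb:001,005", "usb:001,006"]

    def trigger(port):
        # `gphoto2 --capture-image` releases the shutter on one camera.
        subprocess.run(["gphoto2", "--port", port, "--capture-image"],
                       check=True)

    threads = [threading.Thread(target=trigger, args=(p,)) for p in PORTS]
    for t in threads:
        t.start()
    for t in threads:
        t.join()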

    State of the art on stylized fabrication

    Get PDF
    Digital fabrication devices are powerful tools for creating tangible reproductions of 3D digital models. Most available printing technologies aim at producing an accurate copy of a three-dimensional shape. However, fabrication technologies can also be used to create a stylistic representation of a digital shape. We refer to this class of methods as stylized fabrication methods. These methods abstract geometric and physical features of a given shape to create an unconventional representation, to produce an optical illusion, or to devise a particular interaction with the fabricated model. In this course, we classify and overview this broad and emerging class of approaches and also propose possible directions for future research.

    3D Morphable Face Models – Past, Present and Future

    No full text
    In this paper, we provide a detailed survey of 3D Morphable Face Models over the 20 years since they were first proposed. The challenges in building and applying these models, namely capture, modeling, image formation, and image analysis, are still active research topics, and we review the state-of-the-art in each of these areas. We also look ahead, identifying unsolved challenges, proposing directions for future research, and highlighting the broad range of current and future applications.
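
    The core construction these models share is a linear (PCA) shape space: a face is the mean shape plus a weighted sum of principal components, S(alpha) = mean + basis @ (sigma * alpha). The numpy sketch below illustrates sampling from such a space; the random basis is a stand-in for one learned from real face scans.

    import numpy as np

    rng = np.random.default_rng(0)
    n_vertices, n_components = 5000, 50

    # Mean shape and PCA basis (stacked x, y, z per vertex); random
    # placeholders here, learned from registered 3D scans in practice.
    mean_shape = rng.standard_normal(3 * n_vertices)
    basis = rng.standard_normal((3 * n_vertices, n_components))
    sigma = np.linspace(1.0, 0.1, n_components)  # per-mode std. devs.

    # A new face is the mean plus a weighted sum of principal modes.
    alpha = rng.standard_normal(n_components)
    face = mean_shape + basis @ (sigma * alpha)
    vertices = face.reshape(n_vertices, 3)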