
    Data-Driven Shape Analysis and Processing

    Data-driven methods play an increasingly important role in discovering geometric, structural, and semantic relationships between 3D shapes in collections, and in applying this analysis to support intelligent modeling, editing, and visualization of geometric data. In contrast to traditional approaches, a key feature of data-driven methods is that they aggregate information from a collection of shapes to improve the analysis and processing of individual shapes. In addition, they can learn models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. We provide an overview of the main concepts and components of these techniques, and discuss their application to shape classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis, by reviewing the literature and relating existing works through both qualitative and numerical comparisons. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing. Comment: 10 pages, 19 figures
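    As a purely illustrative sketch of the collection-level reasoning the survey contrasts with rule-based approaches (none of the names or descriptors below come from the paper; they are hypothetical), the simplest form of aggregating a shape collection to label an individual shape is nearest-neighbour retrieval over precomputed per-shape feature descriptors:

```python
# Minimal sketch (not from the survey): label a query shape by finding its
# nearest neighbour in a collection of hypothetical shape descriptors.
import numpy as np

def classify_by_collection(query_desc, collection_descs, collection_labels):
    """Assign the label of the closest shape in the collection.

    query_desc       : (d,) descriptor of the query shape (e.g. a shape histogram)
    collection_descs : (n, d) descriptors of the shapes in the collection
    collection_labels: length-n list of semantic labels (e.g. 'chair', 'table')
    """
    dists = np.linalg.norm(collection_descs - query_desc, axis=1)
    return collection_labels[int(np.argmin(dists))]

# Toy usage with made-up 3-dimensional descriptors
descs = np.array([[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.85, 0.15, 0.0]])
labels = ["chair", "table", "chair"]
print(classify_by_collection(np.array([0.88, 0.12, 0.0]), descs, labels))  # -> 'chair'
```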

    Single View Modeling and View Synthesis

    This thesis develops new algorithms to produce 3D content from a single camera. Today, amateurs can use hand-held camcorders to capture and display the 3D world in 2D, using mature technologies. However, there is a strong desire to record and re-explore the 3D world in 3D. To achieve this goal, current approaches usually rely on a camera array, which suffers from tedious setup and calibration processes as well as a lack of portability, limiting its application to lab experiments. In this thesis, I produce 3D content using a single camera, making the process as simple as shooting pictures. It requires a new front-end capture device rather than a regular camcorder, as well as more sophisticated algorithms. First, in order to capture highly detailed object surfaces, I designed and developed a depth camera based on a novel technique called light fall-off stereo (LFS). The LFS depth camera outputs color+depth image sequences at 30 fps, which is necessary for capturing dynamic scenes. Based on the output color+depth images, I developed a new approach that builds 3D models of dynamic and deformable objects. While the camera can only capture part of an object at any instant, partial surfaces are assembled into a complete 3D model by a novel warping algorithm. Inspired by the success of single-view 3D modeling, I extended my exploration to 2D-to-3D video conversion that does not use a depth camera. I developed a semi-automatic system that converts monocular videos into stereoscopic videos via view synthesis. It combines motion analysis with user interaction, aiming to shift as much of the depth-inference work as possible from the user to the computer. I developed two new methods that analyze optical flow to provide additional qualitative depth constraints. The automatically extracted depth information is presented in the user interface to assist with user labeling. In summary, this thesis develops new algorithms to produce 3D content from a single camera: when depth maps are available, it builds high-fidelity 3D models of dynamic and deformable objects; otherwise, it turns monocular video clips into stereoscopic video.
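    The abstract does not spell out the light fall-off principle, but the underlying idea can be sketched under simplified assumptions (a point light co-located with the camera, moved back by a known offset along the viewing direction, and intensity falling off as 1/r^2); the per-pixel intensity ratio of the two captures then determines depth. This is an illustrative sketch, not the thesis implementation:

```python
# Hedged sketch of the light fall-off idea: with the light at distances r and
# r + delta from the surface, I_near / I_far = ((r + delta) / r)^2, so
# r = delta / (sqrt(I_near / I_far) - 1).
import numpy as np

def lfs_depth(i_near, i_far, delta, eps=1e-6):
    """Estimate per-pixel distance from two intensity images.

    i_near: image captured with the light at distance r
    i_far : image captured with the light moved back by `delta`
    """
    ratio = np.sqrt(np.maximum(i_near, eps) / np.maximum(i_far, eps))
    return delta / np.maximum(ratio - 1.0, eps)

# Toy usage: a surface 2.0 units away, light moved back by 0.5 units
r_true = 2.0
i_near = 1.0 / r_true**2
i_far = 1.0 / (r_true + 0.5)**2
print(lfs_depth(np.array([i_near]), np.array([i_far]), delta=0.5))  # ~[2.0]
```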

    Tetrahedral Image-to-Mesh Conversion Software for Anatomic Modeling of Arteriovenous Malformations

    We describe a new implementation of an adaptive multi-tissue tetrahedral mesh generator targeting anatomic modeling of Arteriovenous Malformations (AVMs) for surgical simulation. Our method initially constructs an adaptive Body-Centered Cubic (BCC) mesh of high-quality elements. It then deforms the mesh surfaces to their corresponding physical image boundaries, improving mesh fidelity and smoothness. Our deformation scheme, which builds upon the ITK toolkit, is based on energy minimization and relies on a multi-material point-based registration. It uses non-connectivity patterns to implicitly control the number of extracted feature points needed for the registration, and thus adjusts the trade-off between the achieved mesh fidelity and the deformation speed. While many medical imaging applications require robust mesh generation, few codes are available to the public. We compare our implementation with two similar open-source image-to-mesh conversion codes: (1) Cleaver, from the US, and (2) CGAL, from the EU. Our evaluation is based on five isotropic/anisotropic segmented images and relies on metrics such as geometric and topological fidelity, mesh quality, gradation, and smoothness. The implementation we describe is open source and will be made available within: (i) the 3D Slicer package for visualization and image analysis from Harvard Medical School, and (ii) an interactive simulator for neurosurgical procedures involving vasculature using SOFA, a framework for real-time medical simulation developed by INRIA.
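    As a rough illustration of the initial lattice step only (a hedged sketch using generic numpy/scipy rather than the adaptive, ITK-based generator the abstract describes), a uniform BCC vertex set can be built from two interleaved cubic grids, with tetrahedra obtained here from a Delaunay triangulation for simplicity:

```python
# Minimal sketch of a uniform Body-Centered Cubic (BCC) vertex lattice.
# The real generator is adaptive and multi-tissue; this illustration is neither.
import numpy as np
from scipy.spatial import Delaunay

def bcc_lattice(extent, spacing):
    """Return BCC vertices covering a box of the given physical extent.

    extent : (sx, sy, sz) size of the volume in physical units
    spacing: edge length of the primary cubic lattice
    """
    axes = [np.arange(0.0, s + spacing, spacing) for s in extent]
    primary = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)
    secondary = primary + spacing / 2.0            # body-centred copy
    secondary = secondary[(secondary <= np.array(extent)).all(axis=1)]
    return np.vstack([primary, secondary])

verts = bcc_lattice((10.0, 10.0, 10.0), spacing=2.0)
tets = Delaunay(verts).simplices                   # (m, 4) tetrahedral connectivity
print(verts.shape, tets.shape)
```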