Tetrahedral Image-to-Mesh Conversion Software for Anatomic Modeling of Arteriovenous Malformations
We describe a new implementation of an adaptive multi-tissue tetrahedral mesh generator targeting anatomic modeling of Arteriovenous Malformations (AVMs) for surgical simulations. Our method initially constructs an adaptive Body-Centered Cubic (BCC) mesh of high-quality elements. It then deforms the mesh surfaces to their corresponding physical image boundaries, improving mesh fidelity and smoothness. Our deformation scheme, which builds upon the ITK toolkit, is based on the concept of energy minimization and relies on a multi-material point-based registration. It uses non-connectivity patterns to implicitly control the number of extracted feature points needed for the registration, and thus adjusts the trade-off between the achieved mesh fidelity and the deformation speed. While many medical imaging applications require robust mesh generation, few such codes are available to the public. We compare our implementation with two similar open-source image-to-mesh conversion codes: (1) Cleaver, from the US, and (2) CGAL, from the EU. Our evaluation is based on five isotropic/anisotropic segmented images and relies on metrics such as geometric and topological fidelity, mesh quality, gradation, and smoothness. The implementation we describe is open-source and will be available within: (i) the 3D Slicer package for visualization and image analysis from Harvard Medical School, and (ii) an interactive simulator for neurosurgical procedures involving vasculature using SOFA, a framework for real-time medical simulation developed by INRIA.
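The BCC starting mesh mentioned above is a simple cubic lattice augmented with one vertex at the center of each cell, which is what makes its tetrahedra uniformly well shaped before deformation. As a minimal sketch (the function name and spacing parameter are my own, not from the paper), the lattice vertices can be generated like this:

```python
import numpy as np

def bcc_lattice(shape, spacing=1.0):
    """Generate the vertices of a Body-Centered Cubic (BCC) lattice
    covering a box of `shape` = (nx, ny, nz) cells. A BCC lattice is
    the cubic-grid corners plus one body-center point per cell; it
    tiles space with well-shaped tetrahedra, which is why it serves
    as a high-quality starting mesh before boundary deformation."""
    nx, ny, nz = shape
    # corner vertices of the cubic grid: (nx+1)*(ny+1)*(nz+1) points
    corners = np.stack(np.meshgrid(
        np.arange(nx + 1), np.arange(ny + 1), np.arange(nz + 1),
        indexing="ij"), axis=-1).reshape(-1, 3).astype(float)
    # body-center vertices: one per cell, offset by half a cell
    centers = np.stack(np.meshgrid(
        np.arange(nx), np.arange(ny), np.arange(nz),
        indexing="ij"), axis=-1).reshape(-1, 3) + 0.5
    return np.vstack([corners, centers]) * spacing

pts = bcc_lattice((2, 2, 2))
print(len(pts))  # 27 corners + 8 centers = 35 vertices
```

The adaptive version in the paper refines this lattice near image boundaries; the sketch only shows the uniform base lattice.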
3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks
We propose a method for reconstructing 3D shapes from 2D sketches in the form
of line drawings. Our method takes as input a single sketch, or multiple
sketches, and outputs a dense point cloud representing a 3D reconstruction of
the input sketch(es). The point cloud is then converted into a polygon mesh. At
the heart of our method lies a deep, encoder-decoder network. The encoder
converts the sketch into a compact representation encoding shape information.
The decoder converts this representation into depth and normal maps capturing
the underlying surface from several output viewpoints. The multi-view maps are
then consolidated into a 3D point cloud by solving an optimization problem that
fuses depth and normals across all viewpoints. Based on our experiments,
compared to other methods, such as volumetric networks, our architecture offers
several advantages, including more faithful reconstruction, higher output
surface resolution, and better preservation of topology and shape structure.
Comment: 3DV 2017 (oral)
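The consolidation step described above starts by lifting each predicted depth map back into a shared world frame before the depth/normal fusion is solved. A minimal back-projection sketch (pinhole intrinsics `K` and camera-to-world poses are assumed inputs; the naive `fuse_views` just concatenates views, whereas the paper solves a joint optimization over depths and normals):

```python
import numpy as np

def backproject(depth, K, cam_to_world):
    """Lift a depth map (H, W) into a world-space point cloud using
    pinhole intrinsics K (3x3) and a 4x4 camera-to-world pose."""
    h, w = depth.shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    z = depth.ravel()
    valid = z > 0                       # skip pixels with no depth
    # invert the pinhole projection: x = (u - cx) * z / fx, etc.
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)[:, valid]
    return (cam_to_world @ pts_cam)[:3].T   # (N, 3) world points

def fuse_views(depth_maps, Ks, poses):
    """Naive consolidation: concatenate all back-projected views."""
    return np.vstack([backproject(d, K, T)
                      for d, K, T in zip(depth_maps, Ks, poses)])
```

With identity intrinsics and pose, a 2x2 depth map of ones yields four points at unit depth; the real pipeline would then optimize these points against the predicted normal maps.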
MVPNet: Multi-View Point Regression Networks for 3D Object Reconstruction from A Single Image
In this paper, we address the problem of reconstructing an object's surface
from a single image using generative networks. First, we represent a 3D surface
with an aggregation of dense point clouds from multiple views. Each point cloud
is embedded in a regular 2D grid aligned on an image plane of a viewpoint,
making the point cloud convolution-friendly and ordered so as to fit into deep
network architectures. The point clouds can be easily triangulated by
exploiting connectivities of the 2D grids to form mesh-based surfaces. Second,
we propose an encoder-decoder network that generates multiple such
view-dependent point clouds from a single image by regressing their 3D
coordinates and visibilities. We also introduce a novel geometric loss that is
able to interpret discrepancy over 3D surfaces as opposed to 2D projective
planes, resorting to the surface discretization on the constructed meshes. We
demonstrate that the multi-view point regression network outperforms
state-of-the-art methods with a significant improvement on challenging
datasets.
Comment: 8 pages; accepted by AAAI 201
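The triangulation step described above exploits the fact that each view-dependent point cloud lives on a regular 2D grid: neighboring grid cells give the mesh connectivity for free. A minimal sketch (function name and the optional visibility mask are my own; the paper additionally regresses per-point visibilities that such a mask would represent):

```python
import numpy as np

def grid_triangulate(h, w, visible=None):
    """Build triangle faces over an (h, w) grid of points stored
    row-major, splitting each grid cell into two triangles.
    `visible` is an optional boolean (h, w) mask; a triangle is kept
    only if all three of its vertices are marked visible."""
    faces = []
    for i in range(h - 1):
        for j in range(w - 1):
            # indices of the four corners of cell (i, j)
            a, b = i * w + j, i * w + j + 1
            c, d = (i + 1) * w + j, (i + 1) * w + j + 1
            for tri in ((a, b, c), (b, d, c)):
                if visible is None or all(
                        visible.ravel()[v] for v in tri):
                    faces.append(tri)
    return np.array(faces)

print(len(grid_triangulate(3, 3)))  # 2x2 cells * 2 triangles = 8
```

Because connectivity comes from the grid rather than from a general-purpose triangulator, the surface discretization is cheap and deterministic, which is what makes the mesh-based geometric loss practical.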
From 3D Models to 3D Prints: an Overview of the Processing Pipeline
Due to the wide diffusion of 3D printing technologies, geometric algorithms
for Additive Manufacturing are being invented at an impressive speed. Each
single step, in particular along the Process Planning pipeline, can now count
on dozens of methods that prepare the 3D model for fabrication, while analysing
and optimizing geometry and machine instructions for various objectives. This
report provides a classification of this huge state of the art, and elicits the
relation between each single algorithm and a list of desirable objectives
during Process Planning. The objectives themselves are listed and discussed,
along with possible needs for tradeoffs. Additive Manufacturing technologies
are broadly categorized to explicitly relate classes of devices and supported
features. Finally, this report offers an analysis of the state of the art while
discussing open and challenging problems from both an academic and an
industrial perspective.
Comment: European Union (EU); Horizon 2020; H2020-FoF-2015; RIA - Research and
Innovation action; Grant agreement N. 68044
LabelFusion: A Pipeline for Generating Ground Truth Labels for Real RGBD Data of Cluttered Scenes
Deep neural network (DNN) architectures have been shown to outperform
traditional pipelines for object segmentation and pose estimation using RGBD
data, but the performance of these DNN pipelines is directly tied to how
representative the training data is of the true data. Hence a key requirement
for employing these methods in practice is to have a large set of labeled data
for your specific robotic manipulation task, a requirement that is not
generally satisfied by existing datasets. In this paper we develop a pipeline
to rapidly generate high quality RGBD data with pixelwise labels and object
poses. We use an RGBD camera to collect video of a scene from multiple
viewpoints and leverage existing reconstruction techniques to produce a 3D
dense reconstruction. We label the 3D reconstruction using a human-assisted
ICP fitting of object meshes. By reprojecting the results of labeling the 3D
scene we can produce labels for each RGBD image of the scene. This pipeline
enabled us to collect over 1,000,000 labeled object instances in just a few
days. We use this dataset to answer questions related to how much training data
is required, and of what quality the data must be, to achieve high performance
from a DNN architecture.
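The reprojection step described above is what lets one 3D labeling pay off across every frame of the video: each fitted, labeled object is projected through each frame's camera pose to produce pixelwise labels. A simplified sketch (the real pipeline rasterizes the fitted meshes; projecting a dense labeled point set with a z-buffer, as below, is a stand-in, and all names here are illustrative):

```python
import numpy as np

def reproject_labels(points, labels, K, world_to_cam, h, w):
    """Render per-pixel integer labels for one RGBD frame by
    projecting labeled 3D points (N, 3) through pinhole intrinsics K
    and a 4x4 world-to-camera pose. A z-buffer keeps the nearest
    point per pixel; 0 means background."""
    label_img = np.zeros((h, w), dtype=np.int32)
    zbuf = np.full((h, w), np.inf)
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (world_to_cam @ pts_h.T)[:3]      # points in camera frame
    z = cam[2]
    ok = z > 0                              # keep points in front of camera
    u = np.round(K[0, 0] * cam[0, ok] / z[ok] + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * cam[1, ok] / z[ok] + K[1, 2]).astype(int)
    for ui, vi, zi, li in zip(u, v, z[ok], labels[ok]):
        if 0 <= ui < w and 0 <= vi < h and zi < zbuf[vi, ui]:
            zbuf[vi, ui] = zi               # nearest point wins the pixel
            label_img[vi, ui] = li
    return label_img
```

Because the labeling is done once in 3D and then reprojected, every frame of a multi-thousand-frame video inherits labels from a single human-assisted alignment, which is how the pipeline reaches over 1,000,000 labeled instances in days.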