FrameNet: Learning Local Canonical Frames of 3D Surfaces from a Single RGB Image
In this work, we introduce the novel problem of identifying dense canonical
3D coordinate frames from a single RGB image. We observe that each pixel in an
image corresponds to a surface in the underlying 3D geometry, where a canonical
frame can be defined by three orthogonal axes: one along the surface normal
and two in its tangent plane. We propose an algorithm to
predict these axes from RGB. Our first insight is that canonical frames
computed automatically with recently introduced direction field synthesis
methods can provide training data for the task. Our second insight is that
networks designed for surface normal prediction provide better results when
trained jointly to predict canonical frames, and even better when trained to
also predict 2D projections of canonical frames. We conjecture this is because
projections of canonical tangent directions often align with local gradients in
images, and because those directions are tightly linked to 3D canonical frames
through projective geometry and orthogonality constraints. In our experiments,
we find that our method predicts 3D canonical frames that can be used in
applications ranging from surface normal estimation and feature matching to
augmented reality.
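As a concrete illustration of the geometry involved, the three axes of such a frame can be assembled from a surface normal and a tangent-direction hint by one Gram-Schmidt step followed by a cross product. This is a minimal sketch, not the paper's network: the tangent hint stands in for a predicted in-plane direction, and all names here are hypothetical.

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def canonical_frame(normal, tangent_hint):
    """Build an orthonormal frame: one axis along the normal, two in the tangent plane."""
    n = normalize(normal)
    # Gram-Schmidt: project the hint onto the tangent plane, then normalize.
    t = [h - dot(tangent_hint, n) * c for h, c in zip(tangent_hint, n)]
    t = normalize(t)
    # n and t are orthonormal, so their cross product is already unit length.
    b = cross(n, t)
    return n, t, b

n, t, b = canonical_frame([0.0, 0.0, 1.0], [1.0, 0.2, 0.3])
```

The remaining degree of freedom is the in-plane rotation of (t, b), which is exactly what the direction-field supervision in the paper pins down.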
3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks
We propose a method for reconstructing 3D shapes from 2D sketches in the form
of line drawings. Our method takes as input a single sketch, or multiple
sketches, and outputs a dense point cloud representing a 3D reconstruction of
the input sketch(es). The point cloud is then converted into a polygon mesh. At
the heart of our method lies a deep, encoder-decoder network. The encoder
converts the sketch into a compact representation encoding shape information.
The decoder converts this representation into depth and normal maps capturing
the underlying surface from several output viewpoints. The multi-view maps are
then consolidated into a 3D point cloud by solving an optimization problem that
fuses depth and normals across all viewpoints. Based on our experiments,
compared to other methods, such as volumetric networks, our architecture offers
several advantages, including more faithful reconstruction, higher output
surface resolution, better preservation of topology and shape structure.Comment: 3DV 2017 (oral
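The consolidation step above starts from per-view depth maps; the first operation in any such fusion is back-projecting each depth map into 3D through the camera intrinsics. A minimal sketch under a pinhole model, with toy intrinsics chosen for illustration (the paper's full optimization also fuses the predicted normals, which is omitted here):

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (row-major list of rows) into a 3D point list."""
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d <= 0:  # skip invalid / missing depth
                continue
            # Pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy
            x = (u - cx) * d / fx
            y = (v - cy) * d / fy
            points.append((x, y, d))
    return points

# Toy 2x2 depth map with one invalid pixel; hypothetical intrinsics.
cloud = depth_to_points([[1.0, 1.0], [0.0, 2.0]], fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Repeating this per output viewpoint and transforming each partial cloud into a common frame yields the multi-view point set that the optimization then refines.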
Consistent ICP for the registration of sparse and inhomogeneous point clouds
In this paper, we derive a novel iterative closest point (ICP) technique that performs point cloud alignment in a robust and consistent way. Traditional ICP techniques minimize point-to-point distances, which works well when the point clouds are free of noise and clutter and, moreover, are dense and more or less uniformly sampled. When these conditions are not met, it is better to employ the point-to-plane or other metrics to locally approximate the surface of the objects. However, the point-to-plane metric does not yield a symmetric solution, i.e. the estimated transformation of point cloud p to point cloud q is not necessarily equal to the inverse of the transformation of point cloud q to point cloud p. To improve ICP, we enforce this symmetry as prior knowledge and also make the method robust to noise and clutter. Experimental results show that our method is indeed much more consistent and accurate in the presence of noise and clutter compared to existing ICP algorithms.
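The asymmetry described above is easy to see at the level of a single residual: projecting p - q onto q's normal generally gives a different value than projecting q - p onto p's normal. One known way to symmetrize it (as in Rusinkiewicz's symmetric point-to-plane objective; the paper's own constraint formulation may differ) is to project onto the sum of both normals, so swapping the two clouds only flips the sign. A minimal sketch:

```python
def point_to_plane(p, q, n_q):
    """Residual of point p against the plane through q with normal n_q."""
    return sum((pi - qi) * ni for pi, qi, ni in zip(p, q, n_q))

def symmetric_residual(p, q, n_p, n_q):
    """Symmetrized residual: swapping (p, n_p) with (q, n_q) only flips the sign."""
    return sum((pi - qi) * (npi + nqi)
               for pi, qi, npi, nqi in zip(p, q, n_p, n_q))

# Toy correspondence where the two one-sided residuals disagree.
p, q = (1.0, 0.0, 0.0), (0.0, 0.0, 0.0)
n_p, n_q = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
```

Here the one-sided residuals are 0 and -1, while the symmetric residual is 1 in one direction and -1 in the other, so minimizing its square treats both registration directions identically.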
Placental Flattening via Volumetric Parameterization
We present a volumetric mesh-based algorithm for flattening the placenta to a
canonical template to enable effective visualization of local anatomy and
function. Monitoring placental function in vivo promises to support pregnancy
assessment and to improve care outcomes. We aim to alleviate visualization and
interpretation challenges presented by the shape of the placenta when it is
attached to the curved uterine wall. To do so, we flatten the volumetric mesh
that captures placental shape to resemble the well-studied ex vivo shape. We
formulate our method as a map from the in vivo shape to a flattened template
that minimizes the symmetric Dirichlet energy to control distortion throughout
the volume. Local injectivity is enforced via constrained line search during
gradient descent. We evaluate the proposed method on 28 placenta shapes
extracted from MRI images in a clinical study of placental function. We achieve
sub-voxel accuracy in mapping the boundary of the placenta to the template
while successfully controlling distortion throughout the volume. We illustrate
how the resulting mapping of the placenta enhances visualization of placental
anatomy and function. Our code is freely available at
https://github.com/mabulnaga/placenta-flattening.
Comment: MICCAI 201
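The symmetric Dirichlet energy minimized above penalizes both stretch and compression of the map: in terms of the singular values sigma_i of the map's Jacobian it equals the sum of sigma_i^2 + 1/sigma_i^2, which is minimized exactly when the map is locally an isometry (all sigma_i = 1). A minimal sketch of the per-element energy (function name hypothetical; the paper evaluates this per tetrahedron of the volumetric mesh):

```python
def symmetric_dirichlet(singular_values):
    """Symmetric Dirichlet energy of a map with the given Jacobian singular values.

    Each term s**2 penalizes stretch and 1/s**2 penalizes compression,
    so the energy blows up as any singular value approaches 0 or infinity.
    """
    return sum(s * s + 1.0 / (s * s) for s in singular_values)

# A distortion-free 3D map (identity Jacobian) attains the minimum energy of 6.
identity_energy = symmetric_dirichlet([1.0, 1.0, 1.0])
```

The blow-up as a singular value approaches zero is what makes the constrained line search during gradient descent necessary: it keeps every element's Jacobian away from degeneracy, preserving local injectivity.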