3D Shape Completion with Multi-view Consistent Inference
3D shape completion is important to enable machines to perceive the complete
geometry of objects from partial observations. To address this problem,
view-based methods have been presented. These methods represent shapes as
multiple depth images, which can be back-projected to yield corresponding 3D
point clouds, and they perform shape completion by learning to complete each
depth image using neural networks. While view-based methods lead to
state-of-the-art results, they currently do not enforce geometric consistency
among the completed views during the inference stage. To resolve this issue, we
propose a multi-view consistent inference technique for 3D shape completion,
which we express as an energy minimization problem including a data term and a
regularization term. We formulate the regularization term as a consistency loss
that encourages geometric consistency among multiple views, while the data term
guarantees that the optimized views do not drift away too much from a learned
shape descriptor. Experimental results demonstrate that our method completes
shapes more accurately than previous techniques.
Comment: Accepted to AAAI 2020 as an oral presentation
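The abstract above frames completion as an energy with a data term (stay close to the learned prediction) plus a consistency regularizer across views. The sketch below is a toy illustration of that structure only: it compares views directly against their mean rather than via back-projection into 3D, and all names (`consistency_energy`, `descriptor_views`, `lam`) are hypothetical, not the authors' formulation.

```python
import numpy as np

def consistency_energy(views, descriptor_views, lam=0.5):
    """Toy energy for multi-view consistent completion.

    data term: keeps each optimized depth view close to the view
        predicted from the learned shape descriptor.
    regularization term: penalizes disagreement between views
        (simplified proxy: deviation from the mean view).
    """
    views = np.asarray(views, dtype=float)
    descriptor_views = np.asarray(descriptor_views, dtype=float)
    data = np.sum((views - descriptor_views) ** 2)
    mean_view = views.mean(axis=0)
    consistency = np.sum((views - mean_view) ** 2)
    return data + lam * consistency
```

Minimizing this energy over the views trades off fidelity to the per-view predictions against cross-view agreement, which is the trade-off the abstract describes.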
DiffRF: Rendering-Guided 3D Radiance Field Diffusion
We introduce DiffRF, a novel approach for 3D radiance field synthesis based
on denoising diffusion probabilistic models. While existing diffusion-based
methods operate on images, latent codes, or point cloud data, we are the first
to directly generate volumetric radiance fields. To this end, we propose a 3D
denoising model which directly operates on an explicit voxel grid
representation. However, as radiance fields generated from a set of posed
images can be ambiguous and contain artifacts, obtaining ground truth radiance
field samples is non-trivial. We address this challenge by pairing the
denoising formulation with a rendering loss, enabling our model to learn a
deviated prior that favours good image quality instead of trying to replicate
fitting errors like floating artifacts. In contrast to 2D-diffusion models, our
model learns multi-view consistent priors, enabling free-view synthesis and
accurate shape generation. Compared to 3D GANs, our diffusion-based approach
naturally enables conditional generation such as masked completion or
single-view 3D synthesis at inference time.
Comment: Project page: https://sirwyver.github.io/DiffRF/ Video:
https://youtu.be/qETBcLu8SUk - CVPR 2023 Highlight - updated evaluations
after fixing an initial data-mapping error on all methods
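The key idea in the abstract above is pairing the standard diffusion denoising objective with a rendering loss on images produced from the denoised field. The sketch below shows only that loss composition under simplifying assumptions; `diffrf_style_loss`, `w_render`, and the pre-rendered image inputs are placeholders, not the authors' implementation.

```python
import numpy as np

def diffrf_style_loss(pred_noise, true_noise, rendered, target_image, w_render=0.1):
    """Combine a DDPM-style denoising loss on the voxel radiance field
    with a rendering loss comparing images rendered from the denoised
    field against ground-truth posed views."""
    denoise = np.mean((np.asarray(pred_noise) - np.asarray(true_noise)) ** 2)
    render = np.mean((np.asarray(rendered) - np.asarray(target_image)) ** 2)
    return denoise + w_render * render
```

The rendering term is what lets the model prefer fields that produce clean images over fields that faithfully reproduce fitting artifacts such as floaters.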
Hybrid Bayesian Eigenobjects: Combining Linear Subspace and Deep Network Methods for 3D Robot Vision
We introduce Hybrid Bayesian Eigenobjects (HBEOs), a novel representation for
3D objects designed to allow a robot to jointly estimate the pose, class, and
full 3D geometry of a novel object observed from a single viewpoint in a single
practical framework. By combining both linear subspace methods and deep
convolutional prediction, HBEOs efficiently learn nonlinear object
representations without directly regressing into high-dimensional space. HBEOs
also remove the onerous and generally impractical necessity of input data
voxelization prior to inference. We experimentally evaluate the suitability of
HBEOs to the challenging task of joint pose, class, and shape inference on
novel objects and show that, compared to preceding work, HBEOs offer
dramatically improved performance in all three tasks along with several orders
of magnitude faster runtime performance.
Comment: To appear in the International Conference on Intelligent Robots
and Systems (IROS) - Madrid, 2018
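The linear-subspace half of the HBEO idea can be illustrated with plain PCA: represent vectorized shapes by coefficients in a low-dimensional eigenbasis, so a network can regress a handful of coefficients instead of a full high-dimensional grid. The helper below (`hbeo_style_project` is a hypothetical name) is a minimal sketch of that subspace step only, not the authors' code.

```python
import numpy as np

def hbeo_style_project(shapes, k=2):
    """Learn a rank-k eigenbasis of vectorized training shapes and
    return the low-dimensional codes plus reconstructions.

    A deep network would then predict `coeffs` directly from an
    observation, avoiding regression into the full shape space."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centered data gives the principal shape directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]                    # top-k "eigen-shapes"
    coeffs = (X - mean) @ basis.T     # low-dimensional codes
    recon = coeffs @ basis + mean     # shapes reconstructed from codes
    return coeffs, recon
```

When the training shapes truly lie near a k-dimensional subspace, the reconstruction is near-exact, which is what makes the subspace a useful regression target.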
Deformable Shape Completion with Graph Convolutional Autoencoders
The availability of affordable and portable depth sensors has made scanning
objects and people simpler than ever. However, dealing with occlusions and
missing parts is still a significant challenge. The problem of reconstructing a
(possibly non-rigidly moving) 3D object from a single or multiple partial scans
has received increasing attention in recent years. In this work, we propose a
novel learning-based method for the completion of partial shapes. Unlike the
majority of existing approaches, our method focuses on objects that can undergo
non-rigid deformations. The core of our method is a variational autoencoder
with graph convolutional operations that learns a latent space for complete
realistic shapes. At inference, we optimize to find the representation in this
latent space that best fits the generated shape to the known partial input. The
completed shape exhibits a realistic appearance on the unknown part. We show
promising results towards the completion of synthetic and real scans of human
body and face meshes exhibiting different styles of articulation and
partiality.
Comment: CVPR 2018
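The inference step described above, optimizing over a learned latent space so the decoded shape matches the observed partial input, can be sketched generically. The version below uses finite differences for simplicity (a real system would backpropagate through the decoder), and `fit_latent`, `decode`, and `mask` are illustrative names, not the paper's API.

```python
import numpy as np

def fit_latent(decode, partial, mask, z0, steps=200, lr=0.1):
    """Find a latent code z whose decoded shape matches the observed
    (masked) part of the partial input; the decoder then fills in the
    unobserved part with a plausible completion."""
    z = np.array(z0, dtype=float)
    partial = np.asarray(partial, dtype=float)
    mask = np.asarray(mask, dtype=float)

    def loss(z):
        # Only observed entries (mask == 1) constrain the fit.
        return np.sum(mask * (decode(z) - partial) ** 2)

    eps = 1e-4
    for _ in range(steps):
        # Central finite-difference gradient, coordinate by coordinate.
        grad = np.zeros_like(z)
        for i in range(z.size):
            dz = np.zeros_like(z)
            dz[i] = eps
            grad[i] = (loss(z + dz) - loss(z - dz)) / (2 * eps)
        z -= lr * grad
    return z
```

With an identity decoder the optimization simply recovers the observed values, which verifies the fitting loop; the interesting behavior comes from a decoder trained on complete, realistic shapes.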
Learning to Reconstruct Shapes from Unseen Classes
From a single image, humans are able to perceive the full 3D shape of an
object by exploiting learned shape priors from everyday life. Contemporary
single-image 3D reconstruction algorithms aim to solve this task in a similar
fashion, but often end up with priors that are highly biased by training
classes. Here we present an algorithm, Generalizable Reconstruction (GenRe),
designed to capture more generic, class-agnostic shape priors. We achieve this
with an inference network and training procedure that combine 2.5D
representations of visible surfaces (depth and silhouette), spherical shape
representations of both visible and non-visible surfaces, and 3D voxel-based
representations, in a principled manner that exploits the causal structure of
how 3D shapes give rise to 2D images. Experiments demonstrate that GenRe
performs well on single-view shape reconstruction, and generalizes to diverse
novel objects from categories not seen during training.
Comment: NeurIPS 2018 (Oral). The first two authors contributed equally to
this paper. Project page: http://genre.csail.mit.edu
High-Resolution Shape Completion Using Deep Neural Networks for Global Structure and Local Geometry Inference
We propose a data-driven method for recovering missing parts of 3D shapes.
Our method is based on a new deep learning architecture consisting of two
sub-networks: a global structure inference network and a local geometry
refinement network. The global structure inference network incorporates a long
short-term memorized context fusion module (LSTM-CF) that infers the global
structure of the shape based on multi-view depth information provided as part
of the input. It also includes a 3D fully convolutional (3DFCN) module that
further enriches the global structure representation according to volumetric
information in the input. Under the guidance of the global structure network,
the local geometry refinement network takes as input local 3D patches around
missing regions, and progressively produces a high-resolution, complete surface
through a volumetric encoder-decoder architecture. Our method jointly trains
the global structure inference and local geometry refinement networks in an
end-to-end manner. We perform qualitative and quantitative evaluations on six
object categories, demonstrating that our method outperforms existing
state-of-the-art work on shape completion.
Comment: 8-page paper, 11 pages of supplementary material, ICCV spotlight paper
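The global-then-local pipeline described above can be outlined schematically: a global network predicts coarse structure for the whole volume, then a local network refines patches around missing regions under that guidance. Both networks below are stand-in callables, not the paper's LSTM-CF or 3DFCN modules, and `two_stage_completion` is a hypothetical name.

```python
import numpy as np

def two_stage_completion(volume, global_net, local_net, patch_size=4):
    """Two-stage completion sketch: global coarse estimate, then
    patch-wise local refinement. Missing voxels are marked NaN."""
    coarse = global_net(volume)            # coarse full-shape estimate
    out = coarse.copy()
    missing = np.argwhere(np.isnan(volume))
    for idx in missing:
        # Crop a local patch around the missing voxel (clipped at borders).
        lo = np.maximum(idx - patch_size // 2, 0)
        hi = lo + patch_size
        patch = coarse[tuple(slice(a, b) for a, b in zip(lo, hi))]
        out[tuple(idx)] = local_net(patch)  # refined value for the hole
    return out
```

The guidance aspect is captured by the local network seeing the coarse estimate (not the raw input) around each hole, mirroring how the refinement network operates under the global structure network in the paper.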