Point-PC: Point Cloud Completion Guided by Prior Knowledge via Causal Inference
Point cloud completion aims to recover complete point clouds from the partial
observations captured by scanners under occlusion and limited viewing angles. Many
approaches utilize a partial-complete paradigm in which missing parts are
directly predicted by a global feature learned from partial inputs. This makes
it hard to recover details because the global feature is unlikely to capture
the full details of all missing parts. In this paper, we propose a novel
approach to point cloud completion called Point-PC, which uses a memory network
to retrieve shape priors and designs an effective causal inference model to
choose missing shape information as additional geometric information to aid
point cloud completion. Specifically, we propose a memory operating mechanism
where the complete shape features and the corresponding shapes are stored in
the form of "key-value" pairs. To retrieve similar shapes from the partial
input, we also apply a contrastive learning-based pre-training scheme to
transfer features of incomplete shapes into the domain of complete shape
features. Moreover, we use backdoor adjustment to remove the confounder,
which is a part of the shape prior that has the same semantic structure as the
partial input. Experimental results on the ShapeNet-55, PCN, and KITTI datasets
demonstrate that Point-PC performs favorably against the state-of-the-art
methods.
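The "key-value" memory retrieval described above can be sketched as a nearest-neighbor lookup over stored complete-shape features. This is a hypothetical minimal illustration, not the paper's exact mechanism: the function names `build_memory` and `retrieve_priors`, and the choice of cosine similarity, are assumptions.

```python
import numpy as np

def build_memory(complete_feats, complete_shapes):
    # Store complete-shape features as keys and the shapes as values.
    return {"keys": np.asarray(complete_feats), "values": list(complete_shapes)}

def retrieve_priors(memory, query_feat, k=2):
    # Cosine similarity between the (domain-transferred) partial-input feature
    # and every stored key; return the k most similar complete shapes as priors.
    keys = memory["keys"]
    q = query_feat / np.linalg.norm(query_feat)
    sims = keys @ q / np.linalg.norm(keys, axis=1)
    top = np.argsort(-sims)[:k]
    return [memory["values"][i] for i in top], sims[top]
```

The contrastive pre-training step mentioned in the abstract is what makes this lookup meaningful: it maps partial-input features into the same space as the stored complete-shape keys.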
Denoise and Contrast for Category Agnostic Shape Completion
In this paper, we present a deep learning model that exploits the power of
self-supervision to perform 3D point cloud completion, estimating the missing
part and a context region around it. Local and global information are encoded
in a combined embedding. A denoising pretext task provides the network with the
needed local cues, decoupled from the high-level semantics and naturally shared
over multiple classes. On the other hand, contrastive learning maximizes the
agreement between variants of the same shape with different missing portions,
thus producing a representation which captures the global appearance of the
shape. The combined embedding inherits category-agnostic properties from the
chosen pretext tasks. Unlike existing approaches, this allows the completion
properties to generalize better to new categories unseen at training time.
Moreover, while decoding the obtained joint representation, we
better blend the reconstructed missing part with the partial shape by paying
attention to its known surrounding region and reconstructing this frame as an
auxiliary objective. Our extensive experiments and detailed ablation on the
ShapeNet dataset show the effectiveness of each part of the method, with new
state-of-the-art results. Our quantitative and qualitative analysis confirms
that our approach works on novel categories without relying on classification
or shape-symmetry priors, or on adversarial training procedures.
Comment: Accepted at CVPR 202
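The contrastive objective above, which maximizes agreement between variants of the same shape with different missing portions, can be sketched with an NT-Xent-style loss. This is a generic sketch of that loss family, not the paper's exact formulation; the function name and temperature value are assumptions.

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    # z1[i] and z2[i] embed two partial variants of the same shape i.
    # Matching pairs are pulled together; all other pairs are pushed apart.
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    n = len(z1)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    # For row i, the positive is the other variant of the same shape.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), targets]))
```

Minimizing this loss produces embeddings that are invariant to which portion of the shape is missing, which is what gives the representation its global, category-agnostic character.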
Deformable Shape Completion with Graph Convolutional Autoencoders
The availability of affordable and portable depth sensors has made scanning
objects and people simpler than ever. However, dealing with occlusions and
missing parts is still a significant challenge. The problem of reconstructing a
(possibly non-rigidly moving) 3D object from a single or multiple partial scans
has received increasing attention in recent years. In this work, we propose a
novel learning-based method for the completion of partial shapes. Unlike the
majority of existing approaches, our method focuses on objects that can undergo
non-rigid deformations. The core of our method is a variational autoencoder
with graph convolutional operations that learns a latent space for complete
realistic shapes. At inference, we optimize to find the representation in this
latent space that best fits the generated shape to the known partial input. The
completed shape exhibits a realistic appearance on the unknown part. We show
promising results towards the completion of synthetic and real scans of human
body and face meshes exhibiting different styles of articulation and
partiality.
Comment: CVPR 201
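The inference-time procedure above, optimizing over the learned latent space so that the decoded shape fits the known partial input, can be sketched as follows. This is a hypothetical toy version: `decode` stands in for the graph-convolutional decoder, and the finite-difference gradient replaces backpropagation.

```python
import numpy as np

def fit_loss(decode, z, partial, known_idx):
    # Error between the decoded shape and the observed (known) vertices only.
    pred = decode(z)
    return float(np.mean((pred[known_idx] - partial) ** 2))

def complete_shape(decode, z_dim, partial, known_idx, steps=200, lr=0.1):
    # Search the latent space of complete shapes for the code whose decoding
    # best matches the known vertices; the decoder fills in the missing part.
    z = np.zeros(z_dim)
    eps = 1e-4
    for _ in range(steps):
        # Numerical gradient of the fitting loss w.r.t. z (finite differences);
        # in practice one would backpropagate through the decoder instead.
        base = fit_loss(decode, z, partial, known_idx)
        grad = np.zeros(z_dim)
        for j in range(z_dim):
            zp = z.copy()
            zp[j] += eps
            grad[j] = (fit_loss(decode, zp, partial, known_idx) - base) / eps
        z -= lr * grad
    return decode(z)
```

Because the latent space is trained on complete, realistic shapes, the optimum yields a plausible appearance even on the unobserved region, which is the key property the abstract claims.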
High-Resolution Shape Completion Using Deep Neural Networks for Global Structure and Local Geometry Inference
We propose a data-driven method for recovering missing parts of 3D shapes.
Our method is based on a new deep learning architecture consisting of two
sub-networks: a global structure inference network and a local geometry
refinement network. The global structure inference network incorporates a long
short-term memorized context fusion module (LSTM-CF) that infers the global
structure of the shape based on multi-view depth information provided as part
of the input. It also includes a 3D fully convolutional (3DFCN) module that
further enriches the global structure representation according to volumetric
information in the input. Under the guidance of the global structure network,
the local geometry refinement network takes as input local 3D patches around
missing regions, and progressively produces a high-resolution, complete surface
through a volumetric encoder-decoder architecture. Our method jointly trains
the global structure inference and local geometry refinement networks in an
end-to-end manner. We perform qualitative and quantitative evaluations on six
object categories, demonstrating that our method outperforms existing
state-of-the-art work on shape completion.
Comment: 8-page paper, 11 pages supplementary material, ICCV spotlight pape
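The global-to-local pipeline above can be caricatured as upsampling the coarse global volume and splicing in locally refined high-resolution patches around the missing regions. This is a hypothetical sketch: `refine_patches` and the nearest-neighbor upsampling stand in for the LSTM-CF/3DFCN global network and the volumetric encoder-decoder refinement.

```python
import numpy as np

def refine_patches(coarse_vol, refine, patch_centers, patch=8, scale=4):
    # Upsample the coarse global structure (nearest-neighbor along each axis),
    # then overwrite local regions around missing areas with the output of a
    # high-resolution refinement step applied to each patch.
    high = np.repeat(np.repeat(np.repeat(coarse_vol, scale, 0), scale, 1), scale, 2)
    r = patch // 2
    for (x, y, z) in patch_centers:
        region = high[x - r:x + r, y - r:y + r, z - r:z + r]
        high[x - r:x + r, y - r:y + r, z - r:z + r] = refine(region)
    return high
```

The point of the division of labor is that the global network only has to get the coarse structure right, while the patch-level network supplies fine surface detail at a resolution the global volume cannot afford.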