Local Frequency Interpretation and Non-Local Self-Similarity on Graph for Point Cloud Inpainting
As 3D scanning devices and depth sensors mature, point clouds have attracted
increasing attention as a format for 3D object representation, with
applications in various fields such as tele-presence, navigation and heritage
reconstruction. However, point clouds usually exhibit holes of missing data,
mainly due to limitations of acquisition techniques and complicated
structure. Further, point clouds are defined on irregular non-Euclidean
domains, which are challenging to address, especially with conventional signal
processing tools. Hence, leveraging recent advances in graph signal
processing, we propose an efficient point cloud inpainting method, exploiting
both the local smoothness and the non-local self-similarity in point clouds.
Specifically, we first propose a frequency interpretation in the graph nodal
domain, based on which we introduce a local graph-signal smoothness prior in
order to describe the local smoothness of point clouds. Secondly, we explore
the characteristics of non-local self-similarity, by globally searching for the
most similar area to the missing region. The similarity metric between two
areas is defined based on the direct component and the anisotropic graph total
variation of normals in each area. Finally, we formulate the hole-filling step
as an optimization problem based on the selected most similar area and
regularized by the graph-signal smoothness prior. Besides, we propose
voxelization and automatic hole detection methods for the point cloud prior to
inpainting. Experimental results show that the proposed approach outperforms
four competing methods significantly, both in objective and subjective quality.
Comment: 11 pages, 11 figures, submitted to IEEE Transactions on Image Processing at 2018.09.0
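The local graph-signal smoothness prior can be sketched as the Laplacian quadratic form x^T L x. A minimal, hypothetical example: hole-filling on a toy chain graph (standing in for a k-nearest-neighbor graph on points) reduces to a linear solve for the missing nodes.

```python
import numpy as np

# Toy sketch of graph-signal smoothness inpainting (hypothetical chain
# graph standing in for a k-NN graph built on a point cloud).
n = 6
W = np.zeros((n, n))
for i in range(n - 1):          # chain graph: neighbors i <-> i+1
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W  # combinatorial graph Laplacian

known = np.array([0, 1, 4, 5])        # observed node indices
missing = np.array([2, 3])            # the "hole"
y = np.array([0.0, 1.0, 4.0, 5.0])    # observed signal values

# Inpainting: min_x  x^T L x  subject to x[known] = y
# => solve L_mm x_m = -L_mk y  for the missing block.
L_mm = L[np.ix_(missing, missing)]
L_mk = L[np.ix_(missing, known)]
x_m = np.linalg.solve(L_mm, -L_mk @ y)
print(x_m)  # smooth interpolation of the hole: [2. 3.]
```

The real method uses this prior as a regularizer alongside the non-local similarity term rather than as a hard interpolation.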
ConvPoint: Continuous Convolutions for Point Cloud Processing
Point clouds are unstructured and unordered data, as opposed to images. Thus,
most machine learning approaches developed for images cannot be directly
transferred to point clouds. In this paper, we propose a generalization of
discrete convolutional neural networks (CNNs) in order to deal with point
clouds by replacing discrete kernels by continuous ones. This formulation is
simple, allows arbitrary point cloud sizes and can easily be used for designing
neural networks similarly to 2D CNNs. We present experimental results with
various architectures, highlighting the flexibility of the proposed approach.
We obtain competitive results compared to the state-of-the-art on shape
classification, part segmentation and semantic segmentation for large-scale
point clouds.
Comment: 12 pages
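The continuous-kernel idea can be sketched as follows, with a fixed Gaussian standing in for the learned continuous kernel of the paper (point counts and feature widths are illustrative):

```python
import numpy as np

# Minimal sketch of a continuous convolution on a point cloud: the
# output feature at each point is a kernel-weighted sum over points,
# with the kernel evaluated at continuous 3D offsets.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))        # unordered point cloud
feats = rng.normal(size=(50, 4))      # per-point input features

def continuous_conv(pts, feats, sigma=0.5):
    out = np.empty_like(feats)
    for i, p in enumerate(pts):
        offsets = pts - p                         # continuous offsets
        w = np.exp(-np.sum(offsets**2, 1) / (2 * sigma**2))
        out[i] = (w[:, None] * feats).sum(0) / w.sum()
    return out

out = continuous_conv(pts, feats)
print(out.shape)  # (50, 4): arbitrary point count, fixed channel width
```

Because the kernel is a function of the offset rather than a discrete grid, the same operator applies to any point cloud size, mirroring how 2D convolutions apply to any image size.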
Single-view Object Shape Reconstruction Using Deep Shape Prior and Silhouette
3D shape reconstruction from a single image is a highly ill-posed problem.
Modern deep learning based systems try to solve this problem by learning an
end-to-end mapping from image to shape via a deep network. In this paper, we
aim to solve this problem via an online optimization framework inspired by
traditional methods. Our framework employs a deep autoencoder to learn a set of
latent codes of 3D object shapes, which are fitted by a probabilistic shape
prior using Gaussian Mixture Model (GMM). At inference, the shape and pose are
jointly optimized guided by both image cues and deep shape prior without
relying on an initialization from any trained deep nets. Surprisingly, our
method achieves performance comparable to state-of-the-art methods even without
training an end-to-end network, marking a promising step in this direction.
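The GMM shape prior over latent codes can be sketched with a toy two-component mixture (all mixture parameters below are hypothetical; real codes come from the deep autoencoder):

```python
import numpy as np

# Sketch of scoring a latent shape code under a GMM prior, as used to
# regularize the online shape/pose optimization (toy 2D latent space).
means = np.array([[0.0, 0.0], [3.0, 3.0]])
covs = np.array([1.0, 0.5])           # isotropic component variances
weights = np.array([0.6, 0.4])

def gmm_log_prob(z):
    d = z.shape[-1]
    diff = z - means                               # (K, d) deviations
    log_comp = (np.log(weights)
                - 0.5 * d * np.log(2 * np.pi * covs)
                - 0.5 * np.sum(diff**2, -1) / covs)
    m = log_comp.max()
    return m + np.log(np.exp(log_comp - m).sum())  # log-sum-exp

# A code near a mode scores higher than one far from both modes.
print(gmm_log_prob(np.zeros(2)) > gmm_log_prob(np.array([10.0, -10.0])))
```

At inference, a term like this log-density would be added to the image-cue losses so the optimized code stays on the learned shape manifold.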
NPTC-net: Narrow-Band Parallel Transport Convolutional Neural Network on Point Clouds
Convolution plays a crucial role in various applications in signal and image
processing, analysis, and recognition. It is also the main building block of
convolutional neural networks (CNNs). Designing appropriate convolutional
neural networks on manifold-structured point clouds can carry recent advances
of CNNs over to the analysis and processing of point cloud data. However, one of
the major challenges is to define a proper way to "sweep" filters through the
point cloud as a natural generalization of the planar convolution and to
reflect the point cloud's geometry at the same time. In this paper, we consider
generalizing convolution by adapting parallel transport on the point cloud.
Inspired by a triangulated surface-based method [Stefan C. Schonsheck, Bin
Dong, and Rongjie Lai, arXiv:1805.07857.], we propose the Narrow-Band Parallel
Transport Convolution (NPTC) using a specifically defined connection on a
voxel-based narrow-band approximation of point cloud data. With that, we
further propose a deep convolutional neural network based on NPTC (called
NPTC-net) for point cloud classification and segmentation. Comprehensive
experiments show that the proposed NPTC-net achieves similar or better results
than current state-of-the-art methods on point cloud classification and
segmentation.
Comment: 18 pages, 6 figures
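The voxel-based narrow-band approximation on which the connection is defined can be sketched as an occupancy grid grown by one voxel (grid resolution is illustrative; `np.roll` wraps at the boundary, which a real implementation would avoid):

```python
import numpy as np

# Sketch of a narrow-band voxel approximation of a point cloud: mark
# occupied voxels, then dilate the set by one voxel along each axis.
res = 32
pts = np.random.default_rng(1).uniform(0, 1, size=(200, 3))
idx = np.clip((pts * res).astype(int), 0, res - 1)

occ = np.zeros((res, res, res), dtype=bool)
occ[idx[:, 0], idx[:, 1], idx[:, 2]] = True   # occupied voxels

band = occ.copy()
for axis in range(3):                          # one-voxel dilation
    for shift in (-1, 1):
        band |= np.roll(occ, shift, axis=axis)

print(occ.sum(), band.sum())   # the band contains the occupancy set
```

Restricting filters to such a thin band keeps the computation near the surface instead of over the full dense voxel grid.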
Deep Level Sets: Implicit Surface Representations for 3D Shape Inference
Existing 3D surface representation approaches are unable to accurately
classify pixels and their orientation lying on the boundary of an object,
resulting in coarse representations that usually require post-processing steps
to extract 3D surface meshes. To overcome this limitation, we propose an
end-to-end trainable model that directly predicts implicit surface
representations of arbitrary topology by optimising a novel geometric loss
function. Specifically, we propose to represent the output as an oriented level
set of a continuous embedding function, and incorporate this in a deep
end-to-end learning framework by introducing a variational shape inference
formulation. We investigate the benefits of our approach on the task of 3D
surface prediction and demonstrate its ability to produce a more accurate
reconstruction compared to voxel-based representations. We further show that
our model is flexible and can be applied to a variety of shape inference
problems.
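The oriented level-set representation can be sketched with an analytic sphere signed-distance function standing in for the network's predicted embedding function:

```python
import numpy as np

# Sketch of an implicit surface as an oriented level set: the surface
# is the zero set of a continuous embedding function phi; its sign
# gives the inside/outside orientation.
def phi(x, y, z, r=0.5):
    return np.sqrt(x**2 + y**2 + z**2) - r   # signed distance to sphere

g = np.linspace(-1, 1, 64)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
vals = phi(X, Y, Z)

inside = vals < 0                 # orientation from the sign of phi
# surface cells: sign changes between adjacent samples along the x-axis
crossing = np.sign(vals[:-1]) != np.sign(vals[1:])
print(inside.sum() > 0, crossing.any())
```

Because the surface is carried by a continuous function rather than voxel labels, no coarse binary mask needs post-processing; a mesh can be extracted from the zero level set at any resolution.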
Dense Object Reconstruction from RGBD Images with Embedded Deep Shape Representations
Most problems involving simultaneous localization and mapping can nowadays be
solved using one of two fundamentally different approaches. The traditional
approach is given by a least-squares objective, which minimizes many local
photometric or geometric residuals over explicitly parametrized structure and
camera parameters. Unmodeled effects violating the Lambertian surface
assumption or geometric invariances of individual residuals are countered
through statistical averaging or the addition of robust kernels and smoothness
terms. Aiming at more accurate measurement models and the inclusion of
higher-order shape priors, the community more recently shifted its attention to
deep end-to-end models for solving geometric localization and mapping problems.
However, at test-time, these feed-forward models ignore the more traditional
geometric or photometric consistency terms, thus leading to a low ability to
recover fine details and potentially complete failure in corner case scenarios.
With an application to dense object modeling from RGBD images, our work aims at
taking the best of both worlds by embedding modern higher-order object shape
priors into classical iterative residual minimization objectives. We
demonstrate a general ability to improve mapping accuracy with respect to each
modality alone, and present a successful application to real data.
Comment: 12 pages
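The "best of both worlds" objective can be sketched as a toy quadratic problem, with a hypothetical linear map standing in for the learned shape decoder and a Gaussian term standing in for the higher-order shape prior:

```python
import numpy as np

# Sketch of classical residual minimization over a latent shape code z:
# a data term on decoded geometry plus a prior keeping z plausible.
rng = np.random.default_rng(2)
D = rng.normal(size=(30, 4))          # stand-in decoder: latent -> depths
obs = rng.normal(size=30)             # observed geometric measurements
lam = 0.1                             # prior weight

def energy(z):
    r = D @ z - obs                   # geometric/photometric residuals
    return r @ r + lam * z @ z        # data term + shape prior

# closed-form minimizer of this quadratic objective (normal equations)
z_star = np.linalg.solve(D.T @ D + lam * np.eye(4), D.T @ obs)
print(energy(z_star) <= energy(np.zeros(4)))  # regularized fit wins
```

With a nonlinear deep decoder the same objective would be minimized iteratively (e.g. Gauss-Newton), which is exactly where the classical and learned components meet.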
Guided Proceduralization: Optimizing Geometry Processing and Grammar Extraction for Architectural Models
We describe a guided proceduralization framework that optimizes geometry
processing on architectural input models to extract target grammars. We aim to
provide efficient artistic workflows by creating procedural representations
from existing 3D models, where the procedural expressiveness is controlled by
the user. Architectural reconstruction and modeling tasks have been handled
either as time-consuming manual processes or as procedural generation that is
difficult to control and influence artistically. We bridge the gap between creation and
generation by converting existing manually modeled architecture to procedurally
editable parametrized models, and carrying the guidance to procedural domain by
letting the user define the target procedural representation. Additionally, we
propose various applications of such procedural representations, including
guided completion of point cloud models, controllable 3D city modeling, and
other benefits of procedural modeling.
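A target grammar of the kind such a framework extracts can be sketched as a toy split grammar (all rule names and the expansion below are hypothetical, for illustration only):

```python
# Toy sketch of a procedurally editable grammar: non-terminals expand
# into parametrized parts; editing a rule regenerates the model.
rules = {
    "facade": ["floor"] * 3,          # parameter: number of floors
    "floor": ["window", "wall", "window"],
}

def derive(symbol):
    if symbol not in rules:           # terminal shape
        return [symbol]
    out = []
    for child in rules[symbol]:
        out += derive(child)
    return out

print(derive("facade"))  # nine terminal shapes from two rules
```

Guided proceduralization amounts to recovering such rules and parameters from an existing mesh so the user can edit them instead of the raw geometry.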
Deep Functional Dictionaries: Learning Consistent Semantic Structures on 3D Models from Functions
Various 3D semantic attributes such as segmentation masks, geometric
features, keypoints, and materials can be encoded as per-point probe functions
on 3D geometries. Given a collection of related 3D shapes, we consider how to
jointly analyze such probe functions over different shapes, and how to discover
common latent structures using a neural network, even in the absence of any
correspondence information. Our network is trained on point cloud
representations of shape geometry and associated semantic functions on that
point cloud. These functions express a shared semantic understanding of the
shapes but are not coordinated in any way. For example, in a segmentation task,
the functions can be indicator functions of arbitrary sets of shape parts, with
the particular combination involved not known to the network. Our network is
able to produce a small dictionary of basis functions for each shape, a
dictionary whose span includes the semantic functions provided for that shape.
Even though our shapes have independent discretizations and no functional
correspondences are provided, the network is able to generate latent bases, in
a consistent order, that reflect the shared semantic structure among the
shapes. We demonstrate the effectiveness of our technique in various
segmentation and keypoint selection applications.
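The dictionary property (the span of the learned per-shape basis contains the provided probe functions) can be sketched via least squares on synthetic data (sizes and matrices below are illustrative):

```python
import numpy as np

# Sketch of a functional dictionary: each shape gets a small basis A
# (n points x k atoms) whose span should contain any given semantic
# probe function f; least squares recovers the coefficients.
rng = np.random.default_rng(3)
n, k = 100, 5
A = rng.normal(size=(n, k))           # stand-in per-shape dictionary
c_true = rng.normal(size=k)
f = A @ c_true                        # a probe function in span(A)

c, *_ = np.linalg.lstsq(A, f, rcond=None)
print(np.allclose(A @ c, f))          # f reconstructed from the atoms
```

The network's training loss penalizes exactly this reconstruction error, which is what forces the learned atoms into a consistent order across shapes.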
3D Dynamic Point Cloud Inpainting via Temporal Consistency on Graphs
With the development of 3D laser scanning techniques and depth sensors, 3D
dynamic point clouds have attracted increasing attention as a representation of
3D objects in motion, enabling various applications such as 3D immersive
tele-presence, gaming and navigation. However, dynamic point clouds usually
exhibit holes of missing data, mainly due to fast motion, limitations of
acquisition techniques and complicated structure. Leveraging graph signal
processing tools, we represent irregular point clouds on graphs and propose a novel
inpainting method exploiting both intra-frame self-similarity and inter-frame
consistency in 3D dynamic point clouds. Specifically, for each missing region
in every frame of the point cloud sequence, we search for its self-similar
regions in the current frame and corresponding ones in adjacent frames as
references. Then we formulate dynamic point cloud inpainting as an optimization
problem based on the two types of references, which is regularized by a
graph-signal smoothness prior. Experimental results show the proposed approach
outperforms three competing methods significantly, both in objective and
subjective quality.
Comment: 7 pages, 5 figures, accepted by IEEE ICME 2020 at 2020.04.03. arXiv admin note: text overlap with arXiv:1810.0397
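The two-reference optimization with a graph-smoothness regularizer can be sketched on a toy chain graph (the intra-frame and inter-frame reference signals below are hypothetical):

```python
import numpy as np

# Sketch of inpainting with an intra-frame (self-similar) reference,
# an inter-frame (temporal) reference, and a graph-smoothness prior.
n = 5
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(1)) - W             # graph Laplacian

r_self = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # self-similar region
r_temp = np.array([1.2, 2.2, 3.1, 4.1, 5.2])   # adjacent-frame region
lam = 0.5

# min_x ||x - r_self||^2 + ||x - r_temp||^2 + lam * x^T L x
# setting the gradient to zero gives (2 I + lam L) x = r_self + r_temp
x = np.linalg.solve(2 * np.eye(n) + lam * L, r_self + r_temp)
print(x.round(2))
```

The solution blends both references while the Laplacian term smooths the filled region with respect to the graph, which is the structure of the paper's objective in miniature.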
Spherical Conformal Parameterization of Genus-0 Point Clouds for Meshing
The point cloud is the most fundamental representation of 3D geometric objects.
Analyzing and processing point cloud surfaces is important in computer graphics
and computer vision. However, most of the existing algorithms for surface
analysis require connectivity information. Therefore, it is desirable to
develop a mesh structure on point clouds. This task can be simplified with the
aid of a parameterization. In particular, conformal parameterizations are
advantageous in preserving the geometric information of the point cloud data.
In this paper, we extend a state-of-the-art spherical conformal
parameterization algorithm for genus-0 closed meshes to the case of point
clouds, using an improved approximation of the Laplace-Beltrami operator on
data points. Then, we propose an iterative scheme called the North-South
reiteration for achieving a spherical conformal parameterization. A balancing
scheme is introduced to enhance the distribution of the spherical
parameterization. High quality triangulations and quadrangulations can then be
built on the point clouds with the aid of the parameterizations. Also, the
meshes generated are guaranteed to be genus-0 closed meshes. Moreover, using
our proposed spherical conformal parameterization, multilevel representations
of point clouds can be easily constructed. Experimental results demonstrate the
effectiveness of our proposed framework.
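The stereographic projections underlying a North-South style iteration can be sketched for the unit sphere (only the north-pole projection is shown; the south-pole one is analogous):

```python
import numpy as np

# Sketch of stereographic projection: map the unit sphere minus the
# north pole to the plane, and invert; round-tripping recovers a point.
def north_proj(p):                    # (x, y, z) on S^2, z != 1
    x, y, z = p
    return np.array([x / (1 - z), y / (1 - z)])

def north_proj_inv(q):
    u, v = q
    s = u**2 + v**2
    return np.array([2 * u, 2 * v, s - 1]) / (s + 1)

p = np.array([3.0, 0.0, 4.0]) / 5.0   # a point on the unit sphere
print(np.allclose(north_proj_inv(north_proj(p)), p))
```

Alternating between the two poles lets a conformal flattening be refined near each pole in turn, where the opposite projection is poorly conditioned.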