SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine Reconstruction with Self-Projection Optimization
The task of point cloud upsampling aims to acquire dense and uniform point
sets from sparse and irregular point sets. Although significant progress has
been made with deep learning models, they require ground-truth dense point
sets as supervision, so they can only be trained on synthetic paired data and
are not suitable for real-scanned sparse data. Moreover, it is expensive and
tedious to obtain large-scale paired sparse-dense point sets from real scanned
data for training. To address this problem,
we propose a self-supervised point cloud upsampling network, named SPU-Net, to
capture the inherent upsampling patterns of points lying on the underlying
object surface. Specifically, we propose a coarse-to-fine reconstruction
framework with two main components: point feature extraction and point
feature expansion. For point feature extraction, we integrate a self-attention
module with a graph convolution network (GCN) to simultaneously capture
contextual information within and among local regions. For
the point feature expansion, we introduce a hierarchically learnable folding
strategy to generate the upsampled point sets with learnable 2D grids.
Moreover, to further optimize the noisy points in the generated point sets, we
propose a novel self-projection optimization, combined with uniformity and
reconstruction terms as a joint loss, to facilitate self-supervised point
cloud upsampling. We conduct extensive experiments on both synthetic and
real-scanned datasets, and the results demonstrate that we achieve performance
comparable to state-of-the-art supervised methods.
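To make the feature-extraction idea concrete, here is a minimal PyTorch sketch of a graph convolution over k-nearest-neighbor regions fused with self-attention across regions. The layer sizes, the neighborhood size k, and the additive fusion are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

def knn_indices(points, k):
    # points: (B, N, 3); indices of the k nearest neighbors of each point
    dists = torch.cdist(points, points)          # (B, N, N) pairwise distances
    return dists.topk(k, largest=False).indices  # (B, N, k)

class GCNSelfAttention(nn.Module):
    def __init__(self, in_dim=3, out_dim=64, k=16):
        super().__init__()
        self.k = k
        # edge MLP of the graph convolution: concat(center, neighbor - center)
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * in_dim, out_dim), nn.ReLU(),
            nn.Linear(out_dim, out_dim))
        # self-attention across points to mix context among local regions
        self.attn = nn.MultiheadAttention(out_dim, num_heads=4, batch_first=True)

    def forward(self, points):
        B, N, C = points.shape
        idx = knn_indices(points, self.k)                     # (B, N, k)
        nbrs = torch.gather(
            points.unsqueeze(1).expand(B, N, N, C), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, C))        # (B, N, k, C)
        center = points.unsqueeze(2).expand_as(nbrs)
        edges = torch.cat([center, nbrs - center], dim=-1)    # (B, N, k, 2C)
        local = self.edge_mlp(edges).max(dim=2).values        # context inside regions
        ctx, _ = self.attn(local, local, local)               # context among regions
        return local + ctx                                    # fuse both (assumption)

feats = GCNSelfAttention()(torch.rand(2, 256, 3))             # (2, 256, 64)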
Point2Sequence: Learning the Shape Representation of 3D Point Clouds with an Attention-based Sequence to Sequence Network
Exploring contextual information in the local region is important for shape
understanding and analysis. Existing studies often employ hand-crafted or
explicit ways to encode contextual information of local regions. However, it is
hard to capture fine-grained contextual information in hand-crafted or explicit
manners, such as the correlation between different areas in a local region,
which limits the discriminative ability of learned features. To resolve this
issue, we propose a novel deep learning model for 3D point clouds, named
Point2Sequence, to learn 3D shape features by capturing fine-grained contextual
information in a novel implicit way. Point2Sequence employs a novel sequence
learning model for point clouds to capture the correlations by aggregating
multi-scale areas of each local region with attention. Specifically,
Point2Sequence first learns the feature of each area scale in a local region.
Then, it captures the correlation between area scales in the process of
aggregating all area scales using a recurrent neural network (RNN) based
encoder-decoder structure, where an attention mechanism is proposed to
highlight the importance of different area scales. Experimental results show
that Point2Sequence achieves state-of-the-art performance in shape
classification and segmentation tasks.
Comment: To be published in AAAI 2019.
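As an illustration of the aggregation step, here is a minimal PyTorch sketch in which the features of several area scales form a sequence, an LSTM encoder consumes it, and one decoder step attends over the encoder states to weight the scales. The dimensions and the multiplicative scoring function are assumptions, not the paper's exact design.

import torch
import torch.nn as nn

class ScaleSeq2Seq(nn.Module):
    def __init__(self, feat_dim=128, hidden=128):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.decoder_cell = nn.LSTMCell(hidden, hidden)
        self.score = nn.Linear(hidden, hidden)  # attention scoring (assumption)

    def forward(self, scale_feats):
        # scale_feats: (B, T, feat_dim), one feature per area scale
        enc, (h, c) = self.encoder(scale_feats)        # enc: (B, T, hidden)
        h, c = h[0], c[0]
        h, c = self.decoder_cell(h, (h, c))            # one decoding step
        # attention weights over the T area scales
        w = torch.softmax((self.score(enc) * h.unsqueeze(1)).sum(-1), dim=1)
        region_feat = (w.unsqueeze(-1) * enc).sum(1)   # (B, hidden)
        return region_feat, w                          # w highlights scale importance

feat, weights = ScaleSeq2Seq()(torch.rand(4, 4, 128))  # 4 regions, 4 area scales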
Learning Signed Distance Functions from Noisy 3D Point Clouds via Noise to Noise Mapping
Learning signed distance functions (SDFs) from 3D point clouds is an
important task in 3D computer vision. However, without ground truth signed
distances, point normals, or clean point clouds, current methods still struggle
to learn SDFs from noisy point clouds. To overcome this challenge, we
propose to learn SDFs via a noise to noise mapping, which does not require any
clean point cloud or ground truth supervision for training. Our novelty lies in
the noise to noise mapping which can infer a highly accurate SDF of a single
object or scene from its multiple or even single noisy point cloud
observations. Our learning scheme is supported by modern LiDAR systems, which
capture multiple noisy observations per second. We achieve this through a
novel loss which enables statistical reasoning on point clouds and maintains
geometric consistency although point clouds are irregular, unordered and have
no point correspondence among noisy observations. Our evaluation on widely
used benchmarks demonstrates our superiority over state-of-the-art
methods in surface reconstruction, point cloud denoising and upsampling. Our
code, data, and pre-trained models are available at
https://github.com/mabaorui/Noise2NoiseMapping/
Comment: To appear at ICML 2023.
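For intuition, here is a minimal PyTorch sketch of the noise-to-noise idea: a neural SDF f pulls each point x of one noisy observation onto the zero level set via x' = x - f(x) * grad f(x) / |grad f(x)|, and the pulled cloud is compared against a second noisy observation of the same surface. The network size is an assumption, and Chamfer distance stands in here for the matching-based distance that the paper's statistical reasoning actually requires.

import torch
import torch.nn as nn

sdf = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 1))

def pull_to_surface(x):
    # project points onto the zero level set along the SDF gradient
    x = x.requires_grad_(True)
    d = sdf(x)                                                  # (N, 1)
    (g,) = torch.autograd.grad(d.sum(), x, create_graph=True)   # (N, 3)
    return x - d * g / (g.norm(dim=-1, keepdim=True) + 1e-8)

def chamfer(a, b):
    # simplified stand-in for a matching-based distance such as EMD
    d = torch.cdist(a, b)
    return d.min(1).values.mean() + d.min(0).values.mean()

noisy_a, noisy_b = torch.rand(1024, 3), torch.rand(1024, 3)  # two noisy scans
opt = torch.optim.Adam(sdf.parameters(), lr=1e-3)
for _ in range(10):  # a few illustrative steps
    loss = chamfer(pull_to_surface(noisy_a.clone()), noisy_b)
    opt.zero_grad()
    loss.backward()
    opt.step()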
Latent Partition Implicit with Surface Codes for 3D Representation
Deep implicit functions have shown remarkable shape modeling ability in
various 3D computer vision tasks. One drawback is that it is hard for them to
represent a 3D shape as multiple parts. Current solutions learn various
primitives and blend them directly in spatial space, but still struggle to
approximate the 3D shape accurately. To resolve this problem, we
introduce a novel implicit representation to represent a single 3D shape as a
set of parts in the latent space, towards both highly accurate and plausibly
interpretable shape modeling. Our insight is that both part learning and part
blending can be conducted much more easily in the latent space than in the
spatial space. We name our method Latent Partition Implicit (LPI) for its
ability to cast global shape modeling into multiple local part modelings,
which partition the global shape. LPI represents a shape as
Signed Distance Functions (SDFs) using surface codes. Each surface code is a
latent code representing a part whose center is on the surface, which enables
us to flexibly employ intrinsic attributes of shapes or additional surface
properties. Eventually, LPI can reconstruct both the shape and the parts on the
shape, both of which are plausible meshes. LPI is a multi-level representation,
which can partition a shape into different numbers of parts after training. LPI
can be learned without ground truth signed distances, point normals or any
supervision for part partition. LPI outperforms the latest methods on widely
used benchmarks in terms of reconstruction accuracy and modeling
interpretability. Our code, data and models are available at
https://github.com/chenchao15/LPI.
Comment: 20 pages, 14 figures. Accepted by ECCV 2022.
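To illustrate the latent-space blending, here is a minimal PyTorch sketch in which each part is a latent surface code attached to a center, codes are blended with distance-based weights in latent space, and a shared decoder maps a query point plus the blended code to a signed distance. The softmax blending kernel and all dimensions are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn

class LatentPartSDF(nn.Module):
    def __init__(self, n_parts=8, code_dim=64):
        super().__init__()
        self.centers = nn.Parameter(torch.rand(n_parts, 3))        # part centers
        self.codes = nn.Parameter(torch.randn(n_parts, code_dim))  # one code per part
        self.decoder = nn.Sequential(
            nn.Linear(3 + code_dim, 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, q):
        # q: (N, 3) query points
        w = torch.softmax(-torch.cdist(q, self.centers), dim=-1)  # (N, n_parts)
        blended = w @ self.codes                      # blend codes in latent space
        return self.decoder(torch.cat([q, blended], dim=-1))  # (N, 1) signed distance

sdf_vals = LatentPartSDF()(torch.rand(100, 3))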
Memory Performance Characterization of SPEC CPU2006 Benchmarks Using TSIM
This paper uses TSIM, a cycle-accurate architecture simulator, to characterize the memory performance of the SPEC CPU2006 benchmarks on a CMP platform. The experiments cover 54 workloads with different input sets and collect statistics on instruction mix and cache behavior. By detecting cyclical changes in MPKI, this paper clearly shows the memory performance phases of several SPEC CPU2006 programs. These performance data and analysis results can not only help program developers and architects better understand how the system architecture shapes memory performance, but also guide software and system optimization.
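As a small illustration of the metric the paper tracks, the following Python snippet computes MPKI (cache misses per thousand instructions) over fixed instruction windows; alternating window values are the kind of cyclical change that signals a memory performance phase. The counter values are made up, not measurements from TSIM or SPEC CPU2006.

def mpki(cache_misses, instructions):
    # misses per kilo-instructions
    return 1000.0 * cache_misses / instructions

# per-window counters: (misses, instructions) for consecutive execution windows
windows = [(1200, 1_000_000), (4800, 1_000_000), (1150, 1_000_000), (4900, 1_000_000)]
trace = [mpki(m, i) for m, i in windows]
print(trace)  # [1.2, 4.8, 1.15, 4.9] -> alternation suggests a cyclical phase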