Flow-based GAN for 3D Point Cloud Generation from a Single Image
Generating a 3D point cloud from a single 2D image is of great importance for
3D scene understanding applications. To reconstruct the whole 3D shape of the
object shown in the image, existing deep-learning-based approaches use either
explicit or implicit generative modeling of point clouds, both of which suffer
from limited quality. In this work, we aim to alleviate this issue by
introducing a hybrid explicit-implicit generative modeling scheme, which
retains the ability of flow-based explicit generative models to sample point
clouds at arbitrary resolutions while improving the detailed 3D structure of
the point clouds by leveraging implicit generative adversarial networks
(GANs). We evaluate our method on the large-scale synthetic ShapeNet dataset;
the experimental results demonstrate its superior performance. In addition, we
demonstrate the generalization ability of our method on cross-category
synthetic images as well as on real images from the PASCAL3D+ dataset.
Comment: 13 pages, 5 figures, accepted to BMVC202
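As a rough illustration of the explicit half of such a hybrid scheme, the sketch below implements a single invertible affine-coupling step in NumPy and samples a point cloud of arbitrary size from Gaussian latents. All weights are random toy stand-ins rather than the paper's architecture, and the GAN refinement stage is omitted entirely:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy affine coupling layer: split each 3D point into (x, yz) and
# transform yz conditioned on x. The map is invertible by construction,
# so it can serve as one step of a flow-based point sampler.
W1 = rng.normal(size=(1, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 4)); b2 = np.zeros(4)

def coupling_params(x):
    h = np.tanh(x @ W1 + b1)
    out = h @ W2 + b2
    log_s, t = out[:, :2], out[:, 2:]
    return np.tanh(log_s), t  # bounded log-scale for numerical stability

def flow_inverse(z):
    # Generation direction: map latents z ~ N(0, I) to point coordinates.
    x, yz = z[:, :1], z[:, 1:]
    log_s, t = coupling_params(x)
    return np.concatenate([x, yz * np.exp(log_s) + t], axis=1)

def flow_forward(p):
    # Density direction: map points back to latents (exact inverse).
    x, yz = p[:, :1], p[:, 1:]
    log_s, t = coupling_params(x)
    return np.concatenate([x, (yz - t) * np.exp(-log_s)], axis=1)

# Arbitrary resolution: draw as many latents as points desired.
z = rng.standard_normal((2048, 3))
points = flow_inverse(z)          # (2048, 3) generated cloud
recovered = flow_forward(points)  # invertibility check
print(np.allclose(recovered, z))  # True
```

The invertibility check is what distinguishes flow-based explicit models from GAN decoders: every sampled point maps back to its latent exactly, which is what makes exact likelihoods and resolution-free sampling possible.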
3DHacker: Spectrum-based Decision Boundary Generation for Hard-label 3D Point Cloud Attack
With the maturity of depth sensors, the vulnerability of 3D point cloud
models has received increasing attention in various applications such as
autonomous driving and robot navigation. Previous 3D adversarial attackers
either follow the white-box setting to iteratively update the coordinate
perturbations based on gradients, or utilize the output model logits to
estimate noisy gradients in the black-box setting. However, these attack
methods are difficult to deploy in real-world scenarios, since realistic 3D
applications will not share any model details with users. Therefore, we
explore a more challenging yet practical 3D attack setting, i.e., attacking
point clouds with black-box hard labels, in which the attacker has access only
to the predicted label of the input. To tackle this setting, we propose a
novel 3D attack method, termed 3D Hard-label attacker (3DHacker), built on a
decision-boundary algorithm that generates adversarial samples solely from
knowledge of the class labels. Specifically, to construct the class-aware
model decision boundary, 3DHacker first randomly fuses two point clouds of
different classes in the spectral domain to craft an intermediate sample with
high imperceptibility, then projects it onto the decision boundary via binary
search. To restrict the final perturbation size, 3DHacker further introduces
an iterative optimization strategy that moves the intermediate sample along
the decision boundary, generating adversarial point clouds with minimal
perturbations. Extensive evaluations show that, even in this challenging
hard-label setting, 3DHacker still outperforms existing 3D attacks in both
attack performance and adversarial-example quality.
Comment: Accepted by ICCV 202
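The boundary-projection step described above can be sketched in miniature. The code below fuses two clouds' spectra with a mixing weight and binary-searches that weight until a hard-label classifier flips; the classifier and the per-axis FFT "spectral transform" are toy stand-ins (the paper uses a graph spectral transform and a real 3D network), so this only illustrates the search logic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in classifier: returns a hard label only (black-box setting).
def hard_label(cloud):
    return int(cloud[:, 0].mean() > 0.0)

# Spectral fusion: blend the two clouds' spectra. A per-axis FFT stands
# in for the graph spectral transform used in the paper.
def spectral_fuse(src, tgt, alpha):
    fs, ft = np.fft.fft(src, axis=0), np.fft.fft(tgt, axis=0)
    return np.real(np.fft.ifft(alpha * fs + (1 - alpha) * ft, axis=0))

# Binary search on the fusion weight until the label flips: the
# boundary-projection step of a hard-label attack.
def project_to_boundary(src, tgt, steps=40):
    lo, hi = 0.0, 1.0  # alpha=0 -> target label, alpha=1 -> source label
    for _ in range(steps):
        mid = (lo + hi) / 2
        if hard_label(spectral_fuse(src, tgt, mid)) == hard_label(src):
            hi = mid   # still classified as source; move toward target
        else:
            lo = mid
    return spectral_fuse(src, tgt, lo)  # just on the adversarial side

src = rng.normal(loc=+2.0, size=(64, 3))  # source class (label 1)
tgt = rng.normal(loc=-2.0, size=(64, 3))  # target class (label 0)
adv = project_to_boundary(src, tgt)
print(hard_label(src), hard_label(adv))   # labels differ after projection
```

The subsequent iterative optimization along the boundary, which shrinks the perturbation while keeping the flipped label, is omitted here; only the hard label is ever queried, matching the black-box constraint.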
StarNet: Style-Aware 3D Point Cloud Generation
This paper investigates the open research task of reconstructing and
generating 3D point clouds. Most existing 3D generative models feed the
Gaussian prior directly to the decoder that generates the point cloud, and
thus fail to learn disentangled latent codes, leading to noisy interpolation
results. Most GAN-based models fail to discriminate local geometries, so the
generated point clouds are not evenly distributed over the object surface,
degrading generation quality. Moreover, prevailing methods adopt
computation-intensive frameworks, such as flow-based models and Markov
chains, which consume considerable time and resources during training. To
resolve these limitations, this paper proposes StarNet, a unified style-aware
network architecture that combines a point-wise distance loss with an
adversarial loss. StarNet reconstructs and generates high-fidelity, evenly
distributed 3D point clouds using a mapping network that disentangles the
Gaussian prior from the input's high-level attributes in the mapped latent
space, enabling realistic interpolated objects. Experimental results
demonstrate that our framework achieves performance comparable to the state
of the art on various metrics in point cloud reconstruction and generation,
while being more lightweight, requiring far fewer parameters and less
training time.
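The mapping-network idea can be sketched as follows: an MLP sends the Gaussian prior z into an intermediate latent space w, and interpolation is done in w rather than in z. Everything below is a random-weight toy stand-in, not StarNet's actual architecture or decoder:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy mapping network: an MLP sends the Gaussian prior z into an
# intermediate latent space w, where high-level attributes are easier
# to disentangle and interpolate.
W1 = rng.normal(size=(16, 32)) / np.sqrt(16)
W2 = rng.normal(size=(32, 16)) / np.sqrt(32)

def mapping(z):
    return np.maximum(z @ W1, 0.0) @ W2  # ReLU MLP, z -> w

# Toy style-modulated point decoder: per-axis scale and shift derived
# from w are applied to per-point Gaussian samples.
Ws = rng.normal(size=(16, 3)) / 4.0
Wt = rng.normal(size=(16, 3)) / 4.0

def decode(w, n_points=1024):
    scale, shift = np.exp(w @ Ws), w @ Wt
    return rng.standard_normal((n_points, 3)) * scale + shift

# Interpolate in the mapped w-space rather than directly in z-space.
z0, z1 = rng.standard_normal(16), rng.standard_normal(16)
w_mid = 0.5 * mapping(z0) + 0.5 * mapping(z1)
cloud = decode(w_mid)
print(cloud.shape)  # (1024, 3)
```

Interpolating in the mapped space is the mechanism by which such models avoid the noisy interpolation results the abstract attributes to decoding the Gaussian prior directly.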
Model-Free Prediction of Adversarial Drop Points in 3D Point Clouds
Adversarial attacks pose serious challenges for deep neural network
(DNN)-based analysis of various input signals. In the case of 3D point clouds,
methods have been developed to identify points that play a key role in the
network's decision, and these points are crucial for generating existing
adversarial attacks. For example, a saliency-map approach is a popular method
for identifying adversarial drop points, whose removal would significantly
impact the network's decision. Generally, methods for identifying adversarial
points rely on the deep model itself to determine which points are critical
to the model's decision. This paper provides a novel viewpoint on this
problem, in which adversarial points can be predicted independently of the
model. To this end, we define 14 point cloud features and use multiple linear
regression to examine whether these features can be used for model-free
adversarial point prediction, and which combination of features is best
suited for this purpose. Experiments show that a suitable combination of
features is able to predict adversarial points of three different networks
-- PointNet, PointNet++, and DGCNN -- significantly better than a random
guess. The results also provide further insight into DNNs for point cloud
analysis, by showing which features play key roles in their decision-making
process.
Comment: 10 pages, 6 figures
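The model-free prediction pipeline amounts to ordinary multiple linear regression on per-point geometric features. The sketch below computes two illustrative features (the paper defines 14) on a synthetic cloud and fits them to stand-in saliency scores with least squares; feature choices and targets here are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)
cloud = rng.standard_normal((256, 3))  # synthetic point cloud

# Two illustrative per-point geometric features (the paper uses 14):
centroid = cloud.mean(axis=0)
d_centroid = np.linalg.norm(cloud - centroid, axis=1)      # distance to centroid
pairwise = np.linalg.norm(cloud[:, None] - cloud[None], axis=2)
knn_mean = np.sort(pairwise, axis=1)[:, 1:9].mean(axis=1)  # mean 8-NN distance

X = np.column_stack([np.ones(256), d_centroid, knn_mean])  # design matrix

# Stand-in saliency scores; in the paper these come from drop-point
# analysis of a trained network, but the regression itself never
# queries a model -- hence "model-free" prediction.
y = 2.0 * d_centroid - 0.5 * knn_mean + 0.1 * rng.standard_normal(256)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # multiple linear regression
pred = X @ coef
top_k = np.argsort(pred)[-16:]  # predicted adversarial drop points
```

Once the coefficients are fitted, scoring a new cloud needs only its features, which is what makes the prediction independent of any particular network.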