Towards High-Fidelity 3D Face Reconstruction from In-the-Wild Images Using Graph Convolutional Networks
3D Morphable Model (3DMM) based methods have achieved great success in
recovering 3D face shapes from single-view images. However, the facial textures
recovered by such methods lack the fidelity exhibited in the input images.
Recent work demonstrates high-quality facial texture recovery with generative
networks trained on a large-scale database of high-resolution UV maps of face
textures, but such a database is hard to prepare and is not publicly
available. In this paper,
we introduce a method to reconstruct 3D facial shapes with high-fidelity
textures from single-view images in-the-wild, without the need to capture a
large-scale face texture database. The main idea is to refine the initial
texture generated by a 3DMM based method with facial details from the input
image. To this end, we propose to use graph convolutional networks to
reconstruct the detailed colors for the mesh vertices instead of reconstructing
the UV map. Experiments show that our method can generate high-quality results
and outperforms state-of-the-art methods in both qualitative and quantitative
comparisons.
Comment: Accepted to CVPR 2020. The source code is available at
https://github.com/FuxiCV/3D-Face-GCN
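The core idea above, refining per-vertex colors with graph convolutions over the mesh connectivity instead of regressing a UV map, can be sketched as follows. This is an illustrative toy, not the paper's implementation: the tiny mesh, the random features, and all function names are assumptions.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize A + I (standard GCN-style propagation matrix)."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(X, A_norm, W):
    """One graph-convolution layer: aggregate neighbor features over the mesh,
    apply a learned linear map, then ReLU."""
    return np.maximum(A_norm @ X @ W, 0.0)

# Toy mesh connectivity: 4 vertices, edges given by a symmetric adjacency.
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 1.],
              [1., 1., 0., 1.],
              [0., 1., 1., 0.]])
rng = np.random.default_rng(0)
X = rng.random((4, 3))        # per-vertex RGB features (random stand-in)
W = rng.random((3, 3)) * 0.1  # "learned" weights (random stand-in)

refined = gcn_layer(X, normalize_adjacency(A), W)
print(refined.shape)  # one refined 3-channel color feature per vertex
```

Operating directly on vertex colors lets the network exploit the mesh topology, which is the stated alternative to reconstructing a full UV texture map.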
BASAR: Black-box Attack on Skeletal Action Recognition
Skeletal motion plays a vital role in human activity recognition, either as an
independent data source or as a complement. The robustness of skeleton-based
activity recognizers has recently been questioned: they are vulnerable to
adversarial attacks when the attacker has full knowledge of the recognizer.
However, this white-box requirement is overly restrictive in most scenarios,
so such attacks are not truly threatening. In this paper, we show that such
threats do exist under black-box settings too. To this end, we propose BASAR,
the first black-box adversarial attack method for skeletal action
recognition. Through
BASAR, we show that adversarial attacks are not only a genuine threat but can
also be extremely deceitful, because on-manifold adversarial samples are rather
common in skeletal motions, in contrast to the common belief that adversarial
samples exist only off-manifold. Through exhaustive evaluation and comparison,
we show that BASAR can deliver successful attacks across models, data, and
attack modes. Through rigorous perceptual studies, we show that it achieves
effective yet imperceptible attacks. By analyzing the attacks on different
activity recognizers, BASAR helps identify the potential causes of their
vulnerability and provides insights into which classifiers are likely to be
more robust against attack. Code is available at
https://github.com/realcrane/BASAR-Black-box-Attack-on-Skeletal-Action-Recognition.
Comment: Accepted in CVPR 202
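The black-box setting described above, where the attacker can only query labels, is commonly realized by decision-based boundary search. The following is a schematic of that generic idea, not BASAR's actual algorithm: the stand-in recognizer, the starting points, and all names are assumptions.

```python
import numpy as np

def classify(x):
    """Stand-in black-box recognizer: the attacker sees only this label."""
    return int(x.mean() > 0)

def boundary_search(x_clean, x_adv, n_steps=30):
    """Binary-search the segment [x_clean, x_adv] for the point closest to the
    clean sample that still flips the label, using only input/output queries."""
    y_clean = classify(x_clean)
    assert classify(x_adv) != y_clean  # the far end must already be adversarial
    lo, hi = 0.0, 1.0
    for _ in range(n_steps):
        mid = (lo + hi) / 2
        x_mid = (1 - mid) * x_clean + mid * x_adv
        if classify(x_mid) != y_clean:
            hi = mid  # still adversarial: move toward the clean sample
        else:
            lo = mid
    return (1 - hi) * x_clean + hi * x_adv

x_clean = -np.ones(10)  # toy "skeletal motion" feature vector
x_adv = np.ones(10)     # any point the recognizer labels differently
x_b = boundary_search(x_clean, x_adv)
print(classify(x_b), np.linalg.norm(x_b - x_clean))
```

The returned sample sits near the decision boundary with a smaller perturbation than the starting adversarial point; BASAR additionally constrains such search toward the natural motion manifold, which this sketch does not model.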
Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack
Human Activity Recognition (HAR) has been employed in a wide range of
applications, e.g. self-driving cars, where safety and lives are at stake.
Recently, the robustness of existing skeleton-based HAR methods has been
questioned due to their vulnerability to adversarial attacks, which raises
concerns given the scale of the potential implications. However, the proposed
attacks require full knowledge of the attacked classifier, which is overly
restrictive. In this paper, we show that such threats indeed exist, even when
the attacker only has access to the input and output of the model. To this end, we
propose the very first black-box adversarial attack approach in skeleton-based
HAR called BASAR. BASAR explores the interplay between the classification
boundary and the natural motion manifold. To the best of our knowledge, this is
the first time the data manifold has been introduced into adversarial attacks
on time series. Via BASAR, we find that on-manifold adversarial samples are
extremely deceitful and rather common in skeletal motions, in contrast to the
common belief that adversarial samples exist only off-manifold. Through
exhaustive evaluation, we show that BASAR can deliver successful attacks across
classifiers, datasets, and attack modes. Through its attacks, BASAR helps
identify the potential causes of the
model vulnerability and provides insights into possible improvements. Finally,
to mitigate the newly identified threat, we propose mixed manifold-based
adversarial training (MMAT), a new adversarial training approach that leverages
the sophisticated distributions of on- and off-manifold adversarial samples.
MMAT can successfully help defend against adversarial attacks without
compromising classification accuracy.
Comment: arXiv admin note: substantial text overlap with arXiv:2103.0526
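The mixed on/off-manifold training idea can be illustrated with a toy stand-in. This sketch is not the paper's MMAT formulation: the logistic model, the FGSM-style off-manifold perturbation, and the unit-sphere "manifold" projection are all assumptions chosen only to show a batch that mixes clean, off-manifold, and projected on-manifold samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_to_manifold(x):
    """Toy 'natural motion manifold': the unit sphere (a stand-in only)."""
    return x / np.linalg.norm(x)

def fgsm(x, y, w, eps=0.1):
    """Off-manifold adversary: gradient-sign step on a logistic loss."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    grad = (p - y) * w  # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad)

# Toy data: points on the unit sphere, labeled by the sign of coordinate 0.
X = np.array([project_to_manifold(v) for v in rng.normal(size=(200, 5))])
y = (X[:, 0] > 0).astype(float)

w, lr = np.zeros(5), 0.5
for _ in range(100):
    for x_i, y_i in zip(X, y):
        x_off = fgsm(x_i, y_i, w)          # off-manifold adversarial sample
        x_on = project_to_manifold(x_off)  # projected back on-manifold
        for x_train in (x_i, x_off, x_on):  # mixed training batch
            p = 1.0 / (1.0 + np.exp(-w @ x_train))
            w -= lr * (p - y_i) * x_train   # SGD on logistic loss

acc = np.mean(((X @ w) > 0) == (y > 0.5))
print(f"clean accuracy after mixed adversarial training: {acc:.2f}")
```

The design point the sketch mirrors is that on- and off-manifold adversarial samples have different distributions, so training on both, rather than only raw perturbations, is what the abstract credits for robustness without an accuracy penalty.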