Can 3D Adversarial Logos Cloak Humans?
With the rise of adversarial attacks, researchers have attempted to fool
trained object detectors in 2D scenes. Among these, an intriguing new form of
attack with potential real-world usage is to append adversarial patches (e.g.,
logos) to images. Nevertheless, much less is known about adversarial attacks
from 3D rendering views, which are essential for an attack to remain
persistently strong in the physical world.
This paper presents a new 3D adversarial logo attack: we construct a logo of
arbitrary shape from a 2D texture image and map it into a 3D adversarial logo
via a texture mapping called the logo transformation. The resulting 3D
adversarial logo is then treated as an adversarial texture, enabling easy
manipulation of its shape and position. This greatly extends the versatility
of adversarial training for computer-graphics-synthesized imagery.
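To make the texture-mapping step concrete, here is a minimal sketch of
attaching a trainable 2D logo texture to a 3D mesh through UV coordinates,
using PyTorch3D's TexturesUV. It is an illustration under assumed names
(logo_img, the random placeholder mesh), not the paper's exact logo
transformation.

```python
import torch
from pytorch3d.structures import Meshes
from pytorch3d.renderer import TexturesUV

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder mesh: in the paper this would be a human/clothing mesh with a
# logo region selected on its surface; random data keeps the sketch runnable.
verts = torch.rand(100, 3, device=device)               # (V, 3)
faces = torch.randint(0, 100, (150, 3), device=device)  # (F, 3)

# UV coordinates tie each face corner to a point in the 2D texture image, so
# gradients at rendered pixels can flow back into the 2D logo.
verts_uvs = torch.rand(100, 2, device=device)           # (V, 2) in [0, 1]
faces_uvs = faces.clone()                                # reuse face indices

# The 2D adversarial logo texture is the trainable parameter of the attack.
logo_img = torch.rand(1, 256, 256, 3, device=device, requires_grad=True)

textures = TexturesUV(maps=logo_img, faces_uvs=[faces_uvs], verts_uvs=[verts_uvs])
mesh = Meshes(verts=[verts], faces=[faces], textures=textures)
```

Because every rendered pixel of the logo region is a differentiable function
of logo_img, optimizing the 2D image directly reshapes the 3D surface
appearance.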
In contrast to traditional adversarial patches, this new form of attack is
mapped into the 3D object world and back-propagated to the 2D image domain
through differentiable rendering.
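Continuing the sketch above, the loop below illustrates that back-propagation
path: render the textured mesh, score the rendering with a detector, and push
gradients through the renderer into the 2D logo. The tiny detector here is a
stand-in defined only to keep the sketch runnable; the paper attacks
pre-trained state-of-the-art deep object detectors.

```python
from pytorch3d.renderer import (
    FoVPerspectiveCameras, MeshRasterizer, MeshRenderer, PointLights,
    RasterizationSettings, SoftPhongShader, look_at_view_transform,
)

# Camera, lights, and differentiable renderer (single fixed view for brevity).
R, T = look_at_view_transform(dist=2.5, elev=10.0, azim=0.0)
cameras = FoVPerspectiveCameras(device=device, R=R, T=T)
lights = PointLights(device=device, location=[[0.0, 1.0, 2.0]])
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras,
        raster_settings=RasterizationSettings(image_size=256),
    ),
    shader=SoftPhongShader(device=device, cameras=cameras, lights=lights),
)

# Stand-in for a differentiable person detector (illustrative only).
detector = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, stride=4), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 1),
).to(device)

optimizer = torch.optim.Adam([logo_img], lr=0.01)
for step in range(100):
    # Rebuild the textured mesh so each step renders the current logo_img.
    mesh = Meshes(
        verts=[verts], faces=[faces],
        textures=TexturesUV(maps=logo_img, faces_uvs=[faces_uvs],
                            verts_uvs=[verts_uvs]),
    )
    image = renderer(mesh)[..., :3].permute(0, 3, 1, 2)  # (1, 3, H, W) RGB
    loss = detector(image).squeeze()  # minimize stand-in detection confidence
    optimizer.zero_grad()
    loss.backward()  # gradients flow through the renderer into logo_img
    optimizer.step()
    with torch.no_grad():
        logo_img.clamp_(0.0, 1.0)  # keep the texture a valid image
```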
In addition, unlike existing adversarial patches, our 3D adversarial logo is
shown to fool state-of-the-art deep object detectors robustly under model
rotations, taking a step further toward realistic attacks in the physical
world.
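A rough way to probe such rotation robustness, continuing the same sketch, is
to re-render the optimized mesh from several azimuth angles and inspect the
stand-in detection score at each view; the angles and camera parameters below
are illustrative.

```python
# Re-render the optimized logo from several azimuth angles; a robust attack
# should keep the detection score low at every view.
for azim in range(0, 360, 45):
    R, T = look_at_view_transform(dist=2.5, elev=10.0, azim=float(azim))
    view = FoVPerspectiveCameras(device=device, R=R, T=T)
    image = renderer(mesh, cameras=view)[..., :3].permute(0, 3, 1, 2)
    with torch.no_grad():
        score = torch.sigmoid(detector(image)).item()
    print(f"azim={azim:3d}  stand-in detection score={score:.3f}")
```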
Our code is available at https://github.com/TAMU-VITA/3D_Adversarial_Logo