    Reversible Adversarial Example based on Reversible Image Transformation

    Currently, many companies employ Deep Neural Networks (DNNs) to classify and analyze user-uploaded photos on social platforms. To protect image privacy without preventing human viewers from correctly extracting the semantic information, users can rely on the attack capability of adversarial examples to fool these DNNs. In this paper, we take advantage of Reversible Image Transformation (RIT) to disguise an original image as its adversarial example, obtaining a controllable adversarial example, namely a reversible adversarial example, which is still adversarial to DNNs. It not only deceives DNNs into extracting the wrong information, but can also be recovered to the original image without distortion. Experimental results on ImageNet demonstrate that our proposed scheme is superior to that of Liu et al. Since RIT can reversibly transform an image into an arbitrarily chosen image of the same size, there is no need to worry, as Liu et al. must, about adversarial perturbations that are too large to be fully embedded. More importantly, our reversible adversarial examples achieve a higher attack success rate, reaching the desired privacy-protection goals while keeping image quality high.
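    The key property claimed above is distortion-free recovery: the adversarial image carries enough information to reconstruct the original exactly. The sketch below illustrates that property with a toy stand-in, not the authors' actual RIT scheme: a random clipped perturbation stands in for a real attack, and the recovery data (a residual) is stored separately rather than embedded inside the image as RIT does.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy illustration of reversibility (hypothetical stand-in, not the
    # paper's RIT): perturb an image, keep a residual, recover bit-exactly.
    x = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)          # original image
    perturbation = rng.integers(-8, 9, size=x.shape, dtype=np.int16)  # stand-in "attack"
    x_adv = np.clip(x.astype(np.int16) + perturbation, 0, 255).astype(np.uint8)

    # Reversibility: the residual undoes the perturbation exactly.
    # (RIT instead embeds this recovery data losslessly in x_adv itself.)
    residual = x.astype(np.int16) - x_adv.astype(np.int16)
    x_recovered = (x_adv.astype(np.int16) + residual).astype(np.uint8)

    assert np.array_equal(x_recovered, x)  # distortion-free recovery
    ```

    The int16 intermediate avoids uint8 wrap-around during subtraction, which is why the round trip is exact even where clipping altered the perturbation.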