In transfer-based adversarial attacks, adversarial examples are generated only on
surrogate models and are then used to perturb unseen victim models effectively.
Although considerable effort has been devoted to improving the transferability of
the adversarial examples generated by such attacks, our investigation shows that
the large update step length of current transfer-based attacks causes a large
deviation between the actual and the steepest update directions, so the generated
adversarial examples cannot converge well.
However, directly reducing the update
step length leads to severe update oscillation, so the generated adversarial
examples still fail to transfer well to the victim models.
To address these issues, we propose a novel transfer-based attack, the direction
tuning attack, which not only decreases the update deviation under the large step
length but also mitigates the update oscillation under the small sampling step
length, so that the generated adversarial examples converge well and transfer
effectively to victim models.
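A minimal PyTorch sketch of this idea is given below, assuming an $L_\infty$
budget and a differentiable loss on a surrogate model; the function name, the
momentum factor \texttt{mu}, and the loop sizes \texttt{T} and \texttt{K} are
illustrative assumptions rather than the authors' implementation. Each large
outer step of length $\alpha$ is tuned by averaging gradients sampled along
$K$ small inner steps of length $\alpha/K$, combining the low direction
deviation of small steps with the stability of a large step.
\begin{verbatim}
import torch

def direction_tuning_attack(model, loss_fn, x, y,
                            eps=16/255, T=10, K=10, mu=1.0):
    # Hedged sketch: each large outer step (length alpha) is tuned
    # by averaging gradients sampled along K small inner steps of
    # length alpha / K, reducing the direction deviation without
    # the oscillation of a purely small-step attack.
    alpha = eps / T                  # large update step length
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)          # outer momentum accumulator
    for _ in range(T):
        x_in = x_adv.clone().detach()
        g_in = torch.zeros_like(x)   # inner momentum accumulator
        g_sum = torch.zeros_like(x)
        for _ in range(K):
            x_in.requires_grad_(True)
            loss = loss_fn(model(x_in), y)
            grad = torch.autograd.grad(loss, x_in)[0]
            g_sum = g_sum + grad
            # small sampling step to look ahead on the loss surface
            g_in = mu * g_in + grad / grad.abs().mean()
            x_in = (x_in + (alpha / K) * g_in.sign()).detach()
        # tune the outer update direction with the averaged gradients
        g = mu * g + g_sum / g_sum.abs().mean()
        x_adv = x_adv + alpha * g.sign()
        x_adv = torch.clamp(x + torch.clamp(x_adv - x, -eps, eps),
                            0, 1).detach()
    return x_adv
\end{verbatim}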
In addition, a network pruning method is proposed to smooth the decision
boundary, which further reduces the update oscillation and enhances the
transferability of the generated adversarial examples.
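The abstract does not specify the pruning scheme; as one plausible stand-in,
the sketch below applies standard magnitude-based (L1) unstructured pruning
from \texttt{torch.nn.utils.prune} to the convolutional and linear layers of
the surrogate, with an illustrative pruning ratio of 10\% that is assumed
here, not taken from the paper.
\begin{verbatim}
import torch.nn as nn
import torch.nn.utils.prune as prune
import torchvision.models as models

def prune_surrogate(model, amount=0.1):
    # Hedged stand-in for the paper's pruning step: removing the
    # smallest-magnitude weights tends to yield a smoother, simpler
    # decision boundary for gradient computation.
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")  # bake mask into weights
    return model

# Example: prune an ImageNet surrogate before running the attack.
surrogate = models.resnet50(
    weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
surrogate = prune_surrogate(surrogate, amount=0.1)
\end{verbatim}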
The experimental results on ImageNet demonstrate that, compared with the latest
gradient-based attacks, the average attack success rate (ASR) of the adversarial
examples generated by our method improves from 87.9\% to 94.5\% on five victim
models without defenses, and from 69.1\% to 76.2\% against eight advanced
defense methods.