Deep learning models are susceptible to adversarial samples in both white-box and
black-box environments. Although previous studies have shown high attack
success rates, coupling DNN models with interpretation models can offer a
sense of security when a human expert is involved to identify whether a
given sample is benign or malicious. However, in white-box environments,
interpretable deep learning systems (IDLSes) have been shown to be vulnerable
to malicious manipulations. In black-box settings, as access to the components
of IDLSes is limited, it becomes more challenging for the adversary to fool the
system. In this work, we propose a Query-efficient Score-based black-box attack
against IDLSes, QuScore, which requires no knowledge of the target model or
its coupled interpretation model. QuScore combines transfer-based and
score-based methods, employing an effective microbial genetic algorithm. Our
method is designed to reduce the number of queries needed to carry out
successful attacks, making the process more efficient. By continuously
refining adversarial samples based on feedback scores from the
IDLS, our approach effectively navigates the search space to identify
perturbations that can fool the system. We evaluate the attack's effectiveness
on four CNN models (Inception, ResNet, VGG, DenseNet) and two interpretation
models (CAM, Grad), using both the ImageNet and CIFAR datasets. Our results show
that the proposed approach is query-efficient, achieving attack success rates
between 95% and 100% and transferability with an average success
rate of 69% on the ImageNet and CIFAR datasets. Our attack generates
adversarial examples with attribution maps that resemble those of benign samples. We
also demonstrate that our attack is resilient against various
preprocessing defense techniques and transfers easily to different DNN
models.
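
Below is a minimal, hypothetical sketch of the kind of score-guided microbial genetic algorithm described above. It is not the authors' QuScore implementation: the `query_score` feedback function, the L_inf bound `eps`, and the population, crossover, and mutation parameters are illustrative assumptions.

```python
import numpy as np

def microbial_ga_attack(x, query_score, eps=8/255, pop_size=10,
                        iters=500, crossover_p=0.5, mutation_p=0.05):
    """Sketch of a score-based black-box search with a microbial GA.

    x           : benign input, float array scaled to [0, 1]
    query_score : black-box feedback; higher means closer to fooling the IDLS
                  (e.g. 1 minus the model's probability of the true class)
    eps         : L_inf bound on the perturbation
    """
    # Population of perturbations inside the L_inf ball of radius eps.
    pop = np.random.uniform(-eps, eps, size=(pop_size,) + x.shape)

    def fitness(delta):
        adv = np.clip(x + delta, 0.0, 1.0)
        return query_score(adv)          # one query to the black-box system

    for _ in range(iters):
        # Microbial GA step: compare two random individuals.
        i, j = np.random.choice(pop_size, size=2, replace=False)
        fi, fj = fitness(pop[i]), fitness(pop[j])
        winner, loser = (i, j) if fi >= fj else (j, i)

        # The loser copies genes from the winner (crossover) ...
        mask = np.random.rand(*x.shape) < crossover_p
        pop[loser][mask] = pop[winner][mask]

        # ... and is then mutated, staying inside the eps-ball.
        mut = np.random.rand(*x.shape) < mutation_p
        pop[loser][mut] += np.random.uniform(-eps / 4, eps / 4,
                                             size=x.shape)[mut]
        pop[loser] = np.clip(pop[loser], -eps, eps)

    # Return the best adversarial candidate found.
    best = max(range(pop_size), key=lambda k: fitness(pop[k]))
    return np.clip(x + pop[best], 0.0, 1.0)
```

In this sketch the only interaction with the target is through `query_score`, which reflects the score-based, query-limited setting; the steady-state winner/loser update is what distinguishes the microbial variant from a standard generational GA.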