Excitement Surfeited Turns to Errors: Deep Learning Testing Framework Based on Excitable Neurons
Despite impressive capabilities and outstanding performance, deep neural
networks (DNNs) have raised increasing public concern about their security,
owing to their frequently occurring erroneous behaviors. It is therefore
necessary to test DNNs systematically before they are deployed to real-world
applications. Existing testing methods have provided
fine-grained metrics based on neuron coverage and proposed various approaches
to improve such metrics. However, it has been gradually realized that a higher
neuron coverage does \textit{not} necessarily represent better capabilities in
identifying defects that lead to errors. Moreover, coverage-guided methods
cannot detect errors caused by a faulty training procedure, so retraining
DNNs on the resulting testing examples yields unsatisfactory robustness
improvements. To address this challenge, we introduce the concept of
excitable neurons based on Shapley value and design a novel white-box testing
framework for DNNs, named DeepSensor. It is motivated by our observation
that neurons bearing greater responsibility for changes in the model loss
under small perturbations are more likely to be associated with incorrect
corner cases caused by latent defects. By maximizing the number of excitable
neurons with respect to
various wrong behaviors of models, DeepSensor can generate testing examples
that effectively trigger more errors due to adversarial inputs, polluted data
and incomplete training. Extensive experiments on both image classification
models and speaker recognition models demonstrate the superiority of
DeepSensor.

Comment: 32 pages