A survey on practical adversarial examples for malware classifiers
Machine learning-based solutions have proven highly effective for problems
that involve immense amounts of data, such as malware detection and
classification. However, deep neural networks have been found to be vulnerable
to adversarial examples, or inputs that have been purposefully perturbed to
result in an incorrect label. Researchers have shown that this vulnerability
can be exploited to create evasive malware samples. However, many proposed
attacks do not generate an executable and instead generate a feature vector. To
fully understand the impact of adversarial examples on malware detection, we
review practical attacks against malware classifiers that generate executable
adversarial malware examples. We also discuss current challenges in this area
of research, as well as suggestions for improvement and future research
directions.

Comment: Preprint. To appear in the Reversing and Offensive-oriented Trends Symposium (ROOTS) 202
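The "purposefully perturbed" inputs described in the abstract can be illustrated with a minimal sketch of the fast gradient sign method (FGSM), a canonical evasion attack, applied to a toy logistic-regression classifier. This is an assumption-laden illustration, not a method from the surveyed papers: the weights, input, and epsilon below are invented, and real attacks on malware classifiers must additionally preserve executability, which this feature-space example does not address.

```python
# Illustrative FGSM sketch against a toy logistic-regression classifier.
# All values are invented; real malware-evasion attacks operate on feature
# vectors or executables and must keep the sample functional.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step: move x in the direction that increases the loss."""
    p = sigmoid(w @ x + b)           # model's probability of class 1
    grad = (p - y) * w               # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad)   # bounded L-infinity perturbation

w = np.array([2.0, -1.0])            # toy model weights
b = 0.0
x = np.array([1.0, 0.5])             # w @ x + b = 1.5 > 0, so class 1
y = 1.0                              # true label

x_adv = fgsm(x, y, w, b, eps=1.0)
p_clean = sigmoid(w @ x + b)         # > 0.5: classified as class 1
p_adv = sigmoid(w @ x_adv + b)       # < 0.5: the small perturbation flips the label
```

The attack only needs the sign of the loss gradient with respect to the input, which is why feature-space attacks are easy to mount but, as the abstract notes, do not by themselves yield an executable adversarial malware sample.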