Artificial intelligence in the cyber domain: Offense and defense
Artificial intelligence techniques have grown rapidly in recent years, and their applications in practice can be seen in many fields, ranging from facial recognition to image analysis. In the cybersecurity domain, AI-based techniques can provide better cyber defense tools and help adversaries improve methods of attack. However, malicious actors are aware of the new prospects too and will probably attempt to use them for nefarious purposes. This survey paper aims at providing an overview of how artificial intelligence can be used in the context of cybersecurity in both offense and defense.
Universal Adversarial Perturbations for Malware
Machine learning classification models are vulnerable to adversarial examples
-- effective input-specific perturbations that can manipulate the model's
output. Universal Adversarial Perturbations (UAPs), which identify noisy
patterns that generalize across the input space, allow the attacker to greatly
scale up the generation of these adversarial examples. Although UAPs have been
explored in application domains beyond computer vision, little is known about
their properties and implications in the specific context of realizable
attacks, such as malware, where attackers must reason about satisfying
challenging problem-space constraints.
In this paper, we explore the challenges and strengths of UAPs in the context
of malware classification. We generate sequences of problem-space
transformations that induce UAPs in the corresponding feature-space embedding
and evaluate their effectiveness across threat models that consider a varying
degree of realistic attacker knowledge. Additionally, we propose adversarial
training-based mitigations using knowledge derived from the problem-space
transformations, and compare against alternative feature-space defenses. Our
experiments limit the effectiveness of a white-box Android evasion attack to
~20% at the cost of 3% TPR at 1% FPR. We additionally show how our method
can be adapted to more restrictive application domains such as Windows malware.
We observe that while adversarial training in the feature space must deal
with large and often unconstrained regions, UAPs in the problem space identify
specific vulnerabilities that allow us to harden a classifier more effectively,
shifting the challenges and associated cost of identifying new universal
adversarial transformations back to the attacker.
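The core idea behind a universal adversarial perturbation, as opposed to an input-specific one, is a single delta that flips predictions across many inputs at once. The following is a minimal feature-space sketch of that idea against a toy linear classifier; it simplifies the DeepFool-style UAP construction and does not implement the paper's problem-space transformations. All names (`universal_perturbation`, `fooling_rate`) and the L-infinity budget are illustrative assumptions.

```python
import numpy as np

def universal_perturbation(X, w, b, eps=0.5, n_epochs=5):
    """Toy feature-space UAP search against a linear classifier
    f(x) = sign(w.x + b): accumulate minimal per-sample pushes
    toward the decision boundary, projecting the universal delta
    back into an L-infinity ball of radius eps after each step."""
    delta = np.zeros_like(w)
    for _ in range(n_epochs):
        for x in X:
            if np.sign(w @ (x + delta) + b) == np.sign(w @ x + b):
                # smallest step (slightly overshot) that flips this sample
                r = -(w @ (x + delta) + b) / (w @ w) * w * 1.05
                delta = np.clip(delta + r, -eps, eps)  # project to the ball
    return delta

def fooling_rate(X, w, b, delta):
    """Fraction of samples whose prediction the single delta flips."""
    before = np.sign(X @ w + b)
    after = np.sign((X + delta) @ w + b)
    return float(np.mean(before != after))
```

The single `delta` is what makes the attack "universal": once found, it transfers across the input set without per-sample optimization, which is what lets attackers scale up generation of adversarial examples.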
Learning Fast and Slow: PROPEDEUTICA for Real-time Malware Detection
In this paper, we introduce and evaluate PROPEDEUTICA, a novel methodology
and framework for efficient and effective real-time malware detection,
leveraging the best of conventional machine learning (ML) and deep learning
(DL) algorithms. In PROPEDEUTICA, all software processes in the system start
execution subjected to a conventional ML detector for fast classification. If a
piece of software receives a borderline classification, it is subjected to
further analysis via more performance expensive and more accurate DL methods,
via our newly proposed DL algorithm DEEPMALWARE. Further, we introduce delays
to the execution of software subjected to deep learning analysis as a way to
"buy time" for DL analysis and to rate-limit the impact of possible malware in
the system. We evaluated PROPEDEUTICA with a set of 9,115 malware samples and
877 commonly used benign software samples from various categories for the
Windows OS. Our results show that the false positive rate for conventional ML
methods can reach 20%, and for modern DL methods it is usually below 6%.
However, the classification time for DL can be 100X longer than conventional ML
methods. PROPEDEUTICA improved the detection F1-score from 77.54% (conventional
ML method) to 90.25%, and reduced the detection time by 54.86%. Further, the
percentage of software subjected to DL analysis was approximately 40% on
average. Further, the application of delays in software subjected to ML reduced
the detection time by approximately 10%. Finally, we found and discussed a
discrepancy between the detection accuracy offline (analysis after all traces
are collected) and on-the-fly (analysis in tandem with trace collection). Our
insights show that conventional ML and modern DL-based malware detectors in
isolation cannot meet the needs of efficient and effective malware detection:
high accuracy, low false positive rate, and short classification time.
Comment: 17 pages, 7 figures
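The fast/slow structure described above can be sketched as a two-stage cascade: every process first gets a cheap ML score, and only borderline scores pay for the expensive DL model. This is an illustrative reconstruction of the control flow only; the scorers, the `cascade_classify` name, and the thresholds are hypothetical stand-ins, not PROPEDEUTICA's actual components.

```python
def cascade_classify(features, fast_score, slow_score,
                     low=0.2, high=0.8):
    """Return (label, used_slow_path). fast_score and slow_score
    map features to a malware probability in [0, 1]; the low/high
    borderline band is illustrative."""
    p = fast_score(features)
    if p < low:
        return "benign", False        # confidently benign: stop early
    if p > high:
        return "malware", False       # confidently malicious: stop early
    # borderline: escalate to the (expensive, more accurate) DL model
    return ("malware" if slow_score(features) >= 0.5 else "benign"), True
```

The design pays the DL cost only on the ambiguous band, which is consistent with the reported figure that roughly 40% of software ended up under DL analysis.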
Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey
Adversarial attacks and defenses in machine learning and deep neural network
have been gaining significant attention due to the rapidly growing applications
of deep learning in the Internet and relevant scenarios. This survey provides a
comprehensive overview of the recent advancements in the field of adversarial
attack and defense techniques, with a focus on deep neural network-based
classification models. Specifically, we conduct a comprehensive classification
of recent adversarial attack methods and state-of-the-art adversarial defense
techniques based on attack principles, and present them in visually appealing
tables and tree diagrams. This is based on a rigorous evaluation of the
existing works, including an analysis of their strengths and limitations. We
also categorize the methods into counter-attack detection and robustness
enhancement, with a specific focus on regularization-based methods for
enhancing robustness. New avenues of attack are also explored, including
search-based, decision-based, drop-based, and physical-world attacks, and a
hierarchical classification of the latest defense methods is provided,
highlighting the challenges of balancing training costs with performance,
maintaining clean accuracy, overcoming the effect of gradient masking, and
ensuring method transferability. At last, the lessons learned and open
challenges are summarized with future research opportunities recommended.
Comment: 46 pages, 21 figures
Adversarial Robustness of Hybrid Machine Learning Architecture for Malware Classification
The detection heuristic in contemporary machine learning Windows malware classifiers is typically based on the static properties of the sample. In contrast, simultaneous utilization of static and behavioral telemetry is only vaguely explored. We propose a hybrid model that employs dynamic malware analysis techniques, contextual information such as the executable's filesystem path on the system, and static representations used in modern state-of-the-art detectors. It does not require an operating system virtualization platform; instead, it relies on kernel emulation for dynamic analysis. Our model reports an enhanced detection heuristic and identifies malicious samples even if none of the separate models expresses high confidence in categorizing the file as malevolent. For instance, given the false positive rate, individual static, dynamic, and contextual model detection rates are , , and . However, we show that composite processing of all three achieves a detection rate of , above the cumulative performance of the individual components. Moreover, the simultaneous use of distinct malware analysis techniques addresses independent unit weaknesses, minimizing false positives and increasing adversarial robustness. Our experiments show a decrease in contemporary adversarial attack evasion rates from to when behavioral and contextual representations of the sample are employed in the detection heuristic.
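The claim that the composite can flag a sample even when no individual model is confident can be illustrated with a simple fusion rule. The sketch below uses noisy-OR combination, which is an assumption for illustration; the paper's actual combination method, the `fuse_scores` name, and the threshold are not from the source.

```python
def fuse_scores(p_static, p_dynamic, p_contextual, threshold=0.8):
    """Noisy-OR fusion of three per-model malware probabilities:
    the composite probability that at least one independent view
    is right can cross the alert threshold even when no single
    model does on its own."""
    p_any = 1.0 - (1.0 - p_static) * (1.0 - p_dynamic) * (1.0 - p_contextual)
    return p_any, p_any >= threshold
```

For example, three moderately suspicious scores of 0.5 each fuse to 0.875, above an illustrative 0.8 alert threshold that none of them reaches individually; three low scores of 0.1 fuse to only about 0.27 and stay below it.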