339 research outputs found

    GUIDE FOR THE COLLECTION OF INTRUSION DATA FOR MALWARE ANALYSIS AND DETECTION IN THE BUILD AND DEPLOYMENT PHASE

    During the COVID-19 pandemic, when most businesses were not equipped for remote work and cloud computing, there was a significant surge in ransomware attacks. This study aims to use machine learning and artificial intelligence to prevent known and unknown malware threats from being exploited by threat actors when developers build and deploy applications to the cloud. The study followed an experimental quantitative research design using Aqua, with a Docker image as the experimental sample. Aqua checked the Docker image for malware, sensitive data, Critical/High vulnerabilities, misconfiguration, and OSS license issues. The data collection approach is experimental. Our analysis demonstrated how unapproved images were prevented from running anywhere in our environment based on known vulnerabilities, embedded secrets, OSS licensing, dynamic threat analysis, and secure image configuration. The forensic data collected in the build and deployment phase are exploitable vulnerabilities, the Critical/High Vulnerability Score, Misconfiguration, Sensitive Data, and Root User (Super User). Since Aqua generates a detailed audit record for every event during risk assessment and runtime, we examined two events on the Audit page for our experiment. One event raised an alert due to two failed controls (Vulnerability Score, Super User), and the other was a successful event, meaning the image was secure to deploy in the production environment. The primary finding of the study is the forensic data associated with these two events on the Audit page in Aqua; in addition, both events validated our security controls and runtime policies. Finally, the study's conclusions can reduce the likelihood that organizations fall victim to ransomware and limit the total damage caused by a malware attack.
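
    A minimal Python sketch of the admission gate this study describes, assuming hypothetical field names: an image may only be deployed when every control passes, mirroring the two audit events (one blocked on Vulnerability Score and Super User, one approved). This is not Aqua's actual API or policy format.

        # Hypothetical admission-gate logic; field names are illustrative, not Aqua's API.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class ScanResult:
            image: str
            critical_high_vulns: int = 0      # count of Critical/High findings
            has_sensitive_data: bool = False
            has_misconfiguration: bool = False
            runs_as_root: bool = False        # the "Super User" control

        def failed_controls(scan: ScanResult) -> List[str]:
            """Return the controls the image fails, as an audit record would list them."""
            failures = []
            if scan.critical_high_vulns > 0:
                failures.append("Vulnerability Score")
            if scan.has_sensitive_data:
                failures.append("Sensitive Data")
            if scan.has_misconfiguration:
                failures.append("Misconfiguration")
            if scan.runs_as_root:
                failures.append("Super User")
            return failures

        def admit(scan: ScanResult) -> bool:
            """Only images with zero failed controls may run in production."""
            failures = failed_controls(scan)
            if failures:
                print(f"ALERT {scan.image}: failed controls {failures}")
                return False
            print(f"OK {scan.image}: secure to deploy")
            return True

        # Mirrors the two audit events in the study: one alerted, one approved.
        admit(ScanResult("app:dev", critical_high_vulns=3, runs_as_root=True))
        admit(ScanResult("app:release"))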

    RansomAI: AI-powered Ransomware for Stealthy Encryption

    Cybersecurity solutions have shown promising performance when detecting ransomware samples that use fixed algorithms and encryption rates. However, given the current explosion of Artificial Intelligence (AI), ransomware (and malware in general) will sooner or later incorporate AI techniques to intelligently and dynamically adapt its encryption behavior and remain undetected. This could render existing cybersecurity solutions ineffective and obsolete, but the literature lacks AI-powered ransomware with which to verify it. Thus, this work proposes RansomAI, a Reinforcement Learning-based framework that can be integrated into existing ransomware samples to adapt their encryption behavior and stay stealthy while encrypting files. RansomAI presents an agent that learns the encryption algorithm, rate, and duration that minimize its detection (using a reward mechanism and a fingerprinting intelligent detection system) while maximizing its damage function. The proposed framework was validated with Ransomware-PoC, a ransomware sample that infected a Raspberry Pi 4 acting as a crowdsensor. A pool of experiments with Deep Q-Learning and Isolation Forest (deployed on the agent and detection system, respectively) demonstrated that RansomAI evades detection of Ransomware-PoC affecting the Raspberry Pi 4 within a few minutes with >90% accuracy.
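
    A simplified sketch of the reward loop described above: an agent repeatedly picks an encryption configuration and is rewarded for damage but penalized when a detector flags it. The paper uses Deep Q-Learning against an Isolation Forest fingerprinting detector; the tabular (bandit-style) agent, action space, and probabilities below are illustrative assumptions only.

        import random

        # Hypothetical action space: (encryption algorithm, encryption rate).
        ACTIONS = [("AES-CTR", "slow"), ("AES-CTR", "fast"),
                   ("ChaCha20", "slow"), ("ChaCha20", "fast")]

        def environment_step(action):
            """Stand-in for the infected device: fast encryption does more damage
            but is more likely to be fingerprinted by the detection system."""
            algo, rate = action
            damage = 1.0 if rate == "fast" else 0.3
            detected = random.random() < (0.8 if rate == "fast" else 0.1)
            return damage - (2.0 if detected else 0.0)

        q = {a: 0.0 for a in ACTIONS}
        alpha, epsilon = 0.1, 0.2
        for episode in range(2000):
            a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
            r = environment_step(a)
            q[a] += alpha * (r - q[a])      # single-step value update

        print("learned preference:", max(q, key=q.get))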

    Artificial Intelligence Crime: An Overview of Malicious Use and Abuse of AI

    The capabilities of Artificial Intelligence (AI) are evolving rapidly and affect almost all sectors of society. AI has been increasingly integrated into criminal and harmful activities, expanding existing vulnerabilities and introducing new threats. This article reviews the relevant literature, reports, and representative incidents, which allows us to construct a typology of the malicious use and abuse of systems with AI capabilities. The main objective is to clarify the types of activities and corresponding risks. Our starting point is to identify the vulnerabilities of AI models and outline how malicious actors can abuse them. Subsequently, we explore AI-enabled and AI-enhanced attacks. While we present a comprehensive overview, we do not aim for a conclusive and exhaustive classification. Rather, we provide an overview of the risks of enhanced AI applications that contributes to the growing body of knowledge on the issue. Specifically, we suggest four types of malicious abuse of AI (integrity attacks, unintended AI outcomes, algorithmic trading, membership inference attacks) and four types of malicious use of AI (social engineering, misinformation/fake news, hacking, autonomous weapon systems). Mapping these threats enables advanced reflection on governance strategies, policies, and activities that can be developed or improved to minimize risks and avoid harmful consequences. Enhanced collaboration among governments, industries, and civil society actors is vital to increase preparedness and resilience against malicious use and abuse of AI.

    Deep Learning Based Malware Classification Using Deep Residual Network

    Traditional malware detection approaches rely heavily on feature extraction. In this paper we propose a deep learning-based malware classification model using an 18-layer deep residual network. Our model uses the raw bytecode data of malware samples, converting the bytecodes to 3-channel RGB images and then applying deep learning techniques to classify the malware. Our experimental results show that the deep residual network model achieved an average accuracy of 86.54% under 5-fold cross-validation. Compared to traditional methods for malware classification, our deep residual network model greatly simplifies the malware detection and classification procedure while achieving very good classification accuracy. The dataset used in this paper for training and testing is the Malimg dataset, one of the largest malware datasets, released by the Vision Research Lab at UCSB.
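
    A minimal sketch of the pipeline this abstract describes: raw malware bytes are reshaped into a 3-channel image and classified with an 18-layer residual network. The image size, byte padding, and synthetic input below are assumptions for illustration, not the paper's exact preprocessing; Malimg's 25 families are used as the class count.

        import numpy as np
        import torch
        from torchvision.models import resnet18

        def bytes_to_rgb_tensor(raw: bytes, side: int = 224) -> torch.Tensor:
            """Pad/truncate the byte stream and reshape it into a side x side RGB image."""
            needed = side * side * 3
            buf = np.frombuffer(raw, dtype=np.uint8)
            buf = np.pad(buf, (0, max(0, needed - buf.size)))[:needed]
            img = buf.reshape(3, side, side).astype(np.float32) / 255.0
            return torch.from_numpy(img)

        model = resnet18(num_classes=25)            # Malimg contains 25 malware families
        model.eval()                                # inference only in this sketch
        sample = bytes(range(256)) * 100            # placeholder for a real sample's raw bytes
        x = bytes_to_rgb_tensor(sample).unsqueeze(0)
        logits = model(x)                           # in the paper, trained with 5-fold CV
        print("predicted family index:", logits.argmax(dim=1).item())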

    Proceedings, MSVSCC 2019

    The Old Dominion University Department of Modeling, Simulation & Visualization Engineering (MSVE) and the Virginia Modeling, Analysis and Simulation Center (VMASC) held the 13th annual Modeling, Simulation & Visualization (MSV) Student Capstone Conference on April 18, 2019. The conference featured student research and student projects that are central to MSV. Also participating were faculty members who volunteered their time to directly support their students' research, facilitated the various conference tracks, served as judges for each track, and provided overall assistance to the conference. Appreciation of the conference's purpose and a cohesive, collaborative effort resulted in a successful symposium for everyone involved. These proceedings feature the works that were presented at the conference. Capstone Conference Chair: Dr. Yuzhong Shen. Capstone Conference Student Chair: Daniel Pere

    Crafting Adversarial Examples using Particle Swarm Optimization

    Machine learning models have been found to be vulnerable to adversarial attacks that apply small perturbations to input samples to get them misclassified. Attacks that search for and apply the perturbations are performed in both white-box and black-box settings, depending on the information available to the attacker about the target. For black-box attacks, the attacker can only query the target with specially crafted inputs and observe the outputs returned by the model. These outputs are used to guide the perturbations and create adversarial examples that are then misclassified. Current black-box attacks on API-based malware classifiers rely solely on feature insertion when applying perturbations. This restriction is set in place to ensure that no changes are introduced to the malware's originally intended functionality. Additionally, the API calls being inserted into the malware are null or no-op APIs that have no functional effect, to avoid any unintentional impact on malware behavior. Due to the nature of these API calls, they can be easily detected through non-ML techniques by analyzing their arguments and return values. In this dissertation, we explore other attacks on API-based malware detection models that are not restricted to feature addition. Specifically, we explore feature replacement as a possible avenue for creating adversarial malware examples. To retain the malware's original functionality, we replace API calls with other functionally equivalent API calls. We find the API alternatives by using a hierarchical unsupervised learning approach on the API's documentation. Our attack, which we call AdversarialPSO, uses Particle Swarm Optimization to guide the perturbations according to the available function alternatives. Results show that creating adversarial malware examples by feature replacement is possible even under the more restrictive search space of limited function alternatives. Unlike the malware domain, which lacks benchmark datasets and publicly available classification models, image classification has multiple benchmarks against which to test new attacks. Therefore, to evaluate the efficacy and wide applicability of AdversarialPSO, we re-implement the attack in the image classification domain, where we create adversarial examples from images by adding small, often imperceptible perturbations to the inputs. As a result of these perturbations, highly accurate models misclassify the inputs, resulting in a drastic drop in their accuracy. We evaluate this attack against both defended and undefended models and show that AdversarialPSO performs comparably to state-of-the-art adversarial attacks.
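
    An illustrative sketch of a PSO-guided black-box attack in its simplest image-domain, additive-perturbation form, since the dissertation also evaluates AdversarialPSO on image classifiers; the malware variant instead searches over functionally equivalent API-call replacements. The stand-in classifier, swarm parameters, and perturbation bound below are assumptions, not the dissertation's implementation.

        import numpy as np

        def target_confidence(x: np.ndarray) -> float:
            """Black-box stand-in: confidence of the true class that the attack lowers."""
            return float(1.0 / (1.0 + np.exp(-x.sum())))

        def adversarial_pso(x0, n_particles=20, steps=50, bound=0.1, w=0.7, c1=1.5, c2=1.5):
            rng = np.random.default_rng(0)
            pos = rng.uniform(-bound, bound, size=(n_particles,) + x0.shape)
            vel = np.zeros_like(pos)
            pbest = pos.copy()
            pbest_fit = np.array([target_confidence(x0 + p) for p in pos])
            gbest = pbest[pbest_fit.argmin()].copy()
            for _ in range(steps):
                r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, -bound, bound)     # keep perturbations small
                fit = np.array([target_confidence(x0 + p) for p in pos])
                improved = fit < pbest_fit
                pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
                gbest = pbest[pbest_fit.argmin()].copy()
            return x0 + gbest                               # least-confident example found

        x_adv = adversarial_pso(np.zeros(16))
        print("confidence after attack:", target_confidence(x_adv))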

    AI-powered Ransomware to Optimize its Impact on IoT Spectrum Sensors

    This work aims to investigate the feasibility of exploiting reinforcement learning (RL) to improve the impact of ransomware on a target device while evading dynamic detection methods such as behavioral fingerprinting-based anomaly detection (AD). Given the constantly growing number of connected resource-constrained devices, such as Internet of Things (IoT) devices, and the significant rise in ransomware attacks over the past years, the importance of investigating ransomware attacks and corresponding defense approaches is evident. So far, most related research has been confined to exploring unethical artificial intelligence (AI) systems instead of analyzing the possibilities of using AI to launch optimized malware attacks. This work addresses these limitations and introduces Ransomware Optimized with AI for Resource-constrained devices (ROAR), an RL framework to hide ransomware from dynamic detection mechanisms and optimize its impact on the target device. ROAR has been deployed in a real-world IoT crowdsensing scenario, including a Raspberry Pi 4 acting as a spectrum sensor. The Raspberry Pi was infected with ROAR, and behavioral data were collected from the target device to facilitate environment simulation. The results obtained by executing prototypes of the RL agent have been aggregated, and the corresponding plots are discussed and compared. These findings suggest that no relation exists between individual actions within an episode and that discounting future rewards does not improve performance in this particular RL problem. Overall, this work demonstrates the feasibility of optimizing ransomware attacks with RL and the effectiveness of the resulting evasion capabilities. The findings derived from the collected results hold both in a simulated environment and when the agent is deployed in a real scenario. To our knowledge, this work is the first to investigate the possibilities of supporting malware attacks with RL during the attack phase. Further studies are needed to investigate additional optimizations of the RL model, efficiency improvements to the underlying ransomware implementation, and the feasibility of attacking more powerful devices.
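
    A tiny sketch of the discounting question raised above, assuming hypothetical per-step rewards: the episode return is G = sum over t of gamma^t * r_t, and the study reports that discounting future rewards (gamma < 1) did not improve performance for this problem, consistent with individual actions within an episode being unrelated.

        def episode_return(rewards, gamma=1.0):
            """Discounted return of one episode: sum of gamma**t * r_t."""
            return sum((gamma ** t) * r for t, r in enumerate(rewards))

        rewards = [0.4, -1.0, 0.7, 0.9]             # hypothetical per-step rewards
        print(episode_return(rewards, gamma=1.0))   # undiscounted return
        print(episode_return(rewards, gamma=0.5))   # heavily discounted return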