
    Applications in security and evasions in machine learning: a survey

    In recent years, machine learning (ML) has become an important means of providing security and privacy in a wide range of applications. ML is used to address serious issues such as real-time attack detection and data-leakage vulnerability assessment. ML supports the demanding requirements of current security and privacy scenarios, including real-time decision-making, big-data processing, reduced learning cycle time, cost efficiency and low-error processing. In this paper, we therefore review state-of-the-art approaches in which ML is applied effectively to meet current real-world security requirements. We examine different security applications in which ML models play an essential role and compare their accuracy results along several dimensions. This analysis of ML algorithms in security applications provides a blueprint for an interdisciplinary research area. Even with sophisticated current technology and tools, attackers can evade ML models by mounting adversarial attacks. It is therefore necessary to assess the vulnerability of ML models to adversarial attacks at development time. Accordingly, we also analyze the different types of adversarial attacks on ML models. To visualize the relevant security properties, we present the threat model and defense strategies against adversarial attack methods. Moreover, we categorize adversarial attacks by the attacker's knowledge of the model and identify the points in the ML pipeline at which attacks may be committed. Finally, we investigate the different properties of adversarial attacks.
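    The evasion attacks this survey covers can be illustrated with a minimal sketch. The following is a hedged example of an FGSM-style perturbation against a toy logistic-regression classifier; the weights, input and epsilon are illustrative stand-ins, not from the survey:

    ```python
    # Minimal FGSM-style evasion sketch on a hand-set logistic-regression model.
    # All numbers here are illustrative assumptions, not from the survey.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy "trained" detector: weights w and bias b define the decision boundary.
    w = np.array([2.0, -1.5])
    b = 0.0

    def predict(x):
        return sigmoid(x @ w + b)

    # A sample confidently classified as class 1 (e.g. "attack traffic").
    x = np.array([1.0, -1.0])

    # FGSM: step the input in the direction of the sign of the loss gradient.
    # For logistic loss with true label y, d(loss)/dx = (p - y) * w.
    def fgsm(x, y, eps):
        grad = (predict(x) - y) * w   # gradient of cross-entropy w.r.t. x
        return x + eps * np.sign(grad)

    x_adv = fgsm(x, y=1.0, eps=1.2)
    # The small, bounded perturbation pushes the sample across the boundary.
    ```

    The same sign-of-gradient idea carries over to deep models, where the gradient is obtained by backpropagation instead of a closed form.
    
    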

    SECURING 5G NETWORKS WITH FEDERATED LEARNING AND GAN

    The threat landscape of the 5G network is vast, owing to the complexity of its architecture and its use of virtualized network functions. This landscape can be divided into two categories: attacks against the access point and attacks against the core. This thesis analyzes the threats that plague the 5G network, with a special focus on the access point. The access-point architecture was simulated within a federated learning environment, both to preserve the privacy of user data and to present a realistic scenario of the 5G network. The main objective of the thesis was to secure the access point of the 5G network in this federated learning environment. This was accomplished by placing an intrusion detection system at the endpoint to classify traffic as either benign or malicious. The effectiveness of this model was checked by simulating a malicious user and conducting adversarial attacks to determine whether the model could defend against them. Two specific attacks were performed: a label-flipping attack and an attack based on generative adversarial networks. Both attacks were successful, revealing the need for a new system that is resilient against attacks of this type.
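    The label-flipping attack mentioned above can be sketched in a few lines: a malicious federated client corrupts a fraction of its local training labels before they influence the shared model. The function name and flip fraction below are illustrative assumptions, not taken from the thesis:

    ```python
    # Hedged sketch of a label-flipping poisoning attack by one malicious
    # client in a federated set-up. Labels: 0 = benign, 1 = malicious traffic.
    import numpy as np

    rng = np.random.default_rng(42)

    def flip_labels(y, fraction, rng):
        """Invert `fraction` of the binary labels in a local training set."""
        y = y.copy()
        n_flip = int(len(y) * fraction)
        idx = rng.choice(len(y), size=n_flip, replace=False)
        y[idx] = 1 - y[idx]   # benign <-> malicious
        return y

    # The malicious client's local IDS labels before and after poisoning.
    y_clean = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])
    y_poisoned = flip_labels(y_clean, fraction=0.5, rng=rng)
    # Model updates computed from y_poisoned now push the global IDS
    # toward misclassifying attack traffic as benign.
    ```

    Because the aggregator only sees model updates, not raw labels, such poisoning is hard to detect without update-level defences.
    
    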

    SETTI: A Self-supervised Adversarial Malware Detection Architecture in an IoT Environment

    In recent years, malware detection has become an active research topic in Internet of Things (IoT) security. The principle is to exploit knowledge from the large quantities of continuously generated malware. Existing algorithms rely on the malware features available for IoT devices and lack real-time prediction behavior; more research is thus required on malware detection that copes with real-time misclassification of input IoT data. Motivated by this, in this paper we propose SETTI, an adversarial self-supervised architecture for detecting malware in IoT networks that accommodates samples of IoT network traffic that may be unlabeled. In the SETTI architecture, we design three self-supervised attack techniques, namely Self-MDS, GSelf-MDS and ASelf-MDS. The Self-MDS method considers the IoT input data and adversarial sample generation in real time. GSelf-MDS builds a generative adversarial network model to generate adversarial samples within the self-supervised structure. Finally, ASelf-MDS utilizes three well-known perturbation techniques to develop adversarial malware and inject it into the self-supervised architecture. We also apply a defence method, adversarial self-supervised training, to protect the malware detection architecture against the injection of malicious samples. To validate the attack and defence algorithms, we conduct experiments on two recent IoT datasets: IoT23 and NBIoT. The results show that on the IoT23 dataset, the Self-MDS method has the most damaging consequences from the attacker's point of view, reducing the accuracy rate from 98% to 74%. On the NBIoT dataset, the ASelf-MDS method is the most devastating, plunging the accuracy rate from 98% to 77%.
    Comment: 20 pages, 6 figures, 2 tables. Submitted to ACM Transactions on Multimedia Computing, Communications, and Applications.
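    The adversarial-training defence described in the abstract amounts to enlarging the training set with perturbed copies of the inputs while keeping the clean labels. The sketch below uses bounded random sign noise as a crude stand-in for the paper's perturbation techniques; the data, feature count and epsilon are illustrative assumptions, not SETTI itself:

    ```python
    # Hedged sketch of the adversarial-training defence direction:
    # augment the training set with perturbed samples under the clean labels.
    import numpy as np

    rng = np.random.default_rng(7)

    X = rng.normal(size=(100, 4))              # stand-in for IoT traffic features
    y = (X[:, 0] + X[:, 1] > 0).astype(float)  # stand-in benign/malicious labels

    def perturb(X, eps, rng):
        """Bounded sign noise as a crude stand-in for adversarial samples."""
        return X + eps * np.sign(rng.normal(size=X.shape))

    # Train on the union of clean and perturbed inputs with unchanged labels,
    # so the detector learns to give the same answer under the perturbation.
    X_aug = np.vstack([X, perturb(X, eps=0.1, rng=rng)])
    y_aug = np.concatenate([y, y])
    ```

    In practice the perturbations would come from the same attack generators used at evaluation time, which is what makes the training "adversarial" rather than simple noise augmentation.
    
    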