304 research outputs found
Applications in security and evasions in machine learning: a survey
In recent years, machine learning (ML) has become an important means of providing security and privacy in various applications. ML is used to address serious issues such as real-time attack detection and data-leakage vulnerability assessment. ML extensively supports the demanding requirements of current security and privacy scenarios across areas such as real-time decision-making, big-data processing, reduced learning cycle time, cost efficiency, and error-free processing. In this paper, we therefore review state-of-the-art approaches in which ML is applied effectively to fulfill current real-world security requirements. We examine different security applications in which ML models play an essential role and compare their accuracy results along several dimensions. This analysis of ML algorithms in security applications provides a blueprint for an interdisciplinary research area. Even with sophisticated technology and tools, attackers can evade ML models by mounting adversarial attacks. It is therefore necessary to assess the vulnerability of ML models to adversarial attacks at development time. Accordingly, we also analyze the different types of adversarial attacks on ML models. To visualize the relevant security properties, we present the threat model and defense strategies against adversarial attack methods. Moreover, we categorize adversarial attacks by the attacker's knowledge of the model and identify the points in the model at which attacks may be mounted. Finally, we also investigate different properties of adversarial attacks.
Towards a Multi-Layered Phishing Detection.
Phishing is one of the most common threats that users face while browsing the web. In the current threat landscape, a targeted phishing attack (i.e., spear phishing) often constitutes the first action of a threat actor during an intrusion campaign. To tackle this threat, many data-driven approaches have been proposed, most of which rely on supervised machine learning in a single-layer approach. However, such approaches are resource-demanding and, thus, infeasible to deploy in production environments. Moreover, most previous works utilise a feature set that can easily be tampered with by adversaries. In this paper, we investigate the use of a multi-layered detection framework in which a potential phishing domain is classified multiple times by models using different feature sets. In our work, an additional classification takes place only when the initial one scores below a predefined confidence level, which is set by the system owner. We demonstrate our approach by implementing a two-layered detection system, which uses supervised machine learning to identify phishing attacks. We evaluate our system with a dataset consisting of active phishing attacks and find that its performance is comparable to the state of the art.
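The confidence-gated layering described in this abstract can be sketched as follows. The model behaviours, feature sets, and the 0.9 threshold here are illustrative assumptions for the sketch, not the paper's actual configuration:

```python
# Minimal sketch of a two-layered phishing classifier with a confidence
# fallback: a cheap first-layer model answers when confident, and a
# costlier second-layer model (using different features) is consulted
# only otherwise. Models and threshold are toy stand-ins.

def classify_domain(domain, layer1, layer2, threshold=0.9):
    """Return (label, confidence) for a domain.

    layer1/layer2 are callables returning (label, confidence) and are
    assumed to use different feature sets (e.g. lexical vs. content-based).
    """
    label, conf = layer1(domain)      # fast, easily computed features
    if conf >= threshold:             # confident enough: stop here
        return label, conf
    return layer2(domain)             # slower, harder-to-tamper features

# Toy stand-ins for trained models:
layer1 = lambda d: ("phishing", 0.55) if "paypa1" in d else ("benign", 0.95)
layer2 = lambda d: ("phishing", 0.88) if "paypa1" in d else ("benign", 0.97)

print(classify_domain("paypa1-login.example", layer1, layer2))
print(classify_domain("example.org", layer1, layer2))
```

The design trade-off is the one the abstract motivates: the expensive second layer runs only on the low-confidence fraction of inputs, keeping average cost close to the single cheap layer.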
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Learning-based pattern classifiers, including deep networks, have shown
impressive performance in several application domains, ranging from computer
vision to cybersecurity. However, it has also been shown that adversarial input
perturbations carefully crafted either at training or at test time can easily
subvert their predictions. The vulnerability of machine learning to such wild
patterns (also referred to as adversarial examples), along with the design of
suitable countermeasures, has been investigated in the research field of
adversarial machine learning. In this work, we provide a thorough overview of
the evolution of this research area over the last ten years and beyond,
starting from pioneering, earlier work on the security of non-deep learning
algorithms up to more recent work aimed at understanding the security properties
of deep learning algorithms, in the context of computer vision and
cybersecurity tasks. We report interesting connections between these
apparently different lines of work, highlighting common misconceptions related
to the security evaluation of machine-learning algorithms. We review the main
threat models and attacks defined to this end, and discuss the main limitations
of current work, along with the corresponding future challenges towards the
design of more secure learning algorithms. Comment: Accepted for publication in Pattern Recognition, 201
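The test-time "adversarial example" idea this survey covers can be illustrated on the simplest possible model. The linear classifier, weights, input, and perturbation budget below are made-up toy values, not an attack from the literature:

```python
# Minimal illustration of a test-time evasion attack on a linear
# classifier: a small perturbation, pushed against the sign of each
# weight, flips the prediction while barely changing the input.
import numpy as np

w = np.array([1.0, -2.0, 0.5])       # weights of a "trained" linear model
b = 0.1                              # bias term
x = np.array([0.4, 0.1, 0.2])        # an input classified as positive

predict = lambda v: int(w @ v + b > 0)

eps = 0.3                            # L-infinity perturbation budget
x_adv = x - eps * np.sign(w)         # move each feature against its weight

print(predict(x), predict(x_adv))    # the prediction flips
```

The same sign-of-the-gradient principle, applied to a deep network's loss instead of a linear score, is the mechanism behind gradient-based evasion attacks on image classifiers.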
Advances in Cybercrime Prediction: A Survey of Machine, Deep, Transfer, and Adaptive Learning Techniques
Cybercrime is a growing threat to organizations and individuals worldwide,
with criminals using increasingly sophisticated techniques to breach security
systems and steal sensitive data. In recent years, machine learning, deep
learning, and transfer learning techniques have emerged as promising tools for
predicting cybercrime and preventing it before it occurs. This paper aims to
provide a comprehensive survey of the latest advancements in cybercrime
prediction using the above-mentioned techniques, highlighting the latest research
related to each approach. For this purpose, we reviewed more than 150 research
articles and discussed around 50 of the most recent and relevant ones. We
start the review by discussing some common methods used by cyber criminals and
then focus on the latest machine learning techniques and deep learning
techniques, such as recurrent and convolutional neural networks, which were
effective in detecting anomalous behavior and identifying potential threats. We
also discuss transfer learning, which allows models trained on one dataset to
be adapted for use on another dataset, and then focus on active and
reinforcement learning as part of early-stage algorithmic research in
cybercrime prediction. Finally, we discuss critical innovations, research gaps,
and future research opportunities in cybercrime prediction. Overall, this paper
presents a holistic view of cutting-edge developments in cybercrime prediction,
shedding light on the strengths and limitations of each method and equipping
researchers and practitioners with essential insights, publicly available
datasets, and resources necessary to develop efficient cybercrime prediction
systems. Comment: 27 Pages, 6 Figures, 4 Table
Digital Deception: Generative Artificial Intelligence in Social Engineering and Phishing
The advancement of Artificial Intelligence (AI) and Machine Learning (ML) has
profound implications for both the utility and security of our digital
interactions. This paper investigates the transformative role of Generative AI
in Social Engineering (SE) attacks. We conduct a systematic review of social
engineering and AI capabilities and use a theory of social engineering to
identify three pillars where Generative AI amplifies the impact of SE attacks:
Realistic Content Creation, Advanced Targeting and Personalization, and
Automated Attack Infrastructure. We integrate these elements into a conceptual
model designed to investigate the complex nature of AI-driven SE attacks - the
Generative AI Social Engineering Framework. We further explore human
implications and potential countermeasures to mitigate these risks. Our study
aims to foster a deeper understanding of the risks, human implications, and
countermeasures associated with this emerging paradigm, thereby contributing to
a more secure and trustworthy human-computer interaction. Comment: Submitted to CHI 202
Computational Methods for Medical and Cyber Security
Over the past decade, computational methods, including machine learning (ML) and deep learning (DL), have grown exponentially in their development of solutions across various domains, especially medicine, cybersecurity, finance, and education. While these applications of machine-learning algorithms have proven beneficial in various fields, many shortcomings have also been highlighted, such as the lack of benchmark datasets, the inability to learn from small datasets, the cost of architectures, adversarial attacks, and imbalanced datasets. On the other hand, new and emerging algorithms, such as deep learning, one-shot learning, continual learning, and generative adversarial networks, have successfully solved various tasks in these fields. It is therefore crucial to apply these new methods to life-critical missions and to measure the success of these less-traditional algorithms when they are used in such fields.
Security of Internet of Things (IoT) Using Federated Learning and Deep Learning — Recent Advancements, Issues and Prospects
There is great demand for an efficient security framework that can protect IoT systems from potential adversarial attacks. However, designing a suitable security model for IoT is challenging given the dynamic and distributed nature of IoT, which motivates researchers to investigate the role of machine learning (ML) in the design of security models. A brief analysis of different ML algorithms for IoT security is provided, along with the advantages and limitations of these algorithms. Existing studies state that ML algorithms suffer from high computational overhead and the risk of privacy leakage. In this context, this review focuses on the implementation of federated learning (FL) and deep learning (DL) algorithms for IoT security. Unlike conventional ML techniques, FL models can maintain the privacy of data while sharing information with other systems, and the study suggests that FL can thereby overcome the drawbacks of conventional ML techniques. The study provides an overview, comparison, and summary of FL- and DL-based techniques for IoT security.
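The privacy property this abstract attributes to FL, that clients share model parameters rather than raw data, can be sketched with a FedAvg-style loop. The linear model, single-step local training, and all numeric values are simplifying assumptions for illustration only:

```python
# Sketch of federated averaging: each client takes a local gradient step
# on its private data and sends back only its updated weights; the server
# averages the weights and never sees any raw data.
import numpy as np

def local_update(w, X, y, lr=0.1):
    """One gradient step of least-squares regression on a client's data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fed_avg(client_weights):
    """Server-side aggregation: a plain average of client parameters."""
    return np.mean(client_weights, axis=0)

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])                 # the target model
clients = []
for _ in range(3):                             # three clients, private data
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(50):                            # communication rounds
    w = fed_avg([local_update(w, X, y) for X, y in clients])

print(np.round(w, 2))                          # converges toward w_true
```

Real FL systems add secure aggregation, client sampling, and multiple local epochs on top of this loop, but the data-stays-local structure is the same.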