
    MACHINE LEARNING ALGORITHMS FOR DETECTION OF CYBER THREATS USING LOGISTIC REGRESSION

    The threat of cyberattacks is expanding globally; thus, businesses are building intelligent artificial intelligence systems that can analyze security and other infrastructure logs from their systems departments and quickly and automatically identify cyberattacks. Security analytics based on machine learning is widely seen as the next big thing in cybersecurity: it aims to mine security data and move beyond static rules and methods, which carry high maintenance costs. However, choosing the appropriate machine learning technique for security log analytics remains a significant barrier to AI success in cybersecurity, because a poor choice can produce a substantial number of false-positive detections in large-scale or global Security Operations Centre (SOC) settings. A machine learning technique for a cyber threat detection system that minimizes false positives is therefore required. Today's machine learning methods for identifying threats frequently use logistic regression, a supervised algorithm; supervised learning is the first of the three machine learning subcategories, alongside unsupervised and reinforcement learning. Any machine learning enthusiast will encounter this algorithm at the beginning of their machine learning career, since it is an essential and widely applied classification algorithm.
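
    As a concrete illustration of the supervised workflow described in the abstract, the sketch below trains a logistic regression classifier on numeric features derived from security logs. This is a minimal sketch, not the paper's pipeline: the feature names (failed logins, bytes out, distinct ports), the synthetic data and labels, and the use of scikit-learn are all assumptions made for illustration.

        # Minimal sketch: logistic regression over hypothetical log-derived features.
        # Data, features and labels are synthetic; they are not from the paper.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import classification_report

        rng = np.random.default_rng(0)
        n = 1000
        X = np.column_stack([
            rng.poisson(2, n),           # failed logins per host
            rng.exponential(5e4, n),     # bytes out per session
            rng.integers(1, 20, n),      # distinct destination ports
        ]).astype(float)
        # Synthetic ground truth: many failed logins plus many ports -> "attack"
        y = ((X[:, 0] > 4) & (X[:, 2] > 10)).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)
        print(classification_report(y_te, clf.predict(X_te)))

    Tuning the decision threshold on clf.predict_proba (rather than the default 0.5) is one common way to trade detection rate against the false-positive rate that the abstract highlights as critical in SOC settings.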

    Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

    Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms. Comment: Accepted for publication in Pattern Recognition, 201
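
    The core idea of an adversarial example can be shown in a few lines. The sketch below applies a fast-gradient-sign-style perturbation to a linear model at test time; it is a generic illustration of the kind of "wild pattern" the survey discusses, not a reproduction of any specific attack from it, and the model, data, and perturbation budget are assumptions.

        # Minimal sketch of a test-time evasion (adversarial example) against a
        # linear model. For a linear classifier the gradient of the score with
        # respect to the input is simply the weight vector.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 10))
        w_true = rng.normal(size=10)
        y = (X @ w_true > 0).astype(int)

        clf = LogisticRegression(max_iter=1000).fit(X, y)
        w = clf.coef_[0]

        # Pick a sample near the decision boundary and push it across.
        i = int(np.argmin(np.abs(clf.decision_function(X))))
        x, label = X[i], y[i]
        eps = 0.5
        x_adv = x - eps * np.sign(w) if label == 1 else x + eps * np.sign(w)

        print("clean prediction:    ", clf.predict(x.reshape(1, -1))[0])
        print("perturbed prediction:", clf.predict(x_adv.reshape(1, -1))[0])

    For deep networks the same recipe is applied with the input gradient obtained by backpropagation, which is essentially the fast gradient sign method.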

    Exploiting Machine Learning to Subvert Your Spam Filter

    Using statistical machine learning for making security decisions introduces new vulnerabilities in large-scale systems. This paper shows how an adversary can exploit statistical machine learning, as used in the SpamBayes spam filter, to render it useless—even if the adversary’s access is limited to only 1% of the training messages. We further demonstrate a new class of focused attacks that successfully prevent victims from receiving specific email messages. Finally, we introduce two new types of defenses against these attacks.
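
    To make the poisoning idea concrete, here is a minimal sketch of a dictionary-style training attack on a toy bag-of-words spam filter. It is only similar in spirit to the attacks the paper describes: MultinomialNB stands in for SpamBayes' token statistics (the real filter uses a different scoring rule), and all messages below are fabricated for illustration.

        # Minimal sketch of a training-time (poisoning) dictionary attack on a toy
        # bag-of-words spam filter. The classifier and messages are illustrative.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB

        ham  = ["meeting agenda attached", "lunch tomorrow at noon",
                "project deadline moved to friday"]
        spam = ["cheap pills buy now", "win money click here", "free prize claim today"]

        # Attacker-controlled spam that deliberately contains benign vocabulary,
        # so those tokens acquire spammy statistics during training.
        poison = ["meeting agenda project deadline lunch"] * 5

        texts  = ham + spam + poison
        labels = [0] * len(ham) + [1] * (len(spam) + len(poison))

        vec = CountVectorizer()
        clf = MultinomialNB().fit(vec.fit_transform(texts), labels)

        victim_mail = ["agenda for the project meeting"]
        print("classified as spam?", bool(clf.predict(vec.transform(victim_mail))[0]))

    Because the injected spam is built from ordinary business vocabulary, legitimate mail containing those words starts to be filtered, which is the denial-of-service effect that poisoning aims for.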

    Security Evaluation of Support Vector Machines in Adversarial Environments

    Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering. However, if SVMs are to be incorporated in real-world security systems, they must be able to cope with attack patterns that can either mislead the learning algorithm (poisoning), evade detection (evasion), or gain information about their internal parameters (privacy breaches). The main contributions of this chapter are twofold. First, we introduce a formal general framework for the empirical evaluation of the security of machine-learning systems. Second, according to our framework, we demonstrate the feasibility of evasion, poisoning and privacy attacks against SVMs in real-world security problems. For each attack technique, we evaluate its impact and discuss whether (and how) it can be countered through an adversary-aware design of SVMs. Our experiments are easily reproducible thanks to open-source code that we have made available, together with all the employed datasets, on a public repository. Comment: 47 pages, 9 figures; chapter accepted into the book 'Support Vector Machine Applications'
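
    As a small illustration of the evasion setting the chapter studies, the sketch below runs a gradient-descent attack on the decision function of a linear SVM. It is a generic toy example under assumed data, step size, and iteration budget, not the chapter's experimental setup or its released code.

        # Minimal sketch: gradient-based evasion of a linear SVM.
        # For a linear kernel, the gradient of f(x) = w.x + b is just w.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
        y = np.array([0] * 200 + [1] * 200)

        clf = SVC(kernel="linear").fit(X, y)
        w, b = clf.coef_[0], clf.intercept_[0]

        # Start from a "malicious" sample (class 1) and descend the decision
        # function until it is scored as the benign class or the budget runs out.
        x = X[-1].copy()
        for _ in range(50):
            if w @ x + b < 0:
                break
            x -= 0.1 * w / np.linalg.norm(w)

        print("original f(x):", clf.decision_function(X[-1:])[0])
        print("evaded   f(x):", clf.decision_function(x.reshape(1, -1))[0])

    For non-linear kernels the same idea applies with the gradient of the kernel expansion, while poisoning instead perturbs training points to degrade the learned decision function.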