
    Ring-LWE Cryptography for the Number Theorist

    In this paper, we survey the status of attacks on the ring and polynomial learning with errors problems (RLWE and PLWE). Recent work on the security of these problems [Eisenträger-Hallgren-Lauter, Elias-Lauter-Ozman-Stange] gives rise to interesting questions about number fields. We extend these attacks and survey related open problems in number theory, including spectral distortion of an algebraic number and its relationship to Mahler measure, the monogenic property for the ring of integers of a number field, and the size of elements of small order modulo q.
    Comment: 20 pages.
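
    To make the object of study concrete, the toy sketch below generates PLWE samples in Z_q[x]/(x^n + 1). The parameters are illustrative assumptions, far too small to be secure, and the code is not taken from the paper.

        # Toy PLWE sample generator over Z_q[x]/(x^n + 1) -- illustration only, not secure.
        import numpy as np

        n, q = 16, 257                                # toy ring degree and modulus (assumed)
        rng = np.random.default_rng(0)

        def mul_mod(a, b):
            # Negacyclic convolution: multiply polynomials modulo x^n + 1, reduce coefficients mod q.
            full = np.convolve(a, b)
            lo, hi = full[:n], full[n:]
            hi = np.concatenate([hi, np.zeros(n - len(hi), dtype=full.dtype)])
            return (lo - hi) % q                      # x^n = -1 folds the high part back with a sign flip

        def plwe_sample(s):
            # One PLWE sample (a, b = a*s + e): a uniform, e from a narrow error distribution.
            a = rng.integers(0, q, n)
            e = rng.integers(-2, 3, n)
            return a, (mul_mod(a, s) + e) % q

        s = rng.integers(-1, 2, n)                    # small secret polynomial
        a, b = plwe_sample(s)
        print(a[:4], b[:4])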

    Security Evaluation of Support Vector Machines in Adversarial Environments

    Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering. However, if SVMs are to be incorporated in real-world security systems, they must be able to cope with attack patterns that can either mislead the learning algorithm (poisoning), evade detection (evasion), or gain information about their internal parameters (privacy breaches). The main contributions of this chapter are twofold. First, we introduce a formal general framework for the empirical evaluation of the security of machine-learning systems. Second, according to our framework, we demonstrate the feasibility of evasion, poisoning and privacy attacks against SVMs in real-world security problems. For each attack technique, we evaluate its impact and discuss whether (and how) it can be countered through an adversary-aware design of SVMs. Our experiments are easily reproducible thanks to open-source code that we have made available, together with all the employed datasets, on a public repository.
    Comment: 47 pages, 9 figures; chapter accepted into the book 'Support Vector Machine Applications'.
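
    As a concrete illustration of the evasion setting, the sketch below perturbs a point against the gradient of a linear SVM's decision function until it crosses the boundary. The data, step size, and iteration budget are assumptions for illustration, not the chapter's experimental setup.

        # Gradient-based evasion of a linear SVM on synthetic data (minimal sketch).
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=400, n_features=10, random_state=0)
        clf = SVC(kernel="linear").fit(X, y)

        w = clf.coef_.ravel()                         # gradient of the linear decision function
        x = X[y == 1][0].copy()                       # a point from the "malicious" class
        step = 0.1 * w / np.linalg.norm(w)

        for _ in range(200):                          # bounded attack budget
            if clf.decision_function([x])[0] < 0:     # crossed the boundary: now classified benign
                break
            x -= step                                 # move against the gradient

        print("original:", clf.predict([X[y == 1][0]])[0], "perturbed:", clf.predict([x])[0])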

    Dos and Don'ts of Machine Learning in Computer Security

    With the growing processing power of computing systems and the increasing availability of massive datasets, machine learning algorithms have led to major breakthroughs in many different areas. This development has influenced computer security, spawning a series of work on learning-based security systems, such as for malware detection, vulnerability discovery, and binary code analysis. Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance and render learning-based systems potentially unsuitable for security tasks and practical deployment. In this paper, we look at this problem with critical eyes. First, we identify common pitfalls in the design, implementation, and evaluation of learning-based security systems. We conduct a study of 30 papers from top-tier security conferences within the past 10 years, confirming that these pitfalls are widespread in the current security literature. In an empirical analysis, we further demonstrate how individual pitfalls can lead to unrealistic performance and interpretations, obstructing the understanding of the security problem at hand. As a remedy, we propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible. Furthermore, we identify open problems when applying machine learning in security and provide directions for further research.
    Comment: to appear at the USENIX Security Symposium 2022.
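
    One pitfall commonly raised in this line of work, the base-rate fallacy in reporting detection performance, is easy to reproduce numerically; the figures below are hypothetical and not taken from the paper's study.

        # A detector with 99% TPR and 1% FPR looks excellent, yet at realistic base
        # rates most of its alerts are false alarms (hypothetical numbers).
        tpr, fpr = 0.99, 0.01
        for base_rate in (0.5, 0.01, 0.001):          # fraction of truly malicious samples
            tp = tpr * base_rate
            fp = fpr * (1 - base_rate)
            precision = tp / (tp + fp)
            print(f"base rate {base_rate}: precision of an alert = {precision:.2%}")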

    Graph-based Security and Privacy Analytics via Collective Classification with Joint Weight Learning and Propagation

    Many security and privacy problems can be modeled as a graph classification problem, where the nodes of the graph are classified simultaneously via collective classification. State-of-the-art collective classification methods for such graph-based security and privacy analytics follow a common paradigm: assign weights to the edges of the graph, iteratively propagate reputation scores of nodes across the weighted graph, and use the final reputation scores to classify the nodes. The key challenge is to assign edge weights such that an edge has a large weight if the two corresponding nodes have the same label, and a small weight otherwise. Although collective classification has been studied and applied to security and privacy problems for more than a decade, how to address this challenge is still an open question. In this work, we propose a novel collective classification framework to address this long-standing challenge. We first formulate learning edge weights as an optimization problem, which quantifies the goals about the final reputation scores that we aim to achieve. However, it is computationally hard to solve the optimization problem because the final reputation scores depend on the edge weights in a very complex way. To address the computational challenge, we propose to jointly learn the edge weights and propagate the reputation scores, which is essentially an approximate solution to the optimization problem. We compare our framework with state-of-the-art methods for graph-based security and privacy analytics using four large-scale real-world datasets from various application scenarios, such as Sybil detection in social networks, fake review detection in Yelp, and attribute inference attacks. Our results demonstrate that our framework achieves higher accuracies than state-of-the-art methods with an acceptable computational overhead.
    Comment: Network and Distributed System Security Symposium (NDSS), 2019. Dataset link: http://gonglab.pratt.duke.edu/code-dat
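
    The propagation step of this paradigm can be sketched in a few lines. The toy graph, edge weights, and priors below are illustrative assumptions; the paper's actual contribution is learning those edge weights jointly with the propagation.

        # Weighted reputation propagation from a few labeled nodes (toy example).
        import numpy as np

        W = np.array([[0.0, 0.9, 0.1, 0.0],           # symmetric weighted adjacency matrix
                      [0.9, 0.0, 0.8, 0.0],
                      [0.1, 0.8, 0.0, 0.7],
                      [0.0, 0.0, 0.7, 0.0]])
        prior = np.array([0.5, 0.0, 0.0, -0.5])       # +0.5 known benign, -0.5 known Sybil/fraudulent

        scores = prior.copy()
        for _ in range(20):                           # iterate toward a fixed point
            neighbor_avg = (W @ scores) / np.maximum(W.sum(axis=1), 1e-9)
            scores = np.clip(prior + neighbor_avg, -1.0, 1.0)

        print(np.round(scores, 2), np.where(scores > 0, "benign", "malicious"))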

    Preemptive modelling towards classifying vulnerability of DDoS attack in SDN environment

    Software-Defined Networking (SDN) has become an essential networking concept for scaling up the networking capabilities demanded by the future internet, which is immensely distributed in nature. Owing to its novelty, the SDN paradigm is still shrouded in security problems, and the Distributed Denial-of-Service (DDoS) attack is one of the most prominent threats in the SDN environment. After reviewing existing research solutions for resisting DDoS attacks in SDN, we find that many open issues remain. These issues are identified and addressed in this paper in the form of a preemptive security model. Unlike existing approaches, this model is capable of identifying malicious activity that leads to a DDoS attack by correctly classifying the attack strategy using a machine learning approach. The paper also discusses which machine learning classifiers are most effective against DDoS attacks.
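
    The classification step such a preemptive model relies on might look like the sketch below; the flow-level feature names and the synthetic traffic statistics are assumptions for illustration, not the paper's dataset or model.

        # Classifying SDN flow statistics as benign vs. DDoS (synthetic toy data).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import classification_report
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n = 1000
        # assumed features per flow: [packets/s, bytes/s, duration (s), source-IP entropy]
        benign = rng.normal([50, 4e4, 30, 3.0], [20, 1e4, 10, 0.5], (n, 4))
        ddos   = rng.normal([900, 6e5, 5, 0.8], [200, 1e5, 3, 0.3], (n, 4))
        X = np.vstack([benign, ddos])
        y = np.array([0] * n + [1] * n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
        clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
        print(classification_report(y_te, clf.predict(X_te), target_names=["benign", "ddos"]))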

    Tactics, Techniques and Procedures (TTPs) to Augment Cyber Threat Intelligence (CTI): A Comprehensive Study

    Sharing threat intelligence is now one of the biggest trends in the cyber security industry. Today, no one can deny the necessity of information sharing to fight the cyber battle. The massive production of raw and redundant data, coupled with the increasingly innovative attack vectors of perpetrators, demands an ecosystem that can scrutinize the information, detect threats, and react to take a defensive stance. Having enough sources of threat intelligence, or having too many security tools, is the least of our problems. The main challenge lies in threat knowledge management, interoperability between different security tools, and converting the filtered data into actionable items across multiple devices. Large datasets may help filter the mass of gathered information, open standards may somewhat ease the interoperability issues, and machine learning may partly aid in learning the malicious traits and features of attacks, but how do we coordinate actionable responses across devices, networks, and other ecosystems so as to be proactive rather than reactive? This paper presents a study of the current threat intelligence landscape (Tactic), information sources and basic Indicators of Compromise (IOCs) (Technique), and the STIX and TAXII standards as open frameworks (Procedure) to augment Cyber Threat Intelligence (CTI) sharing.
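
    As a concrete example of the Procedure layer, a single Indicator of Compromise shared as a STIX 2.1 object (which a TAXII server would then transport) can be assembled as below; the field values are illustrative, not drawn from the paper.

        # Build a minimal STIX 2.1 Indicator object as plain JSON (illustrative values).
        import json, uuid
        from datetime import datetime, timezone

        now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
        indicator = {
            "type": "indicator",
            "spec_version": "2.1",
            "id": f"indicator--{uuid.uuid4()}",
            "created": now,
            "modified": now,
            "name": "Known C2 domain",                              # hypothetical IOC
            "pattern": "[domain-name:value = 'c2.example.com']",
            "pattern_type": "stix",
            "valid_from": now,
        }
        print(json.dumps(indicator, indent=2))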

    Data Mining and Machine Learning for Software Engineering

    Software engineering is one of the research areas where data mining can be most readily applied. Developers have attempted to improve software quality by mining and analyzing software data. In every phase of the software development life cycle (SDLC), a huge amount of data is produced, and design, security, or other software problems may occur. Analyzing software data in the early phases of development helps to handle these problems and leads to more accurate and timely delivery of software projects. Various data mining and machine learning studies have been conducted to address software engineering tasks such as defect prediction, effort estimation, etc. This study outlines the open issues and presents related solutions and recommendations for applying data mining and machine learning techniques in software engineering.
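
    One of the tasks mentioned, defect prediction, can be sketched as a small supervised-learning problem over code metrics; the metric names and synthetic data below are assumptions for illustration.

        # Predicting defect-prone modules from static code metrics (toy data).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(2)
        n = 500
        # assumed metrics per module: [lines of code, cyclomatic complexity, recent churn]
        X = np.column_stack([rng.integers(20, 2000, n),
                             rng.integers(1, 40, n),
                             rng.integers(0, 100, n)])
        # toy ground truth: larger, more complex, more churned modules fail more often
        risk = 0.001 * X[:, 0] + 0.05 * X[:, 1] + 0.02 * X[:, 2]
        y = (risk + rng.normal(0, 1, n) > np.median(risk)).astype(int)

        model = make_pipeline(StandardScaler(), LogisticRegression())
        print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))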

    A Survey of Privacy Attacks in Machine Learning

    As machine learning becomes more widely used, the need to study its implications in security and privacy becomes more urgent. Although the body of work in privacy has been steadily growing over the past few years, research on the privacy aspects of machine learning has received less focus than the security aspects. Our contribution in this research is an analysis of more than 40 papers related to privacy attacks against machine learning that have been published during the past seven years. We propose an attack taxonomy, together with a threat model that allows the categorization of different attacks based on the adversarial knowledge and the assets under attack. An initial exploration of the causes of privacy leaks is presented, as well as a detailed analysis of the different attacks. Finally, we present an overview of the most commonly proposed defenses and a discussion of the open problems and future directions identified during our analysis.
    Comment: Under review.
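
    To make one attack family from the taxonomy concrete, the sketch below mounts a naive confidence-threshold membership inference attack against an overfitted model. The data, model, and threshold are illustrative assumptions, not drawn from any surveyed paper.

        # Confidence-threshold membership inference against an overfitted classifier (toy).
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
        X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)
        target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_in, y_in)

        def guess_member(x, threshold=0.9):
            # Guess "member" when the target model is unusually confident on x.
            return target.predict_proba([x]).max() > threshold

        in_rate  = np.mean([guess_member(x) for x in X_in[:200]])   # records seen in training
        out_rate = np.mean([guess_member(x) for x in X_out[:200]])  # unseen records
        print(f"flagged as members: training {in_rate:.0%} vs. unseen {out_rate:.0%}")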