    Security pattern evaluation

    Current security pattern evaluation techniques are demonstrated to be incomplete with respect to quantitative measurement and comparison. A dynamic testbed system is proposed as a potential mechanism for evaluating patterns within a constrained environment.

    Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

    Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms. Comment: Accepted for publication in Pattern Recognition, 2018.
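
    The test-time attacks surveyed above perturb an input just enough to flip a classifier's decision. The minimal sketch below illustrates the idea against a toy linear classifier using a fast-gradient-sign style perturbation; the model, data, and epsilon are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of a test-time adversarial example ("wild pattern") against a
# toy linear classifier. Everything here (weights, input, epsilon) is made up
# for illustration; the survey discusses such attacks in general terms only.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier f(x) = sign(w.x + b); pretend it was already trained.
w = rng.normal(size=20)
b = 0.1

def predict(x):
    return 1 if w @ x + b > 0 else -1

# Pick a benign input that the classifier labels +1.
x = rng.normal(size=20)
if predict(x) == -1:
    x = -x

# For a linear model the gradient of the score w.r.t. the input is simply w,
# so a fast-gradient-sign style perturbation pushes every feature against it.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)

print("clean prediction:      ", predict(x))      # 1
print("adversarial prediction:", predict(x_adv))  # typically flips to -1
```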

    Finding Security Bugs in Web Applications using a Catalog of Access Control Patterns

    We propose a specification-free technique for finding missing security checks in web applications using a catalog of access control patterns in which each pattern models a common access control use case. Our implementation, Space, checks that every data exposure allowed by an application's code matches an allowed exposure from a security pattern in our catalog. The only user-provided input is a mapping from application types to the types of the catalog; the rest of the process is entirely automatic. In an evaluation on the 50 most watched Ruby on Rails applications on GitHub, Space reported 33 possible bugs: 23 previously unknown security bugs and 10 false positives. National Science Foundation (U.S.) (Grant 0707612)
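
    The core check described above (every exposure the application allows must match an allowed exposure in the pattern catalog, modulo a user-supplied type mapping) can be pictured with the small sketch below. The catalog entries, exposure tuples, and type mapping are invented for illustration; the actual tool analyzes Ruby on Rails code.

```python
# Hedged sketch of a catalog-based access control check, loosely modeled on the
# idea in the abstract. All names and data below are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Exposure:
    viewer: str    # catalog-level type of the principal seeing the data
    data: str      # catalog-level type of the exposed data
    relation: str  # required relation between viewer and data ("owner", "public", ...)

# Catalog of access control patterns: the set of exposures that are allowed.
CATALOG = {
    Exposure("User", "Profile", "owner"),
    Exposure("User", "Post", "public"),
    Exposure("Admin", "Post", "any"),
}

# User-provided mapping from application types to catalog types.
TYPE_MAP = {"Account": "User", "BlogPost": "Post", "Bio": "Profile"}

# Exposures (hypothetically) extracted from the application's code.
app_exposures = [
    ("Account", "BlogPost", "public"),
    ("Account", "Bio", "any"),   # broader than the catalog allows -> flagged
]

def check(exposures):
    for viewer, data, relation in exposures:
        mapped = Exposure(TYPE_MAP[viewer], TYPE_MAP[data], relation)
        if mapped not in CATALOG:
            print(f"possible missing access check: {viewer} sees {data} via '{relation}'")

check(app_exposures)
```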

    Mining patterns of unsatisfiable constraints to detect infeasible paths

    Detection of infeasible paths is required in many areas, including test coverage analysis, test case generation, and security vulnerability analysis. Existing approaches typically use static analysis coupled with symbolic evaluation, heuristics, or path-pattern analysis. This paper is related to these approaches but has a different objective: to analyze the code of real systems and build patterns of unsatisfiable constraints found in infeasible paths. The resulting patterns can be used to detect infeasible paths without a constraint solver or evaluation of the function calls involved, thus improving scalability. The patterns can be built gradually. Evaluation of the proposed approach shows promising results.
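
    The scalability argument above rests on replacing solver calls with syntactic matching against previously mined patterns of contradictory constraints. The toy sketch below shows one such match; the pattern and the path encoding are invented for illustration, not taken from the paper.

```python
# Hedged sketch: flag a path as infeasible by matching its branch conditions
# against a mined pattern of unsatisfiable constraints, with no solver call.
# The pattern below (x > k1 together with x <= k2 where k2 <= k1) is hypothetical.

# A path is modeled as a list of (variable, operator, constant) branch conditions.
path = [("x", ">", 10), ("y", "==", 0), ("x", "<=", 5)]

def contradicts(c1, c2):
    """Mined syntactic pattern: same variable, '>' k1 paired with '<=' k2 <= k1."""
    (v1, op1, k1), (v2, op2, k2) = c1, c2
    return v1 == v2 and op1 == ">" and op2 == "<=" and k2 <= k1

def looks_infeasible(conds):
    return any(contradicts(a, b) for i, a in enumerate(conds) for b in conds[i + 1:])

print(looks_infeasible(path))  # True: x > 10 and x <= 5 cannot both hold
```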

    Security Evaluation of Pattern Classifiers under Attack

    Pattern classification systems are commonly used in adversarial applications, like biometric authentication, network intrusion detection, and spam filtering, in which data can be purposely manipulated by humans to undermine their operation. As this adversarial scenario is not taken into account by classical design methods, pattern classification systems may exhibit vulnerabilities whose exploitation may severely affect their performance and consequently limit their practical utility. In this paper, we address one of the main open issues: evaluating, at the design phase, the security of pattern classifiers, that is, the performance degradation they may incur under potential attacks during operation. We propose a framework for the empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature, and give examples of its use in three real applications. Reported results show that security evaluation can provide a more complete understanding of the classifier's behavior in adversarial environments, and lead to better design choices.
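
    The kind of design-time security evaluation described above amounts to measuring how accuracy degrades as the attacker is allowed to manipulate samples more aggressively. The sketch below illustrates this with a toy nearest-centroid detector and a simple mimicry-style attack model; the data, classifier, and attack strengths are invented for illustration and are not the paper's framework.

```python
# Hedged sketch of an empirical security evaluation curve: detection rate of a
# toy classifier versus increasing attack strength. All details are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
dim, n = 10, 500

# Synthetic "legitimate" (label 0) and "malicious" (label 1) samples.
legit = rng.normal(loc=0.0, size=(n, dim))
malic = rng.normal(loc=2.0, size=(n, dim))

# Nearest-centroid classifier trained on clean data.
c_legit, c_malic = legit.mean(axis=0), malic.mean(axis=0)

def predict(X):
    d0 = np.linalg.norm(X - c_legit, axis=1)
    d1 = np.linalg.norm(X - c_malic, axis=1)
    return (d1 < d0).astype(int)

# Attack model: the adversary moves each malicious sample a fraction `strength`
# of the way toward the legitimate centroid (e.g. disguising spam as ham).
for strength in [0.0, 0.25, 0.5, 0.75]:
    attacked = malic + strength * (c_legit - malic)
    detection_rate = (predict(attacked) == 1).mean()
    print(f"attack strength {strength:.2f}: detection rate {detection_rate:.2f}")
```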