8 research outputs found

    Impact of IT Monoculture on Behavioral End Host Intrusion Detection

    In this paper, we study the impact of today's IT policies, defined upon a monoculture approach, on the performance of end-host anomaly detectors. This approach leads to the uniform configuration of host intrusion detection systems (HIDS) across all hosts in an enterprise network. We assess the performance impact of this policy from the individual's point of view by analyzing network traces collected from 350 enterprise users. We uncover a great deal of diversity in the user population in terms of "tail" behavior, i.e., the component that matters for anomaly detection systems. We demonstrate that the monoculture approach to HIDS configuration results in users experiencing wildly different false positive and false negative rates. We then introduce new policies that leverage this diversity and show that they not only dramatically improve performance for the vast majority of users, but also reduce the number of false positives arriving at centralized IT operations centers, and can reduce attack strength.
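The gap between a uniform and a per-user detection policy can be sketched with a toy example (entirely illustrative; the threshold rule and all numbers below are assumptions, not the paper's method):

```python
import statistics

# Illustrative sketch only: each user's "feature" is a daily count of new
# destination hosts, and an alarm fires when a day's count exceeds the
# threshold. All data here is synthetic.

def false_positive_rate(history, threshold):
    """Fraction of benign observations that would raise an alarm."""
    return sum(1 for x in history if x > threshold) / len(history)

# Two benign users with very different tail behavior.
light_user = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
heavy_user = [20, 35, 25, 40, 30, 28, 33, 22, 38, 27]

# Monoculture policy: one threshold for everyone.
uniform = 15
fp_light = false_positive_rate(light_user, uniform)   # 0.0 - never alarms
fp_heavy = false_positive_rate(heavy_user, uniform)   # 1.0 - every day alarms

# Diversity-aware policy: threshold derived from each user's own history
# (mean + 2 sample standard deviations, an assumed rule).
def per_user_threshold(history):
    return statistics.mean(history) + 2 * statistics.stdev(history)

fp_heavy_tuned = false_positive_rate(heavy_user, per_user_threshold(heavy_user))
```

Under the uniform threshold the heavy user alarms every day, while a threshold fit to that user's own history removes the false positives entirely in this toy data.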

    Mitigating Malicious Packets Attack via Vulnerability-aware Heterogeneous Network Devices Assignment

    Due to the high homogeneity of current network devices, an entire network can be compromised once a single node is compromised through an exploited vulnerability (e.g., a malicious-packets attack). Many existing works adopt a heterogeneity philosophy to improve network survivability; for example, “diverse variants” are assigned to nodes in the network. However, these works assume that diverse variants share no common vulnerabilities, an assumption that does not hold in real networks. Consequently, existing diverse-variant deployment schemes cannot achieve optimal performance. This paper considers that some variants have common vulnerabilities and proposes a novel solution called Vulnerability-aware Heterogeneous Network Devices Assignment (VHNDA). Firstly, we introduce a new metric named Expected Infected Ratio (EIR) to measure the impact of the spread of malicious-packets attacks on the network. Secondly, we use EIR to model the vulnerability-aware diverse variants deployment problem as an integer-programming optimization problem, which is NP-hard. We then design a heuristic algorithm named Simulated Annealing Vulnerability-aware Diverse Variants Deployment (SA-VDVD) to address the problem. Finally, we present a low-complexity algorithm based on graph segmentation, named Graph Segmentation-based Simulated Annealing Vulnerability-aware Diverse Variants Deployment (GSSA-VDVD), for large-scale networks. The experimental results demonstrate that, compared with baseline algorithms, the proposed algorithms effectively restrain the spread of malicious-packets attacks at a reasonable computation cost.
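A hedged sketch of the abstract's idea (not the authors' SA-VDVD code; every name and parameter below is an assumption): each node receives one software variant, variants may share vulnerabilities, and a proxy for the Expected Infected Ratio averages, over vulnerabilities, the largest connected cluster of susceptible nodes as a fraction of the network. Simulated annealing then searches for a low-EIR assignment:

```python
import math
import random
from collections import deque

def largest_susceptible_cluster(adj, assign, vuln_of, v):
    """Size of the biggest connected cluster of nodes susceptible to v."""
    susceptible = {n for n in adj if v in vuln_of[assign[n]]}
    seen, best = set(), 0
    for start in susceptible:
        if start in seen:
            continue
        seen.add(start)
        size, queue = 0, deque([start])
        while queue:
            node = queue.popleft()
            size += 1
            for nb in adj[node]:
                if nb in susceptible and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        best = max(best, size)
    return best

def eir_proxy(adj, assign, vuln_of, vulns):
    """Assumed EIR stand-in: mean largest susceptible cluster, normalized."""
    n = len(adj)
    return sum(largest_susceptible_cluster(adj, assign, vuln_of, v)
               for v in vulns) / (len(vulns) * n)

def anneal(adj, vuln_of, vulns, steps=500, t0=1.0, cooling=0.99, seed=0):
    """Simulated annealing over variant assignments (illustrative only)."""
    rng = random.Random(seed)
    variants, nodes = list(vuln_of), list(adj)
    assign = {n: rng.choice(variants) for n in nodes}
    cost = eir_proxy(adj, assign, vuln_of, vulns)
    best, best_cost, t = dict(assign), cost, t0
    for _ in range(steps):
        node = rng.choice(nodes)
        old = assign[node]
        assign[node] = rng.choice(variants)   # propose a reassignment
        new_cost = eir_proxy(adj, assign, vuln_of, vulns)
        # Metropolis criterion: always accept improvements, sometimes accept
        # worse moves to escape local optima.
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = dict(assign), cost
        else:
            assign[node] = old
        t *= cooling
    return best, best_cost
```

On an 8-node ring with two variants carrying disjoint vulnerabilities, a monoculture scores 0.5 under this proxy, while alternating the variants (the optimum) scores 0.125.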

    Would Diversity Really Increase the Robustness of the Routing Infrastructure Against Software Defects?

    Today’s Internet routing infrastructure exhibits high homogeneity. This constitutes a serious threat to the resilience of the network, since a bug or security vulnerability in an implementation could make all routers running that implementation become simultaneously unusable. This situation could arise as a result of a defective software upgrade or a denial-of-service attack. Diversity has been proposed as a solution to increase resilience to software defects, but the benefits have not been clearly studied. In this paper, we use a graph theoretic approach to study the benefits of diversity for the robustness of a network, where robustness is the property of a network staying connected under a software failure. We address three fundamental questions: 1) How do we measure the robustness of a network under such failures? 2) How much diversity is needed to guarantee a certain degree of robustness? 3) Is there enough diversity already in the network or do we need to introduce more? We find that a small degree of diversity can provide good robustness. In particular, for a Tier-1 ISP network, five implementations suffice: two for the backbone routers and three for the access routers. We learn that some networks may already have enough diversity, but the diversity is not adequately used for robustness. We observe that the best way to apply diversity is to partition the network into contiguous regions using the same implementation, separating backbone and access routers and taking into account whether a router is replicated. We evaluate our approach on multiple real ISP topologies, including the topology of a Tier-1 ISP.
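The robustness question above can be illustrated with a simple connectivity measure (our own hedged formulation, not the paper's metric): the fraction of router pairs that remain connected after every router running a failed implementation is removed.

```python
from collections import deque

def robustness(adj, impl_of, failed):
    """Fraction of all node pairs still connected after every router
    running the failed implementation is removed (illustrative metric)."""
    alive = {n for n in adj if impl_of[n] != failed}
    n = len(adj)
    total_pairs = n * (n - 1) // 2
    seen, connected = set(), 0
    # BFS over each surviving component; a component of size s
    # contributes s*(s-1)/2 connected pairs.
    for start in alive:
        if start in seen:
            continue
        seen.add(start)
        size, queue = 0, deque([start])
        while queue:
            node = queue.popleft()
            size += 1
            for nb in adj[node]:
                if nb in alive and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        connected += size * (size - 1) // 2
    return connected / total_pairs

# Toy 4-router line: 0 - 1 - 2 - 3, two implementations X and Y.
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
contiguous  = {0: "X", 1: "X", 2: "Y", 3: "Y"}   # one region per implementation
alternating = {0: "X", 1: "Y", 2: "X", 3: "Y"}
```

Consistent with the paper's observation about contiguous regions: if implementation X fails, the contiguous assignment keeps routers 2 and 3 connected (robustness 1/6), while the alternating assignment isolates every survivor (robustness 0).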

    Common Attack Surface Detection

    In the current software development market, much software is developed using a copy-paste mechanism, with little to no change made to the reused code. This practice can cause severe security issues, since one fragment of code containing a vulnerability may cause the same vulnerability to appear in many other applications sharing the cloned fragment. The concept of relying on software diversity for security may also be compromised by this trend, since seemingly different software may in fact share vulnerable code fragments. Although there exist efforts on detecting cloned code fragments, solutions for formally characterizing their specific impact on security are lacking. In this thesis, we revisit the concept of software diversity from a security viewpoint. Specifically, we define the novel concept of common attack surface to model the relative degree to which a pair of software applications may share potentially vulnerable code fragments. To implement the concept, we develop an automated tool, Dupsec, to efficiently identify the common attack surface between any given pair of software applications with minimal human intervention. Finally, we conduct experiments by applying our tool to a large number of open-source applications. Our results demonstrate that many seemingly unrelated real-world applications indeed share significant common attack surface.
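A minimal sketch of the idea (Dupsec itself is not reproduced here; the shingling scheme below is an assumption): approximate the common attack surface of two code bases as the Jaccard overlap of fingerprints taken over normalized line windows.

```python
import hashlib
import re

def fingerprints(source, window=3):
    """Hash every window of `window` consecutive normalized, non-blank
    lines; whitespace differences are ignored (assumed normalization)."""
    lines = [re.sub(r"\s+", " ", line).strip() for line in source.splitlines()]
    lines = [line for line in lines if line]
    shingles = set()
    for i in range(max(0, len(lines) - window + 1)):
        chunk = "\n".join(lines[i:i + window])
        shingles.add(hashlib.sha1(chunk.encode()).hexdigest())
    return shingles

def common_attack_surface(src_a, src_b, window=3):
    """Jaccard overlap of the two fingerprint sets, in [0, 1]."""
    fa, fb = fingerprints(src_a, window), fingerprints(src_b, window)
    if not fa or not fb:
        return 0.0
    return len(fa & fb) / len(fa | fb)
```

Identical sources score 1.0 and fully distinct ones 0.0; cloned fragments inside otherwise different applications land in between.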

    Anti-fragile ICT Systems

    This book introduces a novel approach to the design and operation of large ICT systems. It views the technical solutions and their stakeholders as complex adaptive systems and argues that traditional risk analyses cannot predict all future incidents with major impacts. To avoid unacceptable events, it is necessary to establish and operate anti-fragile ICT systems that limit the impact of all incidents, and which learn from small-impact incidents how to function increasingly well in changing environments. The book applies four design principles and one operational principle to achieve anti-fragility for different classes of incidents. It discusses how systems can achieve high availability, prevent malware epidemics, and detect anomalies. Analyses of Netflix’s media streaming solution, Norwegian telecom infrastructures, e-government platforms, and Numenta’s anomaly detection software show that cloud computing is essential to achieving anti-fragility for classes of events with negative impacts.

    Developing Robust Models, Algorithms, Databases and Tools With Applications to Cybersecurity and Healthcare

    As society and technology become increasingly interconnected, so does the threat landscape. Once-isolated threats now pose serious concerns to highly interdependent systems, highlighting the fundamental need for robust machine learning. This dissertation contributes novel tools, algorithms, databases, and models—through the lens of robust machine learning—in a research effort to solve large-scale societal problems affecting millions of people in the areas of cybersecurity and healthcare. (1) Tools: We develop TIGER, the first comprehensive graph robustness toolbox; and our ROBUSTNESS SURVEY identifies critical yet missing areas of graph robustness research. (2) Algorithms: Our survey and toolbox reveal that existing work has overlooked lateral attacks on computer authentication networks. We develop D2M, the first algorithmic framework to quantify and mitigate network vulnerability to lateral attacks by modeling lateral attack movement from a graph theoretic perspective. (3) Databases: To prevent lateral attacks altogether, we develop MALNET-GRAPH, the world’s largest cybersecurity graph database—containing over 1.2M graphs across 696 classes—and show the first large-scale results demonstrating the effectiveness of malware detection through a graph medium. We extend MALNET-GRAPH by constructing the largest binary-image cybersecurity database (MALNET-IMAGE)—containing 1.2M images, 133× more images than the only other public database—enabling new discoveries in malware detection and classification research previously restricted to a few industry labs. (4) Models: To protect systems from adversarial attacks, we develop UNMASK, the first model that flags semantic incoherence in computer vision systems, which detects up to 96.75% of attacks and defends the model by correctly classifying up to 93% of attacks. Inspired by UNMASK’s ability to protect computer vision systems from adversarial attack, we develop REST, which creates noise-robust models through a novel combination of adversarial training, spectral regularization, and sparsity regularization. In the presence of noise, our method improves state-of-the-art sleep stage scoring by 71%—allowing us to diagnose sleep disorders earlier and in the home environment—while using 19× fewer parameters and 15× fewer MFLOPS. Our work has made significant impact on industry and society: the UNMASK framework laid the foundation for a multi-million dollar DARPA GARD award; the TIGER toolbox for graph robustness analysis is part of the Nvidia Data Science Teaching Kit, available to educators around the world; we released MALNET, the world’s largest graph classification database with 1.2M graphs; and the D2M framework has had major impact on Microsoft products, inspiring changes to the products’ approach to lateral attack detection.
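As a hypothetical illustration of the graph-theoretic view of lateral movement (not the D2M code; the scoring rule is an assumption), exposure can be measured as the fraction of other machines an attacker can reach from a compromised host by following login edges:

```python
from collections import deque

def lateral_exposure(edges, start):
    """Fraction of the other machines reachable from `start` via directed
    login edges (illustrative metric; assumes at least two machines)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, [])
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nb in adj.get(node, []):
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return (len(seen) - 1) / (len(adj) - 1)
```

On the toy graph a→b→c, d→c, compromising `a` exposes 2 of the 3 other machines; removing the b→c login edge (a mitigation in this framing) cuts the exposure to 1 of 3.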