
    Network-wide Configuration Synthesis

    Computer networks are hard to manage. Given a set of high-level requirements (e.g., reachability, security), operators have to manually figure out the individual configuration of potentially hundreds of devices running complex distributed protocols so that they, collectively, compute a compatible forwarding state. Not surprisingly, operators often make mistakes which lead to downtimes. To address this problem, we present a novel synthesis approach that automatically computes correct network configurations that comply with the operator's requirements. We capture the behavior of existing routers along with the distributed protocols they run in stratified Datalog. Our key insight is to reduce the problem of finding correct input configurations to the task of synthesizing inputs for a stratified Datalog program. To solve this synthesis task, we introduce a new algorithm that synthesizes inputs for stratified Datalog programs. This algorithm is applicable beyond the domain of networks. We leverage our synthesis algorithm to construct the first network-wide configuration synthesis system, called SyNET, that supports multiple interacting routing protocols (OSPF and BGP) and static routes. We show that our system is practical and can infer correct input configurations, in a reasonable amount of time, for networks of realistic size (> 50 routers) that forward packets for multiple traffic classes. (Comment: 24 pages; short version published in CAV 2017.)
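
    A minimal sketch of the core idea, assuming nothing beyond the abstract: protocol behavior is a stratified-Datalog-style program, its fixpoint is the forwarding state, and synthesis searches for inputs whose fixpoint meets the requirements. The toy network, the requirement, and the brute-force enumeration below are illustrative assumptions, not SyNET's actual algorithm.

```python
# Hedged sketch: input synthesis for a Datalog-like reachability program.
# reach(X,Y) :- link(X,Y).
# reach(X,Z) :- link(X,Y), reach(Y,Z).
from itertools import product

def reachable(links):
    """Compute the fixpoint of the two reachability rules above."""
    reach = set(links)
    changed = True
    while changed:
        changed = False
        for (x, y) in links:
            for (a, b) in list(reach):
                if y == a and (x, b) not in reach:
                    reach.add((x, b))
                    changed = True
    return reach

# Requirement: r1 must reach r3, and r1 must NOT reach r4.
candidate_links = [("r1", "r2"), ("r2", "r3"), ("r2", "r4"), ("r3", "r4")]

# Naive synthesis: enumerate which candidate links the input enables.
for bits in product([0, 1], repeat=len(candidate_links)):
    links = {l for l, b in zip(candidate_links, bits) if b}
    fixpoint = reachable(links)
    if ("r1", "r3") in fixpoint and ("r1", "r4") not in fixpoint:
        print("synthesized input:", sorted(links))
        break
```

    The paper's contribution is precisely a search procedure that scales far beyond this brute-force loop while preserving the same input/fixpoint framing.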

    Sharing Computer Network Logs for Security and Privacy: A Motivation for New Methodologies of Anonymization

    Logs are one of the most fundamental resources for any security professional. It is widely recognized by government and industry that it is both beneficial and desirable to share logs for the purpose of security research. However, such sharing is not happening, or at least not to the degree or magnitude desired. Organizations are reluctant to share logs because of the risk of exposing sensitive information to potential attackers. We believe this reluctance remains high because current anonymization techniques are weak and one-size-fits-all (or, better put, one size tries to fit all). We must develop standards and make anonymization available at varying levels, striking a balance between privacy and utility. Organizations have different needs and trust other organizations to different degrees. They must be able to map multiple anonymization levels with defined risks to the trust levels they share with (would-be) receivers. It is not until there are industry standards for multiple levels of anonymization that we will be able to move forward and achieve the goal of widespread sharing of logs for security researchers. (Comment: 17 pages, 1 figure.)
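
    As a hedged illustration of what "varying levels" could mean for one common log field, the sketch below truncates IPv4 addresses more aggressively at each level; the specific level-to-prefix mapping is an assumption for illustration, not a proposed standard.

```python
# Illustrative multi-level anonymization of IPv4 addresses in logs:
# higher levels remove more bits, trading utility for privacy.
import ipaddress

def anonymize_ip(addr: str, level: int) -> str:
    """level 0: unchanged; 1: keep /24; 2: keep /16; 3: keep /8;
       4+: fully suppressed. (Assumed semantics, for illustration.)"""
    if level == 0:
        return addr
    if level >= 4:
        return "x.x.x.x"
    keep_bits = {1: 24, 2: 16, 3: 8}[level]
    net = ipaddress.ip_network(f"{addr}/{keep_bits}", strict=False)
    return str(net.network_address)

for lvl in range(5):
    print(lvl, anonymize_ip("192.0.2.57", lvl))
```

    An organization could then publish a mapping from receiver trust level to anonymization level, which is exactly the kind of defined-risk agreement the paper argues standards should enable.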

    Misconfiguration in Firewalls and Network Access Controls: Literature Review

    Firewalls and network access controls play important roles in security control and protection. Those firewalls may create a false sense or state of protection if they are improperly configured. One of the major configuration problems in firewalls is related to misconfiguration in the access control rules added to the firewall to control network traffic. In this paper, we evaluated recent research trends and open challenges related to firewalls and access controls in general and misconfiguration problems in particular. With the recent advances in next-generation (NG) firewalls, firewall rules can be auto-generated based on networks and threats. Nonetheless, due to the large number of rules in any medium to large network, rule misconfiguration may occur for several reasons and will impact the performance of the firewall and the overall network and protection efficiency.
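
    One classic instance of such rule misconfiguration is shadowing, where an earlier rule matches a superset of a later rule's traffic with a conflicting action, so the later rule can never fire. A minimal detection sketch; the rule format (source network, port, action) and helper names are illustrative assumptions:

```python
# Detect shadowed rules in an ordered first-match rule list.
import ipaddress

Rule = tuple  # (src_network: str, port: int or None, action: str)

def covers(r1: Rule, r2: Rule) -> bool:
    """True if r1 matches every packet that r2 matches."""
    s1 = ipaddress.ip_network(r1[0])
    s2 = ipaddress.ip_network(r2[0])
    port_ok = r1[1] is None or r1[1] == r2[1]  # None = any port
    return s2.subnet_of(s1) and port_ok

def shadowed(rules):
    """Indices of rules shadowed by an earlier, conflicting rule."""
    return [j for j, rj in enumerate(rules)
            if any(covers(ri, rj) and ri[2] != rj[2]
                   for ri in rules[:j])]

rules = [("10.0.0.0/8", None, "deny"),
         ("10.1.2.0/24", 443, "allow")]  # unreachable: deny above covers it
print(shadowed(rules))  # -> [1]
```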

    Toward Automated Network Management and Operations.

    Network management plays a fundamental role in the operation and well-being of today's networks. Despite the best effort of existing support systems and tools, management operations in large service provider and enterprise networks remain mostly manual. Due to the growing scale of modern networks, more complex network functionalities, and higher network dynamics, human operators are increasingly short-handed. As a result, network misconfigurations are frequent, and can result in violated service-level agreements and degraded user experience. In this dissertation, we develop various tools and systems to understand, automate, augment, and evaluate network management operations. Our thesis is that by introducing formal abstractions, like deterministic finite automata, Petri nets and databases, we can build new support systems that systematically capture domain knowledge, automate network management operations, enforce network-wide properties to prevent misconfigurations, and simultaneously reduce manual effort. The theme for our systems is to build a knowledge plane based on the proposed abstractions, allowing network-wide reasoning and guidance for network operations. More importantly, the proposed systems require no modification to the existing Internet infrastructure and network devices, simplifying adoption. We show that our systems improve both timeliness and correctness in performing realistic and large-scale network operations. Finally, to address the current limitations and difficulty of evaluating novel network management systems, we have designed a distributed network testing platform that relies on network and device virtualization to provide realistic environments and isolation from production networks. (Ph.D. dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/78837/1/chenxu_1.pdf)
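
    To make the flavor of these abstractions concrete, here is a minimal sketch (not the dissertation's actual system) of encoding an operational procedure as a deterministic finite automaton, so a support system can reject out-of-order actions before they cause a misconfiguration. The states and actions are assumptions for illustration.

```python
# A maintenance workflow as a DFA: only the listed (state, action)
# transitions are legal; anything else is flagged before execution.
TRANSITIONS = {
    ("idle", "open_ticket"): "planned",
    ("planned", "drain_traffic"): "drained",
    ("drained", "apply_config"): "configured",
    ("configured", "verify"): "done",
}

def run(actions, state="idle"):
    for a in actions:
        nxt = TRANSITIONS.get((state, a))
        if nxt is None:
            raise ValueError(f"action {a!r} not allowed in state {state!r}")
        state = nxt
    return state

print(run(["open_ticket", "drain_traffic", "apply_config", "verify"]))
try:
    run(["open_ticket", "apply_config"])  # config change before draining
except ValueError as e:
    print("rejected:", e)
```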

    Modelling and Analysis of Network Security Policies

    Nowadays, computers and network communications have a pervasive presence in all our daily activities. Their correct configuration in terms of security is becoming more and more complex due to the growing number and variety of services present in a network. Generally, the security configuration of a computer network is dictated by specifying the policies of the security controls (e.g. firewall, VPN gateway) in the network. This implies that the specification of the network security policies is a crucial step to avoid errors in network configuration (e.g., blocking legitimate traffic, permitting unwanted traffic or sending insecure data). In the literature, an anomaly is an incorrect policy specification that an administrator may introduce into the network. In this thesis, we use the term policy anomaly for any conflict (e.g. two triggered policy rules enforcing contradictory actions), error (e.g. a policy cannot be enforced because it requires a cryptographic algorithm not supported by the security controls) or sub-optimization (e.g. redundant policies) that may arise in the policy specification phase. Security administrators, thus, have to face the hard job of correctly specifying the policies, which requires a high level of competence. Several studies have confirmed, in fact, that many security breaches and breakdowns are attributable to administrators' errors. Several approaches have been proposed to analyze the presence of anomalies among policy rules, in order to enforce a correct security configuration. However, we have identified two limitations of such approaches. On one hand, current literature identifies only the anomalies among policies of a single security technology (e.g., IPsec, TLS), while a network is generally configured with many technologies. On the other hand, existing approaches work on a single policy type, also named domain (e.g., filtering, communication protection). Unfortunately, the complexity of real systems is not self-contained and each network security control may affect the behavior of other controls in the same network. The objective of this PhD work was to investigate novel approaches for modelling security policies and their anomalies, and formal techniques of anomaly analysis. We present in this dissertation our contributions to the current policy analysis state of the art and the achieved results. A first contribution was the definition of a new class of policy anomalies, i.e. the inter-technology anomalies, which arise in a set of policies of multiple security technologies. We also provided a formal model able to detect these new types of anomalies. One of the results achieved by applying the inter-technology analysis to the communication protection policies was the categorization of twelve new types of anomalies. The second result of this activity was derived from an empirical assessment that proved the practical significance of detecting such new anomalies. The second contribution of this thesis was the definition of a new type of policy analysis, named inter-domain analysis, which identifies any anomaly that may arise among different policy domains. We improved the state of the art by proposing a possible model to detect the inter-domain anomalies, which is a generalization of the aforementioned inter-technology model. In particular, we defined the Unified Model for Policy Analysis (UMPA) to perform the inter-domain analysis by extending the analysis model applied for a single policy domain to a comprehensive analysis of anomalies among many policy domains. The result of this last part of our dissertation was to improve the effectiveness of the analysis process. Thanks to the inter-domain analysis, indeed, administrators can detect, in a simple and customizable way, a greater set of anomalies than they could detect by running any other model individually.
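
    A toy illustration of an inter-domain anomaly in this spirit: a filtering policy and a communication-protection policy are each valid in isolation, yet together the filter drops the very traffic the protection policy needs. The data model and field names below are assumptions for illustration, not UMPA itself.

```python
# Cross-domain check: does any deny rule in the filtering domain block
# a port that a protection (e.g. IPsec tunnel) policy requires?
filtering = [{"dst_port": 500, "action": "deny"}]    # blocks IKE signaling
protection = [{"peer": "gw2", "requires_port": 500}]  # tunnel needs IKE

def inter_domain_anomalies(filtering, protection):
    anomalies = []
    for p in protection:
        for f in filtering:
            if f["action"] == "deny" and f["dst_port"] == p["requires_port"]:
                anomalies.append((f, p))
    return anomalies

print(inter_domain_anomalies(filtering, protection))
```

    Single-domain analysis would flag nothing here, since neither rule set is internally inconsistent; the anomaly only appears when both domains are examined together.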

    Improving efficiency and resilience in large-scale computing systems through analytics and data-driven management

    Applications running in large-scale computing systems such as high performance computing (HPC) or cloud data centers are essential to many aspects of modern society, from weather forecasting to financial services. As the number and size of data centers increase with the growing computing demand, scalable and efficient management becomes crucial. However, data center management is a challenging task due to the complex interactions between applications, middleware, and hardware layers such as processors, network, and cooling units. This thesis claims that to improve robustness and efficiency of large-scale computing systems, significantly higher levels of automated support than what is available in today's systems are needed, and this automation should leverage the data continuously collected from various system layers. Towards this claim, we propose novel methodologies to automatically diagnose the root causes of performance and configuration problems and to improve efficiency through data-driven system management. We first propose a framework to diagnose software and hardware anomalies that cause undesired performance variations in large-scale computing systems. We show that by training machine learning models on resource usage and performance data collected from servers, our approach successfully diagnoses 98% of the injected anomalies at runtime in real-world HPC clusters with negligible computational overhead. We then introduce an analytics framework to address another major source of performance anomalies in cloud data centers: software misconfigurations. Our framework discovers and extracts configuration information from cloud instances such as containers or virtual machines. This is the first framework to provide comprehensive visibility into software configurations in multi-tenant cloud platforms, enabling systematic analysis for validating the correctness of software configurations. This thesis also contributes to the design of robust and efficient system management methods that leverage continuously monitored resource usage data. To improve performance under power constraints, we propose a workload- and cooling-aware power budgeting algorithm that distributes the available power among servers and cooling units in a data center, achieving up to 21% improvement in throughput per Watt compared to the state-of-the-art. Additionally, we design a network- and communication-aware HPC workload placement policy that reduces communication overhead by up to 30% in terms of hop-bytes compared to existing policies.
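
    A toy sketch of the diagnosis idea, not the thesis framework: train a classifier on per-node resource usage features so anomalous monitoring windows can be labeled at runtime. The features, labels and synthetic data are assumptions; scikit-learn is used here only for brevity.

```python
# Supervised anomaly diagnosis from resource-usage features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# features per monitoring window: [cpu_util, mem_util, net_util]
healthy = rng.normal([0.5, 0.4, 0.3], 0.05, size=(200, 3))
memleak = rng.normal([0.5, 0.9, 0.3], 0.05, size=(200, 3))  # high memory
X = np.vstack([healthy, memleak])
y = ["healthy"] * 200 + ["memleak"] * 200

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[0.52, 0.88, 0.31]]))  # -> ['memleak']
```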

    Systems for characterizing Internet routing

    Today the Internet plays a critical role in our lives; we rely on it for communication, business, and, more recently, smart home operations. Users expect high performance and availability from the Internet. To meet such high demands, all Internet components, including routing, must operate at peak efficiency. However, events that hamper the routing system over the Internet are very common, causing millions of dollars of financial loss, traffic exposed to attacks, or even loss of national connectivity. Moreover, there is sparse real-time detection and reporting of such events for the public. A key challenge in addressing such issues is the lack of a methodology to study, evaluate and characterize Internet connectivity. While the many networks operating autonomously have made the Internet robust, the complexity of understanding how users interconnect, interact and retrieve content has also increased. Characterizing how data is routed, measuring dependency on external networks, and detecting outages quickly have become necessary, using public measurement infrastructures and data sources. From a regulatory standpoint, there is an immediate need for systems to detect and report routing events where a content provider's routing policies may run afoul of state policies. In this dissertation, we design, build and evaluate systems that leverage existing infrastructure and report routing events in near-real time. In particular, we focus on geographic routing anomalies (i.e., detours), routing failures (i.e., outages), and measuring structural changes in routing policies.
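
    As a small illustration of the detour case, the sketch below flags a path between endpoints in the same country whose intermediate hops leave that country. The hop-to-country mapping is assumed to come from an external geolocation source; the check itself is an illustrative simplification.

```python
# Geographic detour check over a geolocated traceroute path.
def is_detour(hop_countries):
    """hop_countries: ISO country code per hop, source to destination."""
    if not hop_countries or hop_countries[0] != hop_countries[-1]:
        return False  # only defined here for domestic paths
    home = hop_countries[0]
    return any(c != home for c in hop_countries[1:-1])

print(is_detour(["US", "US", "CA", "US"]))  # True: leaves and re-enters
print(is_detour(["US", "US", "US"]))        # False: stays domestic
```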

    A system for the detection of limited visibility in BGP

    The performance of the global routing system is vital to thousands of entities operating the Autonomous Systems (ASes) which make up the Internet. The Border Gateway Protocol (BGP) is currently responsible for the exchange of reachability information and the selection of paths according to their specified routing policies. BGP thus enables traffic to flow from any point connected to the Internet to another. The manner in which traffic flows is often influenced by entities in the Internet according to their preferences. The latter are implemented in the form of routing policies by tweaking BGP configurations. Routing policies are usually complex and aim to achieve a myriad of goals, including technical, economic and political purposes. Additionally, individual network managers need to permanently adapt to interdomain routing changes and, by engineering the Internet traffic, optimize the use of their network. Despite the flexibility offered, the implementation of routing policies is a complicated process in itself, involving fine-tuning operations. Thus, it is an error-prone task, and operators might end up with faulty configurations that impact the efficacy of their strategies or, more importantly, their revenues. Moreover, even when legitimate routing policies are correctly defined, unforeseen interactions between ASes have been observed to cause important disruptions that affect the global routing system. The main reason behind this resides in the fact that actual inter-domain routing is the result of the interplay of many routing policies from ASes across the Internet, possibly bringing about a different outcome than the one expected. In this thesis, we perform an extensive analysis of the intricacies emerging from the complex netting of routing policies at the interdomain level, in the context of the current operational status of the Internet. Abundant implications for the way traffic flows in the Internet arise from the convolution of routing policies at a global scale, at times resulting in ASes using suboptimal ill-favored paths or in the undetected propagation of configuration errors in the routing system. We argue here that monitoring prefix visibility at the interdomain level can be used to detect cases of faulty configurations or backfired routing policies, which disrupt the functionality of the routing system. We show that the lack of global prefix visibility can offer early warning signs for anomalous events which, despite their impact, often remain hidden from state-of-the-art tools. Additionally, we show that such unintended Internet behavior not only degrades the efficacy of the routing policies implemented by operators, causing their traffic to follow ill-favored paths, but can also point out problems in the global connectivity of prefixes. We further observe that the majority of prefixes suffering from limited visibility at the interdomain level are more-specific prefixes, often used by network operators to fulfill binding traffic engineering needs. One important task achieved through the use of routing policies for traffic engineering is the control and optimization of the routing function in order to allow ASes to engineer the incoming traffic.
The advertisement of more-specific prefixes, also known as prefix deaggregation, provides network operators with a fine-grained method to control interdomain ingress traffic, given that the longest-prefix match rule overrides any other routing policy applied to the covering less-specific prefixes. Nevertheless, however efficient, this traffic engineering tool comes with a cost, which is usually externalized to the entire Internet community. Prefix deaggregation is a known reason for the artificial inflation of the BGP routing table, which can further affect the scalability of the global routing system. Looking past the main motivation for deploying deaggregation in the first place, we identify and analyze here the economic impact of this type of strategy. We propose a general Internet model to analyze the effect that advertising more-specific prefixes has on the incoming transit traffic burstiness. We show that deaggregation combined with selective advertisements (further defined as strategic deaggregation) has a traffic stabilization side-effect, which translates into a decrease of the transit traffic bill. Next, we develop a methodology for Internet Service Providers (ISPs) to monitor general occurrences of deaggregation within their customer base. Furthermore, the ISPs can detect selective advertisements of deaggregated prefixes, and thus identify customers which may impact the business of their providers. We apply the proposed methodology to a complete set of data, including routing, traffic, topological and billing information, provided by an operational ISP, and we discuss the obtained results. (Programa Oficial de Doctorado en Ingeniería Telemática. Committee: Arturo Azcorra Saloña (president), Stefano Vissicchio (secretary), kc claffy (member).)
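
    The mechanics behind deaggregation as a traffic-engineering lever fit in a few lines: under longest-prefix match, a more-specific announcement attracts the matching traffic regardless of any policy attached to the covering prefix. The prefixes and provider names below are illustrative.

```python
# Longest-prefix match over a tiny RIB: the /25 "wins" for its half
# of the /24, overriding whatever policy the covering prefix carries.
import ipaddress

rib = {
    ipaddress.ip_network("203.0.113.0/24"): "provider_A",    # covering
    ipaddress.ip_network("203.0.113.128/25"): "provider_B",  # more-specific
}

def lookup(dst):
    addr = ipaddress.ip_address(dst)
    matches = [n for n in rib if addr in n]
    return rib[max(matches, key=lambda n: n.prefixlen)]

print(lookup("203.0.113.10"))   # -> provider_A, via the /24
print(lookup("203.0.113.200"))  # -> provider_B, longest prefix wins
```

    Announcing the /25 selectively to only some neighbors is the "strategic deaggregation" the thesis studies; every such /25 is also one more entry in every BGP routing table, which is the externalized cost.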