
    Autonomic disaggregated multilayer networking

    Focused on reducing capital expenditures by opening the data plane to multiple vendors without impacting performance, node disaggregation is attracting the interest of network operators. Although the software-defined networking (SDN) paradigm is key for the control of such networks, the increased complexity of multilayer networks strictly requires monitoring/telemetry and data analytics capabilities to assist in creating and operating self-managed (autonomic) networks. Such autonomicity greatly reduces operational expenditures while improving network performance. In this context, a monitoring and data analytics (MDA) architecture consisting of centralized data storage with data analytics capabilities, together with a generic node agent for monitoring/telemetry supporting disaggregation, is presented. A YANG data model that clearly separates responsibility for monitoring configuration from node configuration is also proposed. The MDA architecture and YANG data models are experimentally demonstrated through three use cases: i) virtual link creation supported by an optical connection, where monitoring is automatically activated; ii) multilayer self-configuration after bit error rate (BER) degradation detection, where a modulation format adaptation is recommended to the SDN controller to minimize errors (this entails reducing the capacity of both the virtual link and the supported multiprotocol label switching-transport profile (MPLS-TP) paths); and iii) optical-layer self-healing, including failure localization at the optical layer to find the cause of BER degradation. A combination of active and passive monitoring procedures allows the cause of the failure to be localized, leading to lightpath rerouting recommendations toward the SDN controller that avoid the failing element(s).
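
    As a rough illustration of use case ii), the sketch below maps a detected pre-FEC BER degradation to a more robust (lower-capacity) modulation format that could be recommended to the SDN controller. The format table, threshold, and function names are hypothetical assumptions; the paper's actual algorithms and interfaces are not reproduced here.

```python
# Hypothetical sketch of use case ii): recommend a modulation format downgrade
# to the SDN controller after BER degradation is detected.
# Thresholds, format names, and capacities are illustrative only.

# Candidate formats, ordered from highest to lowest capacity (Gb/s).
MODULATION_FORMATS = [
    ("DP-16QAM", 200),
    ("DP-QPSK", 100),
    ("DP-BPSK", 50),
]

BER_THRESHOLD = 1e-3  # pre-FEC BER above which the lightpath is considered degraded


def recommend_adaptation(current_format: str, measured_ber: float):
    """Return a (format, capacity) recommendation, or None if no change is needed."""
    if measured_ber <= BER_THRESHOLD:
        return None  # lightpath is healthy; keep the current format
    names = [name for name, _ in MODULATION_FORMATS]
    idx = names.index(current_format)
    if idx == len(MODULATION_FORMATS) - 1:
        raise RuntimeError("already at the most robust format; escalate to self-healing")
    # Recommend the next, more robust (lower-capacity) format.
    return MODULATION_FORMATS[idx + 1]


if __name__ == "__main__":
    rec = recommend_adaptation("DP-16QAM", measured_ber=2.5e-3)
    if rec:
        fmt, capacity = rec
        # In the real architecture this recommendation would go to the SDN controller,
        # which would also resize the virtual link and the supported MPLS-TP paths.
        print(f"Recommend {fmt}; reduce virtual link capacity to {capacity} Gb/s")
```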

    Modelling and Analysis of Network Security Policies

    Nowadays, computers and network communications have a pervasive presence in all our daily activities. Their correct configuration in terms of security is becoming more and more complex due to the growing number and variety of services present in a network. Generally, the security configuration of a computer network is dictated by specifying the policies of the security controls (e.g., firewall, VPN gateway) in the network. This implies that the specification of the network security policies is a crucial step to avoid errors in network configuration (e.g., blocking legitimate traffic, permitting unwanted traffic, or sending insecure data). In the literature, an anomaly is an incorrect policy specification that an administrator may introduce in the network. In this thesis, we indicate as a policy anomaly any conflict (e.g., two triggered policy rules enforcing contradictory actions), error (e.g., a policy that cannot be enforced because it requires a cryptographic algorithm not supported by the security controls), or sub-optimization (e.g., redundant policies) that may arise in the policy specification phase. Security administrators thus face the hard job of correctly specifying the policies, which requires a high level of competence. Several studies have confirmed, in fact, that many security breaches and breakdowns are attributable to administrators’ responsibilities. Several approaches have been proposed to analyze the presence of anomalies among policy rules, in order to enforce a correct security configuration. However, we have identified two limitations of such approaches. On one hand, current literature identifies only the anomalies among policies of a single security technology (e.g., IPsec, TLS), while a network is generally configured with many technologies. On the other hand, existing approaches work on a single policy type, also named domain (e.g., filtering, communication protection). Unfortunately, the complexity of real systems is not self-contained, and each network security control may affect the behavior of other controls in the same network. The objective of this PhD work was to investigate novel approaches for modelling security policies and their anomalies, and formal techniques of anomaly analysis. We present in this dissertation our contributions to the current state of the art in policy analysis and the achieved results. A first contribution was the definition of a new class of policy anomalies, i.e., the inter-technology anomalies, which arise in a set of policies of multiple security technologies. We also provided a formal model able to detect these new types of anomalies. One result achieved by applying the inter-technology analysis to communication protection policies was the categorization of twelve new types of anomalies. A second result of this activity was an empirical assessment that proved the practical significance of detecting such new anomalies. The second contribution of this thesis was the definition of a new type of policy analysis, named inter-domain analysis, which identifies any anomaly that may arise among different policy domains. We improved the state of the art by proposing a model to detect inter-domain anomalies, which is a generalization of the aforementioned inter-technology model. In particular, we defined the Unified Model for Policy Analysis (UMPA) to perform the inter-domain analysis by extending the analysis model applied for a single policy domain to a comprehensive analysis of anomalies among many policy domains. The result of this last part of our dissertation was an improvement in the effectiveness of the analysis process. Thanks to the inter-domain analysis, administrators can detect, in a simple and customizable way, a larger set of anomalies than they could detect by running any other model individually.
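
    To make the idea of an inter-technology anomaly concrete, the toy sketch below checks whether a filtering rule denies traffic that a communication protection policy (e.g., IPsec) expects to carry, one of the contradictions such an analysis would surface. The rule structures and the overlap test are deliberately simplistic assumptions and are not the dissertation's formal UMPA model.

```python
# Minimal, hypothetical sketch of an inter-technology check: does a filtering
# rule (firewall) drop traffic that a protection rule (e.g., IPsec) relies on?
# Rule fields and the anomaly taxonomy are simplified for illustration.
from dataclasses import dataclass


@dataclass
class FilteringRule:
    src: str
    dst: str
    action: str  # "allow" or "deny"


@dataclass
class ProtectionRule:
    src: str
    dst: str
    technology: str  # e.g., "IPsec", "TLS"


def overlaps(a_src, a_dst, b_src, b_dst):
    """Crude overlap test: exact match or wildcard. Real models reason on address sets."""
    return (a_src in (b_src, "*") or b_src == "*") and (a_dst in (b_dst, "*") or b_dst == "*")


def find_inter_technology_anomalies(filtering, protection):
    anomalies = []
    for p in protection:
        for f in filtering:
            if f.action == "deny" and overlaps(f.src, f.dst, p.src, p.dst):
                anomalies.append(
                    f"{p.technology} protection {p.src}->{p.dst} is unreachable: "
                    f"filtering rule denies {f.src}->{f.dst}"
                )
    return anomalies


if __name__ == "__main__":
    fw = [FilteringRule("10.0.0.0/24", "*", "deny")]
    ipsec = [ProtectionRule("10.0.0.0/24", "192.168.1.0/24", "IPsec")]
    for a in find_inter_technology_anomalies(fw, ipsec):
        print("ANOMALY:", a)
```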

    An Integrated framework for firewall testing and validation

    In today's global world, most corporations are bound to have an Internet presence. This phenomenon has led to a significant increase in all kinds of network attacks. Firewalls are used to protect organizational networks against these attacks. Firewall design is based on a set of filtering rules. Because of the nature of these rules, and due to the rising complexity of security policies, an increasing number of mistakes are found in configurations. A reliable and automated technique for testing firewall configurations is becoming necessary to ensure the full functionality of the firewall. In this thesis, a new approach to fully test a firewall has been developed using a white-box approach that takes its inner implementation into account. In addition, thanks to the information provided by the network information file, the environment where the firewall will be deployed is considered, ensuring better accuracy and performance than previous work. Moreover, the method uses a combination of algorithms that remove common misconfigurations widely present in current firewall configurations [I] and, with a novel test set generation approach, guarantees greater coverage than previous methods for generating test sets. The developed framework is fully automated and covers the complete testing workflow, from parsing the firewall file, to generating the test set based on the actual configuration of the firewall, to correcting errors in the firewall file, avoiding the errors of omission and misconfiguration that occur during manual configuration. Keywords: Firewall, Policy Language, Conflict Free Rules, Rule Set, White Box Testing, Misconfiguration Errors, Configuration, Rule Update
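
    The following sketch conveys the flavor of white-box test set generation: for each filtering rule it emits a packet inside the rule's match range and packets just outside its boundaries, so that both the rule and the surrounding rules or default policy are exercised. The rule format, field choices, and coverage criterion are simplified assumptions, not the thesis's actual algorithms.

```python
# Hypothetical sketch: derive test packets from firewall rules so that every
# rule and its boundaries are exercised (a crude form of white-box coverage).
from dataclasses import dataclass


@dataclass
class Rule:
    proto: str
    dst_port_lo: int
    dst_port_hi: int
    action: str  # "accept" or "drop"


def generate_tests(rules, default_action="drop"):
    """Yield (packet, expected_action) pairs for rule interiors and boundaries."""
    tests = []
    for r in rules:
        # One packet inside the range, expected to hit the rule itself.
        tests.append(({"proto": r.proto, "dst_port": r.dst_port_lo}, r.action))
        # Packets just outside the range; the first rule that matches wins,
        # otherwise the firewall's default policy applies.
        for port in (r.dst_port_lo - 1, r.dst_port_hi + 1):
            if not 0 <= port <= 65535:
                continue
            expected = default_action
            for other in rules:
                if other.proto == r.proto and other.dst_port_lo <= port <= other.dst_port_hi:
                    expected = other.action
                    break
            tests.append(({"proto": r.proto, "dst_port": port}, expected))
    return tests


if __name__ == "__main__":
    rules = [Rule("tcp", 80, 80, "accept"), Rule("tcp", 443, 443, "accept")]
    for packet, expected in generate_tests(rules):
        print(packet, "->", expected)
```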

    From MAP to DIST: The Evolution of a Large-Scale WLAN Monitoring System

    The edge of the Internet is increasingly becoming wireless. Therefore, monitoring the wireless edge is important to understanding the security and performance aspects of the Internet experience. We have designed and implemented a large-scale WLAN monitoring system, the Distributed Internet Security Testbed (DIST), at Dartmouth College. It is equipped with distributed arrays of “sniffers” that cover 210 diverse campus locations and more than 5,000 users. In this paper, we describe our approach, designs, and solutions for addressing the technical challenges posed by efficiency, scalability, security, and management requirements. We also present extensive evaluation results from a production network and summarize the lessons learned.
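
    The sketch below gives a stripped-down picture of the sniffer-to-collector pattern on which such a system rests: each agent ships compact JSON summaries of observed 802.11 activity over UDP to a central aggregator, which applies a simple heuristic. Message format, addresses, and the heuristic are assumptions for illustration; DIST's actual capture pipeline and security mechanisms are not shown. To try it, run the collector in one process and call send_summary from another.

```python
# Hypothetical sketch of the sniffer-to-collector pattern behind a distributed
# WLAN monitoring system: agents send compact JSON summaries over UDP to a
# central aggregator. The 802.11 capture itself is stubbed out.
import json
import socket
import time

COLLECTOR_ADDR = ("127.0.0.1", 9999)  # illustrative address


def send_summary(sniffer_id: str, frame_counts: dict):
    """Agent-side report: sniffer identity, timestamp, and per-type frame counts."""
    report = {
        "sniffer": sniffer_id,
        "ts": time.time(),
        "frames": frame_counts,  # e.g. {"beacon": 120, "probe_req": 14, "deauth": 2}
    }
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(json.dumps(report).encode(), COLLECTOR_ADDR)


def run_collector(max_reports: int = 1):
    """Central side: receive reports and flag suspicious activity (toy heuristic)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(COLLECTOR_ADDR)
        for _ in range(max_reports):
            data, _ = sock.recvfrom(65535)
            report = json.loads(data)
            if report["frames"].get("deauth", 0) > 50:
                print(f"possible deauth flood near sniffer {report['sniffer']}")
            else:
                print(f"report from {report['sniffer']}: {report['frames']}")
```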

    Enterprise Voice-over-IP Traffic Monitoring

    The contribution of this work is an extensible and flexible framework, designed and implemented to satisfy the disparate requirements introduced by service-oriented network monitoring needs.

    A Big Data and machine learning approach for network monitoring and security

    In the last decade, the performance of 802.11 (Wi-Fi) devices has skyrocketed. Today it is possible to realize gigabit wireless links spanning kilometers at a fraction of the cost of the wired equivalent. In the same period, mesh networks evolved from experimental tools confined to university labs into systems running in several real-world scenarios. Mesh networks can now provide city-wide coverage and can compete in the Internet access market. Yet, being wireless distributed networks, mesh networks are still hard to maintain and monitor. This paper explains how monitoring, anomaly detection, and root cause analysis can be performed in mesh networks today using Big Data techniques. It first describes the architecture of a modern mesh network, then justifies the use of Big Data techniques and provides a design for the storage and analysis of the Big Data produced by a large-scale mesh network. While proposing a generic infrastructure, we focus on its application in the security domain.
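
    As a minimal illustration of the machine learning side, the sketch below flags an anomalous mesh node from a few per-node metrics using an off-the-shelf unsupervised model (scikit-learn's IsolationForest). The metrics, their distributions, and the model choice are assumptions for illustration; the paper's Big Data pipeline and feature set are far richer.

```python
# Hypothetical sketch: flag anomalous mesh nodes from simple per-node metrics
# (throughput, retransmission rate, neighbor count) with an unsupervised model.
# Requires numpy and scikit-learn; all values below are simulated.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated baseline telemetry for 500 node-intervals: [Mbit/s, retx rate, neighbors]
normal = np.column_stack([
    rng.normal(40, 5, 500),        # throughput
    rng.normal(0.02, 0.005, 500),  # retransmission rate
    rng.normal(6, 1, 500),         # neighbor count
])

# A node suffering interference: throughput collapses, retransmissions spike.
suspect = np.array([[5.0, 0.30, 6.0]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspect))  # -1 means the sample is flagged as anomalous
```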

    SoK: An Analysis of Protocol Design: Avoiding Traps for Implementation and Deployment

    Today's Internet utilizes a multitude of different protocols. While some of these protocols were first implemented and used and only later documented, others were first specified and then implemented. Regardless of how protocols came to be, their definitions can contain traps that lead to insecure implementations or deployments. A classical example is insufficiently strict authentication requirements in a protocol specification. The resulting misconfigurations, i.e., not enabling strong authentication, are common root causes of Internet security incidents. Indeed, Internet protocols have commonly been designed without security in mind, which leads to a multitude of misconfiguration traps. While this is slowly changing, overly strict security considerations can have a similarly bad effect: due to complex implementations and insufficient documentation, security features may remain unused, leaving deployments vulnerable. In this paper we provide a systematization of the security traps found in common Internet protocols. By separating protocols into four classes, we identify major factors that lead to common security traps. These insights, together with observations about end-user-centric usability and security by default, are then used to derive recommendations for improving existing protocols and designing new ones without such security-sensitive traps for operators, implementors, and users.
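
    To make the notion of a misconfiguration trap concrete, the generic snippet below contrasts Python's secure-by-default TLS client context with the unverified variant that lax authentication requirements implicitly invite; the insecure path "works" against any server, which is exactly why it tends to survive into deployments. This is an illustration in the spirit of the paper, not an example taken from it.

```python
# Hypothetical illustration of a classic misconfiguration trap: weak or disabled
# peer authentication in a TLS client. The trap is that the insecure variant
# connects to any server, so it often goes unnoticed until an incident occurs.
import socket
import ssl


def open_tls(host: str, port: int = 443, verify: bool = True) -> str:
    if verify:
        # Secure by default: certificate validation and hostname checking enabled.
        ctx = ssl.create_default_context()
    else:
        # The trap: convenient during testing, but the peer is unauthenticated.
        ctx = ssl._create_unverified_context()
    with socket.create_connection((host, port), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            return tls.version()


if __name__ == "__main__":
    # Verification stays on, as a protocol's defaults should require.
    print(open_tls("example.org"))
```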
