    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, thus requiring a new protocol to be built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design---especially the software-defined networking (SDN) paradigm---offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and a survey of its applications to networking.
    Comment: 30 pages; submitted to IEEE Communications Surveys and Tutorials

    Improving Performance of Cross-Domain Firewalls in Multi-Firewall System

    A firewall is used to protect a local network from an untrusted public network such as the Internet. Every packet entering or leaving the network is inspected at the firewall. Local network policies are converted into rules and stored in the firewall, which restricts access from the external network into the local network and vice versa. Packets are checked against the rules serially, so an increase in the number of rules decreases firewall performance. The key to improving performance is therefore to reduce the number of firewall rules. Optimization helps by removing anomalies and redundancies from the rule list. It is observed, however, that reducing the number of rules alone is not sufficient, as most of the time is consumed in rule verification; a fast verification method is therefore used to reduce rule-checking time. Prior work focuses on either intra-firewall or inter-firewall optimization within a single administrative domain. In cross-domain firewall optimization, the key requirement is to keep each party's rules private, as they contain confidential information that can be exploited by attackers. The proposed system implements cross-domain firewall rule optimization in a multi-firewall environment. The optimized rule set is then converted into a Binary Tree Firewall (BTF) to reduce packet-checking time and further improve firewall performance.
    DOI: 10.17762/ijritcc2321-8169.16047
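
    The contrast the abstract draws, between serial first-match checking and tree-based lookup, can be illustrated with a small sketch. The representation below (rules as single destination-port ranges, assumed disjoint after optimization has removed overlaps) is a deliberate simplification for illustration, not the paper's exact BTF construction.

```python
# Hypothetical sketch: serial first-match vs. balanced-BST rule lookup.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    lo: int       # start of destination-port range (inclusive)
    hi: int       # end of destination-port range (inclusive)
    action: str   # "accept" or "deny"

def match_serial(rules: list[Rule], port: int) -> Optional[str]:
    """First-match semantics: O(n) comparisons per packet."""
    for r in rules:
        if r.lo <= port <= r.hi:
            return r.action
    return None

@dataclass
class Node:
    rule: Rule
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def build(rules: list[Rule]) -> Optional[Node]:
    """Balanced BST over disjoint ranges sorted by lo: O(log n) lookup."""
    if not rules:
        return None
    rules = sorted(rules, key=lambda r: r.lo)
    mid = len(rules) // 2
    return Node(rules[mid], build(rules[:mid]), build(rules[mid + 1:]))

def match_tree(node: Optional[Node], port: int) -> Optional[str]:
    while node:
        if port < node.rule.lo:
            node = node.left
        elif port > node.rule.hi:
            node = node.right
        else:
            return node.rule.action
    return None

rules = [Rule(0, 21, "deny"), Rule(22, 22, "accept"), Rule(23, 79, "deny"),
         Rule(80, 80, "accept"), Rule(443, 443, "accept")]
assert match_serial(rules, 443) == match_tree(build(rules), 443) == "accept"
```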

    Distributed Security Policy Analysis

    Computer networks have become an important part of modern society, and computer network security is crucial for their correct and continuous operation. The security aspects of computer networks are defined by network security policies. The term policy, in general, is defined as ``a definite goal, course or method of action to guide and determine present and future decisions''. In the context of computer networks, a policy is ``a set of rules to administer, manage, and control access to network resources''. Network security policies are enforced by special network appliances, so-called security controls, and different types of security policies are enforced by different types of security controls. Network security policies are hard to manage, and errors are quite common. The problem exists because network administrators do not have a good overview of the network, the defined policies, and the interactions between them.
    Researchers have proposed different techniques for network security policy analysis, which aim to identify errors within policies so that administrators can correct them. There are three solution approaches: anomaly analysis, reachability analysis, and policy comparison. Anomaly analysis searches for potential semantic errors within policy rules and can also be used to identify possible policy optimizations. Reachability analysis evaluates allowed communication within a computer network and can determine whether a certain host can reach a service or a set of services. Policy comparison compares two or more network security policies and represents the differences between them in an intuitive way. Although research in this field has been carried out for over a decade, there is still no clear answer on how to reduce policy errors. The different analysis techniques have their pros and cons, but none of them is a sufficient solution on its own; rather, they complement each other, as one analysis technique finds policy errors that remain unknown to another. Therefore, to obtain a complete analysis of a computer network, multiple models must be instantiated. An analysis model that can perform all analysis techniques is desirable and has four main advantages. Firstly, the model can cover the greatest number of possible policy errors. Secondly, the computational overhead of instantiating the model is incurred only once. Thirdly, research effort is reduced, because improvements and extensions to the model apply to all three analysis types at the same time. Fourthly, new algorithms can be evaluated by comparing their performance directly to each other.
    This work proposes a new analysis model which is capable of performing all three analysis techniques. Security policies and the network topology are represented by the so-called Geometric-Model, a formal model based on set theory and a geometric interpretation of policy rules. Policy rules are defined according to the condition-action format: if the condition holds, then the action is applied. A security policy is expressed as a set of rules, a resolution strategy which selects the action when more than one rule applies, external data used by the resolution strategy, and a default action in case no rule applies. This work also introduces the concept of the Equivalent-Policy, which is calculated from the network topology and the policies involved. All analysis techniques are performed on it with much higher performance. A precomputation phase is required for two reasons. Firstly, security policies which modify the traffic must be transformed to obtain linear behaviour. Secondly, far fewer rules are required to represent the global behaviour of a set of policies than the sum of the rules in the involved policies. The analysis model can handle the most common security policies and is designed to be extensible to future security policy types. As already mentioned, the Geometric-Model can represent all types of security policies, but the calculation of the Equivalent-Policy has some small dependencies on the details of the different policy types; the computation of the Equivalent-Policy must therefore be tweaked to support new types. Since the model and the computation of the Equivalent-Policy were designed to be extensible, the effort required to introduce a new security policy type is minimal.
    The anomaly analysis can be performed on computer networks containing different security policies. The policy comparison can perform an Implementation-Verification between high-level security requirements and an entire computer network containing different security policies, as well as a Change-Impact-Analysis of such a network. The proposed model is implemented in a working prototype, and a performance evaluation has been performed. The performance of the implementation is more than sufficient for real scenarios. Although the calculation of the Equivalent-Policy requires a significant amount of time, it is still manageable and is required only once. The execution of the different analysis techniques is fast, and the results are generally calculated in real time. The implementation also exposes an API for future integration into different frameworks or software packages. Based on the API, a complete tool was implemented, with a graphical user interface and additional features.
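
    A minimal sketch may help make the Geometric-Model's central idea concrete: a rule condition can be viewed as a hyperrectangle over packet-header fields, and two rules can interact only if their hyperrectangles intersect. The field encoding and the naive O(n^2) pass below are illustrative assumptions, not the dissertation's actual algorithms.

```python
# Illustrative sketch: rule conditions as hyperrectangles; anomalies
# surface as geometric relations (intersection, containment) between them.
from dataclasses import dataclass

@dataclass
class HyperRect:
    bounds: list  # per-field (lo, hi) pairs, e.g. src-addr, dst-addr, dst-port

    def intersects(self, other: "HyperRect") -> bool:
        return all(alo <= bhi and blo <= ahi
                   for (alo, ahi), (blo, bhi) in zip(self.bounds, other.bounds))

    def contains(self, other: "HyperRect") -> bool:
        return all(alo <= blo and bhi <= ahi
                   for (alo, ahi), (blo, bhi) in zip(self.bounds, other.bounds))

@dataclass
class Rule:
    cond: HyperRect
    action: str  # e.g. "allow" or "deny"

def pairwise_anomalies(rules: list) -> list:
    """Naive O(n^2) pass flagging conflicting or redundant rule pairs."""
    found = []
    for i, a in enumerate(rules):
        for b in rules[i + 1:]:
            if not a.cond.intersects(b.cond):
                continue
            if a.action != b.action:
                found.append((a, b, "conflict"))
            elif a.cond.contains(b.cond) or b.cond.contains(a.cond):
                found.append((a, b, "redundancy candidate"))
    return found
```

    In the full model, a resolution strategy (e.g. first-match) then decides which of the intersecting rules actually fires, which is what distinguishes a genuine error from an intended override.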

    Consistent SDNs through Network State Fuzzing

    The conventional wisdom is that a software-defined network (SDN) operates under the premise that the logically centralized control plane has an accurate representation of the actual data plane state. Unfortunately, bugs, misconfigurations, faults, or attacks can introduce inconsistencies that undermine correct operation. Previous work in this area, however, lacks a holistic methodology to tackle this problem and thus addresses only certain parts of it. Yet the consistency of the overall system is only as good as its least consistent part. Motivated by an analogy between network consistency checking and program testing, we propose to add active probe-based network state fuzzing to our consistency-check repertoire. To this end, our system, PAZZ, combines production traffic with active probes to periodically test whether the actual forwarding path and decision elements (on the data plane) correspond to the expected ones (on the control plane). Our insight is that active traffic covers inconsistency cases beyond those identified by passive traffic. A PAZZ prototype was built and evaluated on topologies of varying scale and complexity. Our results show that PAZZ requires minimal network resources to detect persistent data plane faults through fuzzing and to localize them quickly, while outperforming baseline approaches.
    Comment: Added three extra relevant references. The arXiv version was later accepted in IEEE Transactions on Network and Service Management (TNSM), 2019, under the title "Towards Consistent SDNs: A Case for Network State Fuzzing".
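
    The core check described here, comparing the control plane's expected forwarding behavior against what the data plane actually does, and fuzzing headers beyond those seen in production traffic, can be caricatured in a few lines. The forwarding-table encoding and probe generation below are illustrative assumptions, not the paper's implementation.

```python
# Toy model: both the controller's view and the actual data plane are
# forwarding tables mapping (switch, dst) -> next hop; fuzzing probes
# destinations that production traffic may never exercise.
import random

def walk(tables: dict, start: str, dst: int, max_hops: int = 16) -> list:
    """Follow next-hop entries from `start` toward `dst`; return the path."""
    path, node = [start], start
    for _ in range(max_hops):
        nxt = tables.get((node, dst))
        if nxt is None or nxt == node:
            break
        path.append(nxt)
        node = nxt
    return path

def fuzz_check(control_view: dict, data_plane: dict, switches: list,
               n_probes: int = 1000, seed: int = 0) -> list:
    """Report (src, dst, expected, observed) tuples where the paths differ."""
    rng = random.Random(seed)
    faults = []
    for _ in range(n_probes):
        src = rng.choice(switches)
        dst = rng.randrange(2 ** 16)  # fuzzed destination identifier
        expected = walk(control_view, src, dst)
        observed = walk(data_plane, src, dst)
        if expected != observed:
            faults.append((src, dst, expected, observed))
    return faults
```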

    Work-in-Progress: A Formal Approach to Verify Fault Tolerance in Industrial Network Systems

    Distributed systems are extremely difficult to design and implement correctly because they must both behave correctly and tolerate device failures. Most existing work focuses on the first aspect, and in particular on the correctness of security and network configuration. The strong demand for availability and reliability of critical services is pushing new fault-tolerant architectures, but a-priori analysis of redundancy and recovery features is still limited. To this end, we present a framework to design and formally verify the persistence of network properties, even in case of failures. The solution considers both node and link failures, and it is based on a formal model that takes both the network topology and the network device configurations into account; in contrast, most existing approaches only consider the network topology. By analyzing the formal model, the framework can check whether the specified network services are still available after failures and, if so, it outputs a possible configuration of the devices to be used for automatic recovery.
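
    As a concrete reading of "persistence of network properties in case of failures", the sketch below enumerates every single node or link failure and re-checks one reachability property in each degraded topology. A framework like the one described would additionally model device configurations (e.g. filtering rules); here, as a simplifying assumption, the bare topology stands in for both.

```python
# Hypothetical sketch: verify that "src can reach dst" survives any
# single node or link failure. `adj` is a symmetric adjacency dict.
from collections import deque

def reachable(adj, src, dst, dead_nodes=frozenset(), dead_links=frozenset()):
    """BFS reachability that skips failed nodes and links."""
    if src in dead_nodes or dst in dead_nodes:
        return False
    seen, queue = {src}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return True
        for v in adj.get(u, ()):
            if v in seen or v in dead_nodes or frozenset((u, v)) in dead_links:
                continue
            seen.add(v)
            queue.append(v)
    return False

def survives_single_failures(adj, src, dst):
    """Return (True, None), or (False, failing element) for the property."""
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    links = {frozenset((u, v)) for u, vs in adj.items() for v in vs}
    for n in nodes - {src, dst}:
        if not reachable(adj, src, dst, dead_nodes={n}):
            return False, ("node", n)
    for link in links:
        if not reachable(adj, src, dst, dead_links={link}):
            return False, ("link", tuple(sorted(link)))
    return True, None
```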

    Verifying and Monitoring IoTs Network Behavior using MUD Profiles

    IoT devices are increasingly being implicated in cyber-attacks, raising community concern about the risks they pose to critical infrastructure, corporations, and citizens. In order to reduce this risk, the IETF is pushing IoT vendors to develop formal specifications of the intended purpose of their IoT devices, in the form of a Manufacturer Usage Description (MUD), so that their network behavior in any operating environment can be locked down and verified rigorously. This paper aims to assist IoT manufacturers in developing and verifying MUD profiles, while also helping adopters of these devices ensure they are compatible with their organizational policies and track devices' network behavior based on their MUD profiles. Our first contribution is to develop a tool that takes the traffic trace of an arbitrary IoT device as input and automatically generates the MUD profile for it. We contribute our tool as open source, apply it to 28 consumer IoT devices, and highlight insights and challenges encountered in the process. Our second contribution is to apply a formal semantic framework that not only validates a given MUD profile for consistency, but also checks its compatibility with a given organizational policy. We apply our framework to representative organizations and selected devices to demonstrate how MUD can reduce the effort needed for IoT acceptance testing. Finally, we show how operators can dynamically identify IoT devices using known MUD profiles and monitor their behavioral changes on their network.
    Comment: 17 pages, 17 figures. arXiv admin note: text overlap with arXiv:1804.0435
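
    The first contribution's pipeline, condensing a device's observed traffic into the whitelist a MUD profile encodes, can be sketched as follows. The flow-tuple format, the noise threshold, and the simplified JSON output are assumptions made for illustration; a real MUD file follows the richer RFC 8520 YANG/JSON structure.

```python
# Hypothetical sketch: derive a simplified MUD-like whitelist from flows.
import json
from collections import Counter

def flows_to_profile(device: str, flows, min_count: int = 3) -> str:
    """flows: iterable of (direction, protocol, remote_host, remote_port)."""
    counts = Counter(flows)
    acl = [
        {"direction": d, "protocol": proto, "remote": host, "port": port}
        for (d, proto, host, port), n in sorted(counts.items())
        if n >= min_count  # drop one-off flows as likely capture noise
    ]
    return json.dumps({"device": device, "acl": acl}, indent=2)

print(flows_to_profile("camera", 3 * [
    ("to-internet", "udp", "ntp.example.com", 123),
    ("to-internet", "tcp", "cloud.example.com", 443),
]))
```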

    Modelling and Analysis of Network Security Policies

    Nowadays, computers and network communications have a pervasive presence in all our daily activities. Their correct configuration in terms of security is becoming more and more complex due to the growing number and variety of services present in a network. Generally, the security configuration of a computer network is dictated by specifying the policies of the security controls (e.g. firewall, VPN gateway) in the network. This implies that the specification of the network security policies is a crucial step in avoiding errors in network configuration (e.g., blocking legitimate traffic, permitting unwanted traffic, or sending insecure data). In the literature, an anomaly is an incorrect policy specification that an administrator may introduce into the network. In this thesis, we use the term policy anomaly for any conflict (e.g. two triggered policy rules enforcing contradictory actions), error (e.g. a policy that cannot be enforced because it requires a cryptographic algorithm not supported by the security controls) or sub-optimization (e.g. redundant policies) that may arise in the policy specification phase. Security administrators thus face the hard job of correctly specifying the policies, which requires a high level of competence. Several studies have in fact confirmed that many security breaches and breakdowns are attributable to administrator error.
    Several approaches have been proposed to analyze the presence of anomalies among policy rules, in order to enforce a correct security configuration. However, we have identified two limitations of such approaches. On the one hand, the current literature identifies only anomalies among policies of a single security technology (e.g., IPsec, TLS), while a network is generally configured with many technologies. On the other hand, existing approaches work on a single policy type, also called a domain (e.g., filtering, communication protection). Unfortunately, real systems are not self-contained, and each network security control may affect the behavior of other controls in the same network. The objective of this PhD work was to investigate novel approaches for modelling security policies and their anomalies, and formal techniques of anomaly analysis. We present in this dissertation our contributions to the current state of the art in policy analysis and the achieved results.
    Our first contribution is the definition of a new class of policy anomalies, the inter-technology anomalies, which arise in a set of policies of multiple security technologies, together with a formal model able to detect these new types of anomalies. Applying the inter-technology analysis to communication protection policies allowed us to categorize twelve new types of anomalies; an empirical assessment further demonstrated the practical significance of detecting them. Our second contribution is a new type of policy analysis, named inter-domain analysis, which identifies any anomaly that may arise among different policy domains. We improved the state of the art by proposing a model to detect inter-domain anomalies, which is a generalization of the aforementioned inter-technology model. In particular, we defined the Unified Model for Policy Analysis (UMPA) to perform the inter-domain analysis by extending the single-domain analysis model to a comprehensive analysis of anomalies across many policy domains. This improves the effectiveness of the analysis process: thanks to the inter-domain analysis, administrators can detect, in a simple and customizable way, a greater set of anomalies than they could detect by running any other individual model.
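
    One way to picture an inter-domain anomaly of the kind described: a communication-protection policy requires certain traffic to be protected end to end, while a filtering policy in another domain silently drops that same traffic, making the protection requirement unenforceable. The rule shapes and field choices below are assumptions for illustration only, not UMPA's actual model.

```python
# Hypothetical sketch: a filtering rule (one domain) that drops traffic
# a protection rule (another domain) requires, e.g. IKE on UDP/500.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src: str
    dst: str
    port: int

@dataclass
class FilterRule:
    flow: Flow
    action: str  # "allow" or "deny"

@dataclass
class ProtectRule:
    flow: Flow   # traffic that must be cryptographically protected

def inter_domain_anomalies(filters, protections):
    """Protection rules whose traffic some filtering rule denies."""
    denied = {f.flow for f in filters if f.action == "deny"}
    return [p for p in protections if p.flow in denied]

filters = [FilterRule(Flow("gw-a", "gw-b", 500), "deny")]
protections = [ProtectRule(Flow("gw-a", "gw-b", 500))]
assert inter_domain_anomalies(filters, protections) == protections
```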