
    Automated Approaches for Program Verification and Repair

    Formal methods techniques, such as verification, analysis, and synthesis, allow programmers to prove properties of their programs, or to automatically derive programs from specifications. Making such techniques usable requires care: they must provide useful debugging information, be scalable, and enable automation. This dissertation presents automated analysis and synthesis techniques to ease the debugging of modular verification systems and to allow easy access to constraint solvers from functional code. Further, it introduces machine learning based techniques to improve the scalability of off-the-shelf syntax-guided synthesis solvers, and techniques to reduce the burden on network administrators writing and analyzing firewalls.

    We describe the design and implementation of a symbolic execution engine, G2, for non-strict functional languages such as Haskell. We extend G2 to both debug and automate the process of modular verification, and we give Haskell programmers easy access to constraint solvers via a library named G2Q. Modular verifiers, such as LiquidHaskell, Dafny, and ESC/Java, allow programmers to write and prove specifications of their code. When a modular verifier fails to verify a program, it is not necessarily because of an actual bug in the program: when verifying a function f, modular verifiers consider only the specification of a called function g, not the actual definition of g. Thus, a modular verifier may fail to prove a true specification of f if the specification of g is too weak. We present a technique, counterfactual symbolic execution, to aid in the debugging of modular verification failures. The approach uses symbolic execution to find concrete counterexamples, in the case of an actual inconsistency between a program and a specification, and abstract counterexamples, in the case that a function specification is too weak. Further, a technique based on a counterexample-guided inductive synthesis (CEGIS) loop is introduced to fully automate the process of modular verification, by using the found counterexamples to automatically infer the needed function specifications. The counterfactual symbolic execution and automated specification inference techniques are implemented in G2, and evaluated on existing LiquidHaskell errors and programs.

    We also leveraged G2 to build a library, G2Q, which allows constraint solving problems to be written directly as Haskell code. Users of G2Q can embed specially marked Haskell constraints (Boolean expressions) into their normal Haskell code, while marking some of the variables in the constraint as symbolic. At runtime, G2Q automatically derives values for the symbolic variables that satisfy the constraint, and returns those values to the surrounding code. Unlike other constraint solving solutions, such as directly calling an SMT solver, G2Q uses symbolic execution to unroll recursive function definitions, and guarantees that the use of G2Q constraints preserves type correctness.

    We further consider the problem of synthesizing functions via a class of tools known as syntax-guided synthesis (SyGuS) solvers. We introduce a machine learning based technique to preprocess SyGuS problems and reduce the space that the solver must search for a solution. We demonstrate that the technique speeds up an existing SyGuS solver, CVC4, on a set of SyGuS benchmarks.

    Finally, we describe techniques to ease the analysis and repair of firewalls. Firewalls are widely deployed to manage network security. However, firewall systems provide only a primitive interface, in which the specification is given as an ordered list of rules. This makes it hard to manually track and maintain the behavior of a firewall. We introduce a formal semantics for iptables firewall rules via a translation to first-order logic with uninterpreted functions and linear integer arithmetic, which allows firewalls to be encoded in a decidable logic. We then describe techniques to automate the analysis and repair of firewalls using SMT solvers, based on user-provided specifications of the desired behavior. We evaluate this approach with real-world case studies collected from StackOverflow users.
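    To make the G2Q idea concrete, here is a minimal, self-contained Haskell sketch. The names below (solveSym, and the enumerative search behind it) are purely illustrative and are not G2Q's actual interface: where G2Q uses symbolic execution, this toy simply enumerates a finite domain. The point it preserves is that the constraint may call an ordinary recursive Haskell function (sumTo), which a direct SMT encoding could not interpret without unrolling.

        import Data.List (find)

        -- An ordinary recursive Haskell function used inside a constraint.
        sumTo :: Int -> Int
        sumTo 0 = 0
        sumTo n = n + sumTo (n - 1)

        -- Hypothetical stand-in for solving a constraint over one symbolic
        -- Int: brute-force search over a finite domain, in place of the
        -- symbolic execution G2Q performs.
        solveSym :: (Int -> Bool) -> Maybe Int
        solveSym constraint = find constraint [0 .. 1000]

        main :: IO ()
        main = print (solveSym (\n -> sumTo n == 55))  -- prints: Just 10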

    Modelling and Analysis of Network Security Policies

    Nowadays, computers and network communications have a pervasive presence in all our daily activities. Their correct configuration in terms of security is becoming more and more complex due to the growing number and variety of services present in a network. Generally, the security configuration of a computer network is dictated by specifying the policies of the security controls (e.g. firewall, VPN gateway) in the network. This implies that the specification of the network security policies is a crucial step in avoiding errors in network configuration (e.g., blocking legitimate traffic, permitting unwanted traffic, or sending insecure data). In the literature, an anomaly is an incorrect policy specification that an administrator may introduce in the network. In this thesis, we denote as a policy anomaly any conflict (e.g. two triggered policy rules enforcing contradictory actions), error (e.g. a policy that cannot be enforced because it requires a cryptographic algorithm not supported by the security controls), or sub-optimization (e.g. redundant policies) that may arise in the policy specification phase. Security administrators thus face the hard job of correctly specifying the policies, which requires a high level of competence. Several studies have confirmed, in fact, that many security breaches and breakdowns are attributable to administrators' mistakes.

    Several approaches have been proposed to analyze the presence of anomalies among policy rules, in order to enforce a correct security configuration. However, we have identified two limitations of such approaches. On one hand, the current literature identifies only the anomalies among policies of a single security technology (e.g., IPsec, TLS), while a network is generally configured with many technologies. On the other hand, existing approaches work on a single policy type, also named a domain (e.g., filtering, communication protection). Unfortunately, the complexity of real systems is not self-contained, and each network security control may affect the behavior of the other controls in the same network.

    The objective of this PhD work was to investigate novel approaches for modelling security policies and their anomalies, and formal techniques for anomaly analysis. We present in this dissertation our contributions to the current state of the art in policy analysis and the achieved results. The first contribution was the definition of a new class of policy anomalies, the inter-technology anomalies, which arise in a set of policies spanning multiple security technologies. We also provided a formal model able to detect these new types of anomalies. One result achieved by applying the inter-technology analysis to communication protection policies was the categorization of twelve new types of anomalies. A second result of this activity was an empirical assessment that proved the practical significance of detecting such new anomalies.

    The second contribution of this thesis was the definition of a new type of policy analysis, named inter-domain analysis, which identifies any anomaly that may arise among different policy domains. We improved on the state of the art by proposing a model to detect inter-domain anomalies, which is a generalization of the aforementioned inter-technology model. In particular, we defined the Unified Model for Policy Analysis (UMPA), which performs inter-domain analysis by extending the analysis model applied to a single policy domain to a comprehensive analysis of anomalies across many policy domains. The result of this last part of our work was to improve the effectiveness of the analysis process: thanks to inter-domain analysis, administrators can detect, in a simple and customizable way, a larger set of anomalies than they could detect by running any single model individually.
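    As a concrete illustration of the simplest anomaly class mentioned above (two triggered rules enforcing contradictory actions), the following Haskell sketch checks a toy filtering policy for conflicting rule pairs. It is deliberately simplified: the packet and rule models, and all names used here, are invented for illustration, and the thesis's UMPA model is far more general.

        import Data.List (tails)

        data Action = Allow | Deny deriving (Eq, Show)

        data Packet = Packet { srcPort :: Int, dstPort :: Int }

        data Rule = Rule
          { ruleName :: String
          , matches  :: Packet -> Bool
          , action   :: Action
          }

        -- Two rules conflict if some packet (drawn here from a finite
        -- sample space) triggers both with contradictory actions.
        conflicting :: [Packet] -> Rule -> Rule -> Bool
        conflicting space r1 r2 =
          action r1 /= action r2
            && any (\p -> matches r1 p && matches r2 p) space

        -- Report every conflicting pair in a policy.
        anomalies :: [Packet] -> [Rule] -> [(String, String)]
        anomalies space rs =
          [ (ruleName a, ruleName b)
          | (a:bs) <- tails rs, b <- bs, conflicting space a b ]

        -- Example: both rules trigger on packets to port 80,
        -- so demo evaluates to [("web-allow", "lockdown")].
        demo :: [(String, String)]
        demo = anomalies space [web, lockdown]
          where
            space    = [Packet s d | s <- [1000, 2000], d <- [22, 80, 443]]
            web      = Rule "web-allow" (\p -> dstPort p == 80) Allow
            lockdown = Rule "lockdown"  (\p -> dstPort p < 1024) Deny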

    Monadic Sequence Testing and Explicit Test-Refinements

    We present an abstract framework for sequence testing that is implemented in Isabelle/HOL-TestGen. Our framework is based on the theory of state-exception monads, explicitly modelled in HOL, and can cope with typed input and output, interleaving executions including abort, and synchronisation. The framework is particularly geared towards symbolic execution and has proven effective in several large case studies involving system models with large (or infinite) state. On this basis, we rephrase the concept of test-refinements for inclusion, deadlock, and IOCO-like tests, together with a formal theory of their relation to traditional, IO-automata based notions.
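    For readers unfamiliar with the underlying structure, here is a minimal Haskell rendering of a state-exception monad of the kind the framework models in HOL: a computation either aborts with an exception or yields a result together with a successor state. This is only a sketch of the general shape; the framework itself is formalised in Isabelle/HOL, and the toy counter system below is invented for illustration.

        -- A computation over state s that either aborts with an exception e
        -- or returns a value of type a together with an updated state.
        newtype SE s e a = SE { runSE :: s -> Either e (a, s) }

        instance Functor (SE s e) where
          fmap f (SE g) = SE $ \s -> fmap (\(a, s') -> (f a, s')) (g s)

        instance Applicative (SE s e) where
          pure a = SE $ \s -> Right (a, s)
          SE f <*> SE g = SE $ \s -> case f s of
            Left e        -> Left e
            Right (h, s') -> fmap (\(a, s'') -> (h a, s'')) (g s')

        instance Monad (SE s e) where
          SE g >>= k = SE $ \s -> case g s of
            Left e        -> Left e
            Right (a, s') -> runSE (k a) s'

        abort :: e -> SE s e a
        abort = SE . const . Left

        getS :: SE s e s
        getS = SE $ \s -> Right (s, s)

        putS :: s -> SE s e ()
        putS s = SE $ \_ -> Right ((), s)

        -- Toy system under test: a counter that aborts beyond a bound.
        step :: Int -> SE Int String Int
        step n = do
          c <- getS
          if c + n > 3 then abort "overflow"
                       else putS (c + n) >> pure (c + n)

        -- Running a test sequence from initial state 0.
        testSeq :: Either String (Int, Int)
        testSeq = runSE (step 1 >> step 2 >> step 1) 0  -- Left "overflow"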

    Formal Firewall Conformance Testing: An Application of Test and Proof Techniques

    Firewalls are an important means to secure critical ICT infrastructures. As configurable off-the-shelf products, the effectiveness of a firewall crucially depends on both the correctness of the implementation itself and the correct configuration. While testing the implementation can be done once by the manufacturer, the configuration needs to be tested for each application individually. This is particularly challenging as the configuration, implementing a firewall policy, is inherently complex, hard to understand, administered by different stakeholders, and thus difficult to validate. This paper presents a formal model of both stateless and stateful firewalls (packet filters), including NAT, to which a specification-based conformance test case generation approach is applied. Furthermore, a verified optimisation technique for this approach is presented: starting from a formal model for stateless firewalls, we derive a collection of semantics-preserving policy transformation rules and an algorithm that optimises the specification with respect to the number of test cases required for path coverage of the model. We extend an existing approach that integrates verification and testing, that is, tests and proofs, to support conformance testing of network policies. The presented approach is supported by a test framework that makes it possible to test actual firewalls using the test cases generated on the basis of the formal model. Finally, we report on several larger case studies.
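    The flavour of a semantics-preserving policy transformation can be shown with a small Haskell sketch: under first-match semantics, a rule whose match condition is already covered by an earlier rule can never fire, so removing it leaves the filter's behaviour unchanged while shrinking the number of paths (and hence test cases) to cover. The model below, matching on a single port field, is an invented simplification of the paper's formal model.

        data Action = Accept | Drop deriving (Eq, Show)

        data Rule = Rule { rulePort :: Int, ruleAct :: Action } deriving (Eq, Show)

        -- First-match semantics with a default-deny fallthrough.
        apply :: [Rule] -> Int -> Action
        apply rs p = case [a | Rule q a <- rs, q == p] of
          (a:_) -> a
          []    -> Drop

        -- Semantics-preserving transformation: drop any rule whose match
        -- field is already claimed by an earlier rule, since under
        -- first-match semantics the later rule can never fire.
        removeShadowed :: [Rule] -> [Rule]
        removeShadowed = go []
          where
            go _    []     = []
            go seen (r:rs)
              | rulePort r `elem` seen = go seen rs
              | otherwise              = r : go (rulePort r : seen) rs

        -- apply rs p == apply (removeShadowed rs) p for every packet p,
        -- but path coverage of the optimised policy needs fewer test cases.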

    Foundational Property-Based Testing

    Integrating property-based testing with a proof assistant creates an interesting opportunity: reusable or tricky testing code can be formally verified using the proof assistant itself. In this work we introduce a novel methodology for formally verified property-based testing and implement it as a foundational verification framework for QuickChick, a port of QuickCheck to Coq. Our framework enables one to verify that the executable testing code is testing the right Coq property. To make verification tractable, we provide a systematic way of reasoning about the set of outcomes a random data generator can produce with non-zero probability, while abstracting away from the actual probabilities. Our framework is firmly grounded in a fully verified implementation of QuickChick itself, using the same underlying verification methodology. We also apply this methodology to a complex case study on testing an information-flow control abstract machine, demonstrating that our verification methodology is modular and scalable and that it requires minimal changes to existing code.
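    The gap the paper closes can be seen in plain QuickCheck (shown here in Haskell, since QuickChick ports QuickCheck's style to Coq). A custom generator encodes an implicit claim about which outcomes it can produce; nothing in the code below checks that claim, and it is exactly this kind of fact about a generator's outcome set that the QuickChick framework lets one verify formally. The generator and property are invented examples.

        import Test.QuickCheck

        -- Generator intended to produce exactly the even numbers in [0, 100].
        -- That intent is an unverified, implicit claim about the set of
        -- outcomes producible with non-zero probability.
        genEven :: Gen Int
        genEven = (* 2) <$> choose (0, 50)

        -- Executable test of a property under that generator.
        prop_doubleIsEven :: Property
        prop_doubleIsEven = forAll genEven $ \n -> even n

        main :: IO ()
        main = quickCheck prop_doubleIsEven  -- +++ OK, passed 100 tests.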

    Strategies for the intelligent selection of components

    It is becoming common to build applications as component-intensive systems - a mixture of fresh code and existing components. For application developers the selection of components to incorporate is key to overall system quality - so they want the 'best'. For each selection task, the application developer will define requirements for the ideal component and use them to select the most suitable one. While many software selection processes exist, there is a lack of repeatable, usable, flexible, automated processes with tool support. This investigation has focussed on finding and implementing strategies to enhance the selection of software components. The study was built around four research elements, targeting characterisation, process, strategies and evaluation. A post-positivist methodology was used, with the Spiral Development Model (SDM) structuring the investigation. Data for the study were generated using a range of qualitative and quantitative methods, including a survey approach, a range of case studies, and quasi-experiments to focus on the specific tuning of tools and techniques. Evaluation and review are integral to the SDM: a Goal-Question-Metric (GQM)-based approach was applied to every Spiral.

    Verification-based software-fault detection

    Software is used in many safety- and security-critical systems, yet software development is an error-prone task. This work describes new techniques for the detection of software faults (or software "bugs") that are based on formal deductive verification technology. The techniques take advantage of information obtained during verification, combining verification technology with deductive fault detection and test generation in a unified way.

    A Test Generation Framework for Distributed Fault-Tolerant Algorithms

    Heavyweight formal methods such as theorem proving have been successfully applied to the analysis of safety-critical fault-tolerant systems. Typically, the models and proofs produced during such analysis do not inform the testing process for actual implementations. We propose a framework for generating test vectors from specifications written in the Prototype Verification System (PVS). The methodology uses a translator to produce a Java prototype from a PVS specification. Symbolic (Java) PathFinder is then employed to generate a collection of test cases. A small example illustrates how the framework can be used in practice.
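    The paper's pipeline runs from PVS through Java to Symbolic PathFinder; the underlying idea of deriving path-covering test vectors from an executable specification can be sketched in a few lines of Haskell. The 2-of-3 majority voter, its guard structure, and all names below are invented for illustration and are not taken from the paper.

        import Data.Function (on)
        import Data.List (nubBy)

        -- Executable specification: majority vote over three replicated
        -- inputs, a shape typical of fault-tolerant algorithms.
        vote :: Int -> Int -> Int -> Maybe Int
        vote a b c
          | a == b || a == c = Just a
          | b == c           = Just b
          | otherwise        = Nothing  -- no majority: replicas disagree

        -- The guard ("path") a given input vector exercises.
        path :: (Int, Int, Int) -> Int
        path (a, b, c)
          | a == b || a == c = 1
          | b == c           = 2
          | otherwise        = 3

        -- One test vector per path, drawn from a small input domain;
        -- a symbolic executor explores such paths instead of enumerating.
        testVectors :: [(Int, Int, Int)]
        testVectors =
          nubBy ((==) `on` path)
            [(a, b, c) | a <- [0, 1], b <- [0, 1], c <- [0, 1, 2]]

        main :: IO ()
        main = mapM_ print [ (v, vote a b c) | v@(a, b, c) <- testVectors ]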