
    Using theorem provers to increase the precision of dependence analysis for information flow control

    Information flow control (IFC) is a category of techniques for enforcing information flow properties. In this paper we present the Combined Approach, a novel IFC technique that combines a scalable system-dependence-graph-based (SDG-based) approach with a precise logic-based approach built on a theorem prover. The Combined Approach is more precise than the SDG-based approach on its own, without sacrificing its scalability. For every potential illegal information flow reported by the SDG-based approach, the Combined Approach automatically generates proof obligations that, if valid, prove that there is no program path on which the reported information flow can happen. These proof obligations are then relayed to the logic-based approach. We also show how the SDG-based approach can provide additional information to the theorem prover that helps decrease the verification effort. Moreover, we present a prototypical implementation of the Combined Approach that uses the tools JOANA and KeY as the SDG-based and the logic-based approach, respectively.
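
    As a minimal illustration (not taken from the paper), consider the kind of false positive the Combined Approach targets: the SDG reports a control dependence from the secret to the result, but both branches compute the same value, so the generated proof obligation is valid and the reported flow can be ruled out.

    class FalsePositive {
        static int leakFree(int secret, int pub) {
            int out;
            if (secret > 0) {
                out = pub + 1;   // identical to the else-branch,
            } else {
                out = pub + 1;   // so the branch on `secret` leaks nothing
            }
            return out;          // SDG: potential flow; semantically: none
        }
    }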

    From Formal Semantics to Verified Slicing: A Modular Framework with Applications in Language Based Security

    This book presents a modular framework for slicing in the proof assistant Isabelle/HOL, based on abstract control flow graphs. Building on such abstract structures renders the correctness results language-independent. To prove that they hold for a specific language, it remains to instantiate the framework with that language, which requires a formal semantics of the language in Isabelle/HOL. We show that formal semantics even for sophisticated high-level languages are realizable.
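
    A hypothetical Java rendering of the core idea (all names illustrative, not from the book): the slicing correctness proof is carried out once against an abstract CFG interface, and a concrete language "instantiates" the framework by supplying nodes, edges, and its formal semantics.

    import java.util.Set;

    interface AbstractCFG<Node, State> {
        Node entry();
        Set<Node> successors(Node n);
        Set<String> defs(Node n);    // variables defined (written) at n
        Set<String> uses(Node n);    // variables used (read) at n
        State step(Node n, State s); // hook for the language's semantics
    }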

    Combining Graph-Based and Deduction-Based Information-Flow Analysis

    Information flow control (IFC) is a category of techniques for ensuring system security by enforcing information flow properties such as non-interference. Established IFC techniques range from fully automatic approaches with much over-approximation to approaches with high precision but potentially laborious user interaction. A noteworthy approach mitigating the weaknesses of both automatic and interactive IFC techniques is the hybrid approach, developed by Küsters et al., which, however, is based on program modifications and still requires a significant amount of user interaction. In this paper, we present a combined approach that works without any program modifications. It minimizes potential user interactions by applying a dependency-graph-based information-flow analysis first. Based on over-approximations, this step potentially generates false positives. Precise non-interference proofs are achieved by applying a deductive theorem prover with a specialized information-flow calculus for checking that no path from a secret input to a public output exists. Both tools are fully integrated into a combined approach, which is evaluated on a case study, demonstrating the feasibility of automatic and precise non-interference proofs for complex programs.
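
    A hedged sketch of how such a non-interference obligation can be specified for the deductive prover: the `determines ... \by ...` clause follows KeY's JML* information-flow extension, but the exact syntax may vary between versions, so treat this as illustrative rather than as the paper's specification.

    class Account {
        int balance;   // secret
        int counter;   // public

        /*@ determines counter \by counter; @*/
        // i.e., the final value of `counter` depends only on its own
        // initial value, never on `balance`
        void tick(int amount) {
            balance += amount;   // updates the secret state only
            counter++;           // public update independent of balance
        }
    }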

    Test generation for high coverage with abstraction refinement and coarsening (ARC)

    Testing is the main approach used in the software industry to expose failures. Producing thorough test suites is an expensive and error-prone task that can greatly benefit from automation. Two challenging problems in test automation are generating test input and evaluating the adequacy of test suites: the first amounts to producing a set of test cases that accurately represent the software behavior, the second requires defining appropriate metrics to evaluate the thoroughness of the testing activities. Structural testing addresses these problems by measuring the number of code elements that are executed by a test suite. The code elements that are not covered by any execution are natural candidates for generating further test cases, and the measured coverage rate can be used to estimate the thoroughness of the test suite. Several empirical studies show that test suites achieving high coverage rates exhibit a high failure-detection ability. However, producing highly covering test suites automatically is hard, as certain code elements are executed only under complex conditions while others might not be reachable at all. In this thesis we propose Abstraction Refinement and Coarsening (ARC), a goal-oriented technique that combines static and dynamic software analysis to automatically generate test suites with high code coverage. At the core of our approach is an abstract program model that enables the synergistic application of the different analysis components. In ARC we integrate Dynamic Symbolic Execution (DSE) and abstraction refinement to precisely direct test generation towards the coverage goals and to detect infeasible elements. ARC includes a novel coarsening algorithm for improved scalability. We implemented ARC-B, a prototype tool that analyses C programs and produces test suites that achieve high branch coverage. Our experiments show that the approach effectively exploits the synergy between symbolic testing and reachability analysis, outperforming state-of-the-art test generation approaches. We evaluated ARC-B on industry-relevant software, and exposed previously unknown failures in a safety-critical software component.
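
    Illustrative only (ARC-B itself targets C programs): the sketch below shows the two situations the thesis addresses, a branch that random input generation almost never covers but that DSE reaches by solving the path constraint x*x - 6*x + 9 == 0 (i.e., x == 3), and a branch that reachability analysis can prove infeasible.

    class CoverageGoals {
        static String hard(int x) {
            if (x * x - 6 * x + 9 == 0) {
                return "reached only for x == 3";   // coverage goal for DSE
            }
            return "common path";
        }

        static String infeasible(int x) {
            if (x > 0 && x < 0) {        // provably unreachable branch:
                return "dead code";      // abstraction refinement flags it
            }
            return "live path";
        }
    }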

    Integration of Static and Dynamic Analysis Techniques for Checking Noninterference

    In this article, we present an overview of recent combinations of deductive program verification and automatic test generation on the one hand and static analysis on the other hand, with the goal of checking noninterference. Noninterference is the non-functional property that certain confidential information cannot leak to certain public output, i.e., the confidentiality of that information is always preserved. We define the noninterference properties that are checked along with the individual approaches that we use in different combinations. In one use case, our framework for checking noninterference employs deductive verification to automatically generate tests for noninterference violations with improved test coverage. In another use case, the framework provides two combinations of deductive verification with static analysis based on system dependence graphs to prove noninterference, thereby reducing the effort for deductive verification.
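
    A minimal sketch of what a generated noninterference test checks: run the program twice with equal public inputs but different secret inputs and compare the public outputs. The method `program` is a made-up example, not taken from the article.

    class NoninterferenceTest {
        static int program(int secret, int pub) {
            return secret > 0 ? pub + 1 : pub;   // leaks one bit of secret
        }

        public static void main(String[] args) {
            int pub = 42;
            int out1 = program(7, pub);    // first run:  secret =  7
            int out2 = program(-7, pub);   // second run: secret = -7
            if (out1 != out2) {
                System.out.println("violation: " + out1 + " != " + out2);
            }
        }
    }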

    Full Functional Verification of Linked Data Structures

    We present the first verification of full functional correctness for a range of linked data structure implementations, including mutable lists, trees, graphs, and hash tables. Specifically, we present the use of the Jahob verification system to verify formal specifications, written in classical higher-order logic, that completely capture the desired behavior of the Java data structure implementations (with the exception of properties involving execution time and/or memory consumption). Given that the desired correctness properties include intractable constructs such as quantifiers, transitive closure, and lambda abstraction, it is a challenge to successfully prove the generated verification conditions. Our verification system uses 'integrated reasoning' to split each verification condition into a conjunction of simpler subformulas, then applies a diverse collection of specialized decision procedures, first-order theorem provers, and, in the worst case, interactive theorem provers to prove each subformula. Techniques such as replacing complex subformulas with stronger but simpler alternatives, exploiting structure inherently present in the verification conditions, and, when necessary, inserting verified lemmas and proof hints into the imperative source code make it possible to seamlessly integrate all of the specialized decision procedures and theorem provers into a single powerful integrated reasoning system. By appropriately applying multiple proof techniques to discharge different subformulas, this reasoning system can effectively prove the complex and challenging verification conditions that arise in this context.
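
    A hedged sketch in the spirit of Jahob's comment-embedded higher-order specifications; the notation is approximated from descriptions of the system, not copied from it. An abstract set `content` is defined over the heap, and add() is verified against it.

    class Node { Node next; }

    class List {
        private Node first;
        /*: public specvar content :: objset;
            vardefs "content == set of nodes reachable from first via next"
        */

        /*: ensures "content = old content Un {n}" */
        void add(Node n) {
            n.next = first;   // prepending keeps all old nodes reachable
            first = n;        // and makes exactly n newly reachable
        }
    }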

    Combining Static and Dynamic Program Analysis Techniques for Checking Relational Properties

    This dissertation is situated in the field of formal software verification. It addresses the checking of relational properties of computer programs, i.e., properties that relate two or more program executions. The dissertation focuses on two specific relational properties: (1) noninterference and (2) whether one program is a slice of another. The noninterference property states that executing a program with the same public inputs produces the same public outputs, regardless of the secret inputs (e.g., a password); in other words, the secret inputs do not influence the public outputs. Program slicing is a technique for reducing a program by removing statements in such a way that a specified part of its behavior is preserved, e.g., the value of a variable at some instruction in the program. The dissertation provides frameworks that enable the user to analyze these two properties for a given program. It advances the state of the art in the verification of relational properties by contributing new approaches and by combining existing ones. The dissertation devotes one part to each of the two relational properties.

    Noninterference. The framework for checking noninterference contributes new approaches for automatic test generation and for program debugging, and combines them with approaches based on deductive verification and on program dependence graphs. The first new approach enables the automatic generation of noninterference tests. It lets the user search for violations of the noninterference property in a program and additionally provides a coverage criterion for the generated test suites that is suited to relational properties. The second new approach is a relational debugger for analyzing noninterference counterexamples. It adopts well-known concepts from program debugging and extends them to the analysis of relational properties. To support the user in proving the noninterference property, the framework combines an approach based on program dependence graphs with a logic-based approach that uses a theorem prover. Approaches based on program dependence graphs compute the dependences between the different parts of a program and check whether the public output depends on the secret input. Compared with logic-based approaches, they scale better, but they can report false alarms because they over-approximate the program dependences. Two further contributions of the framework are therefore combinations of the dependence-graph-based and the logic-based approach: (1) the dependence-graph-based approach simplifies the program, which is then checked by the logic-based approach, and (2) the logic-based approach proves that some of the dependences computed by the dependence-graph-based approach are over-approximations and can be removed from the analysis.
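
    Stated as a formula over two runs of a program P with public input l and secret inputs h_1 and h_2, the noninterference property checked by the framework reads (a standard formulation, not quoted from the dissertation):

        \forall\, l,\, h_1,\, h_2.\quad \mathit{out}_{\mathrm{low}}\bigl(P(l, h_1)\bigr) = \mathit{out}_{\mathrm{low}}\bigl(P(l, h_2)\bigr)
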
    Program slicing. The second part of the dissertation presents a framework for automatic program slicing. While most state-of-the-art slicing approaches perform only a syntactic program analysis, this framework also takes the program semantics into account and can therefore remove more statements. The first contribution of the framework is an approach to relational verification that has been extended to prove the correctness of a program slice, i.e., that the slice preserves the specified behavior of the original program. The advantage of using relational verification is that it runs automatically on two similar programs, which is exactly the situation for a slice candidate and the original program. Thus, unlike the few state-of-the-art approaches that take the program semantics into account, this approach is automatic. The second contribution of the framework is a new strategy for generating slice candidates by refining dynamic slices (slices that are valid for one input) with the counterexamples provided by relational verification.
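
    Illustrative only: a slice that only such a semantics-aware approach can justify. Syntactic slicing must keep `z` because `y` textually depends on it; relational verification can prove the candidate equivalent, since x == 0 forces y == 0 on every run.

    class SemanticSlice {
        static int original(int z) {
            int x = 0;
            int y = x * z;   // always 0, whatever z is
            return y;
        }

        static int candidate(int z) {
            return 0;        // slice candidate, provably equivalent
        }
    }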

    A Mechanized Proof of Kleene’s Theorem in Why3

    In this dissertation we present a mathematically minded development of the correctness proof of Kleene's theorem, i.e., of the conversion of regular expressions into finite automata on the basis of their equivalent expressive power. We formalise a functional implementation of the conversion algorithm and prove, in full detail, the soundness of its mathematical definition, working within the Why3 framework to develop a mechanically verified implementation. The motivation for this work is to test the feasibility of the deductive approach to the verification of software and to pave the way for similar proofs in the context of a static analysis approach to (object-oriented) programming, in particular for behavioural types in typestate settings, whose expressiveness stands between that of regular and context-free languages and which can therefore greatly benefit from mechanically certified implementations.
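
    Kleene's theorem is the classical equivalence below (a standard formulation, independent of the dissertation's exact notation); the development mechanizes the regular-expression-to-automaton direction:

        \forall\, e.\ \exists\, A.\ \mathcal{L}(e) = \mathcal{L}(A)
        \qquad\text{and, conversely,}\qquad
        \forall\, A.\ \exists\, e.\ \mathcal{L}(A) = \mathcal{L}(e)

    where e ranges over regular expressions, A over finite automata, and \mathcal{L} denotes the language denoted or accepted, respectively.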