    Combining Static and Dynamic Program Analysis Techniques for Checking Relational Properties

    This dissertation is situated in the area of formal software verification. It deals with checking relational properties of computer programs, i.e., properties that consider two or more program executions. The dissertation focuses on two specific relational properties: (1) noninterference and (2) whether one program is a slice of another. The noninterference property states that executing a program with the same public inputs produces the same public outputs, independently of the secret inputs (e.g., a password); that is, the secret inputs do not influence the public outputs. Program slicing is a technique for reducing a program by removing statements in such a way that a specified part of the program's behavior is preserved, e.g., the value of a variable at a given instruction in the program. The dissertation provides frameworks that enable the user to analyze these two properties for a given program. It extends the state of the art in the verification of relational properties by providing new approaches and by combining existing ones. The dissertation contains one part for each of the two relational properties considered.

    Noninterference. The framework for checking noninterference provides new approaches for automatic test generation and for program debugging, and combines them with approaches based on deductive verification and on program dependence graphs. The first new approach enables the automatic generation of noninterference tests: it lets the user search for violations of the noninterference property in a program and also provides a coverage criterion for the generated test suites that is suited to relational properties. The second new approach is a relational debugger for analyzing noninterference counterexamples; it takes well-known concepts of program debugging and extends them to the analysis of relational properties. To support the user in proving the noninterference property, the framework combines an approach based on program dependence graphs with a logic-based approach that uses a theorem prover. Approaches based on program dependence graphs compute the dependences between the different parts of a program and check whether the public output depends on the secret input. They scale better than logic-based approaches but can report false alarms because they over-approximate the program dependences. Two further contributions of the framework are therefore combinations of the two kinds of approaches: (1) the dependence-graph-based approach simplifies the program, which is then checked by the logic-based approach, and (2) the logic-based approach proves that some dependences computed by the dependence-graph-based approach are over-approximations and can be removed from the analysis.
    Program slicing. The second part of the dissertation presents a framework for automatic program slicing. While most state-of-the-art slicing approaches perform only a syntactic program analysis, this framework also considers the program's semantics and can therefore remove more statements. The first contribution of the framework is a relational-verification approach extended to prove the correctness of a program slice, i.e., that the slice preserves the specified behavior of the original program. The advantage of using relational verification is that it runs automatically on two similar programs, which is exactly the situation for a slice candidate and the original program. Thus, unlike the few state-of-the-art approaches that consider program semantics, this approach is automatic. The second contribution of the framework is a new strategy for generating slice candidates by refining dynamic slices (slices valid for a single input) using the counterexamples delivered by relational verification.
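    To make the checked property concrete, the following is a minimal Java illustration of a noninterference violation; the program and the variable names are illustrative, not taken from the dissertation.

        // Minimal illustration of a noninterference violation: the secret
        // input influences the public output, so an observer of the return
        // value learns something about the secret. (Illustrative only.)
        public class Leak {
            // 'secret' is the confidential (high) input, 'pub' the public (low) input.
            static int check(int secret, int pub) {
                if (secret > 0) {
                    return pub + 1; // public output depends on the secret branch
                }
                return pub;
            }

            public static void main(String[] args) {
                // Same public input, different secrets -> different public
                // outputs: a counterexample to noninterference.
                System.out.println(check(1, 42));  // prints 43
                System.out.println(check(-1, 42)); // prints 42
            }
        }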

    Node coarsening calculi for program slicing

    Several approaches to reverse engineering and re-engineering are based upon program slicing. Unfortunately, for large systems, such as those which typically form the subject of reverse engineering activities, the space and time requirements of slicing can be a barrier to successful application. Faced with this problem, several authors have found it helpful to merge control flow graph (CFG) nodes, thereby improving the space and time requirements of standard slicing algorithms. The node-merging process essentially creates a 'coarser' version of the original CFG. The paper introduces a theory for defining control flow graph node coarsening calculi. The theory formalizes properties of interest when coarsening is used as a precursor to program slicing, and is illustrated with a case study of a coarsening calculus, which is proved to have the desired properties of sharpness and consistency.
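    As a rough illustration of the idea, the Java sketch below merges chains of CFG nodes in which a node has exactly one successor and that successor has exactly one predecessor. This is one plausible coarsening rule under our own assumptions, not the calculus that the paper proves sharp and consistent.

        import java.util.*;

        // Sketch of CFG node coarsening: repeatedly merge a node with its
        // unique successor when that successor has no other predecessor,
        // producing a coarser graph with fewer nodes.
        class CfgNode {
            final List<String> statements = new ArrayList<>();
            final Set<CfgNode> successors = new HashSet<>();
            final Set<CfgNode> predecessors = new HashSet<>();
        }

        class Coarsener {
            static void coarsen(Set<CfgNode> nodes) {
                boolean changed = true;
                while (changed) {
                    changed = false;
                    for (CfgNode n : new ArrayList<>(nodes)) {
                        if (!nodes.contains(n)) continue; // already merged away
                        if (n.successors.size() != 1) continue;
                        CfgNode succ = n.successors.iterator().next();
                        if (succ == n || succ.predecessors.size() != 1) continue;
                        // Merge succ into n: absorb its statements and edges.
                        n.statements.addAll(succ.statements);
                        n.successors.clear();
                        n.successors.addAll(succ.successors);
                        for (CfgNode s : succ.successors) {
                            s.predecessors.remove(succ);
                            s.predecessors.add(n);
                        }
                        nodes.remove(succ);
                        changed = true;
                    }
                }
            }
        }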

    Controlling Concurrent Change - A Multiview Approach Toward Updatable Vehicle Automation Systems

    The development of SAE Level 3+ vehicles [SAE, 2014] poses new challenges not only for the functional development, but also for design and development processes. Such systems consist of a growing number of interconnected functional, hardware, and software components, making safety design increasingly difficult. In order to cope with emergent behavior at the vehicle level, thorough systems engineering becomes a key requirement, as it enables traceability between different design viewpoints. Ensuring traceability is a key factor towards efficient validation and verification of such systems. Formal models can in turn assist in keeping track of how the different viewpoints relate to each other and how the interplay of components affects the overall system behavior. Based on experience from the project Controlling Concurrent Change, this paper presents an approach towards model-based integration and verification of a cause-effect chain for a component-based vehicle automation system. It reasons on a cross-layer model of the resulting system, which covers the necessary aspects of a design in individual architectural views, e.g. safety and timing. In the synthesis stage of integration, our approach is capable of inserting enforcement mechanisms into the design to ensure adherence to the model. We present a use case description for an environment perception system, starting with a functional architecture that forms the basis for componentization of the cause-effect chain. By tying the vehicle architecture to the cross-layer integration model, we are able to map the reasoning done during verification to vehicle behavior.
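    The enforcement mechanisms mentioned above can be pictured as small runtime monitors derived from, e.g., the timing view of the model. The Java sketch below shows one such monitor under our own assumptions; the class, the deadline check, and the safe-default fallback are illustrative, not the project's actual mechanism.

        import java.util.function.Supplier;

        // Illustrative runtime enforcement mechanism: a per-activation
        // deadline check that degrades to a safe default instead of
        // propagating a late result down the cause-effect chain.
        public class DeadlineEnforcer {
            private final long deadlineNanos;

            DeadlineEnforcer(long deadlineNanos) {
                this.deadlineNanos = deadlineNanos;
            }

            <T> T enforce(Supplier<T> activation, T safeDefault) {
                long start = System.nanoTime();
                T result = activation.get();
                if (System.nanoTime() - start > deadlineNanos) {
                    // Model violation detected at runtime: report and degrade.
                    System.err.println("deadline overrun; using safe default");
                    return safeDefault;
                }
                return result;
            }
        }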

    Integration of Static and Dynamic Analysis Techniques for Checking Noninterference

    In this article, we present an overview of recent combinations of deductive program verification and automatic test generation on the one hand with static analysis on the other hand, with the goal of checking noninterference. Noninterference is the non-functional property that certain confidential information cannot leak to certain public output, i.e., the confidentiality of that information is always preserved. We define the noninterference properties that are checked, along with the individual approaches that we use in different combinations. In one use case, our framework for checking noninterference employs deductive verification to automatically generate tests for noninterference violations with improved test coverage. In another use case, the framework provides two combinations of deductive verification with static analysis based on system dependence graphs to prove noninterference, thereby reducing the effort of deductive verification.
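    One way to picture the generated tests is as two-run tests in the style of self-composition: execute the program twice with equal public (low) inputs but different secret (high) inputs and compare the public outputs. Below is a minimal sketch with a placeholder program under test; the driver is our own illustration, not the framework's test generator.

        import java.util.Random;

        // Two-run noninterference test: equal low inputs, differing high
        // inputs; the public outputs must agree.
        public class NoninterferenceTest {
            // Placeholder for the program under test.
            static int programUnderTest(int high, int low) {
                return low * 2; // noninterferent: the output ignores 'high'
            }

            public static void main(String[] args) {
                Random rnd = new Random(42);
                for (int i = 0; i < 1000; i++) {
                    int low = rnd.nextInt(100);
                    int out1 = programUnderTest(rnd.nextInt(100), low);
                    int out2 = programUnderTest(rnd.nextInt(100), low);
                    if (out1 != out2) {
                        throw new AssertionError("noninterference violated for low=" + low);
                    }
                }
                System.out.println("no violation found in 1000 two-run tests");
            }
        }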

    Using theorem provers to increase the precision of dependence analysis for information flow control

    Information flow control (IFC) is a category of techniques for enforcing information flow properties. In this paper we present the Combined Approach, a novel IFC technique that combines a scalable system-dependence-graph-based (SDG-based) approach with a precise logic-based approach that uses a theorem prover. The Combined Approach has increased precision compared with the SDG-based approach on its own, without sacrificing its scalability. For every potential illegal information flow reported by the SDG-based approach, the Combined Approach automatically generates proof obligations that, if valid, prove that there is no program path on which the reported information flow can happen. These proof obligations are then relayed to the logic-based approach. We also show how the SDG-based approach can provide additional information to the theorem prover that helps decrease the verification effort. Moreover, we present a prototypical implementation of the Combined Approach that uses the tools JOANA and KeY as the SDG-based and logic-based approaches, respectively.
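    The generated proof obligations can be illustrated with an information-flow contract: for a flow that the SDG-based approach reports from a high input to the result, the obligation states that the result is determined by the low inputs alone. The sketch below uses a JML-style determines clause in the spirit of KeY's information-flow specifications; treat the exact clause syntax and the example method as our assumptions.

        public class Example {
            /*@ normal_behavior
              @ determines \result \by low;
              @*/
            public static int compute(int high, int low) {
                int x = high;
                x = x - high;   // the SDG-based approach reports a dependence on 'high'
                return low + x; // but x == 0 on every path, so the flow is a false alarm
            }
        }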

    A review of slicing techniques in software engineering

    A program slice is the part of a program that may affect the values computed at some point of interest during its execution. Such a point is known as the slicing criterion and is generally identified by a location in the program coupled with a subset of the program's variables. The process of computing program slices is called program slicing. Weiser gave the original definition of a program slice in 1979. Since that first definition, many ideas related to program slices have been formulated, along with numerous techniques to compute them. Meanwhile, a distinction between static and dynamic slices was also made. Program slicing is now among the most useful techniques for fetching the elements of a program that are related to a particular computation. A large number of variants of program slicing have been analyzed, along with algorithms to compute the slices. Model-based slicing splits large software architectures into smaller sub-models during early stages of the SDLC. Software testing is regarded as an activity to evaluate the functionality and features of a system; it verifies whether the system meets its requirements. A common practice now is to extract sub-models out of large models based upon a slicing criterion, and model-based slicing is used to extract the desired sub-model. This survey focuses on slicing techniques in numerous programming paradigms, such as web applications, object-oriented software, and component-based software. Owing to the efforts of various researchers, the technique has been extended to numerous other areas, including program debugging, program integration and analysis, software testing and maintenance, re-engineering, and reverse engineering. The survey also portrays the role of model-based slicing and the various techniques used to compute slices.
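    For readers new to the area, here is a small illustration in the spirit of Weiser's definition (our own example, not from the survey): slicing the program below with respect to the criterion (the final print statement, variable product) keeps only the statements that may affect product.

        public class SliceDemo {
            public static void main(String[] args) {
                int n = 5;
                int sum = 0;                 // not in the slice: cannot affect 'product'
                int product = 1;
                for (int i = 1; i <= n; i++) {
                    sum += i;                // not in the slice
                    product *= i;
                }
                System.out.println(sum);     // not in the slice
                System.out.println(product); // slicing criterion
            }
        }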

    The University of Arizona program in solid propellants

    The University of Arizona program is aimed at introducing scientific rigor to the predictability and quality assurance of composite solid propellants. Two separate approaches are followed: to use modern analytical techniques to experimentally study carefully controlled propellant batches to discern trends in mixing, casting, and cure; and to examine a vast bank of data that has fairly detailed information on the ingredients, processing, and rocket firing results. The experimental and analytical work is described briefly. The principal findings were that: (1) pre- (dry) blending of the coarse and fine ammonium perchlorate can significantly improve the uniformity of mixing; (2) the Fourier-transformed IR spectra of the uncured and cured polymer carry valuable data on the state of the fuel; (3) there are considerable non-uniformities in the propellant slurry composition near the solid surfaces (blades, walls) compared to the bulk slurry; and (4) in situ measurements of slurry viscosity taken continuously during mixing can give a good indication of the state of the slurry. Several important observations from the study of the data bank are discussed.

    A Regression Test Selection Technique for Graphical User Interfaces

    Regression testing is a quality control measure to ensure that the newly modified part of the software still complies with its specified requirements and that the unmodified part has not been affected by the maintenance activity. It is an important and expensive activity during the software maintenance process, and its purpose is to ensure quality and reliability in modified software. Regression test selection techniques focus on reusing the existing test suite of a previous version for a modified program. Many such techniques have been proposed for conventional and object-oriented software, but there has been little discussion of applying them to graphical user interfaces (GUIs); this thesis addresses that gap. GUIs have characteristics different from traditional software, and conventional testing techniques do not directly apply to them. Unlike most previous selective-retest techniques, this thesis focuses on developing an event-driven regression test selection technique for GUIs. It defines an event dependence graph (EDG) to capture the interactions and relationships of the events within GUI components, develops an algorithm to construct the EDG for a GUI, and presents the GUI modeling structure and its selective-retest technique. An algorithm is given to determine and generate a modified test suite automatically for a GUI based on its original version. Experiments on an implementation of this solution are presented, and newly found challenges in applying it to an established GUI application are discussed. Finally, feasibility and future areas of research are addressed based on the findings made during the implementation.
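    The event dependence graph and the selective-retest step might look roughly as follows in Java; the representation and the transitive-closure selection rule are our reading of the abstract, not the thesis's exact definitions.

        import java.util.*;

        // Sketch of an event dependence graph (EDG): nodes are GUI events,
        // an edge records that an event depends on another having occurred.
        class EventDependenceGraph {
            private final Map<String, Set<String>> dependsOn = new HashMap<>();

            void addDependence(String event, String prerequisite) {
                dependsOn.computeIfAbsent(event, k -> new HashSet<>()).add(prerequisite);
            }

            // Selects the tests (event sequences) that exercise a modified
            // event or anything transitively dependent on it.
            Set<List<String>> selectRetests(Set<List<String>> suite, String modifiedEvent) {
                Set<String> affected = new HashSet<>();
                affected.add(modifiedEvent);
                boolean grew = true;
                while (grew) { // transitive closure over the dependences
                    grew = false;
                    for (Map.Entry<String, Set<String>> e : dependsOn.entrySet()) {
                        if (!affected.contains(e.getKey())
                                && e.getValue().stream().anyMatch(affected::contains)) {
                            affected.add(e.getKey());
                            grew = true;
                        }
                    }
                }
                Set<List<String>> selected = new HashSet<>();
                for (List<String> test : suite) {
                    if (test.stream().anyMatch(affected::contains)) selected.add(test);
                }
                return selected;
            }
        }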

    Expert Elicitation for Reliable System Design

    This paper reviews the role of expert judgement to support reliability assessments within the systems engineering design process. Generic design processes are described to give the context, and a discussion is given about the nature of the reliability assessments required in the different systems engineering phases. It is argued that, as far as meeting reliability requirements is concerned, the whole design process is more akin to a statistical control process than to a straightforward statistical problem of assessing an unknown distribution. This leads to features of the expert judgement problem in the design context which are substantially different from those seen, for example, in risk assessment. In particular, the role of experts in problem structuring and in developing failure mitigation options is much more prominent, and there is a need to take into account the reliability potential for future mitigation measures downstream in the system life cycle. An overview is given of the stakeholders typically involved in large scale systems engineering design projects, and this is used to argue the need for methods that expose potential judgemental biases in order to generate analyses that can be said to provide rational consensus about uncertainties. Finally, a number of key points are developed with the aim of moving toward a framework that provides a holistic method for tracking reliability assessment through the design process.
    Comment: This paper is commented on in [arXiv:0708.0285], [arXiv:0708.0287], and [arXiv:0708.0288]; rejoinder in [arXiv:0708.0293]. Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).