15 research outputs found

    Test Sequences for Web Service Composition using CPN model

    Web service composition is the most mature and effective way to realize the rapidly changing requirements of business in service-oriented solutions. Testing the composition of web services is complex, due to their distributed nature and asynchronous behaviour. Colored Petri Nets (CPNs) provide a framework for the design, specification, validation and verification of systems. In this paper, the CPN model used for composition design verification is reused for test design. We propose an on-the-fly algorithm that generates a test suite covering all possible paths without redundancy. The prioritization of test sequences, test suite size and redundancy reduction are also addressed. The proposed technique was applied to an airline reservation system, and the generated test sequences were evaluated against three coverage criteria: Decision Coverage, Input Output Coverage and Transition Coverage. Keywords— CPN, MBT, web service composition testing, test case generation
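
    The paper's algorithm itself is not reproduced in the abstract. As a loose illustration of the idea, assuming a reachability graph extracted from the CPN model and a transition-coverage notion of redundancy (both are assumptions, not the paper's definitions), the following Python sketch enumerates paths depth-first and keeps only those that add new transition coverage:

        # Illustrative sketch only: DFS over a hand-coded reachability graph
        # (a stand-in for a CPN occurrence graph), keeping a path only if it
        # covers at least one transition not covered by earlier paths.
        GRAPH = {  # hypothetical airline-reservation state space
            "start":     [("searchFlight", "offers")],
            "offers":    [("selectFlight", "booking"), ("cancel", "end")],
            "booking":   [("pay", "confirmed"), ("cancel", "end")],
            "confirmed": [("issueTicket", "end")],
            "end":       [],
        }

        def generate_test_suite(graph, initial="start"):
            suite, covered = [], set()

            def dfs(state, path, seen):
                if not graph[state]:              # terminal state: path complete
                    new = set(path) - covered
                    if new:                       # redundancy reduction
                        covered.update(new)
                        suite.append(list(path))
                    return
                for transition, nxt in graph[state]:
                    if nxt in seen:               # cut loops so DFS terminates
                        continue
                    dfs(nxt, path + [transition], seen | {nxt})

            dfs(initial, [], {initial})
            return suite

        for sequence in generate_test_suite(GRAPH):
            print(" -> ".join(sequence))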

    Formale Verifikationsmethodiken für nichtlineare analoge Schaltungen

    The objective of this thesis is to develop new methodologies for the formal verification of nonlinear analog circuits. To this end, new approaches to the discrete modeling of analog circuits, the specification of analog circuit properties, and formal verification algorithms are introduced. Formal approaches to the verification of analog circuits have not yet been introduced into industrial design flows and are still subject to research. Formal verification proves specification conformance for all possible input conditions and all possible internal states of a circuit. Automatically proving that a model of the circuit satisfies a declarative machine-readable property specification is referred to as model checking. Equivalence checking proves the equivalence of two circuit implementations. Starting from the state of the art in modeling analog circuits for simulation-based verification, this thesis motivates the discrete modeling of analog circuits for state space-based formal verification methodologies. To improve the discrete modeling of analog circuits, a new trajectory-directed partitioning algorithm was developed in the scope of this thesis. This new approach partitions the state space parallel or orthogonal to the trajectories of the state-space dynamics. It thereby achieves a high accuracy of the successor relation while requiring fewer states for a discrete model of equal accuracy than the state-of-the-art hyperbox approach. Mapping the partitioning to a discrete analog transition structure (DATS) enables the application of formal verification algorithms. By analyzing digital specification concepts and the existing approaches to analog property specification, this thesis derives the requirements for a new specification language for analog properties. On the one hand, it shall meet the requirements for the formal specification of verification approaches applied to DATS models. On the other hand, its syntax shall be oriented towards natural-language phrases. Synthesizing these requirements, the analog specification language (ASL) was developed in the scope of this thesis. The model checking algorithms developed in combination with ASL for application to DATS models generated with the new trajectory-directed approach offer a significant enhancement over the state of the art. To prepare the transition from signal-based to state space-based verification methodologies, an approach was developed in the scope of this thesis to transfer transient simulation results from non-formal test bench simulation flows into a partial state space representation in the form of a DATS. As demonstrated by examples, the same ASL specification that was developed for formal model checking on complete discrete models could be evaluated without modification on transient simulation waveforms. An approach to counterexample generation for the formal ASL model checking methodology generates transition sequences from a defined starting state to a specification-violating state for inspection in transient simulation environments. Based on this counterexample generation, a new formal verification methodology using complete state space-covering input stimuli was developed. By conducting a transient simulation with these stimuli, the circuit adopts every state and transition that was visited during stimulus generation.
An alternative formal verification methodology retransfers the transient simulation responses into a DATS model and applies the ASL verification algorithms in combination with an ASL property specification. Moreover, the complete state space-covering input stimuli can be used to develop a formal equivalence checking methodology: the equivalence of two implementations can be proven for every internal state of both systems by comparing their transient simulation responses to the complete-coverage stimuli. To visually inspect the results of the newly introduced verification methodologies, an approach to dynamic state space visualization using multi-parallel particle simulation was developed. Because the particles are randomly distributed over the complete state space and move according to the state-space dynamics, this provides another perspective on the system's behavior, one that covers the state space and hence offers formal results. The prototype implementations of the formal verification methodologies developed in the scope of this thesis have been applied to several example circuits. The results for the new approaches to discrete modeling, specification, and verification algorithms all demonstrate that the new verification methodologies can be applied to complex circuit blocks and their properties.
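
    As a small, self-contained illustration of one idea above, counterexample generation over a DATS, the following Python sketch encodes a toy DATS as a directed graph of state-space regions and searches breadth-first for a transition sequence from a starting state to a property-violating state. The regions, transitions, and violation set are invented for the example; this is not the thesis's implementation:

        from collections import deque

        # Toy DATS: each state is a region of the analog state space; edges
        # form the successor relation between regions. All names are invented.
        SUCCESSORS = {
            "A": ["B", "C"],
            "B": ["D"],
            "C": ["D", "E"],
            "D": ["A"],
            "E": ["E"],          # absorbing region
        }
        VIOLATING = {"E"}        # regions where the hypothetical ASL property fails

        def counterexample(start):
            """BFS from `start` to any violating region; returns the region sequence."""
            parent = {start: None}
            queue = deque([start])
            while queue:
                state = queue.popleft()
                if state in VIOLATING:
                    path = []
                    while state is not None:
                        path.append(state)
                        state = parent[state]
                    return path[::-1]
                for nxt in SUCCESSORS[state]:
                    if nxt not in parent:
                        parent[nxt] = state
                        queue.append(nxt)
            return None          # property holds on all reachable regions

        print(counterexample("A"))   # e.g. ['A', 'C', 'E']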

    Explicit or Symbolic Translation of Linear Temporal Logic to Automata

    Formal verification techniques are growing increasingly vital for the development of safety-critical software and hardware in practice. Techniques such as requirements-based design and model checking for system verification have been successfully used to verify systems for air traffic control, airplane separation assurance, autopilots, CPU logic designs, life-support, medical equipment, and other functions that ensure human safety. Formal behavioral specifications written early in the system-design process and communicated across all design phases increase the efficiency, consistency, and quality of the system under development. We argue that to prevent introducing design or verification errors, it is crucial to test specifications for satisfiability. We advocate for the adaptation of a new sanity check via satisfiability checking for property assurance. Our focus here is on specifications expressed in Linear Temporal Logic (LTL). We demonstrate that LTL satisfiability checking reduces to model checking, and that satisfiability checks of the specification, its complement, and the conjunction of all properties should be performed as a first step before LTL model checking. We report on an experimental investigation of LTL satisfiability checking. We introduce a large set of rigorous benchmarks to enable objective evaluation of LTL-to-automaton algorithms in terms of scalability, performance, correctness, and size of the automata produced. For explicit model checking, we use the Spin model checker; we tested all LTL-to-explicit-automaton translation tools that were publicly available when we conducted our study. For symbolic model checking, we use CadenceSMV, NuSMV, and SAL-SMC both for LTL-to-symbolic-automaton translation and to perform the satisfiability check. Our experiments yield two major findings. First, scalability, correctness, and other debilitating performance issues afflict most LTL translation tools. Second, for LTL satisfiability checking, the symbolic approach is clearly superior to the explicit approach. Ironically, the explicit approach to LTL-to-automata had been heavily studied, while only one algorithm existed for LTL-to-symbolic automata. Since 1994, there had been essentially no new progress in encoding symbolic automata for BDD-based analysis. Therefore, we introduce a set of 30 symbolic automata encodings. The set consists of novel combinations of existing constructs, such as different LTL formula normal forms, with a novel transition-labeled symbolic automaton form, a new way to encode transitions, and new BDD variable orders based on algorithms for tree decomposition of graphs. An extensive set of experiments demonstrates that these encodings yield significant, sometimes exponential, improvements over the current standard encoding for symbolic LTL satisfiability checking. Building upon these ideas, we return to the explicit-automata domain and focus on the most common type of specification used in industrial practice: safety properties.
We show that we can exploit the inherent determinism of safety properties to create a set of 26 explicit automata encodings comprising novel aspects, including: state numbers versus state labels versus a state look-up table, finite versus infinite acceptance conditions, forward-looking versus backward-looking transition encodings, assignment-based versus BDD-based alphabet representation, state and transition minimization, edge abbreviation, trap-state elimination, and determinization either on-the-fly or up-front using the subset construction. We conduct an extensive experimental evaluation and identify an encoding that offers the best performance in explicit LTL model checking time and is consistently faster than the previous best explicit automaton encoding algorithm.
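
    The explicit-state side of this work ultimately rests on Büchi automaton emptiness checking: a formula is satisfiable exactly when the automaton for it accepts some word. As a minimal sketch of that core check, not any of the encodings studied above, the following Python code runs the classic nested depth-first search on a small invented Büchi automaton:

        # Nested DFS for Buchi emptiness: the language is non-empty iff some
        # accepting state lies on a cycle reachable from the initial state.
        # The automaton below is an invented toy example.
        EDGES = {0: [1], 1: [1, 2], 2: [0]}
        ACCEPTING = {1}
        INIT = 0

        def has_accepting_cycle():
            outer_seen, inner_seen = set(), set()

            def inner(s, target):            # search for a cycle back to `target`
                for t in EDGES.get(s, []):
                    if t == target:
                        return True
                    if t not in inner_seen:
                        inner_seen.add(t)
                        if inner(t, target):
                            return True
                return False

            def outer(s):
                outer_seen.add(s)
                for t in EDGES.get(s, []):
                    if t not in outer_seen and outer(t):
                        return True
                # post-order: launch the inner search from accepting states
                return s in ACCEPTING and inner(s, s)

            return outer(INIT)

        print("non-empty (satisfiable)" if has_accepting_cycle() else "empty")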

    Computing homomorphic program invariants

    Program invariants are properties that are true at a particular program point or points. Program invariants are often undocumented assertions made by a programmer that hold the key to reasoning correctly about a software verification task. Unlike contemporary research, in which program invariants are defined to hold for all control flow paths, we propose homomorphic program invariants, which hold with respect to a relevant equivalence class of control flow paths. For a problem-specific task, homomorphic program invariants can form stricter assertions. This work demonstrates that computing homomorphic program invariants is both useful and practical. Towards our goal of computing homomorphic program invariants, we deal with the challenge of the astronomical number of paths in programs. Since reasoning about a class of program paths must be efficient in order to scale to real-world programs, we extend prior work to efficiently divide program paths into equivalence classes with respect to control flow events of interest. Our technique reasons about inter-procedural paths, which we then use to determine how to modify a program binary to abort execution at the start of an irrelevant program path. With off-the-shelf components, we employ the state of the art in fuzzing and dynamic invariant detection tools to mine homomorphic program invariants. To aid in the task of identifying likely software anomalies, we develop human-in-the-loop analysis methodologies and a toolbox of human-centric static analysis tools. We present work to perform a statically-informed dynamic analysis that transitions efficiently from static analysis to dynamic analysis and leverages the strengths of each approach. To evaluate our approach, we apply our techniques to three case study audits of challenge applications from DARPA's Space/Time Analysis for Cybersecurity (STAC) program. In the final case study, we discover an unintentional vulnerability that causes a denial of service (DoS) in space and time, despite the challenge application having been hardened against static and dynamic analysis techniques.
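
    As a bare-bones illustration of dividing paths into equivalence classes with respect to control flow events of interest (the dissertation's technique is inter-procedural and operates on binaries; this sketch is neither), the following Python code projects each path onto a set of relevant events and buckets paths with identical projections. Paths and event names are invented:

        from collections import defaultdict

        # Invented example: paths as sequences of control-flow events.
        PATHS = [
            ["init", "log", "lock", "write", "unlock"],
            ["init", "lock", "write", "log", "unlock"],
            ["init", "lock", "unlock", "write"],
        ]
        RELEVANT = {"lock", "unlock", "write"}   # events of interest for the task

        def equivalence_classes(paths, relevant):
            """Group paths whose projections onto `relevant` events coincide."""
            classes = defaultdict(list)
            for path in paths:
                key = tuple(e for e in path if e in relevant)   # projection
                classes[key].append(path)
            return classes

        for key, members in equivalence_classes(PATHS, RELEVANT).items():
            print(key, "->", len(members), "path(s)")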

    Generalized asset integrity games

    Generalized assets represent a class of multi-scale adaptive state-transition systems with domain-oblivious performance criteria. The governance of such assets must proceed without exact specifications, objectives, or constraints. Decision making must rapidly scale in the presence of uncertainty, complexity, and intelligent adversaries. This thesis formulates an architecture for generalized asset planning. Assets are modelled as dynamical graph structures which admit topological performance indicators, such as dependability, resilience, and efficiency. These metrics are used to construct robust model configurations. A normalized compression distance (NCD) is computed between a given active/live asset model and a reference configuration to produce an integrity score. The utility derived from the asset is monotonically proportional to this integrity score, which represents the proximity to ideal conditions. The present work considers the interaction between an asset manager and an intelligent adversary, who act within a stochastic environment to control the integrity state of the asset. A generalized asset integrity game engine (GAIGE) is developed, which implements anytime algorithms to solve a stochastically perturbed two-player zero-sum game. The resulting planning strategies seek to stabilize deviations from minimax trajectories of the integrity score. Results demonstrate the performance and scalability of the GAIGE. This approach represents a first step towards domain-oblivious architectures for complex asset governance and anytime planning.
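
    The integrity score rests on the normalized compression distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is a compressed length. The following Python sketch computes it with zlib as the compressor and derives an integrity score as 1 - NCD; the choice of compressor, the model serializations, and the 1 - NCD mapping are illustrative assumptions, not the thesis's choices:

        import zlib

        def c(data: bytes) -> int:
            """Compressed length under zlib (a stand-in for the thesis's compressor)."""
            return len(zlib.compress(data, 9))

        def ncd(x: bytes, y: bytes) -> float:
            """Normalized compression distance: ~0 for identical, ~1 for unrelated."""
            cx, cy, cxy = c(x), c(y), c(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        # Toy serializations of an active asset model and a reference configuration.
        reference = b"nodes:A,B,C;edges:A-B,B-C,A-C" * 20
        active    = b"nodes:A,B,C;edges:A-B,B-C" * 20

        distance = ncd(active, reference)
        integrity = 1.0 - distance        # proximity to ideal conditions
        print(f"NCD={distance:.3f} integrity={integrity:.3f}")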

    Motion planning and control: a formal methods approach

    Control of complex systems satisfying rich temporal specifications has become an increasingly important research area in fields such as robotics, control, automotive, and manufacturing. Popular specification languages include temporal logics, such as Linear Temporal Logic (LTL) and Computational Tree Logic (CTL), which extend propositional logic to capture the temporal sequencing of system properties. The focus of this dissertation is on the control of high-dimensional systems and on timed specifications that impose explicit time bounds on the satisfaction of tasks. This work proposes and evaluates methods and algorithms for synthesizing provably correct control policies that address these scalability problems. Ideas and tools from formal verification, graph theory, and incremental computing are used to synthesize satisfying control strategies. Finite abstractions of the systems are generated and then composed with automata encoding the specifications. The first part of this dissertation introduces a sampling-based motion planning algorithm that combines long-term temporal logic goals with short-term reactive requirements. The specification has two parts: (1) a global specification given as an LTL formula over a set of static service requests that occur at the regions of a known environment, and (2) a local specification that requires servicing a set of dynamic requests that can be sensed locally during the execution. The proposed computational framework consists of two main ingredients: (a) an off-line sampling-based algorithm for the construction of a global transition system that contains a path satisfying the LTL formula, and (b) an on-line sampling-based algorithm to generate paths that service the local requests, while making sure that the satisfaction of the global specification is not affected. The second part of the dissertation focuses on stochastic systems with temporal and uncertainty constraints. A specification language called Gaussian Distribution Temporal Logic is introduced as an extension of Boolean logic that incorporates temporal evolution and noise mitigation directly into the task specifications. A sampling-based algorithm to synthesize control policies is presented that generates a transition system in the belief space and uses local feedback controllers to break the curse of history associated with belief space planning. Switching control policies are then computed using a product Markov Decision Process between the transition system and the Rabin automaton encoding the specification. The approach is evaluated in experiments using a camera network and ground robot. The third part of this dissertation focuses on the control of multi-vehicle systems with timed specifications and charging constraints. A richly expressive language called Time Window Temporal Logic (TWTL), which describes time-bounded specifications, is introduced. The temporal relaxation of TWTL formulae with respect to the deadlines of tasks is also discussed. The key ingredient of the solution is an algorithm to translate a TWTL formula into an annotated finite state automaton that encodes all possible temporal relaxations of the given formula. The annotated automata are composed with transition systems encoding the motion of all vehicles, and with charging models, to produce control strategies for all vehicles such that the overall system satisfies the mission specification. The methods are evaluated in simulation and in experimental trials with quadrotors and charging stations.
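
    A recurring step above is composing a finite abstraction with an automaton encoding the specification. As a simplified stand-in for the Rabin and annotated automata used in the dissertation, the following Python sketch builds the synchronous product of a small labeled transition system with a deterministic finite automaton over its state labels; all states, labels, and the toy specification are invented:

        # Product of a labeled transition system (TS) with a DFA over TS labels.
        # Accepting product states correspond to runs satisfying the toy spec.
        TS_EDGES = {"q0": ["q1"], "q1": ["q2", "q0"], "q2": ["q0"]}
        LABEL = {"q0": "idle", "q1": "pickup", "q2": "dropoff"}

        # DFA for "eventually pickup and then dropoff" (invented example).
        DFA_NEXT = {
            (0, "idle"): 0, (0, "pickup"): 1, (0, "dropoff"): 0,
            (1, "idle"): 1, (1, "pickup"): 1, (1, "dropoff"): 2,
            (2, "idle"): 2, (2, "pickup"): 2, (2, "dropoff"): 2,
        }
        DFA_ACCEPT = {2}

        def product(ts_init="q0", dfa_init=0):
            init = (ts_init, DFA_NEXT[(dfa_init, LABEL[ts_init])])
            states, edges, frontier = {init}, {}, [init]
            while frontier:
                ts_s, dfa_s = frontier.pop()
                succs = []
                for ts_t in TS_EDGES[ts_s]:
                    dfa_t = DFA_NEXT[(dfa_s, LABEL[ts_t])]   # DFA reads the label
                    succs.append((ts_t, dfa_t))
                    if (ts_t, dfa_t) not in states:
                        states.add((ts_t, dfa_t))
                        frontier.append((ts_t, dfa_t))
                edges[(ts_s, dfa_s)] = succs
            accepting = {s for s in states if s[1] in DFA_ACCEPT}
            return states, edges, accepting

        states, edges, accepting = product()
        print(len(states), "product states; accepting:", accepting)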

    Integrated application of compositional and behavioural safety analysis

    To address challenges arising in the safety assessment of critical engineering systems, research has recently focused on automating the synthesis of predictive models of system failure from design representations. In one approach, known as compositional safety analysis, system failure models such as fault trees and Failure Modes and Effects Analyses (FMEAs) are constructed from component failure models using a process of composition. Another approach has looked into automating system safety analysis via the application of formal verification techniques, such as model checking, on behavioural models of the system represented as state automata. So far, compositional safety analysis and formal verification have been developed separately and seen as two competing paradigms for model-based safety analysis. This thesis shows that it is possible to move the terms of this debate forward and use the two paradigms synergistically in the context of an advanced safety assessment process. The thesis develops a systematic approach in which compositional safety analysis provides the basis for the systematic construction and refinement of state automata that record the transition of a system from normal to degraded and failed states. These state automata can be further enhanced and then model-checked to verify the satisfaction of safety properties. Note that the development of such models in current practice is ad hoc and relies only on expert knowledge; in the proposed approach it is rationalised and systematised, a key contribution of this thesis. Overall, the approach combines the advantages of compositional safety analysis, such as simplicity, efficiency and scalability, with the benefits of formal verification, such as the ability to automatically verify safety requirements on dynamic models of the system, and leads to an improved model-based safety analysis process. In the context of this process, a novel generic mechanism is also proposed for modelling the detectability of errors which typically arise as a result of component faults and then propagate through the architecture. This mechanism is used to derive analyses that can aid decisions on appropriate detection and recovery mechanisms in the system model. The thesis starts with an investigation of the potential for useful integration of compositional and formal safety analysis techniques. The approach is then developed in detail, and guidelines for the analysis and refinement of system models are given. Finally, the process is evaluated in three case studies that were iteratively performed on increasingly refined and improved models of aircraft and automotive braking and cruise control systems. In the light of the results of these studies, the thesis concludes that the integration of compositional and formal safety analysis techniques is feasible and potentially useful in the design of safety-critical systems.
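
    As a minimal illustration of the compositional idea, not the thesis's method or any particular tool, the following Python sketch propagates a component failure mode through a small architecture: each component maps input deviations to output deviations (or absorbs them), and following a basic fault to the system boundary yields the system-level effect that would seed a fault tree or an FMEA row. Components, deviations, and topology are invented:

        # Tiny compositional failure-propagation sketch. Each component maps a
        # deviation on its input to a deviation on its output; an unmapped
        # deviation is treated as absorbed (e.g. detected and masked).
        PROPAGATE = {
            "sensor":     {"stuck": "wrong_value"},
            "controller": {"wrong_value": "wrong_command", "loss": "no_command"},
            "actuator":   {"wrong_command": "unintended_motion",
                           "no_command": "loss_of_function"},
        }
        TOPOLOGY = {"sensor": "controller", "controller": "actuator",
                    "actuator": None}       # actuator output is the system level

        def system_effect(component, fault):
            """Follow a basic fault through the architecture to its system effect."""
            deviation = fault
            while component is not None:
                deviation = PROPAGATE[component].get(deviation)
                if deviation is None:
                    return None              # deviation absorbed along the way
                component = TOPOLOGY[component]
            return deviation

        print(system_effect("sensor", "stuck"))   # -> unintended_motion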

    Nonparametric estimation of first passage time distributions in flowgraph models

    Statistical flowgraphs represent multistate semi-Markov processes using integral transforms of the transition time distributions between adjacent states; these are combined algebraically and inverted to derive parametric estimates for first passage time distributions between nonadjacent states. This dissertation extends previous work in the field by developing estimation methods for flowgraphs using empirical transforms based on sample data, with no assumption of specific parametric probability models for transition times. We prove strong convergence of empirical flowgraph results to the exact parametric results; develop alternatives for the numerical inversion of empirical transforms and compare them in terms of computational complexity, accuracy, and ability to determine error bounds; discuss (with examples) the difficulties of determining confidence bands for distribution estimates obtained in this way; develop confidence intervals for moment-based quantities such as the mean; and show how methods based on empirical transforms can be modified to accommodate censored data. Several applications of the nonparametric method, based on reliability and survival data, are presented in detail.
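
    As a bare-bones illustration of the empirical-transform idea, not the dissertation's estimators or inversion methods, the sketch below forms the empirical Laplace-Stieltjes transform L(s) = (1/n) * sum_i exp(-s * t_i) from simulated transition-time samples, multiplies the transforms of two transitions in series (the flowgraph rule for a series path), and reads off the mean first passage time from a numerical derivative at s = 0:

        import math, random

        random.seed(1)
        # Simulated transition-time samples for two transitions in series.
        t12 = [random.expovariate(2.0) for _ in range(5000)]        # mean 0.5
        t23 = [random.gammavariate(2.0, 0.5) for _ in range(5000)]  # mean 1.0

        def empirical_lst(samples):
            """Empirical Laplace-Stieltjes transform of the sample distribution."""
            n = len(samples)
            return lambda s: sum(math.exp(-s * t) for t in samples) / n

        L12, L23 = empirical_lst(t12), empirical_lst(t23)
        L13 = lambda s: L12(s) * L23(s)   # series flowgraph: transforms multiply

        # Mean first passage time 1 -> 3 is -d/ds L13(s) at s = 0
        # (central finite difference as a crude stand-in for inversion).
        h = 1e-4
        mean_fpt = -(L13(h) - L13(-h)) / (2 * h)
        print(f"estimated mean first passage time: {mean_fpt:.3f}  (true ~1.5)")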

    Timed model-based programming: executable specifications for robust mission-critical sequences

    Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2003. Includes bibliographical references (p. 195-204). There is growing demand for high-reliability embedded systems that operate robustly and autonomously in the presence of tight real-time constraints. For robotic spacecraft, robust plan execution is essential during time-critical mission sequences, due to the very short time available for recovery from anomalies. Traditional approaches to encoding these sequences can lead to brittle behavior under off-nominal execution conditions, due to the high level of complexity in the control specification required to manage the complex spacecraft system interactions. This work describes timed model-based programming, a novel approach for encoding and robustly executing mission-critical spacecraft sequences. The timed model-based programming approach addresses the issues of sequence complexity and unanticipated low-level system interactions by allowing control programs to directly read or write "hidden" states of the plant, that is, states that are not directly observable or controllable. It is then the responsibility of the program's execution kernel to map between hidden states and the plant sensors and control variables. This mapping is performed automatically by a deductive controller using a common-sense plant model, freeing the programmer from the error-prone process of reasoning through a complex set of interactions under a range of possible failure situations. Time is central to the execution of mission-critical sequences; a robust executive must consider time in its control and behavior models, in addition to reactively managing complexity. In timed model-based programming, control programs express goals and constraints in terms of both system state and time. Plant models capture the underlying behavior of the system components, including nominal and off-nominal modes, probabilistic transitions, and timed effects such as state transition latency. The contributions of this work are threefold. First, a semantic specification of the timed model-based programming approach is provided. The execution semantics of a timed model-based program are defined in terms of legal state evolutions of a physical plant, represented as a factored Partially Observable Semi-Markov Decision Process. The second contribution is the definition of graphical and textual languages for encoding timed control programs and plant models. The adoption of a visual programming paradigm allows timed model-based programs to be specified and readily inspected by the systems engineers in charge of designing the mission-critical sequences. The third contribution is the development of a Timed Model-based Executive, which takes as input a timed control program and executes it, using timed plant models to track states, diagnose faults and generate control actions. The Timed Model-based Executive has been implemented and demonstrated on a representative spacecraft scenario for Mars entry, descent and landing. By Michel Donald Ingham, Sc.D.
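
    As a very small invented illustration of the kind of plant model described above (modes, probabilistic transitions, and noisy sensing), the following Python sketch performs one step of belief update over the hidden mode of a valve after a command and an observation, which is the flavor of state tracking a deductive controller performs; it is not the executive's algorithm, and timed effects are omitted:

        # Toy mode estimation over a hidden plant state (invented example).
        MODES = ["closed", "open", "stuck_closed"]

        # P(next_mode | mode, command): nominal and off-nominal transitions.
        TRANS = {
            ("closed", "open_cmd"):       {"open": 0.95, "stuck_closed": 0.05},
            ("open", "open_cmd"):         {"open": 1.0},
            ("stuck_closed", "open_cmd"): {"stuck_closed": 1.0},
        }

        # P(observation | mode): a flow sensor downstream of the valve.
        OBS = {"closed":       {"no_flow": 0.99, "flow": 0.01},
               "open":         {"no_flow": 0.05, "flow": 0.95},
               "stuck_closed": {"no_flow": 0.99, "flow": 0.01}}

        def update(belief, command, observation):
            """One predict-then-correct step over the hidden mode."""
            predicted = {m: 0.0 for m in MODES}
            for m, p in belief.items():
                for m2, q in TRANS.get((m, command), {m: 1.0}).items():
                    predicted[m2] += p * q
            corrected = {m: predicted[m] * OBS[m][observation] for m in MODES}
            z = sum(corrected.values())
            return {m: v / z for m, v in corrected.items()}

        belief = {"closed": 1.0, "open": 0.0, "stuck_closed": 0.0}
        belief = update(belief, "open_cmd", "no_flow")   # commanded open, no flow seen
        print({m: round(p, 3) for m, p in belief.items()})  # stuck_closed now likely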

    Artificial general intelligence: Proceedings of the Second Conference on Artificial General Intelligence, AGI 2009, Arlington, Virginia, USA, March 6-9, 2009

    Artificial General Intelligence (AGI) research focuses on the original and ultimate goal of AI: to create broad human-like and transhuman intelligence by exploring all available paths, including theoretical and experimental computer science, cognitive science, neuroscience, and innovative interdisciplinary methodologies. Due to the difficulty of this task, for the last few decades the majority of AI researchers have focused on what has been called narrow AI, the production of AI systems displaying intelligence regarding specific, highly constrained tasks. In recent years, however, more and more researchers have recognized the necessity, and feasibility, of returning to the original goals of the field. Increasingly, there is a call for a transition back to confronting the more difficult issues of human-level intelligence and, more broadly, artificial general intelligence.