
    Parallel symbolic state-space exploration is difficult, but what is the alternative?

    State-space exploration is an essential step in many modeling and analysis problems. Its goal is to find the states reachable from the initial state of a discrete-state model. The state space can be used to answer important questions, e.g., "Is there a dead state?" and "Can N become negative?", or as a starting point for more sophisticated investigations expressed in temporal logic. Unfortunately, the state space is often so large that ordinary explicit data structures and sequential algorithms cannot cope, prompting the exploration of (1) parallel approaches using multiple processors, from simple workstation networks to shared-memory supercomputers, to satisfy the large memory and runtime requirements, and (2) symbolic approaches using decision diagrams to encode the large structured sets and relations manipulated during state-space generation. Both approaches have merits and limitations. Parallel explicit state-space generation is challenging, but almost linear speedup can be achieved; however, the analysis is ultimately limited by the available memory and processors. Symbolic methods are a heuristic that can efficiently encode many, but not all, functions over a structured and exponentially large domain; here the pitfalls are subtler: their performance varies widely depending on the class of decision diagram chosen, the state-variable order, and obscure algorithmic parameters. As symbolic approaches are often much more efficient than explicit ones for practical models, we argue for the need to parallelize symbolic state-space generation algorithms, so that we can realize the advantages of both approaches. This is a challenging endeavor, as the most efficient symbolic algorithm, Saturation, is inherently sequential. We conclude by discussing challenges, efforts, and promising directions toward this goal.
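
    The sketch below illustrates, in plain Python, the explicit breadth-first reachability computation that this abstract contrasts with symbolic methods. The toy producer/consumer model and all names in it are hypothetical, not taken from the paper, and the code is not the Saturation algorithm (which works on decision diagrams rather than enumerating states); it is only a minimal sketch of explicit state-space generation and of the kind of "dead state" question mentioned above.

```python
# Minimal sketch: explicit state-space exploration by breadth-first search.
# The model (a bounded producer/consumer counter) is hypothetical and only
# serves to show the style of computation; symbolic approaches would encode
# the reachable set with decision diagrams instead of enumerating it.

from collections import deque

def successors(state):
    """Enumerate successor states of a toy model: (produced, free_slots)."""
    produced, slots = state
    nxt = []
    if slots > 0:                       # produce: occupy a free slot
        nxt.append((produced + 1, slots - 1))
    if produced > 0:                    # consume: release a slot
        nxt.append((produced - 1, slots + 1))
    return nxt

def explore(initial):
    """Explicit BFS from the initial state; returns the set of reachable states."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        for succ in successors(state):
            if succ not in seen:
                seen.add(succ)
                frontier.append(succ)
    return seen

reachable = explore((0, 3))
# Answers questions of the kind mentioned above, e.g. "is there a dead state?"
deadlocks = [s for s in reachable if not successors(s)]
print(len(reachable), "reachable states;", len(deadlocks), "dead states")
```

    For large models this enumeration is exactly what becomes infeasible, which is why the paper argues for parallelizing the symbolic alternative.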

    Presentation of the 9th Edition of the Model Checking Contest.

    The Model Checking Contest (MCC) is an annual competition of software tools for model checking. Tools must process a growing benchmark gathered from the whole community and may participate in various examinations: state-space generation, computation of global properties, computation of some upper bounds in the model, evaluation of reachability formulas, evaluation of CTL formulas, and evaluation of LTL formulas. For each examination and each model instance, participating tools are given up to 3600 s and 16 gigabytes of memory. Tool answers are then analyzed and compared with the results produced by the other competing tools to detect diverging answers (which are quite rare at this stage of the competition, and lead to penalties). For each examination, gold, silver, and bronze medals are awarded to the three best tools. CPU usage and memory consumption are also reported, which is valuable information for tool developers.

    Modelling molecular networks: relationships between different formalisms and levels of details

    This document is deliverable 1.3 of the French ANR project CALAMAR. It presents a study of the different formalisms used to model and analyze large molecular regulation networks, of their formal links in terms of mutual encodings and abstractions, and of the corresponding levels of detail they capture.

    Towards verification of computation orchestration

    Recently, a promising programming model called Orc has been proposed to support a structured way of orchestrating distributed Web Services. Orc is intuitive because it offers concise constructs to manage concurrent communication, time-outs, priorities, failure of Web Services or communication, and so forth. The semantics of Orc is precisely defined. However, no automatic verification tool is available to check critical properties of Orc programs. Our goal is to verify orchestration programs (written in the Orc language) which invoke web services to achieve certain goals. To investigate this problem and build useful tools, we explore two directions. Firstly, we define a Timed Automata semantics for the Orc language, which we prove semantically equivalent to the operational semantics of Orc. Consequently, Timed Automata models can be systematically constructed from Orc programs. The practical implication is that existing tool support for Timed Automata, e.g., Uppaal, can be used to simulate and model check Orc programs. An experimental tool has been implemented to automate this approach. Secondly, we encode the operational semantics of the Orc language in Constraint Logic Programming (CLP), which allows a systematic translation from Orc to CLP. Powerful constraint solvers like CLP(R) are then used to prove traditional safety properties and beyond, e.g., reachability, deadlock-freeness, lower or upper bounds of a time interval, etc. Counterexamples are generated when properties are not satisfied, and stepwise execution traces can be generated automatically as simulation steps. The two approaches give complementary insight into the verification problem of Web Service orchestration: the Timed Automata approach has merits in visual simulation and efficient verification supported by well-developed tools, while the CLP approach offers greater expressiveness in both modeling and verification. Together they give a complete solution for the simulation and verification of Computation Orchestration.
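
    To make the kind of property concrete, the sketch below checks reachability over a tiny transition system and rebuilds a counterexample trace when the property fails, which is the style of output the abstract describes. The "orchestration" states here are hypothetical, and the code uses a plain search in Python rather than Uppaal or CLP(R); it is only an illustrative sketch, not the authors' encoding.

```python
# Minimal sketch: reachability checking with counterexample trace generation.
# The tiny orchestration model (which of two services replied) is hypothetical.

# Hypothetical states of an orchestration and their successor states.
TRANSITIONS = {
    "start":    ["called_A", "called_B"],
    "called_A": ["got_A", "timeout"],
    "called_B": ["got_B", "timeout"],
    "got_A":    ["done"],
    "got_B":    ["done"],
    "timeout":  [],                     # no recovery modelled
    "done":     [],
}

def find_trace(init, bad):
    """Search for a path from init to a 'bad' state; None if it is unreachable."""
    parent = {init: None}
    stack = [init]
    while stack:
        state = stack.pop()
        if state == bad:
            trace = []
            while state is not None:    # rebuild the counterexample trace
                trace.append(state)
                state = parent[state]
            return list(reversed(trace))
        for succ in TRANSITIONS[state]:
            if succ not in parent:
                parent[succ] = state
                stack.append(succ)
    return None

# The safety property "the orchestration never times out" fails here; the
# returned path is the kind of counterexample a model checker would report.
print(find_trace("start", "timeout"))   # e.g. ['start', 'called_B', 'timeout']
```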

    Abstract Dependency Graphs for Model Verification
