13 research outputs found

    Random walk based heuristic algorithms for distributed memory model checking

    Technical report. Model checking techniques suffer from the state space explosion problem: as the size of the system being verified increases, the total state space of the system grows exponentially. Methods devised to tackle this problem include partial order reduction, symmetry reduction, hash compaction, and selective state caching. One approach that has gained interest in recent years is the parallelization of model checking algorithms. A random walk on the state space has some attractive properties, the most important of which is that it lends itself naturally to parallelization. Random walk is a low-overhead, partial search method; breadth-first search, on the other hand, is a high-overhead, full search technique. In this article, we propose various heuristic algorithms that combine random walks on the state space with bounded breadth-first search in a parallel context. These algorithms are in the process of being incorporated into a distributed memory model checker.
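    To make the combination concrete, the following is a minimal single-process sketch of a random walk interleaved with bounded breadth-first search over an invented toy state space. The transition function, error predicate, and all parameters (walk_len, bfs_every, bfs_depth) are hypothetical stand-ins, not the report's algorithms; in the distributed setting, each rank would run the same loop with its own random seed.

```python
import random
from collections import deque

# Toy state space standing in for the model checker's transition relation;
# the successor function and the error predicate are invented for this sketch.
def successors(state):
    return [(state * 3 + 1) % 10000, (state * 7 + 3) % 10000]

def is_error(state):
    return state == 4321          # stand-in for a property violation

def bounded_bfs(root, depth):
    """Exhaustive breadth-first search, but only to a fixed depth from root."""
    frontier, seen = deque([(root, 0)]), {root}
    while frontier:
        state, d = frontier.popleft()
        if is_error(state):
            return state
        if d < depth:
            for succ in successors(state):
                if succ not in seen:
                    seen.add(succ)
                    frontier.append((succ, d + 1))
    return None

def random_walk_with_bfs(start, walk_len=1000, bfs_every=50, bfs_depth=3, seed=0):
    """Low-overhead random walk, periodically anchored by a bounded full search."""
    rng, state = random.Random(seed), start
    for step in range(walk_len):
        if step % bfs_every == 0:                  # periodic bounded BFS
            hit = bounded_bfs(state, bfs_depth)
            if hit is not None:
                return hit
        state = rng.choice(successors(state))      # cheap partial-search step
    return None

# Each worker in a distributed run would use a different seed:
for seed in range(4):
    print(seed, random_walk_with_bfs(0, seed=seed))
```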

    Actor Network Procedures as Psi-calculi for Security Ceremonies

    The actor network procedures of Pavlovic and Meadows are a recent graphical formalism developed for describing security ceremonies and for reasoning about their security properties. The present work studies the relation of actor network procedures (ANP) to the recent psi-calculi framework. Psi-calculi form a parametric framework in which calculi such as the spi- or applied pi-calculus arise as instances. Psi-calculi are operational and largely non-graphical, but have a strong foundation in the theory of nominal sets and process algebras. One purpose of the present work is to give a semantics to ANP through psi-calculi. Another is to give a graphical language for a psi-calculus instance for security ceremonies. At the same time, this work provides more insight into the details of the ANP formalization and its graphical representation.
    Comment: In Proceedings GraMSec 2014, arXiv:1404.163
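    As a very rough illustration of the kind of object such a formalism manipulates, the sketch below encodes a heavily simplified agent syntax as plain Python data. Real psi-calculi are parametric in their datatypes for terms, conditions, and assertions, and none of the nominal-set machinery is modelled here; all names are invented.

```python
from dataclasses import dataclass

# Hypothetical, heavily simplified psi-calculus-style agent syntax.

@dataclass(frozen=True)
class Nil:                 # the inactive agent 0
    pass

@dataclass(frozen=True)
class Output:              # chan<msg>.cont : send msg on chan, then continue
    chan: str
    msg: str
    cont: object

@dataclass(frozen=True)
class Input:               # chan(binder).cont : receive into binder
    chan: str
    binder: str
    cont: object

@dataclass(frozen=True)
class Par:                 # left | right : parallel composition
    left: object
    right: object

@dataclass(frozen=True)
class New:                 # (new name) body : name is private to body
    name: str
    body: object

# One message exchange, roughly one arrow of an ANP-style diagram:
alice = Output("c", "nonce", Nil())      # Alice sends a nonce on channel c
bob = Input("c", "x", Nil())             # Bob receives it into x
ceremony = New("c", Par(alice, bob))     # c is private to the two parties
print(ceremony)
```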

    Evaluation of a Simple, Scalable, Parallel Best-First Search Strategy

    Large-scale parallel clusters composed of commodity processors are increasingly available, enabling the use of vast processing capabilities and distributed RAM to solve hard search problems. We investigate Hash-Distributed A* (HDA*), a simple approach to parallel best-first search that asynchronously distributes and schedules work among processors based on a hash function of the search state. We use this approach to parallelize the A* algorithm in an optimal sequential version of the Fast Downward planner, as well as a 24-puzzle solver. The scaling behavior of HDA* is evaluated experimentally on a shared-memory multicore machine with 8 cores, a cluster of commodity machines using up to 64 cores, and large-scale high-performance clusters using up to 2400 processors. We show that this approach scales well, allowing the effective utilization of large amounts of distributed memory to optimally solve problems that require terabytes of RAM. We also compare HDA* to Transposition-table Driven Scheduling (TDS), a hash-based parallelization of IDA*, and show that, in planning, HDA* significantly outperforms TDS. A simple hybrid which combines HDA* and TDS to exploit the strengths of both algorithms is proposed and evaluated.
    Comment: in press, to appear in Artificial Intelligence
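    The core of the hash-distribution idea is that each generated state has a fixed owner determined by hashing the state, so duplicate detection needs no coordination beyond the message carrying the state. Below is a single-process simulation of that idea over an invented toy search space; the per-worker lists, heuristic, and round-robin loop are stand-ins for MPI ranks and asynchronous sends, not the authors' implementation.

```python
import heapq

N_WORKERS = 4
GOAL = 97

def successors(state):            # toy search space with unit edge costs
    return [state + 2, state * 2, state - 1]

def h(state):                     # toy heuristic, only to order the queues;
    return abs(GOAL - state)      # a real planner needs an admissible h

def owner(state):                 # the hash assignment that defines HDA*
    return hash(state) % N_WORKERS

open_lists = [[] for _ in range(N_WORKERS)]   # one open list per worker
g_best = {}                       # best g per state (shared only because
                                  # this is a one-process simulation)

def send(state, g):
    """Route a state to its owner's open list (models an async MPI send)."""
    if g < g_best.get(state, float("inf")):
        g_best[state] = g
        heapq.heappush(open_lists[owner(state)], (g + h(state), g, state))

send(0, 0)
done = False
while not done and any(open_lists):
    for w in range(N_WORKERS):    # round-robin stands in for parallelism
        if not open_lists[w]:
            continue
        f, g, state = heapq.heappop(open_lists[w])
        if g > g_best.get(state, float("inf")):
            continue              # stale duplicate from an earlier send
        if state == GOAL:
            print(f"worker {w} reached the goal with cost {g}")
            done = True
            break
        for succ in successors(state):
            send(succ, g + 1)
```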

    Parallel bug-finding in concurrent programs via reduced interleaving instances

    Concurrency poses a major challenge for program verification, but it can also offer an opportunity to scale when subproblems can be analysed in parallel. We exploit this opportunity here and use a parametrizable code-to-code translation to generate a set of simpler program instances, each capturing a reduced set of the original program’s interleavings. These instances can then be checked independently in parallel. Our approach does not depend on the tool chosen for the final analysis, is compatible with weak memory models, and amplifies the effectiveness of existing tools, making them find bugs faster and with fewer resources. We use Lazy-CSeq as an off-the-shelf final verifier to demonstrate that our approach is able, already with a small number of cores, to find bugs in the hardest known concurrency benchmarks in a matter of minutes, whereas other dynamic and static tools fail to do so in hours.
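    A minimal sketch of the splitting idea (not the paper's code-to-code translation): enumerate the interleavings of two toy threads, partition them into instances, and check the instances in parallel, so each worker only ever sees its reduced set of schedules. The thread bodies and the bug predicate are invented for illustration.

```python
from itertools import combinations
from multiprocessing import Pool

THREAD_A = ["a0", "a1", "a2"]        # hypothetical thread bodies: labels
THREAD_B = ["b0", "b1", "b2"]        # standing in for program points

def interleavings(a, b):
    """All interleavings of a and b, each as a tuple of labels."""
    n = len(a) + len(b)
    for apos in combinations(range(n), len(a)):
        trace, ai, bi = [], 0, 0
        for i in range(n):
            if i in apos:
                trace.append(a[ai]); ai += 1
            else:
                trace.append(b[bi]); bi += 1
        yield tuple(trace)

def buggy(trace):
    """Stand-in for running a verifier on one schedule: this invented bug
    only manifests when b1 runs between a0 and a1."""
    return trace.index("a0") < trace.index("b1") < trace.index("a1")

def check_instance(instance):
    return [t for t in instance if buggy(t)]

if __name__ == "__main__":
    all_traces = list(interleavings(THREAD_A, THREAD_B))
    n_instances = 4                             # one instance per core
    instances = [all_traces[i::n_instances] for i in range(n_instances)]
    with Pool(n_instances) as pool:             # check instances in parallel
        for bugs in pool.map(check_instance, instances):
            for t in bugs:
                print("violation under schedule:", " -> ".join(t))
```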

    Simulation based formal verification of cyber-physical systems

    Cyber-Physical Systems (CPSs) have become an intrinsic part of the 21st century world. Systems like Smart Grids, Transportation, and Healthcare help us run our lives and businesses smoothly, successfully and safely. Since malfunctions in these CPSs can have serious, expensive, sometimes fatal consequences, System-Level Formal Verification (SLFV) tools are vital to minimise the likelihood of errors occurring during the development process and beyond. Their applicability is supported by the increasingly widespread use of Model Based Design (MBD) tools. MBD enables the simulation of CPS models in order to check for their correct behaviour from the very initial design phase. The disadvantage is that SLFV for complex CPSs is an extremely time-consuming process, which typically requires several months of simulation.
    Current SLFV tools aim to accelerate the verification process by running multiple simulators simultaneously. To this end, they compute all the scenarios in advance so as to split and simulate them in parallel. Furthermore, they compute optimised simulation campaigns in order to simulate common prefixes of these scenarios only once, thus avoiding redundant simulation. Nevertheless, limitations remain that prevent a more widespread adoption of SLFV tools. Firstly, current tools cannot optimise simulation campaigns from existing datasets of collected scenarios. Secondly, there are currently no methods to predict the time required to complete the SLFV process, and this inability to predict the length of the process makes scheduling verification activities highly problematic.
    In this thesis, we show how to overcome these limitations with a simulation campaign optimiser and an execution time estimator. The optimiser speeds up the SLFV process by using a data-intensive algorithm to obtain optimised simulation campaigns from existing datasets, which may contain a large quantity of collected scenarios. The estimator accurately predicts the execution time of a given simulation campaign using an effective machine-independent method.
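    The prefix-sharing optimisation can be pictured as building a trie over the scenario dataset and emitting simulate/restore commands from a depth-first walk, so each shared prefix is simulated exactly once. The sketch below uses an invented dataset and is not the thesis's optimiser; a real tool would also minimise the number of simulator states it saves at branch points.

```python
# Invented dataset: each scenario is a sequence of simulation inputs.
scenarios = [
    ("init", "load", "spike", "check"),
    ("init", "load", "drop", "check"),
    ("init", "idle", "spike", "check"),
]

def build_trie(traces):
    """Merge scenarios into a prefix tree of nested dicts."""
    root = {}
    for trace in traces:
        node = root
        for step in trace:
            node = node.setdefault(step, {})
    return root

def campaign(node, prefix=()):
    """Emit (simulate step) / (restore-to prefix) commands, depth first."""
    first = True
    for step, child in node.items():
        if not first:
            yield ("restore", prefix)     # rewind simulator to saved state
        yield ("simulate", step)
        yield from campaign(child, prefix + (step,))
        first = False

ops = list(campaign(build_trie(scenarios)))
naive = sum(len(s) for s in scenarios)
shared = sum(1 for kind, _ in ops if kind == "simulate")
print(f"naive steps: {naive}, optimised steps: {shared}")
for op in ops:
    print(op)
```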

    Doctor of Philosophy in Computer Science

    Dissertation. Control-flow analysis of higher-order languages is a difficult problem, yet an important one. It aids in enabling optimizations, improved reliability, and improved security of programs written in these languages. This dissertation explores three techniques to improve the precision and speed of a small-step abstract interpreter: using a priority work list, environment unrolling, and strong function call. An abstract interpreter is no longer deterministic, so choices can be made in how the abstract state space is explored, and trade-offs exist; a priority queue is one option. There are also many ways to abstract the concrete interpreter. Environment unrolling takes a slightly different approach than is usual, holding off abstraction in order to gain precision, which can lead to a faster analysis. Strong function call is an approach to cleaning up some of the imprecision at function calls that is introduced when a program is interpreted abstractly.
    An alternative approach to building an abstract interpreter for static analysis is constraint solving, for which techniques have been developed over the last several decades. This dissertation maps these constraints to three different problems, allowing control-flow analysis of higher-order languages to be solved with tools that are already mature and well developed. The control-flow problem is mapped to pointer analysis of first-order languages, to SAT, and to linear-algebra operations. These mappings allow for fast and parallel implementations of control-flow analysis of higher-order languages.
    A recent development in the field of static analysis has been pushdown control-flow analysis, which is able to precisely match calls and returns, a weakness of the existing techniques. This dissertation also provides an encoding of pushdown control-flow analysis as linear-algebra operations. In the process, it demonstrates that, under certain conditions (monovariance and flow insensitivity), a pushdown control-flow analysis is in fact equivalent in precision to a direct-style constraint-based formulation.
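    As a hint of how constraint-based control-flow analysis maps onto linear algebra, the sketch below encodes a few invented subset constraints as a boolean flow matrix and solves them by iterating matrix products to a fixed point. A real monovariant 0-CFA additionally adds flow edges at application sites during the iteration; everything named here is hypothetical.

```python
import numpy as np

vars_ = ["f", "x", "y", "ret"]            # abstract program variables
lams = ["lam1", "lam2"]                   # abstract values (lambda terms)
vi = {v: i for i, v in enumerate(vars_)}

n = len(vars_)
F = np.zeros((n, n), dtype=bool)          # F[i, j] encodes C(vars_[i]) <= C(vars_[j])
F[vi["x"], vi["y"]] = True                # e.g. from a binding 'y = x'
F[vi["y"], vi["ret"]] = True              # e.g. from 'return y'

base = np.zeros((n, len(lams)), dtype=bool)
base[vi["f"], 0] = True                   # base fact: lam1 flows into f
base[vi["x"], 1] = True                   # base fact: lam2 flows into x

# Solve by propagating facts along F until a fixed point is reached:
sol = base.copy()
while True:
    step = (F.T.astype(int) @ sol.astype(int)) > 0   # one propagation step
    nxt = sol | step
    if (nxt == sol).all():
        break
    sol = nxt

for v in vars_:
    print(f"C({v}) =", [lam for j, lam in enumerate(lams) if sol[vi[v], j]])
```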