13 research outputs found
Random walk based heuristic algorithms for distributed memory model checking
Technical report. Model checking techniques suffer from the state space explosion problem: as the size of the system being verified grows, its total state space grows exponentially. Methods devised to tackle this problem include partial order reduction, symmetry reduction, hash compaction, and selective state caching. One approach that has gained interest in recent years is the parallelization of model checking algorithms. A random walk on the state space has some attractive properties, the most important being that it parallelizes naturally. Random walk is a low-overhead, partial search method; breadth-first search, on the other hand, is a high-overhead, full search technique. In this article, we propose various heuristic algorithms that combine random walks on the state space with bounded breadth-first search in a parallel context. These algorithms are in the process of being incorporated into a distributed memory model checker.
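The abstract does not give the algorithms themselves, but the core idea, interleaving a random walk with a bounded breadth-first search around each visited state, can be sketched as follows (all function names and parameters here are illustrative, not taken from the report):

```python
import random
from collections import deque

def random_walk_with_bounded_bfs(init, successors, is_error,
                                 walk_len=1000, bfs_depth=3, seed=None):
    """Hybrid search sketch: a random walk over the state space, with a
    bounded BFS launched around each state the walk visits."""
    rng = random.Random(seed)
    state = init
    for _ in range(walk_len):
        # Bounded BFS around the current state catches error states
        # that the walk itself might step past.
        frontier = deque([(state, 0)])
        seen = {state}
        while frontier:
            s, d = frontier.popleft()
            if is_error(s):
                return s
            if d < bfs_depth:
                for t in successors(s):
                    if t not in seen:
                        seen.add(t)
                        frontier.append((t, d + 1))
        succs = successors(state)
        if not succs:              # dead end: restart the walk
            state = init
        else:
            state = rng.choice(succs)
    return None
```

In a parallel setting, many such walkers with different seeds would run independently, which is exactly the property that makes random walk easy to distribute.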
A Software Checking Framework Using Distributed Model Checking and Checkpoint/Resume of Virtualized PrOcess Domains
The complexity and heterogeneity of deployed software applications often result in a wide range of dynamic states at runtime, and the corner cases of software failure during execution often slip through traditional software checking. If the checking infrastructure supports transparent checkpoint and resume of live application states, the checking system can preserve and replay the live states in which failures occur. We introduce a novel software checking framework that enables application states, including program behaviors and execution contexts, to be cloned and resumed on a computing cloud. It employs (1) EXPLODE's model checking engine for lightweight, general-purpose software checking, (2) the ZAP system for a fast, low-overhead, transparent checkpoint and resume mechanism through virtualized PODs (PrOcess Domains), each a collection of host-independent processes, and (3) a scalable, distributed checking infrastructure based on Distributed EXPLODE. The efficient, portable checkpoint/resume and replay mechanism employed in this framework enables scalable software checking, improving the reliability of software products. Our evaluation showed the framework's feasibility, efficiency, and applicability.
Actor Network Procedures as Psi-calculi for Security Ceremonies
The actor network procedures of Pavlovic and Meadows are a recent graphical formalism developed for describing security ceremonies and for reasoning about their security properties. The present work studies the relation of actor network procedures (ANP) to the recent psi-calculi framework. Psi-calculi form a parametric framework in which calculi like the spi- or applied-pi calculus arise as instances. Psi-calculi are operational and largely non-graphical, but have a strong foundation in the theory of nominal sets and process algebras. One purpose of the present work is to give a semantics to ANP through psi-calculi; another is to give a graphical language for a psi-calculus instance for security ceremonies. At the same time, this work provides more insight into the details of the ANP formalization and its graphical representation.
Comment: In Proceedings GraMSec 2014, arXiv:1404.163
Evaluation of a Simple, Scalable, Parallel Best-First Search Strategy
Large-scale parallel clusters composed of commodity processors are increasingly available, enabling the use of vast processing capabilities and distributed RAM to solve hard search problems. We investigate Hash-Distributed A* (HDA*), a simple approach to parallel best-first search that asynchronously distributes and schedules work among processors based on a hash function of the search state. We use this approach to parallelize the A* algorithm in an optimal sequential version of the Fast Downward planner, as well as a 24-puzzle solver. The scaling behavior of HDA* is evaluated experimentally on a shared-memory multicore machine with 8 cores, a cluster of commodity machines using up to 64 cores, and large-scale high-performance clusters using up to 2400 processors. We show that this approach scales well, allowing the effective utilization of large amounts of distributed memory to optimally solve problems which require terabytes of RAM. We also compare HDA* to Transposition-table Driven Scheduling (TDS), a hash-based parallelization of IDA*, and show that, in planning, HDA* significantly outperforms TDS. A simple hybrid which combines HDA* and TDS to exploit the strengths of both algorithms is proposed and evaluated.
Comment: in press, to appear in Artificial Intelligence
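As a rough illustration of the hash-distribution idea (not the authors' implementation: worker scheduling, asynchronous messaging, and distributed termination detection are all simplified away), each generated state can be pushed onto the open list of the worker selected by hashing the state:

```python
import heapq

def hda_star(start, goal, successors, h, num_workers=4):
    """Sketch of Hash-Distributed A*: a state is owned by worker
    hash(state) % num_workers, and only that worker may expand it.
    The workers are simulated round-robin in a single process."""
    owner = lambda s: hash(s) % num_workers
    open_lists = [[] for _ in range(num_workers)]
    g_best = {start: 0}
    best = float("inf")                      # cheapest goal cost found so far
    heapq.heappush(open_lists[owner(start)], (h(start), 0, start))
    while any(open_lists):
        for w in range(num_workers):         # round-robin stands in for parallelism
            if not open_lists[w]:
                continue
            f, g, s = heapq.heappop(open_lists[w])
            if f >= best or g > g_best.get(s, float("inf")):
                continue                     # pruned, or a stale duplicate
            if s == goal:
                best = g
                continue
            for t, cost in successors(s):
                gt = g + cost
                if gt < g_best.get(t, float("inf")):
                    g_best[t] = gt
                    # "send" t to its owning worker's open list
                    heapq.heappush(open_lists[owner(t)],
                                   (gt + h(t), gt, t))
    return best if best != float("inf") else None
```

Because each worker pops the best entry of its own queue rather than the global best, the sketch keeps an incumbent `best` and prunes against it, which is why it remains optimal despite the local queues.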
Parallel bug-finding in concurrent programs via reduced interleaving instances
Concurrency poses a major challenge for program verification, but it can also offer an opportunity to scale when subproblems can be analysed in parallel. We exploit this opportunity here and use a parametrizable code-to-code translation to generate a set of simpler program instances, each capturing a reduced set of the original program's interleavings. These instances can then be checked independently in parallel. Our approach does not depend on the tool chosen for the final analysis, is compatible with weak memory models, and amplifies the effectiveness of existing tools, making them find bugs faster and with fewer resources. We use Lazy-CSeq as an off-the-shelf final verifier to demonstrate that our approach is able, already with a small number of cores, to find bugs in the hardest known concurrency benchmarks in a matter of minutes, whereas other dynamic and static tools fail to do so in hours.
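The paper's code-to-code translation is tool-specific, but the underlying idea, carving the interleaving space into smaller instances that can be checked independently, can be illustrated on two explicit thread traces (the partition-by-index scheme below is a simple stand-in for the paper's parametrized translation):

```python
from itertools import combinations

def interleavings(a, b):
    """Enumerate all interleavings of two thread traces a and b."""
    n, m = len(a), len(b)
    for pos in combinations(range(n + m), n):
        posset = set(pos)
        ai, bi = iter(a), iter(b)
        yield tuple(next(ai) if i in posset else next(bi)
                    for i in range(n + m))

def split_instances(a, b, k):
    """Partition the interleaving space into k smaller instances
    that can be checked independently, here simply by index."""
    buckets = [[] for _ in range(k)]
    for i, trace in enumerate(interleavings(a, b)):
        buckets[i % k].append(trace)
    return buckets
```

Each bucket corresponds to one reduced instance; handing the buckets to separate verifier processes gives the embarrassingly parallel structure the abstract describes.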
Simulation based formal verification of cyber-physical systems
Cyber-Physical Systems (CPSs) have become an intrinsic part of the 21st century world. Systems like Smart Grids, Transportation, and Healthcare help us run our lives and businesses smoothly, successfully and safely. Since malfunctions in these CPSs can have serious, expensive, sometimes fatal consequences, System-Level Formal Verification (SLFV) tools are vital to minimise the likelihood of errors occurring during the development process and beyond. Their applicability is supported by the increasingly widespread use of Model Based Design (MBD) tools. MBD enables the simulation of CPS models in order to check for their correct behaviour from the very initial design phase. The disadvantage is that SLFV for complex CPSs is an extremely time-consuming process, which typically requires several months of simulation. Current SLFV tools are aimed at accelerating the verification process with multiple simulators working simultaneously. To this end, they compute all the scenarios in advance in such a way as to split and simulate them in parallel. Furthermore, they compute optimised simulation campaigns in order to simulate common prefixes of these scenarios only once, thus avoiding redundant simulation. Nevertheless, there are still limitations that prevent a more widespread adoption of SLFV tools. Firstly, current tools cannot optimise simulation campaigns from existing datasets with collected scenarios. Secondly, there are currently no methods to predict the time required to complete the SLFV process. This lack of ability to predict the length of the process makes scheduling verification activities highly problematic. In this thesis, we present how we are able to overcome these limitations with the use of a simulation campaign optimiser and an execution time estimator. 
The optimiser tool is aimed at speeding up the SLFV process by using a data-intensive algorithm to obtain optimised simulation campaigns from existing datasets, which may contain a large number of collected scenarios. The estimator tool accurately predicts the execution time of a given simulation campaign using an effective machine-independent method.
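The prefix-sharing optimisation described above can be illustrated with a small sketch (hypothetical names; the actual data-intensive optimiser is not shown): merging scenarios into a prefix tree means each shared prefix is simulated only once.

```python
def build_campaign(scenarios):
    """Merge scenarios into a prefix tree so that shared prefixes are
    simulated only once; return (steps with sharing, steps without)."""
    trie = {}
    for sc in scenarios:
        node = trie
        for step in sc:
            node = node.setdefault(step, {})

    def count(node):
        # Every trie node below the root is one simulation step.
        return sum(1 + count(child) for child in node.values())

    naive = sum(len(sc) for sc in scenarios)
    return count(trie), naive
```

For three scenarios sharing the prefix `('a', 'b')`, the optimised campaign simulates that prefix once instead of twice, which is the source of the savings the thesis describes.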
MKorat: a novel approach for memoizing the Korat search and some potential applications
Writing logical constraints that describe properties of desired inputs enables an effective approach for systematic software testing, which can find many bugs. The key problem in systematic constraint-based testing is efficiently exploring very large spaces of all possible inputs to enumerate desired valid inputs. The Korat technique provides an effective solution to this problem. Korat uses desired input properties written as imperative predicates and implements a backtracking search that prunes large parts of the input space and enumerates all non-isomorphic inputs within a given bound on input size. Despite the effectiveness of Korat's pruning, systematically creating and running large numbers of tests can be costly in practice. Previous work introduced parallel test generation and execution using Korat to make it more practical. We build on a specific algorithm, SEQ-ON, introduced in previous work for equi-distancing candidate inputs, which allows re-execution of Korat for input generation using parallel workers with evenly distributed workload. Our key insight is that the Korat search typically encounters many consecutive candidates that are all invalid inputs, and such invalid ranges of candidates can be memoized succinctly to optimize re-execution of Korat. We introduce a novel approach for memoizing Korat's checking of consecutive invalid candidates, embody the approach in three new techniques based on SEQ-ON, evaluate the techniques on a standard suite of data structure subjects to show the efficacy of our approach, and show some potential applications in two new application domains for Korat. We believe our work opens a promising new direction for optimizing the solving of imperative constraints and using them in novel application domains.
Electrical and Computer Engineering
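The memoization insight can be sketched in isolation. In the sketch below (illustrative only: real Korat enumerates candidate vectors with a backtracking search, not a flat index range), a first pass records maximal runs of consecutive invalid candidates as succinct (start, end) pairs, and re-execution jumps over them:

```python
def memoize_invalid_ranges(num_candidates, is_valid):
    """First pass: record maximal runs of consecutive invalid
    candidate indices as (start, end) pairs."""
    ranges, start = [], None
    for i in range(num_candidates):
        if not is_valid(i):
            if start is None:
                start = i
        elif start is not None:
            ranges.append((start, i - 1))
            start = None
    if start is not None:
        ranges.append((start, num_candidates - 1))
    return ranges

def reexecute(num_candidates, is_valid, invalid_ranges):
    """Re-execution: jump over the memoized invalid ranges instead of
    re-checking every candidate."""
    skip = {s: e for s, e in invalid_ranges}
    valid, i = [], 0
    while i < num_candidates:
        if i in skip:
            i = skip[i] + 1     # skip the whole invalid run at once
            continue
        if is_valid(i):
            valid.append(i)
        i += 1
    return valid
```

When invalid runs are long, the (start, end) pairs are far smaller than the candidate space itself, which is what makes the memo cheap to store and fast to replay.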
Doctor of Philosophy in Computer Science
Dissertation. Control-flow analysis of higher-order languages is a difficult but important problem: it aids in enabling optimizations, improved reliability, and improved security of programs written in these languages. This dissertation explores three techniques to improve the precision and speed of a small-step abstract interpreter: using a priority work list, environment unrolling, and strong function call. In an abstract interpreter, the interpreter is no longer deterministic; choices can be made in how the abstract state space is explored, and trade-offs exist. A priority queue is one option. There are also many ways to abstract the concrete interpreter. Environment unrolling takes a slightly different approach than usual, holding off abstraction in order to gain precision, which can lead to a faster analysis. Strong function call is an approach to cleaning up some of the imprecision introduced at function calls when abstractly interpreting a program. An alternative approach to building an abstract interpreter for static analysis is constraint solving, for which techniques have been developed over the last several decades. This dissertation maps these constraints to three different problems, allowing control-flow analysis of higher-order languages to be solved with tools that are already mature and well developed: pointer analysis of first-order languages, SAT, and linear-algebra operations. These mappings allow for fast and parallel implementations of control-flow analysis of higher-order languages. A recent development in the field of static analysis is pushdown control-flow analysis, which precisely matches calls and returns, a weakness of the existing techniques. This dissertation also provides an encoding of pushdown control-flow analysis as linear-algebra operations.
In the process, it demonstrates that under certain conditions (monovariance and flow insensitivity), a pushdown control-flow analysis is, in terms of precision, equivalent to a direct-style constraint-based formulation.
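To make the priority-work-list idea concrete, here is a generic work-list fixpoint in which the next program point to process is chosen by a priority function instead of FIFO order (a minimal sketch with hypothetical names; the dissertation's small-step analyses are far richer):

```python
import heapq

def analyze(entry, init, flow, transfer, priority):
    """Work-list fixpoint over finite sets of abstract facts.
    flow(p) gives the successors of point p; transfer(p, q, facts)
    gives the facts flowing along edge p -> q; priority orders the
    work list (smaller values are processed earlier)."""
    state = {entry: frozenset(init)}
    work = [(priority(entry), entry)]
    while work:
        _, p = heapq.heappop(work)
        for q in flow(p):
            old = state.get(q, frozenset())
            new = old | transfer(p, q, state[p])
            if new != old:          # facts grew: reschedule q
                state[q] = new
                heapq.heappush(work, (priority(q), q))
    return state
```

Because the transfer functions are monotone over finite sets, the fixpoint is reached regardless of processing order; the priority function only changes how quickly, which is exactly the trade-off the dissertation explores.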