
    Variations on Multi-Core Nested Depth-First Search

    Recently, two new parallel algorithms for on-the-fly model checking of LTL properties were presented at the same conference: Automated Technology for Verification and Analysis, 2011. Both approaches extend Swarmed NDFS, which runs several sequential NDFS instances in parallel. While parallel random search already speeds up detection of bugs, the workers must share some global information in order to speed up full verification of correct models. The two algorithms differ considerably in the global information shared between workers, and in the way they synchronize. Here, we provide a thorough experimental comparison between the two algorithms, by measuring the runtime of their implementations on a multi-core machine. Both algorithms were implemented in the same framework of the model checker LTSmin, using similar optimizations, and have been subjected to the full BEEM model database. Because both algorithms have complementary advantages, we constructed an algorithm that combines both ideas. This combination clearly yields an improved speedup. We also compare the results with the alternative parallel algorithm for accepting cycle detection, OWCTY-MAP. Finally, we study a simple statistical model for input models that do contain accepting cycles. The goal is to distinguish the speedup due to parallel random search from the speedup that can be attributed to clever work sharing schemes. Comment: In Proceedings PDMC 2011, arXiv:1111.006
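    For reference, the sequential procedure that Swarmed NDFS replicates across workers can be sketched as follows; this is a minimal illustration assuming an explicit successor relation and a set of accepting states, not the LTSmin implementation or either of the two compared sharing schemes.
```python
# Minimal sketch of sequential nested depth-first search (NDFS), the building
# block that Swarmed NDFS runs in several parallel instances. The graph layout
# and names are illustrative assumptions.

class AcceptingCycleFound(Exception):
    """Raised when the nested (red) search reaches a state on the blue stack."""

def ndfs(graph, initial, accepting):
    color = {}      # absent = white, "cyan" = on the blue DFS stack, "blue" = finished
    red = set()

    def dfs_red(s):
        # Nested search, started in post-order from an accepting seed:
        # reaching a cyan state closes a cycle through that seed.
        for t in graph.get(s, ()):
            if color.get(t) == "cyan":
                raise AcceptingCycleFound(t)
            if t not in red:
                red.add(t)
                dfs_red(t)

    def dfs_blue(s):
        color[s] = "cyan"
        for t in graph.get(s, ()):
            if t not in color:
                dfs_blue(t)
        if s in accepting:          # launch the nested search when backtracking
            red.add(s)
            dfs_red(s)
        color[s] = "blue"

    try:
        dfs_blue(initial)
        return None                 # no accepting cycle reachable from initial
    except AcceptingCycleFound as hit:
        return hit.args[0]          # a witness state lying on an accepting cycle
```
    Run on the product of the model and the negated LTL property, a None result means no counterexample was found.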

    Directed Explicit Model Checking with HSF-SPIN

    We present the explicit-state model checker HSF-SPIN, which is based on the model checker SPIN and its Promela modeling language. HSF-SPIN incorporates directed search algorithms for checking safety and a large class of LTL-specified liveness properties. We start off from the A* algorithm and define heuristics to accelerate the search in the direction of a specified failure situation. Next we propose an improved nested depth-first search algorithm that exploits the structure of Promela Never-Claims. As a result of both improvements, counterexamples are shorter and the explored part of the state space is smaller than with classical approaches, making it possible to analyze larger state spaces. We evaluate the impact of the new heuristics and algorithms on a set of protocol models, some of which are real-world industrial protocols.
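    The directed-search idea can be illustrated with a generic A* sketch; the successor function, error predicate, and heuristic below are placeholders, since HSF-SPIN derives its heuristics from the specified failure situation rather than from a fixed formula.
```python
# Hedged sketch of directed (A*) state-space search toward a property violation.

import heapq
import itertools

def astar_to_error(initial, successors, is_error, h):
    """Return a counterexample path (list of states) to an error state, or None."""
    counter = itertools.count()     # tie-breaker so states are never compared directly
    open_heap = [(h(initial), next(counter), 0, initial, [initial])]
    best_g = {initial: 0}
    while open_heap:
        f, _, g, state, path = heapq.heappop(open_heap)
        if is_error(state):
            return path             # error reached: short counterexample
        if g > best_g.get(state, float("inf")):
            continue                # stale queue entry
        for succ in successors(state):
            ng = g + 1
            if ng < best_g.get(succ, float("inf")):
                best_g[succ] = ng
                heapq.heappush(open_heap,
                               (ng + h(succ), next(counter), ng, succ, path + [succ]))
    return None
```
    With an admissible heuristic (one that never overestimates the remaining distance to a failure state), the first error state dequeued lies at the end of a shortest path, which is what keeps counterexamples short.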

    Performance studies of PV: an On-the-fly model-checker for LTL-X featuring selective caching and partial order reduction

    We present an enumerative model checker, PV, that uses a new partial order reduction algorithm called Twophase. This algorithm does not use the in-stack check to implement the proviso, making the combination of Twophase with on-the-fly LTL-X model checking based on nested depth-first search, as well as with selective state caching, very straightforward. We present a thorough evaluation of PV in terms of number of states, memory, search depth, and runtime. Our very encouraging results, often orders of magnitude better, are objectively explained in this paper. We also explain the different selective state caching methods supported by PV, as well as its user interface geared towards verifying cache coherence protocols for conformance against formal memory models. We offer the source code of PV as well as our examples through our webpage.
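    The selective state caching idea can be sketched independently of the Twophase reduction; the cache bound and LRU eviction below are illustrative assumptions, not the caching policies PV actually supports.
```python
# Hedged sketch of selective state caching during depth-first exploration:
# only states on the current DFS stack must stay resident, while other visited
# states live in a bounded, evictable cache, trading memory for re-exploration.

from collections import OrderedDict

def dfs_with_selective_caching(initial, successors, cache_limit=100_000):
    on_stack = set()            # states on the current DFS stack: never evicted
    cache = OrderedDict()       # fully explored states: evictable (LRU here)
    expanded = 0

    def visit(state):
        nonlocal expanded
        expanded += 1
        on_stack.add(state)
        for succ in successors(state):
            if succ in on_stack:
                continue                    # edge closing a cycle back to the stack
            if succ in cache:
                cache.move_to_end(succ)     # recently useful: keep it a bit longer
                continue
            visit(succ)
        on_stack.remove(state)
        cache[state] = True
        if len(cache) > cache_limit:
            cache.popitem(last=False)       # evicted states may be re-explored later

    visit(initial)
    return expanded                         # expansions, counting any re-exploration
```
    Because only the stack has to stay resident, memory consumption is bounded at the cost of possibly re-exploring evicted states.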

    Automated verification of Nested DFS

    In this paper we demonstrate the automated verification of the Nested Depth-First Search (NDFS) algorithm for detecting accepting cycles. The starting point is a recursive formulation of the NDFS algorithm. We use Dafny to annotate the algorithm with invariants and a global specification. The global specification requires that NDFS indeed solves the accepting cycle problem. The invariants are proved automatically by the SMT solver Z3 underlying Dafny. The global specification, however, needs some inductive reasoning on paths in a graph. To prove these properties, some auxiliary lemmas had to be provided. The full specification is contained in this paper. It fits on 4 pages, is verified by Dafny in about 2 minutes, and was developed in a couple of weeks.

    Weighted ancestors in suffix trees

    The classical, ubiquitous predecessor problem is to construct a data structure for a set of integers that supports fast predecessor queries. Its generalization to weighted trees, a.k.a. the weighted ancestor problem, has been extensively explored and successfully reduced to the predecessor problem. It is known that any solution for both problems with an input set from a polynomially bounded universe that preprocesses a weighted tree in O(n polylog(n)) space requires Ω(log log n) query time. Perhaps the most important and frequent application of the weighted ancestors problem is for suffix trees. It has been a long-standing open question whether the weighted ancestors problem has better bounds for suffix trees. We answer this question positively: we show that a suffix tree built for a text w[1..n] can be preprocessed using O(n) extra space, so that queries can be answered in O(1) time. Thus we improve the running times of several applications. Our improvement is based on a number of data structure tools and a periodicity-based insight into the combinatorial structure of a suffix tree. Comment: 27 pages, LNCS format. A condensed version will appear in ESA 201
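    For orientation, the query itself (not the paper's O(1) construction) can be stated as: given a node v of a tree whose node weights increase from the root (string depths, in a suffix tree) and a target weight w, return the shallowest ancestor of v with weight at least w. A simple binary-lifting sketch, with illustrative names, answers this in O(log n) time per query:
```python
# Hedged illustration of the weighted ancestor query via binary lifting.

import math

def build_lifting(parent):
    """parent[v] is v's parent; the root is its own parent."""
    n = len(parent)
    levels = math.ceil(math.log2(n)) + 1 if n > 1 else 1
    up = [list(parent)]                 # up[k][v] = 2^k-th ancestor of v
    for k in range(1, levels):
        up.append([up[k - 1][up[k - 1][v]] for v in range(n)])
    return up

def weighted_ancestor(up, weight, v, w):
    """Shallowest ancestor u of v with weight[u] >= w (assumes weight[v] >= w)."""
    for k in range(len(up) - 1, -1, -1):
        anc = up[k][v]
        if weight[anc] >= w:
            v = anc                     # jump toward the root while the predicate holds
    return v

# Toy tree 0 -> 1 -> 2 -> 3 with weights growing along the path.
parent = [0, 0, 1, 2]
weight = [0, 3, 5, 9]                   # e.g. string depths in a suffix tree
up = build_lifting(parent)
print(weighted_ancestor(up, weight, 3, 4))   # -> 2, shallowest ancestor with weight >= 4
```
    The paper's contribution is to beat this generic logarithmic bound for suffix trees specifically, reaching O(1) query time with O(n) extra space.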

    Nested quantum search and NP-complete problems

    A quantum algorithm is known that solves an unstructured search problem in a number of iterations of order √d, where d is the dimension of the search space, whereas any classical algorithm necessarily scales as O(d). It is shown here that an improved quantum search algorithm can be devised that exploits the structure of a tree search problem by nesting this standard search algorithm. The number of iterations required to find the solution of an average instance of a constraint satisfaction problem scales as √(d^α), with a constant α < 1 depending on the nesting depth and the problem considered. When applying a single nesting level to a problem with constraints of size 2, such as the graph coloring problem, this constant α is estimated to be around 0.62 for average instances of maximum difficulty. This corresponds to a square-root speedup over a classical nested search algorithm, of which our presented algorithm is the quantum counterpart. Comment: 18 pages RevTeX, 3 Postscript figure
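    As a rough numerical illustration of the scalings quoted in this abstract (an order-of-magnitude comparison only, not a simulation of the algorithm; the dimension d below is an arbitrary example value):
```python
# Compare the iteration counts implied by the stated scalings.

import math

d = 10**6        # illustrative search-space dimension (assumption, not from the paper)
alpha = 0.62     # reported constant for one nesting level, constraints of size 2

costs = {
    "classical unstructured, O(d)":      d,
    "quantum unstructured, sqrt(d)":     math.sqrt(d),
    "classical nested, d^alpha":         d ** alpha,
    "quantum nested, sqrt(d^alpha)":     math.sqrt(d ** alpha),
}
for name, iterations in costs.items():
    print(f"{name:<34} ~ {iterations:,.0f} iterations")
```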

    Automated Refactoring of Nested-IF Formulae in Spreadsheets

    Spreadsheets are the most popular end-user programming software, where formulae act like programs and can also have smells. One widely recognized smell of spreadsheet formulae is the nested-IF expression, which has low readability and a high cognitive cost for users, and is error-prone during reuse or maintenance. However, end users usually lack the essential programming language knowledge and skills to tackle, or even recognize, the problem. Previous research has made only initial attempts in this direction, and no effective, automated approach is currently available. This paper proposes the first AST-based automated approach to systematically refactoring nested-IF formulae. The general idea is two-fold. First, we detect and remove logic redundancy on the AST. Second, we identify higher-level semantics that have been fragmented and scattered, and reassemble the syntax using concise built-in functions. A comprehensive evaluation has been conducted against a real-world spreadsheet corpus, collected at a leading IT company for research purposes. The results, covering over 68,000 spreadsheets with 27 million nested-IF formulae, reveal that our approach is able to relieve the smell of over 99% of nested-IF formulae. Over 50% of the refactorings reduced the nesting level of the nested-IFs by more than half. In addition, a survey involving 49 participants indicates that for most cases the participants prefer the refactored formulae, and agree that such an automated refactoring approach is necessary and helpful.
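    As a toy illustration of what an AST-level rewrite can look like, the rule below collapses a chain of IFs nested in the else-branch into a single IFS call (available in modern Excel); the AST shape and the IFS target are assumptions for illustration, not the paper's rule set.
```python
# Hedged sketch: flatten IF(c1, v1, IF(c2, v2, ..., default)) into an IFS call.

from dataclasses import dataclass
from typing import Union

@dataclass
class Call:
    name: str
    args: list                  # sub-expressions: Call nodes or literal/reference strings

Expr = Union[Call, str]

def flatten_nested_if(expr: Expr) -> Expr:
    """Rewrite a chain of else-nested IFs as IFS(c1, v1, c2, v2, ..., TRUE, default)."""
    if not (isinstance(expr, Call) and expr.name == "IF" and len(expr.args) == 3):
        return expr
    pairs, node = [], expr
    while isinstance(node, Call) and node.name == "IF" and len(node.args) == 3:
        cond, then, node = node.args          # descend into the else-branch
        pairs += [cond, then]
    if len(pairs) <= 2:
        return expr                           # a single IF: nothing to flatten
    return Call("IFS", pairs + ["TRUE", node])   # TRUE guards the final default

# Example: IF(A1>90,"A",IF(A1>80,"B",IF(A1>70,"C","D")))
formula = Call("IF", ["A1>90", '"A"',
          Call("IF", ["A1>80", '"B"',
          Call("IF", ["A1>70", '"C"', '"D"'])])])
print(flatten_nested_if(formula))
# Call(name='IFS', args=['A1>90', '"A"', 'A1>80', '"B"', 'A1>70', '"C"', 'TRUE', '"D"'])
```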