
    Combining k-Induction with Continuously-Refined Invariants

    Bounded model checking (BMC) is a well-known and successful technique for finding bugs in software. k-induction extends BMC-based approaches from falsification to verification, and automatically generated auxiliary invariants can be used to strengthen the induction hypothesis. We improve this approach and further increase its effectiveness and efficiency in the following way: we start with lightweight invariants and refine them continuously during the analysis. We present and evaluate an implementation of our approach in the open-source verification framework CPAchecker. Our experiments show that combining k-induction with continuously-refined invariants significantly increases effectiveness and efficiency, and outperforms all existing implementations of k-induction-based software verification in terms of successful verification results.
    Comment: 12 pages, 5 figures, 2 tables, 2 algorithms
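
    As a rough illustration of the loop this abstract describes (not CPAchecker's actual algorithm or API), the toy below runs k-induction over an explicit finite transition system; the continuously-refined invariant is modelled as a shrinking over-approximation of the reachable states supplied by a hypothetical callback.

```python
def k_induction(states, init, succ, safe, refine, max_k=10):
    inv = set(states)                          # weakest invariant: all states
    for k in range(1, max_k + 1):
        # Base case (BMC): no unsafe state within k steps of an initial state.
        frontier = set(init)
        for _ in range(k + 1):
            if any(not safe(s) for s in frontier):
                return "BUG FOUND"
            frontier = {t for s in frontier for t in succ(s)}
        # Inductive step: after any k consecutive safe states inside the
        # invariant, the next state must be safe as well.
        def holds(path):
            if len(path) == k:
                return all(safe(t) for t in succ(path[-1]))
            return all(holds(path + (t,))
                       for t in succ(path[-1]) if safe(t) and t in inv)
        if all(holds((s,)) for s in inv if safe(s)):
            return "SAFE"
        inv = refine(inv)                      # ask for a stronger invariant

    return "UNKNOWN"

# Example: a counter cycling 0 -> 1 -> 2 -> 3 -> 0, plus an unreachable
# state 4 that can loop or fall into the unsafe state 5. Plain k-induction
# fails for every k because of the safe self-loop at 4; the refined
# invariant (standing in for a lightweight analysis) excludes 4.
SUCC = {0: {1}, 1: {2}, 2: {3}, 3: {0}, 4: {4, 5}, 5: {5}}
print(k_induction(states={0, 1, 2, 3, 4, 5}, init={0},
                  succ=lambda s: SUCC[s], safe=lambda s: s != 5,
                  refine=lambda inv: inv - {4}))   # SAFE (at k = 2)
```

    Real tools work on SMT formulas rather than explicit state sets, but the control flow, base case, strengthened step, refine on failure, is the same.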

    Dynamic Race Prediction in Linear Time

    Writing reliable concurrent software remains a huge challenge for today's programmers. Programmers rarely reason about their code by explicitly considering different possible interleavings of its execution. We consider the problem of soundly detecting data races from individual executions. The classical approach to this problem uses Lamport's happens-before (HB) relation, which until now has remained the only approach that runs in linear time. Previous efforts to improve on HB, such as causally-precedes (CP) and maximal causal models, fall short because they cannot be implemented efficiently and hence must compromise their race-detection ability by restricting themselves to bounded-size fragments of the execution. We present a new relation, weak-causally-precedes (WCP), that is provably better than CP in terms of being able to detect more races, while still remaining sound. Moreover, it admits a linear-time algorithm that works on the entire execution without having to fragment it.
    Comment: 22 pages, 8 figures, 1 algorithm, 1 table
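
    Since the abstract contrasts WCP with Lamport's happens-before relation, the sketch below shows the classical vector-clock HB baseline (write-write races only); the WCP algorithm itself maintains additional per-lock state and is not reproduced here. The event API is illustrative.

```python
from collections import defaultdict

class HBDetector:
    """Toy write-write data-race detector over the HB relation."""
    def __init__(self):
        self.clock = defaultdict(lambda: defaultdict(int))  # thread -> vector clock
        self.released = {}                                  # lock -> clock at last release
        self.last_write = {}                                # var -> (thread, clock snapshot)

    @staticmethod
    def _leq(a, b):
        # a happens-before b iff a's clock is componentwise <= b's.
        return all(v <= b.get(t, 0) for t, v in a.items())

    def write(self, t, x):
        self.clock[t][t] += 1
        prev = self.last_write.get(x)
        if prev and not self._leq(prev[1], self.clock[t]):
            print(f"race on {x}: writes by {prev[0]} and {t} are unordered")
        self.last_write[x] = (t, dict(self.clock[t]))

    def acquire(self, t, lock):
        for u, v in self.released.get(lock, {}).items():    # join releaser's clock
            self.clock[t][u] = max(self.clock[t][u], v)

    def release(self, t, lock):
        self.clock[t][t] += 1
        self.released[lock] = dict(self.clock[t])

# Unsynchronised writes to x: reported as a race.
d = HBDetector()
d.write("T1", "x")
d.write("T2", "x")     # race on x

# Lock-ordered writes to y: the release/acquire edge orders them, no report.
d2 = HBDetector()
d2.write("T1", "y")
d2.release("T1", "m")
d2.acquire("T2", "m")
d2.write("T2", "y")
```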

    Element Distinctness, Frequency Moments, and Sliding Windows

    We derive new time-space tradeoff lower bounds and algorithms for exactly computing statistics of input data, including frequency moments, element distinctness, and order statistics, that are simple to calculate for sorted data. We develop a randomized algorithm for the element distinctness problem whose time T and space S satisfy T ∈ O(n^{3/2}/S^{1/2}), smaller than previous lower bounds for comparison-based algorithms, showing that element distinctness is strictly easier than sorting for randomized branching programs. This algorithm is based on a new time- and space-efficient algorithm for finding all collisions of a function f from a finite set to itself that are reachable by iterating f from a given set of starting points. We further show that our element distinctness algorithm can be extended, at only a polylogarithmic-factor cost, to solve the element distinctness problem over sliding windows, where the task is to take an input of length 2n-1 and produce an output for each window of length n, giving n outputs in total. In contrast, we show a time-space tradeoff lower bound of T ∈ Ω(n^2/S) for randomized branching programs to compute the number of distinct elements over sliding windows. The same lower bound holds for computing the low-order bit of F_0 and computing any frequency moment F_k, k ≠ 1. This shows that those frequency moments and the decision problem F_0 mod 2 are strictly harder than element distinctness. We complement this lower bound with a comparison-based deterministic RAM algorithm achieving T ∈ O(n^2/S) for exactly computing F_k over sliding windows, nearly matching both our lower bound for the sliding-window version and the comparison-based lower bounds for the single-window version. We further exhibit a quantum algorithm for F_0 over sliding windows with T ∈ O(n^{3/2}/S^{1/2}). Finally, we consider the computation of order statistics over sliding windows.
    Comment: arXiv admin note: substantial text overlap with arXiv:1212.437
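
    The collision-finding primitive the abstract mentions can be illustrated with the classic tortoise-and-hare walk; this is a simplification, as the paper's actual algorithm runs many such walks from random starting points and trades time against space.

```python
# Find a collision of f reachable by iterating f from x0, storing only
# O(1) values at a time (Floyd's cycle-finding).

def reachable_collision(f, x0):
    # Phase 1: advance a slow and a fast iterate until they meet in the cycle.
    slow, fast = f(x0), f(f(x0))
    while slow != fast:
        slow, fast = f(slow), f(f(fast))
    # Phase 2: advance from x0 and from the meeting point in lockstep; they
    # meet exactly at the cycle entrance, exposing two distinct preimages of
    # the same value (assuming x0 is not itself on the cycle).
    slow = x0
    while f(slow) != f(fast):
        slow, fast = f(slow), f(fast)
    return slow, fast          # f(slow) == f(fast), slow != fast

# Example: under f(v) = v^2 mod 10, iterating from 2 gives 2 -> 4 -> 6 -> 6,
# so 4 and 6 collide on the value 6.
f = lambda v: (v * v) % 10
print(reachable_collision(f, 2))   # (4, 6)
```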

    Programming Not Only by Example

    In recent years, there has been tremendous progress in automated synthesis techniques that can automatically generate code based on some intent expressed by the programmer. A major challenge for the adoption of synthesis remains having the programmer communicate that intent. When the expressed intent is coarse-grained (for example, a restriction on the expected type of an expression), the synthesizer often produces a long list of results for the programmer to choose from, shifting the heavy lifting to the user. An alternative approach, successfully used in end-user synthesis, is programming by example (PBE), where the user leverages examples to interactively and iteratively refine the intent. However, using only examples is not expressive enough for programmers, who can observe the generated program and refine the intent by directly relating to parts of the generated program. We present a novel approach to interacting with a synthesizer through a granular interaction model. Our approach employs a rich interaction model where (i) the synthesizer decorates a candidate program with debug information that assists in understanding the program and identifying good or bad parts, and (ii) the user can provide feedback not only on the expected output of a program but also on the underlying program itself. That is, when the user identifies a program as (partially) correct or incorrect, they can also explicitly indicate the good or bad parts, allowing the synthesizer to accept or discard parts of the program instead of discarding the program as a whole. We show the value of our approach in a controlled user study: participants show a strong preference for granular feedback over examples, and are able to provide granular feedback much faster.
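
    A toy enumerative synthesizer makes the contrast concrete: plain PBE filters candidates only by input-output examples, while granular feedback lets the user also pin good subexpressions and ban bad ones. The grammar and feedback format below are invented for illustration and are not the paper's system.

```python
from itertools import product

# Toy grammar of candidate expressions over one input x.
GRAMMAR = ["x", "x + 1", "x * 2", "abs(x)", "-x"]

def candidates():
    # Enumerate two-level compositions outer(inner(x)).
    for outer, inner in product(GRAMMAR, repeat=2):
        yield outer.replace("x", f"({inner})")

def synthesize(examples, keep=(), avoid=()):
    for expr in candidates():
        # Granular feedback: prune by program structure before testing examples.
        if any(bad in expr for bad in avoid):
            continue
        if not all(good in expr for good in keep):
            continue
        # Plain PBE part: keep only candidates consistent with the examples.
        if all(eval(expr, {"abs": abs}, {"x": x}) == y for x, y in examples):
            return expr
    return None

# The user saw a wrong candidate, marked its "-" part as bad and its "* 2"
# part as good; this prunes the search harder than the examples alone.
print(synthesize([(1, 4), (3, 8)], keep=("* 2",), avoid=("-",)))   # (x + 1) * 2
```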

    Asymptotic Proportion of Hard Instances of the Halting Problem

    Although the halting problem is undecidable, imperfect testers that fail on some instances are possible. Such instances are called hard for the tester. One variant of imperfect testers replies "I don't know" on hard instances, another variant fails to halt, and yet another replies incorrectly "yes" or "no". The halting problem likewise has three variants: does a given program halt on the empty input, does a given program halt when given itself as its input, and does a given program halt on a given input. The failure rate of a tester for some size is the proportion of hard instances among all instances of that size. This publication investigates the behaviour of the failure rate as the size grows without limit. Earlier results are surveyed and new results are proven. Some of them use C++ on Linux as the computational model. It turns out that the behaviour is sensitive to the details of the programming language or computational model, but in many cases it is possible to prove that the proportion of hard instances does not vanish.
    Comment: 18 pages. The differences between this version and arXiv:1307.7066v1 are significant. They have been listed in the last paragraph of Section 1. Excluding layout, this arXiv version is essentially identical to the Acta Cybernetica version.
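
    The first tester variant above ("I don't know" on hard instances) has a canonical construction: simulate the program for a bounded number of steps. A minimal sketch, with the one-step interpreter and halting convention assumed:

```python
def bounded_tester(step, initial_state, budget=10_000):
    """Answer "yes" iff the simulated program halts within the step budget.
    Instances that run longer than the budget (or forever) are exactly the
    hard instances of this tester."""
    state = initial_state
    for _ in range(budget):
        state = step(state)            # assumed: one-step interpreter
        if state is None:              # convention: None means "halted"
            return "yes"
    return "I don't know"              # a hard instance for this tester

# Example over a toy machine: "count down from n, halt at 0".
countdown = lambda n: None if n <= 0 else n - 1
print(bounded_tester(countdown, 5))        # yes
print(bounded_tester(countdown, 10**9))    # I don't know (budget exceeded)
```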