    On the efficiency of convex polyhedra

    The domain of convex polyhedra plays a special role among the numerical domains considered for program analysis and verification. As far as precision is concerned, it would be the most natural choice in many contexts but, due to its worst-case exponential complexity, it is sometimes considered an unaffordable option. This has led to a systematic quest for simpler domains that achieve reasonable precision using fewer computational resources. There are nonetheless cases where the use of the domain of convex polyhedra turns out to be feasible, also thanks to recent progress in its implementation. After reviewing a few known approaches to decreasing the resources needed when computing on this domain, we introduce a couple of novel techniques that further improve its efficiency without incurring precision losses.

    Abstract Interpretation with Unfoldings

    We present and evaluate a technique for computing path-sensitive interference conditions during abstract interpretation of concurrent programs. In lieu of fixed point computation, we use prime event structures to compactly represent causal dependence and interference between sequences of transformers. Our main contribution is an unfolding algorithm that uses a new notion of independence to avoid redundant transformer application, thread-local fixed points to reduce the size of the unfolding, and a novel cutoff criterion based on subsumption to guarantee termination of the analysis. Our experiments show that the abstract unfolding produces an order of magnitude fewer false alarms than a mature abstract interpreter, while being several orders of magnitude faster than solver-based tools that have the same precision. Comment: Extended version of the paper (with the same title and authors) to appear at CAV 201

    Non-polynomial Worst-Case Analysis of Recursive Programs

    We study the problem of developing efficient approaches for proving worst-case bounds of non-deterministic recursive programs. Ranking functions are sound and complete for proving termination and worst-case bounds of non-recursive programs. First, we apply ranking functions to recursion, resulting in measure functions. We show that measure functions provide a sound and complete approach to prove worst-case bounds of non-deterministic recursive programs. Our second contribution is the synthesis of measure functions in non-polynomial forms. We show that non-polynomial measure functions involving logarithms and exponentiation can be synthesized through abstraction of logarithmic or exponentiation terms, Farkas' Lemma, and Handelman's Theorem using linear programming. While previous methods obtain worst-case polynomial bounds, our approach can synthesize bounds of the form $\mathcal{O}(n \log n)$ as well as $\mathcal{O}(n^r)$ where $r$ is not an integer. We present experimental results demonstrating that our approach can efficiently obtain worst-case bounds of classical recursive algorithms such as (i) Merge-Sort and the divide-and-conquer algorithm for the Closest-Pair problem, where we obtain an $\mathcal{O}(n \log n)$ worst-case bound, and (ii) Karatsuba's algorithm for polynomial multiplication and Strassen's algorithm for matrix multiplication, where we obtain an $\mathcal{O}(n^r)$ bound such that $r$ is not an integer and close to the best-known bounds for the respective algorithms. Comment: 54 pages, full version to CAV 201
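    A minimal worked illustration of the idea (not an example taken from the paper): assuming the standard Merge-Sort cost recurrence for an input of even size $n$, a measure function of the shape $f(n) = c\,n\log_2 n$ witnesses the $\mathcal{O}(n \log n)$ bound, since plugging it into the recurrence gives

        \[
          T(n) \le 2\,T(n/2) + c\,n,
          \qquad
          2\,f(n/2) + c\,n \;=\; c\,n(\log_2 n - 1) + c\,n \;=\; c\,n\log_2 n \;=\; f(n),
        \]

    so the bound on the recursive calls plus the local cost never exceeds $f(n)$. Finding such coefficients automatically (including the lower-order terms needed for odd sizes) is what the linear-programming step based on Farkas' Lemma and Handelman's Theorem is used for.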

    CARET analysis of multithreaded programs

    Dynamic Pushdown Networks (DPNs) are a natural model for multithreaded programs with (recursive) procedure calls and thread creation. On the other hand, CARET is a temporal logic that allows writing linear temporal formulas while taking into account the matching between calls and returns. We consider in this paper the model-checking problem of DPNs against CARET formulas. We show that this problem can be effectively solved by a reduction to the emptiness problem of Büchi Dynamic Pushdown Systems. We then show that CARET model checking is also decidable for DPNs communicating with locks. Our results can, in particular, be used for the detection of concurrent malware. Comment: Pre-proceedings paper presented at the 27th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2017), Namur, Belgium, 10-12 October 2017 (arXiv:1708.07854)

    Interface Compliance of Inline Assembly: Automatically Check, Patch and Refine

    Inline assembly is still a common practice in low-level C programming, typically for efficiency reasons or for accessing specific hardware resources. Such embedded assembly code in the GNU syntax (supported by major compilers such as GCC, Clang and ICC) has an interface specifying how the assembly interacts with the C environment. For simplicity, the compiler treats GNU inline assembly chunks as black boxes and relies only on their interface to correctly glue them into the compiled C code. The adequacy between an assembly chunk and its interface (named compliance) is therefore of primary importance, as compliance issues can lead to subtle and hard-to-find bugs. We propose RUSTInA, the first automated technique for formally checking inline assembly compliance, with the extra ability to propose (proven) patches and (optimization) refinements in certain cases. RUSTInA is based on an original formalization of the inline assembly compliance problem together with novel dedicated algorithms. Our prototype has been evaluated on 202 Debian packages with inline assembly (2656 chunks), finding 2183 issues in 85 packages -- 986 significant issues in 54 packages (including major projects such as ffmpeg or ALSA) -- and proposing patches for 92% of them. Currently, 38 patches have already been accepted (solving 156 significant issues), with positive feedback from the development teams.
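    A hypothetical illustration of the kind of interface mismatch at stake (the function names below are made up, not taken from the paper's benchmarks): in GNU extended asm the interface lists outputs, inputs and clobbers, and a chunk that silently overwrites a register the interface does not mention is non-compliant. A minimal x86 sketch, assuming GCC syntax:

        /* Compliant: output, input and the condition-code clobber are all
           declared, so the compiler knows exactly what the chunk touches. */
        static inline unsigned add_checked(unsigned a, unsigned b) {
            unsigned r;
            asm ("addl %2, %0"
                 : "=r"(r)            /* output  */
                 : "0"(a), "r"(b)     /* inputs  */
                 : "cc");             /* addl modifies the flags */
            return r;
        }

        /* Non-compliant: the template also overwrites %ecx, but the interface
           stays silent about it, so the compiler may wrongly assume %ecx is
           preserved across the chunk. A compliance checker would flag this and
           a patch would add "ecx" and "cc" to the clobber list. */
        static inline void spin(void) {
            asm volatile ("movl $1000, %%ecx\n\t"
                          "1: decl %%ecx\n\t"
                          "jnz 1b"
                          : /* no outputs */
                          : /* no inputs */ /* missing clobbers: "ecx", "cc" */);
        }

        int main(void) {
            spin();
            return add_checked(40, 2) == 42 ? 0 : 1;
        }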

    SC-Square: Overview to 2021.


    A task-based approach to parallel parametric linear programming solving, and application to polyhedral computations

    Parametric linear programming is a central operation for polyhedral computations, as well as in certain control applications. Here we propose a task-based scheme for parallelizing it, with quasi-linear speedup on large problems. This type of parallel application is challenging, because several tasks might be computing the same region. In this paper, we present the algorithm itself together with a parallel redundancy elimination algorithm, and conduct a thorough performance analysis. Comment: arXiv admin note: text overlap with arXiv:1904.0607
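    A generic sketch of such task-based exploration with redundancy elimination (assuming OpenMP tasks; the region type, the successor relation and the LP solve below are placeholders, not the paper's actual data structures): each task first claims its region in a shared set, so concurrently spawned tasks that reach the same region do not recompute it.

        /* Generic sketch, assuming OpenMP; region_t and the LP solve are placeholders. */
        #include <stdbool.h>

        #define MAX_REGIONS 1024

        typedef struct { int id; } region_t;      /* placeholder for a parametric-LP region */

        static region_t seen[MAX_REGIONS];        /* regions already claimed by some task   */
        static int      nseen = 0;

        /* Returns false if the region was already claimed by another task. */
        static bool try_claim(region_t r) {
            bool fresh = true;
            #pragma omp critical(seen_set)
            {
                for (int i = 0; i < nseen; i++)
                    if (seen[i].id == r.id) { fresh = false; break; }
                if (fresh && nseen < MAX_REGIONS) seen[nseen++] = r;
            }
            return fresh;
        }

        static void explore(region_t r) {
            if (!try_claim(r))                    /* redundancy elimination: skip duplicates */
                return;
            /* ... solve the LP instance for r and derive neighbouring regions here ... */
            for (int k = 1; k <= 2; k++) {
                region_t next = { r.id + k };     /* neighbours may be reached by several tasks */
                if (next.id < 16) {
                    #pragma omp task firstprivate(next)
                    explore(next);
                }
            }
        }

        int main(void) {
            #pragma omp parallel
            #pragma omp single
            explore((region_t){ 0 });
            return 0;
        }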