Guided Unfoldings for Finding Loops in Standard Term Rewriting
In this paper, we reconsider the unfolding-based technique that we have
introduced previously for detecting loops in standard term rewriting. We
improve it by guiding the unfolding process, using distinguished positions in
the rewrite rules. This results in a depth-first computation of the unfoldings,
whereas the original technique was breadth-first. We have implemented this new
approach in our tool NTI and compared it to the previous one on a set of
rewrite systems. The results we get are promising (better times, more
successful proofs).

Comment: Pre-proceedings paper presented at the 28th International Symposium
on Logic-Based Program Synthesis and Transformation (LOPSTR 2018), Frankfurt
am Main, Germany, 4-6 September 2018 (arXiv:1808.03326)
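The loop detection the abstract describes can be illustrated on a toy scale. The sketch below (an assumed representation, not NTI's actual algorithm, and restricted to a single ground rule for brevity) unfolds the rewrite relation step by step and reports a loop as soon as an earlier term reappears as a subterm, i.e., t rewrites in one or more steps to C[t]:

```python
def subterms(t):
    # enumerate t and all of its subterms; terms are tuples (symbol, args...)
    yield t
    for arg in t[1:]:
        yield from subterms(arg)

def rewrite_once(t, lhs, rhs):
    # one leftmost-outermost rewrite step with the ground rule lhs -> rhs,
    # or None if the rule applies nowhere in t
    if t == lhs:
        return rhs
    for i, arg in enumerate(t[1:], start=1):
        r = rewrite_once(arg, lhs, rhs)
        if r is not None:
            return t[:i] + (r,) + t[i + 1:]
    return None

def find_ground_loop(start, lhs, rhs, max_steps=20):
    # unfold the rewriting; a loop is found when some earlier term s
    # reappears as a subterm of the current term t (s ->+ C[s])
    seen, t = [start], start
    for _ in range(max_steps):
        t = rewrite_once(t, lhs, rhs)
        if t is None:
            return None
        for s in seen:
            if any(u == s for u in subterms(t)):
                return s, t
        seen.append(t)
    return None

# the rule f(a) -> g(f(a)) loops: f(a) ->+ g(f(a)), which contains f(a)
loop = find_ground_loop(('f', ('a',)), ('f', ('a',)), ('g', ('f', ('a',))))
```

Real term rewriting loop detection must additionally handle variables (matching modulo substitution) and many rules; the guided unfoldings of the paper prune which positions are unfolded, which this sketch does not model.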
Confluence of CHR Revisited: Invariants and Modulo Equivalence
Abstract simulation of one transition system by another is introduced as a
means to simulate a potentially infinite class of similar transition sequences
within a single transition sequence. This is useful for proving confluence
under invariants of a given system, as it may reduce the number of proof cases
to consider from infinity to a finite number. The classical confluence results
for Constraint Handling Rules (CHR) can be explained in this way, using CHR as
a simulation of itself. Using an abstract simulation based on a ground
representation, we extend these results to include confluence under invariants
and modulo equivalence, which had not been handled in a satisfactory way
before.

Comment: Pre-proceedings paper presented at the 28th International Symposium
on Logic-Based Program Synthesis and Transformation (LOPSTR 2018), Frankfurt
am Main, Germany, 4-6 September 2018 (arXiv:1808.03326)
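The proof burden the abstract alludes to can be seen on a finite example. The following sketch (a generic Newman-style check on a finite, terminating transition relation, not CHR's actual abstract-simulation construction) exploits that a terminating system is confluent exactly when every state has a unique normal form:

```python
def normal_forms(t, step):
    # all normal forms reachable from t; `step` maps a state to its successors
    succs = step(t)
    if not succs:
        return {t}
    nfs = set()
    for u in succs:
        nfs |= normal_forms(u, step)
    return nfs

def confluent_on(states, step):
    # a terminating system is confluent iff every state has a unique normal form
    return all(len(normal_forms(s, step)) == 1 for s in states)

# peak s -> a and s -> b: joinable (both reach c) vs. diverging (stuck apart)
joinable = {'s': ['a', 'b'], 'a': ['c'], 'b': ['c'], 'c': []}
diverging = {'s': ['a', 'b'], 'a': [], 'b': []}
```

An invariant, in this view, restricts `states` to the reachable ones, which is how the number of peaks to check can drop from infinite to finite.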
Multivariant Assertion-based Guidance in Abstract Interpretation
Approximations during program analysis are a necessary evil: they ensure
essential properties, such as soundness and termination of the analysis, but
they can also prevent the analysis from producing useful results. Automatic
techniques have been studied to prevent precision loss, typically at the
expense of larger resource consumption. In both cases (i.e., when the analysis produces inaccurate
results and when resource consumption is too high), it is necessary to have
some means for users to provide information to guide analysis and thus improve
precision and/or performance. We present techniques for supporting within an
abstract interpretation framework a rich set of assertions that can deal with
multivariance/context-sensitivity, and can handle different run-time semantics
for those assertions that cannot be discharged at compile time. We show how the
proposed approach can be applied to both improving precision and accelerating
analysis. We also provide some formal results on the effects of such assertions
on the analysis results.

Comment: Pre-proceedings paper presented at the 28th International Symposium
on Logic-Based Program Synthesis and Transformation (LOPSTR 2018), Frankfurt
am Main, Germany, 4-6 September 2018 (arXiv:1808.03326)
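A minimal sketch of assertion-based guidance, using an interval domain as a stand-in for the abstract domains of the paper (the function and domain here are hypothetical illustrations, not the paper's framework): an assertion already implied by the inferred abstract value is discharged at compile time; otherwise the inferred value is refined by meeting it with the assertion, and a run-time check is kept for the part that could not be proved.

```python
Bottom = None  # unsatisfiable abstract value

def meet(a, b):
    # greatest lower bound of two intervals (lo, hi)
    if a is Bottom or b is Bottom:
        return Bottom
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else Bottom

def apply_assertion(inferred, asserted):
    # returns (refined abstract value, whether a run-time check remains)
    if asserted[0] <= inferred[0] and inferred[1] <= asserted[1]:
        return inferred, False             # discharged at compile time
    return meet(inferred, asserted), True  # refined; run-time semantics apply
```

For example, asserting `(-5, 50)` against an inferred `(0, 10)` is discharged statically, while asserting `(0, 5)` refines the result and leaves a run-time check. The refinement is also how such assertions improve precision multivariantly: each calling context can be met with its own assertion.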
Towards computing distances among abstract interpretations
Abstract interpretation is a technique which safely approximates the execution of programs. These approximations can then be used by static analysis tools to reason about properties that hold for all possible executions, in order to optimize, verify or debug programs, among other applications. Different abstractions, called abstract domains, and different analysis algorithms, which compute the fixpoints involved in different ways, are used in this process, resulting in different approximations, all of which are correct but may have different precision.
This use of abstract interpretations is purely qualitative: it relies on an order ⊑ in the abstract domains and the fact that one abstract interpretation over-approximates or under-approximates the actual (or some given) semantics of programs. A quantitative use of abstract interpretations is not covered by the existing theory, that is, there is no way to measure how close two abstract interpretations are to each other, even when one over-approximates the other. However, the structure of abstract domains and (logic) programs suggests that one could define distances and metrics among those abstract domains and abstract interpretations, and those distances could arguably find many applications, such as comparing the precision of different analyses.
In this work we develop theory and tools to work with abstract interpretations quantitatively, in the context of the Ciao and CiaoPP environment. First, we develop a theory for distances in abstract domains and propose distances for CiaoPP domains. Then, we extend those distances to distances between whole analyses of programs. Finally, we successfully apply those distances in experiments to measure the precision of different analyses.
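The two-step construction the abstract describes (a distance on abstract values, then a lifting to whole analyses) can be sketched on an interval domain. Both functions below are hypothetical illustrations, not the distances actually proposed for the CiaoPP domains:

```python
def interval_distance(a, b):
    # a simple metric on interval abstract values (lo, hi):
    # sum of the gaps between corresponding endpoints
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def analysis_distance(a1, a2):
    # lift the base metric to whole analysis results, represented as
    # dicts mapping program points to abstract values, by averaging
    # over the program points both analyses cover
    pts = a1.keys() & a2.keys()
    return sum(interval_distance(a1[p], a2[p]) for p in pts) / len(pts)
```

With such a lifting, "analysis A is more precise than analysis B" becomes quantifiable: both can be compared against a reference (e.g., a most precise computable) analysis, and the smaller distance wins.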