An Overview of Ciao and its uses of Datalog for Program Analysis and Optimization
- Objectives:
• A next-generation, high-level, multiparadigm programming language: Ciao.
• Program development environments which perform, as part of compilation:
  Verification / debugging (i.e., detecting bugs and offering guarantees of safety, reliability, and efficiency).
  Optimization (optimized compilation, parallelization, ...).
  Using, throughout, techniques that are at the same time rigorous and practical.
• Application in a real system, with users (a reality check!).
• Support also for mainstream languages (e.g., Java / Java bytecode).
- Several uses of Datalog and related techniques, one of which is sketched below.
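
As a concrete taste of this last point, here is a minimal OCaml sketch of the bottom-up least-fixpoint evaluation at the heart of Datalog-based analysis, computing transitive reachability; the edge/reach relations are purely illustrative and not part of Ciao's actual machinery:

```ocaml
(* Naive (not semi-naive) bottom-up evaluation of the Datalog program
     reach(X,Y) :- edge(X,Y).
     reach(X,Z) :- reach(X,Y), edge(Y,Z).
   Illustrative only; Ciao's analyzers work over far richer domains. *)

module PairSet = Set.Make (struct
  type t = int * int
  let compare = compare
end)

let edges = PairSet.of_list [ (1, 2); (2, 3); (3, 4) ]

(* Apply the recursive rule until no new facts are derived. *)
let reach =
  let step acc =
    PairSet.fold
      (fun (x, y) acc' ->
        PairSet.fold
          (fun (y', z) acc'' ->
            if y = y' then PairSet.add (x, z) acc'' else acc'')
          edges acc')
      acc acc
  in
  let rec fix acc =
    let acc' = step acc in
    if PairSet.equal acc acc' then acc else fix acc'
  in
  fix edges

let () =
  PairSet.iter (fun (x, y) -> Printf.printf "reach(%d,%d)\n" x y) reach
```

Program analyses over Datalog follow the same pattern: relations encode program facts, and analysis results are the least fixpoint of the rules.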
From Outermost Reduction Semantics to Abstract Machine
Reduction semantics is a popular format for small-step operational semantics of deterministic programming languages with computational effects. Each reduction semantics gives rise to a reduction-based normalization function where the reduction sequence is enumerated. Refocusing is a practical way to transform a reduction-based normalization function into a reduction-free one where the reduction sequence is not enumerated. This reduction-free normalization function takes the form of an abstract machine that navigates from one redex site to the next without systematically detouring via the root of the term to enumerate the reduction sequence, in contrast to the reduction-based normalization function. We have discovered that refocusing does not apply as readily for reduction semantics that use an outermost reduction strategy and have overlapping rules where a contractum can be a proper subpart of a redex. In this article, we consider such an outermost reduction semantics with backward-overlapping rules, and we investigate how to apply refocusing to still obtain a reduction-free normalization function in the form of an abstract machine.
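
The following OCaml sketch illustrates the reduction-based vs. reduction-free distinction for a toy language of integer additions under an innermost strategy; all names are illustrative, and the paper's outermost setting with backward-overlapping rules is exactly where this naive refocusing picture needs repair:

```ocaml
type term = Lit of int | Add of term * term
type ctx = Hole | AddL of ctx * term | AddR of int * ctx

type found = Value of int | Redex of ctx * int * int

let rec plug c t = match c with
  | Hole -> t
  | AddL (c, r) -> plug c (Add (t, r))
  | AddR (n, c) -> plug c (Add (Lit n, t))

(* Find the next redex, searching term t in context c. *)
let rec decompose_term c t = match t with
  | Lit n -> decompose_ctx c n
  | Add (s, t') -> decompose_term (AddL (c, t')) s

and decompose_ctx c n = match c with
  | Hole -> Value n
  | AddL (c', r) -> decompose_term (AddR (n, c')) r
  | AddR (m, c') -> Redex (c', m, n)

(* Reduction-based: contract, recompose via the root, restart decomposition;
   the whole reduction sequence is enumerated. *)
let rec normalize_based t =
  match decompose_term Hole t with
  | Value n -> Lit n
  | Redex (c, m, n) -> normalize_based (plug c (Lit (m + n)))

(* Refocused, reduction-free: resume the search at the contractum in its
   context, never detouring via the root -- an abstract machine in disguise. *)
let rec normalize_free c t =
  match decompose_term c t with
  | Value n -> Lit n
  | Redex (c', m, n) -> normalize_free c' (Lit (m + n))

let () =
  assert (normalize_based (Add (Add (Lit 1, Lit 2), Lit 3)) = Lit 6);
  assert (normalize_free Hole (Add (Add (Lit 1, Lit 2), Lit 3)) = Lit 6)
```

The refocused version simply keeps the current context instead of plugging and re-decomposing, which is what fails when a contractum can be a proper subpart of a redex.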
Incremental and Modular Context-sensitive Analysis
Context-sensitive global analysis of large code bases can be expensive, which
can make its use impractical during software development. However, there are
many situations in which modifications are small and isolated within a few
components, and it is desirable to reuse as much as possible previous analysis
results. This has been achieved to date through incremental global analysis
fixpoint algorithms that achieve cost reductions at fine levels of granularity,
such as changes in program lines. However, these fine-grained techniques are
not directly applicable to modular programs, nor are they designed to take
advantage of modular structures. This paper describes, implements, and
evaluates an algorithm that performs efficient context-sensitive analysis
incrementally on modular partitions of programs. The experimental results show
that the proposed modular algorithm shows significant improvements, in both
time and memory consumption, when compared to existing non-modular, fine-grain
incremental analysis techniques. Furthermore, thanks to the proposed
inter-modular propagation of analysis information, our algorithm also
outperforms traditional modular analysis even when analyzing from scratch.
Comment: 56 pages, 27 figures. To be published in Theory and Practice of
Logic Programming. v3 corresponds to the extended version of the ICLP2018
Technical Communication. v4 is the revised version submitted to Theory and
Practice of Logic Programming. v5 (this one) is the final author version to
be published in TPLP.
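
A schematic sketch of the modular incremental idea, with hypothetical types standing in for real abstract domains: one summary is cached per module, and a module is re-analyzed only when it is edited or when the summary of a module it imports changes. This is a simplification for illustration, not the paper's algorithm:

```ocaml
(* Hypothetical types: a summary abstracts a module's exported behaviour. *)
type summary = string
type modul = {
  name : string;
  imports : string list;
  analyze : (string -> summary) -> summary;  (* local analysis; may consult
                                                the summaries of imports *)
}

let cache : (string, summary) Hashtbl.t = Hashtbl.create 16

(* Worklist over dirty modules; propagate to importers only when a summary
   actually changed.  Assuming a finite abstract domain and monotone local
   analysis, this terminates. *)
let rec reanalyze (modules : modul list) (dirty : string list) =
  match dirty with
  | [] -> ()
  | name :: rest ->
    let m = List.find (fun m -> m.name = name) modules in
    let lookup n = try Hashtbl.find cache n with Not_found -> "top" in
    let s = m.analyze lookup in
    let changed = (try Hashtbl.find cache name <> s with Not_found -> true) in
    Hashtbl.replace cache name s;
    let importers =
      if changed then
        List.filter_map
          (fun m' -> if List.mem name m'.imports then Some m'.name else None)
          modules
      else []
    in
    reanalyze modules (rest @ importers)

let () =
  let lib = { name = "lib"; imports = []; analyze = (fun _ -> "s_lib") } in
  let app = { name = "app"; imports = [ "lib" ];
              analyze = (fun look -> "uses:" ^ look "lib") } in
  reanalyze [ lib; app ] [ "lib" ]  (* editing lib re-analyzes app as well *)
```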
On Computational Small Steps and Big Steps: Refocusing for Outermost Reduction
We study the relationship between small-step semantics, big-step semantics and abstract machines, for programming languages that employ an outermost reduction strategy, i.e., languages where reductions near the root of the abstract syntax tree are performed before reductions near the leaves. In particular, we investigate how Biernacka and Danvy's syntactic correspondence and Reynolds's functional correspondence can be applied to inter-derive semantic specifications for such languages. The main contribution of this dissertation is three-fold: First, we identify that backward overlapping reduction rules in the small-step semantics cause the refocusing step of the syntactic correspondence to be inapplicable. Second, we propose two solutions to overcome this inapplicability: backtracking and rule generalization. Third, we show how these solutions affect the other transformations of the two correspondences. Other contributions include the application of the syntactic and functional correspondences to Boolean normalization. In particular, we show how to systematically derive a spectrum of normalization functions for negational and conjunctive normalization.
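
As a taste of the Boolean normalization application, here is a direct recursive negational normalizer in OCaml (a minimal sketch of the end product, not the systematically derived artifact from the dissertation):

```ocaml
(* Negational normalization: push negations inward to the literals. *)
type form =
  | Var of string
  | Not of form
  | And of form * form
  | Or of form * form

let rec nnf = function
  | Var _ as f -> f
  | Not (Var _) as f -> f
  | Not (Not f) -> nnf f                                 (* double negation *)
  | Not (And (f, g)) -> Or (nnf (Not f), nnf (Not g))    (* De Morgan *)
  | Not (Or (f, g)) -> And (nnf (Not f), nnf (Not g))    (* De Morgan *)
  | And (f, g) -> And (nnf f, nnf g)
  | Or (f, g) -> Or (nnf f, nnf g)

let () =
  assert (nnf (Not (And (Var "p", Not (Var "q"))))
          = Or (Not (Var "p"), Var "q"))
```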
Verification of Imperative Programs by Constraint Logic Program Transformation
We present a method for verifying partial correctness properties of
imperative programs that manipulate integers and arrays by using techniques
based on the transformation of constraint logic programs (CLP). We use CLP as a
metalanguage for representing imperative programs, their executions, and their
properties. First, we encode the correctness of an imperative program, say
prog, as the negation of a predicate 'incorrect' defined by a CLP program T. By
construction, 'incorrect' holds in the least model of T if and only if the
execution of prog from an initial configuration eventually halts in an error
configuration. Then, we apply to program T a sequence of transformations that
preserve its least model semantics. These transformations are based on
well-known transformation rules, such as unfolding and folding, guided by
suitable transformation strategies, such as specialization and generalization.
The objective of the transformations is to derive a new CLP program TransfT
where the predicate 'incorrect' is defined either by (i) the fact 'incorrect.'
(and in this case prog is not correct), or by (ii) the empty set of clauses
(and in this case prog is correct). In the case where we derive a CLP program
such that neither (i) nor (ii) holds, we iterate the transformation. Since the
problem is undecidable, this process may not terminate. We show through
examples that our method can be applied in a rather systematic way, and is
amenable to automation by transferring to the field of program verification
many techniques developed in the field of program transformation.
Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
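
The following toy rendition in OCaml (rather than CLP) shows the shape of the encoding for a hypothetical two-statement program: 'incorrect' holds exactly when an error configuration is reachable from the initial one. The unfold/fold transformation machinery is not modeled here:

```ocaml
(* Toy rendition of the encoding; 'prog' is a hypothetical program over x. *)
type conf = { pc : int; x : int }            (* program configuration *)

let initial = { pc = 0; x = 0 }
let error c = c.pc = 2 && c.x < 0            (* hypothetical error condition *)

(* Transition relation of prog:  0: x := x + 1;  1: skip;  2: halt. *)
let tr c = match c.pc with
  | 0 -> [ { pc = 1; x = c.x + 1 } ]
  | 1 -> [ { pc = 2; x = c.x } ]
  | _ -> []

(* CLP reading:  incorrect :- initConf(C), reach(C, C1), errorConf(C1). *)
let incorrect =
  let rec reach seen = function
    | [] -> false
    | c :: rest ->
      if error c then true
      else if List.mem c seen then reach seen rest
      else reach (c :: seen) (tr c @ rest)
  in
  reach [] [ initial ]

let () =
  Printf.printf "incorrect = %b (prog is %s)\n"
    incorrect (if incorrect then "not correct" else "correct")
```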
Interpolant tree automata and their application in Horn clause verification
This paper investigates the combination of abstract interpretation over the
domain of convex polyhedra with interpolant tree automata, in an
abstraction-refinement scheme for Horn clause verification. These techniques
have been previously applied separately, but are combined in a new way in this
paper. The role of an interpolant tree automaton is to provide a generalisation
of a spurious counterexample during refinement, capturing a possibly infinite
set of spurious counterexample traces. In our approach these traces are then
eliminated using a transformation of the Horn clauses. We compare this approach
with two other methods; one of them uses interpolant tree automata in an
algorithm for trace abstraction and refinement, while the other uses abstract
interpretation over the domain of convex polyhedra without the generalisation
step. Evaluation of the results of experiments on a number of Horn clause
verification problems indicates that the combination of interpolant tree
automata with abstract interpretation gives some increase in the power of the
verification tool, while sometimes incurring a performance overhead.
Comment: In Proceedings VPT 2016, arXiv:1607.0183
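
Schematically, the combined loop can be pictured as below; every type and operation is a toy stand-in for the paper's components (polyhedral abstract interpretation, interpolant tree automata, clause transformation), not a working verifier:

```ocaml
(* Schematic stand-in for the abstraction-refinement loop. *)
type trace = string list                   (* stand-in for a derivation tree *)
type clauses = { excluded : trace list }   (* stand-in for a set of clauses *)
type outcome = Safe | Counterexample of trace

(* Placeholder for abstract interpretation over convex polyhedra. *)
let analyse (p : clauses) : outcome =
  if p.excluded <> [] then Safe else Counterexample [ "c1"; "c2" ]

(* Placeholder feasibility check: this toy trace is spurious. *)
let feasible (_ : trace) : bool = false

(* Placeholder for the interpolant tree automaton: here only {t}; the real
   construction captures a possibly infinite set of spurious traces. *)
let generalise (t : trace) : trace list = [ t ]

(* Placeholder for the Horn-clause transformation removing the traces. *)
let eliminate (p : clauses) (ts : trace list) : clauses =
  { excluded = ts @ p.excluded }

let rec verify (p : clauses) =
  match analyse p with
  | Safe -> Ok ()
  | Counterexample t when feasible t -> Error t  (* genuine counterexample *)
  | Counterexample t -> verify (eliminate p (generalise t))

let () =
  match verify { excluded = [] } with
  | Ok () -> print_endline "safe"
  | Error _ -> print_endline "real counterexample"
```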
Towards Zero-Overhead Disambiguation of Deep Priority Conflicts
**Context** Context-free grammars are widely used for language prototyping
and implementation. They allow formalizing the syntax of domain-specific or
general-purpose programming languages concisely and declaratively. However, the
natural and concise way of writing a context-free grammar is often ambiguous.
Therefore, grammar formalisms support extensions in the form of *declarative
disambiguation rules* to specify operator precedence and associativity, solving
ambiguities that are caused by the subset of the grammar that corresponds to
expressions.
**Inquiry** Implementing support for declarative disambiguation within a
parser typically comes with one or more of the following limitations in
practice: a lack of parsing performance, or a lack of modularity (i.e.,
disallowing the composition of grammar fragments of potentially different
languages). The latter subject is generally addressed by scannerless
generalized parsers. We aim to equip scannerless generalized parsers with novel
disambiguation methods that are inherently performant, without compromising the
concerns of modularity and language composition.
**Approach** In this paper, we present a novel low-overhead implementation
technique for disambiguating deep associativity and priority conflicts in
scannerless generalized parsers with lightweight data-dependency.
**Knowledge** Ambiguities with respect to operator precedence and
associativity arise from combining the various operators of a language. While
*shallow conflicts* can be resolved efficiently by one-level tree patterns,
*deep conflicts* require more elaborate techniques, because they can occur
arbitrarily nested in a tree. Current state-of-the-art approaches to solving
deep priority conflicts come with a severe performance overhead.
**Grounding** We evaluated our new approach against state-of-the-art
declarative disambiguation mechanisms. By parsing a corpus of popular
open-source repositories written in Java and OCaml, we found that our approach
yields speedups of up to 1.73x over a grammar rewriting technique when parsing
programs with deep priority conflicts, with a modest overhead of 1-2% when
parsing programs without deep conflicts.
**Importance** A recent empirical study shows that deep priority conflicts
are indeed widespread in real-world programs. The study shows that in a corpus
of popular OCaml projects on GitHub, up to 17% of the source files contain
deep priority conflicts. However, there is no solution in the literature that
addresses efficient disambiguation of deep priority conflicts, with support for
modular and composable syntax definitions.
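
For contrast with the techniques discussed above, here is what one-level tree-pattern filtering of *shallow* priority conflicts looks like, in OCaml, for an illustrative two-operator grammar; deep conflicts are precisely the cases such one-level patterns cannot catch:

```ocaml
(* Grammar: E -> E + E | E * E | int, with * binding tighter than + and both
   operators left-associative.  Parenthesized subexpressions would be their
   own tree nodes; they are omitted in this sketch. *)
type expr = Int of int | Add of expr * expr | Mul of expr * expr

let prio = function Add _ -> 1 | Mul _ -> 2 | Int _ -> 3

(* Shallow, one-level filter: a direct child must bind at least as tightly
   as its parent, and the right child strictly tighter (left associativity).
   Deep conflicts occur arbitrarily far from the conflicting ancestor, so no
   such one-level pattern can express them. *)
let rec well_formed e = match e with
  | Int _ -> true
  | Add (l, r) | Mul (l, r) ->
    prio l >= prio e && prio r > prio e && well_formed l && well_formed r

let () =
  (* 1 + 2 * 3 as 1 + (2 * 3): accepted. *)
  assert (well_formed (Add (Int 1, Mul (Int 2, Int 3))));
  (* 1 + 2 * 3 as (1 + 2) * 3: rejected. *)
  assert (not (well_formed (Mul (Add (Int 1, Int 2), Int 3))))
```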
Transient Typechecks are (Almost) Free
Transient gradual typing imposes run-time type tests that typically cause a linear slowdown in
programs’ performance. This performance impact discourages the use of type annotations because
adding types to a program makes the program slower. A virtual machine can employ standard
just-in-time optimizations to reduce the overhead of transient checks to near zero. These
optimizations can give gradually-typed languages performance comparable to state-of-the-art
dynamic languages, so programmers can add types to their code without affecting their programs’
performance.
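
A sketch of what a transient check does operationally, with illustrative OCaml types (the paper's setting is a JIT-compiled VM, not OCaml): only the outermost shape of a value is tested at a typed boundary, and deeper checks are deferred to use sites:

```ocaml
(* Illustrative dynamic values; names and types are assumptions. *)
type dyn = Int of int | Str of string | List of dyn list

exception Transient_check_failed of string

(* A function annotated as taking an int tests only the outermost tag. *)
let typed_incr (v : dyn) : dyn =
  match v with
  | Int n -> Int (n + 1)
  | _ -> raise (Transient_check_failed "int expected")

(* A list annotation does not inspect the elements: checking their types is
   deferred to the program points where the elements are actually used. *)
let typed_head (v : dyn) : dyn =
  match v with
  | List (x :: _) -> x
  | List [] -> raise (Transient_check_failed "non-empty list expected")
  | _ -> raise (Transient_check_failed "list expected")

let () =
  (* The Str element passes the shallow list check and fails only at use. *)
  match typed_incr (typed_head (List [ Str "oops" ])) with
  | exception Transient_check_failed msg -> print_endline msg
  | _ -> ()
```

It is these cheap, shallow tag tests that a just-in-time compiler can typically eliminate almost entirely.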
Abstract Program Slicing: an Abstract Interpretation-based approach to Program Slicing
In the present paper we formally define the notion of abstract program
slicing, a general form of program slicing where properties of data are
considered instead of their exact value. This approach is applied to a language
with numeric and reference values, and relies on the notion of abstract
dependencies between program components (statements).
The different forms of (backward) abstract slicing are added to an existing
formal framework where traditional, non-abstract forms of slicing could be
compared. The extended framework allows us to appreciate that abstract slicing
is a generalization of traditional slicing, since traditional slicing (dealing
with syntactic dependencies) is generalized by (semantic) non-abstract forms of
slicing, which are actually equivalent to an abstract form where the identity
abstraction is performed on data.
Sound algorithms for computing abstract dependencies and a systematic
characterization of program slices are provided, which rely on the notion of
agreement between program states.
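
A small OCaml sketch of the agreement notion over the sign domain (the domain and the statement are purely illustrative):

```ocaml
(* Two states agree on a set of variables under alpha iff alpha maps those
   variables to the same abstract values. *)
type sign = Neg | Zero | Pos
let alpha n = if n < 0 then Neg else if n = 0 then Zero else Pos

type state = (string * int) list
let lookup s x = List.assoc x s

let agree vars (s1 : state) (s2 : state) =
  List.for_all (fun x -> alpha (lookup s1 x) = alpha (lookup s2 x)) vars

(* The statement  y := x * x.  Abstractly, y's sign is insensitive to x's
   polarity (Neg and Pos inputs both yield Pos) -- a semantic independence
   that purely syntactic dependency-based slicing cannot observe. *)
let stmt (s : state) = ("y", lookup s "x" * lookup s "x") :: s

let () =
  let s1 = [ ("x", 3) ] and s2 = [ ("x", -4) ] in
  assert (not (agree [ "x" ] s1 s2));        (* disagree on x's sign ...   *)
  assert (agree [ "y" ] (stmt s1) (stmt s2)) (* ... yet agree on y's sign *)
```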