    12th International Workshop on Termination (WST 2012) : WST 2012, February 19–23, 2012, Obergurgl, Austria / ed. by Georg Moser

    This volume contains the proceedings of the 12th International Workshop on Termination (WST 2012), to be held February 19–23, 2012 in Obergurgl, Austria. The goal of the Workshop on Termination is to provide a venue for the presentation and discussion of all topics in and around termination. In this way, the workshop aims to bridge the gaps between the different communities interested and active in research in and around termination. The 12th International Workshop on Termination in Obergurgl continues the successful workshops held in St. Andrews (1993), La Bresse (1995), Ede (1997), Dagstuhl (1999), Utrecht (2001), Valencia (2003), Aachen (2004), Seattle (2006), Paris (2007), Leipzig (2009), and Edinburgh (2010). The workshop welcomed contributions on all aspects of termination and complexity analysis. Contributions from the imperative, constraint, functional, and logic programming communities, as well as papers investigating applications of complexity or termination analysis (for example in program transformation or theorem proving), were particularly welcome. We received 18 submissions, all of which were accepted; each paper was assigned two reviewers. In addition to these 18 contributed talks, WST 2012 hosts three invited talks by Alexander Krauss, Martin Hofmann, and Fausto Spoto.

    Polytool: polynomial interpretations as a basis for termination analysis of logic programs

    Our goal is to study the feasibility of porting termination analysis techniques developed for one programming paradigm to another paradigm. In this paper, we show how to adapt termination analysis techniques based on polynomial interpretations - very well known in the context of term rewrite systems (TRSs) - to obtain new (non-transformational) termination analysis techniques for definite logic programs (LPs). This leads to an approach that can be seen as a direct generalization of the traditional techniques in termination analysis of LPs, where linear norms and level mappings are used. Our extension generalizes these to arbitrary polynomials. We extend a number of standard concepts and results on termination analysis to the context of polynomial interpretations. We also propose a constraint-based approach for automatically generating polynomial interpretations that satisfy the termination conditions. Based on this approach, we implemented a new tool, called Polytool, for automatic termination analysis of LPs.
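
    To make the constraint-based generation concrete, the following Python sketch (an illustration of the general idea only, not Polytool itself) brute-forces the coefficients of a linear level mapping for the usual append/3 program under the list-length norm; the sampled size triples and the coefficient range are assumptions made for the example, whereas a real analyser derives and solves such coefficient constraints symbolically.

        # Toy constraint-based search for a polynomial (here: linear) level
        # mapping proving termination of the recursive clause of
        #     append([], L, L).
        #     append([H|T], L, [H|R]) :- append(T, L, R).
        # under the list-length norm: if the head arguments have sizes
        # (t+1, l, r+1), the recursive body call has sizes (t, l, r), and the
        # level mapping of the head must be strictly larger.
        from itertools import product

        def level(coeffs, sizes):
            """Linear level mapping a*x + b*y + c*z + d over argument sizes."""
            a, b, c, d = coeffs
            x, y, z = sizes
            return a * x + b * y + c * z + d

        def decreases(coeffs, samples):
            """Head strictly larger than the recursive body call on all samples."""
            return all(level(coeffs, (t + 1, l, r + 1)) > level(coeffs, (t, l, r))
                       for t, l, r in samples)

        samples = list(product(range(4), repeat=3))   # sampled argument sizes
        for coeffs in product(range(3), repeat=4):    # coefficients in {0, 1, 2}
            if decreases(coeffs, samples):
                print("level mapping |append(x,y,z)| = %d*x + %d*y + %d*z + %d" % coeffs)
                break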

    A Practical Type Analysis for Verification of Modular Prolog Programs

    Regular types are a powerful tool for computing very precise descriptive types for logic programs. However, in the context of real-life, modular Prolog programs, the accurate results obtained with regular types often come at the price of efficiency. In this paper we propose a combination of techniques aimed at improving analysis efficiency in this context. As a first technique, we allow optionally reducing the accuracy of inferred types by using only the types defined by the user or present in the libraries. We claim that, for the purpose of verifying type signatures given in the form of assertions, the precision obtained with this approach is sufficient, and we show that analysis times can be reduced significantly. Our second technique is aimed at situations where we would like to limit the amount of reanalysis performed, especially for library modules. Borrowing some ideas from polymorphic type systems, we show how to solve the problem by admitting parameters in type specifications. This allows us to compose new call patterns with precomputed analysis information without losing any information. We argue that together these two techniques contribute to the practical and scalable analysis and verification of types in Prolog programs.
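
    The use of parameters in type specifications can be illustrated with a small sketch. The Python code below is a hypothetical toy model, not the paper's assertion language: the tuple representation of type terms, the single parameter "T", and the append/3 summary are assumptions made for the example. The point it shows is that a library predicate analysed once with a parametric success pattern can serve a new call pattern by instantiating the parameter, without reanalysing the library module.

        # Parametric (polymorphic-style) type summaries: analyse a library
        # predicate once, then instantiate the parameter for each new call.

        def match(param, concrete, binding):
            """Bind the type parameter "T" in `param` so that it matches `concrete`."""
            if isinstance(param, str):
                if param == "T":
                    binding.setdefault("T", concrete)
                return
            _, *pargs = param            # drop the type functor, e.g. "list"
            _, *cargs = concrete
            for p, c in zip(pargs, cargs):
                match(p, c, binding)

        def instantiate(param, binding):
            """Substitute bound parameters inside a parametric type term."""
            if isinstance(param, str):
                return binding.get(param, param)
            functor, *args = param
            return (functor, *(instantiate(a, binding) for a in args))

        # Precomputed parametric summary for append/3: if the first two
        # arguments are lists of T, on success the third is a list of T.
        summary_call    = (("list", "T"), ("list", "T"), "any")
        summary_success = (("list", "T"), ("list", "T"), ("list", "T"))

        # New call pattern from user code: append/3 with two integer lists.
        call_pattern = (("list", "int"), ("list", "int"), "any")

        binding = {}
        for p, c in zip(summary_call, call_pattern):
            match(p, c, binding)                      # binds T -> "int"

        print(tuple(instantiate(t, binding) for t in summary_success))
        # (('list', 'int'), ('list', 'int'), ('list', 'int'))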

    An integration of partial evaluation in a generic abstract interpretation framework

    Information generated by abstract interpreters has long been used to perform program specialization. Additionally, if the abstract interpreter generates a multivariant analysis, it is also possible to perform multiple specialization. Information about values of variables is propagated by simulating program execution and performing fixpoint computations for recursive calls. In contrast, traditional partial evaluators (mainly) use unfolding for both propagating values of variables and transforming the program. It is known that abstract interpretation is a better technique for propagating success values than unfolding. However, the program transformations induced by unfolding may lead to important optimizations which are not directly achievable in the existing frameworks for multiple specialization based on abstract interpretation. The aim of this work is to devise a specialization framework which integrates the better information propagation of abstract interpretation with the powerful program transformations performed by partial evaluation, and which can be implemented via small modifications to existing generic abstract interpreters. With this aim, we will relate top-down abstract interpretation with traditional concepts in partial evaluation and sketch how the sophisticated techniques developed for controlling partial evaluation can be adapted to the proposed specialization framework. We conclude that there can be both practical and conceptual advantages in the proposed integration of partial evaluation and abstract interpretation.
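
    As a rough illustration of the contrast drawn above, the following Python sketch (a toy example under assumed definitions, not the proposed framework) handles a recursive power function in the two styles: when the exponent is static, unfolding produces a residual expression, and when it is unknown, a fixpoint over a simple sign domain still propagates a success value for the result.

        # pow(b, n) = 1 if n == 0, otherwise b * pow(b, n - 1)

        # Partial evaluation: the exponent is known, so repeated unfolding
        # transforms the call into a residual expression.
        def unfold_pow(base_expr, n):
            expr = "1"
            for _ in range(n):
                expr = f"({base_expr} * {expr})"
            return expr

        print(unfold_pow("x", 3))        # (x * (x * (x * 1)))

        # Abstract interpretation: the exponent is unknown, so unfolding would
        # not terminate, but a fixpoint over a sign domain ("0" and "+" here)
        # still yields the possible signs of the result.
        def abs_mul(s1, s2):
            table = {("0", "0"): "0", ("0", "+"): "0",
                     ("+", "0"): "0", ("+", "+"): "+"}
            return {table[(a, b)] for a in s1 for b in s2}

        def pow_signs(base_signs):
            result = {"+"}               # base case: pow(b, 0) = 1 is positive
            while True:
                new = result | abs_mul(base_signs, result)
                if new == result:
                    return result
                result = new

        print(sorted(pow_signs({"0", "+"})))   # ['+', '0']: zero or positive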

    Abstract Program Slicing: an Abstract Interpretation-based approach to Program Slicing

    In the present paper we formally define the notion of abstract program slicing, a general form of program slicing where properties of data are considered instead of their exact value. This approach is applied to a language with numeric and reference values, and relies on the notion of abstract dependencies between program components (statements). The different forms of (backward) abstract slicing are added to an existing formal framework where traditional, non-abstract forms of slicing could be compared. The extended framework allows us to appreciate that abstract slicing is a generalization of traditional slicing, since traditional slicing (dealing with syntactic dependencies) is generalized by (semantic) non-abstract forms of slicing, which are actually equivalent to an abstract form where the identity abstraction is performed on data. Sound algorithms for computing abstract dependencies and a systematic characterization of program slices are provided, which rely on the notion of agreement between program states
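
    To make the role of agreements concrete, the following Python sketch gives one simplified reading of abstract dependencies (the statement, the parity abstraction, and the brute-force check over sampled states are assumptions made for the example, not the paper's formal definitions): for the statement y = 2*x + z, the parity of y shows no abstract dependency on x, while under the identity abstraction, which corresponds to traditional slicing, the dependency on x is present.

        # The output abstractly depends on an input variable if two input
        # states that agree on every other variable can still produce results
        # with different abstractions.
        from itertools import product

        def stmt(state):                 # the analysed statement: y = 2*x + z
            return 2 * state["x"] + state["z"]

        def parity(n):                   # the chosen abstraction
            return n % 2

        def identity(n):                 # identity abstraction = traditional slicing
            return n

        def abstractly_depends(var, abstraction, domain=range(-3, 4)):
            """Brute-force check over pairs of states agreeing on all variables
            other than `var`."""
            for x1, x2, z1, z2 in product(domain, repeat=4):
                s1, s2 = {"x": x1, "z": z1}, {"x": x2, "z": z2}
                if all(s1[w] == s2[w] for w in s1 if w != var) \
                        and abstraction(stmt(s1)) != abstraction(stmt(s2)):
                    return True
            return False

        print(abstractly_depends("x", parity))    # False: parity of y ignores x
        print(abstractly_depends("z", parity))    # True:  parity of y follows z
        print(abstractly_depends("x", identity))  # True:  traditional slicing keeps x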