
    Analyzing logic programs with dynamic scheduling

    Traditional logic programming languages, such as Prolog, use a fixed left-to-right atom scheduling rule. Recent logic programming languages, however, usually provide more flexible scheduling in which computation generally proceeds left-to-right but in which some calls are dynamically "delayed" until their arguments are sufficiently instantiated to allow the call to run efficiently. Such dynamic scheduling has a significant cost. We give a framework for the global analysis of logic programming languages with dynamic scheduling and show that program analysis based on this framework supports optimizations which remove much of the overhead of dynamic scheduling.
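
    As a minimal illustration (in SWI-Prolog; the predicate names are ours, not the paper's) of the overhead such analysis can remove: a goal wrapped in the delay primitive freeze/2 carries suspension machinery that becomes dead weight once analysis proves its argument is always instantiated at the call.

        :- use_module(library(lists)).   % for sum_list/2

        % Delayed version: freeze/2 (SICStus, SWI-Prolog) suspends the
        % goal until Xs is bound.
        delayed_sum(Xs, S) :- freeze(Xs, sum_list(Xs, S)).

        % If global analysis proves Xs is always instantiated here, the
        % compiler can emit the direct, suspension-free call instead:
        eager_sum(Xs, S) :- sum_list(Xs, S).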

    A practical application of sharing and freeness inference

    Abstract is not available

    Global flow analysis as a practical compilation tool

    This paper addresses the issue of the practicality of global flow analysis in logic program compilation, in terms of speed of the analysis, precision, and usefulness of the information obtained. To this end, design and implementation aspects are discussed for two practical abstract interpretation-based flow analysis systems: MA3, the MCC And-parallel Analyzer and Annotator; and Ms, an experimental mode inference system developed for SB-Prolog. The paper also provides performance data obtained from these implementations and, as an example of an application, a study of the usefulness of the mode information obtained in reducing run-time checks in independent and-parallelism. Based on the results obtained, it is concluded that the overhead of global flow analysis is not prohibitive, while the results of analysis can be quite precise and useful.
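
    As a sketch of the and-parallelism application (the predicate names are hypothetical; &/2 and indep/2 are the parallel conjunction and independence test of &-Prolog/Ciao, not standard Prolog): without mode information the compiler must guard the parallel conjunction with run-time tests, which inferred groundness allows it to drop.

        % Without mode information: run-time tests guard the parallel
        % conjunction (ground/1 and indep/2 check strict independence).
        p(X, Y, Z) :-
            ( ground(X), indep(Y, Z) ->
                q(X, Y) & r(X, Z)          % run in parallel
            ;   q(X, Y), r(X, Z)           % sequential fallback
            ).

        % With inferred modes showing X ground and Y, Z unaliased at
        % every call, both tests and the fallback disappear:
        p_opt(X, Y, Z) :- q(X, Y) & r(X, Z).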

    Some considerations on the compile-time analysis of constraint logic programs

    This paper discusses some issues which arise in the dataflow analysis of constraint logic programming (CLP) languages. The basic technique applied is that of abstract interpretation. First, some types of optimizations possible in a number of CLP systems (including efficient parallelization) are presented, and the information that has to be obtained at compile time in order to implement such optimizations is considered. Two approaches are then proposed and discussed for obtaining this information for a CLP program: one based on an analysis of a CLP metainterpreter using standard Prolog analysis tools, and a second one based on direct analysis of the CLP program. For the second approach an abstract domain which approximates groundness (also referred to as "definiteness") information (i.e., a variable being constrained to a single value) and the related abstraction functions are presented.
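
    The following is a minimal sketch of such a definiteness domain, assuming a simplified setting in which program variables are represented by atoms and the abstract store is a list of Var-def pairs (all names are ours). Adding a constraint V = Expr makes V definite exactly when Expr already is; the bidirectional propagation a real analysis performs (e.g. X definite in X = Y+1 forcing Y definite) is omitted.

        % definite(+Expr, +Env): Expr is constrained to a single value.
        definite(N, _)     :- number(N).
        definite(V, Env)   :- atom(V), memberchk(V-def, Env).
        definite(A+B, Env) :- definite(A, Env), definite(B, Env).
        definite(A*B, Env) :- definite(A, Env), definite(B, Env).

        % Abstract effect of adding the constraint V = Expr to the store.
        add_constraint(V = Expr, Env, [V-def|Env]) :-
            definite(Expr, Env), !.
        add_constraint(_ = _, Env, Env).

        % ?- add_constraint(x = 3, [], E1), add_constraint(y = x+1, E1, E2).
        % E2 = [y-def, x-def].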

    Abstract multiple specialization and its application to program parallelization

    Program specialization optimizes programs for known values of the input. It is often the case that the set of possible input values is unknown, or this set is infinite. However, a form of specialization can still be performed in such cases by means of abstract interpretation, specialization then being with respect to abstract values (substitutions), rather than concrete ones. We study the multiple specialization of logic programs based on abstract interpretation. This involves, in principle, and based on information from global analysis, generating several versions of a program predicate for different uses of that predicate, optimizing these versions, and, finally, producing a new, "multiply specialized" program. While multiple specialization has received theoretical attention, little previous evidence exists on its practicality. In this paper we report on the incorporation of multiple specialization in a parallelizing compiler and quantify its effects. A novel approach to the design and implementation of the specialization system is proposed. The proposed techniques yield specializations identical to those of the best previously proposed techniques but require little or no modification of some existing abstract interpreters. Our results show that, using the proposed techniques, the resulting "abstract multiple specialization" is indeed a relevant technique in practice. In particular, in the parallelizing compiler application, a good number of run-time tests are eliminated and invariants extracted automatically from loops, resulting generally in lower overheads and in several cases in increased speedups.
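
    A hypothetical before/after sketch of the effect described (predicate names ours): analysis finds distinct abstract call patterns, one version is emitted per pattern, and a test that is invariant across the recursion is paid once at the entry point instead of at every iteration.

        % Entry point: one test selects a specialized version, hoisting
        % the groundness check out of the recursion (a loop invariant
        % extracted automatically).
        all_pos(Xs) :-
            ( ground(Xs) -> all_pos_g(Xs) ; all_pos_any(Xs) ).

        % Version for calls with a ground list: no per-element test
        % overhead. (In a real system each version is optimized further,
        % e.g. compiled with cheaper unification instructions.)
        all_pos_g([]).
        all_pos_g([X|Xs]) :- X > 0, all_pos_g(Xs).

        % Generic version for the remaining call patterns.
        all_pos_any([]).
        all_pos_any([X|Xs]) :- X > 0, all_pos_any(Xs).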

    A practical approach to the global analysis of CLP programs

    This paper presents, and illustrates with an example, a practical approach to the dataflow analysis of programs written in constraint logic programming (CLP) languages using abstract interpretation. It is first argued that, from the framework point of view, it suffices to propose relatively simple extensions of traditional analysis methods which have already been proved useful and practical and for which efficient fixpoint algorithms have been developed. This is shown by proposing a simple but quite general extension of Bruynooghe's traditional framework to the analysis of CLP programs. In this extension constraints are viewed not as "suspended goals" but rather as new information in the store, following the traditional view of CLP. Using this approach, and as an example of its use, a complete, constraint system independent, abstract analysis is presented for approximating definiteness information. The analysis is in fact of quite general applicability. It has been implemented and used in the analysis of CLP(R) and Prolog-III applications. Results from the implementation of this analysis are also presented.
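
    For concreteness, the classic CLP(R) mortgage relation (runnable with SWI-Prolog's library(clpr)) is the kind of program such an analysis targets; a definiteness analysis would infer, for example, that if P, T and R are definite at the call then B is definite on success.

        :- use_module(library(clpr)).

        % mortgage(P, T, R, B): principal P, T periods, repayment R per
        % period, final balance B, at 1% interest per period.
        mortgage(P, T, _R, B) :-
            {T = 0, B = P}.
        mortgage(P, T, R, B) :-
            {T >= 1, P1 = P * 1.01 - R, T1 = T - 1},
            mortgage(P1, T1, R, B).

        % ?- mortgage(100000, 3, 1000, B).   % B is uniquely determined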

    Detection and optimization of suspension-free logic programs

    In recent years, language mechanisms to suspend, or delay, the execution of goals until certain variables become bound have become increasingly popular in logic programming languages. While convenient, such mechanisms can make control flow within a program difficult to predict at compile time, and therefore render many traditional compiler optimizations inapplicable. Unfortunately, this performance cost is also incurred by programs that do not use any delay primitives. In this paper, we describe a simple dataflow analysis for detecting computations where suspension effects can be ignored, and discuss several low-level optimizations that rely on this information. Our algorithm has been implemented in the jc system. Optimizations based on information it produces result in significant performance improvements, demonstrating speed comparable to or exceeding that of optimized C programs.
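
    A toy rendering of the underlying question (names ours, not the jc implementation): given the set of variables an analysis knows to be bound at a program point, a delayed goal is suspension-free when its wait-set is covered, and can then be compiled as an ordinary call.

        :- use_module(library(lists)).   % for subset/2

        % Goals are goal(G) or delay(G, WaitSet): the latter suspends
        % until every variable in WaitSet is bound.
        suspension_free(goal(_), _).
        suspension_free(delay(_, WaitSet), Bound) :-
            subset(WaitSet, Bound).

        % ?- suspension_free(delay(len(xs, n), [xs]), [xs, ys]).
        % true.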

    Effectiveness of combined sharing and freeness analysis using abstract interpretation

    This paper presents improved unification algorithms, an implementation, and an analysis of the effectiveness of an abstract interpreter based on the sharing + freeness domain presented in a previous paper, which was designed to accurately and concisely represent combined freeness and sharing information for program variables. We first briefly review this domain and the unification algorithms previously proposed. We then improve these algorithms and correct them to deal with some cases which were not well analyzed previously, illustrating the improvement with an example. We then present the implementation of the improved algorithm and evaluate its performance by comparing the effectiveness of the information inferred to that of other interpreters available to us for an application (program parallelization) that is common to all these interpreters. All these systems have been embedded in a real parallelizing compiler. Effectiveness of the analysis is measured in terms of actual final performance of the system, i.e., in terms of the actual speedups obtained. The results show good performance for the combined domain in that it improves the accuracy of both types of information and also in that the analyzer using the combined domain is more effective in the application than any of the other analyzers it is compared to.
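
    To illustrate what the parallelization application consumes (the predicate is our sketch; the representation follows the sharing+freeness domain): an abstract substitution pairs a set of possible sharing groups with the set of variables known to be free, and two goals are strictly independent when no sharing group intersects both.

        :- use_module(library(lists)).

        % independent(+Vs1, +Vs2, +Sharing): no sharing group contains
        % a variable from each goal, so the goals cannot share at run
        % time (strict independence).
        independent(Vs1, Vs2, Sharing) :-
            \+ ( member(G, Sharing),
                 member(V1, Vs1), memberchk(V1, G),
                 member(V2, Vs2), memberchk(V2, G) ).

        % E.g. after H = f(X), Z = g(X), with Y untouched (and free):
        % Sharing = [[h,x,z], [y]], Freeness = [y], so q(Y) may run in
        % parallel with r(X, Z):
        % ?- independent([y], [x,z], [[h,x,z], [y]]).
        % true.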

    A compiler approach to scalable concurrent program design

    The programmer's most powerful tool for controlling complexity in program design is abstraction. We seek to use abstraction in the design of concurrent programs, so as to separate design decisions concerned with decomposition, communication, synchronization, mapping, granularity, and load balancing. This paper describes programming and compiler techniques intended to facilitate this design strategy. The programming techniques are based on a core programming notation with two important properties: the ability to separate concurrent programming concerns, and extensibility with reusable programmer-defined abstractions. The compiler techniques are based on a simple transformation system together with a set of compilation transformations and portable run-time support. The transformation system allows programmer-defined abstractions to be defined as source-to-source transformations that convert abstractions into the core notation. The same transformation system is used to apply compilation transformations that incrementally transform the core notation toward an abstract concurrent machine. This machine can be implemented on a variety of concurrent architectures using simple run-time support. The transformation, compilation, and run-time system techniques have been implemented and are incorporated in a public-domain program development toolkit. This toolkit operates on a wide variety of networked workstations, multicomputers, and shared-memory multiprocessors. It includes a program transformer, concurrent compiler, syntax checker, debugger, performance analyzer, and execution animator. A variety of substantial applications have been developed using the toolkit, in areas such as climate modeling and fluid dynamics.
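
    A minimal Prolog sketch of the rewrite-to-fixpoint idea behind such a transformation system (the rules and names are ours, not the toolkit's notation): programmer-defined rules map abstractions to core constructs, and the transformer applies them until only the core notation remains.

        % rule(+Abstraction, -Expansion): programmer-defined,
        % source-to-source rewrite rules. Here a hypothetical
        % parallel-map abstraction expands into core constructs.
        rule(par_map(_, [], Out),        Out = []).
        rule(par_map(P, [X|Xs], [Y|Ys]), par(call(P, X, Y),
                                             par_map(P, Xs, Ys))).

        % rewrite(+Term, -Core): apply rules to a fixpoint; par/2 is
        % taken to be core notation and is traversed, not rewritten.
        rewrite(T, Core) :-
            rule(T, T1), !, rewrite(T1, Core).
        rewrite(par(A0, B0), par(A, B)) :- !,
            rewrite(A0, A), rewrite(B0, B).
        rewrite(T, T).

        % ?- rewrite(par_map(p, [a, b], Ys), Core).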

    Effectiveness of global analysis in strict independence-based automatic program parallelization

    This paper presents a study of the effectiveness of global analysis in the parallelization of logic programs using strict independence. A number of well-known approximation domains are selected and their usefulness for the application in hand is explained. Also, methods for using the information provided by such domains to improve parallelization are proposed. Local and global analyses are built using these domains and such analyses are embedded in a complete parallelizing compiler. Then, the performance of the domains (and the system in general) is assessed for this application through a number of experiments. We argue that the results offer significant insight into the characteristics of these domains, the demands of the application, and the tradeoffs involved.
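
    The canonical example of the transformation studied here (&/2 is the parallel conjunction of &-Prolog/Ciao, not standard Prolog): once analysis proves the two recursive calls strictly independent, e.g. for ground input lists, they are annotated for parallel execution with no run-time test.

        qsort([], []).
        qsort([X|Xs], Sorted) :-
            part(Xs, X, Smaller, Larger),
            ( qsort(Smaller, SS) & qsort(Larger, SL) ),  % no shared vars
            append(SS, [X|SL], Sorted).

        part([], _, [], []).
        part([Y|Ys], X, [Y|L], R) :- Y =< X, !, part(Ys, X, L, R).
        part([Y|Ys], X, L, [Y|R]) :- part(Ys, X, L, R).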