    Automatic parallelization of irregular and pointer-based computations: perspectives from logic and constraint programming

    Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. In the past decade there has been significant progress in the development of parallelizing compilers for logic programming and, more recently, constraint programming. The typical applications of these paradigms frequently involve irregular computations, which arguably makes the techniques used in these compilers potentially interesting. In this paper we introduce, in a tutorial way, some of the problems faced by parallelizing compilers for logic and constraint programs. These include the need for inter-procedural pointer aliasing analysis for independence detection and the need to manage speculative and irregular computations through task granularity control and dynamic task allocation. We also provide pointers to some of the progress made in these areas. In the associated talk we demonstrate representatives of several generations of these parallelizing compilers.
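    To make the independence-detection problem concrete, the sketch below (Python, with invented names; not code from the paper) shows the strict-independence test that underlies goal-level and-parallelism: two goals may run in parallel if no run-time variable is reachable from both. Parallelizing compilers approximate this with abstract domains such as set-sharing; the sketch assumes the sharing abstraction has already been computed by a prior analysis.

```python
# A sharing abstraction is a set of "sharing groups": each group lists
# program variables that may be bound to terms containing a common
# run-time variable.  Names and representation here are illustrative.

def strictly_independent(vars_g1, vars_g2, sharing):
    """Goals g1 and g2 are strictly independent if no sharing group
    touches variables of both goals."""
    return not any(group & vars_g1 and group & vars_g2
                   for group in sharing)

# Example: after X = f(Z), Y = g(Z), the goals p(X) and q(Y) share
# through Z and must not run in parallel; p(X) and r(W) may.
sharing = [{"X", "Y", "Z"}]
print(strictly_independent({"X"}, {"Y"}, sharing))  # False
print(strictly_independent({"X"}, {"W"}, sharing))  # True
```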

    Heap Garbage Collection in XSB: Practice and Experience

    Starting from a theoretical understanding of the usefulness of heap garbage collection for a logic programming system with built-in tabling, and from a collector that did not take the characteristics of a tabled abstract machine into account, we have built two heap garbage collectors (one mark&slide, one mark&copy) for XSB on top of the CHAT implementation model for the suspension/resumption of consumers. Based on this experience we discuss implementation issues that are general to heap garbage collection for the WAM, as well as issues that are specific to an implementation with tabling: as such, this paper documents our own implementation and can serve as guidance for anyone attempting a similar feat. We report on the behaviour of the garbage collectors on different kinds of programs. We also present figures on the extent of internal fragmentation and the effectiveness of early reset in Prolog systems with and without tabling.
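    As a rough illustration of the mark&slide style of collector discussed above (a toy model over a simulated heap, not XSB's implementation): the collector marks the cells reachable from the root set, computes a forwarding address for each live cell, then slides live cells toward the heap bottom and rewrites internal pointers. Sliding preserves the relative order of heap cells, which WAM-based systems exploit for cheap age comparisons; a mark&copy collector gives that up in exchange for cost proportional to live data only.

```python
# Toy WAM-like heap: each cell is ("REF", addr) or ("ATOM", name).

def mark(heap, roots):
    """Phase 1: mark every cell reachable from the roots."""
    marked = [False] * len(heap)
    stack = list(roots)
    while stack:
        a = stack.pop()
        if not marked[a]:
            marked[a] = True
            tag, val = heap[a]
            if tag == "REF":          # follow pointer cells
                stack.append(val)
    return marked

def slide(heap, roots, marked):
    """Phase 2: compact live cells downwards, preserving their order."""
    fwd, nxt = {}, 0                  # forwarding table: old addr -> new addr
    for a, live in enumerate(marked):
        if live:
            fwd[a], nxt = nxt, nxt + 1
    new_heap = [("REF", fwd[val]) if tag == "REF" else (tag, val)
                for a, (tag, val) in enumerate(heap) if marked[a]]
    return new_heap, [fwd[r] for r in roots]

heap = [("ATOM", "a"), ("ATOM", "junk"), ("REF", 0)]
marked = mark(heap, roots=[2])
print(slide(heap, [2], marked))       # the dead "junk" cell is squeezed out
```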

    Copying garbage collection for the WAM: To mark or not to mark?

    Garbage collection by copying is becoming more and more popular for Prolog. In principle, copying requires a marking phase in order to be safe. However, some systems use a copying garbage collector without marking prior to copying, and instead postpone the copying of cells that might cause problems. Such systems always perform minor collections, and it is not clear whether postponing works for major collections.
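    For contrast, here is a toy Cheney-style copying collector (a generic sketch, not the scheme evaluated in this paper): roots are evacuated into to-space, and a scan pointer then processes the copied cells, with a forwarding table ensuring every live cell is copied exactly once so that sharing is preserved. The WAM-specific safety and ordering issues that motivate a prior mark phase are precisely what this plain algorithm does not address.

```python
# Toy heap cells: ("REF", addr) or ("ATOM", name).

def copy_collect(heap, roots):
    to_space, fwd = [], {}            # fwd: from-space addr -> to-space addr

    def evacuate(a):
        if a not in fwd:              # copy each live cell exactly once,
            fwd[a] = len(to_space)    # recording its forwarding address
            to_space.append(heap[a])
        return fwd[a]

    new_roots = [evacuate(r) for r in roots]
    scan = 0                          # Cheney scan pointer
    while scan < len(to_space):
        tag, val = to_space[scan]
        if tag == "REF":              # fix up pointers as we scan
            to_space[scan] = ("REF", evacuate(val))
        scan += 1
    return to_space, new_roots

heap = [("ATOM", "a"), ("REF", 0), ("ATOM", "junk")]
print(copy_collect(heap, roots=[1])) # ([('REF', 1), ('ATOM', 'a')], [0])
```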

    Parallel execution of Prolog programs

    Since the early days of logic programming, researchers in the field have realised the potential for exploiting the parallelism present in the execution of logic programs. Their high-level nature, the presence of non-determinism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation makes the techniques used in the corresponding parallelizing compilers and run-time systems potentially interesting even outside the field. The objective of this paper is to provide a comprehensive survey of the issues arising in the parallel execution of logic programming languages, along with the most relevant approaches explored to date in the field. The focus is mostly on the challenges emerging from the parallel execution of Prolog programs. The paper describes the major techniques used for shared-memory implementation of Or-parallelism, And-parallelism, and combinations of the two. We also explore related issues such as memory management, compile-time analysis, and execution visualisation.
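    To give a flavour of the Or-parallelism the survey covers, the sketch below (an invented toy in Python, not one of the surveyed systems) runs the alternative clauses for a goal on separate workers, giving each branch a private copy of the bindings. Making those per-branch binding environments cheap without copying everything is the "multiple bindings" problem that real or-parallel engines address with devices such as binding arrays or stack copying.

```python
from concurrent.futures import ThreadPoolExecutor

def solve_alternatives(goal, clauses, bindings):
    """Explore every alternative clause for `goal` in parallel
    (or-parallelism) and collect all solutions."""
    def try_clause(clause):
        env = dict(bindings)          # private binding environment per branch
        return clause(goal, env)      # a clause yields a solution env or None
    with ThreadPoolExecutor() as pool:
        return [env for env in pool.map(try_clause, clauses) if env]

# Three alternative "clauses" binding X to a colour: each branch succeeds
# independently of the others, exactly the situation or-parallelism exploits.
clauses = [lambda g, env, c=c: {**env, g: c} for c in ("red", "green", "blue")]
print(solve_alternatives("X", clauses, {}))
# [{'X': 'red'}, {'X': 'green'}, {'X': 'blue'}]
```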