1,253 research outputs found

    Efficient management of backtracking in and-parallelism

    Full text link
    A backtracking algorithm for AND-Parallelism and its implementation at the Abstract Machine level are presented: first, a class of AND-Parallelism models based on goal independence is defined, and a generalized version of Restricted AND-Parallelism (RAP) is introduced as characteristic of this class. A simple and efficient backtracking algorithm for RAP is then discussed. An implementation scheme is presented for this algorithm which offers minimum overhead, while retaining the performance and storage economy of sequential implementations and taking advantage of goal independence to avoid unnecessary backtracking ("restricted intelligent backtracking"). Finally, the implementation of backtracking in sequential and AND-Parallel systems is explained through a number of examples.
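
    To make the notion of goal independence that this class of models relies on concrete, the following sketch in standard Prolog uses a run-time groundness check to choose between a parallel and a sequential execution graph. The &/2 operator and the toy predicates are illustrative stand-ins rather than the paper's abstract-machine instructions; &/2 is defined here to run its goals sequentially so that the fragment can be loaded in an ordinary Prolog system.

        :- op(950, xfy, &).

        % Stand-in for the parallel conjunction: a real and-parallel system
        % would schedule G1 and G2 on separate agents.  Because the goals
        % share no unbound variables, a failure of G2 that none of G2's own
        % alternatives can cure cannot be cured by re-trying G1 either, so
        % backtracking may skip G1's alternatives ("restricted intelligent
        % backtracking").
        G1 & G2 :- call(G1), call(G2).

        % The run-time check selects one of two compiled execution graphs.
        process(X, Y, Z) :-
            (   ground(X)
            ->  p(X, Y) & q(X, Z)     % goals are independent: parallel graph
            ;   p(X, Y), q(X, Z)      % possibly dependent: sequential graph
            ).

        % Toy definitions so the fragment can be queried, e.g. ?- process(a, Y, Z).
        p(a, 1).  p(a, 2).
        q(a, 10). q(a, 20).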

    Independent AND-parallel implementation of narrowing

    Get PDF
    We present a parallel graph narrowing machine, which is used to implement a functional logic language on a shared memory multiprocessor. It is an extension of an abstract machine for a purely functional language. The result is a programmed graph reduction machine which integrates the mechanisms of unification, backtracking, and independent and-parallelism. In the machine, the subexpressions of an expression can run in parallel. In the case of backtracking, the structure of an expression is used to avoid the reevaluation of subexpressions as far as possible. Deterministic computations are detected. Their results are maintained and need not be reevaluated after backtracking.
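
    The following standard Prolog fragment is a conceptual sketch (with invented names) of the flattening idea behind such machines: a function's subexpressions become body goals, and subexpressions that share no variables, like the two inner sums below, are candidates for independent and-parallel evaluation, while a deterministic subexpression's result can be kept across backtracking instead of being recomputed.

        % Peano addition, usable as a relation even with unbound arguments.
        add(z, Y, Y).
        add(s(X), Y, s(Z)) :- add(X, Y, Z).

        % A hypothetical function  f(X, Y) = (X + X) + (Y + Y)  flattened into
        % a predicate: the first two goals share no variables, so an
        % and-parallel machine could evaluate them simultaneously.
        f(X, Y, R) :-
            add(X, X, A),     % subexpression X + X
            add(Y, Y, B),     % subexpression Y + Y, independent of the first
            add(A, B, R).     % outermost application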

    Divided we stand: Parallel distributed stack memory management

    Get PDF
    We present an overview of the stack-based memory management techniques that we used in our non-deterministic and-parallel Prolog systems: &-Prolog and DASWAM. We believe that the problems associated with non-deterministic and-parallel systems are more general than those encountered in or-parallel and deterministic and-parallel systems, which can be seen as subsets of this more general case. We build on the previously proposed "marker scheme", lifting some of the restrictions associated with the selection of goals while keeping (virtual) memory consumption down. We also review some of the other problems associated with the stack-based management scheme, such as handling of forward and backward execution, cut, and roll-backs.

    An abstract machine for restricted and-parallel execution of logic programs

    Full text link
    Although the sequential execution speed of logic programs has been greatly improved by the concepts introduced in the Warren Abstract Machine (WAM), parallel execution represents the only way to increase this speed beyond the natural limits of sequential systems. However, most proposed parallel logic programming execution models lack the performance optimizations and storage efficiency of sequential systems. This paper presents a parallel abstract machine which is an extension of the WAM and is thus capable of supporting AND-Parallelism without giving up the optimizations present in sequential implementations. A suitable instruction set, which can be used as a target by a variety of logic programming languages, is also included. Special instructions are provided to support a generalized version of "Restricted AND-Parallelism" (RAP), a technique which reduces the overhead traditionally associated with the run-time management of variable binding conflicts to a series of simple run-time checks, which select one out of a series of compiled execution graphs.
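
    At the source level, the "series of simple run-time checks" can be pictured as in the sketch below (standard Prolog, illustrative names): a conjunction of groundness and independence tests decides which of two precompiled graphs is taken. The indep/2 test shown is a naive variable-sharing check, not the instruction actually emitted by the abstract machine.

        :- use_module(library(lists)).

        % Naive independence test: two terms are independent if they share
        % no variables.
        indep(A, B) :-
            term_variables(A, Vs1),
            term_variables(B, Vs2),
            \+ ( member(V, Vs1), member(W, Vs2), V == W ).

        % Generalized conditional graph expression: the checks select, at run
        % time, one of the compiled execution graphs for the clause body.
        execute(X, Y, Z) :-
            (   ground(X), indep(Y, Z)
            ->  par_graph(X, Y, Z)    % checks succeed: parallel graph
            ;   seq_graph(X, Y, Z)    % otherwise: sequential graph
            ).

        % Illustrative bodies: par_graph/3 is where p and q would be run in
        % parallel; here both graphs behave identically.
        par_graph(X, Y, Z) :- p(X, Y), q(X, Z).
        seq_graph(X, Y, Z) :- p(X, Y), q(X, Z).

        p(a, 1).
        q(a, 2).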

    Relating goal scheduling, precedence, and memory management in and-parallel execution of logic programs

    Full text link
    The interactions among three important issues involved in the implementation of logic programs in parallel (goal scheduling, precedence, and memory management) are discussed. A simplified, parallel memory management model and an efficient, load-balancing goal scheduling strategy are presented. It is shown how, for systems which support "don't know" non-determinism, special care has to be taken during goal scheduling if the space recovery characteristics of sequential systems are to be preserved. A solution based on selecting only "newer" goals for execution is described, and an algorithm is proposed for efficiently maintaining and determining precedence relationships and variable ages across parallel goals. It is argued that the proposed schemes and algorithms make it possible to extend the storage performance of sequential systems to parallel execution without the considerable overhead previously associated with it. The results are applicable to a wide class of parallel and coroutining systems, and they represent an efficient alternative to "all heap" or "spaghetti stack" allocation models.

    Complete and efficient methods for supporting side effects in independent/restricted and-parallelism

    Get PDF
    It has been shown that it is possible to exploit Independent/Restricted And-parallelism in logic programs while retaining the conventional "don't know" semantics of such programs. In particular, it is possible to parallelize pure Prolog programs while maintaining the semantics of the language. However, when built-in side-effects (such as write or assert) appear in the program, if an identical observable behaviour to that of sequential Prolog implementations is to be preserved, such side-effects have to be properly sequenced. Previously proposed solutions to this problem are either incomplete (lacking, for example, backtracking semantics) or they force sequentialization of significant portions of the execution graph which could otherwise run in parallel. In this paper a series of side-effect synchronization methods are proposed which incur lower overhead and allow more parallelism than those previously proposed. Most importantly, and unlike previous proposals, they have well-defined backward execution behaviour and require only a small modification to a given (And-parallel) Prolog implementation.
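
    A small standard Prolog fragment (with invented goal names) illustrates the problem: the two branches of the parallel conjunction are independent in the logical sense, yet both write to the output, so the implementation must ensure that the side effects appear in the left-to-right order a sequential Prolog would produce. The &/2 stub below simply sequentializes, which is a trivially correct but non-parallel form of synchronization; the methods proposed in the paper instead let the goals run ahead and delay each side effect only until all side effects to its left have been performed.

        :- op(950, xfy, &).

        % Sequential stub for the parallel conjunction: running the goals
        % left to right trivially preserves Prolog's observable behaviour.
        G1 & G2 :- call(G1), call(G2).

        report(X, Y) :-
            step1(X) & step2(Y).

        step1(X) :- write(first(X)), nl.    % side effect in the left goal
        step2(Y) :- write(second(Y)), nl.   % must not be printed before step1's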

    Experimenting with independent and-parallel prolog using standard prolog

    Get PDF
    This paper presents an approach to the study of parallel systems using sequential tools. Independent And-parallelism in Prolog is an example of a parallel processing paradigm in the framework of logic programming, and implementations like &-Prolog uncover the potential performance of parallel processing. But this potential can also be explored using only sequential systems. Since the spirit of this paper is to show how this can be done with a standard system, only standard Prolog is used in the implementations included. Such implementations include tests for parallelism in And-Prolog, a correctness-checking meta-interpreter of &-Prolog, and a simulator of parallel execution for &-Prolog.
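
    In the same spirit, the sketch below is a minimal standard-Prolog meta-interpreter for programs annotated with a parallel conjunction (written &/2 here). It is an illustrative reconstruction, not the paper's code: executing &/2 sequentially is already enough to check that an annotated program computes the same answers as its unannotated version.

        :- op(950, xfy, &).
        :- dynamic lens/4, len/2.   % dynamic so clause/2 can inspect them portably

        % Vanilla meta-interpreter extended with a clause for the parallel
        % conjunction: A & B is simulated by running A and then B.
        solve(true)    :- !.
        solve((A, B))  :- !, solve(A), solve(B).
        solve((A & B)) :- !, solve(A), solve(B).   % simulated parallel conjunction
        solve(G)       :- predicate_property(G, built_in), !, call(G).
        solve(G)       :- clause(G, Body), solve(Body).

        % Example annotated program: the two calls to len/2 are independent,
        % e.g. ?- solve(lens([a,b], [c], N, M)).
        lens(Xs, Ys, N, M) :- len(Xs, N) & len(Ys, M).

        len([], 0).
        len([_|T], N) :- len(T, N0), N is N0 + 1.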

    A Framework for Efficient Execution of Logic Programs.

    Get PDF
    The focus of this dissertation is to develop an efficient framework for sequential execution of logic programs. Within this framework the logic programs are executed by pruning the goal-search tree whenever applicable. Three new concepts for pruning of computation during execution of logic programs are introduced. (1) Failure-binding. A failure-binding for a literal is a binding which, when applied to the literal, fails the goal obtained from the literal. Failure-bindings for a literal are identified by analyzing the goal-tree of a goal which is obtained from the literal. The failure-bindings for a literal are used for intelligent backtracking based on the generator-consumer approach. Intelligent backtracking based on failure-bindings prunes the computation of search space which leads to late detection of failure. (2) Failure-solution. A failure-solution of a goal is a solution which is unacceptable to some other subgoal in the forward execution. Failure-solutions of a goal are identified by analyzing the history of computation during execution. Failure-solutions of goals are used for intelligent forward execution. Intelligent forward execution prunes the computation of search space which leads to repeated failure resulting from repeated successes of a goal. (3) Forward jumping. Forward jumping is a method to avoid re-execution of some subgoals after backtracking (instead of naive forward execution after backtracking). Forward jumping is based on the dynamic subgoal dependencies in a rule. Such jumping prunes the computation of the search spaces which lead to the same sequences of successes of subgoals after backtracking. To facilitate the implementation of these concepts a new data structure, called the segmented-stack, is defined. The space complexity of a segmented stack is linear in the number of nodes in the stack. Depth-first search as well as breadth-first search are very easily implemented on a segmented-stack during execution of logic programs. Execution of logic programs on a segmented-stack allows association of the search space, as well as the solutions, of a goal with the frame of the goal. This enables implementation of intelligent backtracking, intelligent forward execution, and forward jumping. The search based on each of these paradigms is proved to be sound and complete. It is also shown that the implementation of these paradigms preserves the order of results obtained by Prolog. The effects of the non-logical operators in Prolog on the paradigms are studied. The search based on these paradigms is compared, individually and collectively, with the standard search by Prolog.
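
    As a concrete, invented example of the first concept: in the generator-consumer fragment below, the binding X = green is a failure-binding for the literal legal(X), since the goal obtained by applying it fails. An intelligent backtracker that has recorded this fact can reject any later generator choice that would bind X to green again, rather than rediscovering the failure.

        % Generator-consumer pair (illustrative program, not from the dissertation).
        color(red).
        color(green).
        color(blue).

        legal(red).
        legal(blue).

        % Plain Prolog re-derives the failure of legal(green) every time the
        % generator produces that binding; backtracking guided by recorded
        % failure-bindings would prune such generator choices directly.
        pick(X) :- color(X), legal(X).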