
    Independence in CLP Languages

    Studying independence of goals has proven very useful in the context of logic programming. In particular, it has provided a formal basis for powerful automatic parallelization tools, since independence ensures that two goals may be evaluated in parallel while preserving correctness and efficiency. We extend the concept of independence to constraint logic programs (CLP) and prove that it also ensures the correctness and efficiency of the parallel evaluation of independent goals. Independence for CLP languages is more complex than for logic programming, as search space preservation is necessary but no longer sufficient for ensuring correctness and efficiency. Two additional issues arise. The first is that the cost of constraint solving may depend upon the order in which constraints are encountered. The second is the need to handle dynamic scheduling. We clarify these issues by proposing various types of search independence and constraint solver independence, and show how they can be combined to allow different optimizations, from parallelism to intelligent backtracking. Sufficient conditions for independence which can be evaluated "a priori" at run-time are also proposed. Our study also yields new insights into independence in logic programming languages. In particular, we show that search space preservation is not only a sufficient but also a necessary condition for ensuring the correctness and efficiency of parallel execution.
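
    For illustration only, here is a minimal sketch in &-Prolog/Ciao-style syntax (the parallel conjunction operator &/2 and the predicate names are assumptions for the example, not the paper's own code) contrasting goals that share no variables with goals whose constraints meet in the solver:

        % p/1 and q/1 share no variables, so under strict independence they
        % may be evaluated in parallel with the same answers and no extra
        % search (&/2 denotes parallel conjunction, as in &-Prolog/Ciao).
        main(X, Y) :-
            p(X) & q(Y).

        % r/1 and s/1 both constrain Z. In CLP they may still be independent
        % if their constraints do not interact through the solver, but this
        % must be established first, e.g. by conditions of the kind proposed
        % in the paper.
        maybe_dependent(Z) :-
            r(Z), s(Z).         % kept sequential unless independence is proved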

    Minimizing the overheads of dependent {AND}-parallelism

    Parallel implementations of programming languages need to control synchronization overheads. Synchronization is essential for ensuring the correctness of parallel code, yet it adds overheads that aren't present in sequential programs. This is an important problem for parallel logic programming systems, because almost every action in such programs requires accessing variables, and the traditional approach of adding synchronization code to all such accesses is so prohibitively expensive that a parallel version of the program may run more slowly on four processors than a sequential version would run on one processor. We present a program transformation for implementing dependent AND-parallelism in logic programming languages that uses mode information to add synchronization code only to the variable accesses that actually need it.
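
    As a hedged illustration of the idea (not the paper's actual transformation, which targets a mode-analysed language), a dependent conjunction can be rewritten so that only the accesses to the one shared variable carry synchronization; new_future/1, signal/2 and wait/2 are hypothetical primitives invented for this sketch:

        % Original conjunction: p/2 produces X, q/2 consumes X.
        %   main(In, Out) :- p(In, X), q(X, Out).
        %
        % Transformed for dependent AND-parallelism: mode information says X
        % is the only variable crossing the parallel boundary, so it is the
        % only one wrapped in a future; all other accesses stay unsynchronized.
        main(In, Out) :-
            new_future(FX),
            ( p(In, X), signal(FX, X) )      % producer publishes X once bound
            &
            ( wait(FX, X1), q(X1, Out) ).    % consumer blocks only at this access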

    An abstract machine for restricted and-parallel execution of logic programs

    Although the sequential execution speed of logic programs has been greatly improved by the concepts introduced in the Warren Abstract Machine (WAM), parallel execution represents the only way to increase this speed beyond the natural limits of sequential systems. However, most proposed parallel logic programming execution models lack the performance optimizations and storage efficiency of sequential systems. This paper presents a parallel abstract machine which is an extension of the WAM and is thus capable of supporting AND-parallelism without giving up the optimizations present in sequential implementations. A suitable instruction set, which can be used as a target by a variety of logic programming languages, is also included. Special instructions are provided to support a generalized version of "Restricted AND-Parallelism" (RAP), a technique which reduces the overhead traditionally associated with the run-time management of variable binding conflicts to a series of simple run-time checks, which select one out of a series of compiled execution graphs.
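
    A conditional expression of the kind RAP compiles down to a few run-time checks might look as follows (an illustrative sketch in &-Prolog-style source syntax, not the abstract machine's instruction set):

        % If X is ground when execution reaches this point, p/1 and q/1 cannot
        % conflict on variable bindings, so the parallel execution graph is
        % chosen; otherwise the sequential graph is used. The compiler emits
        % the ground/1 check and both precompiled graphs.
        main(X) :-
            (  ground(X)
            -> p(X) & q(X)       % parallel execution graph
            ;  p(X), q(X)        % sequential fallback graph
            ).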

    Some methodological issues in the design of CIAO, a generic, parallel concurrent constraint system

    We informally discuss several issues related to the parallel execution of logic programming systems and concurrent logic programming systems, and their generalization to constraint programming. We propose a new view of these systems, based on a particular definition of parallelism. We argue that, under this view, a large number of the actual systems and models can be explained through the application, at different levels of granularity, of only a few basic principles: determinism, non-failure, independence (also referred to as stability), granularity, etc. Also, and based on the convergence of concepts that this view brings, we sketch a model for the implementation of several parallel constraint logic programming source languages and models based on a common, generic abstract machine and an intermediate kernel language.
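
    As one concrete instance of these principles, granularity control can be sketched as follows (illustrative only; the threshold value and the &/2 parallel conjunction are assumptions in the style of &-Prolog/Ciao):

        % Run the recursive calls in parallel only when the input is large
        % enough for the parallel work to outweigh task-creation overhead.
        qsort_par([], []).
        qsort_par([P|Xs], Sorted) :-
            split_pivot(Xs, P, Small, Large),
            length(Xs, N),
            (  N > 100                                    % assumed grain-size threshold
            -> qsort_par(Small, SS) & qsort_par(Large, SL)
            ;  qsort_par(Small, SS),  qsort_par(Large, SL)
            ),
            append(SS, [P|SL], Sorted).

        % Helper: split the list into elements not greater / greater than P.
        split_pivot([], _, [], []).
        split_pivot([X|Xs], P, [X|Ss], Ls) :- X =< P, !, split_pivot(Xs, P, Ss, Ls).
        split_pivot([X|Xs], P, Ss, [X|Ls]) :- split_pivot(Xs, P, Ss, Ls).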

    Parallel logic programming systems

    Parallelizing logic programming has attracted much interest in the research community because of the intrinsic or- and and-parallelism of logic programs. One research stream aims at transparent exploitation of parallelism in existing logic programming languages such as Prolog, while the family of concurrent logic languages develops constructs allowing programmers to express the concurrency, that is, the communication and synchronization between parallel processes, inside their algorithms. This paper mainly concentrates on transparent exploitation of parallelism and surveys the most mature solutions to the problems that must be solved in order to obtain efficient implementations. These solutions have been implemented, and the most efficient parallel logic programming systems reach effective speedups over state-of-the-art sequential Prolog implementations. The paper also addresses current and prospective research issues aiming to extend the applicability and the efficiency of existing systems, such as models merging the transparent-parallelism and concurrent-logic-language approaches, the combination of constraint logic programming with parallelism, and the use of highly parallel architectures.
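
    To make the two intrinsic forms of parallelism concrete, a short illustrative sketch follows (the predicates are invented for the example; &/2 again denotes a parallel conjunction in the &-Prolog/Ciao style):

        % OR-parallelism: the two clauses of path/3 are alternative branches of
        % the search tree, which different workers can explore simultaneously.
        edge(a, b).
        edge(b, c).

        path(X, Y, [X, Y])   :- edge(X, Y).
        path(X, Y, [X | Ps]) :- edge(X, Z), path(Z, Y, Ps).

        % AND-parallelism: goals within a single clause body run at the same
        % time; here the two subtrees of a binary tree are summed in parallel.
        tree_sum(leaf(N), N).
        tree_sum(node(L, R), S) :-
            tree_sum(L, SL) & tree_sum(R, SR),
            S is SL + SR.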

    Topics in Programming Languages, a Philosophical Analysis through the case of Prolog

    Programming languages seldom find proper anchorage in philosophy of logic, language and science. What is more, philosophy of language seems to be restricted to natural languages and linguistics, and even philosophy of logic is rarely framed in terms of programming language topics. The logic programming paradigm and Prolog are, thus, the most adequate paradigm and programming language to work on this subject, combining natural language processing and linguistics, logic programming and constriction methodology on both algorithms and procedures, on an overall philosophizing declarative status. Not only this, but the dimension of the Fifth Generation Computer Systems project related to strong AI, wherein Prolog took a major role, and its historical frame in the very crucial dialectic between procedural and declarative paradigms, structuralist and empiricist biases, serves, in exemplar form, to treat philosophy of logic, language and science in the contemporary age directly as well. In recounting Prolog's philosophical, mechanical and algorithmic harbingers, the opportunity is open to various routes. We shall exemplify some here: the mechanical-computational background explored by Pascal, Leibniz, Boole, Jacquard, Babbage and Konrad Zuse, up to the ACE (Alan Turing) and the EDVAC (von Neumann), offering the backbone of computer architecture, and, in parallel, the work of Turing, Church, Gödel, Kleene, von Neumann, Shannon, and others on computability, thoroughly studied in detail, permit us to interpret the evolving realm of programming languages. The path from the lambda calculus to the Algol family, the declarative and procedural split with the C language and Prolog, and the ensuing branching, explosion and further delimitation of programming languages are thereupon inspected so as to relate them to the proper syntax, semantics and philosophical élan of logic programming and Prolog.

    Preface Volume 30, Issue 3

    One of the main areas of research in logic programming is the design and implementation of sequential and parallel (constraint) logic programming systems. This research ranges broadly from the design and specification of novel implementation technology to its actual evaluation in real-life situations. A series of workshops on Implementations of Logic Programming Systems, previously held in Budapest (1993), Ithaca (1994), Portland (1995), Bonn (1996), Port Jefferson (1997), Manchester (1998) and Las Cruces (1999), has provided a forum for ongoing research on the design and implementation of sequential and parallel (constraint) logic programming systems.

    This volume contains a collection of papers presented at the Workshop on Parallelism and Implementation Technology for (Constraint) Logic Programming, held in Las Cruces on December 1st, 1999, in conjunction with ICLP'99. The workshop was sponsored and organised by COMPULOG AMERICAS. The workshop also received support from the Association for Logic Programming and from the Department of Computer Science, New Mexico State University.

    Papers from both academia and industry were invited. Preference was given to the analysis and description of implemented systems (or systems currently under implementation) and their associated techniques, problems found in their development or design, and steps taken towards the solution of these problems. Topics included, but were not limited to:
    • standard and non-standard sequential implementation schemes (e.g., generalization/modification of the WAM, translation to C, etc.);
    • implementation of parallel logic programming systems;
    • balance between compile-time effort and run-time machinery;
    • techniques for the implementation of different declarative programming paradigms based on, or extending, logic programming (e.g., constraint logic programming, concurrent constraint languages, equational-logic languages);
    • performance evaluation of sequential and parallel logic programming systems, both through benchmarking and using real-world applications;
    • other implementation-related issues, such as memory management, register allocation, use of global optimisations, etc.

    We were very fortunate to have so many interesting research papers, ranging over widely different subjects and giving a broad coverage of current research in sequential and parallel implementation of logic programming systems. Papers on sequential logic programming systems focus on varied topics: constraint evaluation, support for extensions to logic programming, and abstract machines for performance evaluation. Papers on parallel logic programming systems also focus on diverse topics, ranging from distributed implementations and garbage collection to optimisations for exploiting and-or parallelism.

    The editors would like to thank all authors who chose to submit their work to this volume, and also for their cooperation in making this document possible. We would also like to thank all referees involved in assessing the papers in this special volume.

    This volume will be published as Volume 30, Issue 3 in the series Electronic Notes in Theoretical Computer Science (ENTCS). This series is published electronically through the facilities of Elsevier Science B.V. and under its auspices. The volumes in the ENTCS series can be accessed at the URL http://www.elsevier.nl/locate/entcs

    March 14, 2000
    Horst Reiche

    Parallelism in declarative languages

    Imperative programming languages were initially built for uniprocessor systems that evolved out of the von Neumann machine model. This model of storage-oriented computation blocks parallelism and increases the cost of parallel program development and porting. Declarative languages, based on mathematical models of computation, seem more suitable for the development of parallel programs. In the first part of this thesis we examine different language families under the declarative paradigm: functional, logic, and constraint languages. Functional languages are based on the abstract model of functions and the λ-calculus. They were initially developed for symbolic computation, but today they are commonly used in numerical analysis and many other application areas. Pure Lisp is a widely known member of this class. Logic languages are based on first-order predicate calculus. Although they were initially developed for theorem proving, fifth-generation operating systems are written in them. Most logic languages are descendants or distant relatives of Prolog. Constraint languages are related to logic languages. In a constraint language you define a program object by placing constraints on its structure and its behavior. They were initially used in graphics applications, but today researchers work on using them in parallel computation. Here we compare and contrast the language classes above, locate advantages and deficiencies, and explain different choices made by language implementors. In the second part of the thesis we describe a front end for CONSUL, a prototype constraint language for programming multiprocessors. The most important features of the front end are compact representation of constraints, type definitions, functional use of relations, and the ability to split programs into multiple files.
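
    As an illustration of the constraint style (shown here in SWI-Prolog's library(clpfd) syntax rather than CONSUL, whose notation the abstract does not give), an object is described by the constraints it must satisfy instead of by a construction procedure:

        :- use_module(library(clpfd)).

        % A three-unit task is "defined" only by constraints on its start and
        % end times; the solver, not the programmer, finds consistent values.
        task(Start, End) :-
            Start in 0..10,
            End #= Start + 3,
            End #=< 10.

        % ?- task(S, E), label([S, E]).   % enumerates admissible schedules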