
    Parallel consistency in constraint programming

    Program parallelization becomes increasingly important as new multi-core architectures provide ways to improve performance. One of the greatest challenges of this development lies in programming parallel applications. Using declarative languages, such as constraint programming, can make the transition to parallelism easier by hiding the parallelization details in a framework. Automatic parallelization in constraint programming has previously focused on data parallelism. In this paper, we look at task parallelism, specifically the case of parallel consistency. We have developed two models of parallel consistency, one that shares intermediate results and one that does not, and we evaluate in our experiments which model is better. Our results show that parallelizing consistency can provide the programmer with robust scalability for regular problems with global constraints.
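    A minimal sketch of task-parallel consistency in the style described above, assuming a simple domain-store representation: each worker filters one group of constraints against a private copy of the variable domains (the model without shared intermediate results), and the prunings are intersected afterwards. The LessThan propagator and the group/merge helpers are illustrative assumptions, not the paper's actual framework.

    from concurrent.futures import ThreadPoolExecutor

    class LessThan:
        """Toy propagator for x < y, used only to exercise the sketch."""
        def __init__(self, x, y):
            self.x, self.y = x, y

        def prune(self, dom):
            before = len(dom[self.x]) + len(dom[self.y])
            dom[self.x] = {a for a in dom[self.x] if a < max(dom[self.y], default=a)}
            dom[self.y] = {b for b in dom[self.y] if b > min(dom[self.x], default=b)}
            return len(dom[self.x]) + len(dom[self.y]) != before

    def filter_group(constraints, domains):
        """Run one group's filtering to a local fixpoint on a private
        copy of the domains and return the pruned copy."""
        local = {v: set(d) for v, d in domains.items()}
        changed = True
        while changed:
            changed = False
            for c in constraints:
                changed |= c.prune(local)
        return local

    def parallel_consistency(groups, domains, workers=4):
        """One global round: filter all groups in parallel without sharing
        intermediate results, then keep only values every group kept."""
        with ThreadPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(lambda g: filter_group(g, domains), groups))
        merged = {v: set(d) for v, d in domains.items()}
        for local in results:
            for v in merged:
                merged[v] &= local[v]
        return merged

    doms = {"x": set(range(5)), "y": set(range(5)), "z": set(range(5))}
    print(parallel_consistency([[LessThan("x", "y")], [LessThan("y", "z")]], doms))
    # {'x': {0, 1, 2, 3}, 'y': {1, 2, 3}, 'z': {1, 2, 3, 4}}

    The model that shares intermediate results would instead let the groups filter a common store, with synchronization on the domain updates; the sketch keeps the simpler non-sharing model.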

    Combining parallel search and parallel consistency in constraint programming

    Program parallelization becomes increasingly important as new multi-core architectures provide ways to improve performance. One of the greatest challenges of this development lies in programming parallel applications. Declarative languages, such as constraint programming, can make the transition to parallelism easier by hiding the parallelization details in a framework. Automatic parallelization in constraint programming has mostly focused on parallel search. While search and consistency are intrinsically linked, the consistency part of the solving process is often the more time-consuming one. We have previously looked at parallel consistency and found it to be quite promising. In this paper, we investigate how to combine parallel search with parallel consistency, and we evaluate which problems are suitable and which are not. Our results show that parallelizing the entire solving process in constraint programming is a major challenge, as parallel search and parallel consistency typically suit different types of problems.
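    A rough sketch of how the two levels of parallelism discussed above can be nested: the search tree is split into one subproblem per value of a branching variable and the subproblems are explored by parallel workers, while the propagation call inside each node is where a parallel consistency routine (such as the one sketched after the previous abstract) would be plugged in. The constraint objects with a prune() method and all other names are illustrative assumptions.

    from concurrent.futures import ThreadPoolExecutor

    def propagate(constraints, domains):
        """Sequential fixpoint propagation; in the combined scheme this is
        the call a parallel consistency step would replace."""
        changed = True
        while changed:
            changed = False
            for c in constraints:
                changed |= c.prune(domains)
        return domains

    def search(constraints, domains):
        """Plain depth-first search with propagation at every node."""
        domains = propagate(constraints, {v: set(d) for v, d in domains.items()})
        if any(not d for d in domains.values()):
            return None                                   # dead end
        if all(len(d) == 1 for d in domains.values()):
            return {v: next(iter(d)) for v, d in domains.items()}
        var = min((v for v in domains if len(domains[v]) > 1),
                  key=lambda v: len(domains[v]))
        for value in sorted(domains[var]):
            result = search(constraints, dict(domains, **{var: {value}}))
            if result is not None:
                return result
        return None

    def parallel_search(constraints, domains, branch_var, workers=4):
        """Parallel search: one worker per value of the first branching variable."""
        subs = [dict(domains, **{branch_var: {value}})
                for value in sorted(domains[branch_var])]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for result in pool.map(lambda s: search(constraints, s), subs):
                if result is not None:
                    return result
        return None

    Broadly, and in line with the abstracts above, problems with large global constraints tend to reward the parallel consistency level, while problems with large search trees tend to reward the parallel search level.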

    Parallelism in Constraint Programming

    Writing efficient parallel programs is the biggest challenge facing the software industry for the foreseeable future. We are currently in a time when parallel computers are the norm, not the exception. Soon, parallel processors will be standard even in cell phones. Without drastic changes in hardware development, all software must be parallelized to its fullest extent. Parallelism can increase performance and reduce power consumption at the same time. Many programs will execute faster on a dual-core processor than on a single-core processor running at twice the speed. Halving the speed of a processor can reduce its power consumption by up to a factor of four. Hence, parallelism gives efficient programs more performance per unit of power. In order to make use of parallel hardware, we need to overcome the difficulties of parallel programming. For many programmers, it is easier to learn a handful of small domain-specific programming languages than to learn efficient parallel programming. The frameworks for these languages can then parallelize the program automatically; automatically parallelizing traditional programs is usually much more difficult. In this thesis, we study and present parallelism in constraint programming (CP). We have developed the first constraint framework that automatically parallelizes both the consistency and the search parts of the solving process, which allows programmers to avoid the difficult issues of parallel programming. We also study distributed CP with independent agents and propose solutions to this problem. Our results show that automatic parallelism in CP can provide very good performance. Our parallel consistency scales very well for problems with many large constraints, and we also manage to combine parallel consistency and parallel search with a performance increase. The communication and load-balancing schemes we developed increase the scalability of parallel search. Our model for distributed CP is orders of magnitude faster than traditional approaches and is, as far as we know, the first to solve standard benchmark scheduling problems.
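    A back-of-the-envelope version of the power argument above, under the common (and here merely assumed) dynamic-power scaling for CMOS and one possible voltage/frequency pairing:

    P_{\text{dyn}} \approx C V^2 f, \qquad
    P_{\text{half}} \approx C \left(\frac{V}{\sqrt{2}}\right)^{2} \frac{f}{2}
                   = \frac{C V^2 f}{4} = \frac{P_{\text{dyn}}}{4}

    That is, if halving the clock frequency also allows the supply voltage to drop to roughly V/sqrt(2), power falls by about the factor of four quoted above, and two such cores together can match a faster single core's throughput at a fraction of its power. This is an illustrative scaling argument, not a measurement from the thesis.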

    Deterministic Consistency: A Programming Model for Shared Memory Parallelism

    The difficulty of developing reliable parallel software is generating interest in deterministic environments, where a given program and input can yield only one possible result. Languages or type systems can enforce determinism in new code, and runtime systems can impose synthetic schedules on legacy parallel code. To parallelize existing serial code, however, we would like a programming model that is naturally deterministic without language restrictions or artificial scheduling. We propose "deterministic consistency" (DC), a parallel programming model as easy to understand as the "parallel assignment" construct in sequential languages such as Perl and JavaScript, where concurrent threads always read their inputs before writing shared outputs. DC supports common data- and task-parallel synchronization abstractions such as fork/join and barriers, as well as non-hierarchical structures such as producer/consumer pipelines and futures. A preliminary prototype suggests that software-only implementations of DC can run applications written for popular parallel environments such as OpenMP with low (<10%) overhead for some applications.
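    A minimal sketch of the "read inputs before writing shared outputs" discipline described above, assuming a simple fork/join surface: forked tasks see an immutable snapshot of the shared state and collect their writes privately, and the writes only become visible at the join, applied in task order, so the result does not depend on scheduling. The helper names are assumptions for illustration, not the paper's API.

    from concurrent.futures import ThreadPoolExecutor
    from types import MappingProxyType

    def det_fork_join(shared, tasks, workers=4):
        """Run tasks against a read-only snapshot of the shared state and
        merge their write sets deterministically (in task order) at the join."""
        snapshot = MappingProxyType(dict(shared))
        with ThreadPoolExecutor(max_workers=workers) as pool:
            write_sets = list(pool.map(lambda t: t(snapshot), tasks))
        merged = dict(shared)
        for writes in write_sets:          # same order regardless of scheduling
            merged.update(writes)
        return merged

    double_x = lambda s: {"x": s["x"] * 2}   # reads the snapshot, writes privately
    incr_y   = lambda s: {"y": s["y"] + 1}
    print(det_fork_join({"x": 3, "y": 10}, [double_x, incr_y]))   # {'x': 6, 'y': 11}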

    Timed Soft Concurrent Constraint Programs: An Interleaved and a Parallel Approach

    We propose a timed and soft extension of Concurrent Constraint Programming. The time extension is based on the hypothesis of bounded asynchrony: the computation takes a bounded period of time and is measured by a discrete global clock. Action prefixing is then considered the syntactic marker that distinguishes one time instant from the next. Supported by soft constraints instead of crisp ones, tell and ask agents are now equipped with a preference (or consistency) threshold which is used to determine their success or suspension. In the paper we provide a language to describe the agents' behavior, together with its operational and denotational semantics, for which we also prove compositionality and correctness properties. After presenting a semantics using maximal parallelism of actions, we also describe a version for their interleaving on a single processor (with maximal parallelism for time elapsing). Coordinating agents that need to take decisions on both preference values and time events may benefit from this language. To appear in Theory and Practice of Logic Programming (TPLP).
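    A toy sketch of the thresholded tell/ask behaviour described above, using the fuzzy semiring ([0,1], max, min) for soft constraints and an integer counter for the discrete global clock; collapsing the store to a single consistency level is a deliberate simplification, not the paper's semantics.

    class SoftStore:
        def __init__(self):
            self.level = 1.0    # consistency level of the (initially empty) store
            self.clock = 0      # discrete global time

        def tell(self, constraint_level, threshold):
            """Add a soft constraint; succeed only if the combined level
            stays at or above the agent's preference threshold."""
            combined = min(self.level, constraint_level)   # fuzzy 'times' is min
            if combined >= threshold:
                self.level = combined
                return "success"
            return "suspend"

        def ask(self, threshold):
            """Succeed if the store is consistent enough for this agent."""
            return "success" if self.level >= threshold else "suspend"

        def tick(self):
            """Action prefixing: move to the next time instant."""
            self.clock += 1

    store = SoftStore()
    print(store.tell(0.8, threshold=0.5))   # success: the store's level becomes 0.8
    store.tick()
    print(store.ask(threshold=0.9))         # suspend: 0.8 is below the threshold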

    Independence in CLP Languages

    Studying independence of goals has proven very useful in the context of logic programming. In particular, it has provided a formal basis for powerful automatic parallelization tools, since independence ensures that two goals may be evaluated in parallel while preserving correctness and efficiency. We extend the concept of independence to constraint logic programs (CLP) and prove that it also ensures the correctness and efficiency of the parallel evaluation of independent goals. Independence for CLP languages is more complex than for logic programming, as search space preservation is necessary but no longer sufficient for ensuring correctness and efficiency. Two additional issues arise. The first is that the cost of constraint solving may depend upon the order in which constraints are encountered. The second is the need to handle dynamic scheduling. We clarify these issues by proposing various types of search independence and constraint solver independence, and show how they can be combined to allow different optimizations, from parallelism to intelligent backtracking. Sufficient conditions for independence which can be evaluated "a priori" at run-time are also proposed. Our study also yields new insights into independence in logic programming languages. In particular, we show that search space preservation is not only a sufficient but also a necessary condition for ensuring the correctness and efficiency of parallel execution.
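    A small sketch of the simplest notion of independence mentioned above: two goals that share no unbound variables in the current store can be evaluated in parallel without affecting each other's search space. Modelling a goal as just the set of variable names it touches is an illustrative simplification, not the paper's formal definition.

    def free_vars(goal_vars, store):
        """Variables of a goal that are still unbound in the store."""
        return {v for v in goal_vars if v not in store}

    def independent(goal_a, goal_b, store):
        """Strict independence: the goals share no unbound variables."""
        return not (free_vars(goal_a, store) & free_vars(goal_b, store))

    store = {"X": 3}   # X is already bound to 3
    print(independent({"X", "Y"}, {"X", "Z"}, store))   # True: only X is shared, and it is bound
    print(independent({"Y", "W"}, {"W", "Z"}, store))   # False: W is shared and unbound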

    Service discovery and negotiation with COWS

    To provide formal foundations to current (web) services technologies, we put forward using COWS, a process calculus for specifying, combining and analysing services, as a uniform formalism for modelling all the relevant phases of the life cycle of service-oriented applications, such as publication, discovery, negotiation, deployment and execution. In this paper, we show that constraints and operations on them can be smoothly incorporated in COWS, and propose a disciplined way to model multisets of constraints and to manipulate them through appropriate interaction protocols. We thereby demonstrate that QoS requirement specifications, SLA achievements, and the phases of dynamic service discovery and negotiation can also be comfortably modelled in COWS. We illustrate our approach through a scenario for a service-based web hosting provider.