
    Towards an abstract parallel branch and bound machine

    Many (parallel) branch and bound algorithms look very different from each other at first glance; they exploit, however, the same underlying computational model. This observation can be used to define branch and bound algorithms in terms of a set of basic rules that are applied in a specific (predefined) order. In the sequential case, the specification of Mitten's rules turns out to be sufficient for the development of branch and bound algorithms. In the parallel case, the situation is more complicated: extra parameters such as work distribution and knowledge sharing have to be considered. Here, the implementation of a parallel branch and bound algorithm can be seen as a tuning of these parameters combined with the specification of Mitten's rules. These observations lead to generic systems, where the user provides the specification of the problem to be solved and the system generates a branch and bound algorithm running on a specific architecture. We discuss some proposals that have appeared in the literature and then raise the question of whether the proposed models are flexible enough. We analyze the design decisions to be taken when implementing a parallel branch and bound algorithm, which results in a classification model that we validate by checking whether it captures existing branch and bound implementations. Finally, we return to the issue of the flexibility of existing systems and propose to add an abstract machine model to the generic framework. The model defines a virtual parallel branch and bound machine in terms of which the design decisions can be expressed. We outline some ideas on which the machine may be based and present directions for future work.
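
    To make the rule-based view concrete, the following is a minimal sketch, not the paper's generic system: a sequential branch and bound driver parameterized by user-supplied rules in the spirit of Mitten's rule set. The Rules interface, the minimization setting, and the depth-first selection rule are illustrative assumptions.

```cpp
#include <limits>
#include <vector>

// Minimal sketch of a rule-parameterized sequential branch and bound driver.
// The user supplies the rules; the driver fixes the order in which they are applied.
template <typename Node>
struct Rules {
    std::vector<Node> (*branch)(const Node&);   // branching rule: split a subproblem
    double            (*bound)(const Node&);    // bounding rule: lower bound on the subtree
    bool              (*is_leaf)(const Node&);  // termination test for a subproblem
    double            (*value)(const Node&);    // objective value of a complete solution
};

// Returns the best (minimal) objective value found in the tree rooted at `root`.
template <typename Node>
double branch_and_bound(const Node& root, const Rules<Node>& r) {
    double incumbent = std::numeric_limits<double>::infinity();
    std::vector<Node> pool{root};                 // selection rule: depth-first (LIFO pool)
    while (!pool.empty()) {
        Node n = pool.back();
        pool.pop_back();
        if (r.bound(n) >= incumbent) continue;    // elimination rule: prune dominated subproblems
        if (r.is_leaf(n)) {
            if (r.value(n) < incumbent) incumbent = r.value(n);  // update the incumbent
            continue;
        }
        for (auto& child : r.branch(n))           // branching rule generates the children
            pool.push_back(std::move(child));
    }
    return incumbent;
}
```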

    B-LOG: A branch and bound methodology for the parallel execution of logic programs

    We propose a computational methodology, "B-LOG", which offers the potential for an effective implementation of Logic Programming on a parallel computer. We also propose a weighting scheme to guide the search process through the graph, and we apply the concepts of parallel branch and bound algorithms in order to perform a best-first search using an information-theoretic bound. The concept of a "session" is used to speed up the search process in a succession of similar queries. Within a session, we strongly modify the bounds in a local database, while bounds kept in a global database are weakly modified to provide a better initial condition for other sessions. We also propose an implementation scheme based on a database machine using "semantic paging", and the "B-LOG processor" based on a scoreboard-driven controller.
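
    As an illustration only, and not the actual B-LOG engine, the sketch below expands an OR-tree of resolution alternatives in best-first order using a priority queue keyed by a heuristic weight; the node fields, the weight ordering, and the answer test are assumptions made for this example.

```cpp
#include <queue>
#include <vector>

// Illustrative best-first expansion of an OR-tree of resolution alternatives,
// ordered by a heuristic weight (the kind of bound-guided control B-LOG proposes).
struct OrNode {
    double weight;                                   // guiding bound: lower means more promising
    bool is_answer = false;                          // true when the goal is fully resolved
    std::vector<OrNode> alternatives() const {       // matching clauses / child nodes (stub)
        return {};
    }
};

struct ByWeight {
    bool operator()(const OrNode& a, const OrNode& b) const { return a.weight > b.weight; }
};

// Returns true as soon as the first answer node is popped under best-first order.
bool best_first_solve(const OrNode& root) {
    std::priority_queue<OrNode, std::vector<OrNode>, ByWeight> open;
    open.push(root);
    while (!open.empty()) {
        OrNode n = open.top();                       // most promising node according to the weights
        open.pop();
        if (n.is_answer) return true;
        for (auto& alt : n.alternatives())           // expand: push every alternative resolution
            open.push(std::move(alt));
    }
    return false;
}
```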

    Space-Efficient Parallel Algorithms for Combinatorial Search Problems

    We present space-efficient parallel strategies for two fundamental combinatorial search problems, namely, backtrack search and branch-and-bound, both involving the visit of an $n$-node tree of height $h$ under the assumption that a node can be accessed only through its father or its children. For both problems we propose efficient algorithms that run on a $p$-processor distributed-memory machine. For backtrack search, we give a deterministic algorithm running in $O(n/p + h\log p)$ time, and a Las Vegas algorithm requiring optimal $O(n/p + h)$ time, with high probability. Building on the backtrack search algorithm, we also derive a Las Vegas algorithm for branch-and-bound which runs in $O((n/p + h\log p \log n) h \log^2 n)$ time, with high probability. A remarkable feature of our algorithms is the use of only constant space per processor, which constitutes a significant improvement upon previous algorithms whose space requirements per processor depend on the (possibly huge) tree to be explored. Comment: Extended version of the paper in the Proc. of the 38th International Symposium on Mathematical Foundations of Computer Science (MFCS).
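
    The constant-space claim can be illustrated with the following sketch of a sequential traversal that keeps only the current node and the index of the child it last climbed back from, under the stated access model (a node is reachable only through its father or its children). The Node layout is an assumption for illustration, and the paper's parallel work distribution is not shown.

```cpp
#include <vector>

// Tree node reachable only via father/children links.
struct Node {
    Node* father = nullptr;
    std::vector<Node*> children;
    int index_in_father = 0;            // position of this node among its father's children
};

// Visits every node of the subtree rooted at `root` using O(1) extra space:
// the walker stores only `cur` and the index of the child it just returned from.
template <typename Visit>
void backtrack_search(Node* root, Visit visit) {
    Node* cur = root;
    int from = -1;                      // -1 means we are entering `cur` for the first time
    while (cur != nullptr) {
        if (from == -1) visit(cur);
        int next = from + 1;
        if (next < static_cast<int>(cur->children.size())) {
            cur = cur->children[next];  // descend into the next unexplored child
            from = -1;
        } else {
            if (cur == root) break;     // finished the whole subtree
            from = cur->index_in_father;
            cur = cur->father;          // climb back and resume with the following sibling
        }
    }
}
```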

    A Non-blocking Buddy System for Scalable Memory Allocation on Multi-core Machines

    Common implementations of core memory allocation components handle concurrent allocation/release requests by synchronizing threads via spin-locks. This approach does not scale well with large thread counts, a problem that has been addressed in the literature by introducing layered allocation services or by replicating the core allocators (the bottommost ones within the layered architecture). Both these solutions tend to reduce the pressure of actual concurrent accesses to each individual core allocator. In this article we explore an alternative approach to the scalability of memory allocation/release, which can still be combined with those literature proposals. We present a fully non-blocking buddy system that allows threads to proceed in parallel and commit their allocations/releases unless a conflict materializes while handling the allocator's metadata. Beyond improving scalability and performance, it is resilient to performance degradation in the face of concurrent accesses, independently of the current level of fragmentation of the handled memory blocks.
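
    As a hedged sketch of the lock-free idea, and not the paper's actual metadata layout, the fragment below claims a node of a buddy tree by flipping its status byte with a single compare-and-swap; a thread whose CAS fails simply moves on to another candidate node instead of spinning on a lock. The node numbering and the FREE/BUSY encoding are illustrative assumptions.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <memory>

enum : uint8_t { FREE = 0, BUSY = 1 };

struct BuddyTree {
    std::unique_ptr<std::atomic<uint8_t>[]> state;  // one status byte per tree node

    explicit BuddyTree(std::size_t nodes) : state(new std::atomic<uint8_t>[nodes]) {
        for (std::size_t i = 0; i < nodes; ++i)
            state[i].store(FREE, std::memory_order_relaxed);
    }

    // Returns true iff the calling thread won node `idx`; false signals a conflict
    // with a concurrent allocation, detected by the failed compare-and-swap.
    bool try_claim(std::size_t idx) {
        uint8_t expected = FREE;
        return state[idx].compare_exchange_strong(
            expected, BUSY, std::memory_order_acq_rel, std::memory_order_relaxed);
    }

    // Release is a plain store back to FREE in this simplified sketch; the real
    // allocator also has to propagate occupancy information along the tree.
    void release(std::size_t idx) {
        state[idx].store(FREE, std::memory_order_release);
    }
};
```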

    NBBS: A Non-blocking Buddy System for Multi-core Machines

    Common implementations of core memory allocation components, like the Linux buddy system, handle concurrent allocation/release requests by synchronizing threads via spinlocks. This approach does not scale well with large thread counts, a problem that has been addressed in the literature by introducing layered allocation services or by replicating the core allocators (the bottommost ones within the layered architecture). Both these solutions tend to reduce the pressure of actual concurrent accesses to each individual core allocator. In this article we explore an alternative approach to the scalability of memory allocation/release, which can still be combined with those literature proposals. We present a fully non-blocking buddy system in which threads performing concurrent allocations/releases do not undergo any spinlock-based synchronization. Our solution allows threads to proceed in parallel and commit their allocations/releases unless a conflict materializes while handling the allocator's metadata. Conflict detection relies on conventional atomic machine instructions in the Read-Modify-Write (RMW) class. Beyond improving scalability and performance, our solution also avoids wasting clock cycles on spin-lock operations by threads that could in principle carry out their memory allocation/release in full concurrency. Thus, it is resilient to performance degradation in the face of concurrent accesses, independently of the current level of fragmentation of the handled memory blocks.
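
    The RMW-based conflict detection can be illustrated with the following fragment, an assumption-laden sketch rather than NBBS's real metadata layout: a thread computes its update from a snapshot of a shared occupancy word and publishes it with a compare-exchange, retrying from the refreshed snapshot if a concurrent thread changed the word first, so no spin-lock is ever taken.

```cpp
#include <atomic>
#include <cstdint>

// Set the bits in `mask` (claim those buddies) only if they are all currently clear.
// Returns true on success, false if a concurrent allocation already owns one of them.
bool claim_bits(std::atomic<uint64_t>& occupancy, uint64_t mask) {
    uint64_t snapshot = occupancy.load(std::memory_order_acquire);
    while ((snapshot & mask) == 0) {                       // all requested buddies look free
        if (occupancy.compare_exchange_weak(snapshot, snapshot | mask,
                                            std::memory_order_acq_rel,
                                            std::memory_order_acquire))
            return true;                                   // update published atomically
        // CAS failed: `snapshot` now holds the value written by the conflicting
        // thread, so the loop condition re-checks availability before retrying.
    }
    return false;                                          // conflict: someone else holds a buddy
}
```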