18 research outputs found

    Greedy Graph Colouring is a Misleading Heuristic

    State-of-the-art maximum clique algorithms use a greedy graph colouring as a bound. We show that greedy graph colouring can be misleading, which has implications for parallel branch and bound.
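
    As an illustration of how such a colouring bound works (a minimal sketch in Python, not the algorithms studied in the paper): any proper colouring of a candidate vertex set with k colours proves that no clique of more than k vertices survives inside it, so the number of colours a greedy colouring uses can serve as an upper bound during branch and bound.

        # Minimal sketch: greedy colouring of a candidate set as an upper bound
        # on the largest clique it can contain (names here are illustrative).

        def greedy_colouring_bound(adjacency, candidates):
            """Return the number of colours a greedy colouring assigns to `candidates`.

            `adjacency` maps each vertex to the set of its neighbours; the colour
            count is an upper bound on the largest clique inside `candidates`.
            """
            colour_classes = []  # each class is a set of pairwise non-adjacent vertices
            for v in candidates:
                for cls in colour_classes:
                    if not (adjacency[v] & cls):   # v conflicts with nobody in this class
                        cls.add(v)
                        break
                else:
                    colour_classes.append({v})     # open a new colour class
            return len(colour_classes)

        if __name__ == "__main__":
            # Toy graph: a triangle {0, 1, 2} plus a pendant vertex 3.
            adjacency = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
            print(greedy_colouring_bound(adjacency, [0, 1, 2, 3]))  # 3: matches the triangle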

    Multi-threading a state-of-the-art maximum clique algorithm

    We present a threaded parallel adaptation of a state-of-the-art maximum clique algorithm for dense, computationally challenging graphs. We show that near-linear speedups are achievable in practice and that superlinear speedups are common. We include results for several previously unsolved benchmark problems.

    Partially ordered distributed computations on asynchronous point-to-point networks

    Asynchronous executions of a distributed algorithm differ from each other due to nondeterminism in the order in which exchanged messages are handled. In many situations of interest, the asynchronous executions induced by restricting this nondeterminism are more efficient, in an application-specific sense, than the others. In this work, we define partially ordered executions of a distributed algorithm as executions satisfying certain restricted orderings of their actions, in two different frameworks: the so-called event-driven and pulse-driven computations. The aim of these restrictions is to characterize asynchronous executions that are likely to be more efficient for some important classes of applications. An asynchronous algorithm that ensures the occurrence of partially ordered executions is given for each case. Two applications that we believe may benefit from the restricted nondeterminism are backtrack search, in the event-driven case, and iterative algorithms for systems of linear equations, in the pulse-driven case.
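
    A minimal sketch of the two frameworks named above (my own toy, not the paper's formal model): an event-driven node reacts to messages in whatever order they arrive, whereas a pulse-driven node consumes exactly the values produced in the previous pulse before the next pulse begins. The toy update rule stands in for the iterative computations the abstract mentions; all names are made up.

        def toy_update(own, neighbour):
            # stand-in local rule for an iterative computation (one relaxation step)
            return 0.5 * own + 0.5 * neighbour

        def pulse_driven(x0, x1, pulses):
            """Synchronised rounds: both nodes read the values from the previous pulse."""
            for _ in range(pulses):
                x0, x1 = toy_update(x0, x1), toy_update(x1, x0)
            return x0, x1

        def event_driven(x0, x1, schedule):
            """Unrestricted order: `schedule` says which node reacts to the other's latest value."""
            for node in schedule:
                if node == 0:
                    x0 = toy_update(x0, x1)
                else:
                    x1 = toy_update(x1, x0)
            return x0, x1

        if __name__ == "__main__":
            print(pulse_driven(0.0, 1.0, pulses=4))      # (0.5, 0.5): agreement after the first pulse
            print(event_driven(0.0, 1.0, [0, 0, 0, 1]))  # (0.875, 0.9375): an unlucky order lags behind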

    Asynchronous parallel branch and bound and anomalies

    The parallel execution of branch and bound algorithms can result in seemingly unreasonable speedups or slowdowns; the speedup is almost never equal to the increase in computing power. For synchronous parallel branch and bound, these effects have been studied extensively. For asynchronous parallelizations, little is known. In this paper, we derive sufficient conditions to guarantee that an asynchronous parallel branch and bound algorithm (with elimination by lower bound tests and dominance) will be at least as fast as its sequential counterpart. The technique used for obtaining the results seems to be more generally applicable. The essential observations are that, under certain conditions, the parallel algorithm will always work on at least one node that the sequential algorithm branches from, and that the parallel algorithm, after elimination of all such nodes, is able to conclude that the optimal solution has been found. Finally, some of the theoretical results are connected with a few practical experiments.
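
    As a toy illustration of why such conditions matter (my own sketch, not the paper's model): the same branch and bound tree, with elimination by lower bound tests, expands a different number of nodes depending purely on the order in which open nodes are selected, and changing that order is exactly what an asynchronous parallelisation does.

        # A tiny explicit minimisation tree: node -> (lower bound, children).
        # Leaves (no children) are feasible solutions whose value equals their bound.
        TREE = {
            "root": (0, ["a", "b"]),
            "a":    (5, ["a1", "a2"]),
            "b":    (2, ["b1", "b2"]),
            "a1":   (9, []),
            "a2":   (7, []),
            "b1":   (3, []),
            "b2":   (6, []),
        }

        def branch_and_bound(select):
            """Run B&B on TREE, eliminating nodes by lower bound tests; count expansions."""
            incumbent = float("inf")
            open_nodes = ["root"]
            expanded = 0
            while open_nodes:
                node = select(open_nodes)
                bound, children = TREE[node]
                if bound >= incumbent:          # elimination by lower bound test
                    continue
                expanded += 1
                if not children:                # leaf: a feasible solution
                    incumbent = min(incumbent, bound)
                else:
                    open_nodes.extend(children)
            return incumbent, expanded

        depth_first = lambda nodes: nodes.pop()      # LIFO selection
        breadth_first = lambda nodes: nodes.pop(0)   # FIFO selection

        print(branch_and_bound(depth_first))    # (3, 4): optimum 3 after expanding 4 nodes
        print(branch_and_bound(breadth_first))  # (3, 6): same optimum, but 6 nodes expanded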

    Study on performance evaluation for multiprocessor scheduling algorithms

    Degree system: new; Ministry of Education report number: Kō 1769; Degree type: Doctor of Engineering; Date conferred: 2003/3/6; Waseda University degree number: Shin 353

    On parallel Branch and Bound frameworks for Global Optimization

    Branch and Bound (B&B) algorithms are known to exhibit an irregularity of the search tree. Therefore, developing a parallel approach for this kind of algorithms is a challenge. The efficiency of a B&B algorithm depends on the chosen Branching, Bounding, Selection, Rejection, and Termination rules. The question we investigate is how the chosen platform consisting of programming language, used libraries, or skeletons influences programming effort and algorithm performance. Selection rule and data management structures are usually hidden to programmers for frameworks with a high level of abstraction, as well as the load balancing strategy, when the algorithm is run in parallel. We investigate the question by implementing a multidimensional Global Optimization B&B algorithm with the help of three frameworks with a different level of abstraction (from more to less): Bobpp, Threading Building Blocks (TBB), and a customized Pthread implementation. The following has been found. The Bobpp implementation is easy to code, but exhibits the poorest scalability. On the contrast, the TBB and Pthread implementations scale almost linearly on the used platform. The TBB approach shows a slightly better productivity

    Towards an abstract parallel branch and bound machine

    Many (parallel) branch and bound algorithms look very different from each other at first glance. They exploit, however, the same underlying computational model. This phenomenon can be used to define branch and bound algorithms in terms of a set of basic rules that are applied in a specific (predefined) order. In the sequential case, the specification of Mitten's rules turns out to be sufficient for the development of branch and bound algorithms. In the parallel case, the situation is a bit more complicated: we have to consider extra parameters such as work distribution and knowledge sharing. Here, the implementation of a parallel branch and bound algorithm can be seen as a tuning of these parameters combined with the specification of Mitten's rules. These observations lead to generic systems, where the user provides the specification of the problem to be solved, and the system generates a branch and bound algorithm running on a specific architecture. We discuss some proposals that have appeared in the literature. Next, we raise the question of whether the proposed models are flexible enough. We analyze the design decisions to be taken when implementing a parallel branch and bound algorithm. This results in a classification model, which is validated by checking whether it captures existing branch and bound implementations. Finally, we return to the issue of flexibility of existing systems, and propose to add an abstract machine model to the generic framework. The model defines a virtual parallel branch and bound machine within which the design decisions can be expressed. We outline some ideas on which the machine may be based, and present directions for future work.
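
    As a sketch of the "basic rules applied in a predefined order" view (my own illustration, not any of the systems discussed): a sequential minimisation B&B skeleton fully determined by user-supplied branching, bounding, selection, and elimination rules, applied to a made-up 0/1 toy problem.

        def branch_and_bound(root, branch, bound, select, is_solution, value):
            """Generic minimisation B&B driven entirely by the supplied rules."""
            incumbent, incumbent_value = None, float("inf")
            open_nodes = [root]
            while open_nodes:
                node = select(open_nodes)                 # selection rule
                if bound(node) >= incumbent_value:        # elimination by lower bound test
                    continue
                if is_solution(node):
                    if value(node) < incumbent_value:     # improvement: new incumbent
                        incumbent, incumbent_value = node, value(node)
                else:
                    open_nodes.extend(branch(node))       # branching rule
            return incumbent, incumbent_value

        if __name__ == "__main__":
            # Toy problem: pick x in {0,1}^3 minimising 4*x0 + 2*x1 + 3*x2 with x0 + x1 + x2 >= 2.
            # A node is a partial assignment: a tuple of already-fixed bits.
            costs, need = [4, 2, 3], 2

            def branch(node):
                return [node + (0,), node + (1,)]

            def bound(node):
                if sum(node) + (len(costs) - len(node)) < need:   # can never become feasible
                    return float("inf")
                return sum(c for c, x in zip(costs, node) if x)   # optimistic: free bits cost 0

            is_solution = lambda node: len(node) == len(costs)
            value = lambda node: sum(c for c, x in zip(costs, node) if x)
            select = lambda nodes: nodes.pop()                    # depth-first selection rule

            print(branch_and_bound((), branch, bound, select, is_solution, value))  # ((0, 1, 1), 5)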

    Sequential and parallel solution-biased search for subgraph algorithms

    Funding: This work was supported by the Engineering and Physical Sciences Research Council (grant numbers EP/P026842/1, EP/M508056/1, and EP/N007565).
    The current state of the art in subgraph isomorphism solving involves using degree as a value-ordering heuristic to direct backtracking search. Such a search makes a heavy commitment to the first branching choice, which is often incorrect. To mitigate this, we introduce and evaluate a new approach, which we call “solution-biased search”. By combining a slightly-random value-ordering heuristic, rapid restarts, and nogood recording, we design an algorithm which instead uses degree to direct the proportion of search effort spent in different subproblems. This increases performance by two orders of magnitude on satisfiable instances, whilst not affecting performance on unsatisfiable instances. This algorithm can also be parallelised in a very simple but effective way: across both satisfiable and unsatisfiable instances, we get a further speedup of over thirty from thirty-six cores, and over one hundred from ten distributed-memory hosts. Finally, we show that solution-biased search is also suitable for optimisation problems, by using it to improve two maximum common induced subgraph algorithms.
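
    A heavily simplified sketch of the ingredients described above (a degree-directed but slightly random value ordering plus rapid restarts under a growing failure budget), written for a toy clique-finding task rather than subgraph isomorphism; the nogood recording that keeps restarts complete in the paper's algorithm is omitted to stay short, and every name here is made up.

        import random

        def find_clique(adjacency, k, seed=0):
            """Look for a clique of `k` vertices using biased random ordering and restarts."""
            rng = random.Random(seed)

            def biased_order(vertices):
                # degree-directed but slightly random: high-degree vertices tend to come first
                return sorted(vertices, key=lambda v: -len(adjacency[v]) + rng.uniform(0.0, 1.0))

            def search(clique, candidates, budget):
                if len(clique) == k:
                    return clique
                for v in biased_order(candidates):
                    if budget[0] <= 0:                    # budget exhausted: give up this run
                        return None
                    budget[0] -= 1
                    found = search(clique + [v], candidates & adjacency[v], budget)
                    if found is not None:
                        return found
                return None

            budget = 100
            while True:                                   # rapid restarts with a growing budget
                remaining = [budget]
                found = search([], set(adjacency), remaining)
                if found is not None:
                    return found
                if remaining[0] > 0:                      # tree exhausted, not cut off: no clique
                    return None
                budget *= 2

        if __name__ == "__main__":
            # Toy graph: vertices 0-3 form a 4-clique; 4 and 5 are extra vertices outside it.
            adjacency = {
                0: {1, 2, 3, 4}, 1: {0, 2, 3, 5}, 2: {0, 1, 3},
                3: {0, 1, 2}, 4: {0, 5}, 5: {1, 4},
            }
            print(find_clique(adjacency, 4))              # a clique such as [0, 1, 2, 3]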

    Parallel computing in combinatorial optimization
