213 research outputs found

    Greedy Graph Colouring is a Misleading Heuristic

    State-of-the-art maximum clique algorithms use a greedy graph colouring as a bound. We show that greedy graph colouring can be misleading, which has implications for parallel branch and bound.
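
    The colouring bound rests on the fact that the vertices of a clique are pairwise adjacent and so must all receive distinct colours; the number of colours a greedy colouring uses on a candidate set is therefore an upper bound on the largest clique inside it. A minimal sketch of this bound, assuming a dict-of-sets adjacency representation (function and variable names are illustrative, not taken from the paper):

        def greedy_colouring_bound(vertices, adj):
            """Greedily colour `vertices`; the number of colours used bounds the
            size of any clique among them, since a clique's vertices must all
            receive distinct colours."""
            colour_classes = []  # each class is a set of mutually non-adjacent vertices
            for v in vertices:
                for cls in colour_classes:
                    if not any(u in adj[v] for u in cls):  # v is compatible with this class
                        cls.add(v)
                        break
                else:
                    colour_classes.append({v})
            return len(colour_classes)

        # Example: a triangle (0, 1, 2) plus a pendant vertex 3; the bound is 3.
        adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
        print(greedy_colouring_bound([0, 1, 2, 3], adj))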

    Learning Combinatorial Node Labeling Algorithms

    We present a graph neural network that learns graph coloring heuristics using reinforcement learning. Our learned deterministic heuristics give better solutions than classical degree-based greedy heuristics and take only seconds to evaluate on graphs with tens of thousands of vertices. Because our approach is based on policy gradients, it also learns a probabilistic policy. These probabilistic policies outperform all greedy coloring baselines and a machine learning baseline. Our approach generalizes several previous machine-learning frameworks that were applied to problems such as minimum vertex cover, and we also demonstrate that it outperforms two greedy heuristics on minimum vertex cover.
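
    For reference, the degree-based greedy baseline this kind of work is usually compared against is the largest-degree-first rule: colour vertices in order of decreasing degree, giving each the smallest colour not used by its neighbours. A minimal sketch, assuming a dict-of-sets adjacency representation (names are illustrative and not taken from the paper):

        def largest_first_greedy_colouring(adj):
            """Visit vertices in order of decreasing degree and give each the
            smallest colour unused by its neighbours; returns vertex -> colour."""
            order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
            colour = {}
            for v in order:
                used = {colour[u] for u in adj[v] if u in colour}
                c = 0
                while c in used:
                    c += 1
                colour[v] = c
            return colour

        # A 5-cycle needs 3 colours; the greedy baseline finds that here.
        adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
        print(max(largest_first_greedy_colouring(adj).values()) + 1)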

    Examination scheduling using the ant system.

    This work is concerned with heuristic approaches to examination timetabling. It is demonstrated that a relatively new evolutionary method, the Ant System, can be the basis of a successful two-phase solution method. The first phase exploits ant feedback both to produce large volumes of feasible timetables and to optimise secondary objectives. The second phase acts as a repair facility where solution quality is improved further while maintaining feasibility. This is accomplished without increasing computational effort to unrealistic levels. The work builds on an existing implementation for the graph colouring problem, the natural model for examination scheduling. It is demonstrated that by adjusting the graph model to accommodate several side constraints, as well as incorporating enhancement techniques within the algorithm itself, the Ant System becomes very effective at producing feasible timetables. The enhancements include a diversification function, new reward functions and trail replenishment tactics. It is observed that the achievement of second-order objectives can be enhanced through a variety of means. A modified elitist strategy (ERF) significantly improves the performance of the Ant System due to the extra emphasis on second-order feedback. It is also shown that through the incorporation of the ERF, trail limits and, in particular, 19th-century evolutionary theory, the area of the solution space explored by the ants during the infancy of the search can be reduced, while a good level of exploration is maintained as the search matures. This balance between exploration and exploitation is the main determinant of solution quality. The use of a repair facility, as is common practice with evolutionary algorithms, encourages fitter solutions. The interaction between Lamarckian evolution and searching in an extended neighbourhood through the graph-theoretic concept of Kempe chains leads to better overall solutions.
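
    The Kempe chain move mentioned above can be sketched as follows, assuming an adjacency dict and a vertex-to-colour map (in timetabling terms, exams and timeslots); this is a generic illustration rather than the thesis implementation, and the names are illustrative:

        from collections import deque

        def kempe_chain_swap(colour, adj, v, other):
            """Take the connected component containing v in the subgraph induced
            by vertices coloured colour[v] or `other`, and swap the two colours
            inside it."""
            a, b = colour[v], other
            chain, queue = {v}, deque([v])
            while queue:
                u = queue.popleft()
                for w in adj[u]:
                    if w not in chain and colour[w] in (a, b):
                        chain.add(w)
                        queue.append(w)
            for u in chain:
                colour[u] = b if colour[u] == a else a
            return chain

    Any neighbour of the chain that lies outside it must carry some third colour (otherwise it would have been pulled into the chain), so the swap keeps the colouring proper, which is what makes the move safe during the repair phase.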

    Solving hard subgraph problems in parallel

    This thesis improves the state of the art in exact, practical algorithms for finding subgraphs. We study maximum clique, subgraph isomorphism, and maximum common subgraph problems. These are widely applicable: within computing science, subgraph problems arise in document clustering, computer vision, the design of communication protocols, model checking, compiler code generation, malware detection, cryptography, and robotics; beyond, applications occur in biochemistry, electrical engineering, mathematics, law enforcement, fraud detection, fault diagnosis, manufacturing, and sociology. We therefore consider both the "pure" forms of these problems and variants with labels and other domain-specific constraints. Although subgraph-finding should theoretically be hard, the constraint-based search algorithms we discuss can easily solve real-world instances involving graphs with thousands of vertices and millions of edges. We therefore ask: is it possible to generate "really hard" instances for these problems, and if so, what can we learn? By extending research into combinatorial phase transition phenomena, we develop a better understanding of branching heuristics, as well as highlighting a serious flaw in the design of graph database systems. This thesis also demonstrates how to exploit two of the kinds of parallelism offered by current computer hardware. Bit parallelism allows us to carry out operations on whole sets of vertices in a single instruction; this is largely routine. Thread parallelism, to make use of the multiple cores offered by all modern processors, is more complex. We suggest three desirable performance characteristics that we would like when introducing thread parallelism: lack of risk (parallel cannot be exponentially slower than sequential), scalability (adding more processing cores cannot make runtimes worse), and reproducibility (the same instance on the same hardware will take roughly the same time every time it is run). We then detail the difficulties in guaranteeing these characteristics when using modern algorithmic techniques. Besides ensuring that parallelism cannot make things worse, we also increase the likelihood of it making things better. We compare randomised work stealing to new tailored strategies, and perform experiments to identify the factors contributing to good speedups. We show that whilst load balancing is difficult, the primary factor influencing the results is the interaction between branching heuristics and parallelism. By using parallelism to explicitly offset the commitment made to weak early branching choices, we obtain parallel subgraph solvers which are substantially and consistently better than the best sequential algorithms.
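
    The bit-parallel idea can be illustrated by representing vertex sets as machine words (here, Python integers), so that intersecting the candidate set with a vertex's neighbourhood is a single & operation. The sketch below uses a trivial size bound rather than the colouring bound, and omits the thread parallelism the thesis develops; all names are illustrative:

        def expand(clique_size, candidates, nbr, best):
            """Bit-parallel branch-and-bound skeleton for maximum clique.
            `candidates` and `nbr[v]` are bitsets stored as integers; `best`
            is a one-element list holding the best clique size found so far."""
            if candidates == 0:
                best[0] = max(best[0], clique_size)
                return
            while candidates:
                if clique_size + bin(candidates).count("1") <= best[0]:
                    return                          # cannot beat the incumbent
                v = candidates.bit_length() - 1     # pick the highest-numbered candidate
                candidates &= ~(1 << v)             # remove v from further consideration
                expand(clique_size + 1, candidates & nbr[v], nbr, best)

        # Triangle on vertices 0, 1, 2 plus an isolated vertex 3: maximum clique size 3.
        nbr = [0b0110, 0b0101, 0b0011, 0b0000]
        best = [0]
        expand(0, 0b1111, nbr, best)
        print(best[0])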

    Distributed constraint satisfaction for coordinating and integrating a large-scale, heterogeneous enterprise

    Market forces are continuously driving public and private organisations towards higher productivity, shorter process and production times, and fewer labour hours. To cope with these changes, organisations are adopting new organisational models of coordination and cooperation that increase their flexibility, consistency, efficiency, productivity and profit margins. In this thesis an organisational model of coordination and cooperation is examined using a real-life example: the technical integration of a distributed large-scale project of an international physics collaboration. The distributed resource-constrained project scheduling problem is modelled and solved with the methods of distributed constraint satisfaction. A distributed local search method, the distributed breakout algorithm (DisBO), is used as the basis for the coordination scheme. The efficiency of the local search method is improved by extending it with an incremental problem-solving scheme with variable ordering. The scheme is implemented both as a central algorithm, the incremental breakout algorithm (IncBO), and as a distributed algorithm, the distributed incremental breakout algorithm (DisIncBO). In both cases, strong performance gains are observed for solving underconstrained problems. Distributed local search algorithms are incomplete and lack a termination guarantee. When problems contain hard or unsolvable subproblems and are tightly or overconstrained, local search falls into infinite cycles without explanation. A scheme is developed that identifies hard or unsolvable subproblems and orders them by size. This scheme is based on the constraint weight information generated by the breakout algorithm during search. This information, combined with the graph structure, is used to derive a fail-first variable order. Empirical results show that the derived variable order is 'perfect': when it guides simple backtracking, exceptionally hard problems do not occur, and, when problems are unsolvable, the fail depth is always the shortest. Two hybrid algorithms, BOBT and BOBT-SUSP, are developed. When the problem is unsolvable, BOBT returns the minimal subproblem within the search scope and BOBT-SUSP returns the smallest unsolvable subproblem using a powerful weight sum constraint. A distributed hybrid algorithm (DisBOBT) is developed that combines DisBO with DisBT. The distributed hybrid algorithm first attempts to solve the problem with DisBO; if no solution is available after a bounded number of breakouts, DisBO is terminated and DisBT solves the problem. DisBT is guided by a distributed variable order derived from the constraint weight information and the graph structure. The variable order is established incrementally: every time the partial solution needs to be extended, the next variable within the order is identified. Empirical results show strong performance gains, especially when problems are overconstrained and contain small unsolvable subproblems.
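
    The breakout algorithm on which DisBO is based can be sketched in its centralised form: hill-climb on the weighted count of violated constraints, and escape local minima by increasing the weight of every constraint violated there. A minimal, generic sketch (not the thesis code; the constraint representation and all names are assumptions):

        import random

        def breakout(variables, domains, constraints, max_steps=10_000):
            """`constraints` is a list of (scope, predicate) pairs; weights start
            at 1 and grow whenever the search reaches a local minimum."""
            assign = {v: random.choice(domains[v]) for v in variables}
            weight = [1] * len(constraints)

            def violated(a):
                return [i for i, (scope, ok) in enumerate(constraints)
                        if not ok(*(a[v] for v in scope))]

            for _ in range(max_steps):
                viol = violated(assign)
                if not viol:
                    return assign                   # all constraints satisfied
                best_move, best_cost = None, sum(weight[i] for i in viol)
                for v in variables:
                    for d in domains[v]:
                        if d == assign[v]:
                            continue
                        trial = dict(assign)
                        trial[v] = d
                        c = sum(weight[i] for i in violated(trial))
                        if c < best_cost:
                            best_move, best_cost = (v, d), c
                if best_move is None:               # local minimum: raise weights ("breakout")
                    for i in viol:
                        weight[i] += 1
                else:
                    assign[best_move[0]] = best_move[1]
            return None                             # step budget exhausted

        # Tiny example: 2-colour the path a-b-c (adjacent variables must differ).
        doms = {x: [0, 1] for x in "abc"}
        cons = [(("a", "b"), lambda x, y: x != y), (("b", "c"), lambda x, y: x != y)]
        print(breakout(list("abc"), doms, cons))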

    Combinatorial Optimisation Problems in Logistics and Scheduling

    This thesis presents a variety of problems and results in the field of logistics, in particular maritime and railway logistics. We first present a brief introduction to these problems, their characteristics, and the role they play in the quest for more efficient and greener global supply chains and transport systems; we also present the methodological tools employed for their solution. After this introduction, each chapter presents one specific problem and corresponds to a self-contained research paper.

    VOCAL 2018. 8th VOCAL Optimization Conference: Advanced Algorithms


    The Incremental Constraint of k-Server

    Online algorithms are characterized by operating on an input sequence revealed over time rather than on a single static input. Instead of generating a single solution, they produce a sequence of incremental solutions corresponding to the input seen so far. An online algorithm's ignorance of future inputs limits its ability to produce optimal solutions; the incremental nature of its solutions is also an obstacle. The two factors can be differentiated by examining the corresponding incremental algorithm, which has knowledge of future inputs but must still provide a competitive solution at each step. In this thesis, the lower bound on the incremental constraint of k-server is shown to be 2.
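
    For context, such bounds are usually stated in terms of the standard competitive ratio; the definition below is common background rather than notation taken from the thesis:

        \[
          \mathrm{ALG}(\sigma) \;\le\; c \cdot \mathrm{OPT}(\sigma) + b
          \qquad \text{for every request sequence } \sigma,
        \]

    where the algorithm is c-competitive if such a constant b exists independently of the request sequence. Read this way, the result says that an algorithm required to maintain a feasible k-server solution after every request cannot, in general, achieve a ratio better than 2, even with knowledge of future requests.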