
    Learning to Optimize Computational Resources: Frugal Training with Generalization Guarantees

    Algorithms typically come with tunable parameters that have a considerable impact on the computational resources they consume. Too often, practitioners must hand-tune the parameters, a tedious and error-prone task. A recent line of research provides algorithms that return nearly optimal parameters from within a finite set. These algorithms can be used when the parameter space is infinite by providing as input a random sample of parameters. This data-independent discretization, however, might miss pockets of nearly optimal parameters: prior research has presented scenarios where the only viable parameters lie within an arbitrarily small region. We provide an algorithm that learns a finite set of promising parameters from within an infinite set. Our algorithm can help compile a configuration portfolio, or it can be used to select the input to a configuration algorithm for finite parameter spaces. Our approach applies to any configuration problem that satisfies a simple yet ubiquitous structure: the algorithm's performance is a piecewise constant function of its parameters. Prior research has exhibited this structure in domains from integer programming to clustering.
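    The piecewise-constant structure is what makes an infinite parameter space tractable: the breakpoints observed on a sample of instances partition the parameter line into finitely many cells, and one representative parameter per cell suffices. Below is a minimal, illustrative sketch of that reduction in Python; it is not the paper's algorithm, and the `cost` model, the `promising_parameters` helper, and the toy data are all hypothetical.

```python
# Illustrative sketch (not the paper's algorithm): because performance is a
# piecewise-constant function of a single parameter, the breakpoints observed
# on a sample of instances split the infinite parameter line into finitely
# many cells; one representative parameter per cell suffices.
from bisect import bisect_right

def cost(breakpoints, values, p):
    """Piecewise-constant cost: value of the piece containing parameter p."""
    return values[bisect_right(breakpoints, p)]

def promising_parameters(sample):
    """sample: list of (breakpoints, values) pairs, one per training instance.
    Returns one representative parameter per cell of the common refinement,
    ranked by average cost over the sample."""
    cuts = sorted({b for bps, _ in sample for b in bps})
    # midpoints of interior cells, plus one point on each unbounded side
    reps = [cuts[0] - 1.0] + [(a + b) / 2 for a, b in zip(cuts, cuts[1:])] + [cuts[-1] + 1.0]
    return sorted(reps, key=lambda p: sum(cost(b, v, p) for b, v in sample) / len(sample))

# toy usage: two instances with different breakpoints and piece values
sample = [([1.0, 3.0], [5, 2, 4]), ([2.0], [6, 1])]
print(promising_parameters(sample)[:3])  # a finite set of promising parameters
```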

    Dynamic lot size MIPs for multiple products and ELSPs with shortages, capacity and changeover limits

    Scheduling multiple products with limited resources and varying demands remains a critical challenge for many industries. This work presents mixed integer programs (MIPs) that solve the Economic Lot Sizing Problem (ELSP) and other Dynamic Lot-Sizing (DLS) models with multiple items. DLS systems are classified, extended and formulated as MIPs. In particular, logical constraints are a key ingredient in this endeavour: they are used to formulate the setup/changeover of items on the production line. Minimising the holding, shortage and setup costs is the primary objective for ELSPs. This is achieved by finding an optimal production schedule that takes the limited manufacturing capacity into account. Case studies of production plants are used to demonstrate the functionality of the MIPs. Optimal DLS and ELSP solutions are given for a set of test instances, and insights into the runtime and solution quality are provided. Comment: 14 pages, 6 figures
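    To make the role of the logical setup constraints concrete, here is a minimal single-item dynamic lot-sizing model sketched with the open-source PuLP library. The data, cost values, and the big-M linking of production to the binary setup variable are illustrative assumptions, not the paper's multi-item formulation.

```python
# A minimal single-item dynamic lot-sizing sketch in PuLP (pip install pulp);
# names and data are illustrative, not the paper's formulation. The binary
# setup variables y[t] encode the logical constraint "production in period t
# forces a setup in period t" via the big-M link x[t] <= M * y[t].
import pulp

T = range(4)                      # planning periods
demand = [20, 40, 30, 10]         # illustrative demands
setup_cost, hold_cost, M = 100.0, 1.0, sum(demand)

m = pulp.LpProblem("lot_sizing", pulp.LpMinimize)
x = pulp.LpVariable.dicts("produce", T, lowBound=0)   # quantity made in t
s = pulp.LpVariable.dicts("stock", T, lowBound=0)     # end-of-period inventory
y = pulp.LpVariable.dicts("setup", T, cat="Binary")   # 1 if line is set up in t

m += pulp.lpSum(setup_cost * y[t] + hold_cost * s[t] for t in T)
for t in T:
    prev = s[t - 1] if t > 0 else 0                   # no initial stock
    m += prev + x[t] == demand[t] + s[t]              # inventory balance
    m += x[t] <= M * y[t]                             # produce only if set up

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([(t, x[t].value(), y[t].value()) for t in T], pulp.value(m.objective))
```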

    Optimising halting stations of passenger railway lines

    In many real-life passenger railway networks, the types of stations and lines characterise the halting stations of the train lines. Common types are Regional, Interregional or Intercity. This paper considers the problem of altering the halts of lines by both upgrading and downgrading stations, such that the result is a lower total travel time. We propose a combination of reduction methods, Lagrangian relaxation, and a problem-specific multiplier adjustment algorithm to solve the presented mixed integer linear programming formulation. A computational study of several real-life instances based on problem data of the Dutch passenger railway operator NS Reizigers is included.
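    As background for readers unfamiliar with the technique, the following sketch shows the generic Lagrangian-relaxation loop with a plain subgradient multiplier update; the paper instead uses a problem-specific multiplier adjustment algorithm, and the `solve_box` subproblem and all data here are toy assumptions.

```python
# Generic subgradient sketch of Lagrangian relaxation (the paper uses a
# problem-specific multiplier adjustment scheme; this only illustrates the
# overall loop). We dualize the constraints A x <= b of min c'x with
# multipliers lam >= 0 and maximize the resulting lower bound L(lam).
import numpy as np

def lagrangian_bound(c, A, b, solve_relaxed, lam):
    """L(lam) = min_x c'x + lam'(Ax - b) over the 'easy' set, via solve_relaxed."""
    x = solve_relaxed(c + A.T @ lam)          # easy subproblem with priced costs
    return c @ x + lam @ (A @ x - b), x

def subgradient(c, A, b, solve_relaxed, iters=50, step0=1.0):
    lam = np.zeros(A.shape[0])
    best = -np.inf
    for k in range(iters):
        bound, x = lagrangian_bound(c, A, b, solve_relaxed, lam)
        best = max(best, bound)               # best lower bound found so far
        g = A @ x - b                         # subgradient of L at lam
        lam = np.maximum(0.0, lam + step0 / (k + 1) * g)  # project onto lam >= 0
    return best, lam

# toy usage: the "easy" set is the 0/1 box, minimized by thresholding costs
solve_box = lambda cost: (cost < 0).astype(float)
c = np.array([-2.0, -3.0]); A = np.array([[1.0, 1.0]]); b = np.array([1.0])
print(subgradient(c, A, b, solve_box))        # bound approaches the LP value -3
```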

    Exact Combinatorial Optimization with Graph Convolutional Neural Networks

    Combinatorial optimization problems are typically tackled by the branch-and-bound paradigm. We propose a new graph convolutional neural network model for learning branch-and-bound variable selection policies, which leverages the natural variable-constraint bipartite graph representation of mixed-integer linear programs. We train our model via imitation learning from the strong branching expert rule, and demonstrate on a series of hard problems that our approach produces policies that improve upon state-of-the-art machine-learning methods for branching and generalize to instances significantly larger than seen during training. Moreover, we improve for the first time over expert-designed branching rules implemented in a state-of-the-art solver on large problems. Code for reproducing all the experiments can be found at https://github.com/ds4dm/learn2branch. Comment: Accepted paper at the NeurIPS 2019 conference
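    To illustrate the bipartite encoding such a model consumes, here is a simplified stand-in (not the learn2branch feature set): one node per variable, one per constraint, an edge per nonzero coefficient, and one half-step of aggregation from variable nodes to constraint nodes. The feature choices are assumptions made for illustration.

```python
# Sketch of the variable-constraint bipartite encoding a GNN consumes
# (a simplified stand-in, not the paper's feature set): one node per variable,
# one per constraint, and an edge (i, j) with weight A[i, j] for every nonzero
# coefficient. One half-step of coefficient-weighted mean aggregation is shown.
import numpy as np

def bipartite_graph(A, c, b):
    rows, cols = np.nonzero(A)
    edges = np.stack([rows, cols])            # constraint-to-variable incidence
    var_feats = c.reshape(-1, 1)              # toy feature: objective coefficient
    con_feats = b.reshape(-1, 1)              # toy feature: right-hand side
    return edges, A[rows, cols], var_feats, con_feats

def var_to_con_pass(edges, coefs, var_feats, n_cons):
    """Mean-aggregate neighbouring variable features into each constraint node."""
    msgs = np.zeros((n_cons, var_feats.shape[1]))
    deg = np.zeros(n_cons)
    for (i, j), a in zip(edges.T, coefs):
        msgs[i] += a * var_feats[j]
        deg[i] += 1
    return msgs / np.maximum(deg, 1)[:, None]

A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 3.0]])
edges, coefs, vf, cf = bipartite_graph(A, np.array([1.0, -1.0, 2.0]), np.array([4.0, 6.0]))
print(var_to_con_pass(edges, coefs, vf, n_cons=A.shape[0]))
```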

    Branch-and-Bound Solves Random Binary IPs in Polytime

    Branch-and-bound is the workhorse of all state-of-the-art mixed integer linear programming (MILP) solvers. These implementations of branch-and-bound typically use variable branching, that is, the child nodes are obtained by fixing some variable to an integer value v in one node and to v + 1 in the other node. Even though modern MILP solvers are able to solve very large-scale instances efficiently, relatively little attention has been given to understanding why the underlying branch-and-bound algorithm performs so well. In this paper, our goal is to theoretically analyze the performance of the standard variable-branching-based branch-and-bound algorithm. In order to avoid the exponential worst-case lower bounds, we follow the common idea of considering random instances. More precisely, we consider random integer programs where the entries of the coefficient matrix and the objective function are randomly sampled. Our main result is that, with good probability, branch-and-bound with variable branching explores only a polynomial number of nodes to solve these instances, for a fixed number of constraints. To the best of our knowledge, this is the first such result for a standard version of branch-and-bound. We believe that this result provides a compelling indication of why branch-and-bound with variable branching works so well in practice.
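    A minimal branch-and-bound loop with the variable-fixing branching the abstract describes (for binary variables, v = 0 and v + 1 = 1) can be sketched as follows, using SciPy's LP solver for the relaxations; the instance and tolerances are illustrative, not from the paper.

```python
# Minimal branch-and-bound sketch for a binary knapsack-style IP, using the
# variable-fixing branching described in the abstract (x_j fixed to 0 in one
# child and to 1 in the other). Illustrative only; real solvers add cutting
# planes, presolve, and far more careful node and variable selection.
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A, b):
    """min c'x s.t. Ax <= b, x in {0,1}^n, by LP-based branch-and-bound."""
    n, best, incumbent = len(c), np.inf, None
    stack = [{}]                                   # each node = dict of fixings
    while stack:
        fixed = stack.pop()
        bounds = [(fixed.get(j, 0), fixed.get(j, 1)) for j in range(n)]
        lp = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
        if not lp.success or lp.fun >= best:       # infeasible or pruned by bound
            continue
        frac = [j for j in range(n) if 1e-6 < lp.x[j] < 1 - 1e-6]
        if not frac:                               # LP solution is integral
            best, incumbent = lp.fun, np.round(lp.x)
            continue
        j = frac[0]                                # branch: fix x_j to 0 and to 1
        stack += [{**fixed, j: 0}, {**fixed, j: 1}]
    return best, incumbent

c = np.array([-5.0, -4.0, -3.0])                   # maximize 5x1 + 4x2 + 3x3
A = np.array([[2.0, 3.0, 1.0], [4.0, 1.0, 2.0]]); b = np.array([5.0, 11.0])
print(branch_and_bound(c, A, b))                   # optimum -9 at x = (1, 1, 0)
```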

    Hybrid Models for Learning to Branch

    A recent Graph Neural Network (GNN) approach for learning to branch has been shown to successfully reduce the running time of branch-and-bound algorithms for Mixed Integer Linear Programming (MILP). While the GNN relies on a GPU for inference, MILP solvers are purely CPU-based. This severely limits its application, as many practitioners may not have access to high-end GPUs. In this work, we ask two key questions. First, in a more realistic setting where only a CPU is available, is the GNN model still competitive? Second, can we devise an alternative, computationally inexpensive model that retains the predictive power of the GNN architecture? We answer the first question in the negative, and address the second question by proposing a new hybrid architecture for efficient branching on CPU machines. The proposed architecture combines the expressive power of GNNs with computationally inexpensive multi-layer perceptrons (MLPs) for branching. We evaluate our methods on four classes of MILP problems, and show that they lead to up to a 26% reduction in solver running time compared to state-of-the-art methods without a GPU, while extrapolating to harder problems than they were trained on. Comment: Preprint. Under review
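    The following is a schematic of the hybrid idea, not the paper's exact architecture: an expensive embedding is computed once per instance at the root (standing in for the GNN), and a cheap MLP reuses it at every node together with inexpensive per-node features. The `HybridBrancher` class, layer sizes, and feature dimensions are all hypothetical.

```python
# Schematic of the hybrid idea (not the paper's exact architecture): an
# expensive GNN-style embedding is computed once per instance at the root,
# then a cheap MLP scores branching candidates at every node by combining
# that static embedding with inexpensive per-node variable features.
import torch
import torch.nn as nn

class HybridBrancher(nn.Module):
    def __init__(self, var_dim, node_dim, hidden=32):
        super().__init__()
        self.root_embed = nn.Sequential(             # stand-in for the root GNN
            nn.Linear(var_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.node_mlp = nn.Sequential(               # cheap per-node scorer
            nn.Linear(hidden + node_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, root_var_feats, node_var_feats):
        h = self.root_embed(root_var_feats)          # computed once at the root
        z = torch.cat([h, node_var_feats], dim=-1)   # reused at every B&B node
        return self.node_mlp(z).squeeze(-1)          # one score per candidate

model = HybridBrancher(var_dim=5, node_dim=3)
scores = model(torch.randn(10, 5), torch.randn(10, 3))  # 10 candidate variables
print(scores.argmax().item())                        # branch on the top-scoring one
```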

    Parameterizing Branch-and-Bound Search Trees to Learn Branching Policies

    Branch and Bound (B&B) is the exact tree search method typically used to solve Mixed-Integer Linear Programming problems (MILPs). Learning branching policies for MILP has become an active research area, with most works proposing to imitate the strong branching rule and specialize it to distinct classes of problems. We aim instead at learning a policy that generalizes across heterogeneous MILPs: our main hypothesis is that parameterizing the state of the B&B search tree can aid this type of generalization. We propose a novel imitation learning framework, and introduce new input features and architectures to represent branching. Experiments on MILP benchmark instances clearly show the advantages of incorporating an explicit parameterization of the state of the search tree to modulate the branching decisions, in terms of both higher accuracy and smaller B&B trees. The resulting policies significantly outperform the current state-of-the-art method for "learning to branch" by effectively allowing generalization to generic unseen instances. Comment: AAAI 2021 camera-ready version with supplementary materials; improved readability of figures in the main article. Code, data and trained models are available at https://github.com/ds4dm/branch-search-tree
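    Below is a simplified rendering of the central idea, with hypothetical dimensions and a generic feature-wise modulation standing in for the paper's architecture: a vector describing the search-tree state rescales and shifts per-candidate embeddings before scoring, so the same candidate features can be ranked differently at different points of the tree.

```python
# Simplified sketch (hypothetical dimensions, generic modulation; not the
# paper's exact architecture): a representation of the B&B search-tree state
# modulates the scoring of branching candidates via a learned scale-and-shift
# of the per-candidate embeddings.
import torch
import torch.nn as nn

class TreeConditionedPolicy(nn.Module):
    def __init__(self, cand_dim, tree_dim, hidden=32):
        super().__init__()
        self.cand_net = nn.Linear(cand_dim, hidden)
        self.film = nn.Linear(tree_dim, 2 * hidden)  # tree state -> (scale, shift)
        self.score = nn.Linear(hidden, 1)

    def forward(self, cand_feats, tree_state):
        h = torch.relu(self.cand_net(cand_feats))    # per-candidate embedding
        scale, shift = self.film(tree_state).chunk(2, dim=-1)
        h = scale * h + shift                        # modulate by tree state
        return self.score(torch.relu(h)).squeeze(-1)

policy = TreeConditionedPolicy(cand_dim=6, tree_dim=4)
logits = policy(torch.randn(8, 6), torch.randn(4))   # 8 candidates, one tree state
print(torch.softmax(logits, dim=0))                  # branching distribution
```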