    A General Large Neighborhood Search Framework for Solving Integer Programs

    This paper studies how to design abstractions of large-scale combinatorial optimization problems that can leverage existing state-of-the-art solvers in general-purpose ways, and that are amenable to data-driven design. The goal is to arrive at new approaches that can reliably outperform existing solvers in wall-clock time. We focus on solving integer programs, and ground our approach in the large neighborhood search (LNS) paradigm, which iteratively chooses a subset of variables to optimize while leaving the remainder fixed. The appeal of LNS is that it can easily use any existing solver as a subroutine, and thus can inherit the benefits of carefully engineered heuristic approaches and their software implementations. We also show that one can learn a good neighborhood selector from training data. Through an extensive empirical validation, we demonstrate that our LNS framework can, in wall-clock time, significantly outperform state-of-the-art commercial solvers such as Gurobi.
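    The abstract describes the LNS loop only in prose, so the following is a minimal, solver-agnostic Python sketch of that loop. The names `solve_subproblem`, `select_neighborhood`, and the dictionary-based solution representation are placeholder assumptions for illustration, not the paper's actual interface; in the paper the neighborhood selector is learned from data rather than random.

    ```python
    import random
    from typing import Callable, Dict, List

    Solution = Dict[str, int]  # assumed representation: variable name -> integer value

    def lns(
        variables: List[str],
        incumbent: Solution,
        objective: Callable[[Solution], float],
        # Hypothetical wrapper around any existing MIP solver: re-optimizes the
        # given free variables while all other variables stay fixed at `incumbent`.
        solve_subproblem: Callable[[Solution, List[str]], Solution],
        select_neighborhood: Callable[[Solution, List[str]], List[str]],
        iterations: int = 20,
    ) -> Solution:
        """Generic LNS: repeatedly free a subset of variables, re-optimize them
        with an existing solver as a subroutine, and keep any improvement."""
        best = dict(incumbent)
        for _ in range(iterations):
            free_vars = select_neighborhood(best, variables)   # choose which variables to re-optimize
            candidate = solve_subproblem(best, free_vars)      # call the black-box solver on the subproblem
            if objective(candidate) < objective(best):         # minimization; accept only improvements
                best = candidate
        return best

    def random_neighborhood(incumbent: Solution, variables: List[str], k: int = 3) -> List[str]:
        # Simplest baseline selector; the paper's contribution is to learn this choice instead.
        return random.sample(variables, min(k, len(variables)))
    ```

    Because the subproblem solver enters only through `solve_subproblem`, any engineered heuristic or commercial solver can be plugged in without changing the loop.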

    Learning to Optimize: from Theory to Practice

    Optimization is at the heart of everyday applications, from finding the fastest route for navigation to designing efficient drugs for diseases. The study of optimization algorithms has focused on developing general approaches that do not adapt to specific problem instances. While they enjoy wide applicability, they forgo the potentially useful information embedded in the structure of an instance. Furthermore, as new optimization problems appear, the algorithm development process relies heavily on domain expertise to identify special properties and design methods to exploit them. Such a design philosophy is labor-intensive and difficult to deploy efficiently across a broad range of domain-specific optimization problems, which are becoming ubiquitous in the pursuit of ever more personalized applications. In this dissertation, we consider different hybrid versions of classical optimization algorithms with data-driven techniques. We aim to equip classical algorithms with the ability to adapt their behavior on the fly to specific problem instances. A common theme in our approaches is to train the data-driven components on a pre-collected batch of representative problem instances to optimize some performance metric, e.g., wall-clock time. Varying the integration details, we present several approaches to learning data-driven optimization modules for combinatorial optimization problems and study the corresponding fundamental research questions on policy learning. We provide multiple practical experimental results to showcase the practicality of our methods, which lead to state-of-the-art performance on some classes of problems.
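    To make the "train on a pre-collected batch of representative instances to optimize wall-clock time" idea concrete, here is a small Python sketch. It uses plain configuration search over candidate settings as a stand-in for the dissertation's learned policies; `run_solver`, `instances`, and `candidate_configs` are assumed names for illustration only.

    ```python
    import time
    from typing import Callable, Iterable, Sequence, Tuple, Any, Dict, Optional

    def tune_on_batch(
        # Hypothetical: solves one training instance under a given configuration
        # of the data-driven component.
        run_solver: Callable[[Any, Dict], None],
        instances: Sequence[Any],           # pre-collected batch of representative instances
        candidate_configs: Iterable[Dict],  # candidate settings for the data-driven component
    ) -> Tuple[Optional[Dict], float]:
        """Pick the configuration with the lowest average wall-clock time on the batch."""
        best_config, best_time = None, float("inf")
        for config in candidate_configs:
            total = 0.0
            for inst in instances:
                start = time.perf_counter()
                run_solver(inst, config)                 # measure end-to-end solve time
                total += time.perf_counter() - start
            avg = total / len(instances)
            if avg < best_time:
                best_config, best_time = config, avg
        return best_config, best_time
    ```

    The same batch-then-deploy pattern applies when the tuned object is a learned policy rather than a fixed configuration: the training signal is still a performance metric measured on representative instances.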