
    A Computational Comparison of Optimization Methods for the Golomb Ruler Problem

    The Golomb ruler problem is defined as follows: given a positive integer n, locate n marks on a ruler such that the distances between all distinct pairs of marks differ from each other and the total length of the ruler is minimized. The Golomb ruler problem has applications in information theory, astronomy and communications, and it can be seen as a challenge for combinatorial optimization algorithms. Although constructing high-quality rulers is well studied, proving optimality is a far more challenging task. In this paper, we provide a computational comparison of different optimization paradigms, each using a different model (linear integer, constraint programming and quadratic integer), to certify that a given Golomb ruler is optimal. We propose several enhancements to improve the computational performance of each method by exploring bound tightening, valid inequalities, cutting planes and branching strategies. We conclude that a certain quadratic integer programming model, solved through a Benders decomposition and strengthened by two types of valid inequalities, performs best in terms of solution time for small Golomb ruler instances. On the other hand, a constraint programming model improved by range reduction and a particular branching strategy may have more potential for larger instances owing to its promising parallelization features.
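
    To make the definition concrete, here is a minimal feasibility check (ours, not the paper's): a set of marks is a Golomb ruler exactly when all pairwise distances are distinct, and an optimal ruler additionally minimizes the largest mark.

        from itertools import combinations

        def is_golomb_ruler(marks):
            """Return True if all pairwise distances between marks are distinct."""
            dists = [b - a for a, b in combinations(sorted(marks), 2)]
            return len(dists) == len(set(dists))

        # A known optimal ruler with 5 marks: length 11, all 10 pairwise distances distinct.
        print(is_golomb_ruler([0, 1, 4, 9, 11]))   # True
        # The distance 3 repeats (0-3, 3-6, 6-9), so this is not a Golomb ruler.
        print(is_golomb_ruler([0, 1, 3, 6, 9]))    # False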

    Nonlinear Integer Programming

    Research efforts of the past fifty years have led to the development of linear integer programming as a mature discipline of mathematical optimization. Such a level of maturity has not been reached when one considers nonlinear systems subject to integrality requirements for the variables. This chapter is dedicated to this topic. The primary goal is a study of a simple version of general nonlinear integer problems, where all constraints are still linear. Our focus is on the computational complexity of the problem, which varies significantly with the type of nonlinear objective function in combination with the underlying combinatorial structure. Numerous boundary cases of complexity emerge, which sometimes surprisingly lead even to polynomial time algorithms. We also cover recent successful approaches for more general classes of problems. Though no positive theoretical efficiency results are available, nor are they likely ever to be available, these seem to be the currently most successful and interesting approaches for solving practical problems. It is our belief that the study of algorithms motivated by theoretical considerations and those motivated by our desire to solve practical instances should and do inform one another. So it is with this viewpoint that we present the subject, and it is in this direction that we hope to spark further research.
    Comment: 57 pages. To appear in: M. Jünger, T. Liebling, D. Naddef, G. Nemhauser, W. Pulleyblank, G. Reinelt, G. Rinaldi, and L. Wolsey (eds.), 50 Years of Integer Programming 1958-2008: The Early Years and State-of-the-Art Surveys, Springer-Verlag, 2009, ISBN 354068274.
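
    As a point of reference, the problem class in question can be written (in our own notation, not necessarily the chapter's) as

        \min \{\, f(x) : Ax \le b,\ x \in \mathbb{Z}^n \,\},

    where f is a nonlinear objective and Ax <= b collects the linear constraints; as the abstract notes, the computational complexity then hinges on the interplay between f and this underlying combinatorial structure.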

    The Voice of Optimization

    We introduce the idea that, using optimal classification trees (OCTs) and optimal classification trees with hyperplanes (OCT-Hs), interpretable machine learning algorithms developed by Bertsimas and Dunn [2017, 2018], we are able to obtain insight into the strategy behind the optimal solution of continuous and mixed-integer convex optimization problems as a function of key parameters that affect the problem. In this way, optimization is no longer a black box. Instead, we redefine optimization as a multiclass classification problem in which the predictor gives insight into the logic behind the optimal solution. In other words, OCTs and OCT-Hs give optimization a voice. We show on several realistic examples that the accuracy of our method is in the 90%-100% range, and even when the predictions are not correct, the degree of suboptimality or infeasibility is very low. We compare the optimal strategy predictions of OCTs, OCT-Hs and feedforward neural networks (NNs) and conclude that the performance of OCT-Hs and NNs is comparable, while OCTs are somewhat weaker but often competitive. Therefore, our approach provides a novel, insightful understanding of optimal strategies for solving a broad class of continuous and mixed-integer optimization problems.
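
    A minimal sketch of the offline/online split behind this idea (scikit-learn's DecisionTreeClassifier stands in for the OCTs of Bertsimas and Dunn, and the labeling rule below is made up; in practice each label comes from solving the optimization problem offline):

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        theta = rng.uniform(-1.0, 1.0, size=(1000, 3))   # sampled problem parameters
        # Hypothetical labeling: each label encodes the optimal "strategy"
        # (e.g. the set of tight constraints or integer variable values).
        strategy = (theta[:, 0] + 0.5 * theta[:, 1] > 0).astype(int)

        clf = DecisionTreeClassifier(max_depth=3).fit(theta, strategy)
        print("training accuracy:", clf.score(theta, strategy))
        # Online phase: map a new parameter vector to a strategy, then recover
        # the solution by solving the much cheaper problem restricted to it.
        print("predicted strategy:", clf.predict([[0.2, -0.1, 0.7]]))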

    A MINLP Solution for Pellet Reactor Modeling

    A fluidized bed reactor for phosphate precipitation and removal from wastewater is modeled according to a two-step procedure. The first modeling phase, based on the development of a thermodynamic model for the computation of phosphate conversion and previously presented elsewhere, is not reported here. The second step, the reactor modeling itself, is the core of this paper. The pellet reactor is modeled as a reactor network involving a set of elementary cells representing ideal flow patterns. All the potential solutions are embedded into a superstructure and the modeling problem is expressed as an MINLP problem. The MINLP problem is solved by means of the GAMS package, first for two flow rate values corresponding to two experimental fluidized bed behaviours, and then for the two flow rates considered simultaneously. In each case, the problem consists of finding an output concentration as close as possible to the experimental output concentration. Three objective functions are studied. The results are compared with those of Montastruc et al. (2004), who used a different numerical procedure. In every case considered, the solutions found are structurally simpler than those of Montastruc et al. (2004). A major outcome of this study is that the reactor efficiency can easily be deduced without any precise knowledge of some key parameters such as the density and thickness of the calcium phosphate layer. Finally, a last numerical study concerning the superstructure definition shows that too complex a superstructure does not provide significant refinements of the solution.
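
    As a rough illustration of the superstructure idea only (not the paper's GAMS model; the cell structure, rate law and all numbers below are placeholders), binary variables can switch elementary cells in or out of the flow path while the objective tracks a measured outlet concentration:

        import pyomo.environ as pyo

        m = pyo.ConcreteModel()
        cells = [1, 2, 3]
        m.y = pyo.Var(cells, domain=pyo.Binary)      # cell active in the network or bypassed
        m.c = pyo.Var(cells, bounds=(0.0, 1.0))      # outlet concentration of each cell
        c_in, c_exp, k = 1.0, 0.35, 2.0              # hypothetical inlet, measured outlet, rate constant

        # Each active cell converts part of its inlet; a bypassed cell (y = 0)
        # passes its inlet through unchanged.
        def conversion_rule(m, i):
            inlet = c_in if i == 1 else m.c[i - 1]
            return m.c[i] == inlet / (1 + k * m.y[i])
        m.conv = pyo.Constraint(cells, rule=conversion_rule)

        # Bring the final outlet as close as possible to the measured value.
        m.obj = pyo.Objective(expr=(m.c[cells[-1]] - c_exp) ** 2, sense=pyo.minimize)

        # Solving requires an MINLP solver, e.g.:
        # pyo.SolverFactory("bonmin").solve(m)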