Nonlinear Integer Programming
Research efforts of the past fifty years have led to the development of linear
integer programming as a mature discipline of mathematical optimization. Such a
level of maturity has not been reached when one considers nonlinear systems
subject to integrality requirements for the variables. This chapter is
dedicated to this topic.
The primary goal is a study of a simple version of general nonlinear integer
problems, where all constraints are still linear. Our focus is on the
computational complexity of the problem, which varies significantly with the
type of nonlinear objective function in combination with the underlying
combinatorial structure. Numerous boundary cases of complexity emerge, which
sometimes surprisingly lead even to polynomial time algorithms.
We also cover recent successful approaches for more general classes of
problems. Though no positive theoretical efficiency results are available, nor
are they likely to ever be available, these seem to be the currently most
successful and interesting approaches for solving practical problems.
It is our belief that the study of algorithms motivated by theoretical
considerations and those motivated by our desire to solve practical instances
should and do inform one another. So it is with this viewpoint that we present
the subject, and it is in this direction that we hope to spark further
research.

Comment: 57 pages. To appear in: M. Jünger, T. Liebling, D. Naddef, G.
Nemhauser, W. Pulleyblank, G. Reinelt, G. Rinaldi, and L. Wolsey (eds.), 50
Years of Integer Programming 1958--2008: The Early Years and State-of-the-Art
Surveys, Springer-Verlag, 2009, ISBN 354068274
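The complexity landscape described in this abstract can be made concrete with a toy case that the chapter treats as hard in general: maximizing a convex (rather than linear) objective over the integer points of a polyhedron. The sketch below is illustrative only and not from the chapter; the two-variable instance and the function name `solve_tiny_nip` are invented, and brute-force enumeration stands in for the specialized algorithms the chapter surveys.

```python
import itertools

def solve_tiny_nip(A, b, c, lo=0, hi=5):
    """Enumerate integer points of {x : A x <= b} inside a small box and
    maximize a separable convex quadratic sum((x_i - c_i)^2) over them."""
    best_val, best_x = None, None
    for x in itertools.product(range(lo, hi + 1), repeat=len(c)):
        feasible = all(
            sum(a * xi for a, xi in zip(row, x)) <= rhs
            for row, rhs in zip(A, b)
        )
        if feasible:
            val = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
            if best_val is None or val > best_val:
                best_val, best_x = val, x
    return best_x, best_val

# One linear constraint x1 + x2 <= 6; maximize (x1 - 2)^2 + (x2 - 2)^2.
# Unlike the linear case, the optimum sits at a "most extreme" vertex region.
x_star, v_star = solve_tiny_nip(A=[[1, 1]], b=[6], c=[2, 2])
```

Enumeration is exponential in the dimension, which is precisely why the boundary cases with polynomial-time algorithms mentioned above are notable.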
Beyond Chance-Constrained Convex Mixed-Integer Optimization: A Generalized Calafiore-Campi Algorithm and the notion of S-optimization
The scenario approach developed by Calafiore and Campi to attack
chance-constrained convex programs utilizes random sampling on the uncertainty
parameter to substitute the original problem with a representative continuous
convex optimization with convex constraints which is a relaxation of the
original. Calafiore and Campi provided an explicit estimate on the size of
the sampling relaxation to yield high-likelihood feasible solutions of the
chance-constrained problem. They measured the probability that the original
constraints are violated by the random optimal solution obtained from the
sampled relaxation.
This paper has two main contributions. First, we present a generalization of
the Calafiore-Campi results to both integer and mixed-integer variables. In
fact, we demonstrate that their sampling estimates work naturally for variables
restricted to some subset S of Euclidean space. The key elements are
generalizations of Helly's theorem where the convex sets are required to
intersect S. The sample sizes in both algorithms are directly determined by
the S-Helly numbers.
Motivated by the first half of the paper, for any subset S, we introduce the
notion of an S-optimization problem, where the variables take on values over
S. It generalizes continuous, integer, and mixed-integer optimization. We
illustrate with examples the expressive power of S-optimization to capture
sophisticated combinatorial optimization problems with difficult modular
constraints. We reinforce the evidence that S-optimization is "the right
concept" by showing that the well-known randomized sampling algorithm of K.
Clarkson for low-dimensional convex optimization problems can be extended to
work with variables taking values over S.

Comment: 16 pages, 0 figures. This paper has been revised and split into two
parts. This version is the second part of the original paper. The first part
of the original paper is arXiv:1508.02380 (the original article contained 24
pages, 3 figures)
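The scenario approach summarized above replaces a chance constraint with finitely many sampled constraints. A minimal sketch, assuming a one-dimensional toy problem of my own invention (the function name `scenario_solution` and the Gaussian uncertainty model are not from the paper): minimize x subject to P(x >= delta) >= 1 - eps, relaxed to the sampled constraints x >= delta_i.

```python
import math
import random

def scenario_solution(n_samples, seed=0):
    """Solve the sampled relaxation of: minimize x s.t. P(x >= delta) >= 1 - eps.
    With one variable, the smallest x meeting every sampled constraint
    x >= delta_i is simply the maximum of the samples."""
    rng = random.Random(seed)
    deltas = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    return max(deltas)

x_cont = scenario_solution(100)             # continuous decision variable
x_int = math.ceil(scenario_solution(100))   # same samples, x restricted to Z
```

The second line hints at the paper's theme: the sampled relaxation makes sense verbatim when the variable ranges over a subset S such as the integers, and more samples only tighten the solution.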
From Uncertainty Data to Robust Policies for Temporal Logic Planning
We consider the problem of synthesizing robust disturbance feedback policies
for systems performing complex tasks. We formulate the tasks as linear temporal
logic specifications and encode them into an optimization framework via
mixed-integer constraints. Both the system dynamics and the specifications are
known but affected by uncertainty. The distribution of the uncertainty is
unknown; however, realizations can be obtained. We introduce a data-driven
approach where the constraints are fulfilled for a set of realizations and
provide probabilistic generalization guarantees as a function of the number of
considered realizations. We use separate chance constraints for the
satisfaction of the specification and operational constraints. This allows us
to quantify their violation probabilities independently. We compute disturbance
feedback policies as solutions of mixed-integer linear or quadratic
optimization problems. By using feedback we can exploit information of past
realizations and provide feasibility for a wider range of situations compared
to static input sequences. We demonstrate the proposed method on two robust
motion-planning case studies for autonomous driving.
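The data-driven idea above can be illustrated with a deliberately simplified sketch (scalar dynamics, an "eventually reach the goal" specification, and the names `eventually_reaches` and `empirical_violation` are all my invented stand-ins, not the paper's mixed-integer encoding): check a temporal-logic property over sampled disturbance realizations and report the empirical violation frequency that the probabilistic guarantees bound.

```python
import random

GOAL, TOL, HORIZON = 5.0, 0.5, 20

def eventually_reaches(gain, noise, x=0.0):
    """Check the spec 'eventually |x_t - GOAL| <= TOL' along one noisy
    rollout of x_{t+1} = x_t + u_t + w_t with feedback u_t = gain*(GOAL - x_t)."""
    for w in noise:
        if abs(x - GOAL) <= TOL:
            return True
        x = x + gain * (GOAL - x) + w  # disturbance-feedback closed loop
    return abs(x - GOAL) <= TOL

def empirical_violation(gain, n_samples, seed=0):
    """Fraction of sampled disturbance realizations violating the spec."""
    rng = random.Random(seed)
    fails = sum(
        not eventually_reaches(gain, [rng.gauss(0.0, 0.1) for _ in range(HORIZON)])
        for _ in range(n_samples)
    )
    return fails / n_samples

p_hat = empirical_violation(gain=0.5, n_samples=200)
```

A feedback gain that contracts toward the goal satisfies the specification on essentially all realizations, while a zero gain fails on essentially all of them, mirroring the paper's point that feedback widens the range of feasible situations.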
The N-K Problem in Power Grids: New Models, Formulations and Numerical Experiments (extended version)
Given a power grid modeled by a network together with equations describing
the power flows, power generation and consumption, and the laws of physics, the
so-called N-k problem asks whether there exists a set of k or fewer arcs whose
removal will cause the system to fail. The case where k is small is of
practical interest. We present theoretical and computational results involving
a mixed-integer model and a continuous nonlinear model related to this
question.

Comment: 40 pages, 3 figures
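A drastically simplified sketch of the N-k question (my own illustration, not the paper's model: it uses graph connectivity as a stand-in for the power-flow equations and laws of physics, and the names `connected` and `nk_attack` are invented): enumerate sets of at most k arcs and test whether their removal disconnects a generator from a load.

```python
import itertools
from collections import deque

def connected(n, edges, s, t):
    """BFS reachability of t from s in an undirected graph on nodes 0..n-1."""
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

def nk_attack(n, edges, s, t, k):
    """Return a set of <= k arcs whose removal disconnects s from t, if any."""
    for size in range(1, k + 1):
        for cut in itertools.combinations(edges, size):
            rest = [e for e in edges if e not in cut]
            if not connected(n, rest, s, t):
                return cut
    return None

# 4-node ring: generator at node 0, load at node 2; k = 1 fails, k = 2 succeeds.
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
```

Enumeration over arc subsets grows combinatorially, which is why the paper develops mixed-integer and continuous nonlinear formulations instead; the case of small k noted above keeps even naive search tractable.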
The Voice of Optimization
We introduce the idea that using optimal classification trees (OCTs) and
optimal classification trees with hyperplanes (OCT-Hs), interpretable machine
learning algorithms developed by Bertsimas and Dunn [2017, 2018], we are able
to obtain insight into the strategy behind the optimal solution of continuous
and mixed-integer convex optimization problems as a function of key parameters
that affect the problem. In this way, optimization is no longer a black box.
Instead, we redefine optimization as a multiclass classification problem where
the predictor gives insights on the logic behind the optimal solution. In other
words, OCTs and OCT-Hs give optimization a voice. We show on several realistic
examples that the accuracy of our method is in the 90%-100% range, and that
even when the predictions are not correct, the degree of suboptimality or
infeasibility is very low. We compare the optimal-strategy predictions of
OCTs, OCT-Hs, and feedforward neural networks (NNs) and conclude that the
performance of OCT-Hs and NNs is comparable; OCTs are somewhat weaker but
often competitive. Therefore, our approach provides a novel, insightful
understanding of optimal strategies for solving a broad class of continuous
and mixed-integer optimization problems.
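The "optimization as multiclass classification" idea can be sketched in miniature (everything here is an invented illustration: a one-parameter integer program, with a 1-nearest-neighbour classifier standing in for the OCT/OCT-H learners the paper actually uses). Offline, the problem is solved for a grid of parameter values and the optimal "strategy" (here, the optimal integer itself) is stored; online, the strategy for a new parameter is predicted rather than re-optimized.

```python
def solve(theta):
    """Optimal integer decision for max over x in {0,..,3} of theta*x - x^2."""
    return max(range(4), key=lambda x: theta * x - x * x)

# Offline phase: solve the problem on a parameter grid and store the strategies.
train = [(t / 10.0, solve(t / 10.0)) for t in range(0, 71, 5)]

def predict(theta):
    """1-nearest-neighbour stand-in for the trained OCT/OCT-H classifier."""
    return min(train, key=lambda pair: abs(pair[0] - theta))[1]
```

Even this crude predictor is correct on most of the parameter range and errs only in narrow bands around the breakpoints where the optimal strategy switches, which is the qualitative behavior (high accuracy, low-cost mistakes) reported in the abstract.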