Lifting Linear Extension Complexity Bounds to the Mixed-Integer Setting
Mixed-integer mathematical programs are among the most commonly used models
for a wide set of problems in Operations Research and related fields. However,
there is still very little known about what can be expressed by small
mixed-integer programs. In particular, prior to this work, it was open whether
some classical problems, like the minimum odd-cut problem, can be expressed by
a compact mixed-integer program with few (even constantly many) integer
variables. This is in stark contrast to linear formulations, where recent
breakthroughs in the field of extended formulations have shown that many
polytopes associated to classical combinatorial optimization problems do not
even admit approximate extended formulations of sub-exponential size.
We provide a general framework for lifting inapproximability results of
extended formulations to the setting of mixed-integer extended formulations,
and obtain almost tight lower bounds on the number of integer variables needed
to describe a variety of classical combinatorial optimization problems. Among
the implications we obtain, we show that any mixed-integer extended formulation
of sub-exponential size for the matching polytope, cut polytope, traveling
salesman polytope or dominant of the odd-cut polytope, needs $\Omega(n / \log n)$ integer variables, where $n$ is the number of vertices of the
underlying graph. Conversely, the above-mentioned polyhedra admit
polynomial-size mixed-integer formulations with only $O(n)$ or $O(n \log n)$ (for the traveling salesman polytope) many integer variables.
Our results build upon a new decomposition technique that, for any convex set
$C$, allows for approximating any mixed-integer description of $C$ by the
intersection of $C$ with the union of a small number of affine subspaces.
Comment: A conference version of this paper will be presented at SODA 201
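The role of the integer variables can be illustrated with a toy sketch: fixing each bounded integer variable to one of its allowed values leaves an affine slice of the feasible set, so a description with few integer variables decomposes into few slices. The helper below is hypothetical and only enumerates the integer assignments; it is not the paper's construction.

```python
from itertools import product

def integer_slices(int_bounds):
    """Enumerate the integer assignments of a mixed-integer description.

    Each assignment z of the integer variables leaves an affine slice
    {x : (x, z) feasible}; the described set is the union of these
    slices over all assignments. Toy, hypothetical helper."""
    ranges = [range(lo, hi + 1) for lo, hi in int_bounds]
    return list(product(*ranges))

# Two binary integer variables -> 2 * 2 = 4 affine slices.
slices = integer_slices([(0, 1), (0, 1)])
```

The exponential growth of this union in the number of integer variables is exactly why lower bounds on the number of slices translate into lower bounds on the number of integer variables.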
Mixed-Integer Convex Nonlinear Optimization with Gradient-Boosted Trees Embedded
Decision trees usefully represent sparse, high-dimensional and noisy data.
Having learned a function from this data, we may want to thereafter integrate
the function into a larger decision-making problem, e.g., for picking the best
chemical process catalyst. We study a large-scale, industrially-relevant
mixed-integer nonlinear nonconvex optimization problem involving both
gradient-boosted trees and penalty functions mitigating risk. This
mixed-integer optimization problem with convex penalty terms broadly applies to
optimizing pre-trained regression tree models. Decision makers may wish to
optimize discrete models to repurpose legacy predictive models, or they may
wish to optimize a discrete model that particularly well-represents a data set.
We develop several heuristic methods to find feasible solutions, and an exact,
branch-and-bound algorithm leveraging structural properties of the
gradient-boosted trees and penalty functions. We computationally test our
methods on a concrete mixture design instance and a chemical catalysis
industrial instance.
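The core structure being exploited can be sketched in one dimension: within each leaf of a regression tree the tree output is constant, so minimizing tree(x) plus a convex penalty reduces to one tiny convex problem per leaf. The brute-force helper below is a hypothetical stand-in for the paper's branch-and-bound, shown only to make the decomposition concrete.

```python
def minimize_tree_plus_penalty(leaves, lam, x0):
    """Brute-force min_x tree(x) + lam * (x - x0)**2 for a 1-D regression
    tree given as [(lo, hi, value), ...] leaf intervals.

    The tree output is constant on each leaf, so the best x inside a leaf
    is x0 clamped to [lo, hi]. Illustrative sketch, not the paper's exact
    algorithm."""
    best = None
    for lo, hi, val in leaves:
        x = min(max(x0, lo), hi)        # penalty minimizer, clamped to leaf
        obj = val + lam * (x - x0) ** 2
        if best is None or obj < best[0]:
            best = (obj, x)
    return best

# Three leaves; the cheap middle leaf wins despite a small penalty.
obj, x = minimize_tree_plus_penalty(
    [(0.0, 1.0, 5.0), (1.0, 2.0, 1.0), (2.0, 3.0, 3.0)], lam=1.0, x0=0.5)
```

For an ensemble of many trees the leaves interact combinatorially, which is where the branch-and-bound and the heuristics described above come in.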
Efficient Semidefinite Branch-and-Cut for MAP-MRF Inference
We propose a Branch-and-Cut (B&C) method for solving general MAP-MRF
inference problems. The core of our method is a very efficient bounding
procedure, which combines scalable semidefinite programming (SDP) and a
cutting-plane method for seeking violated constraints. In order to further
speed up the computation, several strategies have been exploited, including
model reduction, warm start and removal of inactive constraints.
We analyze the performance of the proposed method under different settings,
and demonstrate that our method either outperforms or performs on par with
state-of-the-art approaches. Especially when the connectivities are dense or
when the relative magnitudes of the unary costs are low, we achieve the best
reported results. Experiments show that the proposed algorithm achieves better
approximation than the state-of-the-art methods within a variety of time
budgets on challenging non-submodular MAP-MRF inference problems.
Comment: 21 pages
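The cutting-plane ingredient has a classical one-dimensional analogue, Kelley's method: lower-bound a convex objective by tangent cuts and repeatedly minimize the piecewise-linear model. The sketch below is a generic illustration of that idea (model minimized on a grid for simplicity), not the paper's SDP bounding procedure.

```python
def kelley_min(f, df, lo, hi, iters=30):
    """Kelley's cutting-plane method, 1-D sketch.

    Each iteration adds the tangent cut of f at the current point and
    minimizes the resulting piecewise-linear lower model over a grid
    on [lo, hi]. Generic illustration of cutting planes."""
    cuts = []                                  # (slope, intercept) pairs
    grid = [lo + (hi - lo) * k / 1000 for k in range(1001)]
    x = lo
    for _ in range(iters):
        cuts.append((df(x), f(x) - df(x) * x))   # tangent cut at x
        model = lambda t: max(a * t + b for a, b in cuts)
        x = min(grid, key=model)                 # minimize current model
    return x

# Minimize t**2 on [-1, 2]; the cuts quickly pin down the minimizer 0.
xmin = kelley_min(lambda t: t * t, lambda t: 2 * t, -1.0, 2.0)
```

In the B&C setting above the same loop runs over violated constraints found by a separation routine, with the SDP relaxation supplying the bound at each node.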
Users Guide for SnadiOpt: A Package Adding Automatic Differentiation to Snopt
SnadiOpt is a package that supports the use of the automatic differentiation
package ADIFOR with the optimization package Snopt. Snopt is a general-purpose
system for solving optimization problems with many variables and constraints.
It minimizes a linear or nonlinear function subject to bounds on the variables
and sparse linear or nonlinear constraints. It is suitable for large-scale
linear and quadratic programming and for linearly constrained optimization, as
well as for general nonlinear programs. The method used by Snopt requires the
first derivatives of the objective and constraint functions to be available.
The SnadiOpt package allows users to avoid the time-consuming and error-prone
process of evaluating and coding these derivatives. Given Fortran code for
evaluating only the values of the objective and constraints, SnadiOpt
automatically generates the code for evaluating the derivatives and builds the
relevant Snopt input files and sparse data structures.
Comment: pages i-iv, 1-2
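The principle behind automatic differentiation can be shown with a minimal forward-mode sketch using dual numbers: each value carries a derivative alongside it, and overloaded arithmetic propagates both. This is a toy analogue of what ADIFOR achieves by Fortran source transformation, not its actual mechanism.

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers.

    A Dual holds (value, derivative); arithmetic applies the sum and
    product rules, so evaluating f on Dual(x, 1.0) yields f(x) and
    f'(x) in one pass. Minimal sketch supporting + and * only."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)  # product rule
    __rmul__ = __mul__

def grad(f, x):
    """Derivative of a scalar function f at x, without coding f'."""
    return f(Dual(x, 1.0)).der

# d/dx (3*x*x + 2*x + 1) at x = 2.0 is 6*2 + 2 = 14.
g = grad(lambda x: 3 * x * x + 2 * x + 1, 2.0)
```

Source-transformation tools like ADIFOR additionally exploit sparsity in the constraint Jacobian, which is what SnadiOpt feeds into Snopt's sparse data structures.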
Polynomial Kernels for Weighted Problems
Kernelization is a formalization of efficient preprocessing for NP-hard
problems using the framework of parameterized complexity. Among open problems
in kernelization it has been asked many times whether there are deterministic
polynomial kernelizations for Subset Sum and Knapsack when parameterized by the
number of items.
We answer both questions affirmatively by using an algorithm for compressing
numbers due to Frank and Tardos (Combinatorica 1987). This result had been
first used by Marx and V\'egh (ICALP 2013) in the context of kernelization. We
further illustrate its applicability by giving polynomial kernels also for
weighted versions of several well-studied parameterized problems. Furthermore,
when parameterized by the different item sizes we obtain a polynomial
kernelization for Subset Sum and an exponential kernelization for Knapsack.
Finally, we also obtain kernelization results for polynomial integer programs.
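The flavor of kernelization can be shown with a toy preprocessing rule for Subset Sum parameterized by the number of distinct item sizes: keeping more than target // size copies of a size can never help, so the instance shrinks to a bounded equivalent one. This hypothetical rule is only in the spirit of kernelization; it is not the Frank-Tardos compression the paper uses.

```python
from collections import Counter

def subset_sum_kernel(items, target):
    """Toy kernelization rule for Subset Sum.

    Drops items larger than the target (or of size zero) and keeps at
    most target // size copies of each size, since more copies can
    never appear in a solution. The reduced instance is equivalent and
    its size is bounded in the number of distinct sizes. Illustrative
    sketch, not the paper's weight compression."""
    counts = Counter(items)
    reduced = []
    for size, cnt in counts.items():
        if size == 0 or size > target:
            continue                      # can never help reach target
        reduced += [size] * min(cnt, target // size)
    return reduced

# Five 3s collapse to two (3 * 3 = 9 > 7), the 9 is dropped entirely.
kernel = subset_sum_kernel([3, 3, 3, 3, 3, 5, 9], 7)
```

The hard part the paper addresses is different: shrinking the *magnitudes* of the weights themselves, which is where the Frank-Tardos compression comes in.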
Towards Energy Consumption Verification via Static Analysis
In this paper we leverage an existing general framework for resource usage
verification and specialize it for verifying energy consumption specifications
of embedded programs. Such specifications can include both lower and upper
bounds on energy usage, and they can express intervals within which energy
usage is to be certified to be within such bounds. The bounds of the intervals
can be given in general as functions on input data sizes. Our verification
system can prove whether such energy usage specifications are met or not. It
can also infer the particular conditions under which the specifications hold.
To this end, these conditions are also expressed as intervals of functions of
input data sizes, such that a given specification can be proved for some
intervals but disproved for others. The specifications themselves can also
include preconditions expressing intervals for input data sizes. We report on a
prototype implementation of our approach within the CiaoPP system for the XC
language and XS1-L architecture, and illustrate with an example how embedded
software developers can use this tool, and in particular for determining values
for program parameters that ensure meeting a given energy budget while
minimizing the loss in quality of service.
Comment: Presented at HIP3ES, 2015 (arXiv:1501.03064)
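The interval-based verdict logic described above can be sketched as a simple comparison of two intervals per input data size: the inferred usage interval and the specified bounds. The helper below is a hypothetical illustration of that three-valued outcome, not the CiaoPP implementation.

```python
def check_energy_spec(usage_lo, usage_hi, spec_lo, spec_hi, sizes):
    """Compare inferred energy usage [usage_lo(n), usage_hi(n)] against
    a specification [spec_lo(n), spec_hi(n)] for each data size n.

    Returns 'proved' when usage is certainly within the bounds,
    'disproved' when it is certainly outside, and 'unknown' when the
    intervals partially overlap. Hypothetical sketch of the verdicts."""
    verdicts = {}
    for n in sizes:
        ulo, uhi = usage_lo(n), usage_hi(n)
        slo, shi = spec_lo(n), spec_hi(n)
        if slo <= ulo and uhi <= shi:
            verdicts[n] = "proved"
        elif uhi < slo or ulo > shi:
            verdicts[n] = "disproved"
        else:
            verdicts[n] = "unknown"
    return verdicts

# Usage between 2n and 3n joules against a flat 100 J budget: the spec
# holds for small inputs, fails for large ones, and is open in between.
v = check_energy_spec(lambda n: 2 * n, lambda n: 3 * n,
                      lambda n: 0, lambda n: 100, [10, 40, 60])
```

Conditioning the verdict on the data size n is what lets the tool report the intervals of input sizes for which a specification is proved or disproved.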
Lower bounds for several online variants of bin packing
We consider several previously studied online variants of bin packing and
prove new and improved lower bounds on the asymptotic competitive ratios for
them. For that, we use a method of fully adaptive constructions. In particular,
we improve the lower bound for the asymptotic competitive ratio of online
square packing significantly, raising it from roughly 1.68 to above 1.75.
Comment: WAOA 201
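For context on the online setting, a baseline algorithm such as First Fit must commit each arriving item to a bin immediately, which is what adaptive adversarial constructions exploit. The sketch below is the textbook First Fit heuristic, not the paper's lower-bound construction.

```python
def first_fit(items, capacity=1.0):
    """Online First Fit bin packing.

    Each arriving item is placed into the first open bin with enough
    remaining room; a new bin is opened if none fits. Decisions are
    irrevocable, which is the constraint online lower bounds exploit."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity + 1e-9:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

# Two bins suffice here even though First Fit cannot see the future.
packing = first_fit([0.5, 0.7, 0.5, 0.3])
```

Lower-bound constructions like those in the paper feed the algorithm item sequences adaptively, chosen so that any online placement is forced well above the offline optimum.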
On the Sample Size of Random Convex Programs with Structured Dependence on the Uncertainty (Extended Version)
The "scenario approach" provides an intuitive method to address chance
constrained problems arising in control design for uncertain systems. It
addresses these problems by replacing the chance constraint with a finite
number of sampled constraints (scenarios). The sample size critically depends
on Helly's dimension, a quantity always upper bounded by the number of decision
variables. However, this standard bound can lead to computationally expensive
programs whose solutions are conservative in terms of cost and violation
probability. We derive improved bounds of Helly's dimension for problems where
the chance constraint has certain structural properties. The improved bounds
lower the number of scenarios required for these problems, leading both to
improved objective value and reduced computational complexity. Our results are
generally applicable to Randomized Model Predictive Control of chance
constrained linear systems with additive uncertainty and affine disturbance
feedback. The efficacy of the proposed bound is demonstrated on an inventory
management example.
Comment: Accepted for publication in Automatica
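How Helly's dimension drives the sample size can be made concrete with the classical scenario bound: the smallest N whose binomial tail of order d drops below the confidence parameter. The sketch below implements that standard bound (not the paper's refined ones); lowering d, as the paper does, directly lowers N.

```python
import math

def scenario_sample_size(d, eps, beta):
    """Smallest N with sum_{i<d} C(N, i) eps**i (1-eps)**(N-i) <= beta.

    Classical scenario-approach sample size: d is (an upper bound on)
    Helly's dimension, eps the violation probability, beta the
    confidence parameter. Linear search is fine at this scale."""
    N = d
    while True:
        tail = sum(math.comb(N, i) * eps**i * (1 - eps) ** (N - i)
                   for i in range(d))
        if tail <= beta:
            return N
        N += 1

# With d = 1 the condition reduces to (1 - eps)**N <= beta.
n_small = scenario_sample_size(1, 0.1, 1e-3)
```

Since the bound grows with d, replacing the generic bound "number of decision variables" by a structurally tighter Helly dimension shrinks the sampled program, which is the computational payoff described above.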