Multivariate McCormick relaxations
McCormick (Math Prog 10(1):147–175, 1976) provides the framework for convex/concave relaxations of factorable functions, via rules for the product of functions and compositions of the form F ∘ f, where F is a univariate function. Herein, the composition theorem is generalized to allow multivariate outer functions F, and theory for the propagation of subgradients is presented. The generalization interprets the McCormick relaxation approach as a decomposition method for the auxiliary variable method. In addition to extending the framework, the new result provides a tool for proving relaxations of specific functions. Moreover, a direct consequence is an improved relaxation for the product of two functions, at least as tight as McCormick's result and often tighter. The result also allows the direct relaxation of multilinear products of functions. Furthermore, the composition result is applied to obtain improved convex underestimators for the minimum/maximum and the division of two functions, for which current relaxations are often weak. These cases can be extended to allow composition of a variety of functions for which relaxations have been proposed.
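For reference, the classical McCormick relaxation of a bilinear term w = x*y on the box x in [x^L, x^U], y in [y^L, y^U] (the baseline that the abstract's product result tightens) consists of the four linear envelopes

    w >= x^L*y + x*y^L - x^L*y^L,    w >= x^U*y + x*y^U - x^U*y^U,
    w <= x^U*y + x*y^L - x^U*y^L,    w <= x^L*y + x*y^U - x^L*y^U.

The multivariate composition theorem applies these ideas when x and y are themselves relaxed factors, which is where the improved product relaxation arises.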
Chebyshev model arithmetic for factorable functions
This article presents an arithmetic for the computation of Chebyshev models for factorable functions and an analysis of their convergence properties. Similar to Taylor models, Chebyshev models consist of a pair: a multivariate polynomial approximating the factorable function and an interval remainder term bounding the gap between the function and this polynomial approximant. Propagation rules and local convergence bounds are established for the addition, multiplication and composition operations with Chebyshev models. The global convergence of this arithmetic as the polynomial expansion order increases is also discussed. A generic implementation of Chebyshev model arithmetic is available in the library MC++. It is shown through several numerical case studies that Chebyshev models provide tighter bounds than their Taylor model counterparts, but this comes at the price of extra computational burden.
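To make the idea concrete, a minimal univariate sketch in Python (not MC++; the function name chebyshev_model and the sampling-based remainder estimate are assumptions of this sketch, and the remainder here is sampled rather than rigorously bounded) could look as follows:

    import numpy as np
    from numpy.polynomial.chebyshev import Chebyshev

    def chebyshev_model(f, lo, hi, degree=6, n_samples=2001):
        # Degree-`degree` Chebyshev interpolant of f on [lo, hi]
        p = Chebyshev.interpolate(f, degree, domain=[lo, hi])
        # Sampled (non-rigorous) estimate of |f - p|; a true Chebyshev model
        # arithmetic instead propagates rigorous interval remainders through
        # the factorable expression
        xs = np.linspace(lo, hi, n_samples)
        err = float(np.max(np.abs(f(xs) - p(xs))))
        return p, (-err, err)

    p, remainder = chebyshev_model(np.exp, 0.0, 1.0)
    print(p(0.5), remainder)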
Global Optimization of Gaussian processes
Gaussian processes (Kriging) are interpolating data-driven models that are frequently applied in various disciplines. Often, Gaussian processes are trained on datasets and are subsequently embedded as surrogate models in optimization problems. These optimization problems are nonconvex, and global optimization is desired. However, previous literature observed computational burdens limiting deterministic global optimization to Gaussian processes trained on few data points. We propose a reduced-space formulation for deterministic global optimization with trained Gaussian processes embedded. For optimization, the branch-and-bound solver branches only on the degrees of freedom, and McCormick relaxations are propagated through explicit Gaussian process models. The approach also leads to significantly smaller and computationally cheaper subproblems for lower and upper bounding. To further accelerate convergence, we derive envelopes of common covariance functions for GPs and tight relaxations of acquisition functions used in Bayesian optimization, including expected improvement, probability of improvement, and lower confidence bound. In total, we reduce computational time by orders of magnitude compared to state-of-the-art methods, thus overcoming previous computational burdens. We demonstrate the performance and scaling of the proposed method and apply it to Bayesian optimization with global optimization of the acquisition function and chance-constrained programming. The Gaussian process models, acquisition functions, and training scripts are available open-source within the "MeLOn - Machine Learning Models for Optimization" toolbox (https://git.rwth-aachen.de/avt.svt/public/MeLOn).
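As a small illustration of one acquisition function mentioned above, the standard closed form of expected improvement for minimization can be evaluated in plain Python (this is the textbook formula, not the MeLOn implementation; the function name and arguments are assumptions of this sketch):

    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, f_best):
        # mu, sigma: posterior mean and standard deviation of the trained GP
        # at a candidate point; f_best: best observed objective (minimization)
        sigma = np.maximum(sigma, 1e-12)   # guard against zero predictive variance
        z = (f_best - mu) / sigma
        return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    print(expected_improvement(mu=0.2, sigma=0.1, f_best=0.3))

Globally maximizing such an acquisition function over the GP inputs is exactly the nonconvex subproblem that the reduced-space formulation with propagated McCormick relaxations targets.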
Convex Hull Formulations for Mixed-Integer Multilinear Functions
In this paper, we present convex hull formulations for a mixed-integer, multilinear term/function (MIMF) that features products of multiple continuous and binary variables. We develop two equivalent convex relaxations of an MIMF and study their polyhedral properties in their corresponding higher-dimensional spaces. We numerically observe that the proposed formulations consistently perform better than state-of-the-art relaxation approaches.
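For orientation, the simplest MIMF instance (stated here for reference, not reproduced from the paper) is a single product w = z*x with binary z in {0,1} and continuous x in [x^L, x^U]; its convex hull is described exactly by the linear constraints

    x^L*z <= w <= x^U*z,    x - x^U*(1 - z) <= w <= x - x^L*(1 - z).

The formulations studied in the paper generalize this to products of several continuous and binary variables via relaxations in higher-dimensional spaces.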
Linear Programming Relaxations of Quadratically Constrained Quadratic Programs
We investigate the use of linear programming tools for solving semidefinite programming relaxations of quadratically constrained quadratic problems. Classes of valid linear inequalities are presented, including sparse PSD cuts and principal-minor PSD cuts. Computational results based on instances from the literature are presented.
Comment: Published in IMA Volumes in Mathematics and its Applications, 2012, Volume 15
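As background on the kind of inequality involved (a generic statement of the construction, not a cut list taken from the paper): the semidefinite relaxation lifts the quadratic terms into a matrix X intended to equal x*x^T and requires the augmented matrix M(x, X) = [[1, x^T], [x, X]] to be positive semidefinite. For any fixed vector v, the inequality v^T M(x, X) v >= 0 is then valid and linear in the entries of (x, X); choosing v with only a few nonzero components yields a sparse PSD cut, and restricting attention to small principal submatrices of M(x, X) gives cuts of the principal-minor type in a similar spirit.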
Global Deterministic Optimization with Artificial Neural Networks Embedded
Artificial neural networks (ANNs) are used in various applications for data-driven black-box modeling and subsequent optimization. Herein, we present an efficient method for deterministic global optimization of ANN-embedded optimization problems. The proposed method is based on relaxations of algorithms using McCormick relaxations in a reduced space [SIOPT, 20 (2009), pp. 573-601], including the convex and concave envelopes of the nonlinear activation function of ANNs. The optimization problem is solved using our in-house deterministic global solver MAiNGO. The performance of the proposed method is shown in four optimization examples: an illustrative function, a fermentation process, a compressor plant, and a chemical process optimization. The results show that computational solution time is favorable compared to the global general-purpose optimization solver BARON.
Comment: J Optim Theory Appl (2018
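As a loose illustration of pushing bounds through an explicit ANN in the reduced space of its inputs, the Python sketch below propagates interval bounds layer by layer through a small tanh network. This is plain interval arithmetic, deliberately much weaker than the McCormick relaxations with activation-function envelopes used in the paper, and every name in it is an assumption of this sketch:

    import numpy as np

    def interval_forward(weights, biases, x_lo, x_hi):
        # Natural interval propagation through dense layers with tanh applied
        # at every layer (an illustrative architecture, not the paper's models)
        lo, hi = np.asarray(x_lo, float), np.asarray(x_hi, float)
        for W, b in zip(weights, biases):
            Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
            lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
            lo, hi = np.tanh(lo), np.tanh(hi)   # tanh is monotone increasing
        return lo, hi

    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
    biases = [np.zeros(3), np.zeros(1)]
    print(interval_forward(weights, biases, [-1.0, -1.0], [1.0, 1.0]))

In the reduced-space approach, only the network inputs are branched on, while bounds of this kind (and their much tighter McCormick counterparts) are evaluated for the hidden layers rather than introducing them as optimization variables.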