Gradient-Free Methods for Saddle-Point Problem
In this paper, we generalize the approach of Gasnikov et al., 2017, which allows
(stochastic) convex optimization problems to be solved with an inexact
gradient-free oracle, to the convex-concave saddle-point problem. The proposed
approach works at least as well as the best existing approaches. But for a special
set-up (simplex-type constraints and closeness of the Lipschitz constants in the
1- and 2-norms), our approach reduces the required number of oracle calls
(function calculations) by a factor of $n/\log n$. Our method uses a stochastic
approximation of the gradient via finite differences. In this case, the
function must be specified not only on the optimization set itself but also in a
certain neighbourhood of it. In the second part of the paper, we analyze the
case when such an assumption cannot be made: we propose a general approach for
adapting the method to this setting, and we also apply this
approach to particular cases of some classical sets.
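For intuition, gradient-free oracles of this kind are typically built from a randomized two-point finite-difference estimator. The sketch below illustrates that construction on a toy bilinear saddle-point function; the estimator, the step size, and the matrix A are hypothetical illustrations, not the paper's exact scheme or constants.

```python
import numpy as np

def two_point_grad_estimate(F, z, tau=1e-4, rng=None):
    """Randomized two-point finite-difference gradient estimate of F at z.

    Draws a random direction e on the unit sphere and returns
    d * (F(z + tau*e) - F(z - tau*e)) / (2*tau) * e.  Only function values
    are used, and F must be defined in a tau-neighbourhood of the feasible
    set, which is exactly the assumption discussed in the abstract.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = z.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)
    return d * (F(z + tau * e) - F(z - tau * e)) / (2.0 * tau) * e

# Toy convex-concave example F(x, y) = x^T A y (hypothetical constants):
# zeroth-order gradient descent in x and ascent in y.
A = np.array([[1.0, 2.0], [0.5, -1.0]])
x, y = np.ones(2), np.ones(2)
eta = 0.05
for _ in range(200):
    gx = two_point_grad_estimate(lambda u: u @ A @ y, x)  # estimates grad_x F
    gy = two_point_grad_estimate(lambda v: x @ A @ v, y)  # estimates grad_y F
    x, y = x - eta * gx, y + eta * gy
```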
Landscape Surrogate: Learning Decision Losses for Mathematical Optimization Under Partial Information
Recent works in learning-integrated optimization have shown promise in
settings where the optimization problem is only partially observed or where
general-purpose optimizers perform poorly without expert tuning. By learning an
optimizer $g$ to tackle these challenging problems with $f$ as the
objective, the optimization process can be substantially accelerated by
leveraging past experience. The optimizer can be trained with supervision from
known optimal solutions or implicitly by optimizing the compound function
$f \circ g$. The implicit approach may not require optimal solutions as
labels and is capable of handling problem uncertainty; however, it is slow to
train and deploy due to frequent calls to the optimizer $g$ during both
training and testing. The training is further challenged by sparse gradients of
$g$, especially for combinatorial solvers. To address these
challenges, we propose using a smooth and learnable Landscape Surrogate $\mathcal{M}$ as
a replacement for $f \circ g$. This surrogate, learnable by neural
networks, can be computed faster than the solver $g$, provides dense
and smooth gradients during training, can generalize to unseen optimization
problems, and is efficiently learned via alternating optimization. We test our
approach on both synthetic problems, including shortest path and
multidimensional knapsack, and real-world problems such as portfolio
optimization, achieving comparable or superior objective values compared to
state-of-the-art baselines while reducing the number of calls to $g$.
Notably, our approach outperforms existing methods for computationally
expensive high-dimensional problems.
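A minimal sketch of what such alternating optimization could look like in PyTorch, assuming a neural surrogate that is fit to the expensive compound loss at visited points while the upstream predictor is updated through the surrogate's dense gradients; the module shapes, the placeholder `true_decision_loss`, and the toy data are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the alternating scheme: a neural surrogate M stands
# in for the expensive compound loss f(g(c)), where c are predicted problem
# parameters.  All shapes, modules, and the toy loss below are illustrative.
surrogate = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
predictor = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
opt_m = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)

def true_decision_loss(c):
    """Placeholder for f(g(c)): run the solver g on parameters c, then
    evaluate the objective f.  Expensive and possibly non-differentiable."""
    return (c.detach() ** 2).sum(dim=1, keepdim=True)  # stand-in only

for step in range(1000):
    x = torch.randn(8, 32)       # observed features (toy data)
    c = predictor(x)             # predicted problem parameters

    # (1) Fit the surrogate to the true decision loss at the visited points.
    fit = ((surrogate(c.detach()) - true_decision_loss(c)) ** 2).mean()
    opt_m.zero_grad()
    fit.backward()
    opt_m.step()

    # (2) Update the predictor through the smooth surrogate instead of f∘g.
    loss = surrogate(c).mean()
    opt_p.zero_grad()
    loss.backward()
    opt_p.step()
```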
Multiagent cooperation for solving global optimization problems: an extendible framework with example cooperation strategies
This paper proposes the use of multiagent cooperation for solving global optimization problems through the introduction of a new multiagent environment, MANGO. The strength of the environment lies in its flexible structure based on communicating software agents that attempt to solve a problem cooperatively. This structure allows the execution of a wide range of global optimization algorithms described as a set of interacting operations. At one extreme, MANGO welcomes an individual non-cooperating agent, which is basically the traditional way of solving a global optimization problem. At the other extreme, autonomous agents existing in the environment cooperate as they see fit during run time. We explain the development and communication tools provided in the environment as well as examples of agent realizations and cooperation scenarios. We also show how the multiagent structure is more effective than having a single nonlinear optimization algorithm with randomly selected initial points.
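As a rough illustration of the kind of cooperation strategy such an environment can host (not MANGO's actual API), the sketch below has several local-search agents publish their incumbent solutions to a shared board and adopt the best incumbent they see; all class and function names are hypothetical.

```python
import random

def sphere(x):                      # toy objective to minimize
    return sum(v * v for v in x)

class Agent:
    def __init__(self, dim, board):
        self.x = [random.uniform(-5, 5) for _ in range(dim)]
        self.board = board          # shared list acting as the message channel

    def local_step(self):
        # simple random-perturbation local search (placeholder operator)
        cand = [v + random.gauss(0, 0.1) for v in self.x]
        if sphere(cand) < sphere(self.x):
            self.x = cand

    def cooperate(self):
        self.board.append(list(self.x))        # publish current incumbent
        best = min(self.board, key=sphere)     # read everyone's incumbents
        if sphere(best) < sphere(self.x):
            self.x = list(best)                # adopt a better solution

board = []
agents = [Agent(dim=3, board=board) for _ in range(4)]
for it in range(500):
    for a in agents:
        a.local_step()
    if it % 25 == 0:                # periodic cooperation round
        for a in agents:
            a.cooperate()
print(min(sphere(a.x) for a in agents))
```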
A Lower Bound for the Optimization of Finite Sums
This paper presents a lower bound for optimizing a finite sum of $n$
functions, where each function is $L$-smooth and the sum is $\mu$-strongly
convex. We show that no algorithm can reach an error $\varepsilon$ in minimizing
all functions from this class in fewer than
$\Omega\!\left(n + \sqrt{n(\kappa - 1)}\,\log(1/\varepsilon)\right)$ iterations, where $\kappa = L/\mu$ is a
surrogate condition number. We then compare this lower bound to upper bounds
for recently developed methods specializing to this setting. When the functions
involved in this sum are not arbitrary, but based on i.i.d. random data, then
we further contrast these complexity results with those for optimal first-order
methods to directly optimize the sum. The conclusion we draw is that a lot of
caution is necessary for an accurate comparison, and we identify machine learning
scenarios where the new methods help computationally.
Comment: Added an erratum; we are currently working on extending the result to
randomized algorithms.
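For reference, the problem class and bound discussed above can be written in standard notation as follows; this is a transcription under the usual conventions, not the paper's exact statement.

```latex
% Finite-sum problem class and the stated lower bound (standard notation).
\[
  \min_{x \in \mathbb{R}^d} \; F(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x),
  \qquad f_i \ L\text{-smooth}, \qquad F \ \mu\text{-strongly convex},
  \qquad \kappa = L/\mu .
\]
\[
  \text{Iteration complexity to reach error } \varepsilon:\quad
  \Omega\!\left(n + \sqrt{n(\kappa - 1)}\,\log(1/\varepsilon)\right).
\]
```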
Semi-proximal Mirror-Prox for Nonsmooth Composite Minimization
We propose a new first-order optimization algorithm to solve high-dimensional
non-smooth composite minimization problems. Typical examples of such problems
have an objective that decomposes into a non-smooth empirical risk part and a
non-smooth regularization penalty. The proposed algorithm, called Semi-Proximal
Mirror-Prox, leverages the Fenchel-type representation of one part of the
objective while handling the other part of the objective via linear
minimization over the domain. The algorithm stands in contrast with more
classical proximal gradient algorithms with smoothing, which require the
computation of proximal operators at each iteration and can therefore be
impractical for high-dimensional problems. We establish the theoretical
convergence rate of Semi-Proximal Mirror-Prox, which exhibits the optimal
complexity bounds, i.e. $O(1/\varepsilon^2)$, for the number of calls to the linear
minimization oracle. We present promising experimental results showing the
merits of the approach in comparison to competing methods.
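To illustrate why relying on a linear minimization oracle can pay off in high dimensions, the sketch below contrasts the LMO of the $\ell_1$ ball with the corresponding proximal (soft-thresholding) operator; this is a generic illustration under assumed notation, not the paper's algorithm.

```python
import numpy as np

# Over the l1 ball of radius r, a linear minimization oracle only needs the
# largest-magnitude coordinate of the gradient, whereas a proximal step
# touches the whole vector and can dominate the cost in high dimensions.
def lmo_l1(grad, r=1.0):
    """argmin_{||s||_1 <= r} <grad, s>: a signed, scaled coordinate vector."""
    s = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    s[i] = -r * np.sign(grad[i])
    return s

def prox_l1(v, lam):
    """Soft-thresholding, the proximal operator of lam * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

g = np.random.randn(10_000)
print(lmo_l1(g)[:5], prox_l1(g, 0.5)[:5])
```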