A nonmonotone GRASP
A greedy randomized adaptive search procedure (GRASP) is an iterative multistart metaheuristic for difficult combinatorial optimization problems. Each
GRASP iteration consists of two phases: a construction phase, in which a feasible
solution is produced, and a local search phase, in which a local optimum in the
neighborhood of the constructed solution is sought. Repeated applications of the construction procedure yield different starting solutions for the local search, and the best overall solution is kept as the result. The GRASP local search applies iterative
improvement until a locally optimal solution is found. During this phase, starting from the current solution, an improving neighbor solution is accepted and becomes the new current solution. In this paper, we propose a variant of the GRASP framework that
uses a new “nonmonotone” strategy to explore the neighborhood of the current solution. We formally state the convergence of the nonmonotone local search to a locally
optimal solution and illustrate the effectiveness of the resulting Nonmonotone GRASP
on three classical hard combinatorial optimization problems: the maximum cut problem (MAX-CUT), the weighted maximum satisfiability problem (MAX-SAT), and the quadratic assignment problem (QAP).
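The two-phase loop described in the abstract can be sketched as follows, on a toy MAX-CUT instance. This is an illustrative sketch, not the paper's implementation: the purely random construction here stands in for GRASP's greedy randomized (restricted-candidate-list) construction phase, and all names and the instance are made up for the example.

```python
import random

# Toy MAX-CUT instance: (u, v, weight) edges on 4 nodes.
edges = [(0, 1, 3), (0, 2, 1), (1, 2, 2), (2, 3, 4)]
n_nodes = 4

def cut_value(assign):
    """Total weight of edges crossing the cut."""
    return sum(w for u, v, w in edges if assign[u] != assign[v])

def construct(rng):
    """Construction phase (simplified: uniform random side per node)."""
    return [rng.randint(0, 1) for _ in range(n_nodes)]

def local_search(assign):
    """Local search phase: first-improvement hill climbing on node flips."""
    improved = True
    while improved:
        improved = False
        base = cut_value(assign)
        for i in range(n_nodes):
            assign[i] ^= 1          # tentatively move node i to the other side
            if cut_value(assign) > base:
                improved = True     # keep the flip, rescan from the new solution
                break
            assign[i] ^= 1          # undo non-improving flip
    return assign

def grasp(iterations=100, seed=0):
    """Multistart loop: construct, improve, keep the best solution found."""
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(iterations):
        sol = local_search(construct(rng))
        val = cut_value(sol)
        if val > best_val:
            best, best_val = sol, val
    return best, best_val
```

On this instance the optimal cut weight is 9: cut every edge except the weight-1 edge of the triangle {0, 1, 2}, whose three edges cannot all be cut at once. Note that a single hill climb can stall in a local optimum of weight 8; the repeated random restarts are what make the best-of-all-starts result reliable.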
Second-order Quantile Methods for Experts and Combinatorial Games
We aim to design strategies for sequential decision making that adjust to the
difficulty of the learning problem. We study this question both in the setting
of prediction with expert advice, and for more general combinatorial decision
tasks. We are not satisfied with just guaranteeing minimax regret rates, but we
want our algorithms to perform significantly better on easy data. Two popular
ways to formalize such adaptivity are second-order regret bounds and quantile
bounds. The underlying notions of "easy data", which may be paraphrased as "the
learning problem has small variance" and "multiple decisions are useful", are
synergetic. But even though there are sophisticated algorithms that exploit one
of the two, no existing algorithm is able to adapt to both.
In this paper we outline a new method for obtaining such adaptive algorithms,
based on a potential function that aggregates a range of learning rates (which
are essential tuning parameters). By choosing the right prior we construct
efficient algorithms and show that they reap both benefits by proving the first
bounds that are both second-order and incorporate quantiles.
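The kind of potential-based aggregation outlined above can be sketched as follows. This is a hedged illustration, not the paper's algorithm: the specific potential (averaging `eta * exp(eta*R - eta^2*V)` over a finite grid of learning rates with a uniform prior over experts), the grid of `etas`, and the function names are all assumptions made for the example.

```python
import math

def potential_weights(R, V, etas):
    """Expert weights proportional to mean_eta[eta * exp(eta*R_k - eta^2*V_k)],
    where R_k is expert k's cumulative regret and V_k its cumulative squared
    regret. Averaging over a grid of learning rates eta avoids committing to
    a single tuning parameter."""
    K = len(R)
    w = [sum(eta * math.exp(eta * R[k] - eta ** 2 * V[k]) for eta in etas) / len(etas)
         for k in range(K)]
    total = sum(w)
    return [wi / total for wi in w]  # uniform prior over experts cancels here

def run(loss_rounds, etas):
    """Online loop: mix expert losses with the current weights, then update
    each expert's regret and squared-regret statistics."""
    K = len(loss_rounds[0])
    R, V = [0.0] * K, [0.0] * K
    learner_loss = 0.0
    for losses in loss_rounds:
        w = potential_weights(R, V, etas)
        h = sum(wi * li for wi, li in zip(w, losses))  # learner's mixed loss
        learner_loss += h
        for k in range(K):
            r = h - losses[k]       # instantaneous regret against expert k
            R[k] += r
            V[k] += r * r
    return learner_loss, R
```

The second-order flavor shows up in the `-eta^2 * V_k` term, which discounts experts whose advantage is noisy; a non-uniform prior over experts would be the hook for the quantile-style guarantees the abstract mentions.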