A nonmonotone GRASP
A greedy randomized adaptive search procedure (GRASP) is an iterative multistart metaheuristic for difficult combinatorial optimization problems. Each GRASP iteration consists of two phases: a construction phase, in which a feasible solution is produced, and a local search phase, in which a local optimum in the neighborhood of the constructed solution is sought. Repeated applications of the construction procedure yield different starting solutions for the local search, and the best overall solution is kept as the result. The GRASP local search applies iterative improvement until a locally optimal solution is found: during this phase, starting from the current solution, an improving neighbor solution is accepted and becomes the new current solution. In this paper, we propose a variant of the GRASP framework that uses a new “nonmonotone” strategy to explore the neighborhood of the current solution. We formally state the convergence of the nonmonotone local search to a locally optimal solution and illustrate the effectiveness of the resulting Nonmonotone GRASP on three classical hard combinatorial optimization problems: the maximum cut problem (MAX-CUT), the weighted maximum satisfiability problem (MAX-SAT), and the quadratic assignment problem (QAP).
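The two GRASP phases can be sketched for MAX-CUT. This is a minimal illustration of the classic (monotone) GRASP loop, not the paper's nonmonotone variant; the edge-list representation, the `alpha` randomization threshold, and the iteration count are assumptions for the example:

```python
import random

def randomized_construction(n, edges, alpha=0.3, rng=random):
    """Construction phase: place vertices one by one on side 0 or 1.
    When the two placements score similarly (within an alpha margin),
    pick randomly -- a crude restricted-candidate-list rule."""
    side = {}
    order = list(range(n))
    rng.shuffle(order)
    for v in order:
        # cut gain of each placement w.r.t. already-placed neighbors
        gain0 = sum(w for (a, b, w) in edges
                    if (a == v and side.get(b) == 1) or (b == v and side.get(a) == 1))
        gain1 = sum(w for (a, b, w) in edges
                    if (a == v and side.get(b) == 0) or (b == v and side.get(a) == 0))
        if abs(gain0 - gain1) <= alpha * max(gain0, gain1, 1):
            side[v] = rng.choice((0, 1))
        else:
            side[v] = 0 if gain0 > gain1 else 1
    return side

def cut_value(side, edges):
    return sum(w for (a, b, w) in edges if side[a] != side[b])

def local_search(side, edges):
    """Local search phase: flip any vertex whose flip improves the cut,
    until no single flip improves (a local optimum)."""
    best = cut_value(side, edges)
    improved = True
    while improved:
        improved = False
        for v in side:
            side[v] ^= 1
            val = cut_value(side, edges)
            if val > best:
                best, improved = val, True
            else:
                side[v] ^= 1  # revert non-improving flip
    return side, best

def grasp_maxcut(n, edges, iters=20, seed=0):
    """Multistart loop: repeat construction + local search, keep the best."""
    rng = random.Random(seed)
    best_side, best_val = None, float("-inf")
    for _ in range(iters):
        side, val = local_search(randomized_construction(n, edges, rng=rng), edges)
        if val > best_val:
            best_side, best_val = dict(side), val
    return best_side, best_val
```

On a unit-weight triangle, every local optimum found this way attains the maximum cut of 2, so the multistart loop returns it from any start.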
First-order regret bounds for combinatorial semi-bandits
We consider the problem of online combinatorial optimization under
semi-bandit feedback, where a learner has to repeatedly pick actions from a
combinatorial decision set in order to minimize the total losses associated
with its decisions. After making each decision, the learner observes the losses
associated with its action, but not other losses. For this problem, there are
several learning algorithms that guarantee that the learner's expected regret
grows as Õ(√(dT)) with the number of rounds T. In this
paper, we propose an algorithm that improves this scaling to
Õ(√(dL*_T)), where L*_T is the total loss of the best
action. Our algorithm is among the first to achieve such guarantees in a
partial-feedback scheme, and the first one to do so in a combinatorial setting. Comment: To appear at COLT 201
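The semi-bandit feedback model above can be illustrated with the importance-weighted loss estimates that such algorithms rely on. This is a minimal sketch, not the paper's algorithm: the product-form decision set (each coordinate included independently), the loss values, and the sampling probabilities are assumptions for the example:

```python
import random

def semi_bandit_round(losses, probs, rng):
    """One round of semi-bandit feedback: include each coordinate i
    independently with probability probs[i], observe ONLY the losses of
    included coordinates, and form the importance-weighted estimate
    est[i] = losses[i] * 1{included} / probs[i], which is unbiased:
    E[est[i]] = probs[i] * losses[i] / probs[i] = losses[i]."""
    included = [rng.random() < p for p in probs]
    est = [losses[i] / probs[i] if included[i] else 0.0
           for i in range(len(losses))]
    return included, est

def mean_estimates(losses, probs, rounds=20000, seed=0):
    """Average the per-round estimates to check unbiasedness empirically."""
    rng = random.Random(seed)
    totals = [0.0] * len(losses)
    for _ in range(rounds):
        _, est = semi_bandit_round(losses, probs, rng)
        totals = [t + e for t, e in zip(totals, est)]
    return [t / rounds for t in totals]
```

The averaged estimates concentrate around the true losses, which is the property that lets exponential-weights-style learners run on estimated rather than fully observed losses.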
Cakewalk Sampling
We study the task of finding good local optima in combinatorial optimization
problems. Although combinatorial optimization is NP-hard in general, locally
optimal solutions are frequently used in practice. Local search methods, however,
typically converge to a limited set of optima that depend on their
initialization. Sampling methods on the other hand can access any valid
solution, and thus can be used either directly or alongside methods of the
former type as a way for finding good local optima. Since the effectiveness of
this strategy depends on the sampling distribution, we derive a robust learning
algorithm that adapts sampling distributions towards good local optima of
arbitrary objective functions. As a first use case, we empirically study the
efficiency with which sampling methods can recover locally maximal cliques in
undirected graphs. Not only do we show that our adaptive sampler outperforms
related methods, but we also show that it can even approach the performance of
established clique algorithms. As a second use case, we consider how greedy
algorithms can be combined with our adaptive sampler, and we demonstrate how
this leads to superior performance in k-medoid clustering. Together, these
findings suggest that our adaptive sampler can provide an effective strategy for
combinatorial optimization problems that arise in practice. Comment: Accepted as a conference paper by AAAI-2020 (oral presentation)
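The idea of adapting a sampling distribution toward good optima can be illustrated with a cross-entropy-style stand-in. This is not Cakewalk's actual update rule; the binary search space, learning rate, elite fraction, and probability clipping are assumptions for the example:

```python
import random

def adaptive_sample_search(f, n, iters=60, samples=50, elite_frac=0.2,
                           lr=0.7, seed=0):
    """Adaptive sampling over {0,1}^n: keep a product Bernoulli
    distribution p, sample candidate solutions from it, and shift p
    toward the coordinate frequencies of the elite (best-scoring)
    samples, so the sampler concentrates on good regions."""
    rng = random.Random(seed)
    p = [0.5] * n                       # start from the uniform distribution
    best_x, best_f = None, float("-inf")
    k = max(1, int(samples * elite_frac))
    for _ in range(iters):
        batch = [[1 if rng.random() < pi else 0 for pi in p]
                 for _ in range(samples)]
        batch.sort(key=f, reverse=True)
        if f(batch[0]) > best_f:
            best_x, best_f = batch[0], f(batch[0])
        elite = batch[:k]
        freq = [sum(x[i] for x in elite) / k for i in range(n)]
        p = [(1 - lr) * pi + lr * fi for pi, fi in zip(p, freq)]
        # clip so every solution stays reachable (no premature collapse)
        p = [min(0.95, max(0.05, pi)) for pi in p]
    return best_x, best_f
```

On a simple objective such as one-max (maximize the number of ones), the distribution quickly concentrates on the optimum; the same loop can wrap any black-box objective, including the clique and k-medoid scores the abstract mentions.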