Exploring Local Optima in Schematic Layout
In search-based graph drawing methods there are typically a number of parameters that control the search algorithm. These parameters do not affect the fitness function, but nevertheless have an impact on the final layout. One such search method is hill climbing, and, in the context of schematic layout, we explore how varying three parameters (grid spacing, the starting distance of allowed node movement, and the number of iterations) affects the resultant diagram. Although we cannot characterize schematics completely, and so cannot yet automatically assign parameters for diagrams, we observe that parameter values that enlarge the search space also tend to improve the final layout. We conclude that hill-climbing methods for schematic layout are more prone to reaching local optima than previously expected, and that a wider search, as described in this paper, can mitigate this, resulting in a better layout.
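
As a rough illustration of how these three parameters could interact, here is a minimal hill-climbing sketch in Python (hypothetical names and fitness function; not the authors' implementation). A coarser grid, a larger starting movement distance, or more iterations all widen the portion of the search space the climber can reach:

    import random

    def hill_climb_layout(nodes, fitness, grid_spacing=10, start_radius=40, iterations=100):
        # Random initial placement, snapped to the grid.
        pos = {n: (random.randrange(0, 500, grid_spacing),
                   random.randrange(0, 500, grid_spacing))
               for n in nodes}
        best = fitness(pos)
        radius = start_radius  # starting distance of allowed node movement
        for _ in range(iterations):
            improved = False
            for n in nodes:
                home = pos[n]
                # Try every grid point within the current movement radius.
                for dx in range(-radius, radius + 1, grid_spacing):
                    for dy in range(-radius, radius + 1, grid_spacing):
                        pos[n] = (home[0] + dx, home[1] + dy)
                        score = fitness(pos)
                        if score > best:
                            best, home = score, pos[n]
                            improved = True
                pos[n] = home  # keep the best position found for this node
            radius = max(grid_spacing, radius // 2)  # shrink the radius over time
            if not improved:
                break  # no single-node move helps: a local optimum
        return pos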
Performance Guarantees of Local Search for Multiprocessor Scheduling
Increasing interest has recently been shown in analyzing the worst-case behavior of local search algorithms. In particular, the quality of local optima and the time needed to find them by the simplest form of local search have been studied. This paper deals with the worst-case performance of local search algorithms for makespan minimization on parallel machines. We analyze the quality of the local optima obtained by iterative improvement over the jump, swap, multi-exchange, and the newly defined push neighborhoods. Finally, for the jump neighborhood we provide bounds on the number of local search steps required to find a local optimum.
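
For concreteness, a minimal sketch (our illustration, not the paper's code) of iterative improvement over the jump neighborhood: a job jumps to another machine whenever its new machine would end up strictly less loaded than its old one was, and the search stops at a jump-optimal schedule. Each improving jump strictly decreases the sum of squared machine loads, so the loop terminates:

    def jump_local_search(jobs, machines):
        # Naive initial schedule: assign jobs round-robin.
        assign = [j % machines for j in range(len(jobs))]
        load = [0.0] * machines
        for j, p in enumerate(jobs):
            load[assign[j]] += p
        improved = True
        while improved:
            improved = False
            for j, p in enumerate(jobs):
                src = assign[j]
                for dst in range(machines):
                    # A jump is improving when the job's new machine ends up
                    # strictly less loaded than its old machine was.
                    if dst != src and load[dst] + p < load[src]:
                        load[src] -= p
                        load[dst] += p
                        assign[j] = dst
                        improved = True
                        break
        return assign, max(load)  # a jump-optimal schedule and its makespan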
Efficient approaches for escaping higher order saddle points in non-convex optimization
Local search heuristics for non-convex optimizations are popular in applied
machine learning. However, in general it is hard to guarantee that such
algorithms even converge to a local minimum, due to the existence of
complicated saddle point structures in high dimensions. Many functions have
degenerate saddle points such that the first and second order derivatives
cannot distinguish them with local optima. In this paper we use higher order
derivatives to escape these saddle points: we design the first efficient
algorithm guaranteed to converge to a third order local optimum (while existing
techniques are at most second order). We also show that it is NP-hard to extend
this further to finding fourth order local optima
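
A one-dimensional example of the degeneracy in question (ours, not from the paper): for f(x) = x^3 at x = 0 we have f'(0) = 0 and f''(0) = 0, so first- and second-order tests cannot tell this point apart from a local minimum, yet f'''(0) = 6 ≠ 0 certifies a descent direction. Only a third-order check detects that x = 0 is not a local optimum.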
- …
