
    Optimization in random field Ising models by quantum annealing

    We investigate the properties of quantum annealing applied to the random field Ising model in one, two and three dimensions. The decay rate of the residual energy, defined as the energy excess from the ground state, is found to be $e_{res}\sim \log(N_{MC})^{-\zeta}$ with $\zeta$ in the range $2...6$, depending on the strength of the random field. Systems with ``large clusters'' are harder to optimize as measured by $\zeta$. Our numerical results suggest that in the ordered phase $\zeta=2$, whereas in the paramagnetic phase the annealing procedure can be tuned so that $\zeta\to6$. Comment: 7 pages (2 columns), 9 figures; published with minor changes, one reference updated after the publication
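
    As a minimal illustration of how the residual-energy scaling can be measured (this is not the paper's quantum-annealing procedure), the sketch below anneals a small one-dimensional random-field Ising chain with classical Metropolis updates, computes the exact ground-state energy by dynamic programming, and prints the per-spin residual energy for several values of $N_{MC}$; the chain length, field strength, and cooling schedule are arbitrary choices.

```python
# Sketch only: classical annealing of a 1-D random-field Ising chain, used to
# illustrate measuring the residual energy e_res = (E_final - E_ground)/N as a
# function of the number of Monte Carlo sweeps N_MC. Not the paper's quantum
# annealing; all parameter values below are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

def chain_energy(s, J, h):
    """E = -J * sum_i s_i s_{i+1} - sum_i h_i s_i (open boundaries)."""
    return -J * np.sum(s[:-1] * s[1:]) - np.sum(h * s)

def ground_state_energy(J, h):
    """Exact ground-state energy by dynamic programming along the chain."""
    best = {+1: -h[0], -1: +h[0]}          # energy of site 0 with its spin fixed
    for hi in h[1:]:
        best = {s: min(best[sp] - J * sp * s for sp in (+1, -1)) - hi * s
                for s in (+1, -1)}
    return min(best.values())

def anneal(J, h, n_mc, T0=2.0):
    """n_mc Metropolis sweeps with a slowly decreasing temperature."""
    n = len(h)
    s = rng.choice([-1, 1], size=n)
    for sweep in range(n_mc):
        T = T0 / np.log(sweep + 2.0)        # logarithmic-type schedule (assumption)
        for i in range(n):
            nb = (s[i - 1] if i > 0 else 0) + (s[i + 1] if i < n - 1 else 0)
            dE = 2.0 * s[i] * (J * nb + h[i])
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i] = -s[i]
    return chain_energy(s, J, h)

n, J = 64, 1.0
h = rng.normal(0.0, 1.0, size=n)            # random fields of unit strength
E0 = ground_state_energy(J, h)
for n_mc in (10, 100, 1000, 10000):
    print(f"N_MC = {n_mc:6d}   e_res per spin = {(anneal(J, h, n_mc) - E0) / n:.4f}")
```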

    Simulated annealing with time-dependent energy function via Sobolev inequalities

    We analyze the simulated annealing algorithm with an energy function $U_t$ that depends on time. Assuming some regularity conditions on $U_t$ (especially that $U_t$ does not change too quickly in time), and choosing a logarithmic cooling schedule for the algorithm, we derive bounds on the Radon-Nikodym density of the distribution of the annealing algorithm at time $t$ with respect to the invariant measure $\pi_t$ at time $t$. Moreover, we estimate the entrance time of the algorithm into typical subsets $V$ of the state space in terms of $\pi_t(V^c)$.
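
    As a minimal sketch of the setting (not of the Sobolev-inequality analysis itself), the toy run below applies Metropolis-type simulated annealing with a logarithmic cooling schedule to a one-dimensional energy $U_t(x)$ whose minimum drifts slowly in time; the quadratic form of $U_t$, the drift speed, and the schedule constant are arbitrary choices.

```python
# Toy illustration of simulated annealing with a time-dependent energy U_t and
# a logarithmic cooling schedule. The slowly drifting quadratic U_t is an
# arbitrary stand-in; the point is only that the target measure pi_t changes
# with t while the chain tracks it.
import math
import random

random.seed(0)

HORIZON = 20000

def U(x, t):
    """Time-dependent energy: a quadratic well whose minimum drifts from 0 to 2."""
    m = 2.0 * t / HORIZON
    return (x - m) ** 2

def anneal_with_drift(c=1.0, step=0.5):
    x = 0.0
    for t in range(HORIZON):
        T = c / math.log(t + 2)                  # logarithmic cooling schedule
        y = x + random.gauss(0.0, step)          # symmetric random-walk proposal
        dU = U(y, t) - U(x, t)
        if dU <= 0 or random.random() < math.exp(-dU / T):
            x = y                                # Metropolis acceptance
    return x

print("final state:", anneal_with_drift())       # ends near the drifted minimum at 2.0
```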

    An Analysis of the Value of Information when Exploring Stochastic, Discrete Multi-Armed Bandits

    In this paper, we propose an information-theoretic exploration strategy for stochastic, discrete multi-armed bandits that achieves optimal regret. Our strategy is based on the value of information criterion. This criterion measures the trade-off between policy information and obtainable rewards. High amounts of policy information are associated with exploration-dominant searches of the space and yield high rewards. Low amounts of policy information favor the exploitation of existing knowledge. Information, in this criterion, is quantified by a parameter that can be varied during the search. We demonstrate that a simulated-annealing-like update of this parameter, with a sufficiently fast cooling schedule, leads to an optimal regret that is logarithmic with respect to the number of episodes. Comment: Entropy
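
    The abstract does not spell out the update rule, so the sketch below substitutes a simple stand-in rather than the paper's value-of-information criterion: Boltzmann (soft-max) exploration over empirical arm means, with an exploration parameter that is cooled as the run proceeds, which only illustrates the simulated-annealing-like flavour of shrinking the exploration parameter over episodes. The arm means and the cooling constant are arbitrary choices.

```python
# Stand-in sketch (not the paper's value-of-information criterion): Boltzmann
# exploration over empirical means for Bernoulli arms, with the exploration
# parameter tau annealed towards zero so that the policy moves from
# exploration-dominant to exploitation-dominant behaviour.
import math
import random

random.seed(0)

def annealed_softmax_bandit(means, episodes=5000, c=0.5):
    k = len(means)
    counts = [0] * k
    est = [0.0] * k                                   # running mean reward per arm
    total = 0.0
    for t in range(1, episodes + 1):
        tau = c / math.log(t + 1)                     # annealed exploration parameter
        prefs = [math.exp(m / tau) for m in est]      # soft-max preferences
        r = random.random() * sum(prefs)
        arm, acc = 0, prefs[0]
        while arm < k - 1 and r > acc:                # sample an arm from the soft-max
            arm += 1
            acc += prefs[arm]
        reward = 1.0 if random.random() < means[arm] else 0.0
        counts[arm] += 1
        est[arm] += (reward - est[arm]) / counts[arm]
        total += reward
    return total / episodes, counts

avg, counts = annealed_softmax_bandit([0.2, 0.5, 0.8])
print("average reward:", round(avg, 3), "pull counts:", counts)
```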

    Optimization by Record Dynamics

    Large dynamical changes in thermalizing glassy systems are triggered by trajectories crossing record-sized barriers, a behavior revealing the presence of a hierarchical structure in configuration space. The observation is here turned into a novel local search optimization algorithm dubbed Record Dynamics Optimization, or RDO. RDO uses the Metropolis rule to accept or reject candidate solutions depending on the value of a parameter akin to the temperature, and minimizes the cost function of the problem at hand through cycles where its `temperature' is raised and subsequently decreased in order to expediently generate record high (and low) values of the cost function. Below, RDO is introduced and then tested by searching the ground state of the Edwards-Anderson spin-glass model in two and three spatial dimensions. A popular and highly efficient optimization algorithm, Parallel Tempering (PT), is applied to the same problem as a benchmark. RDO and PT turn out to produce solutions of similar quality for similar numerical effort, but RDO is simpler to program and additionally yields geometrical information on the system's configuration space which is of interest in many applications. In particular, the effectiveness of RDO strongly indicates the presence of the above-mentioned hierarchically organized configuration space, with metastable regions indexed by the cost (or energy) of the transition states connecting them. Comment: 14 pages, 12 figures
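
    A schematic rendering of the temperature-cycling idea (not the authors' exact RDO schedule or record criterion): Metropolis sweeps on a small two-dimensional Edwards-Anderson spin glass with Gaussian couplings, where the control parameter is ramped up and back down in cycles while the record-low energy seen so far is retained. The lattice size, cycle shape, and temperature range are arbitrary choices.

```python
# Schematic sketch of the temperature-cycling idea behind RDO, not the authors'
# exact algorithm: Metropolis sweeps on a small 2-D Edwards-Anderson spin glass
# (Gaussian couplings, periodic boundaries) with a 'temperature' that is ramped
# up and back down each cycle, while the record-low energy seen so far is kept.
import numpy as np

rng = np.random.default_rng(0)

L = 8
Jx = rng.normal(size=(L, L))   # coupling of each site to its right neighbour
Jy = rng.normal(size=(L, L))   # coupling of each site to its lower neighbour
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    return -(np.sum(Jx * s * np.roll(s, -1, axis=1)) +
             np.sum(Jy * s * np.roll(s, -1, axis=0)))

def metropolis_sweep(s, T):
    for i in range(L):
        for j in range(L):
            # local field from the four neighbours (periodic boundaries)
            h = (Jx[i, j] * s[i, (j + 1) % L] + Jx[i, j - 1] * s[i, j - 1] +
                 Jy[i, j] * s[(i + 1) % L, j] + Jy[i - 1, j] * s[i - 1, j])
            dE = 2.0 * s[i, j] * h
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] = -s[i, j]

best = energy(spins)
for cycle in range(20):
    # ramp the temperature up, then back down (cycle shape is an assumption)
    for T in np.concatenate([np.linspace(0.1, 1.5, 10), np.linspace(1.5, 0.1, 10)]):
        metropolis_sweep(spins, T)
        best = min(best, energy(spins))   # record-low cost seen so far
print("record-low energy per spin:", best / L**2)
```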

    Escaping the Local Minima via Simulated Annealing: Optimization of Approximately Convex Functions

    We consider the problem of optimizing an approximately convex function over a bounded convex set in $\mathbb{R}^n$ using only function evaluations. The problem is reduced to sampling from an \emph{approximately} log-concave distribution using the Hit-and-Run method, which is shown to have the same $\mathcal{O}^*$ complexity as sampling from log-concave distributions. In addition to extending the analysis from log-concave distributions to approximately log-concave distributions, the implementation of the 1-dimensional sampler of the Hit-and-Run walk requires new methods and analysis. The algorithm is then based on simulated annealing, which does not rely on first-order conditions and is therefore essentially immune to local minima. We then apply the method to different motivating problems. In the context of zeroth-order stochastic convex optimization, the proposed method produces an $\epsilon$-minimizer after $\mathcal{O}^*(n^{7.5}\epsilon^{-2})$ noisy function evaluations by inducing an $\mathcal{O}(\epsilon/n)$-approximately log-concave distribution. We also consider in detail the case when the "amount of non-convexity" decays towards the optimum of the function. Other applications of the method discussed in this work include private computation of empirical risk minimizers, two-stage stochastic programming, and approximate dynamic programming for online learning. Comment: 27 pages
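
    A minimal sketch of the overall idea, assuming a box-shaped feasible set and using a Metropolis filter in place of the paper's exact 1-dimensional Hit-and-Run sampler: pick a random direction, propose a uniform point on the chord through the box, accept with the Boltzmann ratio, and lower the temperature over time while keeping the best point seen. The test function and the annealing schedule are arbitrary choices for the demo.

```python
# Sketch only, with a Metropolis filter standing in for the paper's exact 1-D
# Hit-and-Run sampler: propose uniformly on the chord through a box-shaped
# feasible set along a random direction, accept with the Boltzmann ratio, and
# anneal the temperature while tracking the best point seen. The test function
# (a quadratic with small oscillations) and the schedule are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Approximately convex test function."""
    return float(np.dot(x, x) + 0.1 * np.sum(np.sin(20.0 * x)))

def chord_through_box(x, d, lo=-1.0, hi=1.0):
    """Range of t such that x + t*d stays inside the box [lo, hi]^n."""
    with np.errstate(divide="ignore"):
        t1, t2 = (lo - x) / d, (hi - x) / d
    return np.max(np.minimum(t1, t2)), np.min(np.maximum(t1, t2))

def hit_and_run_anneal(n=5, steps=20000, T0=1.0):
    x = rng.uniform(-1.0, 1.0, size=n)
    best_x, best_f = x.copy(), f(x)
    for k in range(steps):
        T = T0 / (1.0 + 0.01 * k)                 # simple annealing schedule
        d = rng.normal(size=n)
        d /= np.linalg.norm(d)                    # uniformly random direction
        tmin, tmax = chord_through_box(x, d)
        y = x + rng.uniform(tmin, tmax) * d       # uniform proposal on the chord
        dF = f(y) - f(x)
        if dF <= 0 or rng.random() < np.exp(-dF / T):
            x = y
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f

best_x, best_f = hit_and_run_anneal()
print("best value found:", round(best_f, 4))
```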