4 research outputs found

    Improving simulated annealing through derandomization

    We propose and study a version of simulated annealing (SA) on continuous state spaces based on $(t,s)_R$-sequences. The parameter $R\in\bar{\mathbb{N}}$ regulates the degree of randomness of the input sequence, with the case $R=0$ corresponding to IID uniform random numbers and the limiting case $R=\infty$ to $(t,s)$-sequences. Our main result, obtained for rectangular domains, shows that the resulting optimization method, which we refer to as QMC-SA, converges almost surely to the global optimum of the objective function $\varphi$ for any $R\in\mathbb{N}$. When $\varphi$ is univariate, we are in addition able to show that the completely deterministic version of QMC-SA is convergent. A key property of these results is that they do not require objective-dependent conditions on the cooling schedule. As a corollary of our theoretical analysis, we provide a new almost sure convergence result for SA which shares this property under minimal assumptions on $\varphi$. We further explain how our results in fact apply to a broader class of optimization methods including, for example, threshold accepting, for which to our knowledge no convergence results currently exist. We finally illustrate the superiority of QMC-SA over SA algorithms in a numerical study.
    Comment: 33 pages, 4 figures (final version)
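    To make the setup concrete, here is a minimal sketch of plain SA on a rectangular domain. All choices (the logarithmic cooling schedule, the uniform proposal, the test objective) are illustrative assumptions, not the paper's construction; the derandomization idea is that the IID uniform draws below would be replaced by points from a $(t,s)_R$-sequence.

    ```python
    import math
    import random

    def simulated_annealing(phi, lower, upper, n_iter=5000, seed=0):
        """Minimal SA sketch minimizing `phi` on the box [lower, upper]^d.

        Illustrative choices throughout; replacing the IID uniform
        proposals with (t, s)-sequence points is the QMC-SA idea.
        """
        rng = random.Random(seed)
        d = len(lower)
        x = [rng.uniform(lower[i], upper[i]) for i in range(d)]
        fx = phi(x)
        best_x, best_f = x, fx
        for n in range(1, n_iter + 1):
            temp = 1.0 / math.log(n + 1)   # logarithmic cooling (illustrative)
            y = [rng.uniform(lower[i], upper[i]) for i in range(d)]
            fy = phi(y)
            # Metropolis rule: always accept improvements; accept a worse
            # point with probability exp(-(fy - fx) / temp).
            if fy <= fx or rng.random() < math.exp(-(fy - fx) / temp):
                x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
        return best_x, best_f

    # Usage: a multimodal (Rastrigin-type) objective on [-5, 5]^2.
    phi = lambda x: sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)
    x_star, f_star = simulated_annealing(phi, [-5.0, -5.0], [5.0, 5.0])
    ```

    Tracking the best-so-far point separately from the current state is a common practical choice; the convergence statements in the abstract concern the algorithm itself, not this bookkeeping.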

    On the convergence rate issues of general Markov search for global minimum

    This paper focuses on the convergence rate problem of general Markov search for the global minimum. Many existing methods are designed to overcome a very hard problem: how to efficiently localize and approximate the global minimum of a multimodal function f when the only information available is the f-values evaluated at generated points. Because such methods use poor information on f, the following problem may occur: the closer to the optimum, the harder it is to generate a "better" (in the sense of the cost function) state. This paper explores this issue on a theoretical basis. To do so, the concept of lazy convergence for a globally convergent method is introduced: a globally convergent method is called lazy if the probability of generating a better state from one step to another goes to zero with time. Such laziness causes very undesirable convergence properties. This paper shows when an optimization method has to be lazy, and the general results presented cover, in particular, the class of simulated annealing algorithms and monotone random search. Furthermore, some attention is given to accelerated random search and evolution strategies.
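    The lazy-convergence phenomenon is easy to observe empirically for pure monotone random search: as the record value approaches the optimum, improving proposals become rare. The sketch below is our own illustration (objective, domain, and window size are all assumptions, not taken from the paper); it estimates the per-window probability of generating a better state.

    ```python
    import random

    def improvement_rates(n_iter=20000, window=2000, seed=1):
        """Estimate P(proposal improves the record) over successive windows
        for monotone random search on a simple quadratic objective."""
        rng = random.Random(seed)
        f = lambda x: x[0] ** 2 + x[1] ** 2   # global minimum 0 at the origin
        best = float("inf")
        rates, hits = [], 0
        for n in range(1, n_iter + 1):
            y = [rng.uniform(-1.0, 1.0) for _ in range(2)]
            if f(y) < best:                    # monotone: keep only improvements
                best = f(y)
                hits += 1
            if n % window == 0:
                rates.append(hits / window)    # empirical improvement rate
                hits = 0
        return rates

    rates = improvement_rates()
    # The improvement rate decays toward zero as the record nears the optimum,
    # which is exactly the "lazy" behavior described above.
    ```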

    Improving Simulated Annealing through Derandomization

    No full text
    Non UBC · Unreviewed · Author affiliation: Harvard University · Postdoctoral