On the convergence rate issues of general Markov search for global minimum
This paper focuses on the convergence rate problem of general Markov search for a global minimum. Many existing methods are designed to tackle a very hard problem: efficiently localizing and approximating the global minimum of a multimodal function f when the only available information consists of the f-values evaluated at generated points. Because such methods use poor information on f, the following problem may occur: the closer to the optimum, the harder it is to generate a "better" (in the sense of the cost function) state. This paper explores this issue on a theoretical basis. To do so, the concept of lazy convergence for a globally convergent method is introduced: a globally convergent method is called lazy if the probability of generating a better state from one step to the next goes to zero with time. Laziness is the cause of highly undesirable convergence properties. This paper shows when an optimization method has to be lazy, and the presented general results cover, in particular, the class of simulated annealing algorithms and monotone random search. Furthermore, some attention is devoted to accelerated random search and evolution strategies.
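The laziness phenomenon can be observed empirically. The following minimal sketch (not from the paper; the test function f and all names are illustrative assumptions) runs monotone random search on a multimodal function and tracks the empirical rate at which better states are generated, which for a lazy method decays toward zero over time:

```python
import random
import math

def f(x):
    # Illustrative multimodal test function (an assumption, not from the paper).
    return x * x + 10 * math.sin(x) ** 2

def monotone_random_search(steps=10000, sigma=1.0, seed=0):
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    improvements = 0
    for t in range(1, steps + 1):
        candidate = x + rng.gauss(0, sigma)
        if f(candidate) < f(x):  # monotone: accept only strict improvements
            x = candidate
            improvements += 1
        if t % 2000 == 0:
            # Empirical probability of having generated a better state so far;
            # for a lazy method this ratio goes to zero with time.
            print(f"t={t:6d}  x={x:+.4f}  improvement rate={improvements / t:.4f}")
    return x

monotone_random_search()
```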
Sufficient conditions for the convergence of nonautonomous stochastic search for a global minimum
The majority of stochastic optimization algorithms can be written in the general form x_{t+1} = T_t(x_t; y_t), where x_t is the sequence of points and parameters transformed by the algorithm, T_t are the methods of the algorithm, and y_t represents the randomness of the algorithm. We extend the results of papers [11] and [14] to provide some new general conditions under which the algorithm finds a global minimum with probability one.
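As a minimal sketch of this general form (all names and the choice of test function are illustrative assumptions, not taken from the paper), the iteration x_{t+1} = T_t(x_t; y_t) can be written generically and then instantiated with pure random search as one concrete method:

```python
import random

def general_stochastic_search(T, y_sampler, x0, steps):
    """Iterate x_{t+1} = T_t(x_t; y_t) for a generic update rule T and noise source y."""
    x = x0
    for t in range(steps):
        y = y_sampler(t)   # y_t: the randomness injected at step t
        x = T(t, x, y)     # T_t: the (possibly time-dependent) method
    return x

# Example instance: pure random search minimizing f(x) = (x - 3)^2.
f = lambda x: (x - 3.0) ** 2
rng = random.Random(0)
y_sampler = lambda t: rng.uniform(-10.0, 10.0)   # y_t is a fresh uniform sample
T = lambda t, x, y: y if f(y) < f(x) else x      # keep the better of x_t and y_t
print(general_stochastic_search(T, y_sampler, x0=0.0, steps=5000))
```

Other methods in the class (simulated annealing, evolution strategies) differ only in the choice of T_t and the distribution of y_t, which is what makes the general form useful for stating convergence conditions.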