436 research outputs found

    Nonautonomous stochastic search in global optimization


    A short introduction to stochastic optimization

    We present some typical algorithms used for finding the global minimum/maximum of a function defined on a compact finite-dimensional set, discuss commonly observed procedures for assessing and comparing the algorithms' performance, and quote theoretical results on the convergence of a broad class of stochastic algorithms.
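One of the typical algorithms surveyed in such introductions is simulated annealing. The following is a minimal, hedged sketch of the method for a one-dimensional multimodal function (the test function and all parameters are illustrative choices, not taken from the paper):

```python
import math
import random

def simulated_annealing(f, lo, hi, n_iters=20000, t0=1.0, seed=0):
    """Minimize f on [lo, hi] with a basic simulated annealing scheme."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    fx = f(x)
    best_x, best_f = x, fx
    for t in range(1, n_iters + 1):
        temp = t0 / math.log(t + 1)                     # logarithmic cooling schedule
        y = min(hi, max(lo, x + rng.gauss(0.0, 0.1)))   # local Gaussian proposal
        fy = f(y)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if fy < fx or rng.random() < math.exp(-(fy - fx) / temp):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# multimodal test function with global minimum at x = 0
x, fx = simulated_annealing(lambda z: z * z + 0.3 * math.sin(10 * z) ** 2, -5, 5)
```

The logarithmic cooling schedule is the classical choice for which convergence results of the kind quoted in the abstract are usually stated; practical implementations often cool faster at the cost of the theoretical guarantee.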

    Transport in time-dependent dynamical systems: Finite-time coherent sets

    We study the transport properties of nonautonomous chaotic dynamical systems over a finite time duration. We are particularly interested in those regions that remain coherent and relatively non-dispersive over finite periods of time, despite the chaotic nature of the system. We develop a novel probabilistic methodology based upon transfer operators that automatically detects maximally coherent sets. The approach is very simple to implement, requiring only singular vector computations of a matrix of transitions induced by the dynamics. We illustrate our new methodology on an idealized stratospheric flow and in two- and three-dimensional analyses of European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data.
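The core computational step described in the abstract, estimating a transition matrix between grid boxes and reading off structure from its singular vectors, can be sketched in a few lines. The toy diffusion dynamics below is an illustrative stand-in, not the stratospheric flow or ECMWF pipeline from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_samples = 20, 200

def step(x):
    # toy dynamics: points diffuse slightly within [0, 1]
    return np.clip(x + 0.02 * rng.standard_normal(x.shape), 0.0, 1.0)

# estimate a row-stochastic transition matrix between grid boxes
P = np.zeros((n_bins, n_bins))
edges = np.linspace(0.0, 1.0, n_bins + 1)
for i in range(n_bins):
    x0 = rng.uniform(edges[i], edges[i + 1], n_samples)   # seed points in box i
    x1 = step(x0)
    idx = np.clip(np.digitize(x1, edges) - 1, 0, n_bins - 1)
    P[i] = np.bincount(idx, minlength=n_bins) / n_samples

# the second singular vectors partition state space into two
# maximally coherent sets (here: left vs right half of the interval)
U, s, Vt = np.linalg.svd(P)
labels = U[:, 1] > 0
```

The sign pattern of the second left singular vector assigns each box to one of two coherent sets; the paper's method applies the same idea to transition matrices induced by genuinely time-dependent flows.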

    Sufficient conditions for the convergence of nonautonomous stochastic search for a global minimum

    The majority of stochastic optimization algorithms can be written in the general form $x_{t+1} = T_t(x_t, y_t)$, where $x_t$ is a sequence of points and parameters which are transformed by the algorithm, $T_t$ are the methods of the algorithm, and $y_t$ represents the randomness of the algorithm. We extend the results of papers [11] and [14] to provide some new general conditions under which the algorithm finds a global minimum with probability one.
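The general scheme $x_{t+1} = T_t(x_t, y_t)$ can be made concrete with a minimal sketch (function, step schedule, and parameters are illustrative assumptions): a nonautonomous random search where the time-dependent map $T_t$ keeps the better of the current point and a perturbation whose scale shrinks with $t$.

```python
import random

def nonautonomous_search(f, x0, n_iters=5000, seed=1):
    """Monotone random search with a time-dependent proposal radius."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for t in range(1, n_iters + 1):
        sigma = 1.0 / t ** 0.5             # the time-dependent part of T_t
        y = x + rng.gauss(0.0, sigma)      # the random input y_t
        if f(y) < fx:                      # T_t: accept only improvements
            x, fx = y, f(y)
    return x, fx

x, fx = nonautonomous_search(lambda z: (z - 2.0) ** 2, x0=0.0)
```

Because the transformation changes with $t$ (here through the shrinking proposal radius), the iteration is nonautonomous, which is exactly the setting the paper's convergence conditions address.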

    Convergence of nonautonomous evolutionary algorithm

    We present a general criterion guaranteeing the stochastic convergence of a wide class of nonautonomous evolutionary algorithms used for finding the global minimum of a continuous function. This paper is an extension of paper [6], where the autonomous case was presented. Our main tool here is a cocycle system defined on the space of probability measures and its stability properties.

    Optimal Piecewise-Linear Approximation of the Quadratic Chaotic Dynamics

    This paper shows the influence of piecewise-linear approximation on the global dynamics associated with autonomous third-order dynamical systems with quadratic vector fields. A novel method for optimal nonlinear function approximation preserving the system behavior is proposed and experimentally verified. This approach is based on the calculation of the metric dimension of the state attractor inside a stochastic optimization routine. The approximated systems are compared to the originals by means of numerical integration. Real electronic circuits representing the individual dynamical systems are derived using classical as well as integrator-based synthesis and verified by time-domain analysis in the OrCAD PSpice simulator. The universality of the proposed method is briefly discussed, especially from the viewpoint of higher-order dynamical systems. Future topics and perspectives are also provided.
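A heavily simplified, one-dimensional sketch of the idea of tuning a piecewise-linear approximation with a stochastic optimization routine follows. The real method optimizes an attractor's metric dimension; here, as an illustrative assumption, we instead minimize the plain approximation error of a quadratic nonlinearity:

```python
import random

breaks = [-1.0, -0.5, 0.0, 0.5, 1.0]       # fixed breakpoints on [-1, 1]

def pwl(nodes, x):
    """Linear interpolation between (breaks[i], nodes[i]) pairs."""
    for i in range(len(breaks) - 1):
        if x <= breaks[i + 1]:
            w = (x - breaks[i]) / (breaks[i + 1] - breaks[i])
            return (1 - w) * nodes[i] + w * nodes[i + 1]
    return nodes[-1]

def cost(nodes):
    # maximum deviation from the quadratic x^2 on a grid over [-1, 1]
    xs = [i / 50 - 1.0 for i in range(101)]
    return max(abs(pwl(nodes, x) - x * x) for x in xs)

# stochastic optimization routine: random search over the node values
rng = random.Random(0)
nodes = [b * b for b in breaks]            # start from exact values at the breaks
best = cost(nodes)
for _ in range(3000):
    cand = [v + rng.gauss(0.0, 0.02) for v in nodes]   # perturb node values
    c = cost(cand)
    if c < best:
        nodes, best = cand, c
```

Shifting the node values away from the exact function values reduces the worst-case error below that of plain interpolation, which is the same kind of gain the paper seeks with respect to its dimension-based cost.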

    Discrete-time recurrent neural networks with time-varying delays: Exponential stability analysis

    This is the post-print version of the article. The official published version can be obtained from the link below. Copyright 2007 Elsevier Ltd. This Letter is concerned with the analysis problem of exponential stability for a class of discrete-time recurrent neural networks (DRNNs) with time delays. The delay is time-varying, and the activation functions are assumed to be neither differentiable nor strictly monotonic. Furthermore, the description of the activation functions is more general than the recently commonly used Lipschitz conditions. Under such mild conditions, we first prove the existence of the equilibrium point. Then, by employing a Lyapunov–Krasovskii functional, a unified linear matrix inequality (LMI) approach is developed to establish sufficient conditions for the DRNNs to be globally exponentially stable. It is shown that the delayed DRNNs are globally exponentially stable if a certain LMI is solvable, where the feasibility of such an LMI can be easily checked by using the numerically efficient Matlab LMI Toolbox. A simulation example is presented to show the usefulness of the derived LMI-based stability condition. This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the Nuffield Foundation of the UK under Grant NAL/00630/G, the Alexander von Humboldt Foundation of Germany, the Natural Science Foundation of Jiangsu Education Committee of China (05KJB110154), the NSF of Jiangsu Province of China (BK2006064), and the National Natural Science Foundation of China (10471119).
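The paper's stability certificate is an LMI checked with the Matlab LMI Toolbox. As a much simpler, hedged illustration of the same style of certificate (not the paper's delayed-DRNN condition), one can certify global exponential stability of a delay-free linear recurrence x_{k+1} = A x_k by solving the discrete Lyapunov equation and verifying positive definiteness:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative system matrix (spectral radius < 1), not from the paper
A = np.array([[0.5, 0.2],
              [0.1, 0.3]])
Q = np.eye(2)

# Solve A^T P A - P = -Q; feasibility of this simplest Lyapunov LMI
# (P > 0) certifies exponential stability of x_{k+1} = A x_k.
P = solve_discrete_lyapunov(A.T, Q)
eigs = np.linalg.eigvalsh((P + P.T) / 2)
stable = bool(np.all(eigs > 0))
```

The delayed-DRNN condition in the Letter has the same structure, a matrix inequality whose feasibility implies a Lyapunov–Krasovskii functional decays, but involves additional blocks accounting for the time-varying delay.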

    On the convergence rate issues of general Markov search for global minimum

    This paper focuses on the convergence rate problem of general Markov search for a global minimum. Many existing methods are designed to overcome a very hard problem: how to efficiently localize and approximate the global minimum of a multimodal function f when the only information that can be used is the f-values evaluated at generated points. Because such methods use poor information on f, the following problem may occur: the closer to the optimum, the harder it is to generate a "better" (in the sense of the cost function) state. This paper explores this issue on a theoretical basis. To do so, the concept of lazy convergence for a globally convergent method is introduced: a globally convergent method is called lazy if the probability of generating a better state from one step to another goes to zero with time. Such laziness is the cause of very undesirable convergence properties. This paper shows when an optimization method has to be lazy, and the presented general results cover, in particular, the class of simulated annealing algorithms and monotone random search. Furthermore, some attention is given to accelerated random search and evolution strategies.
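The lazy-convergence phenomenon is easy to observe empirically. In the sketch below (test function and parameters are illustrative choices), monotone random search with a fixed proposal distribution accepts many improvements early on, but as the current point nears the minimum, the probability of drawing a better point shrinks and improvements all but stop:

```python
import random

rng = random.Random(0)
f = lambda z: z * z
x, fx = 3.0, 9.0
improvements = []                     # iteration indices of accepted moves
for t in range(1, 20001):
    y = x + rng.gauss(0.0, 0.5)      # fixed, non-adaptive proposal
    if f(y) < fx:                    # monotone: accept only improvements
        x, fx = y, f(y)
        improvements.append(t)

early = sum(1 for t in improvements if t <= 1000)
late = sum(1 for t in improvements if t > 19000)
```

Near the optimum, a better point must land in an interval of width proportional to |x|, so the acceptance probability decays with the remaining error, precisely the mechanism the paper formalizes for simulated annealing and monotone random search.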