    A multi-agent based evolutionary algorithm in non-stationary environments

    In this paper, a multi-agent based evolutionary algorithm (MAEA) is introduced to solve dynamic optimization problems. The agents simulate the features of living organisms and co-evolve to find the optimum. All agents live in a lattice-like environment, where each agent is fixed on a lattice point. To increase their energy, agents can compete with their neighbors and can also acquire knowledge based on statistical information. To maintain the diversity of the population, random immigrants and adaptive primal-dual mapping schemes are used. Simulation experiments on a set of dynamic benchmark problems show that MAEA obtains better performance in non-stationary environments than several peer genetic algorithms. This work was supported by the Key Program of National Natural Science Foundation of China under Grant No. 70431003, the Science Fund for Creative Research Group of the National Natural Science Foundation of China under Grant No. 60521003, the National Science and Technology Support Plan of China under Grant No. 2006BAH02A09, and the Engineering and Physical Sciences Research Council of the United Kingdom under Grant No. EP/E060722/1.
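
    For illustration, here is a minimal Python sketch of the lattice-based neighborhood competition and random-immigrants ideas described above. The sphere objective, lattice size, immigrant rate, and mutation scale are illustrative assumptions, not the paper's settings; the full MAEA additionally uses knowledge-based learning and the adaptive primal-dual mapping scheme, which this sketch omits.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L, DIM = 10, 5            # 10x10 agent lattice over a 5-dimensional search space
    IMMIGRANT_RATE = 0.1      # fraction of agents replaced by random immigrants

    def energy(x):
        # Energy is the negated sphere objective, so higher energy is better.
        return -np.sum(x ** 2, axis=-1)

    pop = rng.uniform(-5, 5, size=(L, L, DIM))

    for generation in range(100):
        e = energy(pop)
        # Neighborhood competition: each agent is compared against its best
        # von Neumann neighbor on the toroidal lattice; losers move toward
        # the winner with small mutation noise.
        for i in range(L):
            for j in range(L):
                nbrs = [((i - 1) % L, j), ((i + 1) % L, j),
                        (i, (j - 1) % L), (i, (j + 1) % L)]
                bi, bj = max(nbrs, key=lambda n: e[n])
                if e[bi, bj] > e[i, j]:
                    pop[i, j] = pop[bi, bj] + rng.normal(0.0, 0.1, DIM)
        # Random immigrants keep the population diverse, which matters when
        # the optimum moves in a non-stationary environment.
        n_imm = int(IMMIGRANT_RATE * L * L)
        flat = pop.reshape(-1, DIM)
        flat[rng.choice(L * L, n_imm, replace=False)] = rng.uniform(-5, 5, (n_imm, DIM))

    print("best energy found:", energy(pop).max())
    ```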

    A unified analysis of stochastic momentum methods for deep learning

    Stochastic momentum methods have been widely adopted for training deep neural networks. However, the theoretical analysis of their convergence on the training objective and of the generalization error for prediction is still under-explored. This paper aims to bridge the gap between practice and theory by analyzing the stochastic gradient (SG) method and two famous stochastic momentum variants, the stochastic heavy-ball (SHB) method and the stochastic variant of Nesterov's accelerated gradient (SNAG) method. We propose a framework that unifies the three variants. We then derive convergence rates for the norm of the gradient in the non-convex optimization setting and analyze the generalization performance through the uniform stability approach. In particular, the convergence analysis of the training objective shows that SHB and SNAG have no advantage over SG. However, the stability analysis shows that the momentum term can improve the stability of the learned model and hence its generalization performance. These theoretical insights verify the common wisdom and are corroborated by our empirical analysis on deep learning.
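
    For reference, here is a small sketch of the three updates on a noisy quadratic. It uses the textbook formulations of SG, SHB, and Nesterov momentum, not necessarily the paper's exact unified parameterization; the step size, momentum coefficient, and toy objective are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def stoch_grad(x):
        # Gradient of f(x) = 0.5 * ||x||^2 plus Gaussian noise, standing in
        # for a mini-batch gradient estimate.
        return x + 0.1 * rng.normal(size=x.shape)

    def train(method, alpha=0.1, beta=0.9, steps=200):
        x = np.ones(10)
        v = np.zeros_like(x)                    # momentum buffer
        for _ in range(steps):
            if method == "snag":
                g = stoch_grad(x + beta * v)    # gradient at the look-ahead point
            else:
                g = stoch_grad(x)               # gradient at the current iterate
            v = (beta * v if method != "sg" else 0.0) - alpha * g
            x = x + v
        return 0.5 * x @ x                      # final objective value

    for m in ("sg", "shb", "snag"):
        print(m, train(m))
    ```

    Setting beta to zero (or method to "sg") recovers plain SG, which is why a single momentum parameter suffices to interpolate among the three methods.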

    Global Hilbert Expansion for the Vlasov-Poisson-Boltzmann System

    We study the Hilbert expansion for small Knudsen number $\varepsilon$ for the Vlasov-Poisson-Boltzmann system for an electron gas. The zeroth-order term takes the form of a local Maxwellian: $$F_{0}(t,x,v)=\frac{\rho_{0}(t,x)}{(2\pi \theta_{0}(t,x))^{3/2}}\, e^{-|v-u_{0}(t,x)|^{2}/2\theta_{0}(t,x)}, \qquad \theta_{0}(t,x)=K\rho_{0}^{2/3}(t,x).$$ Our main result states that if the Hilbert expansion is valid at $t=0$ for well-prepared small initial data with irrotational velocity $u_0$, then it is valid for $0\leq t\leq \varepsilon^{-\frac{1}{2}\frac{2k-3}{2k-2}}$, where $\rho_{0}(t,x)$ and $u_{0}(t,x)$ satisfy the Euler-Poisson system for a monatomic gas, $\gamma=5/3$.
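
    For orientation, the isentropic Euler-Poisson system referenced here has the schematic form below; this is a hedged sketch rather than the paper's exact statement, since the sign of the field term and the background density $\bar\rho$ depend on the normalization chosen for the electron charge. The pressure law $p_0=\rho_0\theta_0=K\rho_0^{5/3}$, which follows from $\theta_0=K\rho_0^{2/3}$, is where the monatomic exponent $\gamma=5/3$ enters.

    $$\begin{aligned}
    &\partial_t \rho_0 + \nabla_x\cdot(\rho_0 u_0) = 0,\\
    &\rho_0\bigl(\partial_t u_0 + u_0\cdot\nabla_x u_0\bigr) + \nabla_x p_0 = \rho_0\,\nabla_x\phi_0,\\
    &\Delta_x \phi_0 = \rho_0 - \bar\rho,\qquad p_0 = K\rho_0^{5/3}.
    \end{aligned}$$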