
    On nonlinear stabilization of linearly unstable maps

    We examine the phenomenon of nonlinear stabilization, exhibiting a variety of related examples and counterexamples. For Gâteaux differentiable maps, we discuss a mechanism of nonlinear stabilization, in finite and infinite dimensions, which applies in particular to hyperbolic partial differential equations, and, for Fréchet differentiable maps with linearized operators that are normal, we give a sharp criterion for nonlinear exponential instability at the linear rate. These results highlight the fundamental open question of whether Fréchet differentiability is sufficient for linear exponential instability to imply nonlinear exponential instability, at a possibly slower rate.
    Comment: New section 1.5 and several references added. 20 pages, no figures.

    Converse Lyapunov Theorems for Switched Systems in Banach and Hilbert Spaces

    We consider switched systems on Banach and Hilbert spaces governed by strongly continuous one-parameter semigroups of linear evolution operators. We provide necessary and sufficient conditions for their global exponential stability, uniform with respect to the switching signal, in terms of the existence of a Lyapunov function common to all modes.
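    In finite dimensions, the common-Lyapunov-function idea behind this abstract can be sketched concretely. The following toy example (with hypothetical mode matrices, not taken from the paper) checks whether a common quadratic Lyapunov function V(x) = xᵀPx certifies exponential stability of a switched linear system dx/dt = A_i x under arbitrary switching, i.e. whether A_iᵀP + PA_i is negative definite for every mode:

```python
# Toy finite-dimensional illustration (hypothetical mode matrices): for the
# switched linear system dx/dt = A_i x, uniform exponential stability under
# arbitrary switching follows if a common quadratic Lyapunov function
# V(x) = x^T P x exists with A_i^T P + P A_i < 0 for every mode i.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A1 = np.array([[-1.0, 0.0], [0.0, -1.0]])  # mode 1
A2 = np.array([[-1.0, 1.0], [0.0, -1.0]])  # mode 2

# Solve A1^T P + P A1 = -I for a candidate P from mode 1 ...
P = solve_continuous_lyapunov(A1.T, -np.eye(2))

# ... and verify that the same P also certifies the other mode.
def is_lyapunov_for(A, P):
    M = A.T @ P + P @ A
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

print(is_lyapunov_for(A1, P), is_lyapunov_for(A2, P))  # True True
```

    The paper's contribution is the converse direction, and in infinite dimensions: when such a common Lyapunov function must exist given uniform exponential stability.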

    Statistics of Rare Events in Disordered Conductors

    The asymptotic behavior of distribution functions of local quantities in disordered conductors is studied in the weak-disorder limit by means of an optimal fluctuation method. It is argued that this method is more appropriate for the study of seldom-occurring events than approaches based on nonlinear σ-models, because it is capable of correctly handling fluctuations of the random potential with large amplitude as well as the short-scale structure of the corresponding solutions of the Schrödinger equation. For two- and three-dimensional conductors, new asymptotics of the distribution functions are obtained which in some cases differ significantly from previously established results.
    Comment: 17 pages, REVTeX 3.0 and 1 Postscript figure.

    Numerical Relativity Using a Generalized Harmonic Decomposition

    A new numerical scheme to solve the Einstein field equations based upon the generalized harmonic decomposition of the Ricci tensor is introduced. The source functions driving the wave equations that define generalized harmonic coordinates are treated as independent functions, and encode the coordinate freedom of solutions. Techniques are discussed to impose particular gauge conditions through a specification of the source functions. A 3D, free evolution, finite difference code implementing this system of equations with a scalar field matter source is described. The second-order-in-space-and-time partial differential equations are discretized directly without the use of first-order auxiliary terms, limiting the number of independent functions to fifteen: ten metric quantities, four source functions, and the scalar field. This also limits the number of constraint equations, which can only be enforced to within truncation error in a numerical free evolution, to four. The coordinate system is compactified to spatial infinity in order to impose physically motivated, constraint-preserving outer boundary conditions. A variant of the Cartoon method for efficiently simulating axisymmetric spacetimes with a Cartesian code is described that does not use interpolation, and is easier to incorporate into existing adaptive mesh refinement packages. Preliminary test simulations of vacuum black hole evolution and black hole formation via scalar field collapse are described, suggesting that this method may be useful for studying many spacetimes of interest.
    Comment: 18 pages, 6 figures; updated to coincide with the journal version, which includes some expanded discussions and a new appendix with a stability analysis of a simplified problem using the same discretization scheme described in the paper.
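    The key discretization choice described above, evolving the second-order-in-space-and-time equations directly rather than reducing to a first-order system, can be sketched on the simplest model problem. This is not the paper's code; it applies the same idea (a standard leapfrog stencil, no auxiliary first-order variables) to the 1D scalar wave equation u_tt = u_xx:

```python
# Illustrative sketch (not the paper's code): discretize u_tt = u_xx directly
# at second order in space and time, with no first-order reduction, via
# u^{n+1}_i = 2 u^n_i - u^{n-1}_i + (dt/dx)^2 (u^n_{i+1} - 2 u^n_i + u^n_{i-1}).
import numpy as np

nx, cfl = 201, 0.5                         # CFL factor dt/dx, stable for <= 1
dx = 1.0 / (nx - 1)
x = np.linspace(0.0, 1.0, nx)

u_prev = np.exp(-100.0 * (x - 0.5) ** 2)   # initial Gaussian pulse
u_curr = u_prev.copy()                     # zero initial velocity

for _ in range(200):
    u_next = np.zeros_like(u_curr)         # fixed (zero) boundary values
    u_next[1:-1] = (2.0 * u_curr[1:-1] - u_prev[1:-1]
                    + cfl**2 * (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]))
    u_prev, u_curr = u_curr, u_next

print(np.max(np.abs(u_curr)))              # stays bounded for cfl <= 1
```

    The payoff noted in the abstract is bookkeeping: evolving the second-order form keeps the number of independent functions, and hence the number of constraints, to a minimum.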

    Distributed stochastic optimization via matrix exponential learning

    In this paper, we investigate a distributed learning scheme for a broad class of stochastic optimization problems and games that arise in signal processing and wireless communications. The proposed algorithm relies on the method of matrix exponential learning (MXL) and only requires locally computable gradient observations that are possibly imperfect and/or obsolete. To analyze it, we introduce the notion of a stable Nash equilibrium and we show that the algorithm is globally convergent to such equilibria, or locally convergent when an equilibrium is only locally stable. We also derive an explicit linear bound for the algorithm's convergence speed, which remains valid under measurement errors and uncertainty of arbitrarily high variance. To validate our theoretical analysis, we test the algorithm in realistic multi-carrier/multiple-antenna wireless scenarios where several users seek to maximize their energy efficiency. Our results show that learning allows users to attain a net increase between 100% and 500% in energy efficiency, even under very high uncertainty.
    Comment: 31 pages, 3 figures.
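    The core MXL iteration can be sketched on a toy single-user problem. The objective, payoff matrix, and step sizes below are illustrative assumptions, not the paper's multi-user wireless setup: scores accumulate gradient information in a dual variable, and the matrix exponential maps it back onto the set of unit-trace positive semidefinite matrices:

```python
# Minimal matrix exponential learning (MXL) sketch, assuming a toy problem:
# maximize the linear objective f(X) = tr(X A) over density matrices X
# (Hermitian, positive semidefinite, unit trace). A and the 1/n step sizes
# are illustrative, not taken from the paper.
import numpy as np
from scipy.linalg import expm

A = np.diag([3.0, 1.0, 0.5])   # hypothetical payoff matrix
Y = np.zeros((3, 3))           # dual "score" variable

for n in range(1, 101):
    V = A                      # gradient of tr(X A); could be noisy/delayed
    Y += (1.0 / n) * V         # dual ascent step
    E = expm(Y)
    X = E / np.trace(E)        # matrix exponential map onto unit-trace PSD set

# X remains a valid density matrix and concentrates on A's top eigenvector.
print(np.trace(X), np.trace(X @ A))
```

    The exponential map is what makes the feasibility constraints automatic, which is why the scheme tolerates imperfect or obsolete gradient observations.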

    A continuous-time analysis of distributed stochastic gradient

    We analyze the effect of synchronization on distributed stochastic gradient algorithms. By exploiting an analogy with dynamical models of biological quorum sensing, in which synchronization between agents is induced through communication with a common signal, we quantify how synchronization can significantly reduce the magnitude of the noise felt by the individual distributed agents and by their spatial mean. This noise reduction is in turn associated with a reduction in the smoothing of the loss function imposed by the stochastic gradient approximation. Through simulations on model non-convex objectives, we demonstrate that coupling can stabilize higher noise levels and improve convergence. We provide a convergence analysis for strongly convex functions by deriving a bound on the expected deviation of the spatial mean of the agents from the global minimizer for an algorithm based on quorum sensing, the same algorithm with momentum, and the Elastic Averaging SGD (EASGD) algorithm. We discuss extensions to new algorithms which allow each agent to broadcast its current measure of success and shape the collective computation accordingly. We supplement our theoretical analysis with numerical experiments on convolutional neural networks trained on the CIFAR-10 dataset, where we note a surprising regularizing property of EASGD even when applied to the non-distributed case. This observation suggests alternative second-order in-time algorithms for non-distributed optimization that are competitive with momentum methods.
    Comment: 9/14/19: Final version, accepted for publication in Neural Computation. 4/7/19: Significant edits: addition of simulations, deep network results, and revisions throughout. 12/28/18: Initial submission.
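    The elastic coupling analyzed in the abstract can be sketched on a toy strongly convex problem. The local quadratics, step size, and coupling strength below are illustrative assumptions: each worker is pulled toward a shared center variable, and the center toward the spatial mean of the workers, so the center settles at the global minimizer:

```python
# Schematic EASGD update on a toy strongly convex problem, assuming two
# workers with local objectives f_i(x) = 0.5 * (x - c_i)^2; eta, rho, and the
# targets c_i are illustrative. No gradient noise is added in this sketch.
import numpy as np

c = np.array([0.0, 2.0])   # per-worker optima; the global minimizer is 1.0
x = np.full(2, 5.0)        # worker iterates
center = 5.0               # center (consensus) variable
eta, rho = 0.1, 0.5

for _ in range(500):
    grad = x - c                                       # local gradients
    x_old = x.copy()
    x = x - eta * (grad + rho * (x - center))          # elastic pull to center
    center = center + eta * rho * np.sum(x_old - center)  # center tracks workers

print(center)  # converges to about 1.0, the global minimizer
```

    The quorum-sensing analogy enters through the center variable: it plays the role of the common signal each agent senses, and the coupling strength rho controls how strongly the noise felt by individual agents is averaged away.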