
    A class of robust numerical methods for solving dynamical systems with multiple time scales

    In this paper, we develop a class of robust numerical methods for solving dynamical systems with multiple time scales. We first represent the solution of a multiscale dynamical system as a transformation of a slowly varying solution. Then, under the scale separation assumption, we provide a systematic way to construct the transformation map and derive the dynamic equation for the slowly varying solution. We also provide a convergence analysis of the proposed method. Finally, we present several numerical examples, including ODE systems with three and four separated time scales, to demonstrate the accuracy and efficiency of the proposed method. Numerical results verify that our method is robust in solving ODE systems with multiple time scales, where the time step does not depend on the multiscale parameters.
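    The abstract does not spell out how the transformation map is constructed, so the following is only a minimal sketch of the scale-separation idea it builds on, using a hypothetical two-scale toy system and SciPy's stiff solver; it is not the authors' method, and the variable names and the toy system are illustrative assumptions.

        # Toy two-scale system:  x' = -x + y,  eps * y' = cos(x) - y,  eps << 1.
        # Under scale separation, y relaxes quickly to cos(x), so the slow dynamics
        # are approximately x' = -x + cos(x), which can be integrated with a time
        # step that does not shrink with eps.
        import numpy as np
        from scipy.integrate import solve_ivp

        eps = 1e-4

        def full_rhs(t, z):
            x, y = z
            return [-x + y, (np.cos(x) - y) / eps]

        def slow_rhs(t, x):
            return -x + np.cos(x)

        T = 5.0
        full = solve_ivp(full_rhs, (0, T), [1.0, 0.0], method="Radau",
                         rtol=1e-8, atol=1e-10)          # stiff solve of the full system
        slow = solve_ivp(slow_rhs, (0, T), [1.0], max_step=0.1)  # eps-independent steps

        print("x(T), full system :", full.y[0, -1])
        print("x(T), slow system :", slow.y[0, -1])       # agree up to O(eps)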

    A new framework for extracting coarse-grained models from time series with multiscale structure

    In many applications it is desirable to infer coarse-grained models from observational data. The observed process often corresponds only to a few selected degrees of freedom of a high-dimensional dynamical system with multiple time scales. In this work we consider the inference problem of identifying an appropriate coarse-grained model from a single time series of a multiscale system. It is known that estimators such as the maximum likelihood estimator or the quadratic variation of the path estimator can be strongly biased in this setting. Here we present a novel parametric inference methodology for problems with linear parameter dependency that does not suffer from this drawback. Furthermore, we demonstrate through a wide spectrum of examples that our methodology can be used to derive appropriate coarse-grained models from time series of partial observations of a multiscale system in an effective and systematic fashion.
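    For orientation, the sketch below shows the two classical estimators the abstract refers to, applied to a simulated Ornstein-Uhlenbeck path dX = -a X dt + sqrt(2s) dW: the discretized maximum likelihood estimator for the drift and the quadratic-variation estimator for the diffusion. It does not implement the paper's bias-free methodology, and the multiscale bias itself is not reproduced here; the toy process and parameter names are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        a_true, s_true = 1.0, 0.5
        dt, N = 1e-3, 200_000

        # Euler--Maruyama simulation of the OU path
        X = np.zeros(N)
        for n in range(N - 1):
            X[n + 1] = X[n] - a_true * X[n] * dt + np.sqrt(2 * s_true * dt) * rng.standard_normal()

        dX = np.diff(X)
        # quadratic-variation estimator of the diffusion coefficient s
        s_hat = np.sum(dX ** 2) / (2 * len(dX) * dt)
        # discretized maximum likelihood estimator of the drift coefficient a
        a_hat = -np.sum(X[:-1] * dX) / (dt * np.sum(X[:-1] ** 2))

        print(f"a_hat = {a_hat:.3f} (true {a_true}), s_hat = {s_hat:.3f} (true {s_true})")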

    Pulses and Snakes in Ginzburg--Landau Equation

    Using a variational formulation for partial differential equations (PDEs) combined with numerical simulations of ordinary differential equations (ODEs), we find two categories (pulses and snakes) of dissipative solitons, and analyze the dependence of both their shape and stability on the physical parameters of the cubic-quintic Ginzburg-Landau equation (CGLE). In contrast to the regular solitary waves investigated in numerous integrable and non-integrable systems over the last three decades, these dissipative solitons are not stationary in time. Rather, they are spatially confined pulse-type structures whose envelopes exhibit complicated temporal dynamics. Numerical simulations reveal interesting bifurcation sequences as the parameters of the CGLE are varied. Our predictions for the variation of the soliton amplitude, width, position, speed, and phase of the solutions using the variational formulation agree with simulation results.
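    For reference, one commonly used form of the cubic-quintic CGLE is written below; coefficient names and sign conventions vary between papers and may not match the ones used by these authors.

        \partial_t A \;=\; \epsilon A \;+\; (b_1 + i c_1)\,\partial_x^2 A
                      \;-\; (b_3 - i c_3)\,|A|^2 A \;-\; (b_5 - i c_5)\,|A|^4 A ,

    where A(x,t) is the complex envelope, \epsilon is the linear gain/loss, the b-coefficients collect the dissipative terms, and the c-coefficients the conservative (dispersive and nonlinear) terms.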

    Universal Convexification via Risk-Aversion

    We develop a framework for convexifying a fairly general class of optimization problems. Under additional assumptions, we analyze the suboptimality of the solution to the convexified problem relative to the original nonconvex problem and prove additive approximation guarantees. We then develop algorithms based on stochastic gradient methods to solve the resulting optimization problems and show bounds on convergence rates. We then extend this framework to apply to a general class of discrete-time dynamical systems. In this context, our convexification approach falls under the well-studied paradigm of risk-sensitive Markov Decision Processes. We derive the first known model-based and model-free policy gradient optimization algorithms with guaranteed convergence to the optimal solution. Finally, we present numerical results validating our formulation in different applications.
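    The sketch below runs plain stochastic gradient descent on a generic risk-averse smoothed objective J(x) = (1/rho) * log E_{w ~ N(0, sigma^2 I)}[exp(rho * f(x + w))], the kind of convexified surrogate the abstract describes. The toy objective f, the function names, and the parameter values are assumptions for illustration; the conditions under which such a surrogate is convex and the paper's model-based/model-free algorithms and guarantees are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)

        def f(x):
            # nonconvex toy objective (illustrative only)
            return np.sin(3 * x).sum() + 0.1 * (x ** 2).sum()

        def smoothed_grad(x, rho=1.0, sigma=0.5, n_samples=256, h=1e-4):
            # Monte Carlo gradient of J(x): a risk-weighted average of grad f(x + w),
            # with weights proportional to exp(rho * f(x + w)).
            W = sigma * rng.standard_normal((n_samples, x.size))
            vals = np.array([f(x + w) for w in W])
            weights = np.exp(rho * (vals - vals.max()))   # stabilized risk weights
            weights /= weights.sum()
            grads = np.array([[(f(x + w + h * e) - f(x + w - h * e)) / (2 * h)
                               for e in np.eye(x.size)] for w in W])
            return weights @ grads

        x = np.array([2.0, -1.5])
        for _ in range(200):                  # plain SGD on the smoothed objective
            x -= 0.05 * smoothed_grad(x)
        print("x after smoothed SGD:", x)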