
    A comparative linear mean-square stability analysis of Maruyama- and Milstein-type methods

    In this article we compare the mean-square stability properties of the Theta-Maruyama and Theta-Milstein methods used to solve stochastic differential equations. For the linear stability analysis, we propose an extension of the standard geometric Brownian motion as a test equation and consider a scalar linear test equation with several multiplicative noise terms. This test equation allows us to begin investigating the influence of multi-dimensional noise on the stability behaviour of the methods while the analysis remains tractable. Our findings include: (i) the stability condition for the Theta-Milstein method, and thus, for some choices of Theta, the conditions on the step-size, are much more restrictive than those for the Theta-Maruyama method; (ii) the precise stability region of the Theta-Milstein method depends explicitly on the noise terms. Further, we investigate the effect of introducing partial implicitness in the diffusion approximation terms of Milstein-type methods, which makes it possible to control the stability properties of these methods with a further method parameter Sigma. Numerical examples illustrate the results and provide a comparison of the stability behaviour of the different methods. (Comment: 19 pages, 10 figures)
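    For a single-noise specialisation of the test equation, dX = aX dt + bX dW, the one-step mean-square amplification factors of the explicit (Theta = 0) Maruyama and Milstein methods can be written down in closed form. The following sketch (illustrative names; not the paper's multi-noise analysis) makes findings (i) and (ii) concrete:

```python
import numpy as np

def ms_factor_euler_maruyama(a, b, h):
    # one-step mean-square amplification E[R^2], with R = 1 + a*h + b*dW;
    # the method is mean-square stable iff this factor is < 1
    return (1.0 + a * h) ** 2 + b ** 2 * h

def ms_factor_milstein(a, b, h):
    # Milstein adds the correction 0.5*b^2*(dW^2 - h); using E[dW^4] = 3*h^2
    # the amplification picks up an extra 0.5*b^4*h^2, so the stability
    # region is strictly smaller and depends explicitly on the noise term b
    return (1.0 + a * h) ** 2 + b ** 2 * h + 0.5 * b ** 4 * h ** 2

# Monte Carlo sanity check of the Milstein factor
rng = np.random.default_rng(0)
a, b, h = -2.0, 1.0, 0.1
dW = rng.normal(0.0, np.sqrt(h), size=1_000_000)
R = 1.0 + a * h + b * dW + 0.5 * b ** 2 * (dW ** 2 - h)
```

    The extra b^4 h^2 / 2 term is the single-noise analogue of the findings above: the Milstein step-size restriction is stricter than the Maruyama one, and it depends on the noise intensity.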

    Laws of large numbers and Langevin approximations for stochastic neural field equations

    In this study we consider limit theorems for microscopic stochastic models of neural fields. We show that the Wilson-Cowan equation can be obtained as the limit in probability on compacts for a sequence of microscopic models when the number of neuron populations distributed in space and the number of neurons per population tend to infinity, though the latter divergence is not necessary. This result also allows us to obtain limits for qualitatively different stochastic convergence concepts, e.g., convergence in the mean. Further, we present a central limit theorem for the martingale part of the microscopic models which, suitably rescaled, converges to a centered Gaussian process with independent increments. These two results provide the basis for presenting the neural field Langevin equation, a stochastic differential equation taking values in a Hilbert space, which is the infinite-dimensional analogue of the Chemical Langevin Equation in the present setting. On a technical level we apply recently developed laws of large numbers and central limit theorems for piecewise deterministic processes taking values in Hilbert spaces to a master equation formulation of stochastic neuronal network models. Because these theorems hold for Hilbert-space-valued processes, they are able to incorporate the spatial structure of the underlying model. (Comment: 38 pages)
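    A finite-dimensional toy sketch of the law of large numbers and its Langevin approximation (all names, and in particular the form of the diffusion coefficient, are illustrative assumptions, not the paper's Hilbert-space construction): we integrate the scalar Wilson-Cowan rate equation du/dt = -u + F(wu + I) alongside a Langevin path whose fluctuations scale like 1/sqrt(N):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def wilson_cowan_paths(w, I, n_neurons, u0, T, n, rng):
    # Euler scheme for the deterministic Wilson-Cowan limit, and
    # Euler-Maruyama for a scalar Langevin approximation whose noise
    # amplitude is O(1/sqrt(N)), N = number of neurons per population
    h = T / n
    u_det = np.empty(n + 1)
    u_lan = np.empty(n + 1)
    u_det[0] = u_lan[0] = u0
    for k in range(n):
        u_det[k + 1] = u_det[k] + h * (-u_det[k] + sigmoid(w * u_det[k] + I))
        rate = sigmoid(w * u_lan[k] + I)
        # assumed diffusion coefficient, sketched from the birth/death
        # rates of a master-equation jump model
        diff = np.sqrt(max(u_lan[k] + rate, 0.0) / n_neurons)
        u_lan[k + 1] = u_lan[k] + h * (-u_lan[k] + rate) + diff * np.sqrt(h) * rng.normal()
    return u_det, u_lan

rng = np.random.default_rng(1)
u_det, u_lan = wilson_cowan_paths(2.0, -1.0, 10**6, 0.1, 10.0, 1000, rng)
```

    For large N the Langevin path shadows the deterministic limit, illustrating the law of large numbers; shrinking N makes the rescaled martingale fluctuations visible.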

    Sufficient Conditions for Polynomial Asymptotic Behaviour of the Stochastic Pantograph Equation

    This paper studies the asymptotic growth and decay properties of solutions of the stochastic pantograph equation with multiplicative noise. We give sufficient conditions on the parameters for solutions to grow at a polynomial rate in p-th mean and in the almost sure sense. Under stronger conditions the solutions decay to zero at a polynomial rate in p-th mean and in the almost sure sense. When polynomial bounds cannot be achieved, we show for a different set of parameters that exponential growth bounds on solutions in p-th mean and in the almost sure sense can be obtained. Analogous results are established for pantograph equations with several delays, and for general finite-dimensional equations. (Comment: 29 pages, to appear in Electronic Journal of Qualitative Theory of Differential Equations, Proc. 10th Coll. Qualitative Theory of Diff. Equ. (July 1-4, 2015, Szeged, Hungary))
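    A minimal simulation sketch for a scalar pantograph equation of the form dX(t) = aX(t) dt + σX(qt) dW(t) with q ∈ (0, 1) (an assumed form consistent with the abstract; function names are illustrative). Since qt is generally not a grid point, the delayed value X(qt) is obtained by linear interpolation on the already-computed path:

```python
import numpy as np

def euler_pantograph(a, sigma, q, x0, T, n, rng):
    # Euler-Maruyama for dX(t) = a*X(t) dt + sigma*X(q*t) dW(t), 0 < q < 1
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        # q*t_k lies in [0, t_k], so interpolation only uses computed values
        xq = np.interp(q * t[k], t[:k + 1], x[:k + 1])
        dW = rng.normal(0.0, np.sqrt(h))
        x[k + 1] = x[k] + a * x[k] * h + sigma * xq * dW
    return t, x

rng = np.random.default_rng(2)
t, x = euler_pantograph(-1.0, 0.5, 0.5, 1.0, 5.0, 2000, rng)
```

    Monte Carlo estimates of E[|X(t)|^p] over many such paths are one way to observe the polynomial versus exponential growth regimes the paper characterises.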

    Spectral Density-Based and Measure-Preserving ABC for partially observed diffusion processes. An illustration on Hamiltonian SDEs

    Approximate Bayesian Computation (ABC) has become one of the major tools of likelihood-free statistical inference in complex mathematical models. Simultaneously, stochastic differential equations (SDEs) have developed into an established tool for modelling time-dependent, real-world phenomena with underlying random effects. When applying ABC to stochastic models, two major difficulties arise. First, the derivation of effective summary statistics and proper distances is particularly challenging, since simulations from the stochastic process under the same parameter configuration result in different trajectories. Second, exact simulation schemes to generate trajectories from the stochastic model are rarely available, requiring the derivation of suitable numerical methods for the synthetic data generation. To obtain summaries that are less sensitive to the intrinsic stochasticity of the model, we propose to build the statistical method (e.g., the choice of the summary statistics) on the underlying structural properties of the model. Here, we focus on the existence of an invariant measure and we map the data to their estimated invariant density and invariant spectral density. Then, to ensure that these model properties are kept in the synthetic data generation, we adopt measure-preserving numerical splitting schemes. The derived property-based and measure-preserving ABC method is illustrated on the broad class of partially observed Hamiltonian-type SDEs, both with simulated data and with real electroencephalography (EEG) data. The proposed ingredients can be incorporated into any type of ABC algorithm and directly applied to all SDEs that are characterised by an invariant distribution and for which a measure-preserving numerical method can be derived. (Comment: 35 pages, 21 figures)
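    The density-based ABC idea can be sketched on an Ornstein-Uhlenbeck toy model, whose invariant law N(0, σ²/2θ) is known: the estimated invariant density of the path serves as the summary statistic. Everything below (names, bin grid, tolerance) is an illustrative assumption, and plain Euler-Maruyama stands in for the paper's measure-preserving splitting schemes:

```python
import numpy as np

def simulate_ou(theta, sigma, x0, T, n, rng):
    # Euler-Maruyama for dX = -theta*X dt + sigma dW; the exact process
    # has invariant law N(0, sigma^2 / (2*theta))
    h = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] - theta * x[k] * h + sigma * np.sqrt(h) * rng.normal()
    return x

def density_summary(path, bins):
    # estimated invariant density on a fixed grid, used as the ABC summary
    dens, _ = np.histogram(path, bins=bins, density=True)
    return dens

def abc_rejection(observed, prior_draws, sigma, x0, T, n, eps, rng):
    bins = np.linspace(-3.0, 3.0, 25)
    s_obs = density_summary(observed, bins)
    accepted = []
    for theta in prior_draws:
        s_sim = density_summary(simulate_ou(theta, sigma, x0, T, n, rng), bins)
        if np.linalg.norm(s_obs - s_sim) < eps:  # keep draws whose density matches
            accepted.append(theta)
    return np.array(accepted)

rng = np.random.default_rng(3)
obs = simulate_ou(1.0, 1.0, 0.0, 20.0, 2000, rng)
prior = rng.uniform(0.2, 3.0, size=100)
accepted = abc_rejection(obs, prior, 1.0, 0.0, 20.0, 2000, 0.5, rng)
```

    Comparing invariant-density summaries rather than raw trajectories removes the sensitivity to the intrinsic stochasticity of individual paths that the abstract identifies as the first major difficulty.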

    Introduction to the numerical analysis of stochastic delay differential equations

    We consider the problem of the numerical solution of stochastic delay differential equations of Itô form

    dX(t) = f(X(t), X(t−τ)) dt + g(X(t), X(t−τ)) dW(t), t ∈ [0, T],

    with X(t) = Ψ(t) for t ∈ [−τ, 0], for given f and g, Wiener noise W, delay τ > 0, and a prescribed initial function Ψ. We indicate the nature of the equations of interest and give a convergence proof for explicit single-step methods. Some illustrative numerical examples using a strong Euler–Maruyama scheme are provided.
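    A sketch of the strong Euler–Maruyama scheme for this equation, on a uniform grid chosen so that the delay τ is an integer multiple of the step size (function names are illustrative):

```python
import numpy as np

def euler_maruyama_sdde(f, g, psi, tau, T, n, rng):
    # Strong Euler-Maruyama for dX = f(X(t), X(t-tau)) dt + g(X(t), X(t-tau)) dW
    # with X(t) = psi(t) on [-tau, 0]; the grid satisfies tau = m*h
    h = T / n
    m = int(round(tau / h))
    assert abs(m * h - tau) < 1e-9, "choose n so that tau is a multiple of h"
    x = np.empty(n + 1)
    x[0] = psi(0.0)
    for k in range(n):
        # delayed value X(t_k - tau): grid value if available, else the initial function
        xd = x[k - m] if k >= m else psi((k - m) * h)
        dW = rng.normal(0.0, np.sqrt(h))
        x[k + 1] = x[k] + f(x[k], xd) * h + g(x[k], xd) * dW
    return x

rng = np.random.default_rng(4)
x = euler_maruyama_sdde(lambda u, ud: -ud, lambda u, ud: 0.3 * u,
                        lambda s: 1.0, 1.0, 2.0, 200, rng)
```

    Restricting h so that the delay hits the grid exactly avoids interpolating past values; general step sizes would need an interpolated memory, which is where the analysis of single-step methods becomes more delicate.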

    Stochastic Runge-Kutta methods with deterministic high order for ordinary differential equations

    We consider stochastic Runge-Kutta (SRK) methods for non-commutative stochastic differential equations (SDEs). As a result, we have obtained weak second-order SRK methods which have good properties with respect to not only practical errors but also mean-square stability. In our stability analysis, as well as a scalar test equation with complex-valued parameters, we have used a multi-dimensional non-commutative test SDE. The performance of our new schemes is shown through comparisons with an efficient and optimal weak second-order scheme proposed by Debrabant and Rößler (Appl. Numer. Math. 59:582–594, 2009).

    Stochastic Runge-Kutta Methods with Deterministic High Order for Ordinary Differential Equations

    Our aim is to show that embedding deterministic Runge-Kutta methods of higher order than is necessary to achieve a given weak order can enrich the properties of stochastic Runge-Kutta methods with respect to not only practical errors but also stability. This will be done through comparisons between our new schemes and an efficient weak second-order scheme with minimized error constant proposed by Debrabant and Rößler (2009).
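    For a flavour of the stochastic Runge-Kutta idea, here is a classical derivative-free scheme of strong order 1.0 (Platen's explicit scheme, as in Kloeden and Platen's textbook), not the weak second-order methods of these abstracts: the Milstein derivative term b'(x)b(x) is replaced by a finite difference through a supporting stage value, exactly as deterministic RK methods replace derivatives by extra stages:

```python
import numpy as np

def platen_strong_1(a, b, x0, T, n, rng):
    # Explicit derivative-free scheme of strong order 1.0 for dX = a(X) dt + b(X) dW:
    # the supporting value y = X + a(X)*h + b(X)*sqrt(h) turns the Milstein
    # correction into a finite difference, avoiding the derivative of b
    h = T / n
    sh = np.sqrt(h)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, sh)
        y = x[k] + a(x[k]) * h + b(x[k]) * sh
        x[k + 1] = (x[k] + a(x[k]) * h + b(x[k]) * dW
                    + (b(y) - b(x[k])) * (dW ** 2 - h) / (2.0 * sh))
    return x

rng = np.random.default_rng(7)
path = platen_strong_1(lambda u: -u, lambda u: 0.5 * u, 1.0, 1.0, 500, rng)
```

    Embedding a higher-order deterministic RK formula in the drift stages, as the abstract proposes, improves the behaviour of such schemes in the small-noise and stability-limited regimes without changing the weak order.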

    Multi-step Maruyama methods for stochastic delay differential equations

    In this paper the numerical approximation of solutions of Itô stochastic delay differential equations is considered. We construct stochastic linear multi-step Maruyama methods and develop the fundamental numerical analysis concerning their L_p-consistency, numerical L_p-stability and L_p-convergence. For the special case of two-step Maruyama schemes we derive conditions guaranteeing their mean-square consistency.
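    A sketch of a stochastic linear two-step method of Maruyama type: the drift is discretised by the two-step Adams-Bashforth formula and the diffusion by the Euler-Maruyama term. For brevity this sketch treats an SDE without delay (names are illustrative; the paper's schemes act on delay equations):

```python
import numpy as np

def two_step_adams_maruyama(f, g, x0, T, n, rng):
    # Two-step Adams-Bashforth for the drift of dX = f(X) dt + g(X) dW,
    # Maruyama (Euler) term for the diffusion; a single Euler-Maruyama
    # step bootstraps the second starting value
    h = T / n
    x = np.empty(n + 1)
    x[0] = x0
    dW = rng.normal(0.0, np.sqrt(h), size=n)
    x[1] = x[0] + f(x[0]) * h + g(x[0]) * dW[0]          # starting step
    for k in range(1, n):
        x[k + 1] = x[k] + h * (1.5 * f(x[k]) - 0.5 * f(x[k - 1])) + g(x[k]) * dW[k]
    return x

rng = np.random.default_rng(8)
x2 = two_step_adams_maruyama(lambda u: -u, lambda u: 0.2 * u, 1.0, 1.0, 400, rng)
```

    Mean-square consistency of such schemes constrains how the drift coefficients and the Maruyama diffusion term may be combined, which is the condition the paper derives for the two-step case.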