
    Real-Time Complex Langevin - A Differential Programming Perspective

    In this thesis, I aim to find solutions to the NP-hard sign problem that arises when modelling strongly correlated systems in real time. I use the complex Langevin (CLE) method and address its problems of runaway trajectories and incorrect convergence using an implicit solver and a novel kernel optimization scheme, respectively. The implicit solver stabilizes the numerical solution, removing the runaway-trajectory problem entirely. It also acts as a regulator, allowing for simulation along the canonical Schwinger-Keldysh contour. Additionally, our investigation shows that a kernel can act as a regulator as well, effectively changing the action and the integration measure of the stochastic process while leaving the underlying path integral intact. To restore correct convergence in CLE simulations, we present a novel strategy that involves learning a kernel and utilizing functionals that encode relevant prior information, such as symmetries or Euclidean correlator data. Our approach recovers the correct convergence in the non-interacting theory on the Schwinger-Keldysh contour for any real-time extent. For the strongly coupled quantum anharmonic oscillator, it achieves correct convergence up to three times the real-time extent of the previous benchmark study. Furthermore, we investigate the stability of the CLE by calculating its Lyapunov exponents, uncovering that the real-time CLE behaves like a chaotic dynamical system. This has consequences for obtaining a reliable gradient of a loss function that contains a real-time CLE simulation. To address this issue, we adapt the shadowing sensitivity method to stochastic differential equations (SDEs), which allows for calculating reliable gradients of chaotic SDEs.
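
    To make the mechanism concrete, here is a minimal sketch (a Gaussian toy model, not the thesis's Schwinger-Keldysh setup) of a complex Langevin simulation for the action S(x) = a x^2 / 2 with complex coupling a, where the exact answer <x^2> = 1/a is known. The coupling, step size, and sample counts are illustrative choices; the implicit (backward Euler) update, solvable in closed form for this linear drift, shows how implicitness damps runaway trajectories.

        import numpy as np

        # Toy model: S(x) = a*x^2/2 with complex a, so exactly <x^2> = 1/a.
        # CLE drift: f(z) = -dS/dz = -a*z; noise has variance 2*dt per step.
        rng = np.random.default_rng(0)
        a = 1.0 + 0.5j
        dt, n_steps, n_therm = 1e-3, 200_000, 10_000

        z = 0.0 + 0.0j
        samples = []
        for n in range(n_steps):
            eta = rng.normal(scale=np.sqrt(2.0 * dt))   # real Gaussian noise
            # implicit Euler: z' = z - a*dt*z' + eta  =>  z' = (z + eta)/(1 + a*dt)
            z = (z + eta) / (1.0 + a * dt)
            if n >= n_therm:
                samples.append(z * z)

        print("CLE estimate of <x^2>:", np.mean(samples))   # approaches 1/a
        print("exact value 1/a      :", 1.0 / a)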

    Numerical approximation of SDEs and stochastic Swift-Hohenberg equation

    We consider the numerical approximation of stochastic differential equations interpreted both in the Itô and Stratonovich sense and develop three stochastic time-integration techniques based on the deterministic exponential time differencing schemes. Two of the numerical schemes are suited for the simulation of Itô stochastic ordinary differential equations (SODEs) and are referred to as the stochastic exponential time differencing schemes, SETD0 and SETD1. The third numerical scheme is a new numerical method we propose for the simulation of Stratonovich SODEs; we call this scheme the Exponential Stratonovich Integrator (ESI). We investigate numerically the convergence of these three numerical methods, in addition to three standard approximation schemes, and compare the accuracy and efficiency of these schemes. The effect of small noise is also studied. We study the theoretical convergence of the stochastic exponential time differencing scheme (SETD0) for parabolic stochastic partial differential equations (SPDEs) with infinite-dimensional additive noise and one-dimensional multiplicative noise. We obtain a strong temporal error estimate of O(Δt^θ + εΔt^θ + ε²Δt^(1/2)) for SPDEs forced with a one-dimensional multiplicative noise and a strong temporal error estimate of O(Δt^θ + ε²Δt) for SPDEs forced with an infinite-dimensional additive noise. We examine convergence for second-order and fourth-order SPDEs. We consider the effects of spatially correlated and uncorrelated noise on bifurcations for SPDEs. In particular, we study a fourth-order SPDE, the Swift-Hohenberg equation, and allow the control parameter to fluctuate. Numerical simulations show a shift in the pinning region with multiplicative noise.
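
    As a sketch of the exponential time differencing idea (the precise SETD0/SETD1 definitions are in the thesis; the form below is an assumed first-order variant), consider an Itô SODE dX = (λX + g(X)) dt + σ dW: the linear part is propagated exactly, the nonlinearity enters through an exponential quadrature weight, and the stochastic convolution of the additive noise with the semigroup is sampled exactly from its Gaussian law.

        import numpy as np

        rng = np.random.default_rng(1)

        def setd_step(x, dt, lam, g, sigma):
            """One exponential time differencing step for dX = (lam*X + g(X))dt + sigma*dW."""
            E = np.exp(lam * dt)                  # exact linear propagator
            phi = (E - 1.0) / lam                 # first-order quadrature weight
            # exact variance of the stochastic convolution int_0^dt e^{lam(dt-s)} sigma dW
            noise_var = sigma**2 * (E**2 - 1.0) / (2.0 * lam)
            return E * x + phi * g(x) + rng.normal(scale=np.sqrt(noise_var))

        # example: stochastic Ginzburg-Landau-type drift lam*x - x^3, stable for lam < 0
        lam, sigma, dt = -2.0, 0.3, 0.01
        g = lambda x: -x**3
        x = 1.0
        for _ in range(1000):
            x = setd_step(x, dt, lam, g, sigma)
        print("state after integration:", x)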

    Classification of the asymptotic behaviour of solutions of stochastic differential equations with state-independent noise

    We investigate the asymptotic behaviour of solutions of differential equations with state-independent perturbation. The differential equation studied is a perturbed version of a globally stable autonomous equation with a unique equilibrium, where the diffusion coefficient is independent of the state. Perturbed differential equations are widely applied to model natural phenomena in finance, engineering, physics, and other disciplines. Real-world processes are often subjected to interference in the form of random external perturbations, which can have a dramatic effect on the behaviour of these processes. It is therefore important to analyse these equations. We start by considering an additive deterministic perturbation in Chapter 1. It is assumed that the restoring force is asymptotically negligible as the solution becomes large, and that the perturbation tends to zero as time becomes indefinitely large. It is shown that solutions are always locally stable, and that solutions either tend to zero or to infinity as time tends to infinity. In Chapters 2 and 4, we explore a linear and a nonlinear equation, respectively, with stochastic perturbation in finite dimensions. We find necessary and sufficient conditions on the rate of decay of the noise intensity for the solution of the equations to be globally asymptotically stable, bounded, or unstable. In Chapter 3 we concentrate on a scalar nonlinear stochastic differential equation. As well as the necessary and sufficient condition, we also explore simple sufficient conditions and the connections between the conditions which characterise the various classes of long-run behaviour. To facilitate the analysis, in Chapters 5 and 6 we use a split-step method to investigate the corresponding difference equations in the scalar and the finite-dimensional case. Under the same conditions, the discretisation mimics the exact asymptotic behaviour of the solution of the stochastic differential equation in discrete time.
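
    The split-step idea can be illustrated as follows (a hypothetical scalar example, with restoring force f(x) = x^3 and noise intensity σ(t) = (1 + t)^(-γ) chosen purely for illustration, not the thesis's equations): each step first solves the deterministic drift implicitly, via Newton iteration on the backward Euler equation, then adds the state-independent Gaussian increment with decaying intensity.

        import numpy as np

        rng = np.random.default_rng(2)

        def drift_step(x, dt, newton_iters=20):
            """Solve y + dt*y**3 = x for y (backward Euler for dX = -X^3 dt)."""
            y = x
            for _ in range(newton_iters):
                y -= (y + dt * y**3 - x) / (1.0 + 3.0 * dt * y**2)
            return y

        gamma, dt, n_steps = 0.75, 0.01, 50_000
        x, t = 2.0, 0.0
        for _ in range(n_steps):
            x = drift_step(x, dt)                                   # implicit drift half-step
            x += (1.0 + t)**(-gamma) * rng.normal(scale=np.sqrt(dt))  # decaying noise
            t += dt
        print("X at t =", round(t), "is", x)   # decays toward 0 when the noise fades fast enough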

    Analysis of gradient descents in random energies and heat baths

    This thesis concerns the mathematical analysis of random gradient descent evolutions as models for rate-independent dissipative systems under the influence of thermal effects. The basic notions of the theory of gradient descents (especially rate-independent evolutions) are reviewed in chapter 2. Chapters 3 and 4 focus on the scaling regime in which the microstructure dominates the thermal effects and comprise a rigorous justification of rate-independent processes in smooth, convex energies as scaling limits of rate-dependent gradient descents in energies that have rapidly-oscillating random microstructure: chapter 3 treats the one-dimensional case with quite a broad class of random microstructures; chapter 4 treats a case in which the microstructure is modeled by a sum of “dent functions” that are scattered in R^n using a suitable point process. Chapters 5 and 6 focus on the opposite scaling regime: a gradient descent system (typically a rate-independent process) is placed in contact with a heat bath. The method used to “thermalize” a gradient descent is an interior-point regularization of the Moreau–Yosida incremental problem for the original gradient descent. Chapter 5 treats the heuristics and generalities; chapter 6 treats the case of 1-homogeneous dissipation (rate independence) and shows that the heat bath destroys the rate independence in a controlled and deterministic way, and that the effective dynamics are a gradient descent in the original energetic potential but with respect to a different and non-trivial effective dissipation potential. The appendices contain some auxiliary definitions and results, most of them standard in the literature, that are used in the main text.
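
    For intuition, the Moreau–Yosida incremental problem with 1-homogeneous dissipation can be solved explicitly in a simple case (a hypothetical scalar example with quadratic energy, not the thesis's random-microstructure setting): with E(x, t) = (x - load(t))^2 / 2 and dissipation μ|v|, the incremental minimization x_{k+1} = argmin_x E(x, t_{k+1}) + μ|x - x_k| reduces to a soft-threshold update, the classical "play" operator exhibiting rate-independent hysteresis.

        import numpy as np

        def incremental_step(x, target, mu):
            """One Moreau-Yosida incremental step: soft-threshold toward the load."""
            gap = target - x
            return x + np.sign(gap) * max(abs(gap) - mu, 0.0)

        mu = 0.3
        load = lambda t: np.sin(t)            # slowly varying external load
        x, xs = 0.0, []
        for t in np.linspace(0.0, 4 * np.pi, 400):
            x = incremental_step(x, load(t), mu)
            xs.append(x)
        print("final state:", xs[-1])         # x lags the load by the threshold mu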

    Inference on Riemannian Manifolds: Regression and Stochastic Differential Equations

    Statistical inference on manifolds attracts much attention because of its power to work with more general forms of data and geometric objects. We study regression and stochastic differential equations on manifolds from the intrinsic point of view. Firstly, we provide alternative parametrizations for data that lie on a Lie group in the problem of fitting a regression model, by mapping this space intrinsically onto its Lie algebra, and we explore the behaviour of the fitted values when the base point of this map is chosen differently. Due to the nature of our data in the application to soft tissue artefacts, we employ two correlation structures, namely Matérn and quasi-periodic correlation functions, when using generalized least squares, and show that some patterns in the residuals are removed. Secondly, we construct a generalization of the Ornstein-Uhlenbeck process on the cone of covariance matrices SP(n) endowed with two popular Riemannian metrics, namely the Log-Euclidean (LE) and Affine-Invariant (AI) metrics. We show that the Riemannian Brownian motion on SP(n) has infinite explosion time, as in the Euclidean case, and establish the calculation of horizontal lifts of smooth curves. Moreover, we provide Bayesian inference for discretely observed diffusion processes of covariance matrices associated with either the LE or the AI metric, and present a novel diffusion bridge sampling method using guided proposals when equipping SP(n) with the AI metric. The estimation algorithms are illustrated with an application in finance, together with a goodness-of-fit test comparing models associated with different metrics. Furthermore, we explore multivariate volatility models via a simulation study in which the covariance matrices in the models are assumed to be unobservable.
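
    One plausible way to simulate such a process under the LE metric (a sketch assuming only the standard Log-Euclidean identification, not the thesis's full algorithm): the matrix logarithm maps SP(n) isometrically onto the vector space of symmetric matrices, so an Ornstein-Uhlenbeck process on SP(n) can be run as an ordinary OU process in log-coordinates and pushed back through the matrix exponential.

        import numpy as np
        from scipy.linalg import expm

        rng = np.random.default_rng(3)
        n, dt, n_steps = 3, 0.01, 2000
        theta, sigma = 1.0, 0.5              # mean reversion and noise scale

        def sym_noise(n):
            a = rng.normal(size=(n, n))
            return (a + a.T) / np.sqrt(2.0)  # Gaussian increment on symmetric matrices

        y = 0.5 * sym_noise(n)               # initial point in log-coordinates
        mean = np.zeros((n, n))              # OU mean: the identity matrix in SP(n)
        for _ in range(n_steps):
            y += -theta * (y - mean) * dt + sigma * np.sqrt(dt) * sym_noise(n)

        X = expm(y)                          # map back to SP(n)
        print("eigenvalues (all positive):", np.linalg.eigvalsh(X))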

    Asymptotic analysis of deep learning algorithms

    We investigate the asymptotic properties of deep residual networks as the number of layers increases. We first show the existence of scaling regimes for trained weights markedly different from those implicitly assumed in the neural ODE literature. We study the convergence of the hidden-state dynamics in these scaling regimes, showing that one may obtain an ODE, a stochastic differential equation (SDE), or neither. Furthermore, we derive the corresponding scaling limits for the backpropagation dynamics. Finally, we prove that in the case of a smooth activation function, the scaling regime arises as a consequence of using gradient descent. In particular, we prove linear convergence of gradient descent to a global minimum for the training of deep residual networks. We also show that if the trained weights, as a function of the layer index, admit a scaling limit as the depth increases, then the limit has finite 2-variation. This work also investigates the mean-field limit of path-homogeneous neural architectures. We prove convergence of the Wasserstein gradient flow to a global minimum, and we derive a generalization bound based on the stability of the optimization algorithm for 2-layer neural networks with ReLU activation.
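
    The contrast between the scaling regimes can be seen in a toy forward pass (simplified tanh residual blocks with random weights, an assumed illustration rather than the thesis's trained networks): with layer-independent weights and increments of size 1/L the hidden state approximates an ODE flow, while i.i.d. per-layer weights with increments of size 1/sqrt(L) produce diffusive behaviour suggestive of an SDE limit.

        import numpy as np

        rng = np.random.default_rng(4)
        d, L = 16, 10_000
        h_ode = rng.normal(size=d)
        h_sde = h_ode.copy()
        W_smooth = rng.normal(size=(d, d)) / np.sqrt(d)     # shared across layers

        for k in range(L):
            # ODE-like regime: smooth (here constant-in-depth) weights, 1/L increments
            h_ode = h_ode + (1.0 / L) * np.tanh(W_smooth @ h_ode)
            # SDE-like regime: fresh i.i.d. weights per layer, 1/sqrt(L) increments
            W_k = rng.normal(size=(d, d)) / np.sqrt(d)
            h_sde = h_sde + L**-0.5 * np.tanh(W_k @ h_sde)

        print("ODE-regime norm :", np.linalg.norm(h_ode))
        print("SDE-regime norm :", np.linalg.norm(h_sde))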