
    Delay geometric Brownian motion in financial option valuation

    Motivated by influential work on complete stochastic volatility models, such as Hobson and Rogers [11], we introduce a model driven by a delay geometric Brownian motion (DGBM) which is described by the stochastic delay differential equation dS(t) = μ(S(t − τ))S(t)dt + V(S(t − τ))S(t)dW(t). We show that the equation has a unique positive solution under a very general condition, namely that the volatility function V is a continuous mapping from ℝ+ to itself. Moreover, we show that the delay effect is not too sensitive to time lag changes. The desirable robustness of the delay effect is demonstrated on several important financial derivatives as well as on the value process of the underlying asset. Finally, we introduce an Euler–Maruyama numerical scheme for our proposed model and show that this numerical method approximates option prices very well. All these features show that the proposed DGBM serves as a rich alternative in modelling financial instruments in a complete market framework.
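    A minimal Euler–Maruyama sketch of these dynamics (our illustrative construction, not code from the paper; the drift functional mu, the volatility functional V and the initial segment are assumptions supplied by the caller):

    import numpy as np

    def simulate_dgbm(S0, history, mu, V, tau, T, n_steps, seed=0):
        # Euler-Maruyama for dS(t) = mu(S(t - tau)) S(t) dt + V(S(t - tau)) S(t) dW(t)
        rng = np.random.default_rng(seed)
        h = T / n_steps
        m = int(round(tau / h))            # delay measured in grid steps
        S = np.empty(m + n_steps + 1)      # grid runs from -tau to T
        S[:m] = [history(-tau + k * h) for k in range(m)]
        S[m] = S0
        for n in range(m, m + n_steps):
            S_lag = S[n - m]               # S(t_n - tau)
            dW = rng.normal(0.0, np.sqrt(h))
            S[n + 1] = S[n] + mu(S_lag) * S[n] * h + V(S_lag) * S[n] * dW
        return S[m:]                       # the path on [0, T]

    # illustrative choices (ours): constant drift, bounded continuous volatility
    path = simulate_dgbm(S0=1.0, history=lambda t: 1.0,
                         mu=lambda x: 0.05, V=lambda x: 0.2 + 0.1 * np.tanh(x),
                         tau=0.5, T=1.0, n_steps=1000)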

    Necessary and sufficient conditions for the existence of the q-optimal measure

    This paper presents the general form and essential properties of the q-optimal measure following the approach of Delbaen and Schachermayer (1996) and proves its existence under mild conditions. Most importantly, it states a necessary and sufficient condition for a candidate measure to be the q-optimal measure, even in the case of signed measures. Finally, an updated characterization of the q-optimal measure for continuous asset price processes is presented in the light of the counterexample appearing in Cerny and Kallsen (2006) concerning Hobson's (2004) approach.
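    For orientation, a sketch of the underlying optimisation in LaTeX (notation ours; the paper's precise setup may differ): for q > 1, the q-optimal measure minimises the q-th moment of the density over the set of (possibly signed) local martingale measures whose densities lie in L^q(P),

    \[
      Q_q \;=\; \operatorname*{arg\,min}_{Q \in \mathcal{M}^q}
      \mathbb{E}_P\!\left[\left|\frac{dQ}{dP}\right|^{q}\right],
    \]

    where \mathcal{M}^q denotes that set; q = 2 recovers the variance-optimal measure.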

    A note on tamed Euler approximations

    Strong convergence results on tamed Euler schemes, which approximate stochastic differential equations with superlinearly growing drift coefficients that are locally one-sided Lipschitz continuous, are presented in this article. The diffusion coefficients are assumed to be locally Lipschitz continuous and to grow at most linearly. Furthermore, the classical rate of convergence, i.e. one-half, for such schemes is recovered when the local Lipschitz continuity assumptions are replaced by global ones and, in addition, the drift coefficients are assumed to satisfy polynomial Lipschitz continuity.
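    A minimal sketch of the taming idea, assuming the common step-size-scaled variant b_h(x) = b(x)/(1 + h|b(x)|) (the article's exact scheme may differ):

    import numpy as np

    def tamed_euler(b, sigma, x0, T, n_steps, seed=0):
        # Tamed Euler for dX = b(X) dt + sigma(X) dW with superlinearly growing b:
        # the tamed drift increment satisfies |tamed * h| < 1, preventing blow-up.
        rng = np.random.default_rng(seed)
        h = T / n_steps
        x = np.empty(n_steps + 1)
        x[0] = x0
        for n in range(n_steps):
            drift = b(x[n])
            tamed = drift / (1.0 + h * abs(drift))
            x[n + 1] = x[n] + tamed * h + sigma(x[n]) * rng.normal(0.0, np.sqrt(h))
        return x

    # classical test case: one-sided Lipschitz cubic drift, additive noise
    path = tamed_euler(lambda x: x - x**3, lambda x: 1.0, x0=0.0, T=1.0, n_steps=1000)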

    On stochastic gradient Langevin dynamics with dependent data streams: the fully non-convex case

    We consider the problem of sampling from a target distribution, which is not necessarily log-concave, in the context of empirical risk minimization and stochastic optimization as presented in Raginsky et al. (2017). Non-asymptotic analysis results are established in the L^1-Wasserstein distance for the behaviour of Stochastic Gradient Langevin Dynamics (SGLD) algorithms. We allow the estimation of gradients to be performed even in the presence of dependent data streams. Our convergence estimates are sharper and uniform in the number of iterations, in contrast to those in previous studies.
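    A minimal SGLD sketch (notation and names ours, not the paper's): step size step, inverse temperature beta, and a gradient estimate built from a possibly dependent data stream:

    import numpy as np

    def sgld(grad_estimate, theta0, step, beta, n_iter, data_stream, seed=0):
        # theta_{k+1} = theta_k - step * g(theta_k, X_k) + sqrt(2 * step / beta) * xi_k
        rng = np.random.default_rng(seed)
        theta = np.atleast_1d(np.asarray(theta0, dtype=float))
        for _ in range(n_iter):
            batch = next(data_stream)             # data may be dependent across iterations
            noise = rng.normal(size=theta.shape)  # xi_k ~ N(0, I)
            theta = theta - step * grad_estimate(theta, batch) \
                    + np.sqrt(2.0 * step / beta) * noise
        return theta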

    A Note on Euler Approximations for Stochastic Differential Equations with Delay

    An existence and uniqueness theorem for a class of stochastic delay differential equations is presented, and the convergence of Euler approximations for these equations is proved under general conditions. Moreover, the rate of almost sure convergence is obtained under local Lipschitz conditions and also under monotonicity conditions.
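    In standard notation (ours, not the paper's), with step size h = T/N and delay tau = m h on the grid, the Euler scheme for dX(t) = b(X(t), X(t − tau)) dt + σ(X(t), X(t − tau)) dW(t) reads

    \[
      X_{n+1} = X_n + b(X_n, X_{n-m})\,h + \sigma(X_n, X_{n-m})\,\Delta W_n,
      \qquad \Delta W_n = W(t_{n+1}) - W(t_n),
    \]

    with X_n for n ≤ 0 supplied by the initial segment on [−tau, 0].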

    Taming under isoperimetry

    In this article we propose a novel taming Langevin-based scheme called sTULA to sample from distributions with superlinearly growing log-gradient which also satisfy a Log-Sobolev inequality. We derive non-asymptotic convergence bounds in KL divergence and, consequently, in total variation and Wasserstein-2 distance from the target measure. Non-asymptotic convergence guarantees are provided for the performance of the new algorithm as an optimizer. Finally, some theoretical results on isoperimetric inequalities for distributions with superlinearly growing gradients are provided. Key findings are a Log-Sobolev inequality with constant independent of the dimension in the presence of a higher-order regularization, and a Poincaré inequality with constant independent of temperature and dimension under a novel non-convex theoretical framework.
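    A hedged sketch of one tamed Langevin step in this spirit (the exact sTULA taming is defined in the article and may differ from the generic variant used here):

    import numpy as np

    def tamed_langevin(grad_U, theta0, step, n_iter, seed=0):
        # theta_{k+1} = theta_k - step * g_step(theta_k) + sqrt(2 * step) * xi_k,
        # where the taming g_step(t) = grad_U(t) / (1 + step * |grad_U(t)|)
        # keeps superlinearly growing gradients from destabilising the scheme.
        rng = np.random.default_rng(seed)
        theta = np.atleast_1d(np.asarray(theta0, dtype=float))
        for _ in range(n_iter):
            g = grad_U(theta)
            tamed = g / (1.0 + step * np.linalg.norm(g))
            theta = theta - step * tamed + np.sqrt(2.0 * step) * rng.normal(size=theta.shape)
        return theta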