
    Stability properties of some particle filters

    Under multiplicative drift and other regularity conditions, it is established that the asymptotic variance associated with a particle filter approximation of the prediction filter is bounded uniformly in time, and the nonasymptotic, relative variance associated with a particle approximation of the normalizing constant is bounded linearly in time. The conditions are demonstrated to hold for some hidden Markov models on noncompact state spaces. The particle stability results are obtained by proving $V$-norm multiplicative stability and exponential moment results for the underlying Feynman-Kac formulas. Comment: Published at http://dx.doi.org/10.1214/12-AAP909 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
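    The two quantities studied in the abstract above — the prediction filter and the normalizing constant — are both produced by a bootstrap particle filter. The following minimal sketch uses an illustrative linear-Gaussian HMM of our own choosing (it is not the noncompact model class analysed in the paper):

    ```python
    import numpy as np

    def bootstrap_particle_filter(ys, N=1000, phi=0.9, sigma=1.0, seed=0):
        """Minimal bootstrap particle filter for the toy model
        X_n = phi*X_{n-1} + sigma*W_n,  Y_n = X_n + V_n, with W_n, V_n ~ N(0,1).
        Returns particles approximating the prediction filter and a
        log-normalizing-constant estimate."""
        rng = np.random.default_rng(seed)
        x = rng.normal(0.0, sigma, size=N)        # initial particle cloud
        log_Z = 0.0                               # log normalizing-constant estimate
        for y in ys:
            # weight each particle by the observation likelihood N(y; x, 1)
            logw = -0.5 * (y - x) ** 2 - 0.5 * np.log(2 * np.pi)
            log_Z += np.log(np.mean(np.exp(logw)))
            w = np.exp(logw - logw.max())
            w /= w.sum()
            # multinomial resampling, then propagation through the dynamics
            x = x[rng.choice(N, size=N, p=w)]
            x = phi * x + sigma * rng.normal(size=N)
        return x, log_Z
    ```

    The paper's linear-in-time bound concerns the relative variance of exp(log_Z) as the number of observations grows; the uniform-in-time bound concerns the particle approximation of the prediction filter returned in x.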

    Expected exit time for time-periodic stochastic differential equations and applications to stochastic resonance

    In this paper, we derive a parabolic partial differential equation for the expected exit time of non-autonomous, time-periodic, non-degenerate stochastic differential equations. This establishes a Feynman–Kac duality between the expected exit time of time-periodic stochastic differential equations and time-periodic solutions of parabolic partial differential equations. Casting the time-periodic solution of the parabolic partial differential equation as a fixed point problem and a convex optimisation problem, we give sufficient conditions under which the partial differential equation is well-posed in a weak and classical sense. With no known closed formulae for the expected exit time, we show our method can be readily implemented by standard numerical schemes. Under relatively weak conditions (e.g. locally Lipschitz coefficients), the method in this paper is applicable to a wide range of physical systems, including weakly dissipative systems. Particular applications towards stochastic resonance are discussed.
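    A brute-force baseline for the expected exit time is direct Euler–Maruyama simulation of paths until they leave the domain. The sketch below is our own illustration (not the paper's PDE-based scheme), shown on a hypothetical time-periodic drift:

    ```python
    import numpy as np

    def expected_exit_time_mc(x0, drift, diffusion, domain=(-1.0, 1.0),
                              dt=5e-3, n_paths=100, t_max=10.0, seed=0):
        """Crude Monte Carlo estimate of the expected exit time from an interval
        for dX = b(t, X) dt + s(t, X) dW, via Euler-Maruyama. Paths that have
        not exited by t_max are censored at t_max, so the estimate is a lower
        bound in that case."""
        rng = np.random.default_rng(seed)
        lo, hi = domain
        exit_times = np.full(n_paths, t_max)
        for i in range(n_paths):
            x, t = x0, 0.0
            while t < t_max:
                x += drift(t, x) * dt + diffusion(t, x) * np.sqrt(dt) * rng.normal()
                t += dt
                if x <= lo or x >= hi:
                    exit_times[i] = t
                    break
        return exit_times.mean()
    ```

    For example, a weakly dissipative, time-periodically forced system could be probed with `drift = lambda t, x: -x + 0.2 * np.sin(2 * np.pi * t)` and constant diffusion; such a simulation can serve as a sanity check for the PDE solution.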

    On large lag smoothing for hidden Markov Models

    In this article we consider the smoothing problem for hidden Markov models. Given a hidden Markov chain $\{X_n\}_{n\geq 0}$ and observations $\{Y_n\}_{n\geq 0}$, our objective is to compute $\mathbb{E}[\varphi(X_0,\dots,X_k)|y_0,\dots,y_n]$ for some real-valued, integrable functional $\varphi$ and $k$ fixed, $k \ll n$, and for some realization $(y_0,\dots,y_n)$ of $(Y_0,\dots,Y_n)$. We introduce a novel application of the multilevel Monte Carlo method with a coupling based on the Knothe--Rosenblatt rearrangement. We prove that this method can approximate the aforementioned quantity with a mean square error (MSE) of $\mathcal{O}(\epsilon^2)$ for arbitrary $\epsilon>0$ at a cost of $\mathcal{O}(\epsilon^{-2})$. This is in contrast to the direct Monte Carlo method, which requires a cost of $\mathcal{O}(n\epsilon^{-2})$ for the same MSE. The approach we suggest is, in general, not possible to implement, so the optimal transport methodology of [A. Spantini, D. Bigoni, and Y. Marzouk, J. Mach. Learn. Res., 19 (2018), pp. 2639--2709; M. Parno, T. Moselhy, and Y. Marzouk, SIAM/ASA J. Uncertain. Quantif., 4 (2016), pp. 1160--1190] is used, which directly approximates our strategy. We show that our theoretical improvements are achieved, even under approximation, in several numerical examples.
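    The cost saving above comes from the multilevel Monte Carlo telescoping sum E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], where each difference is computed from a *coupled* pair so its variance decays with the level. The toy below illustrates only the telescoping idea on E[e^Z], Z ~ N(0,1), with levels given by Taylor truncations and the trivial coupling of reusing the same Z — not the Knothe--Rosenblatt coupling of the paper:

    ```python
    import math
    import numpy as np

    def mlmc_estimate(levels, n_per_level, seed=0):
        """Telescoping MLMC estimate of E[e^Z], Z ~ N(0,1), where level l
        approximates e^z by its order-(l+1) Taylor truncation. The coupling
        evaluates both truncations on the same draws of Z, so the level
        differences z^(l+1)/(l+1)! have rapidly shrinking variance."""
        rng = np.random.default_rng(seed)
        taylor = lambda z, order: sum(z**k / math.factorial(k) for k in range(order + 1))
        est = 0.0
        for l, n in zip(range(levels + 1), n_per_level):
            z = rng.normal(size=n)                       # shared randomness = coupling
            fine = taylor(z, l + 1)
            coarse = taylor(z, l) if l > 0 else 0.0      # level 0: plain estimate
            est += np.mean(fine - coarse)
        return est
    ```

    Because the variance of the level differences decays, fewer samples are needed at the expensive fine levels, which is the mechanism behind the $\mathcal{O}(\epsilon^{-2})$ versus $\mathcal{O}(n\epsilon^{-2})$ cost gap.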

    On the Stability of Sequential Monte Carlo Methods in High Dimensions

    We investigate the stability of a Sequential Monte Carlo (SMC) method applied to the problem of sampling from a target distribution on R^d for large d. It is well known [Bengtsson, Bickel and Li, In Probability and Statistics: Essays in Honor of David A. Freedman, D. Nolan and T. Speed, eds. (2008) 316–334 IMS; see also Pushing the Limits of Contemporary Statistics (2008) 318–329 IMS; Mon. Weather Rev. 136 (2009) 4629–4640] that using a single importance sampling step, one produces an approximation for the target that deteriorates as the dimension d increases, unless the number of Monte Carlo samples N increases at an exponential rate in d. We show that this degeneracy can be avoided by introducing a sequence of artificial targets, starting from a "simple" density and moving to the one of interest, using an SMC method to sample from the sequence; see, for example, Chopin [Biometrika 89 (2002) 539–551]; see also [J. R. Stat. Soc. Ser. B Stat. Methodol. 68 (2006) 411–436, Phys. Rev. Lett. 78 (1997) 2690–2693, Stat. Comput. 11 (2001) 125–139]. Using this class of SMC methods with a fixed number of samples, one can produce an approximation for which the effective sample size (ESS) converges to a random variable ε_N as d → ∞, with 1 < ε_N < N. The convergence is achieved with a computational cost proportional to Nd². If ε_N ≪ N, we can raise its value by introducing a number of resampling steps, say m (where m is independent of d). In this case, the ESS converges to a random variable ε_{N,m} as d → ∞, and lim_{m→∞} ε_{N,m} = N. Also, we show that the Monte Carlo error for estimating a fixed-dimensional marginal expectation is of order 1/√N uniformly in d. The results imply that, in high dimensions, SMC algorithms can efficiently control the variability of the importance sampling weights and estimate fixed-dimensional marginals at a cost which is less than exponential in d, and indicate that resampling leads to a reduction in the Monte Carlo error and an increase in the ESS. All of our analysis is made under the assumption that the target density is i.i.d.
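    The strategy described above — moving from a simple density to the target through a sequence of artificial (tempered) targets, resampling when the ESS degrades — can be sketched as follows. This is a generic illustration with a standard Gaussian reference, a linear temperature ladder, and one random-walk Metropolis move per step; it is not the specific construction analysed in the paper:

    ```python
    import numpy as np

    def smc_tempering(log_target, d=50, N=500, n_temps=20, seed=0):
        """SMC sampler through tempered densities pi_beta ∝ prior^(1-beta) *
        target^beta, beta: 0 -> 1, starting from the prior N(0, I_d).
        log_target must map an (N, d) array to N log-density values."""
        rng = np.random.default_rng(seed)
        x = rng.normal(size=(N, d))               # draws from the reference N(0, I)
        logw = np.zeros(N)
        log_prior = lambda z: -0.5 * np.sum(z**2, axis=1)
        betas = np.linspace(0.0, 1.0, n_temps + 1)
        for b0, b1 in zip(betas[:-1], betas[1:]):
            # incremental importance weights between consecutive tempered targets
            logw += (b1 - b0) * (log_target(x) - log_prior(x))
            w = np.exp(logw - logw.max()); w /= w.sum()
            ess = 1.0 / np.sum(w**2)
            if ess < N / 2:                       # resample when the ESS degrades
                x = x[rng.choice(N, size=N, p=w)]
                logw[:] = 0.0
            # one Metropolis random-walk move targeting pi_{b1}
            log_pi = lambda z: (1 - b1) * log_prior(z) + b1 * log_target(z)
            prop = x + 0.5 * rng.normal(size=x.shape)
            accept = np.log(rng.uniform(size=N)) < log_pi(prop) - log_pi(x)
            x[accept] = prop[accept]
        # final resampling so the returned cloud is unweighted
        w = np.exp(logw - logw.max()); w /= w.sum()
        return x[rng.choice(N, size=N, p=w)]
    ```

    The ESS threshold N/2 and the ladder of n_temps equally spaced temperatures are common but arbitrary choices; the paper's point is that with such a scheme the ESS stabilises as d grows instead of collapsing.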