
    Limit Theorems in Hidden Markov Models

    In this paper, under mild assumptions, we derive a law of large numbers, a central limit theorem with an error estimate, an almost sure invariance principle, and a variant of the Chernoff bound in finite-state hidden Markov models. These limit theorems are of interest in certain areas of statistics and information theory. In particular, we apply the limit theorems to derive the rate of convergence of the maximum likelihood estimator in finite-state hidden Markov models.
    Comment: 35 pages
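As a quick empirical illustration of the kind of limit theorem this abstract describes (a law of large numbers for time averages of observations from a finite-state hidden Markov chain), the sketch below simulates a hypothetical two-state HMM and checks that the empirical mean approaches the mean under the stationary distribution. All parameter values are invented for the example; this is not the paper's construction.

```python
import random

random.seed(0)

# Hypothetical 2-state HMM (illustrative parameters, not from the paper):
P = [[0.9, 0.1], [0.2, 0.8]]   # P[i][j] = Pr(next state = j | current = i)
mu = [0.0, 1.0]                # observation = mu[state] + Gaussian noise

def simulate(n):
    """Simulate n observations from the hypothetical HMM."""
    state = 0
    obs = []
    for _ in range(n):
        obs.append(mu[state] + random.gauss(0.0, 0.5))
        state = 0 if random.random() < P[state][0] else 1
    return obs

# Stationary distribution: pi = pi P gives pi = (2/3, 1/3),
# so the stationary mean of the observations is 1/3.
pi = (2 / 3, 1 / 3)
target = pi[0] * mu[0] + pi[1] * mu[1]

n = 200_000
avg = sum(simulate(n)) / n   # should be close to target by the LLN
```

The paper's central limit theorem would additionally describe the Gaussian fluctuations of `avg` around `target` at scale 1/sqrt(n).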

    Sequential Monte Carlo smoothing for general state space hidden Markov models

    Computing smoothing distributions, that is, the distributions of one or more states conditional on past, present, and future observations, is a recurring problem when operating on general hidden Markov models. The aim of this paper is to provide a foundation for particle-based approximation of such distributions and to analyze, in a common unifying framework, different schemes producing such approximations. In this setting, general convergence results, including exponential deviation inequalities and central limit theorems, are established. In particular, time-uniform bounds on the marginal smoothing error are obtained under appropriate mixing conditions on the transition kernel of the latent chain. In addition, we propose an algorithm approximating the joint smoothing distribution at a cost that grows only linearly with the number of particles.
    Comment: Published at http://dx.doi.org/10.1214/10-AAP735 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org). arXiv admin note: text overlap with arXiv:1012.4183 by other authors
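A minimal bootstrap particle filter conveys the flavor of the particle approximations analyzed in the paper; the linear-Gaussian toy model, its parameter values, and the function name below are assumptions for illustration, not the paper's smoothing algorithm (which targets smoothing distributions, not just the filtering distributions computed here).

```python
import math
import random

random.seed(1)

# Hypothetical toy model:  X_t = 0.8 X_{t-1} + N(0, 1),  Y_t = X_t + N(0, 0.5^2)

def bootstrap_filter(ys, n_particles=500):
    """Bootstrap particle filter; returns the filtering mean at each step."""
    parts = [random.gauss(0, 1) for _ in range(n_particles)]
    means = []
    for y in ys:
        # Propagate particles through the latent dynamics.
        parts = [0.8 * x + random.gauss(0, 1) for x in parts]
        # Weight by the Gaussian observation likelihood.
        ws = [math.exp(-0.5 * ((y - x) / 0.5) ** 2) for x in parts]
        total = sum(ws)
        means.append(sum(w * x for w, x in zip(ws, parts)) / total)
        # Multinomial resampling.
        parts = random.choices(parts, weights=ws, k=n_particles)
    return means

# Simulate data and check that the filtered means track the latent path.
xs, ys = [], []
x = 0.0
for _ in range(100):
    x = 0.8 * x + random.gauss(0, 1)
    xs.append(x)
    ys.append(x + random.gauss(0, 0.5))

means = bootstrap_filter(ys)
rmse = math.sqrt(sum((m - x) ** 2 for m, x in zip(means, xs)) / len(xs))
```

Storing each particle's ancestry alongside `parts` would yield a (degenerate) approximation of the joint smoothing distribution at the same linear-in-particles cost the abstract mentions.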

    Efficient likelihood estimation in state space models

    Motivated by the study of asymptotic properties of the maximum likelihood estimator (MLE) in stochastic volatility (SV) models, in this paper we investigate likelihood estimation in state space models. We first prove that, under some regularity conditions, there is a consistent sequence of roots of the likelihood equation that is asymptotically normal with the inverse of the Fisher information as its variance. Under the extra assumption that the likelihood equation has a unique root for each n, there is a consistent sequence of estimators of the unknown parameters. If, in addition, the supremum of the log-likelihood function is integrable, the MLE exists and is strongly consistent. An Edgeworth expansion of the approximate solution of the likelihood equation is also established. Several examples, including Markov switching models, ARMA models, (G)ARCH models, and stochastic volatility (SV) models, are given for illustration.
    Comment: With the comments by Jens Ledet Jensen and reply to the comments. Published at http://dx.doi.org/10.1214/009053606000000614; http://dx.doi.org/10.1214/09-AOS748A; http://dx.doi.org/10.1214/09-AOS748B in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
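The notion of a consistent root of the likelihood equation can be seen in a toy autoregressive model, where the score equation has a closed-form root; the AR(1) model and its parameters below are assumptions for the sketch, not one of the paper's state space examples.

```python
import random

random.seed(2)

# Hypothetical AR(1) example:  x_t = phi * x_{t-1} + N(0, 1),  phi = 0.6.
phi_true = 0.6
xs = [0.0]
for _ in range(50_000):
    xs.append(phi_true * xs[-1] + random.gauss(0, 1))

# The conditional log-likelihood is Gaussian, so the score equation
#   sum_t (x_t - phi * x_{t-1}) * x_{t-1} = 0
# has the closed-form root below; it is consistent for phi_true.
num = sum(xs[t] * xs[t - 1] for t in range(1, len(xs)))
den = sum(xs[t - 1] ** 2 for t in range(1, len(xs)))
phi_hat = num / den
```

In genuine state space models the likelihood has no such closed form, which is what makes the paper's asymptotic analysis nontrivial.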

    Statistics of sums of correlated variables described by a matrix product ansatz

    We determine the asymptotic distribution of the sum of correlated variables described by a matrix product ansatz with finite matrices, considering variables with finite variances. When the correlation length is finite, the law of large numbers is obeyed and the rescaled sum converges to a Gaussian distribution. In contrast, when the correlation extends over the system size, we observe either a breaking of the law of large numbers, with the onset of giant fluctuations, or a generalization of the central limit theorem with a family of nonstandard limit distributions. The corresponding distributions are found to be mixtures of delta functions for the generalized law of large numbers and mixtures of Gaussian distributions for the generalized central limit theorem. Connections with statistical physics models are emphasized.
    Comment: 6 pages, 1 figure
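The dichotomy described above (ordinary law of large numbers for finite correlation length versus a mixture of delta functions when correlations span the system) can be mimicked with a two-state chain standing in for the matrix product ansatz; everything below is an illustrative stand-in, not the paper's construction.

```python
import random
import statistics

random.seed(3)

def sample_mean(switch_prob, n=2000):
    """Mean of n +/-1 variables from a two-state chain with given switch probability."""
    s = random.choice([-1, 1])
    total = 0
    for _ in range(n):
        total += s
        if random.random() < switch_prob:
            s = -s
    return total / n

# Correlation length ~ 1: sample means concentrate near 0 (LLN holds).
short = [sample_mean(0.5) for _ in range(200)]

# Correlation spans the whole system: each sample mean is exactly +1 or -1,
# i.e. the limit law is a mixture of two delta functions.
frozen = [sample_mean(0.0) for _ in range(200)]
```

The `frozen` case is the simplest example of the "generalized law of large numbers" behavior: the sum still concentrates, but on a random, sample-dependent value.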

    Convergence and Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Extrema

    The asymptotic behavior of stochastic gradient algorithms is studied. Relying on results from differential geometry (the Lojasiewicz gradient inequality), the single limit-point convergence of the algorithm iterates is demonstrated and relatively tight bounds on the convergence rate are derived. In sharp contrast to the existing asymptotic results, the new results presented here allow the objective function to have multiple and non-isolated minima. The new results also offer new insights into the asymptotic properties of several classes of recursive algorithms which are routinely used in engineering, statistics, machine learning, and operations research.
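A small experiment shows what the non-isolated-minima setting looks like: the toy objective below has an entire circle of minima, yet a noisy gradient iteration with diminishing steps still settles on a single point of that set. The objective, step-size schedule, and noise level are assumptions for the sketch, not the paper's algorithm class.

```python
import math
import random

random.seed(4)

# Toy objective whose minimum set is the whole unit circle (non-isolated):
#   f(x, y) = (x^2 + y^2 - 1)^2
def grad(x, y):
    g = 4 * (x * x + y * y - 1)
    return g * x, g * y

x, y = 1.5, 0.5
for t in range(1, 20_001):
    gx, gy = grad(x, y)
    step = 0.05 / t ** 0.6          # diminishing step size
    x -= step * (gx + random.gauss(0, 0.1))   # noisy gradient evaluations
    y -= step * (gy + random.gauss(0, 0.1))

r = math.hypot(x, y)   # distance from the origin; minima lie at r = 1
```

Single limit-point convergence is the subtle part: with a connected continuum of minima, the iterates could in principle drift along the circle forever, and the Lojasiewicz inequality is what rules this out.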