
    Uniform Markov Renewal Theory and Ruin Probabilities in Markov Random Walks

Let \{X_n, n\geq 0\} be a Markov chain on a general state space X with transition probability P and stationary probability \pi. Suppose an additive component S_n takes values in the real line R and is adjoined to the chain such that \{(X_n, S_n), n\geq 0\} is a Markov random walk. In this paper, we prove a uniform Markov renewal theorem with an estimate on the rate of convergence. This result is applied to boundary crossing problems for \{(X_n, S_n), n\geq 0\}. To be more precise, for given b\geq 0, define the stopping time \tau = \tau(b) = \inf\{n: S_n > b\}. When the drift \mu of the random walk S_n is 0, we derive a one-term Edgeworth-type asymptotic expansion for the first passage probabilities P_\pi\{\tau < m\} and P_\pi\{\tau < m, S_m < c\}, where m\leq\infty, c\leq b, and P_\pi denotes the probability under the initial distribution \pi. When \mu\neq 0, Brownian approximations for the first passage probabilities with correction terms are derived.
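The boundary crossing setup is easy to explore by simulation. Below is a minimal Monte Carlo sketch, assuming a hypothetical two-state driving chain X_n with state-dependent Gaussian increments for S_n; the transition matrix and increment parameters are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.9, 0.1],        # assumed transition matrix of X_n
              [0.2, 0.8]])
pi = np.array([2.0, 1.0]) / 3.0  # stationary distribution of P
mu = np.array([0.5, -0.5])       # state-dependent increment means
sigma = np.array([1.0, 1.0])     # state-dependent increment std devs

def first_passage_prob(b, m, n_paths=20_000):
    """Crude Monte Carlo estimate of P_pi{tau(b) <= m},
    where tau(b) = inf{n : S_n > b}."""
    hits = 0
    for _ in range(n_paths):
        x = rng.choice(2, p=pi)        # X_0 from the stationary law
        s = 0.0
        for _ in range(m):
            x = rng.choice(2, p=P[x])  # advance the driving chain
            s += rng.normal(mu[x], sigma[x])
            if s > b:                  # boundary crossed by time m
                hits += 1
                break
    return hits / n_paths

print(first_passage_prob(b=5.0, m=50))
```

Such a brute-force estimate is what the paper's Edgeworth and Brownian approximations replace with closed-form corrections.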

    Efficient likelihood estimation in state space models

Motivated by the study of asymptotic properties of the maximum likelihood estimator (MLE) in stochastic volatility (SV) models, in this paper we investigate likelihood estimation in state space models. We first prove, under some regularity conditions, that there is a consistent sequence of roots of the likelihood equation that is asymptotically normal with the inverse of the Fisher information as its variance. Under the extra assumption that the likelihood equation has a unique root for each n, there is a consistent sequence of estimators of the unknown parameters. If, in addition, the supremum of the log likelihood function is integrable, the MLE exists and is strongly consistent. An Edgeworth expansion of the approximate solution of the likelihood equation is also established. Several examples, including Markov switching models, ARMA models, (G)ARCH models and stochastic volatility (SV) models, are given for illustration.

Comment: With the comments by Jens Ledet Jensen and a reply to the comments. Published at http://dx.doi.org/10.1214/009053606000000614, http://dx.doi.org/10.1214/09-AOS748A and http://dx.doi.org/10.1214/09-AOS748B in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
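For the SV and switching examples the likelihood needs filtering approximations, but the estimation scheme is easiest to sketch in the linear Gaussian special case, where the Kalman filter gives the exact likelihood to feed a numerical maximizer. The AR(1)-plus-noise model and all parameter values below are toy assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulate data from x_t = a x_{t-1} + w_t, y_t = x_t + v_t (toy model).
a_true, q_true, r_true = 0.8, 0.5, 1.0
T = 500
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + rng.normal(0.0, np.sqrt(q_true))
y = x + rng.normal(0.0, np.sqrt(r_true), T)

def neg_loglik(params):
    """Exact negative log likelihood via the Kalman filter."""
    a, q, r = params[0], np.exp(params[1]), np.exp(params[2])
    m, p = 0.0, q / max(1.0 - a * a, 1e-6)        # stationary initialization
    nll = 0.0
    for t in range(T):
        s = p + r                                 # predictive variance of y_t
        nll += 0.5 * (np.log(2 * np.pi * s) + (y[t] - m) ** 2 / s)
        k = p / s                                 # Kalman gain
        m, p = m + k * (y[t] - m), (1 - k) * p    # measurement update
        m, p = a * m, a * a * p + q               # time update to t+1
    return nll

fit = minimize(neg_loglik, x0=[0.5, 0.0, 0.0], method="L-BFGS-B")
print(fit.x[0], np.exp(fit.x[1]), np.exp(fit.x[2]))  # estimates of a, q, r
```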

    Asymptotic operating characteristics of an optimal change point detection in hidden Markov models

Let \xi_0, \xi_1, \ldots, \xi_{\omega-1} be observations from the hidden Markov model with probability distribution P^{\theta_0}, and let \xi_\omega, \xi_{\omega+1}, \ldots be observations from the hidden Markov model with probability distribution P^{\theta_1}. The parameters \theta_0 and \theta_1 are given, while the change point \omega is unknown. The problem is to raise an alarm as soon as possible after the distribution changes from P^{\theta_0} to P^{\theta_1}, but to avoid false alarms. Specifically, we seek a stopping rule N which allows us to observe the \xi's sequentially, such that E_\infty N is large and, subject to this constraint, \sup_k E_k(N - k \mid N\geq k) is as small as possible. Here E_k denotes expectation under the change point k, and E_\infty denotes expectation under the hypothesis of no change. In this paper we investigate the performance of the Shiryayev-Roberts-Pollak (SRP) rule for change point detection in the dynamic system of hidden Markov models. By making use of the Markov chain representation of the likelihood function, the structure of the asymptotically minimax policy and of the Bayes rule, and sequential hypothesis testing theory for Markov random walks, we show that the SRP procedure is asymptotically minimax in the sense of Pollak [Ann. Statist. 13 (1985) 206-227]. Next, we present a second-order asymptotic approximation for the expected stopping time of such a stopping scheme when \omega = 1. Motivated by the sequential analysis in hidden Markov models, a nonlinear renewal theory for Markov random walks is also given.

Comment: Published at http://dx.doi.org/10.1214/009053604000000580 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
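The detection statistic behind the SRP rule is a one-line recursion, R_n = (1 + R_{n-1}) * LR_n, stopped the first time it exceeds a threshold (the SRP variant randomizes the starting value R_0). A minimal sketch, assuming the simplified i.i.d. Gaussian mean-shift case with R_0 = 0 rather than the hidden Markov likelihood ratios of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def shiryaev_roberts_alarm(xs, mu0=0.0, mu1=1.0, sigma=1.0, threshold=100.0):
    """Return the first n with R_n >= threshold, or None if no alarm."""
    R = 0.0
    for n, x in enumerate(xs, start=1):
        # one-observation likelihood ratio f_{theta1}(x) / f_{theta0}(x)
        lr = np.exp((x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2)
        R = (1.0 + R) * lr              # Shiryaev-Roberts recursion
        if R >= threshold:
            return n
    return None

# Change point omega = 200: N(0,1) observations, then N(1,1).
xs = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 300)])
print(shiryaev_roberts_alarm(xs))
```

In the hidden Markov setting of the paper, the per-observation likelihood ratio above would instead be computed by running prediction filters under \theta_0 and \theta_1.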

    Estimation in hidden Markov models via efficient importance sampling

Given a sequence of observations from a discrete-time, finite-state hidden Markov model, we would like to estimate the sampling distribution of a statistic. The bootstrap method is employed to approximate the confidence regions of a multi-dimensional parameter. We propose an importance sampling formula for efficient simulation in this context. Our approach consists of constructing a locally asymptotically normal (LAN) family of probability distributions around the default resampling rule and then minimizing the asymptotic variance within the LAN family. The solution of this minimization problem characterizes the asymptotically optimal resampling scheme, which is given by a tilting formula. The implementation of the tilting formula is facilitated by solving a Poisson equation. A few numerical examples are given to demonstrate the efficiency of the proposed importance sampling scheme.

Comment: Published at http://dx.doi.org/10.3150/07-BEJ5163 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
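The paper's tilt is derived for HMM resampling via a Poisson equation, but the generic tilt-and-reweight mechanics can be sketched in an i.i.d. toy problem: estimate a small bootstrap tail probability by resampling from an exponentially tilted empirical distribution and undoing the tilt with importance weights. The threshold b and tilt parameter theta below are ad hoc illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

data = rng.normal(0.0, 1.0, 50)   # observed sample (illustrative)
n = len(data)
b = 0.6                           # tail threshold for the resample mean
theta = 0.6                       # tilt parameter; rough choice putting
                                  # the tilted mean near b

w = np.exp(theta * data)
p_tilt = w / w.sum()              # exponentially tilted resampling weights

n_boot = 10_000
est = 0.0
for _ in range(n_boot):
    idx = rng.choice(n, size=n, p=p_tilt)     # tilted bootstrap resample
    # importance weight: prod_k (1/n) / p_tilt[idx_k], computed in logs
    log_w = -np.sum(np.log(n * p_tilt[idx]))
    if data[idx].mean() > b:
        est += np.exp(log_w)
print(est / n_boot)               # IS estimate of P*{resample mean > b}
```

The default (untilted) bootstrap would rarely hit the event and give a high-variance estimate; tilting concentrates the resamples where the event happens and the weights correct the bias.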

    Automated dynamic analytical model improvement for damped structures

A method is described to improve a linear, nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) the ability to properly treat the complex-valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computational efficiency, achieved without eigensolutions or inversion of a large matrix.
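As a toy illustration of the "smallest change" principle only (not the paper's algorithm, which handles complex modes of damped models and avoids large-matrix operations), here is a minimum-Frobenius-norm stiffness correction that exactly reproduces a set of measured real modes; the 3-DOF matrices are invented for the example.

```python
import numpy as np

# Analytical model (invented 3-DOF example): mass M and stiffness K.
M = np.eye(3)
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

# "Measured" modal data: eigenpairs of a perturbed true model.
K_true = K.copy()
K_true[0, 0] = 2.3
lam, Phi = np.linalg.eigh(K_true)   # M = I, so the standard problem suffices
lam, Phi = lam[:2], Phi[:, :2]      # pretend only two modes were measured

# Residual of the analytical model on the measured modes, column by column:
# r_j = lambda_j M phi_j - K phi_j, so the constraint is Delta_K @ Phi = R.
R = M @ Phi * lam - K @ Phi
# Minimum-Frobenius-norm solution of Delta_K @ Phi = R (note: this simple
# formula does not enforce symmetry of the correction).
Delta_K = R @ np.linalg.pinv(Phi)

resid = (K + Delta_K) @ Phi - M @ Phi * lam
print(np.abs(resid).max())          # ~0: the measured modes are matched
```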

    Multi-armed bandit problem with precedence relations

Consider a multi-phase project management problem where the decision maker needs to deal with two issues: (a) how to allocate resources to projects within each phase, and (b) when to enter the next phase, so that the total expected reward is as large as possible. We formulate the problem as a multi-armed bandit problem with precedence relations. In Chan, Fuh and Hu (2005), a class of asymptotically optimal arm-pulling strategies is constructed to minimize the shortfall from the perfect-information payoff. Here we further explore optimality properties of the proposed strategies. First, we show that the efficiency benchmark, which is given by the regret lower bound, reduces to those in Lai and Robbins (1985), Hu and Wei (1989), and Fuh and Hu (2000). This implies that the proposed strategy is also optimal under the settings of the aforementioned papers. Second, we establish the super-efficiency of the proposed strategies when the bad set is empty. Third, we show that they are still optimal with a constant switching cost between arms. In addition, we prove that Wald's equation holds for Markov chains under a Harris recurrence condition, which is an important tool in studying the efficiency of the proposed strategies.

Comment: Published at http://dx.doi.org/10.1214/074921706000001067 in the IMS Lecture Notes Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org).
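The Lai-Robbins benchmark referenced above is attained by index-type allocation rules. As a point of reference (not the paper's precedence-relation strategy itself), here is a minimal sketch of the standard UCB1 index rule for independent Bernoulli arms:

```python
import numpy as np

rng = np.random.default_rng(4)

def ucb1(p_true, horizon=10_000):
    """Play Bernoulli arms with means p_true; return total pulls per arm."""
    k = len(p_true)
    pulls = np.zeros(k, dtype=int)
    means = np.zeros(k)
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                    # pull each arm once to start
        else:
            ucb = means + np.sqrt(2 * np.log(t) / pulls)
            arm = int(np.argmax(ucb))      # pull the most optimistic arm
        reward = rng.random() < p_true[arm]
        pulls[arm] += 1
        means[arm] += (reward - means[arm]) / pulls[arm]  # running average
    return pulls

print(ucb1([0.4, 0.5, 0.6]))   # most pulls should go to the 0.6 arm
```

Rules of this type pull each suboptimal arm only O(log n) times, which is the logarithmic regret order the lower bounds above certify as unimprovable.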