
    Uniform Markov Renewal Theory and Ruin Probabilities in Markov Random Walks

    Full text link
    Let \{X_n, n\geq 0\} be a Markov chain on a general state space X with transition probability P and stationary probability \pi. Suppose an additive component S_n takes values in the real line R and is adjoined to the chain such that \{(X_n, S_n), n\geq 0\} is a Markov random walk. In this paper, we prove a uniform Markov renewal theorem with an estimate on the rate of convergence. This result is applied to boundary crossing problems for \{(X_n, S_n), n\geq 0\}. To be more precise, for given b\geq 0, define the stopping time \tau = \tau(b) = \inf\{n: S_n > b\}. When the drift \mu of the random walk S_n is 0, we derive a one-term Edgeworth-type asymptotic expansion for the first passage probabilities P_\pi\{\tau < m\} and P_\pi\{\tau < m, S_m < c\}, where m\leq\infty, c\leq b, and P_\pi denotes the probability under the initial distribution \pi. When \mu\neq 0, Brownian approximations for the first passage probabilities with correction terms are derived.
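
    To make the boundary-crossing quantities above concrete, here is a minimal Monte Carlo sketch (not from the paper) that estimates the first passage probability P_\pi\{\tau(b) < m\} for a toy Markov random walk; the two-state driving chain, Gaussian increments, and all parameter values are illustrative assumptions.

```python
# Monte Carlo sketch: estimate P_pi(tau(b) < m) for a toy Markov random walk.
# The two-state chain and Gaussian increment parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.9, 0.1],           # transition matrix of the driving chain X_n
              [0.2, 0.8]])
pi = np.array([2/3, 1/3])           # its stationary distribution (pi P = pi)
mu = np.array([0.05, -0.10])        # state-dependent increment means
sigma = np.array([1.0, 1.5])        # state-dependent increment std deviations

def crosses_before(b, m):
    """Return 1.0 if S_n exceeds b before time m along one simulated path."""
    x = rng.choice(2, p=pi)          # start the chain from pi, matching P_pi
    s = 0.0
    for _ in range(m):
        s += rng.normal(mu[x], sigma[x])   # additive component S_n
        if s > b:
            return 1.0
        x = rng.choice(2, p=P[x])    # advance the driving chain
    return 0.0

b, m, reps = 5.0, 200, 20_000
estimate = np.mean([crosses_before(b, m) for _ in range(reps)])
print(f"Monte Carlo estimate of P_pi(tau({b}) < {m}): {estimate:.4f}")
```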

    Efficient likelihood estimation in state space models

    Full text link
    Motivated by studying asymptotic properties of the maximum likelihood estimator (MLE) in stochastic volatility (SV) models, in this paper we investigate likelihood estimation in state space models. We first prove that, under some regularity conditions, there is a consistent sequence of roots of the likelihood equation that is asymptotically normal with the inverse of the Fisher information as its variance. Under the additional assumption that the likelihood equation has a unique root for each n, there is a consistent sequence of estimators of the unknown parameters. If, in addition, the supremum of the log likelihood function is integrable, the MLE exists and is strongly consistent. An Edgeworth expansion of the approximate solution of the likelihood equation is also established. Several examples, including Markov switching models, ARMA models, (G)ARCH models and stochastic volatility (SV) models, are given for illustration. Comment: With the comments by Jens Ledet Jensen and reply to the comments. Published at http://dx.doi.org/10.1214/009053606000000614; http://dx.doi.org/10.1214/09-AOS748A; http://dx.doi.org/10.1214/09-AOS748B in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
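
    The paper works with general state space models; as a hedged illustration of likelihood estimation in the simplest such model, the sketch below evaluates the exact Gaussian likelihood of a local-level model with a Kalman filter and maximizes it numerically. The model, parameterization, and helper names are assumptions for illustration, not the SV models studied in the paper.

```python
# Minimal sketch (not the paper's SV setting): maximum likelihood in a linear
# Gaussian local-level model y_t = a_t + eps_t, a_{t+1} = a_t + eta_t, where the
# likelihood is evaluated exactly by a Kalman filter and maximized numerically.
import numpy as np
from scipy.optimize import minimize

def negative_log_likelihood(params, y):
    log_var_eps, log_var_eta = params            # log-variances keep them positive
    var_eps, var_eta = np.exp(log_var_eps), np.exp(log_var_eta)
    a, p = 0.0, 1e7                              # diffuse-ish initialization
    nll = 0.0
    for yt in y:
        f = p + var_eps                          # one-step prediction variance of y_t
        v = yt - a                               # prediction error
        nll += 0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p / f                                # Kalman gain
        a, p = a + k * v, p * (1 - k) + var_eta  # filter update + next prediction
    return nll

rng = np.random.default_rng(1)
alpha = np.cumsum(rng.normal(0, 0.3, 500))       # simulated latent level
y = alpha + rng.normal(0, 1.0, 500)              # observations

fit = minimize(negative_log_likelihood, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
print("estimated (var_eps, var_eta):", np.exp(fit.x))
```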

    Asymptotic operating characteristics of an optimal change point detection in hidden Markov models

    Full text link
    Let \xi_0, \xi_1, ..., \xi_{\omega-1} be observations from the hidden Markov model with probability distribution P^{\theta_0}, and let \xi_\omega, \xi_{\omega+1}, ... be observations from the hidden Markov model with probability distribution P^{\theta_1}. The parameters \theta_0 and \theta_1 are given, while the change point \omega is unknown. The problem is to raise an alarm as soon as possible after the distribution changes from P^{\theta_0} to P^{\theta_1}, but to avoid false alarms. Specifically, we seek a stopping rule N which allows us to observe the \xi's sequentially, such that E_\infty N is large and, subject to this constraint, sup_k E_k(N - k | N\geq k) is as small as possible. Here E_k denotes expectation under the change point k, and E_\infty denotes expectation under the hypothesis of no change whatever. In this paper we investigate the performance of the Shiryayev-Roberts-Pollak (SRP) rule for change point detection in the dynamic system of hidden Markov models. By making use of a Markov chain representation of the likelihood function, the structure of the asymptotically minimax policy and of the Bayes rule, and sequential hypothesis testing theory for Markov random walks, we show that the SRP procedure is asymptotically minimax in the sense of Pollak [Ann. Statist. 13 (1985) 206-227]. Next, we present a second-order asymptotic approximation for the expected stopping time of such a stopping scheme when \omega = 1. Motivated by the sequential analysis in hidden Markov models, a nonlinear renewal theory for Markov random walks is also given. Comment: Published at http://dx.doi.org/10.1214/009053604000000580 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
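
    As a rough illustration of the Shiryayev-Roberts recursion underlying the SRP rule, the sketch below detects a mean shift in the much simpler i.i.d. Gaussian case; the hidden Markov structure, the randomized head start of the Pollak version, and the threshold calibration from the paper are not modeled, and all parameters are illustrative assumptions.

```python
# Sketch of the Shiryaev-Roberts stopping rule for the simpler i.i.d. Gaussian
# mean-shift case (the paper treats the harder hidden Markov model setting).
# Threshold, means, and change point below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

mu0, mu1, threshold, omega = 0.0, 1.0, 1e4, 300
xs = np.concatenate([rng.normal(mu0, 1, omega - 1),   # pre-change observations
                     rng.normal(mu1, 1, 1000)])        # post-change observations

def shiryaev_roberts_alarm(xs, mu0, mu1, threshold):
    """Return the first n at which R_n = (1 + R_{n-1}) * LR_n crosses the threshold."""
    r = 0.0
    for n, x in enumerate(xs, start=1):
        lr = np.exp((mu1 - mu0) * x - 0.5 * (mu1**2 - mu0**2))  # likelihood ratio
        r = (1.0 + r) * lr
        if r >= threshold:
            return n
    return None

print("alarm raised at n =", shiryaev_roberts_alarm(xs, mu0, mu1, threshold))
```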

    Estimation in hidden Markov models via efficient importance sampling

    Full text link
    Given a sequence of observations from a discrete-time, finite-state hidden Markov model, we would like to estimate the sampling distribution of a statistic. The bootstrap method is employed to approximate the confidence regions of a multi-dimensional parameter. We propose an importance sampling formula for efficient simulation in this context. Our approach consists of constructing a locally asymptotically normal (LAN) family of probability distributions around the default resampling rule and then minimizing the asymptotic variance within the LAN family. The solution of this minimization problem characterizes the asymptotically optimal resampling scheme, which is given by a tilting formula. The implementation of the tilting formula is facilitated by solving a Poisson equation. A few numerical examples are given to demonstrate the efficiency of the proposed importance sampling scheme. Comment: Published at http://dx.doi.org/10.3150/07--BEJ5163 in the Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
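
    The optimal resampling scheme in the paper is an exponential tilting of the default bootstrap rule; as a hedged, self-contained illustration of why tilting helps, the sketch below estimates a small tail probability for an i.i.d. Gaussian sum by exponentially tilted importance sampling and compares it with naive Monte Carlo. The choice of tilt theta = b/n and all constants are assumptions for this toy example.

```python
# Minimal exponential-tilting sketch (i.i.d. Gaussian sum, not the paper's HMM
# bootstrap setting): estimate the rare probability P(S_n > b) where S_n is a
# sum of n standard normals, tilting each increment by theta = b / n.
import numpy as np

rng = np.random.default_rng(3)
n, b, reps = 50, 30.0, 100_000
theta = b / n                        # tilt so the tilted mean increment is b / n

# Under the tilt, increments are N(theta, 1); the per-path likelihood ratio is
# exp(-theta * S_n + n * theta**2 / 2), since the cgf of N(0, 1) is theta^2 / 2.
x = rng.normal(theta, 1.0, size=(reps, n))
s = x.sum(axis=1)
weights = np.exp(-theta * s + n * theta**2 / 2)
is_estimate = np.mean(weights * (s > b))

# Naive Monte Carlo for comparison (almost never sees the event at this b).
naive = np.mean(rng.normal(0.0, 1.0, size=(reps, n)).sum(axis=1) > b)

print(f"importance sampling: {is_estimate:.3e}, naive: {naive:.3e}")
```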

    The bootstrap method for Markov chains

    Get PDF
    Let \{X_n, n\geq 0\} be a homogeneous ergodic (positive recurrent, irreducible and aperiodic) Markov chain with countable state space S and transition probability matrix P = (p_{ij}). The problem of estimating P and the distribution of the hitting time T_\delta of a state \delta arises in several areas of applied probability. A recent resampling technique called the bootstrap, proposed by Efron (1) in 1979, has proved useful in applied statistics and probability. The application of the bootstrap method to the finite state Markov chain case originated in the paper of Kulperger and Prakasa Rao (2).

    Suppose x = (x_0, x_1, ..., x_n) is a realization of the Markov chain \{X_n, n\geq 0\}. Let P_n be the maximum likelihood estimator of P based on the observed data x. The bootstrap method for estimating the sampling distribution H_n of R(x, P) \equiv \sqrt{n}(P_n - P) can be described as follows: (1) Construct an estimate of the transition probability matrix P based on the observed realization x, such as the maximum likelihood estimator P_n. (2) With P_n as its transition probability, generate a Markov chain realization of N_n steps, x^* = (x^*_0, x^*_1, ..., x^*_{N_n}). Call this the bootstrap sample, and let \tilde{P}_n be the bootstrap maximum likelihood estimator of P_n. (3) Approximate the sampling distribution H_n of R \equiv R(x, P) by the conditional distribution H^*_n of R^* \equiv R(x^*, P_n) \equiv \sqrt{N_n}(\tilde{P}_n - P_n) given x.

    Theoretical justification of the above method consists in showing that H^*_n is asymptotically close to H_n. It is well known that \sqrt{n}(P_n - P) \to N(0, \Sigma_P) in distribution, where \Sigma_P is the variance-covariance matrix and is continuous as a function of P with respect to the supremum norm on the class of k x k stochastic matrices. Thus, the bootstrap method will be justified if we show that H^*_n also converges to N(0, \Sigma_P) in distribution. The finite state space case was proved by Kulperger and Prakasa Rao (2). In this paper, we give an alternative proof of this result and generalize it to the infinite state Markov chain.

    Next, since P_n converges to P, the above problem may be approached via the asymptotic behavior of a double array of Markov chains, where the transition probability matrix for the nth row converges to a limit. This leads to our third main result, which concerns the central limit theorem for a double array of Harris chains.

    (1) B. Efron. Bootstrap methods: another look at the jackknife. Ann. Statist. 7 (1979): 1-26. (2) R. J. Kulperger and B. L. S. Prakasa Rao. Bootstrapping a finite state Markov chain. To appear in Sankhya (1989).
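
    The three-step bootstrap procedure described above translates directly into a short simulation; the sketch below is a minimal illustration for a two-state chain, where the true transition matrix, chain length, and number of bootstrap replicates are illustrative choices rather than anything prescribed in the text.

```python
# Minimal sketch of the three bootstrap steps described above for a small finite
# chain (state space, chain length, and bootstrap size are illustrative choices).
import numpy as np

rng = np.random.default_rng(4)

def mle_transition_matrix(path, k):
    """Step (1): MLE P_n from observed transition counts (assumes every state is visited)."""
    counts = np.zeros((k, k))
    for a, b in zip(path[:-1], path[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def simulate_chain(P, n, rng):
    """Generate a path of length n from transition matrix P (uniform start)."""
    k = P.shape[0]
    path = [rng.integers(k)]
    for _ in range(n - 1):
        path.append(rng.choice(k, p=P[path[-1]]))
    return np.array(path)

# Observed realization x from the (unknown) true chain.
P_true = np.array([[0.7, 0.3], [0.4, 0.6]])
x = simulate_chain(P_true, 1000, rng)
P_n = mle_transition_matrix(x, 2)

# Steps (2)-(3): resample chains from P_n and use sqrt(N_n)(P~_n - P_n) to
# approximate the sampling distribution H_n of sqrt(n)(P_n - P).
boot_stats = []
for _ in range(300):
    x_star = simulate_chain(P_n, 1000, rng)                 # bootstrap sample
    P_tilde = mle_transition_matrix(x_star, 2)              # bootstrap MLE
    boot_stats.append(np.sqrt(len(x_star)) * (P_tilde - P_n))

print("bootstrap std of sqrt(n)(P_n - P)[0, 1]:", np.std([s[0, 1] for s in boot_stats]))
```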

    Efficient Importance Sampling for Rare Event Simulation with Applications

    Get PDF
    Importance sampling is known as a powerful tool for reducing the variance of the Monte Carlo estimator in rare event simulation. Based on the criterion of minimizing the variance of the Monte Carlo estimator, we propose a simple and general account for finding the optimal tilting measure. To this end, we first obtain an explicit expression of the optimal alternative distribution, and then propose a recursive approximation algorithm for the tilting measure. The proposed algorithm is quite general, covering many interesting examples, and can also be applied to a locally asymptotically normal (LAN) family around the original distribution. To illustrate the broad applicability of our method, we study value-at-risk (VaR) computation in financial risk management and bootstrap confidence regions in statistical inference.
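
    As a hedged illustration of how a tilted sampler feeds into VaR computation, the sketch below estimates the tail probability of a toy standard normal loss by exponential tilting and inverts it by bisection to obtain a quantile; the Gaussian loss model, the fixed tilt, and the bisection search are assumptions, not the recursive algorithm for the optimal tilting measure proposed here.

```python
# Minimal sketch relating tilted sampling to VaR (illustrative only: a Gaussian
# loss model, a fixed tilt, and a bisection search are assumptions, not the
# paper's recursive algorithm for the optimal tilting measure).
import numpy as np

rng = np.random.default_rng(5)
reps = 200_000
theta = 2.5                                   # tilt toward the loss tail
z = rng.normal(theta, 1.0, reps)              # sample losses under the tilted law
weights = np.exp(-theta * z + theta**2 / 2)   # likelihood ratio back to N(0, 1)

def tail_probability(b):
    """Importance-sampling estimate of P(L > b) for a standard normal loss L."""
    return np.mean(weights * (z > b))

def value_at_risk(alpha, lo=0.0, hi=6.0, iters=60):
    """Bisection on b so that the estimated P(L > b) equals 1 - alpha."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if tail_probability(mid) > 1 - alpha else (lo, mid)
    return 0.5 * (lo + hi)

print("estimated 99.9% VaR:", value_at_risk(0.999))   # exact value is about 3.09
```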

    Multi-armed bandit problem with precedence relations

    Full text link
    Consider a multi-phase project management problem where the decision maker needs to deal with two issues: (a) how to allocate resources to projects within each phase, and (b) when to enter the next phase, so that the total expected reward is as large as possible. We formulate the problem as a multi-armed bandit problem with precedence relations. In Chan, Fuh and Hu (2005), a class of asymptotically optimal arm-pulling strategies was constructed to minimize the shortfall from the perfect-information payoff. Here we further explore optimality properties of the proposed strategies. First, we show that the efficiency benchmark, which is given by the regret lower bound, reduces to those of Lai and Robbins (1985), Hu and Wei (1989), and Fuh and Hu (2000). This implies that the proposed strategy is also optimal under the settings of the aforementioned papers. Secondly, we establish the super-efficiency of the proposed strategies when the bad set is empty. Thirdly, we show that they remain optimal when there is a constant switching cost between arms. In addition, we prove that Wald's equation holds for Markov chains under a Harris recurrence condition, which is an important tool in studying the efficiency of the proposed strategies. Comment: Published at http://dx.doi.org/10.1214/074921706000001067 in the IMS Lecture Notes Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org).
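
    For context on the classical benchmark that the regret lower bound reduces to, the sketch below runs a UCB1-style index policy in the standard multi-armed bandit setting of Lai and Robbins (1985); the multi-phase structure and precedence relations of the paper are not modeled, and the Bernoulli arms and horizon are illustrative assumptions.

```python
# Sketch of the classical bandit setting that the regret lower bound above
# reduces to (a UCB1-style index rule); the multi-phase precedence structure
# of the paper is not modeled, and all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(6)
means = np.array([0.5, 0.6, 0.7])        # unknown Bernoulli arm means
horizon = 10_000
pulls = np.zeros(3)
rewards = np.zeros(3)

for t in range(1, horizon + 1):
    if t <= 3:
        arm = t - 1                      # pull each arm once to initialize
    else:
        ucb = rewards / pulls + np.sqrt(2 * np.log(t) / pulls)   # UCB1 index
        arm = int(np.argmax(ucb))
    pulls[arm] += 1
    rewards[arm] += rng.binomial(1, means[arm])

pseudo_regret = horizon * means.max() - np.dot(pulls, means)
print("expected (pseudo-)regret:", pseudo_regret, "pull counts:", pulls)
```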