
    Coding with partially hidden Markov models


    Consistent order estimation and minimal penalties

    Consider an i.i.d. sequence of random variables whose distribution f* lies in one of a nested family of models M_q, q >= 1. The smallest index q* such that M_{q*} contains f* is called the model order. We establish strong consistency of the penalized likelihood order estimator in a general setting with penalties of order eta(q) log log n, where eta(q) is a dimensional quantity. Moreover, such penalties are shown to be minimal. In contrast to previous work, an a priori upper bound on the model order is not assumed. The results rely on a sharp characterization of the pathwise fluctuations of the generalized likelihood ratio statistic under entropy assumptions on the model classes. Our results are applied to the geometrically complex problem of location mixture order estimation, which is widely used but poorly understood.
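    The penalized likelihood rule described in the abstract can be illustrated on a toy nested family: categorical models M_q supported on the first q symbols, with eta(q) = q - 1 free parameters. The example below is a minimal sketch assuming this toy family and an arbitrary penalty constant c; neither is taken from the paper.

    ```python
    import math
    from collections import Counter

    def order_estimate(sample, q_max, c=1.0):
        """Penalized MLE order estimate over the toy nested family
        M_q = {distributions on {0, ..., q-1}}, eta(q) = q - 1.
        Minimizes -loglik(q) + c * eta(q) * log log n.
        The constant c and the family are illustrative assumptions."""
        n = len(sample)
        counts = Counter(sample)
        best_q, best_score = None, float("inf")
        for q in range(1, q_max + 1):
            # If a symbol >= q was observed, f* is outside M_q
            # (likelihood zero), so skip this order.
            if any(s >= q for s in counts):
                continue
            loglik = sum(k * math.log(k / n) for k in counts.values())
            penalty = c * (q - 1) * math.log(math.log(n))
            score = -loglik + penalty
            if score < best_score:
                best_q, best_score = q, score
        return best_q

    sample = [0, 1, 2, 1, 0, 2, 1, 0] * 50   # support {0, 1, 2}, so q* = 3
    print(order_estimate(sample, q_max=10))  # -> 3
    ```

    Because the maximized likelihood is identical for every q large enough to contain the support, the increasing penalty makes the estimator pick the smallest feasible order, which is the behavior the consistency result formalizes. Note that q_max here is only a loop bound for the demo; the paper's point is that no a priori bound on q* is needed.
    
    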

    Relevant States and Memory in Markov Chain Bootstrapping and Simulation

    Markov chain theory is proving to be a powerful approach to bootstrapping highly nonlinear time series. In this work we provide a method to estimate the memory of a Markov chain (i.e., its order) and to identify its relevant states. In particular, the choice of memory lags and the aggregation of irrelevant states are obtained by looking for regularities in the transition probabilities. Our approach is based on an optimization model. More specifically, we consider two competing objectives that a researcher will in general pursue when bootstrapping: preserving the "structural" similarity between the original and the simulated series, and ensuring a controlled diversification of the latter. A discussion based on information theory is developed to define the desirable properties of such optimality criteria. Two numerical tests verify the effectiveness of the proposed method.
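    The memory-selection idea, choosing the order by looking for regularities in the transition probabilities, can be sketched as follows. This is a simplified heuristic, not the paper's optimization model: it picks the smallest order at which adding one more lag leaves the empirical conditional distributions essentially unchanged in total variation; the threshold `tol` is an assumption, and state aggregation is omitted.

    ```python
    from collections import Counter, defaultdict

    def transition_probs(series, order):
        """Empirical P(next symbol | last `order` symbols)."""
        counts = defaultdict(Counter)
        for i in range(order, len(series)):
            counts[tuple(series[i - order:i])][series[i]] += 1
        return {ctx: {s: k / sum(c.values()) for s, k in c.items()}
                for ctx, c in counts.items()}

    def estimate_order(series, max_order, tol=0.05):
        """Smallest order whose contexts already determine the next-step
        law: extending the memory by one lag changes every conditional
        distribution by less than `tol` in total variation.
        A heuristic sketch; `tol` is an assumed threshold."""
        for order in range(1, max_order + 1):
            lo = transition_probs(series, order)
            hi = transition_probs(series, order + 1)
            worst = 0.0
            for ctx, dist in hi.items():
                base = lo.get(ctx[1:], {})           # drop the oldest lag
                symbols = set(dist) | set(base)
                tv = 0.5 * sum(abs(dist.get(s, 0.0) - base.get(s, 0.0))
                               for s in symbols)
                worst = max(worst, tv)
            if worst < tol:
                return order
        return max_order

    # A deterministic cycle 0 -> 1 -> 2 -> 0 is order 1: one lag suffices.
    print(estimate_order([0, 1, 2] * 200, max_order=3))  # -> 1
    ```

    The same "regularity" test motivates the aggregation step in the paper: states whose conditional distributions are (nearly) identical carry no extra information about the next step and can be merged, which is what their optimization model trades off against diversification of the simulated series.
    
    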