
    Stationary distributions of continuous-time Markov chains: a review of theory and truncation-based approximations

    Computing the stationary distributions of a continuous-time Markov chain involves solving a set of linear equations. In most cases of interest, the number of equations is infinite or too large for the system to be solved analytically or numerically. Several approximation schemes overcome this issue by truncating the state space to a manageable size. In this review, we first give a comprehensive theoretical account of the stationary distributions and their relation to the long-term behaviour of the Markov chain, one that is readily accessible to non-experts and free of the irreducibility assumptions made in standard texts. We then review truncation-based approximation schemes, paying particular attention to their convergence and to the errors they introduce, and we illustrate their performance with an example of a stochastic reaction network of relevance in biology and chemistry. We conclude by elaborating on computational trade-offs associated with error control and some open questions.
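    As a concrete illustration of the truncation idea, here is a minimal Python sketch that truncates the generator of a birth-death chain and solves the resulting finite linear system. The M/M/1-type rates and the simple drop-the-outflow truncation are illustrative assumptions, not any particular scheme from the review; for this example the exact stationary law is geometric, so the truncation error can be checked directly as the truncation level grows.

        import numpy as np

        def truncated_stationary(birth, death, n):
            """Approximate the stationary distribution of a birth-death CTMC by
            truncating the state space to {0, ..., n-1} and solving pi @ Q = 0."""
            Q = np.zeros((n, n))
            for i in range(n):
                if i + 1 < n:
                    Q[i, i + 1] = birth(i)   # up-jump rate (dropped at the boundary)
                if i > 0:
                    Q[i, i - 1] = death(i)   # down-jump rate
                Q[i, i] = -Q[i].sum()        # conservative diagonal
            # Replace one balance equation by the normalisation sum(pi) = 1.
            A = np.vstack([Q.T[:-1], np.ones(n)])
            b = np.zeros(n)
            b[-1] = 1.0
            return np.linalg.lstsq(A, b, rcond=None)[0]

        # M/M/1 queue, arrival rate 0.8, service rate 1.0: exact law is geometric(0.2).
        pi = truncated_stationary(lambda i: 0.8, lambda i: 1.0, n=50)
        print(pi[:5], pi.sum())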

    Bayesian Estimation of Inequalities with Non-Rectangular Censored Survey Data

    Synthetic indices are used in economics to measure various aspects of monetary inequality. These scalar indices take as input the distribution over a finite population, for example the population of a specific country. In this article we consider the case of the French 2004 Wealth survey. We have at hand a partial measurement of the distribution of interest, consisting of bracketed and sometimes missing data over a subsample of the population of interest. We present the statistical methodology used to obtain point and interval estimates that take the various uncertainties into account. Since the inequality indices are nonlinear in the input distribution, we rely on a simulation-based approach in which the model for wealth per household is multivariate. Using the survey data as well as matched auxiliary tax declaration data, we face a rather intricate non-rectangular multidimensional censoring pattern. For practical reasons we adopt a Bayesian approach. Inference relies on Monte Carlo approximations computed with a Markov chain Monte Carlo algorithm, namely the Gibbs sampler. The quantities of interest to the decision maker are the various inequality indices for the French population. Their distribution conditional on the data of the subsample is assumed to be normal, centered on the design-based estimates, with variance computed through linearization and accounting for the sample design and total nonresponse. Exogenous selection of the subsample, in particular the nonresponse mechanism, is assumed, and we condition on the adequate covariates.
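    A toy version of the imputation step can be sketched with a univariate normal model for a latent (say, log-wealth) variable observed only through brackets: the Gibbs sampler alternates between drawing the latent values from truncated normals and updating the parameters from flat-prior conditionals. The model, priors, and bracket structure below are illustrative assumptions, not the paper's multivariate specification; each retained (mu, sigma) draw could then be pushed through an inequality index to obtain that index's posterior distribution.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Brackets (lo, hi); +/-inf encode one-sided or fully missing observations.
        brackets = [(0.0, 1.0), (1.0, 2.0), (2.0, np.inf), (-np.inf, np.inf)] * 25
        lo = np.array([b[0] for b in brackets])
        hi = np.array([b[1] for b in brackets])
        n = len(brackets)

        def gibbs(n_iter=2000):
            mu, sigma = 0.0, 1.0
            draws = []
            for _ in range(n_iter):
                # 1) Impute latent values from the normal law truncated to each bracket.
                a, b = (lo - mu) / sigma, (hi - mu) / sigma
                z = stats.truncnorm.rvs(a, b, loc=mu, scale=sigma, random_state=rng)
                # 2) Flat-prior conditionals: mu | sigma, z and sigma | mu, z.
                mu = rng.normal(z.mean(), sigma / np.sqrt(n))
                sigma = np.sqrt(np.sum((z - mu) ** 2) / rng.chisquare(n))
                draws.append((mu, sigma))
            return np.array(draws)

        print(gibbs()[-5:])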

    Data augmentation for models based on rejection sampling

    We present a data augmentation scheme to perform Markov chain Monte Carlo inference for models where data generation involves a rejection sampling algorithm. Our idea, which seems to be missing in the literature, is a simple scheme to instantiate the rejected proposals preceding each data point. The resulting joint probability over observed and rejected variables can be much simpler than the marginal distribution over the observed variables, which often involves intractable integrals. We consider three problems, the first being the modeling of flow-cytometry measurements subject to truncation. The second is a Bayesian analysis of the matrix Langevin distribution on the Stiefel manifold, and the third, Bayesian inference for a nonparametric Gaussian process density model. The latter two are instances of problems where Markov chain Monte Carlo inference is doubly-intractable. Our experiments demonstrate superior performance over state-of-the-art sampling algorithms for such problems. Comment: 6 figures. arXiv admin note: text overlap with arXiv:1311.090
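    On a toy problem the augmentation can be made fully explicit. Suppose each observation is a N(theta, 1) proposal kept only if positive, so the observed-data likelihood carries an awkward Phi(theta) normaliser; instantiating the rejected proposals (a geometric number of truncated normal draws per observation) restores a plain normal joint likelihood with a conjugate update for theta. Everything below is an illustrative assumption standing in for the paper's flow-cytometry and manifold applications.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Observed data: N(1, 1) proposals kept only when positive.
        y = stats.truncnorm.rvs(-1.0, np.inf, loc=1.0, scale=1.0, size=200,
                                random_state=rng)

        def sampler(y, n_iter=3000):
            theta, out = 0.0, []
            for _ in range(n_iter):
                # 1) Instantiate the rejected proposals preceding each observation:
                #    a geometric number of N(theta, 1) draws conditioned to be <= 0.
                p_acc = stats.norm.cdf(theta)                  # P(proposal > 0)
                n_rej = rng.geometric(p_acc, size=y.size) - 1
                z = stats.truncnorm.rvs(-np.inf, -theta, loc=theta, scale=1.0,
                                        size=int(n_rej.sum()), random_state=rng)
                # 2) The joint likelihood of accepted + rejected draws is a plain
                #    normal likelihood, so theta has a conjugate flat-prior conditional.
                allx = np.concatenate([y, z])
                theta = rng.normal(allx.mean(), 1.0 / np.sqrt(allx.size))
                out.append(theta)
            return np.array(out)

        print(np.mean(sampler(y)[1000:]))   # posterior mean, close to the true 1.0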

    A Bayes method for a monotone hazard rate via S-paths

    A class of random hazard rates, defined as a mixture of an indicator kernel convolved with a completely random measure, is of interest. We provide an explicit characterization of the posterior distribution of this mixture hazard rate model via a finite mixture of S-paths. A closed-form and tractable Bayes estimator for the hazard rate is derived as a finite sum over S-paths. The path characterization and the estimator are proved to be Rao-Blackwellizations of an existing partition characterization and partition-sum estimator, respectively. This accentuates the importance of S-paths in Bayesian modeling of monotone hazard rates. An efficient Markov chain Monte Carlo (MCMC) method is proposed to approximate this class of estimates. It is shown that the S-path characterization also carries over to modeling with covariates via a proportional hazards model, and the proposed algorithm again applies. Numerical results are given to demonstrate the method's practicality and effectiveness. Comment: Published at http://dx.doi.org/10.1214/009053606000000047 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
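    To fix ideas about the prior (not the S-path posterior computation), a hazard in this class can be simulated by convolving the indicator kernel 1{t <= s} with a crude discretised gamma completely random measure, which yields a non-increasing hazard. The grid, the gamma jumps, and the kernel orientation below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)

        grid = np.linspace(0.0, 10.0, 200)           # atom locations s_j
        dG = rng.gamma(shape=0.05, size=grid.size)   # independent gamma jumps mu({s_j})

        def hazard(t):
            """h(t) = integral of 1{t <= s} mu(ds) = mu([t, inf)): non-increasing."""
            return dG[grid >= t].sum()

        ts = np.linspace(0.0, 10.0, 50)
        h = np.array([hazard(t) for t in ts])
        # Cumulative hazard and survival function, as a quick sanity check.
        H = np.cumsum(h) * (ts[1] - ts[0])
        S = np.exp(-H)
        print(h[:3], S[-1])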

    How to Bootstrap Aalen-Johansen Processes for Competing Risks? Handicaps, Solutions and Limitations

    Statistical inference in competing risks models is often based on the famous Aalen-Johansen estimator. Since the corresponding limit process lacks independent increments, it is typically applied together with Lin's (1997) resampling technique involving standard normal multipliers. Recently, it has been observed that this approach can be interpreted as a wild bootstrap technique and that other multipliers, such as centered Poisson variables, may lead to better finite-sample performance; see Beyersmann et al. (2013). Since the latter choice is closely related to Efron's classical bootstrap, the question arises whether this or more general weighted bootstrap versions of Aalen-Johansen processes lead to valid results. Here we analyze their asymptotic behaviour, and it turns out that such weighted bootstrap versions in general possess the wrong covariance structure in the limit. However, we explain that the weighted bootstrap can nevertheless be applied for specific null hypotheses of interest, and we also discuss its limitations for statistical inference. To this end, we introduce different consistent weighted bootstrap tests for the null hypothesis of stochastically ordered cumulative incidence functions and compare their finite-sample performance in a simulation study. Comment: Keywords: Aalen-Johansen Estimator; Bootstrap; Competing risk; Counting processes; Cumulative incidence function; Left-truncation; Right-censoring; Weighted Bootstrap
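    The multiplier mechanics can be sketched in a one-sample Nelson-Aalen toy setting: the bootstrap fluctuation replaces the martingale increments by multiplier-weighted Nelson-Aalen increments, and swapping standard normal multipliers for centred Poisson(1) ones is a one-line change. This is an illustrative stand-in for the Aalen-Johansen competing-risks setting, not the paper's tests.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 200
        T = rng.exponential(1.0, n)                  # latent event times
        C = rng.exponential(2.0, n)                  # censoring times
        X, delta = np.minimum(T, C), T <= C          # observed times, event indicators

        order = np.argsort(X)
        X, delta = X[order], delta[order]
        Y = n - np.arange(n)                         # at-risk counts at the ordered times

        def wild_paths(multiplier, B=1000):
            """Bootstrap fluctuations A*(t) - A(t) = sum_{X_i <= t} G_i dN_i / Y_i."""
            incr = delta / Y                         # Nelson-Aalen increments
            G = multiplier((B, n))                   # iid mean-0, variance-1 multipliers
            return np.cumsum(G * incr, axis=1)

        normal_paths = wild_paths(rng.standard_normal)
        poisson_paths = wild_paths(lambda s: rng.poisson(1.0, s) - 1.0)

        # Compare, e.g., the sup-norm quantiles used to calibrate confidence bands.
        print(np.quantile(np.abs(normal_paths).max(axis=1), 0.95),
              np.quantile(np.abs(poisson_paths).max(axis=1), 0.95))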

    Suppressing escape events in maps of the unit interval with demographic noise

    We explore the properties of discrete-time stochastic processes with a bounded state space, whose deterministic limit is given by a map of the unit interval. We find that, in the mesoscopic description of the system, the large jumps between successive iterates of the process can result in probability leaking out of the unit interval, despite the fact that the noise is multiplicative and vanishes at the boundaries. By including higher-order terms in the mesoscopic expansion, we are able to capture the non-Gaussian nature of the noise distribution near the boundaries, but this does not preclude the possibility of a trajectory leaving the interval. We propose a number of prescriptions for treating these escape events, and we compare the results with those obtained for the metastable behavior of the microscopic model, where escape events are not possible. We find that, rather than truncating the noise distribution, censoring it to prevent escape events leads to results that are more consistent with the microscopic model. Adding higher moments to the noise distribution does not increase the accuracy of the final results, and the higher-order distribution can be replaced by simpler Gaussian noise. Comment: 14 pages, 13 figures
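    The two prescriptions contrasted above can be sketched on a logistic map with Gaussian mesoscopic noise of strength 1/sqrt(N) that vanishes at the boundaries: "truncate" redraws the noise conditional on the iterate staying in [0, 1], while "censor" piles the escaping mass onto the boundary. The map, noise form, and parameter values are illustrative assumptions, not the paper's model.

        import numpy as np

        rng = np.random.default_rng(4)
        N = 500                                      # system size; noise ~ 1/sqrt(N)
        f = lambda x: 3.8 * x * (1.0 - x)            # deterministic map of [0, 1]

        def trajectory(mode, n_steps=20_000, x0=0.5):
            xs, x, events = np.empty(n_steps), x0, 0
            for i in range(n_steps):
                m = f(x)
                sd = np.sqrt(m * (1.0 - m) / N)      # vanishes at both boundaries
                y = m + sd * rng.standard_normal()
                if not 0.0 <= y <= 1.0:              # escape event
                    events += 1
                    if mode == "truncate":           # redraw: condition on staying inside
                        while not 0.0 <= y <= 1.0:
                            y = m + sd * rng.standard_normal()
                    else:                            # "censor": mass piles on the boundary
                        y = min(max(y, 0.0), 1.0)
                xs[i] = x = y
            return xs, events

        for mode in ("truncate", "censor"):
            xs, events = trajectory(mode)
            print(mode, xs.mean(), events)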

    Nonparametric inference for Markov processes with missing absorbing state

    This study examines nonparametric estimation of the transition probability matrix of a nonhomogeneous Markov process with a finite state space and a partially observed absorbing state. We impose a missing-at-random assumption and propose a computationally efficient nonparametric maximum pseudolikelihood estimator (NPMPLE). The estimator depends on a parametric model that is used to estimate the probability of each absorbing state for the missing observations based, potentially, on auxiliary data. For the latter model, we propose a formal goodness-of-fit test based on a residual process. Using modern empirical process theory, we show that the estimator is uniformly consistent and converges weakly to a tight mean-zero Gaussian random field. We also provide a methodology for constructing simultaneous confidence bands. Simulation studies show that the NPMPLE works well with small sample sizes and that it is robust against some degree of misspecification of the parametric model for the missing absorbing states. The method is illustrated using HIV data from sub-Saharan Africa to estimate the transition probabilities of death and disengagement from HIV care.
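    The plug-in idea behind the estimator can be sketched for two competing absorbing causes without censoring: fit a parametric (here, logistic) model for the cause on the complete cases under missing-at-random, then let each missing-cause event contribute fractionally to the cause-specific cumulative incidence. This reduction is an illustrative assumption; the paper treats general nonhomogeneous Markov processes and provides formal asymptotics and goodness-of-fit testing.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)
        n = 1000
        z = rng.normal(size=n)                            # covariate
        t = rng.exponential(1.0, n)                       # event times
        cause = np.where(rng.random(n) < 1 / (1 + np.exp(-z)), 1, 2)
        observed = rng.random(n) < 0.7                    # ~30% of causes missing (MAR)

        # Step 1: parametric model for P(cause = 1 | z), fit on complete cases.
        clf = LogisticRegression().fit(z[observed].reshape(-1, 1),
                                       cause[observed] == 1)
        p1 = clf.predict_proba(z.reshape(-1, 1))[:, 1]

        # Step 2: cause-1 cumulative incidence with fractional contributions for
        # missing-cause events (observed events contribute 0 or 1, missing ones p1).
        w1 = np.where(observed, (cause == 1).astype(float), p1)
        order = np.argsort(t)
        cif1 = np.cumsum(w1[order]) / n                   # no censoring: weighted ECDF

        print(np.sort(t)[::200].round(2), cif1[::200].round(3))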