4,213 research outputs found

    Full Open Population Capture-Recapture Models with Individual Covariates

    Traditional analyses of capture-recapture data are based on likelihood functions that explicitly integrate out all missing data. We use a complete data likelihood (CDL) to show how a wide range of capture-recapture models can be easily fitted with readily available software such as JAGS/BUGS, even when there are individual-specific time-varying covariates. The models we describe extend those that condition on first capture to include abundance parameters, or parameters related to abundance, such as population size, birth rates, or lifetime. The use of a CDL means that any missing data, including uncertain individual covariates, can be included in models without the need for customized likelihood functions. This approach also lets the modeler focus on processes of demographic interest rather than on the complexities caused by non-ignorable missing data. We illustrate with two examples: (i) open population modeling in the presence of a censored time-varying individual covariate in a full robust design, and (ii) full open population multi-state modeling in the presence of a partially observed categorical variable.
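    For orientation, here is a minimal sketch of the conditional-on-first-capture building block that the paper extends, written as a JAGS model string driven from R; the data names (y, f, N, K) are illustrative assumptions, not the authors' code. The latent alive states z[i, t] stay in the model, which is the complete-data-likelihood idea: the sampler imputes them rather than integrating them out analytically.

        library(rjags)  # assumes JAGS is installed

        # Cormack-Jolly-Seber state-space model; individuals first caught on
        # the final occasion K are assumed to be excluded from y.
        cjs <- "model {
          phi ~ dunif(0, 1)                        # survival probability
          p   ~ dunif(0, 1)                        # capture probability
          for (i in 1:N) {
            z[i, f[i]] <- 1                        # known alive at first capture
            for (t in (f[i] + 1):K) {
              z[i, t] ~ dbern(phi * z[i, t - 1])   # latent survival process
              y[i, t] ~ dbern(p * z[i, t])         # observation process
            }
          }
        }"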

    On some inferential problems with recurrent event models

    Recurrent events (RE) occur in many disciplines, such as biomedicine, engineering, actuarial science, sociology, and economics, to name a few. It is therefore important to develop dynamic models for their analysis. Of interest with data collected under RE monitoring are inferential problems pertaining to the distribution function F of the time between occurrences, or the distribution function G of the monitoring window, and their functionals, such as quantiles and means. These problems include, but are not limited to: estimating F parametrically or nonparametrically; goodness-of-fit tests for a hypothesized family of distributions; efficiency of tests; and regression-type models, or validation of models, that arise in the modeling and analysis of RE. This dissertation focuses on several inferential problems of significant importance with these types of data. The first one we dealt with is the problem of informative monitoring. Informative monitoring occurs when G contains information about F, and this information is accounted for in the inferential process through a Lehmann-type model, 1 - G = (1 - F)^β, the so-called generalized Koziol-Green model in the literature. We propose a class of inferential procedures for validating the model. The research proceeds with the development of a flexible, random-cells-based chi-square goodness-of-fit test for a hypothesized family of distributions with unknown parameters. The cells are random in the sense that they are cut freely, are functions of the data, and are not predetermined in advance as in standard chi-square-type tests. A minimum chi-square estimator is used to construct the test statistic, whose power is assessed against a sequence of Pitman-like alternatives. The last problem we considered is the efficiency, optimality, and comparison of various statistical tests for RE, both those derived in this work and those existing in the literature. The efficiency and optimality results are obtained by extending the theory of Bahadur and Wieand to RE. Asymptotic properties of the different estimators and/or statistics are presented via empirical process tools. Small-sample results from an intensive simulation study of the various procedures are presented, and these show good agreement with the asymptotic approximations. Real recurrent event data from engineering and biomedical studies are used to illustrate the various methods --Abstract, page iv
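    The Koziol-Green relation ties the monitoring window directly to F, and a quick simulation makes the role of β concrete. A minimal R sketch, under the added assumption that F is exponential(λ): then 1 - G = (1 - F)^β makes the window exponential(βλ), and the probability of observing an uncensored gap time is 1/(1 + β).

        set.seed(1)
        lambda <- 1; beta <- 0.5; n <- 1e5
        x  <- rexp(n, rate = lambda)          # inter-event times, X ~ F
        cw <- rexp(n, rate = beta * lambda)   # monitoring windows, 1 - G = (1 - F)^beta
        t.obs <- pmin(x, cw)                  # observed (possibly censored) times
        delta <- as.numeric(x <= cw)          # 1 = event observed, 0 = censored
        mean(delta)                           # approx 1 / (1 + beta) = 2/3 here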

    Managing Well Integrity using Reliability Based Models


    Dynamic Modeling and Statistical Analysis of Event Times

    This review article provides an overview of recent work in the modeling and analysis of recurrent events arising in engineering, reliability, public health, biomedicine and other areas. Recurrent event modeling possesses unique facets making it different and more difficult to handle than single event settings. For instance, the impact of an increasing number of event occurrences needs to be taken into account, the effects of covariates should be considered, potential association among the interevent times within a unit cannot be ignored, and the effects of performed interventions after each event occurrence need to be factored in. A recent general class of models for recurrent events which simultaneously accommodates these aspects is described. Statistical inference methods for this class of models are presented and illustrated through applications to real data sets. Some existing open research problems are described. Comment: Published at http://dx.doi.org/10.1214/088342306000000349 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
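    To see why the performed interventions matter, the following R sketch (hypothetical Weibull parameters, not taken from the article) simulates the two extreme effective-age regimes accommodated by such a class: perfect repair, where the unit's effective age resets after each event (a renewal process), and minimal repair, where the age keeps running (an NHPP with cumulative intensity Λ(t) = (t/θ)^β).

        set.seed(2)
        beta <- 1.5; theta <- 10; horizon <- 50
        # perfect repair: i.i.d. Weibull gap times (renewal process)
        renewal <- cumsum(rweibull(100, shape = beta, scale = theta))
        renewal <- renewal[renewal <= horizon]
        # minimal repair: NHPP event times, obtained by inverting the
        # cumulative intensity at unit-rate Poisson arrival times
        u <- cumsum(rexp(100))
        nhpp <- theta * u^(1 / beta)
        nhpp <- nhpp[nhpp <= horizon]
        c(events.renewal = length(renewal), events.nhpp = length(nhpp))

    With shape β > 1 the minimally repaired unit deteriorates, so it tends to accumulate more events over the horizon than the renewed one.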

    New approaches to parameter estimation with statistical censoring by means of the CEV algorithm: Characterization of its properties for high-performance normal processes

    The process of parameter estimation to characterize a population using algorithms is in constant development and refinement. Recent years have shown that data-based decision-making is complex when there is uncertainty generated by statistical censoring. The purpose of this article is to evaluate the effect of statistical censoring on the normal distribution, which is common in many processes. Parameter estimation properties are characterized for the conditional expected value (CEV) algorithm, using different censoring percentages and sample sizes. The estimation properties chosen for the study focus on monitoring and decision-making in industrial processes in the presence of censoring. Neira Rueda, J.; Carrión García, A. (2023). New approaches to parameter estimation with statistical censoring by means of the CEV algorithm: Characterization of its properties for high-performance normal processes. Communications in Statistics - Theory and Methods. 52(10):3557-3573. https://doi.org/10.1080/03610926.2021.1977323
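    A minimal R sketch of the conditional-expected-value idea as I read it from the abstract (an assumed reconstruction, not the authors' implementation): each right-censored normal observation is replaced by its conditional expectation E[X | X > C], and the mean and standard deviation are re-estimated until convergence.

        cev_normal <- function(x, censored, C, tol = 1e-8, maxit = 500) {
          mu <- mean(x); sigma <- sd(x)                  # start from censored data
          for (it in 1:maxit) {
            z <- (C - mu) / sigma
            w <- mu + sigma * dnorm(z) / pnorm(z, lower.tail = FALSE)  # E[X | X > C]
            xc <- ifelse(censored, w, x)                 # fill in censored values
            mu.new <- mean(xc); sigma.new <- sd(xc)
            if (abs(mu.new - mu) + abs(sigma.new - sigma) < tol) break
            mu <- mu.new; sigma <- sigma.new
          }
          c(mu = mu, sigma = sigma)
        }

        set.seed(3)
        x0 <- rnorm(200, mean = 10, sd = 2); C <- 12     # hypothetical process data
        cens <- x0 > C                                   # right-censoring at C
        cev_normal(pmin(x0, C), cens, C)                 # how close this gets to
                                                         # (10, 2) across censoring
                                                         # levels and sample sizes is
                                                         # the kind of property studied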

    Multi-State Models for Panel Data: The msm Package for R

    Panel data are observations of a continuous-time process at arbitrary times, for example, visits to a hospital to diagnose disease status. Multi-state models for such data are generally based on the Markov assumption. This article reviews the range of Markov models and their extensions which can be fitted to panel-observed data, and their implementation in the msm package for R. Transition intensities may vary between individuals, or with piecewise-constant time-dependent covariates, giving an inhomogeneous Markov model. Hidden Markov models can be used for multi-state processes which are misclassified or observed only through a noisy marker. The package is intended to be straightforward to use, flexible and comprehensively documented. Worked examples are given of the use of msm to model chronic disease progression and screening. Assessment of model fit, and potential future developments of the software, are also discussed.
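    For orientation, a minimal call in the style of the package's worked examples, using the cav heart-transplant dataset shipped with msm: a four-state model in which state 4 (death) is observed exactly, and the qmatrix argument supplies crude initial intensities whose zero entries fix the disallowed transitions.

        library(msm)
        Q <- rbind(c(0,     0.25, 0,     0.25),
                   c(0.166, 0,    0.166, 0.166),
                   c(0,     0.25, 0,     0.25),
                   c(0,     0,    0,     0))
        cav.msm <- msm(state ~ years, subject = PTNUM, data = cav,
                       qmatrix = Q, deathexact = 4)
        cav.msm   # fitted transition intensities with confidence intervals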

    Modeling repairable system failure data using NHPP reliability growth model.

    Stochastic point processes have been widely used to describe the behaviour of repairable systems. The Crow nonhomogeneous Poisson process (NHPP), often known as the power law model, is regarded as one of the best models for repairable systems. The goodness-of-fit test rejects the intensity function of the power law model for our data, and so the log-linear model was fitted and tested for goodness of fit. The Weibull Time To Event recurrent neural network (WTTE-RNN) framework, a probabilistic deep learning model for failure data, is also explored. However, we find that the WTTE-RNN framework is only appropriate for failure data with independent and identically distributed interarrival times between successive failures, and so cannot be applied to nonhomogeneous Poisson processes.
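    For reference, the power law model with intensity u(t) = λβt^(β-1) has closed-form maximum likelihood estimates when a single system is observed over (0, T]; a minimal R sketch with hypothetical failure times:

        times <- c(4.3, 10.6, 23.1, 48.0, 81.2, 131.7)   # hypothetical failure times
        T.end <- 150                                     # time-truncated observation
        n <- length(times)
        beta.hat   <- n / sum(log(T.end / times))        # shape: beta > 1 indicates
        lambda.hat <- n / T.end^beta.hat                 #   deterioration, beta < 1
        c(beta = beta.hat, lambda = lambda.hat)          #   reliability growth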