
    Analysis of the impact of lockdown on the reproduction number of the SARS-CoV-2 in Spain

    The late 2019 COVID-19 outbreak has pushed the health systems of many countries to the limit of their capacity. The most affected European countries are, so far, Italy and Spain. In both countries (and others), the authorities decreed a lockdown, with local specificities. The objective of this work is to evaluate the impact of the measures undertaken in Spain to deal with the pandemic.

    miRecSurv package: Prentice-Williams-Peterson models with multiple imputation of unknown number of previous episodes

    Left censoring can occur with relative frequency when analysing recurrent events in epidemiological studies, especially observational ones. Concretely, including individuals who were already at risk before the effective start of a cohort study may leave previously experienced episodes unrecorded, which easily leads to biased and inefficient estimates. The miRecSurv package is based on models with specific baseline hazards, with multiple imputation of the number of prior episodes, when unknown, by means of the COM-Poisson distribution, a very flexible count distribution that can handle over-, sub- and equidispersion; a stratified model depending on whether the individual had or had not previously been at risk; and the use of a frailty term. The usage of the package is illustrated by means of a real data example based on an occupational cohort study and a simulation study.
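
The COM-Poisson distribution mentioned above has probabilities proportional to λ^k / (k!)^ν, where ν < 1 gives overdispersion, ν = 1 recovers the Poisson, and ν > 1 gives underdispersion. A minimal Python sketch of that property (this is not the miRecSurv API, and the parameter values are illustrative):

```python
import math

def com_poisson_pmf(k_max, lam, nu):
    """COM-Poisson probabilities proportional to lam**k / (k!)**nu,
    normalised by truncating the infinite series at k_max (computed in
    log space to avoid overflow; fine for small lam)."""
    logs = [k * math.log(lam) - nu * math.lgamma(k + 1) for k in range(k_max + 1)]
    mx = max(logs)
    w = [math.exp(x - mx) for x in logs]
    z = sum(w)
    return [x / z for x in w]

def mean_var(p):
    m = sum(k * pk for k, pk in enumerate(p))
    v = sum((k - m) ** 2 * pk for k, pk in enumerate(p))
    return m, v

# nu < 1: overdispersed; nu = 1: exactly Poisson; nu > 1: underdispersed
for nu in (0.5, 1.0, 2.0):
    m, v = mean_var(com_poisson_pmf(100, 2.0, nu))
    print(f"nu={nu}: mean={m:.2f}, variance={v:.2f}")
```

With ν = 1 the mean and variance coincide (Poisson); moving ν below or above 1 pushes the variance above or below the mean, which is what lets one distribution cover all three dispersion regimes.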

    Correction: Analysis of zero inflated dichotomous variables from a Bayesian perspective: application to occupational health

    Zero-inflated models are generally aimed at addressing the problem that arises when two different sources generate the zero values observed in a distribution. In practice, this is because the population studied actually consists of two subpopulations: one in which the value zero occurs by default (structural zeros) and another in which it is circumstantial (sample zeros). This work proposes a new methodology to fit zero-inflated Bernoulli data from a Bayesian approach, able to distinguish between the two potential sources of zeros (structural and non-structural). The performance of the proposed methodology has been evaluated through a comprehensive simulation study, and it has been compiled as an R package freely available to the community. Its usage is illustrated by means of a real example from the field of occupational health, the phenomenon of sickness presenteeism, in which it is reasonable to think that some individuals will never be at risk of suffering it because they have not been sick during the study period (structural zeros). Without separating structural and non-structural zeros, one would jointly study general health status and presenteeism itself, obtaining potentially biased estimates because the phenomenon is implicitly underestimated by being diluted into general health status. The proposed methodology is able to distinguish two different sources of zeros (structural and non-structural) from dichotomous data, with or without covariates, in a Bayesian framework, and has been made available to any interested researcher as the bayesZIB R package (https://cran.r-project.org/package=bayesZIB).

    Analysis of zero inflated dichotomous variables from a Bayesian perspective: application to occupational health

    Background: Zero-inflated models are generally aimed at addressing the problem that arises when two different sources generate the zero values observed in a distribution. In practice, this is because the population studied actually consists of two subpopulations: one in which the value zero occurs by default (structural zeros) and another in which it is circumstantial (sample zeros). Methods: This work proposes a new methodology to fit zero-inflated Bernoulli data from a Bayesian approach, able to distinguish between the two potential sources of zeros (structural and non-structural). Results: The performance of the proposed methodology has been evaluated through a comprehensive simulation study, and it has been compiled as an R package freely available to the community. Its usage is illustrated by means of a real example from the field of occupational health, the phenomenon of sickness presenteeism, in which it is reasonable to think that some individuals will never be at risk of suffering it because they have not been sick during the study period (structural zeros). Without separating structural and non-structural zeros, one would jointly study general health status and presenteeism itself, obtaining potentially biased estimates because the phenomenon is implicitly underestimated by being diluted into general health status. Conclusions: The proposed methodology is able to distinguish two different sources of zeros (structural and non-structural) from dichotomous data, with or without covariates, in a Bayesian framework, and has been made available to any interested researcher as the bayesZIB R package (https://cran.r-project.org/package=bayesZIB).
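
The bias described above can be seen in a small simulation: in a zero-inflated Bernoulli mixture, a naive proportion estimate targets (1 − ω)·p rather than p. A minimal Python sketch (this is not the bayesZIB API; ω, p and n are illustrative values):

```python
import random

random.seed(42)

# Zero-inflated Bernoulli mixture: with probability omega a subject is a
# structural zero (never at risk, e.g. never sick during follow-up);
# otherwise the outcome is Bernoulli(p).
omega, p, n = 0.4, 0.6, 100_000   # illustrative values, not from the paper

ys = []
for _ in range(n):
    if random.random() < omega:    # structural zero
        ys.append(0)
    else:                          # at risk: a sample zero or a one
        ys.append(1 if random.random() < p else 0)

naive_p = sum(ys) / n              # ignores the two sources of zeros
# The naive estimate targets (1 - omega) * p = 0.36, well below p = 0.6
print(round(naive_p, 3))
```

The gap between the naive estimate and the true at-risk probability is exactly the dilution into general health status that the abstract warns about; a model that separates the two zero sources recovers p itself.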

    Left-censored recurrent event analysis in epidemiological studies: a proposal when the number of previous episodes is unknown

    Left censoring can occur with relative frequency when analysing recurrent events in epidemiological studies, especially observational ones. Concretely, including individuals who were already at risk before the effective start of a cohort study may leave previously experienced episodes unrecorded, which easily leads to biased and inefficient estimates. The objective of this paper is to propose a statistical method that performs well in these circumstances. Our proposal is based on models with specific baseline hazards, imputing the number of prior episodes when unknown; a stratified model depending on whether the individual had or had not previously been at risk; and the use of a frailty term. The performance is examined in different scenarios through a comprehensive simulation study. The proposed method achieves notable performance even when the percentage of subjects at risk before the beginning of follow-up is very high, with biases often under 10% and coverages of around 95%, sometimes somewhat conservative. If the baseline hazard is constant, the "Gap Time" approach seems better; if it is not constant, the "Counting Process" approach seems the better choice. Because the prior episodes experienced by some (or all) subjects are unknown, the use of common-baseline methods is not advised. Our proposal performs acceptably in the majority of the scenarios considered, making it an interesting alternative in this context. Comment: 1 table, 2 supplementary tables, 4 figures.
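
Event dependence, i.e. the hazard of a new episode changing with the number of episodes already experienced, is what makes unobserved prior episodes so damaging. A minimal Python sketch of such a process (the rates and dependence factor are illustrative, not from the paper):

```python
import random

random.seed(1)

# Event dependence: the hazard of the next episode grows with the number
# of episodes already experienced. All numbers here are illustrative.
base_rate, dependence, follow_up = 0.5, 1.5, 10.0

def gaps_for_subject(max_episodes=25):
    """Simulate one subject's gap times over the follow-up window."""
    t, gaps = 0.0, []
    for k in range(max_episodes):  # cap guards against runaway episode counts
        gap = random.expovariate(base_rate * dependence ** k)
        if t + gap > follow_up:
            break
        gaps.append(gap)
        t += gap
    return gaps

by_episode = {}
for _ in range(20_000):
    for k, g in enumerate(gaps_for_subject()):
        by_episode.setdefault(k, []).append(g)

# Mean observed gap time shrinks with the episode number; a single common
# baseline hazard averages over these strata and misstates the risk.
for k in range(3):
    print(k, round(sum(by_episode[k]) / len(by_episode[k]), 2))
```

When a subject's earlier episodes are left-censored, they are assigned to the wrong stratum, which is why the paper imputes the unknown episode count instead of ignoring it.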

    Probability estimation of a Carrington-like geomagnetic storm

    Intense geomagnetic storms can cause severe damage to electrical systems and communications. This work proposes a counting process with Weibull inter-occurrence times in order to estimate the probability of extreme geomagnetic events. It is found that the scale parameter of the inter-occurrence time distribution grows exponentially with the absolute value of the intensity threshold defining the storm, whereas the shape parameter remains rather constant. The model can forecast the probability of occurrence of an event for a given intensity threshold; in particular, the probability of occurrence within the next decade of an extreme event of magnitude comparable to or larger than the well-known Carrington event of 1859 is explored, and estimated to be between 0.46% and 1.88% (with 95% confidence), a much lower value than those reported in the existing literature.
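
The mechanics of such a forecast reduce to evaluating the Weibull distribution function at the horizon of interest. A Python sketch, with placeholder parameters (NOT the fitted values from the paper) and treating the next inter-occurrence time as starting now, which is a simplification of the full counting process:

```python
import math

def weibull_cdf(t, shape, scale):
    """P(T <= t) for a Weibull-distributed inter-occurrence time."""
    return 1.0 - math.exp(-((t / scale) ** shape))

# Placeholder parameters for a Carrington-scale intensity threshold: the
# paper finds the scale grows exponentially with the threshold while the
# shape stays roughly constant, but these numbers are hypothetical.
shape, scale_years = 0.7, 3000.0

# Probability of at least one such event within the next decade.
p_decade = weibull_cdf(10.0, shape, scale_years)
print(f"P(event within 10 years) ~ {p_decade:.2%}")
```

A shape parameter below 1 means the hazard of the next storm decreases with waiting time, which is one reason a Weibull renewal model can yield lower decade probabilities than a memoryless (exponential) model at the same mean rate.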

    Impact of model calibration on cost-effectiveness analysis of cervical cancer prevention

    Markov chain models are commonly used to simulate the natural history of human papillomavirus infection and subsequent cervical lesions, with the aim of predicting the future benefits of health interventions. Developing and calibrating these models entails a number of critical decisions that influence the model's ability to reflect real conditions and predict future situations. The accuracy of the selected inputs and the calibration procedure are two crucial aspects of model performance, and understanding their influence is essential, especially when policy decisions are involved. The aim of this work is to assess the health and economic impact of the cervical cancer prevention strategies currently under discussion, according to the most common methods of model calibration combined with different degrees of accuracy in the initial inputs. Model results show large differences in goodness of fit and cost-effectiveness outcomes depending on the calibration approach used, and these variations may affect health policy decisions. Our findings underline the importance of obtaining well-calibrated probability matrices to get reliable health and cost outcomes, and are directly generalizable to any cost-effectiveness analysis based on Markov chain models.
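
A Markov cohort model of this kind propagates a distribution over health states through a transition-probability matrix one cycle at a time; calibration is what pins down the matrix entries. A minimal Python sketch using a hypothetical four-state simplification (the states and probabilities below are purely illustrative, not from the study):

```python
# Hypothetical states: healthy, precancerous lesion, cancer, dead.
# Each row of P holds one state's per-cycle transition probabilities.
P = [
    [0.95, 0.04, 0.00, 0.01],  # healthy
    [0.30, 0.60, 0.09, 0.01],  # lesion: may regress, persist or progress
    [0.00, 0.00, 0.85, 0.15],  # cancer
    [0.00, 0.00, 0.00, 1.00],  # dead (absorbing)
]

state = [1.0, 0.0, 0.0, 0.0]   # whole cohort starts healthy
for _ in range(50):            # 50 annual cycles
    state = [sum(state[i] * P[i][j] for i in range(4)) for j in range(4)]

# State occupancy over the cycles feeds the cost and effectiveness sums;
# a small change in a calibrated entry of P shifts these outcomes, which
# is why the calibration approach matters for the final decision.
print([round(s, 3) for s in state])
```

Because costs and effects accumulate over every cycle, even modest differences between calibrated matrices compound over a 50-year horizon, consistent with the large cost-effectiveness differences the abstract reports.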

    Generalized Hermite Distribution Modelling with the R Package hermite

    The Generalized Hermite distribution (and the Hermite distribution as a particular case) is often used for fitting count data in the presence of overdispersion or multimodality. Despite this, to our knowledge, no standard software packages have implemented specific functions to compute basic probabilities and make simple statistical inference based on these distributions. We present here a set of computational tools that allows the user to address these difficulties by modelling with the Generalized Hermite distribution using the R package hermite. The package can also be used to generate random deviates from a Generalized Hermite distribution and provides basic functions to compute probabilities (density, cumulative density and quantile functions are available), to estimate parameters using the maximum likelihood method, and to perform the likelihood ratio test of the Poisson assumption against a Generalized Hermite alternative. In order to improve the performance of the density and quantile functions when the parameters are large, Edgeworth and Cornish-Fisher expansions have been used. Hermite regression is also a useful tool for modelling inflated count data, so its inclusion in commonly used software like R makes it available to a wide range of potential users. Some examples of usage in several fields of application are also given. This work was partially funded by grant MTM2012-31118, by grant UNAB10-4E-378, co-funded by FEDER "A way to build Europe", and by grant MTM2013-41383P from the Spanish Ministry of Economy and Competitiveness, co-funded by the European Regional Development Fund (ERDF).
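
The Generalized Hermite distribution can be represented as X = Y1 + m·Y2 with independent Poisson components (m = 2 giving the classical Hermite case), which makes the probability mass function easy to compute directly. A Python sketch of that representation (this is not the hermite R package API, and the parameter values are illustrative):

```python
import math

def hermite_pmf(k, a, b, m=2):
    """P(X = k) for X = Y1 + m*Y2 with Y1 ~ Poisson(a), Y2 ~ Poisson(b).
    m = 2 gives the classical Hermite distribution; other values of m
    give the Generalized Hermite distribution (direct summation, fine
    for small k)."""
    total = 0.0
    for j in range(k // m + 1):
        i = k - m * j
        total += (a ** i / math.factorial(i)) * (b ** j / math.factorial(j))
    return math.exp(-a - b) * total

# Mean a + m*b and variance a + m**2 * b: the variance exceeds the mean
# whenever b > 0, which is the overdispersion the distribution captures.
a, b = 1.0, 0.5  # illustrative parameters
probs = [hermite_pmf(k, a, b) for k in range(50)]
print(round(sum(probs), 6), round(sum(k * p for k, p in enumerate(probs)), 6))
```

The m-step jumps contributed by Y2 are also what produce the multimodality mentioned in the abstract: probability mass accumulates at multiples of m on top of the smooth Poisson component.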

    Analyzing recurrent events when the history of previous episodes is unknown or not taken into account: proceed with caution

    Objective: Researchers in public health are often interested in examining the effect of several exposures on the incidence of a recurrent event. The aim of the present study is to assess how well common-baseline hazard models estimate the effect of multiple exposures on the hazard of presenting an episode of a recurrent event, in the presence of event dependence and when the history of prior episodes is unknown or not taken into account. Methods: Through a comprehensive simulation study, using specific-baseline hazard models as the reference, we evaluate the performance of common-baseline hazard models by means of several criteria: bias, mean squared error, coverage, mean length of confidence intervals, and compliance with the proportional hazards assumption. Results: Results indicate that the bias worsens as event dependence increases, leading to considerable overestimation of the exposure effect; coverage levels and compliance with the proportional hazards assumption are low or extremely low, worsening with increasing event dependence, effects to be estimated, and sample sizes. Conclusions: Common-baseline hazard models cannot be recommended when analysing recurrent events in the presence of event dependence. It is important to have access to each subject's history of prior episodes, as this permits better estimation of the effects of the exposures.