
    Adjusting for misclassification of an exposure in an individual participant data meta-analysis

    A common problem in the analysis of multiple data sources, including individual participant data meta-analysis (IPD-MA), is the misclassification of binary variables. Misclassification may lead to biased estimators of model parameters, even when the misclassification is entirely random. We aimed to develop statistical methods that facilitate unbiased estimation of adjusted and unadjusted exposure-outcome associations and between-study heterogeneity in IPD-MA, where the extent and nature of exposure misclassification may vary across studies. We present Bayesian methods that allow misclassification of binary exposure variables to depend on study- and participant-level characteristics. In an example on the differential diagnosis of dengue using two variables, where the gold standard measurement for the exposure variable was unavailable for some studies that only measured a surrogate prone to misclassification, our methods yielded more accurate estimates than analyses that were naive with regard to misclassification or that relied on gold standard measurements alone. In a simulation study, the evaluated misclassification model yielded valid estimates of the exposure-outcome association and was more accurate than analyses restricted to gold standard measurements. Our proposed framework can appropriately account for the presence of binary exposure misclassification in IPD-MA. It requires that some studies supply IPD for both the surrogate and the gold standard exposure, and allows misclassification to follow a random-effects distribution across studies, conditional on observed covariates (and the outcome). The proposed methods are most beneficial when few large studies that measured the gold standard are available and when misclassification is frequent.
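
    To make the core idea concrete, the sketch below is a deliberately simplified, single-study, frequentist analogue: the sensitivity and specificity of the surrogate are assumed known, and the observed-data likelihood is maximised over the exposure prevalence and the exposure-outcome log odds ratio. The paper's actual method is Bayesian, hierarchical across studies, and estimates the misclassification parameters; all names and numbers here are illustrative.

```python
# Simplified single-study analogue: the true exposure x is latent, only a
# misclassified surrogate s and the outcome y are observed, and sensitivity/
# specificity are ASSUMED KNOWN. The observed-data likelihood sums over the
# latent exposure. (The paper's method is Bayesian and hierarchical.)
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
n, sens, spec = 5000, 0.85, 0.90
x = rng.binomial(1, 0.4, n)                       # true (latent) exposure
y = rng.binomial(1, expit(-1.0 + 0.8 * x))        # outcome; true log-OR = 0.8
s = np.where(x == 1, rng.binomial(1, sens, n), rng.binomial(1, 1 - spec, n))

def neg_loglik(theta):
    b0, b1, logit_prev = theta
    prev = expit(logit_prev)
    lik = 0.0
    for x_val, p_x in ((1, prev), (0, 1 - prev)):
        p_y = expit(b0 + b1 * x_val)                  # P(y = 1 | x)
        p_s = sens if x_val == 1 else 1 - spec        # P(s = 1 | x)
        lik = lik + p_x * p_y**y * (1 - p_y)**(1 - y) * p_s**s * (1 - p_s)**(1 - s)
    return -np.sum(np.log(lik))

fit = minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
print("misclassification-adjusted log-OR:", round(fit.x[1], 3))
```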

    Multiple imputation of incomplete multilevel data using Heckman selection models

    Missing data are a common problem in medical research and are often addressed using multiple imputation. Although traditional imputation methods allow for valid statistical inference when data are missing at random (MAR), their implementation is problematic when the presence of missingness depends on unobserved variables, that is, when the data are missing not at random (MNAR). Unfortunately, this MNAR situation is rather common in observational studies, registries, and other sources of real-world data. While several imputation methods have been proposed for addressing MNAR data in individual studies, their application and validity in large datasets with a multilevel structure remain unclear. We therefore explored the consequences of MNAR mechanisms in hierarchical data in depth, and proposed a novel multilevel imputation method for common missing-data patterns in clustered datasets. This method is based on the principles of Heckman selection models and adopts a two-stage meta-analysis approach to impute binary and continuous variables that may be outcomes or predictors and that are systematically or sporadically missing. After evaluating the proposed imputation model in simulated scenarios, we illustrate its use in a cross-sectional community survey to estimate the prevalence of malaria parasitemia in children aged 2-10 years in five regions in Uganda.
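
    As a rough illustration of the building block involved, the sketch below shows a classical Heckman two-step correction in a single simulated dataset: a probit model for the missingness indicator provides an inverse Mills ratio, which is then added to the outcome regression to correct for MNAR selection. The paper embeds a selection model of this kind in a two-stage, multilevel multiple-imputation procedure; the data and variable names below are illustrative assumptions only.

```python
# Classical Heckman two-step in one dataset: (1) probit for the missingness
# indicator, (2) outcome regression among observed rows augmented with the
# inverse Mills ratio. The paper wraps a selection model like this in a
# two-stage, multilevel multiple-imputation framework.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 4000
x = rng.normal(size=n)                               # fully observed covariate
z = rng.normal(size=n)                               # variable affecting selection only
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n).T
y = 1.0 + 0.5 * x + e                                # partly missing outcome
observed = (0.3 + 0.8 * z + u) > 0                   # MNAR: u is correlated with e

# Step 1: probit selection model and inverse Mills ratio
exog_sel = sm.add_constant(np.column_stack([x, z]))
sel = sm.Probit(observed.astype(int), exog_sel).fit(disp=0)
imr = norm.pdf(exog_sel @ sel.params) / norm.cdf(exog_sel @ sel.params)

# Step 2: outcome model on the observed rows, with the IMR as extra regressor
exog_out = sm.add_constant(np.column_stack([x[observed], imr[observed]]))
fit = sm.OLS(y[observed], exog_out).fit()
print(fit.params)   # [intercept, effect of x, rho * sigma_e]
```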

    Multivariate meta-analysis of individual participant data helped externally validate the performance and implementation of a prediction model

    Objectives: Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. Study Design and Setting: We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of "good" performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. Results: In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of "good" performance (defined by C statistic ≥ 0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of "good" performance. Conclusion: Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies.
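
    The "probability of good performance" step can be illustrated with a short Monte Carlo sketch: given a pooled mean and between-study covariance matrix for the C statistic and calibration slope (which would come from a bivariate random-effects meta-analysis, possibly on transformed scales), performance in a new population is simulated from the predictive distribution and the criterion applied to each draw. The numbers below are assumed for illustration and are not taken from the paper.

```python
# Given a pooled mean and between-study covariance for (C statistic,
# calibration slope) -- here assumed, not estimated -- simulate performance
# in a new population and apply the "good performance" criterion.
import numpy as np

rng = np.random.default_rng(42)
mu = np.array([0.72, 1.00])                      # pooled C statistic, calibration slope
tau = np.array([[0.0009, 0.0004],                # assumed between-study covariance
                [0.0004, 0.0100]])

new_pop = rng.multivariate_normal(mu, tau, size=100_000)
good = (new_pop[:, 0] >= 0.7) & (new_pop[:, 1] >= 0.9) & (new_pop[:, 1] <= 1.1)
print("P(C >= 0.7 and slope in [0.9, 1.1]) ~", round(good.mean(), 2))
```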

    Bayesian adjustment for preferential testing in estimating infection fatality rates, as motivated by the COVID-19 pandemic

    A key challenge in estimating the infection fatality rate (IFR), along with its relation to various factors of interest, is determining the total number of cases. The total number of cases is unknown not only because not everyone is tested but also, more importantly, because tested individuals are not representative of the population at large. We refer to the phenomenon whereby infected individuals are more likely to be tested than noninfected individuals as “preferential testing.” An open question is whether or not it is possible to reliably estimate the IFR without any specific knowledge about the degree to which the data are biased by preferential testing. In this paper we take a partial identifiability approach, formulating clearly where deliberate prior assumptions can be made and presenting a Bayesian model that pools information from different samples. When the model is fit to European data obtained from seroprevalence studies and national official COVID-19 statistics, we estimate the overall COVID-19 IFR for Europe to be 0.53% (95% CI: 0.38% to 0.70%).
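
    The quantity being estimated rests on simple arithmetic: IFR = deaths / (seroprevalence × population). The sketch below propagates only the sampling uncertainty of a single hypothetical seroprevalence survey; the paper's model additionally adjusts for preferential testing and pools information across regions, so this is merely the naive starting point, with made-up numbers.

```python
# Naive seroprevalence-based IFR with Monte Carlo uncertainty for one
# hypothetical region; the paper's Bayesian model additionally handles
# preferential testing and pools across regions.
import numpy as np

rng = np.random.default_rng(0)
population, deaths = 5_000_000, 1_800            # made-up regional counts
sero_pos, sero_n = 140, 2_000                    # made-up seroprevalence survey

prevalence = rng.beta(sero_pos + 1, sero_n - sero_pos + 1, size=50_000)
ifr = deaths / (prevalence * population)
lo, mid, hi = np.percentile(ifr, [2.5, 50, 97.5])
print(f"IFR ~ {mid:.2%} (95% interval {lo:.2%} to {hi:.2%})")
```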

    Propensity-based standardization to enhance the validation and interpretation of prediction model discrimination for a target population

    External validation of the discriminative ability of prediction models is of key importance. However, the interpretation of such evaluations is challenging, as the ability to discriminate depends on both the sample characteristics (i.e., case-mix) and the generalizability of predictor coefficients, but most discrimination indices do not provide any insight into their respective contributions. To disentangle differences in discriminative ability across external validation samples due to a lack of model generalizability from differences in sample characteristics, we propose propensity-weighted measures of discrimination. These weighted metrics, which are derived from propensity scores for sample membership, are standardized for case-mix differences between the model development and validation samples, allowing for a fair comparison of discriminative ability in terms of model characteristics in a target population of interest. We illustrate our methods with the validation of eight prediction models for deep vein thrombosis in 12 external validation data sets and assess our methods in a simulation study. In the illustrative example, propensity score standardization reduced between-study heterogeneity of discrimination, indicating that between-study variability was partially attributable to case-mix. The simulation study showed that only flexible propensity-score methods (allowing for non-linear effects) produced unbiased estimates of model discrimination in the target population, and only when the positivity assumption was met. Propensity score-based standardization may facilitate the interpretation of (heterogeneity in) discriminative ability of a prediction model as observed across multiple studies, and may guide model updating strategies for a particular target population. Careful propensity score modeling with attention to non-linear relations is recommended.
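
    A minimal sketch of the weighting idea follows: a membership (propensity) model for development versus validation sample is fitted on the covariates, validation observations are weighted by the odds of development membership so that their case-mix resembles the development population, and a weighted concordance statistic is computed. The data, the simple membership model, and the weighting scheme below are illustrative assumptions rather than the paper's exact recipe.

```python
# Weight validation observations towards the development case-mix via a
# membership model, then compute a weighted concordance statistic.
import numpy as np
from scipy.special import expit
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_dev, n_val = 2000, 2000
x_dev = rng.normal(0.0, 1.0, (n_dev, 1))         # development case-mix
x_val = rng.normal(0.8, 0.6, (n_val, 1))         # narrower validation case-mix
risk_val = expit(-1 + 1.2 * x_val[:, 0])         # the model's predicted risks
y_val = rng.binomial(1, risk_val)                # validation outcomes

# Membership model: development (1) vs validation (0); validation rows are
# weighted by the odds of development membership.
x_all = np.vstack([x_dev, x_val])
member = np.r_[np.ones(n_dev), np.zeros(n_val)]
ps = LogisticRegression().fit(x_all, member).predict_proba(x_val)[:, 1]
weights = ps / (1 - ps)

def weighted_c(y, p, w):
    """Weighted probability that an event has a higher predicted risk than a non-event."""
    ev, ne = y == 1, y == 0
    diff = p[ev][:, None] - p[ne][None, :]
    concordant = (diff > 0) + 0.5 * (diff == 0)
    pair_w = w[ev][:, None] * w[ne][None, :]
    return float((concordant * pair_w).sum() / pair_w.sum())

print("C in validation sample: ", round(weighted_c(y_val, risk_val, np.ones(n_val)), 3))
print("case-mix standardised C:", round(weighted_c(y_val, risk_val, weights), 3))
```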

    Evaluation of clinical prediction models (part 3): calculating the sample size required for an external validation study

    An external validation study evaluates the performance of a prediction model in new data, but many of these studies are too small to provide reliable answers. In the third article of their series on model evaluation, Riley and colleagues describe how to calculate the sample size required for external validation studies, and propose avoiding rules of thumb by tailoring the calculations to the model and setting at hand.
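
    As a flavour of what such a tailored calculation can look like, the sketch below targets precise estimation of the observed/expected (O/E) ratio, assuming SE(ln(O/E)) ≈ sqrt((1 − φ)/(nφ)) for outcome prevalence φ; the full set of criteria and formulae recommended by Riley and colleagues is given in the paper itself, and the prevalence and target width used here are purely illustrative.

```python
# Precision-based sample size for the O/E ratio, assuming
# SE(ln(O/E)) ~ sqrt((1 - phi) / (n * phi)) for outcome prevalence phi.
# The target CI width and prevalence below are illustrative.
import math

def n_for_oe_precision(prevalence, ci_width_lnOE=0.2, z=1.96):
    """Smallest n giving a 95% CI for ln(O/E) narrower than the target width."""
    target_se = ci_width_lnOE / (2 * z)
    return math.ceil((1 - prevalence) / (prevalence * target_se ** 2))

print(n_for_oe_precision(prevalence=0.05))   # about 7300 participants
```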

    Practical implications of using real-world evidence (RWE) in comparative effectiveness research: Learnings from IMI-GetReal

    In light of increasing attention towards the use of real-world evidence (RWE) in decision-making in recent years, this commentary aims to reflect on the experiences gained in accessing and using RWE for comparative effectiveness research as part of the Innovative Medicines Initiative GetReal Consortium, and to discuss their implications for RWE use in decision-making.

    The use of imputation in clinical decision support systems: a cardiovascular risk management pilot vignette study among clinicians

    Aims: A major challenge in the use of prediction models in clinical care is missing data. Real-time imputation may alleviate this, but to what extent clinicians accept this solution remains unknown. We aimed to assess the acceptance of real-time imputation for missing patient data in a clinical decision support system (CDSS) that presents the individual patient's 10-year absolute cardiovascular risk. Methods and results: We performed a vignette study extending an existing CDSS with the real-time imputation method joint modelling imputation (JMI). We included 17 clinicians, who used the CDSS with three different vignettes describing potential use cases (missing data, no risk estimate; imputed values, risk estimate based on imputed data; complete information). In each vignette, missing data were introduced to mimic a situation as could occur in clinical practice. Acceptance by end-users was assessed on three different axes: clinical realism, comfortableness, and added clinical value. Overall, the imputed predictor values were found to be clinically reasonable and in line with expectations. However, for binary variables, the use of a probability scale to express uncertainty was deemed inconvenient. The perceived comfortableness with imputed risk prediction was low, and confidence intervals were deemed too wide for reliable decision-making. The clinicians acknowledged added value of using JMI in clinical practice for educational, research, or informative purposes. Conclusion: Handling missing data in a CDSS via JMI is useful, but more accurate imputations are needed to generate comfort in clinicians for use in routine care. Only then can a CDSS create clinical value by improving decision-making.
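
    For intuition, the sketch below shows a simplified, continuous-only analogue of real-time joint-modelling imputation: a joint normal model (mean and covariance fitted offline) is stored in the CDSS, and at the point of care the missing predictors are drawn from their conditional distribution given the observed ones. Binary predictors, the link to the downstream risk model, and the actual JMI implementation are omitted; all variable names and numbers are made up.

```python
# Continuous-only analogue of real-time joint-modelling imputation: the CDSS
# stores a fitted joint normal model and draws missing predictors from their
# conditional distribution given the observed ones.
import numpy as np

def impute_conditional_normal(x, mu, sigma, n_draws=200, rng=None):
    """Draw missing entries of x (marked np.nan) from N(mu, sigma) given the observed entries."""
    rng = rng if rng is not None else np.random.default_rng()
    miss, obs = np.isnan(x), ~np.isnan(x)
    k = sigma[np.ix_(miss, obs)] @ np.linalg.inv(sigma[np.ix_(obs, obs)])
    cond_mu = mu[miss] + k @ (x[obs] - mu[obs])
    cond_sigma = sigma[np.ix_(miss, miss)] - k @ sigma[np.ix_(obs, miss)]
    return rng.multivariate_normal(cond_mu, cond_sigma, size=n_draws)

# Illustrative joint model for (age, systolic BP, total cholesterol)
mu = np.array([60.0, 140.0, 5.5])
sigma = np.array([[100.0, 40.0, 2.0],
                  [40.0, 225.0, 3.0],
                  [2.0, 3.0, 1.0]])
patient = np.array([55.0, np.nan, np.nan])       # blood pressure and cholesterol missing
draws = impute_conditional_normal(patient, mu, sigma, rng=np.random.default_rng(5))
print(draws.mean(axis=0), draws.std(axis=0))     # plausible values plus uncertainty
```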

    Transparent reporting of multivariable prediction models for individual prognosis or diagnosis: checklist for systematic reviews and meta-analyses (TRIPOD-SRMA)

    Most clinical specialties have a plethora of studies that develop or validate one or more prediction models, for example, to inform diagnosis or prognosis. Having many prediction model studies in a particular clinical field motivates the need for systematic reviews and meta-analyses to evaluate and summarise the overall available evidence, in particular about the predictive performance of existing models. Such reviews are fast emerging, and should be reported completely, transparently, and accurately. To help ensure this type of reporting, this article describes a new reporting guideline for systematic reviews and meta-analyses of prediction model research.