
    Coronary–aortic interaction during ventricular isovolumic contraction

    In earlier work, we suggested that the start of the isovolumic contraction period could be detected in arterial pressure waveforms as the start of a temporary pre-systolic pressure perturbation (AICstart, start of the Arterially detected Isovolumic Contraction), and proposed the retrograde coronary blood volume flow in combination with a backwards traveling pressure wave as its most likely origin. In this study, we tested this hypothesis by means of a coronary artery occlusion protocol. In six Yorkshire × Landrace swine, we simultaneously occluded the left anterior descending (LAD) and left circumflex (LCx) artery for 5 s followed by a 20-s reperfusion period and repeated this sequence at least two more times. A similar procedure was used to occlude only the right coronary artery (RCA) and finally all three main coronary arteries simultaneously. None of the occlusion protocols caused a decrease in the arterial pressure perturbation in the aorta during occlusion (P > 0.20) or an increase during reactive hyperemia (P > 0.22), despite a higher deceleration of coronary blood volume flow (P = 0.03) or increased coronary conductance (P = 0.04) during hyperemia. These results show that the pre-systolic aortic pressure perturbation does not originate from the coronary arteries.

    Combining estimates of interest in prognostic modelling studies after multiple imputation: current practice and guidelines

    Background: Multiple imputation (MI) provides an effective approach to handle missing covariate data within prognostic modelling studies, as it can properly account for the missing data uncertainty. The multiply imputed datasets are each analysed using standard prognostic modelling techniques to obtain the estimates of interest. The estimates from each imputed dataset are then combined into one overall estimate and variance, incorporating both the within- and between-imputation variability. Rubin's rules for combining these multiply imputed estimates are based on asymptotic theory. The resulting combined estimates may be more accurate if the posterior distribution of the population parameter of interest is better approximated by the normal distribution. However, the normality assumption may not be appropriate for all the parameters of interest when analysing prognostic modelling studies, such as predicted survival probabilities and model performance measures. Methods: Guidelines for combining the estimates of interest when analysing prognostic modelling studies are provided. A literature review is performed to identify current practice for combining such estimates in prognostic modelling studies. Results: Methods for combining all reported estimates after MI were not well reported in the current literature. Rubin's rules without applying any transformations were the standard approach used, when any method was stated. Conclusion: The proposed simple guidelines for combining estimates after MI may lead to a wider and more appropriate use of MI in future prognostic modelling studies.
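
    For orientation, Rubin's rules pool the m per-imputation estimates by averaging them and combining the within- and between-imputation variance components. A minimal sketch in Python (the numbers below are illustrative, not from any study; when normality is doubtful the same pooling would be applied on a transformed scale, e.g. log or logit, before back-transforming):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Combine per-imputation estimates with Rubin's rules.

    estimates, variances: length-m sequences (one entry per imputed dataset).
    Returns the pooled estimate and its total variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)

    q_bar = estimates.mean()          # pooled point estimate
    w = variances.mean()              # within-imputation variance
    b = estimates.var(ddof=1)         # between-imputation variance
    t = w + (1 + 1 / m) * b           # total variance
    return q_bar, t

# Example: pooling a log odds ratio estimated in m = 5 imputed datasets.
est = [0.42, 0.47, 0.39, 0.44, 0.41]
var = [0.010, 0.011, 0.009, 0.010, 0.012]
pooled, total_var = pool_rubin(est, var)
print(pooled, np.sqrt(total_var))
```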

    Reducing the probability of false positive research findings by pre-publication validation – Experience with a large multiple sclerosis database

    Background: Published false positive research findings are a major problem in the process of scientific discovery. Results frequently fail to replicate in clinical research in general, and multiple sclerosis research is no exception. Our aim was to develop and implement a policy that reduces the probability of publishing false positive research findings. We assessed the utility of working with a pre-publication validation policy after several years of research in the context of a large multiple sclerosis database. Methods: The large database of the Sylvia Lawry Centre for Multiple Sclerosis Research was split in two parts: one for hypothesis generation and a validation part for confirmation of selected results. We present case studies from 5 finalized projects that have used the validation policy and results from a simulation study. Results: In one project, the "relapse and disability" project as described in section II (example 3), findings could not be confirmed in the validation part of the database. The simulation study showed that the percentage of false positive findings can exceed 20% depending on variable selection. Conclusion: We conclude that the validation policy has prevented the publication of at least one research finding that could not be validated in an independent data set (and probably would have been a "true" false positive finding) over the past three years, and has led to improved data analysis, statistical programming, and selection of hypotheses. The advantages outweigh the loss of statistical power inherent in the process.
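
    The split-then-confirm workflow described above can be sketched in a few lines of Python; the dataset, variables, split ratio, and test below are purely hypothetical stand-ins for the kind of exploratory association the policy would subject to validation:

```python
import numpy as np
from scipy import stats

# Hypothetical patient-level data: one exploration half, one validation half.
rng = np.random.default_rng(42)
n = 2000
relapse_rate = rng.poisson(1.0, n)
disability_change = rng.normal(0.0, 1.0, n)

idx = rng.permutation(n)
explore, validate = idx[: n // 2], idx[n // 2:]

def association(rows):
    # Any hypothesis-generating analysis; here a simple rank correlation.
    return stats.spearmanr(relapse_rate[rows], disability_change[rows])

print("exploration:", association(explore))   # used to generate hypotheses
print("validation: ", association(validate))  # only confirmed findings are published
```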

    Minimum sample size for external validation of a clinical prediction model with a binary outcome

    In prediction model research, external validation is needed to examine an existing model's performance using data independent of that used for model development. Current external validation studies often suffer from small sample sizes and consequently imprecise predictive performance estimates. To address this, we propose how to determine the minimum sample size needed for a new external validation study of a prediction model for a binary outcome. Our calculations aim to precisely estimate calibration (Observed/Expected and calibration slope), discrimination (C-statistic), and clinical utility (net benefit). For each measure, we propose closed-form and iterative solutions for calculating the minimum sample size required. These require specifying: (i) target SEs (confidence interval widths) for each estimate of interest, (ii) the anticipated outcome event proportion in the validation population, (iii) the prediction model's anticipated (mis)calibration and variance of linear predictor values in the validation population, and (iv) potential risk thresholds for clinical decision-making. The calculations can also be used to inform whether the sample size of an existing (already collected) dataset is adequate for external validation. We illustrate our proposal for external validation of a prediction model for mechanical heart valve failure with an expected outcome event proportion of 0.018. Calculations suggest at least 9835 participants (177 events) are required to precisely estimate the calibration and discrimination measures, with this number driven by the calibration slope criterion, which we anticipate will often be the case. Also, 6443 participants (116 events) are required to precisely estimate net benefit at a risk threshold of 8%. Software code is provided.
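
    To illustrate the flavour of such a closed-form criterion, the sketch below solves for n given a target standard error of ln(O/E), assuming SE(ln(O/E)) ≈ sqrt((1 - φ)/(n·φ)) for an anticipated outcome proportion φ. Both that approximation and the target SE used here are assumptions for illustration; they do not reproduce the paper's worked example, which is driven by the calibration slope criterion:

```python
import math

def n_for_oe(phi, target_se_ln_oe):
    """Minimum n so that SE(ln(O/E)) <= target, assuming
    SE(ln(O/E)) ~= sqrt((1 - phi) / (n * phi)) for anticipated
    outcome proportion phi (an assumed form of the O/E criterion)."""
    return math.ceil((1 - phi) / (phi * target_se_ln_oe ** 2))

phi = 0.018                                 # anticipated event proportion from the abstract
n = n_for_oe(phi, target_se_ln_oe=0.10)     # illustrative target SE only
print(n, "participants,", math.ceil(n * phi), "events")
```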

    Preoperative predictors for residual tumor after surgery in patients with ovarian carcinoma

    Objectives: Suboptimal debulking (>1 cm residual tumor) results in poor survival rates for patients with an advanced stage of ovarian cancer. The purpose of this study was to develop a prediction model, based on simple preoperative parameters, for patients with an advanced stage of ovarian cancer who are at risk of suboptimal cytoreduction despite maximal surgical effort. Methods: Retrospective analysis of 187 consecutive patients with a suspected clinical diagnosis of advanced-stage ovarian cancer undergoing upfront debulking between January 1998 and December 2003. Preoperative parameters were Karnofsky performance status, ascites and serum concentrations of CA 125, hemoglobin, albumin, LDH and blood platelets. The main outcome parameter was residual tumor >1 cm. Univariate and multivariate logistic regression were employed for testing possible prediction models. A clinically applicable graphic model (nomogram) for this prediction was to be developed. Results: Serum concentrations of CA 125 and blood platelets in the group with residual tumor >1 cm were higher than in the optimally cytoreduced group. A prediction model for residual tumor >1 cm based on serum levels of CA 125 and albumin was established. Conclusion: Postoperative residual tumor despite maximal surgical effort can be predicted by preoperative CA 125 and serum albumin levels. With a nomogram based on these two parameters, probability of postoperative residual tumor in each individual patient can be predicted. This proposed nomogram may be valuable in daily routine practice for counseling and to select treatment modality.
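
    A nomogram of this kind is a graphical rendering of a fitted logistic regression: the two predictors are mapped through the linear predictor to a probability of residual tumor. The sketch below shows that mapping with simulated data; the coefficients, units, and patient values are hypothetical, and only the predictor pair (CA 125, albumin) and the binary outcome come from the abstract:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data standing in for the study's preoperative measurements:
# log-transformed CA 125 (kU/L) and serum albumin (g/L); outcome = residual tumor >1 cm.
rng = np.random.default_rng(0)
n = 187
log_ca125 = rng.normal(6.0, 1.2, n)
albumin = rng.normal(38.0, 5.0, n)
risk = 1 / (1 + np.exp(-(-4.0 + 0.8 * log_ca125 - 0.05 * albumin)))
residual = rng.binomial(1, risk)

X = np.column_stack([log_ca125, albumin])
model = LogisticRegression().fit(X, residual)

# Predicted probability of residual tumor >1 cm for one hypothetical patient
# (CA 125 = 800 kU/L, albumin = 35 g/L); the nomogram reads off exactly this value.
new_patient = np.array([[np.log(800.0), 35.0]])
print(model.predict_proba(new_patient)[0, 1])
```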

    Meta-analysis of binary outcomes via generalized linear mixed models: a simulation study

    Background: Systematic reviews and meta-analyses of binary outcomes are widespread in all areas of application. The odds ratio, in particular, is by far the most popular effect measure. However, the standard meta-analysis of odds ratios using a random-effects model has a number of potential problems. An attractive alternative approach for the meta-analysis of binary outcomes uses a class of generalized linear mixed models (GLMMs). GLMMs are believed to overcome the problems of the standard random-effects model because they use a correct binomial-normal likelihood. However, this belief is based on theoretical considerations, and no sufficiently comprehensive simulations have assessed the performance of GLMMs in meta-analysis. This gap may be due to the computational complexity of these models and the resulting considerable time requirements. Methods: The present study is the first to provide extensive simulations on the performance of four GLMM methods (models with fixed and random study effects and two conditional methods) for meta-analysis of odds ratios in comparison to the standard random-effects model. Results: In our simulations, the hypergeometric-normal model provided less biased estimation of the heterogeneity variance than the standard random-effects meta-analysis using the restricted maximum likelihood (REML) estimation when the data were sparse, but the REML method performed similarly for the point estimation of the odds ratio, and better for the interval estimation. Conclusions: It is difficult to recommend the use of GLMMs in the practice of meta-analysis. The problem of finding uniformly good methods of the meta-analysis for binary outcomes is still open.
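
    For orientation, one standard GLMM of the kind compared here is the binomial-normal model with fixed study effects; this is stated only as an illustration, since the exact parameterizations evaluated in the study may differ. For study i with arms k = 0 (control) and k = 1 (treatment):

```latex
y_{ik} \sim \mathrm{Binomial}(n_{ik}, \pi_{ik}), \qquad
\operatorname{logit}(\pi_{i0}) = \mu_i, \qquad
\operatorname{logit}(\pi_{i1}) = \mu_i + \theta_i, \qquad
\theta_i \sim \mathcal{N}(\theta, \tau^{2})
```

    Here y_{ik} and n_{ik} are the event count and sample size in arm k of study i, μ_i is a fixed study effect, θ is the pooled log odds ratio, and τ² is the heterogeneity variance. The exact binomial likelihood replaces the normal approximation of the study-specific log odds ratios used by the standard random-effects model.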