404 research outputs found

    Using Evidence to Combat Overdiagnosis and Overtreatment: Evaluating Treatments, Tests, and Disease Definitions in the Time of Too Much

    Ray Moynihan and colleagues outline suggestions for improving the way that medical evidence is produced, analysed, and interpreted to avoid problems of overdiagnosis and overtreatment. Please see later in the article for the Editors' Summary.

    Valuing Healthcare Goods and Services: A Systematic Review and Meta-Analysis on the WTA-WTP Disparity

    Objective: To review the available evidence on the disparity between willingness to accept (WTA) and willingness to pay (WTP) for healthcare goods and services. Methods: A tiered approach consisting of (1) a systematic review, (2) an aggregate data meta-analysis, and (3) an individual participant data meta-analysis was used. MEDLINE, EMBASE, Scopus, Scisearch, and Econlit were searched for articles reporting both WTA and WTP for healthcare goods and services. Individual participant data were requested from the authors of the included studies. Results: Thirteen papers, reporting WTA and WTP from 19 experiments/subgroups, were included in the review. The WTA/WTP ratios reported in these papers varied from 0.60 to 4.01, with means of 1.73 (median 1.31) for 15 estimates of the mean and 1.58 (median 1.00) for nine estimates of the median. Individual data obtained from six papers, covering 71.2% of the subjects included in the review, yielded an unadjusted WTA/WTP ratio of 1.86 (95% confidence interval 1.52–2.28) and a WTA/WTP ratio adjusted for age, sex, and income of 1.70 (95% confidence interval 1.42–2.02). Income category and age had a statistically significant effect on the WTA/WTP ratio. The approach to handling zero WTA and WTP values has a considerable impact on the WTA/WTP ratio found. Conclusions and Implications: The results of this study imply that losses in healthcare goods and services are valued differently from gains (ratio > 1), but that the degree of disparity found depends on the method used to obtain the WTA/WTP ratio, including the approach to zero responses. Irrespective of the method used, the ratios found in our meta-analysis are smaller than the ratios found in previous meta-analyses.
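    The zero-handling sensitivity described in this abstract can be sketched in a few lines. The numbers below are illustrative, not the study's data, and dropping zero responses is only one of the possible policies the abstract alludes to.

```python
# Illustrative WTA/WTP ratio computation on hypothetical paired responses.
# Excluding zero responses is one possible policy; including them is another,
# and the choice visibly shifts the resulting disparity ratio.
def wta_wtp_ratio(wta, wtp, drop_zeros=True):
    pairs = list(zip(wta, wtp))
    if drop_zeros:
        # Exclude pairs where either stated value is zero.
        pairs = [(a, p) for a, p in pairs if a > 0 and p > 0]
    mean_wta = sum(a for a, _ in pairs) / len(pairs)
    mean_wtp = sum(p for _, p in pairs) / len(pairs)
    return mean_wta / mean_wtp

wta = [30, 0, 50, 20, 40]  # hypothetical willingness-to-accept responses
wtp = [20, 10, 25, 0, 20]  # hypothetical willingness-to-pay responses
print(round(wta_wtp_ratio(wta, wtp), 2))                    # zeros excluded
print(round(wta_wtp_ratio(wta, wtp, drop_zeros=False), 2))  # zeros included
```

    Both printed ratios exceed 1 (losses valued above gains), but they differ, mirroring the abstract's point that the zero-response policy materially affects the estimate.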

    Performance of binary prediction models in high-correlation low-dimensional settings: a comparison of methods

    BACKGROUND: Clinical prediction models are developed widely across medical disciplines. When predictors in such models are highly collinear, unexpected or spurious predictor-outcome associations may occur, thereby potentially reducing face validity of the prediction model. Collinearity can be dealt with by exclusion of collinear predictors, but when there is no a priori motivation (besides collinearity) to include or exclude specific predictors, such an approach is arbitrary and possibly inappropriate. METHODS: We compare different methods to address collinearity, including shrinkage, dimensionality reduction, and constrained optimization. The effectiveness of these methods is illustrated via simulations. RESULTS: In the conducted simulations, no effect of collinearity was observed on predictive outcomes (AUC, R², intercept, slope) across methods. However, a negative effect of collinearity on the stability of predictor selection was found, affecting all compared methods, but in particular methods that perform strong predictor selection (e.g., Lasso). Methods for which the included set of predictors remained most stable under increased collinearity were Ridge, PCLR, LAELR, and Dropout. CONCLUSIONS: Based on the results, we would recommend refraining from data-driven predictor selection approaches in the presence of high collinearity, because of the increased instability of predictor selection, even in relatively high events-per-variable settings. The selection of certain predictors over others may disproportionately give the impression that included predictors have a stronger association with the outcome than excluded predictors. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s41512-021-00115-5.
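    The coefficient instability this abstract describes, and the stabilising effect of shrinkage, can be reproduced with a small simulation. The data-generating process and penalty strength below are assumptions chosen for illustration, not the paper's simulation design.

```python
import numpy as np

# Sketch of instability under collinearity: with two nearly collinear
# predictors, ordinary least squares coefficients swing wildly between
# resamples, while a ridge penalty (shrinkage) keeps them stable.
rng = np.random.default_rng(0)

def fit(X, y, lam=0.0):
    # Closed-form (penalised) least squares: (X'X + lam*I)^-1 X'y
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def coef_spread(lam, n_sim=200, n=100):
    coefs = []
    for _ in range(n_sim):
        x1 = rng.normal(size=n)
        x2 = x1 + 0.05 * rng.normal(size=n)   # correlation close to 1
        y = x1 + x2 + rng.normal(size=n)
        coefs.append(fit(np.column_stack([x1, x2]), y, lam))
    # Largest standard deviation of any single coefficient across simulations
    return np.std(coefs, axis=0).max()

ols_spread = coef_spread(lam=0.0)
ridge_spread = coef_spread(lam=10.0)
print(round(ols_spread, 2), round(ridge_spread, 2))
```

    Predictive performance (fitted values) is barely affected either way, which matches the abstract's finding that collinearity hurts selection stability rather than discrimination or calibration.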

    Self-similarity of arrays of network publications on computer virology

    An approach to organising the analysis of the stream of thematic publications on computer virology available in the web space is described. The fractal nature of information flows is substantiated, the main algorithms applied in the research are described, and forecasting conclusions based on the persistence properties of the time series are given.
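    The persistence property the abstract's forecasts rely on is usually quantified with the Hurst exponent. A minimal rescaled-range (R/S) estimator, applied here to synthetic data rather than publication streams, might look like this.

```python
import numpy as np

# Minimal rescaled-range (R/S) estimate of the Hurst exponent.
# H near 0.5 indicates a memoryless series; H above 0.5 indicates
# persistence, which is what licenses forecasting from past behaviour.
def hurst_rs(series):
    series = np.asarray(series, dtype=float)
    n = len(series)
    log_w, log_rs = [], []
    for window in (n // 8, n // 4, n // 2, n):
        vals = []
        for start in range(0, n - window + 1, window):
            chunk = series[start:start + window]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()   # range of cumulative deviations
            s = chunk.std()
            if s > 0:
                vals.append(r / s)
        log_w.append(np.log(window))
        log_rs.append(np.log(np.mean(vals)))
    slope, _ = np.polyfit(log_w, log_rs, 1)  # H is the log-log slope
    return slope

rng = np.random.default_rng(42)
white = rng.normal(size=4096)                 # memoryless series
trending = np.cumsum(rng.normal(size=4096))   # strongly persistent series
h_white = hurst_rs(white)
h_trend = hurst_rs(trending)
print(round(h_white, 2), round(h_trend, 2))
```

    The memoryless series yields an exponent near 0.5, while the persistent series yields a value approaching 1; small-sample R/S estimates carry a known upward bias, so production analyses typically use more windows and bias corrections.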

    Validation of two age dependent D-dimer cut-off values for exclusion of deep vein thrombosis in suspected elderly patients in primary care: retrospective, cross sectional, diagnostic analysis

    Objective To determine whether the use of age adapted D-dimer cut-off values can be translated to primary care patients who are suspected of deep vein thrombosis.
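    For context, the most widely studied age-adjusted rule raises the conventional D-dimer threshold of 500 µg/L (FEU) to age × 10 µg/L for patients over 50. The abstract does not state which cut-offs the study validated, so the specific numbers below are an assumption for illustration.

```python
# Age-adjusted D-dimer rule (assumed for illustration): for patients over 50,
# the conventional 500 ug/L threshold is replaced by age x 10 ug/L.
def ddimer_cutoff(age_years, conventional=500):
    if age_years > 50:
        return age_years * 10
    return conventional

def dvt_excluded(ddimer, age_years):
    # DVT is considered excluded when the D-dimer falls below the cut-off
    # (in practice, in combination with a low clinical probability score).
    return ddimer < ddimer_cutoff(age_years)

print(dvt_excluded(600, 75))  # 600 < 750 for a 75-year-old
print(dvt_excluded(600, 45))  # 600 >= 500 for a 45-year-old
```

    The point of age adjustment is that D-dimer rises with age, so a fixed 500 µg/L threshold excludes DVT in progressively fewer elderly patients.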

    Prognosis research strategy (PROGRESS) 1: a framework for researching clinical outcomes.

    The PROGRESS series (www.progress-partnership.org) sets out a framework of four interlinked prognosis research themes and provides examples from several disease fields to show why evidence from prognosis research is crucial to inform all points in the translation of biomedical and health related research into better patient outcomes. Recommendations are made in each of the four papers to improve current research standards.

    What is prognosis research? Prognosis research seeks to understand and improve future outcomes in people with a given disease or health condition. However, there is increasing evidence that prognosis research standards need to be improved.

    Why is prognosis research important? More people now live with disease and conditions that impair health than at any other time in history; prognosis research provides crucial evidence for translating findings from the laboratory to humans, and from clinical research to clinical practice.

    This first article introduces the framework of four interlinked prognosis research themes and then focuses on the first of the themes: fundamental prognosis research, studies that aim to describe and explain future outcomes in relation to current diagnostic and treatment practices, often in relation to quality of care. Fundamental prognosis research provides evidence informing healthcare and public health policy, the design and interpretation of randomised trials, and the impact of diagnostic tests on future outcome. It can inform new definitions of disease, may identify unanticipated benefits or harms of interventions, and clarify where new interventions are required to improve prognosis.

    Key challenges in normal tissue complication probability model development and validation: towards a comprehensive strategy

    Normal Tissue Complication Probability (NTCP) models can be used for treatment plan optimisation and patient selection for emerging treatment techniques. We discuss and suggest methodological approaches to address key challenges in NTCP model development and validation, including: missing data, non-linear response relationships, multicollinearity between predictors, overfitting, generalisability and the prediction of multiple complication grades at multiple time points. The chosen methodological approaches aim to improve the accuracy, transparency and robustness of future NTCP models. We demonstrate our methodological approaches using clinical data.
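    The abstract does not name a specific model, but the Lyman-Kutcher-Burman (LKB) formulation is a common example of how an NTCP model maps a dose distribution to a complication probability; all parameter values below (TD50, m, n) are illustrative only.

```python
import math

# LKB-style NTCP sketch: reduce a dose-volume distribution to a generalised
# equivalent uniform dose (gEUD), then pass it through a probit dose-response.
def geud(doses, volumes, n):
    # gEUD over (dose, fractional volume) bins; n controls volume effect.
    return sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n

def ntcp_lkb(doses, volumes, td50, m, n):
    # t measures how far the gEUD sits above/below TD50, in units of m*TD50.
    t = (geud(doses, volumes, n) - td50) / (m * td50)
    # Standard normal CDF built from the error function.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Uniform 50 Gy to the whole organ with TD50 = 50 Gy gives NTCP = 0.5.
p = ntcp_lkb([50.0], [1.0], td50=50.0, m=0.3, n=1.0)
print(round(p, 2))
```

    At a uniform organ dose equal to TD50 the model returns 0.5 by construction; doses above TD50 push the probability higher, with the steepness controlled by m and the volume dependence by n.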

    Integrated management of atrial fibrillation in primary care: results of the ALL-IN cluster randomized trial

    Aims To evaluate whether integrated care for atrial fibrillation (AF) can be safely orchestrated in primary care. Methods and results The ALL-IN trial was a cluster randomized, open-label, pragmatic non-inferiority trial performed in primary care practices in the Netherlands. We randomized 26 practices: 15 to the integrated care intervention and 11 to usual care. The integrated care intervention consisted of (i) quarterly AF check-ups by trained nurses in primary care, also focusing on possibly interfering comorbidities, (ii) monitoring of anticoagulation therapy in primary care, and (iii) easy-access availability of consultations with cardiologists and anticoagulation clinics. The primary endpoint was all-cause mortality during 2 years of follow-up. In the intervention arm, 527 out of 941 eligible AF patients aged >65 years provided informed consent to undergo the intervention. These 527 patients were compared with 713 AF patients in the control arm receiving usual care. Median age was 77 (interquartile range 72-83) years. The all-cause mortality rate was 3.5 per 100 patient-years in the intervention arm vs. 6.7 per 100 patient-years in the control arm [adjusted hazard ratio (HR) 0.55; 95% confidence interval (CI) 0.37-0.82]. For non-cardiovascular mortality, the adjusted HR was 0.47 (95% CI 0.27-0.82). For other adverse events, no statistically significant differences were observed. Conclusion In this cluster randomized trial, integrated care for elderly AF patients in primary care showed a 45% reduction in all-cause mortality when compared with usual care.
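    The mortality figures quoted in the abstract are incidence rates: events divided by accumulated follow-up time, scaled to 100 patient-years. The event counts and follow-up totals below are illustrative, not the trial's raw data.

```python
# Incidence rate per 100 patient-years: events / total follow-up time * 100.
# Counts below are hypothetical, chosen only to mirror the abstract's rates.
def rate_per_100_py(events, patient_years):
    return 100.0 * events / patient_years

intervention = rate_per_100_py(35, 1000)  # hypothetical: 35 deaths / 1000 py
control = rate_per_100_py(67, 1000)       # hypothetical: 67 deaths / 1000 py
print(intervention, control, round(intervention / control, 2))
```

    Note that a crude rate ratio computed this way differs from the reported hazard ratio, which comes from a regression model that adjusts for covariates and accounts for individual censoring times.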