
    Diagnostic Research: theory and application

    Setting a diagnosis in a patient is one of the key challenges in medical practice and forms the basis for clinical care. Diagnosis is not an aim in itself but is relevant insofar as it directs treatment and indicates the patient's prognosis. Diagnosis amounts to estimating the probability that a particular disease is present in view of all diagnostic information (patient history, physical examination and test results) in order to decide whether treatment should be initiated. A diagnosis is rarely based on a single variable or test and is therefore inherently a multivariable concern. However, most diagnostic studies, and studies in which diagnostic tests are evaluated, still follow a univariable approach: a diagnostic test is evaluated in isolation, without explicit regard to the clinical context in which it is applied. In this respect, clinical practice and diagnostic research frequently do not cohere. In applied medical research of recent decades, little attention has been paid to the principles of diagnostic studies compared with, for example, etiologic studies and studies of treatment efficacy.
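    The contrast between univariable test evaluation and a multivariable diagnostic model can be made concrete with a small simulation. The sketch below uses entirely hypothetical data and predictor strengths; it simply fits a logistic regression combining a history item, an examination finding and a test result, and compares its discrimination with that of the test judged alone.

```python
# Minimal sketch (hypothetical data): univariable test evaluation vs a multivariable model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
disease = rng.binomial(1, 0.3, n)            # true disease status
history = rng.normal(disease * 0.8, 1.0)     # item from the patient history
exam = rng.normal(disease * 0.6, 1.0)        # physical examination finding
test = rng.normal(disease * 1.0, 1.0)        # index test result

# Univariable approach: the test evaluated in isolation
auc_test_alone = roc_auc_score(disease, test)

# Multivariable approach: the test interpreted in its clinical context
X = np.column_stack([history, exam, test])
model = LogisticRegression().fit(X, disease)
auc_model = roc_auc_score(disease, model.predict_proba(X)[:, 1])

print(f"AUC, test alone:          {auc_test_alone:.2f}")
print(f"AUC, multivariable model: {auc_model:.2f}")
```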

    Using Evidence to Combat Overdiagnosis and Overtreatment: Evaluating Treatments, Tests, and Disease Definitions in the Time of Too Much

    Ray Moynihan and colleagues outline suggestions for improving the way that medical evidence is produced, analysed, and interpreted to avoid problems of overdiagnosis and overtreatment. Please see later in the article for the Editors' Summary.

    Intensive care performance: how should we monitor performance in the future?

    Intensive care faces economic challenges. Therefore, evidence proving both the effectiveness and the efficiency, i.e. the cost-effectiveness, of delivered care is needed. Today, the quality of care is an important issue in the health care debate. How do we measure quality of care, and how accurate and representative is this measurement?

    Impact of provision of cardiovascular disease risk estimates to healthcare professionals and patients: a systematic review.

    OBJECTIVE: To systematically review whether the provision of information on cardiovascular disease (CVD) risk to healthcare professionals and patients impacts their decision-making, behaviour and ultimately patient health. DESIGN: A systematic review. DATA SOURCES: An electronic literature search of MEDLINE and PubMed from 01/01/2004 to 01/06/2013 with no language restriction, and manual screening of reference lists of systematic reviews on similar topics and of all included papers. ELIGIBILITY CRITERIA FOR SELECTING STUDIES: (1) Primary research published in a peer-reviewed journal; (2) inclusion of participants with no history of CVD; (3) an intervention strategy consisting of the provision of a CVD risk model estimate to either professionals or patients; and (4) the only difference between the intervention group and the control group (or the only intervention in the case of before-after studies) was the provision of a CVD risk model estimate. RESULTS: After duplicates were removed, the initial electronic search identified 9671 papers. We screened 196 papers at title and abstract level and included 17 studies. The heterogeneity of the studies limited the analysis, but together they showed that providing risk information to patients improved the accuracy of risk perception without decreasing quality of life or increasing anxiety, but had little effect on lifestyle. Providing risk information to physicians increased prescribing of lipid-lowering and blood pressure medication, with the greatest effects in those with CVD risk >20% (relative risk for change in prescribing 2.13 (1.02 to 4.63) and 2.38 (1.11 to 5.10), respectively). Overall, there was a trend towards reductions in cholesterol and blood pressure and a statistically significant reduction in modelled CVD risk (-0.39% (-0.71 to -0.07)) after, on average, 12 months. CONCLUSIONS: There seems to be evidence that providing CVD risk model estimates to professionals and patients improves perceived CVD risk and medical prescribing, with little evidence of harm to psychological well-being. BS was supported by the European Commission Framework 7, EPIC-CVD: Individualised CVD risk assessment: tailoring targeted and cost-effective approaches to Europe's diverse populations, Grant agreement no: 279233. JUS was supported by a National Institute of Health Research (NIHR) Clinical Lectureship. This is the final version of the article. It first appeared from BMJ via http://dx.doi.org/10.1136/bmjopen-2015-00871

    Prediction models for clustered data: comparison of a random intercept and standard regression model

    BACKGROUND: When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which interest lies in predictor effects at the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that random effect parameter estimates differ from standard logistic regression parameter estimates. Here, we compared random effect and standard logistic regression models on their ability to provide accurate predictions. METHODS: Using an empirical study of 1642 surgical patients at risk of postoperative nausea and vomiting, each treated by one of 19 anesthesiologists (clusters), we developed prognostic models with either standard or random intercept logistic regression. The external validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15% or 30%. Standard performance measures and measures adapted to the clustered data structure were estimated. RESULTS: The model developed with random effect analysis showed better discrimination than the standard approach if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects if the performance measure used assumed the same data structure as the model development method: standard calibration measures showed good calibration for the model developed with the standard approach, whereas calibration measures adapted to the clustered data structure showed good calibration for the prediction model with random intercept. CONCLUSION: The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.
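    The clustered set-up described here can be illustrated with a short simulation. The sketch below uses simulated data and hypothetical parameter values: it generates a binary outcome with a cluster-level random intercept whose variance is chosen to match a target ICC on the latent logistic scale, fits a standard logistic regression that ignores the clustering, and reports both the usual pooled c-index and a within-cluster c-index as one example of a performance measure adapted to the clustered structure.

```python
# Minimal sketch (simulated data): clustered binary outcomes, a standard logistic model,
# and pooled vs within-cluster discrimination.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_clusters, n_per = 19, 90                       # e.g. anesthesiologists and their patients
icc = 0.15                                       # target intra-class correlation
sigma2 = icc / (1 - icc) * np.pi ** 2 / 3        # random-intercept variance on the logit scale

cluster = np.repeat(np.arange(n_clusters), n_per)
u = rng.normal(0, np.sqrt(sigma2), n_clusters)   # cluster random intercepts
x = rng.normal(size=cluster.size)                # one patient-level predictor
logit = -1.0 + 0.8 * x + u[cluster]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Standard logistic regression, ignoring the clustering
# (a random-intercept fit would need e.g. a mixed-effects GLM or R's lme4::glmer).
model = LogisticRegression().fit(x.reshape(-1, 1), y)
p = model.predict_proba(x.reshape(-1, 1))[:, 1]

pooled_c = roc_auc_score(y, p)                   # standard (pooled) c-index
within = [roc_auc_score(y[cluster == k], p[cluster == k])
          for k in range(n_clusters) if 0 < y[cluster == k].mean() < 1]
print(f"pooled c-index:        {pooled_c:.2f}")
print(f"mean within-cluster c: {np.mean(within):.2f}")
```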

    [Targeted therapy: the benefit of new oncological tests].

    An increasing number of targeted drug treatments are becoming available for many types of cancer, and tumour characteristics allow rational choices to be made between them. There is a great need for adequate biomarkers that can predict the effect of targeted therapy in individual cancer patients, in order to determine the correct oncological treatment for each patient. In this way, non-effective treatments and unnecessary side effects can be avoided, and costs reduced. The oestrogen receptor (ER) and the human epidermal growth factor receptor 2 (HER2) are examples of standardised predictive tests of treatment effect in breast cancer that have been validated in randomised studies. Randomised data are also expected for gene expression profiles that correlate with tumour growth. Quantifying the predictive value of tests for anticipated treatment effects in randomised studies is costly and time-consuming. Given the increasing number of targeted agents and diagnostic and prognostic techniques, alternative clinical study designs that can lead to quicker and more efficient evidence are being sought in many different domains.

    Incorporating published univariable associations in diagnostic and prognostic modeling

    Background: The diagnostic and prognostic literature is dominated by studies reporting univariable predictor-outcome associations. Currently, methods to incorporate such information in the construction of a prediction model are underdeveloped and unfamiliar to many researchers. Methods: This article aims to improve upon an adaptation method originally proposed by Greenland (1987) and Steyerberg (2000) for incorporating previously published univariable associations in the construction of a novel prediction model. The proposed method improves the variance estimation component by reformulating the adaptation process within established theory and making it more robust. Different variants of the proposed method were tested in a simulation study, in which performance was measured by comparing estimated associations with their predefined values in terms of the mean squared error and the coverage of 90% confidence intervals. Results: The results demonstrate that the performance of estimated multivariable associations improves considerably for small datasets when external evidence is included. Although the error of estimated associations decreases with increasing amounts of individual participant data, it does not disappear completely, even in very large datasets. Conclusions: The proposed method for aggregating previously published univariable associations with individual participant data in the construction of a novel prediction model outperforms established approaches and is especially worthwhile when relatively limited individual participant data are available.
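    The basic adaptation idea attributed above to Greenland (1987) and Steyerberg (2000), which the article sets out to improve, can be sketched numerically. The example below uses entirely hypothetical estimates: a published univariable log odds ratio is shifted by the difference between the multivariable and univariable estimates observed in one's own individual participant data, and a naive variance is obtained by summing the contributing variances, ignoring any correlation between the two estimates from the same data, which is one reason this naive variance is only a rough approximation.

```python
# Minimal numeric sketch (hypothetical values) of the basic adaptation idea:
# shift a published univariable estimate by the multivariable-minus-univariable
# difference seen in one's own IPD.
import numpy as np

beta_uni_lit, var_uni_lit = 0.90, 0.04    # published univariable log OR and its variance
beta_uni_ipd, var_uni_ipd = 0.80, 0.10    # univariable estimate in own (small) IPD
beta_mult_ipd, var_mult_ipd = 0.55, 0.12  # multivariable estimate in own IPD

adaptation = beta_mult_ipd - beta_uni_ipd      # how much multivariable adjustment changes the estimate
beta_adapted = beta_uni_lit + adaptation       # adapted multivariable estimate
# naive variance: sum of components, ignoring the correlation between the two IPD estimates
var_adapted = var_uni_lit + (var_mult_ipd + var_uni_ipd)

se = np.sqrt(var_adapted)
print(f"adapted log OR: {beta_adapted:.2f} "
      f"(90% CI {beta_adapted - 1.645 * se:.2f} to {beta_adapted + 1.645 * se:.2f})")
```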

    Developing and validating risk prediction models in an individual participant data meta-analysis

    BACKGROUND: Risk prediction models estimate the risk of developing future outcomes for individuals based on one or more underlying characteristics (predictors). We review how researchers develop and validate risk prediction models within an individual participant data (IPD) meta-analysis, in order to assess the feasibility and conduct of the approach. METHODS: A qualitative review of the aims, methodology, and reporting in 15 articles that developed a risk prediction model using IPD from multiple studies. RESULTS: The IPD approach offers many opportunities but methodological challenges exist, including: unavailability of requested IPD, missing patient data and predictors, and between-study heterogeneity in methods of measurement, outcome definitions and predictor effects. Most articles develop their model using IPD from all available studies and perform only an internal validation (on the same set of data). Ten of the 15 articles did not allow for any study differences in baseline risk (intercepts), potentially limiting their model's applicability and performance in some populations. Only two articles used external validation (on different data), including a novel method which develops the model on all but one of the IPD studies, tests performance in the excluded study, and repeats by rotating the omitted study. CONCLUSIONS: An IPD meta-analysis offers unique opportunities for risk prediction research. Researchers can make more of this by allowing separate model intercept terms for each study (population) to improve generalisability, and by using 'internal-external cross-validation' to simultaneously develop and validate their model. Methodological challenges can be reduced by prospectively planned collaborations that share IPD for risk prediction.
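    The 'internal-external cross-validation' procedure described above lends itself to a compact sketch. The example below uses simulated IPD from a handful of hypothetical studies with study-specific baseline risks: in each rotation, a logistic regression model is developed on all studies but one, and its discrimination is then assessed in the omitted study.

```python
# Minimal sketch (simulated data) of internal-external cross-validation:
# develop on all IPD studies but one, validate in the omitted study, rotate.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_studies, n_per = 6, 300
study = np.repeat(np.arange(n_studies), n_per)
X = rng.normal(size=(study.size, 3))                      # three predictors
intercepts = rng.normal(-1.0, 0.4, n_studies)             # study-specific baseline risk
logit = intercepts[study] + X @ np.array([0.7, 0.4, -0.3])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

for left_out in range(n_studies):
    train, test = study != left_out, study == left_out
    model = LogisticRegression().fit(X[train], y[train])  # develop on the remaining studies
    p = model.predict_proba(X[test])[:, 1]
    print(f"study {left_out} held out: external c-index = {roc_auc_score(y[test], p):.2f}")
```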