
    STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies.

    Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies.

    Empirical Evidence of Design-Related Bias in Studies of Diagnostic Tests


    Diagnostic accuracy of the Montreal Cognitive Assessment (MoCA) for cognitive screening in old age psychiatry: Determining cutoff scores in clinical practice. Avoiding spectrum bias caused by healthy controls

    Objective/methods: The Montreal Cognitive Assessment (MoCA) is an increasingly used screening tool for cognitive impairment. While it has been validated in multiple settings and languages, most studies have used a biased case-control design including healthy controls as comparisons, which does not represent a clinical setting. The purpose of the present cross-sectional study is to test the criterion validity of the MoCA for mild cognitive impairment (MCI) and mild dementia (MD) in an old age psychiatry cohort (n = 710). The reference standard consists of a multidisciplinary, consensus-based diagnosis in accordance with international criteria. As a secondary outcome, the use of healthy community older adults as additional comparisons allowed us to underscore the effects of case-control spectrum bias. Results: The criterion validity of the MoCA for cognitive impairment (MCI + MD) in a case-control design using healthy controls was satisfactory (area under the curve [AUC] 0.93; specificity of 73% at cutoff <26), but declined in the cross-sectional design using referred but not cognitively impaired patients as comparisons (AUC 0.77; specificity of 37% at cutoff <26). In an old age psychiatry setting, the MoCA is valuable for confirming normal cognition (≥26; 95% sensitivity), excluding MD (≥21; negative predictive value [NPV] 98%), and excluding MCI (≥26; NPV 94%), but not for diagnosing MD (<21; positive predictive value [PPV] 31%) or MCI (<26; PPV 33%). Conclusions: This study shows that validating the MoCA using healthy controls overestimates specificity. Taking clinical and demographic characteristics into account, the MoCA is a suitable screening tool in an old age psychiatry setting for distinguishing between those in need of further diagnostic investigation and those who are not, but not for diagnosing cognitive impairment.
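
    A minimal Python sketch of the spectrum-bias effect described above (all scores below are invented; only the <26 cutoff comes from the study): at the same cutoff, replacing healthy controls with referred-but-unimpaired comparisons lowers specificity.

    ```python
    # Hypothetical data: how the choice of comparison group changes
    # specificity at a fixed MoCA cutoff (scores < cutoff screen positive).

    def accuracy_at_cutoff(cases, controls, cutoff=26):
        """Return sensitivity/specificity/PPV/NPV for scores < cutoff."""
        tp = sum(s < cutoff for s in cases)        # impaired, screen positive
        fn = len(cases) - tp                       # impaired, screen negative
        fp = sum(s < cutoff for s in controls)     # unimpaired, screen positive
        tn = len(controls) - fp                    # unimpaired, screen negative
        return {"sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn)}

    cases = [14, 17, 19, 20, 22, 23, 24, 25]              # cognitively impaired
    healthy_controls = [26, 27, 27, 28, 28, 29, 29, 30]   # community volunteers
    clinical_controls = [22, 24, 25, 26, 27, 28, 29, 30]  # referred, unimpaired

    print(accuracy_at_cutoff(cases, healthy_controls))    # specificity inflated
    print(accuracy_at_cutoff(cases, clinical_controls))   # specificity drops
    ```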

    Various randomized designs can be used to evaluate medical tests

    Objective: To explore designs for evaluating the prognostic and predictive value of medical tests and their effect on patient outcome. Study design: Theoretical analysis with examples from the medical literature. Results: To evaluate the prognostic value of a test, one can include the test at baseline in prognostic studies. To evaluate the value of a test in predicting treatment outcome, the test results can be used as baseline information in randomized controlled trials of treatment. To compare the prognostic or predictive value of two or more tests, the combinations of test results can be used as baseline information. To evaluate the effect on patient outcome, randomized controlled trials of test strategies are an option. Randomization can apply to all tested patients or be restricted to specific subgroups, such as those with discordant test results, to increase the efficiency of trials. Conclusion: The prognostic and predictive value of medical tests can and should be evaluated, to demonstrate a test's ability to guide clinical decision making and to improve patient outcome. Various randomized designs can be used to evaluate the effects of testing on patient outcome. (C) 2009 Elsevier Inc. All rights reserved.
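
    The "discordant results" design mentioned in the abstract can be sketched in a few lines of Python (hypothetical data and field names; a sketch of the idea, not the authors' protocol): all patients receive both tests, and only those on whom the tests disagree are randomized, because concordant patients would be managed identically in either arm.

    ```python
    import random

    random.seed(1)

    # Hypothetical cohort: every patient undergoes both tests.
    patients = [{"id": i,
                 "test_a_positive": random.random() < 0.4,
                 "test_b_positive": random.random() < 0.4}
                for i in range(20)]

    # Only discordant patients are randomized; management of concordant
    # patients does not depend on which test is used.
    discordant = [p for p in patients
                  if p["test_a_positive"] != p["test_b_positive"]]
    for p in discordant:
        p["arm"] = random.choice(["manage_by_test_a", "manage_by_test_b"])

    print(f"{len(discordant)} of {len(patients)} patients randomized")
    ```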

    Proposals for a phased evaluation of medical tests

    BACKGROUND: In drug development, a 4-phase hierarchical model for the clinical evaluation of new pharmaceuticals is well known. Several comparable phased evaluation schemes have been proposed for medical tests. PURPOSE: To perform a systematic search of the literature, a synthesis, and a critical review of phased evaluation schemes for medical tests. DATA SOURCES: Literature databases of Medline, Web of Science, and Embase. STUDY SELECTION AND DATA EXTRACTION: Two authors separately evaluated potentially eligible papers and independently extracted data. RESULTS: We identified 19 schemes, published between 1978 and 2007. Despite their variability, these models show substantial similarity. Common phases are evaluations of technical efficacy, diagnostic accuracy, diagnostic thinking efficacy, therapeutic efficacy, patient outcome, and societal aspects. CONCLUSIONS: The evaluation frameworks can be useful to distinguish between study types, but they cannot be seen as a necessary sequence of evaluations. The evaluation of tests is most likely not a linear but a cyclic and repetitive process.

    Exploring sources of heterogeneity in systematic reviews of diagnostic tests

    It is indispensable for any meta-analysis that potential sources of heterogeneity are examined before one considers pooling the results of primary studies into summary estimates with enhanced precision. In reviews of studies on the diagnostic accuracy of tests, variability beyond chance can be attributed to between-study differences in the selected cutpoint for positivity, in patient selection and clinical setting, in the type of test used, in the type of reference standard, or any combination of these factors. In addition, heterogeneity in study results can also be caused by flaws in study design. This paper critically examines some of the potential reasons for heterogeneity and the methods to explore them. Empirical support for the existence of different sources of variation is reviewed. Incorporation of sources of variability explicitly into systematic reviews on diagnostic accuracy is demonstrated with data from a recent review. Application of regression techniques in meta-analysis of diagnostic tests can provide relevant additional information. Results of such analyses will help to understand problems with the transferability of diagnostic tests and to point out flaws in primary studies. As such, they can guide the design of future studies. Copyright (C) 2002 John Wiley & Sons, Ltd.
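
    The regression techniques the paper refers to can be illustrated with a small meta-regression sketch in Python (all numbers invented): each study's log diagnostic odds ratio is regressed on a study-level covariate, weighting by inverse variance, so the coefficient estimates how much of the between-study variation that covariate explains.

    ```python
    import numpy as np

    # Invented per-study summaries: log diagnostic odds ratio (lnDOR),
    # its variance, and a design covariate (1 = case-control design).
    lnDOR = np.array([3.2, 2.1, 3.5, 1.8, 2.9, 1.5])
    var = np.array([0.30, 0.25, 0.40, 0.20, 0.35, 0.15])
    case_control = np.array([1, 0, 1, 0, 1, 0])

    # Inverse-variance weighted least squares: beta = (X'WX)^-1 X'Wy
    w = 1.0 / var
    X = np.column_stack([np.ones_like(lnDOR), case_control])
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ lnDOR)
    print(f"baseline lnDOR = {beta[0]:.2f}, case-control shift = {beta[1]:.2f}")
    ```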

    The Montreal Cognitive Assessment (MoCA) with a double threshold: Improving the MoCA for triaging patients in need of a neuropsychological assessment.

    Objectives: Diagnosis of patients suspected of mild dementia (MD) is a challenge, and patient numbers continue to rise. A short test triaging patients in need of a neuropsychological assessment (NPA) is welcome. The Montreal Cognitive Assessment (MoCA) has high sensitivity at the original cutoff <26 for MD, but results in too many false-positive (FP) referrals in clinical practice (low specificity). A cutoff is needed that finds all patients at high risk of MD without referring too many patients not (yet) in need of an NPA. A difficulty is deciding who is to be considered at risk, as definitions of disease (e.g., MD) do not always define health at the same time and thereby create subthreshold disorders. Design: In this study, we compared different selection strategies to efficiently identify patients in need of an NPA. Using the MoCA with a double threshold tackles the dilemma of increasing specificity without decreasing sensitivity, and creates the opportunity to distinguish the clinical (MD) and subclinical (MCI) states and hence assign each its appropriate policy. Setting/participants: Patients referred to old-age psychiatry suspected of cognitive impairment that could benefit from an NPA (n = 693). Results: The optimal strategy was a two-stage selection process using the MoCA with a double threshold as an add-on after initial assessment, selecting who is likely to have dementia and should be assessed further (MoCA <21), who should be discharged (≥26), and whose course should be monitored actively as they are at increased risk (21-25). Conclusion: By using two cutoffs, the clinical value of the MoCA for triaging improved. The double-threshold MoCA not only gave the best results in terms of accuracy, PPV, and NPV, reducing FP referrals by 65% while still correctly triaging most MD patients; it also identified most MCI patients, whose intermediate state justifies active monitoring.
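
    The double-threshold rule from the abstract translates directly into code; here is a minimal Python sketch (the cutoffs 21 and 26 are from the abstract, while the function name and decision labels are ours):

    ```python
    def triage_moca(score: int) -> str:
        """Map a MoCA total score to a triage decision (double threshold)."""
        if score < 21:
            return "refer for NPA (high risk of mild dementia)"
        if score >= 26:
            return "discharge (cognition likely normal)"
        return "monitor actively (intermediate zone, 21-25)"

    for score in (18, 23, 28):
        print(score, "->", triage_moca(score))
    ```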

    Characteristics of good diagnostic studies

    Whether or not patients are better off from undergoing a diagnostic test will depend on how test information is used to guide subsequent decisions on starting, stopping, or modifying treatment. Consequently, the practical value of a diagnostic test can only be assessed by taking into account subsequent health outcomes. In the appraisal of diagnostic test studies, it is essential to discriminate between studies that report on the accuracy of a diagnostic test and studies that report on health outcomes of strategies that incorporate diagnostic tests. In a study that reports on diagnostic accuracy, a cohort of patients is subjected to at least two diagnostic tests: the index test and the reference test, the latter usually being the best method available to detect the target condition. The accuracy of the index test can be expressed in terms of sensitivity, specificity, or likelihood ratios. Studies that compare two or more strategies that incorporate diagnostic tests as well as therapeutic interventions should be approached differently. Such studies do not require expression of test accuracy in terms of sensitivity and specificity. The merit of diagnostic tests evaluated in such studies can be expressed by comparing relevant outcomes of the strategies. The effectiveness of such strategies can be compared in the same way as the effectiveness of treatments. However, because the effect of a diagnostic test on health outcome is less direct than the effect of treatment, the design of outcome studies reporting on diagnostic tests requires special attention. It is important to establish a clear link between the result of the test under study and subsequent therapeutic management. Furthermore, trial efficiency can be improved by moving the point of randomization from the decision of whether or not to test to the point where a decision has to be made about what to do with a positive test result.
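
    The accuracy measures named above relate to clinical decisions through likelihood ratios; a short Python sketch (hypothetical sensitivity, specificity, and pretest probability) shows how a positive index-test result updates the probability of the target condition:

    ```python
    def likelihood_ratios(sensitivity: float, specificity: float):
        """Positive and negative likelihood ratios from accuracy measures."""
        lr_pos = sensitivity / (1 - specificity)
        lr_neg = (1 - sensitivity) / specificity
        return lr_pos, lr_neg

    def post_test_probability(pretest: float, lr: float) -> float:
        """Bayes' rule on the odds scale: post-test odds = pretest odds * LR."""
        odds = pretest / (1 - pretest)
        return odds * lr / (1 + odds * lr)

    lr_pos, lr_neg = likelihood_ratios(0.90, 0.80)
    print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")          # 4.5, 0.12
    print(f"P(disease | +) at 20% pretest = "
          f"{post_test_probability(0.20, lr_pos):.2f}")       # ~0.53
    ```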