
    Hybrid copula mixed models for combining case-control and cohort studies in meta-analysis of diagnostic tests

    Copula mixed models for trivariate (or bivariate) meta-analysis of diagnostic test accuracy studies, accounting (or not) for disease prevalence, have been proposed in the biostatistics literature to synthesize information. However, many systematic reviews include both case-control and cohort studies, so one must either restrict to a bivariate meta-analysis of the case-control studies or a trivariate meta-analysis of the cohort studies, as only the latter contain information on disease prevalence. To avoid discarding data, we propose a hybrid copula mixed model that combines the bivariate copula mixed model for the case-control studies with the trivariate copula mixed model for the cohort studies. The hybrid model therefore accounts for study design and, owing to its generality, can also handle dependence in the joint tails. We apply the proposed hybrid copula mixed model to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma.
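
    As a rough sketch of the idea (the notation below is assumed for illustration, not taken from the paper), the hybrid likelihood multiplies a bivariate copula contribution for each case-control study with a trivariate contribution for each cohort study:

    \[
    L(\theta) \;=\; \prod_{i \in \mathcal{CC}} \int_{[0,1]^2} \prod_{k=1}^{2} b\!\left(y_{ik};\, n_{ik},\, F_k^{-1}(u_k)\right) c_{12}(u_1, u_2;\, \theta)\, du_1\, du_2
    \;\times\;
    \prod_{j \in \mathcal{CH}} \int_{[0,1]^3} \prod_{k=1}^{3} b\!\left(y_{jk};\, n_{jk},\, F_k^{-1}(u_k)\right) c_{123}(u_1, u_2, u_3;\, \theta)\, du_1\, du_2\, du_3,
    \]

    where b(y; n, p) is a binomial probability mass, F_1, F_2, F_3 are the assumed marginal distributions of the latent sensitivity, specificity, and disease prevalence, c_{12} and c_{123} are bivariate and trivariate copula densities, and the parameters of the sensitivity and specificity margins are shared across the two study designs.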

    A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence

    A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. The general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that the vine copula model can improve on the trivariate generalized linear mixed model in fit to the data, and it makes the argument for moving to vine copula random effects models, especially because of their richness (including reflection-asymmetric tail dependence) and their computational feasibility despite the three dimensions.
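
    As a minimal sketch of how a trivariate vine enters here (the variable ordering and notation are assumed for illustration), the joint copula density of the transformed random effects for sensitivity (1), specificity (2), and prevalence (3) can be built from bivariate pair-copulas, for example in a drawable vine with variable 2 as the central one:

    \[
    c_{123}(u_1, u_2, u_3) \;=\; c_{12}(u_1, u_2)\; c_{23}(u_2, u_3)\; c_{13|2}\!\left(C_{1|2}(u_1 \mid u_2),\, C_{3|2}(u_3 \mid u_2)\right),
    \]

    where C_{1|2} and C_{3|2} are conditional distributions derived from the first-tree pair-copulas. Each pair-copula can come from a different parametric family, which is what admits reflection-asymmetric tail dependence; choosing all pairs to be bivariate normal recovers the trivariate normal copula, and hence the trivariate generalized linear mixed model, as a special case.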

    Electronic and animal noses for detecting SARS-CoV-2 infection (Protocol)

    This is a protocol for a Cochrane Review (diagnostic). The objectives are as follows:
    1. To assess the diagnostic test accuracy of eNoses to screen for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection in public places, such as airports.
    2. To assess the diagnostic test accuracy of sniffer animals, and more specifically dogs, to screen for SARS-CoV-2 infection in public places, such as airports.
    3. To assess the diagnostic test accuracy of eNoses for SARS-CoV-2 infection or COVID-19 in symptomatic people presenting in the community, or in secondary care.
    4. To assess the diagnostic test accuracy of sniffer animals, and more specifically dogs, for SARS-CoV-2 infection or COVID-19 in symptomatic people presenting in the community, or in secondary care.

    Guidance for the design and reporting of studies evaluating the clinical performance of tests for present or past SARS-CoV-2 infection

    Testing for SARS-CoV-2 infection is key to managing the current pandemic. More than 1700 preprints and peer-reviewed journal articles evaluating tests for SARS-CoV-2 infection had been published as of January 2021. However, evaluations of these studies have identified many methodological issues, leading to a high risk of bias and to difficulties in applying the results in practice. Better guidance is urgently needed on the conduct and interpretation of these studies. This article outlines principles for defining the intended purpose of the test, selecting the study population, choosing the reference standard and the timing of testing, and other critical considerations for the design, reporting, and interpretation of diagnostic accuracy studies. The implementation and accuracy of SARS-CoV-2 tests have major implications for individuals and communities, balancing the potential consequences of continued infection against the need for public health measures such as the restriction of movements and social activities. Decision making in the current pandemic requires a clear understanding of the clinical performance and limitations of testing. This article provides guidance to help researchers design robust diagnostic accuracy studies, to help publishers and peer reviewers assess such studies, and to support clinicians and policy makers in their evaluation of the evidence on SARS-CoV-2 testing for clinical and public health decisions. The guidance aims to ensure that studies evaluating the diagnostic accuracy of SARS-CoV-2 tests are conducted as rigorously as possible, in an efficient and timely way.

    Chapter 5: Assessing Risk of Bias as a Domain of Quality in Medical Test Studies

    Assessing methodological quality is a necessary activity for any systematic review, including those evaluating the evidence for studies of medical test performance. Judging the overall quality of an individual study involves examining the size of the study, the direction and degree of findings, the relevance of the study, and the risk of bias in the form of systematic error, internal validity, and other study limitations. In this chapter of the Methods Guide for Medical Test Reviews, we focus on the evaluation of risk of bias in the form of systematic error in an individual study as a distinctly important component of quality in studies of medical test performance, specifically in the context of estimating test performance (sensitivity and specificity). We make the following recommendations to systematic reviewers: 1) when assessing study limitations that are relevant to the test under evaluation, reviewers should select validated criteria that examine the risk of systematic error; 2) categorizing the risk of bias for individual studies as “low,” “medium,” or “high” is a useful way to proceed; and 3) methods for determining an overall categorization for the study limitations should be established a priori and documented clearly.

    Should methodological filters for diagnostic test accuracy studies be used in systematic reviews of psychometric instruments? A case study involving screening for postnatal depression

    Background: Challenges exist when searching for diagnostic test accuracy (DTA) studies, including the design of DTA search strategies and the selection of appropriate filters. This paper compares the performance of three MEDLINE search strategies for psychometric DTA studies in postnatal depression. Methods: A reference set of six relevant studies was derived from a forward citation search via Web of Knowledge. The performance of the 'target condition and index test' method recommended by the Cochrane DTA Group was compared with two alternative strategies that included methodological filters. Outcome measures were total citations retrieved, sensitivity, precision, and associated 95% confidence intervals (95% CI). Results: The Cochrane-recommended strategy and one of the filtered search strategies were equivalent in performance; both retrieved a total of 105 citations, sensitivity was 100% (95% CI 61%, 100%), and precision was 5.2% (2.6%, 11.9%). The second filtered search retrieved a total of 31 citations, with sensitivity of 66.6% (30%, 90%) and precision of 12.9% (5.1%, 28.6%). This search missed the DTA study of most relevance to the DTA review. Conclusions: The Cochrane-recommended 'target condition and index test' search strategy was pragmatic and sensitive, and was considered the optimum method for retrieving relevant studies for a psychometric DTA review (in this case, for postnatal depression). The potential limitations of using filtered searches in a psychometric mental health DTA review should be considered.
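
    For orientation, the two performance measures reported above are simple proportions (the count of four relevant studies for the second filter is inferred here from the reported percentages rather than stated in the abstract):

    \[
    \text{sensitivity} \;=\; \frac{\text{reference-set studies retrieved}}{\text{studies in the reference set}}, \qquad
    \text{precision} \;=\; \frac{\text{reference-set studies retrieved}}{\text{total citations retrieved}}.
    \]

    For the second filtered search, retrieving 4 of the 6 reference-set studies gives sensitivity 4/6 ≈ 66.6% and precision 4/31 ≈ 12.9%, matching the figures above.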

    Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies: The PRISMA-DTA Statement.

    Importance: Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy. Objective: To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews. Design: Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group. Findings: The systematic review (which produced 64 items) and the Delphi process (which provided feedback on 7 proposed items, 1 of which was later split into 2 items) identified 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist. To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted. Conclusions and Relevance: The 27-item PRISMA diagnostic test accuracy checklist provides specific guidance for the reporting of systematic reviews. The PRISMA diagnostic test accuracy guideline can facilitate transparent reporting of reviews, may assist in the evaluation of validity and applicability, and may enhance the replicability of reviews and make the results from systematic reviews of diagnostic test accuracy studies more useful. The research was supported by grant 375751 from the Canadian Institute for Health Research; funding from the Canadian Agency for Drugs and Technologies in Health; funding from the Standards for Reporting of Diagnostic Accuracy Studies Group; funding from the University of Ottawa Department of Radiology Research Stipend Program; and funding from the National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care South West Peninsula.

    Application of GRADE: Making evidence-based recommendations about diagnostic tests in clinical practice guidelines

    Background: Accurate diagnosis is a fundamental aspect of appropriate healthcare. However, clinicians need guidance when implementing diagnostic tests, given the number of tests available and the resource constraints in healthcare. Practitioners often feel compelled to implement recommendations in guidelines, including recommendations about the use of diagnostic tests. However, how guideline panels understand diagnostic tests, and the methodology for developing recommendations about them, remain incompletely explored. We therefore evaluated the factors that guideline developers and users need to consider for the development of implementable recommendations about diagnostic tests. Methods: Using a critical analysis of the process, we present the results of a case study in which the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach was applied to develop a clinical practice guideline for the diagnosis of Cow Milk Allergy with the World Allergy Organization. Results: To ensure that guideline panels can develop informed recommendations about diagnostic tests, it appears that more emphasis needs to be placed on group processes, including question formulation, defining patient-important outcomes for diagnostic tests, and summarizing evidence. Explicit consideration of concepts of diagnosis from evidence-based medicine, such as pre-test probability and treatment threshold, is required to facilitate the work of a guideline panel and to formulate implementable recommendations. Discussion: This case study provides useful guidance for guideline developers and clinicians about what they ought to demand from clinical practice guidelines to facilitate implementation and strengthen confidence in recommendations about diagnostic tests. Applying a structured framework such as the GRADE approach, with its requirement for transparency in describing the evidence and the factors that influence recommendations, lays out the process and decision factors required for the development, interpretation, and implementation of recommendations about diagnostic tests.
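
    As a worked illustration of the pre-test probability and treatment threshold concepts mentioned above (the numbers are hypothetical and the arithmetic is standard evidence-based medicine practice, not taken from the guideline itself), a test result updates the probability of disease through likelihood ratios:

    \[
    \text{post-test odds} \;=\; \text{pre-test odds} \times LR, \qquad
    LR^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}}, \qquad
    LR^{-} = \frac{1 - \text{sensitivity}}{\text{specificity}}.
    \]

    For example, a pre-test probability of 20% (odds 0.25) combined with a positive result on a test with LR+ = 8 gives post-test odds of 2, i.e. a post-test probability of about 67%; whether that value lies above or below the panel's treatment threshold determines whether the test result can directly drive a management recommendation or whether further testing is needed.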