Improving the methodological framework for diagnostic studies

Abstract

The salient question to ask when evaluating a diagnostic test is “Does the use of this test lead to improved health outcomes?” Tests lead to improved health outcomes by identifying patients who will benefit from clinical action. Therefore, to improve health outcomes, a test must be able to distinguish between people with and without the disease of interest with a certain degree of accuracy. The current methodological framework for estimating diagnostic accuracy assumes that there is a “gold” reference standard to which the test(s) under study can be compared. This framework is often challenged by the fact that no test or method exists that classifies disease status without error and that can be performed in all study participants. The first of the two aims of this thesis is to explore and improve methods for dealing with the challenge of a lack of a “gold” standard in diagnostic accuracy research. We address this challenge by describing the available methods and, more importantly, by providing recommendations on their optimal use. We cover methods for dealing with imperfect reference standards, namely composite reference standards, panel diagnosis, and latent class analysis, and also address the situation in which the preferred reference standard cannot be performed in all study participants. The ability to obtain unbiased estimates of test accuracy in a single study is foundational to answering the question of whether a test leads to improved health outcomes. The next level of evidence comes from systematic reviews in which the results of multiple primary studies are summarized. Therefore, the second aim of this thesis is to explore and improve the study of diagnostic tests on an aggregate level. We focus on how variability in results between studies is currently being investigated and, based on these findings, provide further guidance.
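The distortion introduced by an imperfect reference standard can be illustrated with a small numeric sketch. The example below (not from the thesis; the parameter values are hypothetical and the calculation assumes the index test's errors are conditionally independent of the reference standard's errors given true disease status) computes the "apparent" sensitivity and specificity of an index test when it is compared against an error-prone reference rather than against true disease status.

```python
def apparent_accuracy(prev, sens_t, spec_t, sens_r, spec_r):
    """Apparent sensitivity/specificity of an index test judged against
    an imperfect reference standard, assuming conditionally independent
    errors given true disease status.

    prev            -- true disease prevalence
    sens_t, spec_t  -- true sensitivity/specificity of the index test
    sens_r, spec_r  -- sensitivity/specificity of the imperfect reference
    """
    # Joint probability that both tests are positive:
    # diseased and both detect it, or non-diseased and both false-positive.
    p_both_pos = prev * sens_t * sens_r + (1 - prev) * (1 - spec_t) * (1 - spec_r)
    # Marginal probability that the reference is positive.
    p_ref_pos = prev * sens_r + (1 - prev) * (1 - spec_r)

    # Joint probability that both tests are negative.
    p_both_neg = prev * (1 - sens_t) * (1 - sens_r) + (1 - prev) * spec_t * spec_r
    p_ref_neg = 1 - p_ref_pos

    # Apparent accuracy = agreement with the reference, not with the truth.
    return p_both_pos / p_ref_pos, p_both_neg / p_ref_neg


# Hypothetical values: a genuinely good index test (sens 0.90, spec 0.95)
# scored against a reference that is itself only 90% sensitive and specific.
ap_sens, ap_spec = apparent_accuracy(prev=0.2, sens_t=0.90, spec_t=0.95,
                                     sens_r=0.90, spec_r=0.90)
print(f"apparent sensitivity: {ap_sens:.3f}")  # well below the true 0.90
print(f"apparent specificity: {ap_spec:.3f}")
```

Under these assumed parameters the apparent sensitivity falls to roughly 0.64, even though the test's true sensitivity is 0.90: disagreements caused by the reference's own errors are charged against the index test. This is the kind of bias that composite reference standards, panel diagnosis, and latent class analysis attempt to mitigate.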
Although a precise and accurate estimate of test performance is often helpful in deciding whether to implement a test, it is not always essential. As we argue in the general discussion, failure to think outside the diagnostic accuracy framework can even lead to adverse clinical outcomes in some situations, particularly when evaluating new, highly sensitive tests that challenge the existing reference standards. We conclude this thesis by summarizing alternative methods for evaluating the clinical benefit of such tests.
