Evaluation of prediction models and diagnostic tests

Abstract

Evaluation of diagnostic tests may seem straightforward at first sight, but complex methods are often required. The goal of this thesis is to assess methodological challenges surrounding the evaluation and impact of diagnostic tests and prediction models, and to propose alternative approaches that reduce bias, miscommunication, and research waste. The first part addresses problems that may occur when expert panels are used as a reference standard in diagnostic evaluation research. We found that dichotomous target disease classification by expert panels leads to incorrect estimation of the diagnostic accuracy of the test under evaluation. Alternatives, such as obtaining the probability of target disease presence from expert panels for each individual, could partially resolve this, but are strongly dependent on the assumptions made. In the second part of this thesis we explore the use of terminology related to overdiagnosis, overtesting, and overmedicalisation in the scientific literature. We found that these concepts are described across virtually all clinical domains, but the terms are used inconsistently. In response, we created a framework for describing overdiagnosis and related concepts within clinical domains, together with strategies for reducing them. The last part of this thesis focuses on evaluating the impact of diagnostic tests and prediction models on health and monetary outcomes. Many prediction models have been shown not to achieve their anticipated impact. We demonstrate how decision analytic models can be used to identify specific factors and determine how they influence potential outcomes before conducting a clinical study.
