
    A new method for determining physician decision thresholds using empiric, uncertain recommendations

    Background: The concept of risk thresholds has been studied in medical decision making for over 30 years. During that time, physicians have been shown to be poor at estimating the probabilities required to use this method. To better assess physician risk thresholds and to more closely model medical decision making, we set out to design and test a method that derives thresholds from actual physician treatment recommendations. Such an approach avoids the need to ask physicians for estimates of patient risk when determining individual treatment thresholds. Assessments of physician decision making are increasingly relevant as new data are generated from clinical research. For example, recommendations made in the setting of ocular hypertension are of interest because a large clinical trial has identified new risk factors that physicians should consider. Precisely how physicians use this new information when making treatment recommendations has not yet been determined.
    Results: We derived a new method for estimating treatment thresholds using ordinal logistic regression and tested it by asking ophthalmologists to review cases of ocular hypertension and express how likely they would be to recommend treatment. Fifty-eight physicians were recruited from the American Glaucoma Society. Demographic information was collected from the participating physicians and the treatment threshold for each physician was estimated. The method was validated by showing that while treatment thresholds varied over a wide range, the most common values were consistent with the 10-15% 5-year risk of glaucoma suggested by expert opinion and decision analysis.
    Conclusions: This method has advantages over prior means of assessing treatment thresholds: it does not require physicians to explicitly estimate patient risk, and it allows for uncertainty in the recommendations. These advantages will make it possible to use this method when assessing interventions intended to alter clinical decision making.
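The paper itself derives thresholds with ordinal logistic regression over graded recommendations; as a simplified illustration of the underlying idea, the sketch below fits a binary treat/no-treat logistic model to a physician's recommendations and reads off the risk at which the probability of recommending treatment crosses 50%. The function name, grid values, and the toy data are illustrative assumptions, not the study's.

```python
import math

def estimate_threshold(risks, treat, t_grid=None, k_grid=(0.5, 1.0, 2.0, 5.0)):
    """Grid-search maximum likelihood for a logistic treat/no-treat model.

    p(treat | risk) = 1 / (1 + exp(-k * (risk - t)))
    so t is the risk (in %) at which the physician is indifferent
    (p = 0.5), i.e. an estimate of the treatment threshold.
    """
    if t_grid is None:
        t_grid = [x / 2 for x in range(0, 61)]  # 0.0 .. 30.0 in 0.5% steps
    best_ll, best_t = -float("inf"), None
    for t in t_grid:
        for k in k_grid:
            ll = 0.0
            for r, y in zip(risks, treat):
                p = 1.0 / (1.0 + math.exp(-k * (r - t)))
                p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard against log(0)
                ll += math.log(p) if y else math.log(1.0 - p)
            if ll > best_ll:
                best_ll, best_t = ll, t
    return best_t

# Hypothetical physician who recommends treatment once the 5-year
# glaucoma risk exceeds roughly 12% (toy data, not study data).
risks = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 25, 30]
treat = [r >= 12 for r in risks]
print(estimate_threshold(risks, treat))
```

A real ordinal model would use all response categories rather than a binary cut, but the recovered 50% point plays the same role as the threshold described in the abstract.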

    Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers

    Background: Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques.
    Methods: In this paper we present several extensions to decision curve analysis, including correction for overfit, confidence intervals, application to censored data (including competing risks) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques.
    Results: Simulation studies showed that repeated 10-fold cross-validation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risks, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve.
    Conclusion: Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided.
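The core quantity behind a decision curve is net benefit at a threshold probability pt: NB = TP/n - (FP/n) x pt/(1 - pt), compared against the "treat all" and "treat none" policies. The sketch below computes a basic curve from predicted probabilities and observed outcomes; it is a minimal illustration of the standard formulation, not the paper's extended software (no overfit correction, confidence intervals, or censoring).

```python
def net_benefit(probs, outcomes, pt):
    """Net benefit of the policy 'treat if predicted risk >= pt'.

    NB = TP/n - FP/n * pt / (1 - pt)
    """
    n = len(outcomes)
    tp = sum(1 for p, y in zip(probs, outcomes) if p >= pt and y)
    fp = sum(1 for p, y in zip(probs, outcomes) if p >= pt and not y)
    return tp / n - fp / n * pt / (1 - pt)

def decision_curve(probs, outcomes, thresholds):
    """Net benefit of the model and of 'treat all' at each threshold.

    'Treat none' has net benefit 0 by definition.
    """
    prevalence = sum(outcomes) / len(outcomes)
    curve = []
    for pt in thresholds:
        nb_model = net_benefit(probs, outcomes, pt)
        nb_all = prevalence - (1 - prevalence) * pt / (1 - pt)
        curve.append((pt, nb_model, nb_all))
    return curve

# Toy example: five patients, two with the event.
probs = [0.9, 0.8, 0.1, 0.2, 0.1]
outcomes = [1, 1, 0, 0, 0]
print(net_benefit(probs, outcomes, 0.5))  # -> 0.4 (perfect classification here)
```

Plotting nb_model against pt, alongside the treat-all and treat-none lines, yields the decision curve; the model is useful at thresholds where its net benefit exceeds both reference policies.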

    Medicine in words and numbers: a cross-sectional survey comparing probability assessment scales

    Background / In the complex domain of medical decision making, reasoning under uncertainty can benefit from supporting tools. Automated decision support tools often build upon mathematical models, such as Bayesian networks. These networks require probabilities, which often have to be assessed by experts in the domain of application. Probability response scales can be used to support the assessment process. We compare assessments obtained with different types of response scale.
    Methods / General practitioners (GPs) gave assessments on and preferences for three different probability response scales: a numerical scale, a scale with only verbal labels, and a combined verbal-numerical scale we had designed ourselves. Standard analyses of variance were performed.
    Results / No differences in assessments over the three response scales were found. Preferences for type of scale differed: the less experienced GPs preferred the verbal scale, the most experienced preferred the numerical scale, and the groups in between preferred the combined verbal-numerical scale.
    Conclusion / We conclude that all three response scales are equally suitable for supporting probability assessment. The combined verbal-numerical scale is a good choice for aiding the process, since it offers numerical labels to those who prefer numbers and verbal labels to those who prefer words, and accommodates both more and less experienced professionals.
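A combined verbal-numerical scale pairs verbal anchors with numeric positions, so an assessor can answer in whichever form they prefer and the tool still obtains a number. The sketch below shows one way such a mapping could work; the labels and anchor values are hypothetical, as the abstract does not give the actual scale used in the study.

```python
# Hypothetical verbal anchors paired with numeric positions on a
# combined verbal-numerical scale (not the study's actual labels).
SCALE = [
    ("impossible", 0.0),
    ("improbable", 0.05),
    ("uncertain", 0.5),
    ("probable", 0.85),
    ("certain", 1.0),
]

def to_probability(response):
    """Accept either a verbal label or a number in [0, 1]."""
    for label, p in SCALE:
        if response == label:
            return p
    p = float(response)
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must lie in [0, 1]")
    return p
```

Either response style yields a probability usable as a Bayesian network parameter, which is the accommodation the abstract's conclusion describes.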

    Pretest probability assessment derived from attribute matching

    BACKGROUND: Pretest probability (PTP) assessment plays a central role in diagnosis. This report describes a novel attribute-matching method for generating a PTP for acute coronary syndrome (ACS) and compares it with a validated logistic regression equation (LRE).
    METHODS: Eight clinical variables (attributes) were chosen by classification and regression tree analysis of a prospectively collected reference database of 14,796 emergency department (ED) patients evaluated for possible ACS. For attribute matching, a computer program identifies patients within the database who have the exact profile defined by clinician input of the eight attributes. The novel method was compared with the LRE for the ability to produce a PTP estimate <2% in a validation set of 8,120 patients evaluated for possible ACS who did not have ST-segment elevation on ECG. 1,061 patients were excluded prior to the validation analysis because of ST-segment elevation (713), missing data (77) or loss to follow-up (271).
    RESULTS: In the validation set, attribute matching produced 267 unique PTP estimates (median PTP 6%, 1st-3rd quartile 1-10%) compared with the LRE, which produced 96 unique PTP estimates (median 24%, 1st-3rd quartile 10-30%). The areas under the receiver operating characteristic curves were 0.74 (95% CI 0.65 to 0.82) for attribute matching and 0.68 (95% CI 0.62 to 0.77) for the LRE. The attribute-matching system categorized 1,670 patients (24%, 95% CI 23-25%) as having a PTP <2.0%; 28 developed ACS (1.7%, 95% CI 1.1-2.4%). The LRE categorized 244 patients (4%, 95% CI 3-4%) with PTP <2.0%; four developed ACS (1.6%, 95% CI 0.4-4.1%).
    CONCLUSION: Attribute matching estimated a very low PTP for ACS in a significantly larger proportion of ED patients than a validated LRE.
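The attribute-matching idea is simple: find every reference patient whose profile exactly matches the query on the chosen attributes, then report the observed outcome rate among those matches as the pretest probability. A minimal sketch, with illustrative field names and a toy three-attribute database (the study used eight attributes and 14,796 patients):

```python
def attribute_match_ptp(database, query, attributes):
    """Pretest probability from exact attribute matching: the observed
    ACS rate among reference patients whose profile matches the query
    exactly on every listed attribute."""
    matches = [pt for pt in database
               if all(pt[a] == query[a] for a in attributes)]
    if not matches:
        return None  # no identical profile exists in the reference database
    return sum(pt["acs"] for pt in matches) / len(matches)

# Toy reference database; field names are illustrative, not the study's.
ATTRS = ("age_band", "chest_pain", "diaphoresis")
db = [
    {"age_band": "40-49", "chest_pain": True, "diaphoresis": False, "acs": 1},
    {"age_band": "40-49", "chest_pain": True, "diaphoresis": False, "acs": 0},
    {"age_band": "40-49", "chest_pain": True, "diaphoresis": False, "acs": 0},
    {"age_band": "60-69", "chest_pain": True, "diaphoresis": True, "acs": 1},
]
query = {"age_band": "40-49", "chest_pain": True, "diaphoresis": False}
print(attribute_match_ptp(db, query, ATTRS))  # 1 of 3 matches had ACS
```

Because each distinct profile yields its own empirical rate, the method can produce many unique PTP values (267 in the validation set), whereas a regression equation collapses patients onto fewer predicted values.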

    Gut Feelings as a Third Track in General Practitioners’ Diagnostic Reasoning

    BACKGROUND: General practitioners (GPs) are often faced with complicated, vague problems in situations of uncertainty that they have to solve at short notice. In such situations, gut feelings seem to play a substantial role in their diagnostic process. Qualitative research has distinguished a sense of alarm and a sense of reassurance. However, not every GP trusts their gut feelings, since a scientific explanation is lacking.
    OBJECTIVE: This paper explains how gut feelings arise and function in GPs' diagnostic reasoning.
    APPROACH: The paper reviews literature from medical, psychological and neuroscientific perspectives.
    CONCLUSIONS: Gut feelings in general practice are based on the interaction between patient information and a GP's knowledge and experience. This is visualized in a knowledge-based model of GPs' diagnostic reasoning, emphasizing that this complex task combines analytical and non-analytical cognitive processes. The model integrates the two well-known diagnostic reasoning tracks of medical decision-making and medical problem-solving, and adds gut feelings as a third track. Analytical and non-analytical diagnostic reasoning interact continuously, and GPs use elements of all three tracks, depending on the task and the situation. In this dual process theory, gut feelings emerge as a consequence of non-analytical processing of the available information and knowledge, either reassuring GPs or alerting them that something is wrong and action is required. The role of affect as a heuristic within the physician's knowledge network explains how gut feelings may help GPs to navigate in a mostly efficient way through the often complex and uncertain diagnostic situations of general practice. Emotion research and neuroscientific data support the unmistakable role of affect in the process of making decisions and explain the bodily sensation of gut feelings. The implications for health care practice and medical education are discussed.