
    Cognitive debiasing 2: Impediments to and strategies for change

    In a companion paper, we proposed that cognitive debiasing is a skill essential in developing sound clinical reasoning to mitigate the incidence of diagnostic failure. We reviewed the origins of cognitive biases and some proposed mechanisms for how debiasing processes might work. In this paper, we first outline a general schema of how cognitive change occurs and the constraints that may apply. We review a variety of individual factors, many of them biases themselves, which may be impediments to change. We then examine the major strategies that have been developed in the social sciences and in medicine to achieve cognitive and affective debiasing, including the important concept of forcing functions. The abundance and rich variety of approaches that exist in the literature and in individual clinical domains illustrate the difficulties inherent in achieving cognitive change, and also the need for such interventions. Ongoing cognitive debiasing is arguably the most important feature of the critical thinker and the well-calibrated mind. We outline three groups of suggested interventions going forward: educational strategies, workplace strategies and forcing functions. We stress the importance of ambient and contextual influences on the quality of individual decision making and the need to address factors known to impair calibration of the decision maker. We also emphasise the importance of introducing these concepts, and the corollary development of training in critical thinking, at the undergraduate level in medical education.

    Paramedic clinical decision making during high acuity emergency calls: design and methodology of a Delphi study

    Background: The scope of practice of paramedics in Canada has steadily evolved to include increasingly complex interventions in the prehospital setting, which likely have repercussions on clinical outcome and patient safety. Clinical decision making has been evaluated in several health professions, but there is a paucity of work in this area on paramedics. This study will use the Delphi technique to establish consensus on the most important instances of paramedic clinical decision making during high acuity emergency calls, as they relate to clinical outcome and patient safety. Methods and design: Participants in this multi-round survey study will be paramedic leaders and emergency medical services medical directors/physicians from across Canada. In the first round, participants will identify instances of clinical decision making they feel are important for patient outcome and safety. In the second round, the panel will rank each instance of clinical decision making in terms of its importance. In the third and, if needed, fourth round, participants will have the opportunity to revise the ranking they assigned to each instance. Consensus will be considered achieved for an instance if 80% of the panel ranks it as important or extremely important. The most important instances of clinical decision making will then be plotted on a process analysis map. Discussion: The process analysis map that results from this Delphi study will enable gaps in research, knowledge and practice to be identified.
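    The 80% consensus rule in the design above is straightforward to operationalise. Below is a minimal Python sketch, assuming a hypothetical five-point importance scale and made-up panel ratings purely for illustration; none of the names or numbers come from the study itself.

```python
CONSENSUS_THRESHOLD = 0.80  # 80% of the panel, as described in the study design
TOP_CATEGORIES = {"important", "extremely important"}  # assumed scale labels, illustrative only

def reaches_consensus(ratings: list[str]) -> bool:
    """True if at least 80% of panellists rated the instance
    'important' or 'extremely important'."""
    top = sum(1 for r in ratings if r in TOP_CATEGORIES)
    return top / len(ratings) >= CONSENSUS_THRESHOLD

# Hypothetical round-two ratings for one instance of clinical decision making.
ratings = ["extremely important"] * 9 + ["important"] * 4 + ["moderately important"] * 2
print(reaches_consensus(ratings))  # True: 13/15 = 86.7% fall in the top two categories
```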

    Implementation science: a role for parallel dual processing models of reasoning?

    BACKGROUND: A better theoretical base for understanding professional behaviour change is needed to support evidence-based changes in medical practice. Traditionally, strategies to encourage changes in clinical practices have been guided empirically, without explicit consideration of underlying theoretical rationales for such strategies. This paper considers a theoretical framework for reasoning, drawn from psychology, for identifying individual differences in cognitive processing between doctors that could moderate the decision to incorporate new evidence into their clinical decision-making. DISCUSSION: Parallel dual processing models of reasoning posit two cognitive modes of information processing that are in constant operation as humans reason. One mode has been described as experiential, fast and heuristic; the other as rational, conscious and rule based. Within such models, the uptake of new research evidence can be represented by the latter mode; it is reflective, explicit and intentional. On the other hand, well-practised clinical judgements can be positioned in the experiential mode, being automatic, reflexive and swift. Research suggests that individual differences between people in both cognitive capacity (e.g., intelligence) and cognitive processing (e.g., thinking styles) influence how the two reasoning modes interact. This being so, it is proposed that these same differences between doctors may moderate the uptake of new research evidence. Such dispositional characteristics have largely been ignored in research investigating effective strategies for implementing research evidence. While medical decision-making occurs in a complex social environment with multiple influences and decision makers, an individual doctor's judgement still occupies a key position in diagnostic and treatment decisions for individual patients. This paper argues, therefore, that individual differences between doctors in terms of reasoning are important considerations in any discussion relating to changing clinical practice. SUMMARY: It is imperative that change strategies in healthcare consider relevant theoretical frameworks from other disciplines such as psychology. Generic dual processing models of reasoning are proposed as potentially useful in identifying factors within doctors that may moderate their individual uptake of evidence into clinical decision-making. Such factors can then inform strategies to change practice.

    A cognitive forcing tool to mitigate cognitive bias: a randomised control trial

    Background Cognitive bias is an important source of diagnostic error yet is a challenging area to understand and teach. Our aim was to determine whether a cognitive forcing tool can reduce the rate of error in clinical decision making. A secondary objective was to understand the process by which this effect might occur. Methods We hypothesised that using a cognitive forcing tool would reduce diagnostic error rates. To test this hypothesis, a novel online case-based approach was used in a single-blinded randomised controlled trial conducted from January 2017 to September 2018. In addition, a qualitative series of "think aloud" interviews was conducted with 20 doctors from a UK teaching hospital in 2018. The primary outcome was the diagnostic error rate when solving bias-inducing clinical vignettes. Participants were a volunteer sample of medical professionals from across the UK, the Republic of Ireland and North America, ranging in seniority from medical student to attending physician. Results Seventy-six participants were included in the study. The data showed that doctors of all grades routinely made errors related to cognitive bias. There was no difference in error rates between groups (mean 2.8 cases correct in the intervention group vs 3.1 in the control group; 95% CI -0.94 to 0.45, P = 0.49). The qualitative protocol revealed that the cognitive forcing strategy was well received and produced a subjectively positive impact on doctors' accuracy and thoughtfulness in clinical cases. Conclusions The quantitative data failed to show an improvement in accuracy despite a positive qualitative experience. There is insufficient evidence to recommend this tool in clinical practice; however, the qualitative data suggest that such an approach has some merit and face validity to users.
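    For readers who want to reproduce this style of between-group comparison, the sketch below runs a Welch two-sample t-test and builds a 95% confidence interval for the difference in mean cases correct. The scores are invented placeholders for illustration only, not the trial data, and this is a generic approach rather than the authors' exact analysis.

```python
import numpy as np
from scipy import stats

def welch_comparison(a, b, alpha=0.05):
    """Welch's t-test plus a (1 - alpha) CI for the difference in means (a - b)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    # Welch-Satterthwaite degrees of freedom
    df = se**4 / ((a.var(ddof=1) / len(a))**2 / (len(a) - 1)
                  + (b.var(ddof=1) / len(b))**2 / (len(b) - 1))
    t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)
    half_width = stats.t.ppf(1 - alpha / 2, df) * se
    return diff, (diff - half_width, diff + half_width), p_value

# Placeholder scores (cases correct per participant), illustrative only.
intervention = [3, 2, 4, 2, 3, 3, 2, 4, 3, 2]
control      = [3, 4, 3, 2, 4, 3, 3, 4, 2, 3]
print(welch_comparison(intervention, control))
```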

    Measurement properties of the Inventory of Cognitive Bias in Medicine (ICBM)

    Background Understanding how doctors think may inform both undergraduate and postgraduate medical education. Developing such an understanding requires valid and reliable measurement tools. We examined the measurement properties of the Inventory of Cognitive Bias in Medicine (ICBM), designed to tap this domain with specific reference to medicine, but with previously questionable measurement properties. Methods First-year postgraduate-entry medical students at Flinders University, and trainees (postgraduate doctors in any specialty) and consultants (N = 348) based at two teaching hospitals in Adelaide, Australia, completed the ICBM and a questionnaire measuring thinking styles (the Rational Experiential Inventory). Results Questions with the lowest item-total correlation were deleted from the original 22-item ICBM, although the resultant 17-item scale only marginally improved internal consistency (Cronbach's α = 0.61 compared with 0.57). A factor analysis identified two scales, both achieving only α = 0.58. Construct validity was assessed by correlating Rational Experiential Inventory scores with the ICBM, with some positive correlations noted for students only, suggesting that those who are naïve to the knowledge base required to "successfully" respond to the ICBM may profit from a thinking style in tune with logical reasoning. Conclusion The ICBM failed to demonstrate adequate content validity, internal consistency and construct validity. It is unlikely that improvements can be achieved without considered attention to both the audience for which it is designed and its item content. The latter may need to involve both removal of some items deemed to measure multiple biases and the addition of new items in an attempt to survey the range of biases that may compromise medical decision making.
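    The internal-consistency statistics quoted above (Cronbach's α and item-total correlations) can be computed directly from a respondents-by-items score matrix. A minimal sketch follows, using randomly generated scores purely for illustration; it is not the ICBM data or the authors' analysis code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    k = items.shape[1]
    return np.array([
        np.corrcoef(items[:, j],
                    items[:, [i for i in range(k) if i != j]].sum(axis=1))[0, 1]
        for j in range(k)
    ])

# Illustrative random data only: 100 respondents, 17 items scored 1-5.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(100, 17)).astype(float)
print(cronbach_alpha(scores))
print(corrected_item_total(scores).round(2))
```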

    Online patient simulation training to improve clinical reasoning: a feasibility randomised controlled trial

    Background Online patient simulations (OPS) are a novel method for teaching clinical reasoning skills to students and could contribute to reducing diagnostic errors. However, little is known about how best to implement and evaluate OPS in medical curricula. The aim of this study was to assess the feasibility, acceptability and potential effects of eCREST, the electronic Clinical Reasoning Educational Simulation Tool. Methods A feasibility randomised controlled trial was conducted with final-year undergraduate students from three UK medical schools in academic years 2016/2017 (cohort one) and 2017/2018 (cohort two). Student volunteers were recruited in cohort one via email and on teaching days; in cohort two eCREST was also integrated into a relevant module in the curriculum. The intervention group received three patient cases and the control group received teaching as usual; the allocation ratio was 1:1. Researchers were blind to allocation. Clinical reasoning skills were measured using a survey after 1 week and a patient case after 1 month. Results Across schools, 264 students participated (18.2% of all eligible). Cohort two had greater uptake (183/833, 22%) than cohort one (81/621, 13%). After 1 week, 99/137 (72%) of the intervention group and 86/127 (68%) of the control group remained in the study. eCREST improved students' ability to gather essential information from patients over controls (OR = 1.4; 95% CI 1.1 to 1.7, n = 148). Most of the intervention group (80/98, 82%) agreed that eCREST helped them to learn clinical reasoning skills. Conclusions eCREST was highly acceptable and improved data-gathering skills that could reduce diagnostic errors. Uptake was low but improved when eCREST was integrated into course delivery. A summative trial is needed to estimate effectiveness.
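    The headline effect above is reported as an odds ratio with a 95% confidence interval. As a rough illustration of how such a figure is obtained, the sketch below computes an unadjusted odds ratio with a Wald interval from a 2x2 table; the counts are invented placeholders, and the published estimate will have come from the authors' own (possibly adjusted) model.

```python
import math

def odds_ratio_wald(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and 95% Wald CI for a 2x2 table:
    a/b = intervention successes/failures, c/d = control successes/failures."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, (lower, upper)

# Placeholder counts for illustration only (not the eCREST results).
print(odds_ratio_wald(a=60, b=20, c=48, d=28))
```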
