
    Development and validation of the ACE tool: Assessing medical trainees' competency in evidence based medicine

    BACKGROUND: While a variety of instruments have been developed to assess knowledge and skills in evidence based medicine (EBM), few assess all aspects of EBM - including knowledge, skills, attitudes and behaviour - or have been psychometrically evaluated. The aim of this study was to develop and validate an instrument that evaluates medical trainees’ competency in EBM across knowledge, skills and attitudes. METHODS: The ‘Assessing Competency in EBM’ (ACE) tool was developed by the authors, with content and face validity assessed by expert opinion. A cross-sectional sample of 342 medical trainees representing ‘novice’, ‘intermediate’ and ‘advanced’ EBM trainees was recruited to complete the ACE tool. Construct validity, item difficulty, internal reliability and item discrimination were analysed. RESULTS: We recruited 98 EBM-novice, 108 EBM-intermediate and 136 EBM-advanced participants. A statistically significant difference in total ACE score was observed that corresponded to the level of training: on the 0-15-point test, mean ACE scores were 8.6 for EBM-novice, 9.5 for EBM-intermediate and 10.4 for EBM-advanced participants (p < 0.0001). Individual item discrimination was excellent (Item Discrimination Index ranging from 0.37 to 0.84), and internal reliability was consistent across all but three items (Item-Total Correlations were all positive, ranging from 0.14 to 0.20). CONCLUSION: The 15-item ACE tool is a reliable and valid instrument for assessing medical trainees’ competency in EBM, providing a novel assessment that measures user performance across the four main steps of EBM. To provide a complete suite of instruments for assessing EBM competency across various patient scenarios, future refinement of the ACE instrument should add further scenarios covering harm, diagnosis and prognosis.
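
    For readers unfamiliar with the item statistics reported above, the sketch below shows how a discrimination index (proportion correct in the top-scoring group minus the bottom group) and a corrected item-total correlation are conventionally computed for a dichotomously scored test. This is a minimal illustration on simulated responses, not the authors' analysis code; the 27% group split is one common convention, and all names and values here are invented.

        import numpy as np

        def item_discrimination(responses, frac=0.27):
            # Discrimination index per item: proportion correct in the
            # top-scoring group minus proportion correct in the bottom group.
            # responses: (n_examinees, n_items) array of 0/1 item scores.
            totals = responses.sum(axis=1)
            order = np.argsort(totals)
            k = max(1, int(len(totals) * frac))
            low, high = responses[order[:k]], responses[order[-k:]]
            return high.mean(axis=0) - low.mean(axis=0)

        def corrected_item_total(responses):
            # Correlation of each item with the total score of the
            # remaining items (corrected item-total correlation).
            n_items = responses.shape[1]
            out = np.empty(n_items)
            for j in range(n_items):
                rest = responses.sum(axis=1) - responses[:, j]
                out[j] = np.corrcoef(responses[:, j], rest)[0, 1]
            return out

        # Illustrative run on simulated data (342 examinees, 15 items):
        rng = np.random.default_rng(0)
        sim = (rng.random((342, 15)) < 0.6).astype(int)
        print(item_discrimination(sim))
        print(corrected_item_total(sim))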

    Evidence-based practice educational intervention studies: A systematic review of what is taught and how it is measured

    Background: Despite the established interest in evidence-based practice (EBP) as a core competence for clinicians, evidence for how best to teach and evaluate EBP remains weak. We sought to systematically assess coverage of the five EBP steps, review the outcome domains measured, and assess the properties of the instruments used in studies evaluating EBP educational interventions. Methods: We conducted a systematic review of controlled studies (i.e. studies with a separate control group) that investigated the effect of EBP educational interventions. We used a citation analysis technique, tracking the forward and backward citations of the index articles (i.e. the systematic reviews and primary studies included in an overview of the effect of EBP teaching) in Web of Science until May 2017. We extracted information on intervention content (grouped into the five EBP steps) and the outcome domains assessed. We also searched the literature for published reliability and validity data on the EBP instruments used. Results: Of 1831 records identified, 302 full-text articles were screened and 85 were included. Of these, 46 (54%) were randomised trials, 51 (60%) included postgraduate-level participants, and 63 (75%) taught medical professionals. EBP Step 3 (critical appraisal) was the most frequently taught step (63 studies; 74%); only 10 (12%) of the studies taught content addressing all five EBP steps. Of the 85 studies, 52 (61%) evaluated EBP skills, 39 (46%) knowledge, 35 (41%) attitudes, 19 (22%) behaviours, 15 (18%) self-efficacy, and 7 (8%) measured reactions to EBP teaching delivery. Of the 24 instruments used in the included studies, 6 were high quality (achieved ≥3 types of established validity evidence); these were used in 14 (29%) of the 52 studies that measured EBP skills, 14 (41%) of the 39 studies that measured EBP knowledge, and 8 (26%) of the 35 studies that measured EBP attitudes. Conclusions: Most EBP educational interventions evaluated in controlled studies taught only some of the EBP steps (predominantly critical appraisal of evidence) and did not use high-quality instruments to measure outcomes. Educational packages and instruments that address all five EBP steps are needed to improve EBP teaching.
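
    Percentages such as "63 studies (74%) taught Step 3" come from tallying, for each included study, which of the five EBP steps it taught and which outcome domains it measured. A minimal sketch of that bookkeeping follows; the study records and field names are hypothetical, and only the counting logic is illustrated.

        from collections import Counter

        EBP_STEPS = ["ask", "acquire", "appraise", "apply", "assess"]

        # Hypothetical extraction records, one per included study (85 in the review).
        studies = [
            {"id": "study-001", "steps": {"appraise"}, "domains": {"skills", "knowledge"}},
            {"id": "study-002", "steps": set(EBP_STEPS), "domains": {"attitudes"}},
        ]

        n = len(studies)
        step_counts = Counter(s for study in studies for s in study["steps"])
        domain_counts = Counter(d for study in studies for d in study["domains"])

        for step in EBP_STEPS:
            print(f"{step}: {step_counts[step]}/{n} ({100 * step_counts[step] / n:.0f}%)")
        all_five = sum(1 for study in studies if study["steps"] == set(EBP_STEPS))
        print(f"all five steps: {all_five}/{n} ({100 * all_five / n:.0f}%)")
        print(f"outcome domains: {dict(domain_counts)}")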

    Maternal and perinatal guideline development in hospitals in South East Asia: the experience of the SEA-ORCHID project

    Background: Clinical practice guidelines (CPGs) are commonly used to support practitioners to improve practice, but many studies have raised concerns about guideline quality, and the reasons why guidelines are not developed following established development methods are not clear. The SEA-ORCHID project aims to increase the generation and use of locally relevant research and to improve clinical practice in maternal and perinatal care in four countries in South East Asia. Baseline data highlighted that development of evidence-based CPGs according to recommended processes was very rare in the SEA-ORCHID hospitals, and the project investigators suggested that aspects of the recommended development process made it very difficult in the participating hospitals. We therefore aimed to explore the experience of guideline development, and particularly the enablers of and barriers to developing evidence-based guidelines, in the nine hospitals in South East Asia participating in the SEA-ORCHID project, so as to better understand how evidence-based guideline development could be facilitated in these settings. Methods: Semi-structured, face-to-face interviews were undertaken with senior and junior healthcare providers (nurses, midwives, doctors) from the maternal and neonatal services at each of the nine participating hospitals. Interviews were audio-recorded and transcribed, and a thematic analysis was undertaken. Results: 75 individual, 25 pair and 11 group interviews were conducted. Participants clearly valued evidence-based guidelines, but they identified several major barriers to guideline development, including time, lack of awareness of the process, difficulties searching for evidence and arranging guideline development group meetings, and issues with achieving multi-disciplinarity and consumer involvement. They also highlighted the central importance of keeping guidelines up to date. Conclusion: Healthcare providers in the SEA-ORCHID hospitals face a series of barriers to developing evidence-based guidelines. At present, in many hospitals, several of these barriers are insurmountable, and as a result rigorous, evidence-based guidelines are not being developed. Given the acknowledged benefits of evidence-based guidelines, perhaps a new approach to supporting their development in these contexts is needed.

    Quality and methods of developing practice guidelines

    BACKGROUND: It is not known whether there are differences in quality and recommendations between evidence-based (EB) and consensus-based (CB) guidelines. We used breast cancer guidelines as a case study to assess for these differences. METHODS: Five different instruments to evaluate the quality of guidelines were identified by a literature search. We also searched MEDLINE and the Internet to locate 8 breast cancer guidelines. These guidelines were classified into three categories: evidence based (EB), consensus based with explicit consideration of evidence (CB-EB) and consensus based (CB). Each guideline was evaluated by three of the authors using each of the instruments. For each guideline we assessed agreement among 14 decision points selected from the NCCN (National Comprehensive Cancer Network) guidelines algorithm. For each decision point we recorded the quality of the evidence used to support it. A regression analysis was performed to assess whether the percentage of high-quality evidence used in guideline development was related to the overall quality of the guidelines. RESULTS: Three guidelines were classified as EB, three as CB-EB and two as CB. The EB guidelines scored better than the CB guidelines, with the CB-EB guidelines scoring in the middle, on all instruments for guideline quality assessment. No major disagreement in recommendations was detected among the guidelines regardless of the method used for development, but the EB guidelines had better agreement with the benchmark guideline for any decision point. When the sources of evidence used to support decisions were of high quality, we found a higher level of full agreement among the guidelines' recommendations. Up to 94% of the variation in quality score among guidelines could be explained by the quality of evidence used for guideline development. CONCLUSION: EB guidelines are of better quality than CB-EB and CB guidelines. Explicit use of high-quality evidence can lead to better agreement among recommendations. However, no major disagreement among guidelines was noted regardless of the method used for their development.
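
    The "up to 94% of the variation" figure is an R² from regressing overall guideline quality score on the quality of supporting evidence. The sketch below shows how such an R² is computed with a simple linear regression; the input values are invented for illustration and are not the study's data.

        import numpy as np

        # Invented example values, one pair per guideline (8 guidelines):
        # quality_score is the overall instrument score; pct_high_quality
        # is the percentage of high-quality evidence cited.
        quality_score = np.array([82.0, 78.0, 65.0, 60.0, 55.0, 50.0, 40.0, 35.0])
        pct_high_quality = np.array([90.0, 85.0, 60.0, 55.0, 50.0, 45.0, 20.0, 15.0])

        slope, intercept = np.polyfit(pct_high_quality, quality_score, 1)
        predicted = slope * pct_high_quality + intercept
        ss_res = np.sum((quality_score - predicted) ** 2)
        ss_tot = np.sum((quality_score - quality_score.mean()) ** 2)
        r_squared = 1 - ss_res / ss_tot  # "variation explained" is an R^2 of this kind
        print(f"R^2 = {r_squared:.2f}")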

    How are "teaching the teachers" courses in evidence based medicine evaluated? A systematic review

    Background: Teaching of evidence-based medicine (EBM) has become widespread in medical education, and teaching the teachers (TTT) courses address the increased teaching demand and the need to improve the effectiveness of EBM teaching. We conducted a systematic review to summarise and appraise existing methods for assessing TTT courses in EBM. Methods: We searched the PubMed, BioMed, Embase, Cochrane and ERIC databases without language restrictions and included articles that assessed course participants. Study selection and data extraction were conducted independently by two reviewers. Results: Of 1230 potentially relevant studies, five papers met the selection criteria. No assessment tools specifically designed to evaluate the effectiveness of EBM TTT courses were found, although some of the available material might be useful in initiating the development of such a tool. Conclusion: There is a need for educationally sound assessment tools for teaching the teachers courses in EBM, without which it is impossible to ascertain whether such courses have the desired effect.

    Assessing competency in Evidence Based Practice: strengths and limitations of current tools in practice

    Background: Evidence Based Practice (EBP) involves making clinical decisions informed by the most relevant and valid evidence available. Competence can broadly be defined as a concept that incorporates a variety of domains, including knowledge, skills and attitudes. Adopting an evidence-based approach to practice requires differing competencies across various domains, including literature searching, critical appraisal and communication. This paper examines the current tools available to assess EBP competence and compares their applicability to existing assessment techniques used in medicine, nursing and the health sciences. Discussion: Only two validated assessment tools have been developed specifically to assess all aspects of EBP competence. Of the two (the Berlin and Fresno tools), only the Fresno tool comprehensively assesses EBP competency across all relevant domains. However, both tools focus on assessing EBP competency in medical students, so neither can be used for assessing EBP competency across different health disciplines. The Objective Structured Clinical Exam (OSCE) has been demonstrated to be a reliable and versatile tool for assessing clinical competencies and practical and communication skills. The OSCE has scope as an alternative method for assessing EBP competency, since it combines assessment of cognitive skills including knowledge, reasoning and communication; however, further research is needed to develop the OSCE as a viable method for assessing EBP competency. Summary: Demonstrating EBP competence is a complex task; no single assessment method can adequately provide all of the data needed to assess complete EBP competence. Further research is needed to explore how EBP competence is best assessed, whether in written formats such as the Fresno tool or in another format such as the OSCE. Future tools must also incorporate measures of how EBP competence affects clinician behaviour and attitudes, as well as clinical outcomes, in real-time situations. This research should be conducted across a variety of health disciplines to best inform practice.