
    Development and validation of the ACE tool: Assessing medical trainees' competency in evidence based medicine

    BACKGROUND: While a variety of instruments have been developed to assess knowledge and skills in evidence based medicine (EBM), few assess all aspects of EBM - including knowledge, skills, attitudes and behaviour - or have been psychometrically evaluated. The aim of this study was to develop and validate an instrument that evaluates medical trainees’ competency in EBM across knowledge, skills and attitude. METHODS: The ‘Assessing Competency in EBM’ (ACE) tool was developed by the authors, with content and face validity assessed by expert opinion. A cross-sectional sample of 342 medical trainees representing ‘novice’, ‘intermediate’ and ‘advanced’ EBM trainees was recruited to complete the ACE tool. Construct validity, item difficulty, internal reliability and item discrimination were analysed. RESULTS: We recruited 98 EBM-novice, 108 EBM-intermediate and 136 EBM-advanced participants. A statistically significant difference in the total ACE score was observed and corresponded to the level of training: on a 0-15-point test, the mean ACE scores were 8.6 for EBM-novice, 9.5 for EBM-intermediate and 10.4 for EBM-advanced participants (p < 0.0001). Individual item discrimination was excellent (Item Discrimination Index ranging from 0.37 to 0.84), and internal reliability was consistent across all but three items (Item-Total Correlations were all positive, ranging from 0.14 to 0.20). CONCLUSION: The 15-item ACE tool is a reliable and valid instrument to assess medical trainees’ competency in EBM. It provides a novel assessment that measures user performance across the four main steps of EBM. To provide a complete suite of instruments to assess EBM competency across various patient scenarios, future refinement of the ACE instrument should include further scenarios covering harm, diagnosis and prognosis.
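    The two item statistics reported above - the Item Discrimination Index and the Item-Total Correlation - are standard classical-test-theory quantities. The study does not publish its analysis code; the sketch below is a generic illustration of how they are commonly computed (the function names and the top/bottom-fraction split are assumptions, not the authors' method):

    ```python
    import numpy as np

    def item_discrimination(scores, item_col, frac=0.27):
        """Discrimination index for one item: mean item score among the top
        `frac` of respondents (ranked by total score) minus the mean among
        the bottom `frac`."""
        totals = scores.sum(axis=1)
        order = np.argsort(totals)               # respondents, low to high total
        n = max(1, int(len(scores) * frac))
        low = scores[order[:n], item_col]
        high = scores[order[-n:], item_col]
        return high.mean() - low.mean()

    def item_total_correlation(scores, item_col):
        """Corrected item-total correlation: correlation of one item with
        the sum of all the remaining items."""
        item = scores[:, item_col]
        rest = np.delete(scores, item_col, axis=1).sum(axis=1)
        return np.corrcoef(item, rest)[0, 1]
    ```

    A positive item-total correlation indicates the item pulls in the same direction as the rest of the test, which is the property the ACE validation checks item by item.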

    Evidence-based practice educational intervention studies: A systematic review of what is taught and how it is measured

    Abstract Background Despite the established interest in evidence-based practice (EBP) as a core competence for clinicians, evidence for how best to teach and evaluate EBP remains weak. We sought to systematically assess coverage of the five EBP steps, review the outcome domains measured, and assess the properties of the instruments used in studies evaluating EBP educational interventions. Methods We conducted a systematic review of controlled studies (i.e. studies with a separate control group) which had investigated the effect of EBP educational interventions. We used a citation analysis technique and tracked the forward and backward citations of the index articles (i.e. the systematic reviews and primary studies included in an overview of the effect of EBP teaching) using Web of Science until May 2017. We extracted information on intervention content (grouped into the five EBP steps) and the outcome domains assessed. We also searched the literature for published reliability and validity data on the EBP instruments used. Results Of 1831 records identified, 302 full-text articles were screened, and 85 were included. Of these, 46 (54%) studies were randomised trials, 51 (60%) included postgraduate-level participants, and 63 (75%) taught medical professionals. EBP Step 3 (critical appraisal) was the most frequently taught step (63 studies; 74%). Only 10 (12%) of the studies taught content which addressed all five EBP steps. Of the 85 studies, 52 (61%) evaluated EBP skills, 39 (46%) knowledge, 35 (41%) attitudes, 19 (22%) behaviours, 15 (18%) self-efficacy, and 7 (8%) measured reactions to EBP teaching delivery. Of the 24 instruments used in the included studies, 6 were high quality (achieved ≥3 types of established validity evidence); these were used in 14 (29%) of the 52 studies that measured EBP skills, 14 (41%) of the 39 studies that measured EBP knowledge, and 8 (26%) of the 35 studies that measured EBP attitudes. Conclusions Most EBP educational interventions evaluated in controlled studies teach only some of the EBP steps (predominantly critical appraisal of evidence) and do not use high-quality instruments to measure outcomes. Educational packages and instruments which address all five EBP steps are needed to improve EBP teaching.

    How are "teaching the teachers" courses in evidence based medicine evaluated? A systematic review

    Background Teaching of evidence-based medicine (EBM) has become widespread in medical education. Teaching the teachers (TTT) courses address the increased teaching demand and the need to improve the effectiveness of EBM teaching. We conducted a systematic review to summarise and appraise existing assessment methods for teaching the teachers courses in EBM. Methods We searched the PubMed, BioMed, Embase, Cochrane and ERIC databases without language restrictions and included articles that assessed course participants. Study selection and data extraction were conducted independently by two reviewers. Results Of 1230 potentially relevant studies, five papers met the selection criteria. There were no specific assessment tools for evaluating the effectiveness of EBM TTT courses, although some of the material available might be useful in initiating the development of such a tool. Conclusion There is a need for the development of educationally sound assessment tools for teaching the teachers courses in EBM, without which it is impossible to ascertain whether such courses have the desired effect.

    Guidelines on Chemotherapy in Advanced Stage Gynecological Malignancies: An Evaluation of 224 Professional Societies and Organizations

    BACKGROUND: Clinical practice guidelines are important for guiding practice, but it is unclear if they are commensurate with the available evidence. METHODS: We examined guidelines produced by cancer and gynecological societies and organizations, and evaluated their coverage of and stance towards chemotherapy for advanced stage disease in 4 gynecological malignancies (breast, ovarian, cervical and endometrial cancer), where the evidence for the use of chemotherapy is very different (substantial and conclusive for breast and ovarian cancer; limited and suggesting no major benefit for cervical and endometrial cancer). Eligible societies and organizations were identified through systematic internet searches (last update June 2009). Pertinent websites were scrutinized for the presence of clinical practice guidelines, and the relevant guidelines were analyzed. RESULTS: Among 224 identified eligible societies and organizations, 69 (31%) provided any sort of guidelines, while recommendations for chemotherapy in advanced stage gynecological malignancies were available in 20 of them. Only 14 had developed their own guideline, and only 5 had developed guidelines for all 4 malignancies. Use of levels of evidence and grades of recommendations, and aspects of the production, implementation and timeliness of the guidelines, did not differ significantly across malignancies. Guidelines on breast and ovarian cancer utilized significantly more randomized trials and meta-analyses. Guidelines differed across malignancies in their coverage of disease-free survival (p = 0.033), response rates (p = 0.024), symptom relief (p = 0.005), quality of life (p = 0.001) and toxicity (p = 0.039), with breast and ovarian cancer guidelines typically covering these outcomes more frequently. All guidelines explicitly or implicitly endorsed the use of chemotherapy. CONCLUSIONS: Clinical practice guidelines are provided by only a minority of professional societies and organizations. Available guidelines tend to recommend chemotherapy even for diseases where the effect of chemotherapy is controversial and recommendations are based on scant evidence.

    Psychometric properties of a test in evidence based practice: the Spanish version of the Fresno test

    Background Validated instruments are needed to evaluate the programmatic impact of Evidence Based Practice (EBP) training and to document the competence of individual trainees. This study aimed to translate the Fresno test into Spanish and subsequently validate it, in order to ensure the equivalence of the Spanish version against the original English version. Methods Before-and-after study performed between October 2007 and June 2008, with three groups of participants: (a) mentors of family medicine residents (expert group) (n = 56); (b) family medicine physicians (intermediate experience group) (n = 17); (c) family medicine residents (novice group) (n = 202). Medical residents attended an EBP course, and two sets of the test were administered before and after the course. The Fresno test is a performance-based measure for use in medical education that assesses EBP skills. The outcome measures were: inter-rater and intra-rater reliability, internal consistency, item analyses, construct validity, feasibility of administration, and responsiveness. Results Inter-rater correlations were 0.95 and 0.85 in the pre-test and the post-test respectively. The overall intra-rater reliability was 0.71 and 0.81 in the pre-test and post-test questionnaire, respectively. Cronbach's alpha was 0.88 and 0.77, respectively. 152 residents (75.2%) returned both sets of the questionnaire. The observed effect size for the residents was 1.77 (95% CI: 1.57-1.95); the standardised response mean was 1.65 (95% CI: 1.47-1.82). Conclusions The Spanish version of the Fresno test is a useful tool in assessing the knowledge and skills of EBP in Spanish-speaking residents of Family Medicine.
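    The two responsiveness statistics quoted in the Results are simple ratios of the mean pre/post change to a standard deviation: the effect size divides by the SD of the baseline scores, the standardised response mean (SRM) by the SD of the change scores. A minimal generic sketch (with invented scores, not the study's data) is:

    ```python
    import statistics

    def responsiveness(pre, post):
        """Paired pre/post responsiveness statistics:
        effect size = mean change / SD of baseline scores,
        SRM         = mean change / SD of the change scores."""
        change = [b - a for a, b in zip(pre, post)]
        mean_change = statistics.mean(change)
        effect_size = mean_change / statistics.stdev(pre)
        srm = mean_change / statistics.stdev(change)
        return effect_size, srm
    ```

    Both statistics exceeding 0.8 is conventionally read as a large, educationally meaningful change, which is why the study reports them after the EBP course.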

    Maternal and perinatal guideline development in hospitals in South East Asia: the experience of the SEA-ORCHID project

    Background Clinical practice guidelines (CPGs) are commonly used to support practitioners to improve practice. However, many studies have raised concerns about guideline quality, and the reasons why guidelines are not developed following the established development methods are not clear. The SEA-ORCHID project aims to increase the generation and use of locally relevant research and to improve clinical practice in maternal and perinatal care in four countries in South East Asia. Baseline data highlighted that development of evidence-based CPGs according to recommended processes was very rare in the SEA-ORCHID hospitals. The project investigators suggested that aspects of the recommended development process made it very difficult in the participating hospitals. We therefore aimed to explore the experience of guideline development, and particularly the enablers of and barriers to developing evidence-based guidelines, in the nine hospitals in South East Asia participating in the SEA-ORCHID project, so as to better understand how evidence-based guideline development could be facilitated in these settings. Methods Semi-structured, face-to-face interviews were undertaken with senior and junior healthcare providers (nurses, midwives, doctors) from the maternal and neonatal services at each of the nine participating hospitals. Interviews were audio-recorded and transcribed, and a thematic analysis was undertaken. Results Seventy-five individual, 25 pair and 11 group interviews were conducted. Participants clearly valued evidence-based guidelines. However, they also identified several major barriers to guideline development, including time, lack of awareness of the process, difficulties in searching for evidence and arranging guideline development group meetings, and issues with achieving multi-disciplinarity and consumer involvement. They also highlighted the central importance of keeping guidelines up to date. Conclusion Healthcare providers in the SEA-ORCHID hospitals face a series of barriers to developing evidence-based guidelines. At present, in many hospitals, several of these barriers are insurmountable, and as a result rigorous, evidence-based guidelines are not being developed. Given the acknowledged benefits of evidence-based guidelines, perhaps a new approach to supporting their development in these contexts is needed.

    Validation of the modified Fresno Test: assessing physical therapists' evidence based practice knowledge and skills

    Background Health care educators need valid and reliable tools to assess evidence based practice (EBP) knowledge and skills. Such instruments have yet to be developed for use among physical therapists. The Fresno Test (FT) has been validated only among general practitioners and occupational therapists, and does not assess integration of research evidence with patient perspectives and clinical expertise. The purpose of this study was to develop and validate a modified FT to assess EBP knowledge and skills relevant to physical therapist (PT) practice. Methods The FT was modified to include PT-specific content and two new questions to assess integration of patient perspectives and clinical expertise with research evidence. An expert panel reviewed the test for content validity. A cross-sectional cohort representing three training levels (EBP-novice students, EBP-trained students, EBP-expert faculty) completed the test. Two blinded raters, not involved in test development, independently scored each test. Construct validity was assessed through analysis of variance for linear trends among known groups. Inter- and intra-rater reliability, internal consistency, item discrimination index, item-total correlation, and difficulty were analyzed. Results Among 108 participants (31 EBP-novice students, 50 EBP-trained students, and 27 EBP-expert faculty), there was a statistically significant (p < 0.0001) difference in total score corresponding to training level. Total score reliability and the psychometric properties of items modified for discipline-specific content were excellent: inter-rater reliability ICC(2,1) = 0.91; intra-rater reliability ICC(2,1) = 0.95 and 0.96. Cronbach's α was 0.78. Of the two new items, only one had strong psychometric properties. Conclusions The 13-item modified FT presented here is a valid, reliable assessment of physical therapists' EBP knowledge and skills. One new item assesses integration of the patient perspective as part of the EBP model. Educators and researchers may use the 13-item modified FT to evaluate PT EBP curricula and physical therapists' EBP knowledge and skills.
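    Cronbach's alpha, the internal-consistency coefficient reported by several of the validation studies listed here, has a short closed form: alpha = k/(k-1) × (1 − sum of item variances / variance of total scores). As a generic illustration (not the authors' analysis code):

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for a respondents-by-items score matrix
        (rows = respondents, columns = items)."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                        # number of items
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)
    ```

    When every item ranks respondents identically, alpha reaches 1; values around 0.7-0.9, like those reported above, are conventionally taken as acceptable-to-good internal consistency.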

    Self-perceived competence correlates poorly with objectively measured competence in Evidence Based Medicine among medical students

    Background Previous studies report various degrees of agreement between self-perceived competence and objectively measured competence in medical students. There is still a paucity of evidence on how the two correlate in the field of Evidence Based Medicine (EBM). We undertook a cross-sectional study to evaluate the self-perceived competence in EBM of senior medical students in Malaysia, and assessed its correlation to their objectively measured competence in EBM. Methods We recruited a group of medical students in their final six months of training between March and August 2006. The students were receiving a clinically-integrated EBM training program within their curriculum. We evaluated the students' self-perceived competence in two EBM domains ("searching for evidence" and "appraising the evidence") by piloting a questionnaire containing 16 relevant items, and objectively assessed their competence in EBM using an adapted version of the Fresno test, a validated tool. We correlated the matching components between our questionnaire and the Fresno test using Pearson's product-moment correlation. Results Forty-five out of 72 students in the cohort (62.5%) participated by completing the questionnaire and the adapted Fresno test concurrently. In general, our students perceived themselves as moderately competent in most items of the questionnaire. They rated themselves on average 6.34 out of 10 (63.4%) in "searching" and 44.41 out of 57 (77.9%) in "appraising". They scored on average 26.15 out of 60 (43.6%) in the "searching" domain and 57.02 out of 116 (49.2%) in the "appraising" domain of the Fresno test. The correlations between the students' self-rating and their performance in the Fresno test were poor in both the "searching" domain (r = 0.13, p = 0.4) and the "appraising" domain (r = 0.24, p = 0.1). Conclusions This study provides supporting evidence that at the undergraduate level, self-perceived competence in EBM, as measured using our questionnaire, does not correlate well with objectively assessed EBM competence measured using the adapted Fresno test. Study registration: International Medical University, Malaysia, research ID: IMU 110/06.
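    The poor agreement reported here is quantified with Pearson's product-moment correlation between the self-rating vector and the Fresno-score vector. Any statistics package computes r directly; purely for illustration (with invented scores, not the study's data), the definition reduces to:

    ```python
    import math

    def pearson_r(x, y):
        """Pearson product-moment correlation between two equal-length
        score vectors: covariance divided by the product of the SDs."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)
    ```

    Values near 0, like the r = 0.13 and r = 0.24 reported above, indicate that students' self-ratings carry almost no information about their measured performance.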

    Clinical practice guidelines for the foot and ankle in rheumatoid arthritis: a critical appraisal

    Background: Clinical practice guidelines are recommendations systematically developed to assist clinical decision-making and inform healthcare. In current rheumatoid arthritis (RA) guidelines, management of the foot and ankle is under-represented and the quality of recommendations is uncertain. This study aimed to identify and critically appraise clinical practice guidelines for foot and ankle management in RA. Methods: Guidelines were identified electronically and through hand searching. The search terms 'rheumatoid arthritis', 'clinical practice guidelines' and related synonyms were used. Critical appraisal and quality rating were conducted using the Appraisal of Guidelines for Research and Evaluation (AGREE) II instrument. Results: Twenty-four guidelines were included. Five guidelines were high quality and recommended for use. Five high-quality and seven low-quality guidelines were recommended for use with modifications. Seven guidelines were low quality and not recommended for use. Five early and twelve established RA guidelines were recommended for use. Only two guidelines were foot and ankle specific. Five recommendation domains were identified in both early and established RA guidelines: multidisciplinary team care, foot healthcare access, foot health assessment/review, orthoses/insoles/splints, and therapeutic footwear. Established RA guidelines also had an 'other foot care treatments' domain. Conclusions: Foot and ankle management for RA features in many of the clinical practice guidelines recommended for use. Unfortunately, the supporting evidence in the guidelines is of low quality: agreement levels are predominantly 'expert opinion' or 'good clinical practice'. More research investigating foot and ankle management for RA is needed prior to its inclusion in clinical practice guidelines.