9 research outputs found

    Editorial

    No abstract

    Development and evaluation of a spiral model of assessing EBM competency using OSCEs in undergraduate medical education

    Background: Medical students often struggle to understand the relevance of evidence-based medicine (EBM) to their clinical practice, yet it is a competence that all students must develop before graduation. Objective structured clinical examinations (OSCEs) are a valued tool for assessing critical components of EBM competency, particularly different levels of mastery as students progress through the course. This study developed and evaluated EBM-based OSCE stations with the aim of establishing a spiral approach to EBM OSCE stations for undergraduate medical students. Methods: OSCE stations were developed with increasingly complex EBM tasks. Stations were classified according to the classification rubric for EBP assessment tools (CREATE) framework and mapped against the recently published core competencies for evidence-based practice (EBP). Performance data were evaluated using Classical Test Theory, analysing mean scores, pass rates, and station item-total correlation (ITC) in SPSS. Results: Six EBM-based OSCE stations assessing various stages of EBM were created for use in high-stakes summative OSCEs for different year groups across the undergraduate medical degree. All OSCE stations except one had good item-total correlations, ranging from 0.21 to 0.49, indicating that station scores discriminated well between candidates. Domain mean scores ranged from 13.33 to 16.83 out of 20. High reliability was demonstrated for each of the summative OSCE circuits (Cronbach's alpha = 0.67–0.85). Within the CREATE framework, these stations assessed knowledge, skills, and behaviour of medical students in asking, searching, appraising, and integrating evidence in practice. The OSCE stations were useful for assessing six core evidence-based practice competencies, which are intended to be practised with exercises. A spiral model of OSCEs of increasing complexity was proposed to assess EBM competency as students progress through the MBChB course.
    Conclusions: The use of OSCEs is a feasible method of authentically assessing learner EBM performance and behaviour in a high-stakes assessment setting. The use of valid and reliable EBM-based OSCE stations provides evidence for continued development of a hierarchy for assessing scaffolded learning and mastery of EBM competency. Further work is needed to assess their predictive validity.
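The circuit- and station-level statistics reported above (Cronbach's alpha and corrected item-total correlation) can both be computed from a candidates-by-stations score matrix. A minimal sketch in Python/NumPy, using hypothetical scores rather than the study's data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (candidates x stations) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each station's scores
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of candidates' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(scores: np.ndarray, item: int) -> float:
    """ITC of one station against the total of the remaining stations."""
    rest = np.delete(scores, item, axis=1).sum(axis=1)
    return float(np.corrcoef(scores[:, item], rest)[0, 1])

# Hypothetical scores out of 20 for five candidates on three stations.
scores = np.array([[14.0, 15.0, 13.0],
                   [16.0, 17.0, 16.0],
                   [12.0, 13.0, 12.0],
                   [18.0, 18.0, 17.0],
                   [15.0, 16.0, 14.0]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
print(f"station 1 ITC = {corrected_item_total(scores, 0):.2f}")
```

The ITC here is the "corrected" variant, correlating a station with the total of the other stations so the item does not inflate its own correlation; that matches how SPSS reports item-total statistics.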

    A systematic review and taxonomy of tools for evaluating evidence-based medicine teaching in medical education

    Background: The importance of teaching the skills and practice of evidence-based medicine (EBM) to medical professionals has steadily grown in recent years. Alongside this growth is a need to evaluate the effectiveness of EBM curricula as assessed by competency in the five 'A's': asking, acquiring, appraising, applying and assessing (impact and performance). EBM educators in medical education will benefit from a compendium of existing tools for assessing EBM competencies in their settings. The purpose of this review is to provide a systematic review and taxonomy of validated tools that evaluate EBM teaching in medical education. Methods: We searched MEDLINE, EMBASE, the Cochrane Library, Educational Resources Information Centre (ERIC) and Best Evidence Medical Education (BEME) databases, and the references of retrieved articles, published between January 2005 and March 2019. We have presented the identified tools along with their psychometric properties, including validity, reliability and relevance to the five domains of EBM practice and dimensions of EBM learning. We also assessed the quality of the tools, identifying high-quality tools as those supported by established interrater reliability (if applicable), objective (non-self-reported) outcome measures, and ≥ 3 types of established validity evidence. We have reported our study in accordance with the PRISMA guidelines. Results: We identified 1719 potentially relevant articles, of which 63 full-text articles were assessed for eligibility against inclusion and exclusion criteria. Twelve articles, each with a unique and newly identified tool, were included in the final analysis. All twelve tools assessed the third step of EBM practice (appraise), and four assessed only that step. None of the twelve tools assessed the last step of EBM practice (assess). Of the seven domains of EBM learning, ten tools assessed knowledge gain, nine assessed skills, and one assessed attitude.
    None addressed reaction to EBM teaching, self-efficacy, behaviours or patient benefit. Of the twelve tools identified, six were high quality. We have also provided a taxonomy of tools, using the CREATE framework, for EBM teachers in medical education. Conclusions: Six tools of reasonable validity are available for evaluating most steps of EBM and some domains of EBM learning. Further development and validation of tools that evaluate all the steps of EBM and all educational outcome domains are needed.

    Schwartz rounds in undergraduate medical education facilitates active reflection and individual identification of learning need

    Strategies applying Schwartz Rounds to improve the wellbeing of medical students have focused on the clinical years of study. This pilot study investigates whether Schwartz Rounds could be effective in developing reflective practice in Year 2 undergraduates. Engagement with the Schwartz Round was high, with over 50% of the students identifying learning needs through reflection on the Round. Schwartz Rounds promoted recognition of the value of reflective practice and increased self-awareness of student needs.

    Do referral-management schemes reduce hospital outpatient attendances? Time-series evaluation of primary care referral management

    BACKGROUND: Ninety-one per cent of primary care trusts were using some form of referral management in 2009, although evidence for its effectiveness is limited. AIM: To assess the impact of three referral-management centres (RMCs) and two internal peer-review approaches to referral management on hospital outpatient attendance rates. DESIGN AND SETTING: A retrospective time-series analysis of 376 000 outpatient attendances over 3 years from 85 practices divided into five groups, with 714 000 registered patients, in one English primary care trust. METHOD: The age-standardised, GP-referred, first outpatient monthly attendance rate was calculated for each group from April 2009 to March 2012. This was divided by the equivalent monthly England rate to derive a rate ratio. Linear regression tested for association between the introduction of referral management and change in the outpatient attendance rate and rate ratio. Annual group budgets for referral management were obtained. RESULTS: Referral management was not associated with a reduction in the outpatient attendance rate in any group. There was a statistically significant increase in attendance rate in one group (an RMC), which had an increase of 1.05 attendances per 1000 persons per month (95% confidence interval = 0.46 to 1.64; attendance rate ratio increase of 0.07) after adjustment for autocorrelation. Mean annual budgets ranged from £0.55 to £6.23 per registered patient in 2011/2012. RMCs were more expensive (mean annual budget £5.18 per registered patient) than internal peer-review approaches (mean annual budget £0.97 per registered patient). CONCLUSION: Referral-management schemes did not reduce outpatient attendance rates. RMCs were more expensive than internal peer review.
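The core of the method above, deriving a monthly rate ratio against the England rate and regressing it on a step term marking the introduction of referral management, can be sketched as follows. All numbers and the intervention month are hypothetical, and the published analysis additionally adjusted for autocorrelation, which this sketch omits:

```python
import numpy as np

# Hypothetical monthly series: local GP-referred first attendance rate
# per 1000 registered patients, and the equivalent England rate.
months = np.arange(36)                                         # Apr 2009 .. Mar 2012
local_rate = 20.0 + 0.02 * months + np.where(months >= 18, 1.05, 0.0)
england_rate = np.full(36, 21.0)
rate_ratio = local_rate / england_rate

# Interrupted time-series regression: intercept, underlying time trend,
# and a step at the (assumed) month referral management was introduced.
step = (months >= 18).astype(float)
X = np.column_stack([np.ones_like(months, dtype=float), months, step])
beta, *_ = np.linalg.lstsq(X, rate_ratio, rcond=None)
print(f"estimated step change in rate ratio: {beta[2]:.3f}")  # recovers 1.05/21 = 0.05
```

Dividing by the national rate before regressing removes seasonal and secular variation common to local and national series, so the step coefficient isolates the change coinciding with the scheme's introduction.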