
    Identifying educator behaviours for high quality verbal feedback in health professions education: literature review and expert refinement

    Background: Health professions education is characterised by work-based learning and relies on effective verbal feedback. However, the literature reports problems in feedback practice, including a lack of learner engagement and of explicit strategies for improving performance. It is not clear what constitutes high-quality, learner-centred feedback or how educators can promote it. We aimed to enhance feedback in clinical practice by distinguishing the elements of an educator’s role in feedback considered to influence learner outcomes, and then developing descriptions of observable educator behaviours that exemplify them. Methods: An extensive literature review was conducted to identify (i) information substantiating specific components of an educator’s role in feedback asserted to have an important influence on learner outcomes and (ii) verbal feedback instruments in health professions education that may describe important educator activities in effective feedback. This information was used to construct a list of elements thought to be important in effective feedback. Based on these elements, descriptions of observable educator behaviours that represent effective feedback were developed and refined during three rounds of a Delphi process and a face-to-face meeting with experts across the health professions and education. Results: The review identified more than 170 relevant articles (spanning the health professions, education, psychology and business literature) and ten verbal feedback instruments in health professions education (plus modified versions). Eighteen distinct elements of an educator’s role in effective feedback were delineated. Twenty-five descriptions of educator behaviours that align with the elements were ratified by the expert panel. Conclusions: This research clarifies the distinct elements of an educator’s role in feedback considered to enhance learner outcomes. The corresponding set of observable educator behaviours aims to describe how an educator could engage, motivate and enable a learner to improve. This creates the foundation for developing a method to systematically evaluate the impact of verbal feedback on learner performance.

    Quality of written narrative feedback and reflection in a modified mini-clinical evaluation exercise: an observational study

    BACKGROUND: Research has shown that narrative feedback, (self-)reflections, and a plan to undertake and evaluate improvements are key factors for effective feedback on clinical performance. We investigated the quantity of narrative comments comprising feedback (by trainers), self-reflections (by trainees) and action plans (by trainer and trainee) entered on a mini-CEX form that was modified for use in general practice training and to encourage trainers and trainees to provide narrative comments. In view of the importance of specificity as an indicator of feedback quality, we additionally examined the specificity of the comments. METHODS: We collected and analysed modified mini-CEX forms completed by GP trainers and trainees. Since each trainee has the same trainer for the duration of one year, we used trainer-trainee pairs as the unit of analysis. For all forms, we determined the frequency of the different types of narrative comments and rated their specificity on a three-point scale: specific, moderately specific, or not specific. Specificity was compared between trainer-trainee pairs. RESULTS: We collected 485 completed modified mini-CEX forms from 54 trainees (mean 8.8 forms per trainee; range 1-23; SD 5.6). Trainer feedback was provided more frequently than trainee self-reflections, and action plans were very rare. The comments were generally specific, but showed large differences between trainer-trainee pairs. CONCLUSION: The frequency of self-reflections and action plans varied, comments were generally specific, and there were substantial and consistent differences between trainer-trainee pairs in the specificity of comments. We therefore conclude that feedback quality is determined not so much by the instrument as by its users. Interventions to improve the educational effects of the feedback procedure should therefore focus more on the users than on the instruments.
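
    A minimal sketch of the kind of analysis described above: counting comment types per form and comparing specificity between trainer-trainee pairs. All column names and data here are hypothetical, not from the paper; the sketch assumes pandas and one row per narrative comment, with specificity coded 0 (not specific) to 2 (specific).

    import pandas as pd

    # Hypothetical coding of modified mini-CEX forms: one row per narrative
    # comment, with the trainer-trainee pair as the unit of analysis.
    forms = pd.DataFrame({
        "pair": ["A", "A", "A", "B", "B", "C"],
        "comment_type": ["feedback", "self_reflection", "feedback",
                         "feedback", "action_plan", "feedback"],
        # Three-point specificity scale: 0 = not specific,
        # 1 = moderately specific, 2 = specific.
        "specificity": [2, 1, 2, 0, 1, 2],
    })

    # Frequency of each type of narrative comment.
    print(forms["comment_type"].value_counts())

    # Mean specificity per trainer-trainee pair, exposing between-pair differences.
    print(forms.groupby("pair")["specificity"].mean())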

    A laboratory study on the reliability estimations of the mini-CEX

    Reliability estimations of workplace-based assessments with the mini-CEX are typically based on real-life data. Such estimations rest on the assumption of local independence: the object of measurement should not be influenced by the measurement itself, and samples should be completely independent. This is difficult to achieve in practice. Furthermore, the variance caused by the case/patient and by the assessor is completely confounded, so we have no idea how much each of these factors contributes to the noise in the measurement. The aim of this study was to use a controlled setup that overcomes these difficulties and to estimate the reproducibility of the mini-CEX. Three encounters were videotaped for each of 21 residents; the patients were the same for all residents. Each encounter was assessed by 3 assessors, who assessed all encounters of all residents. This yielded a fully crossed (all random) two-facet generalizability design. Just over a quarter of the total variance (28%) was universe score variance. The largest source of variance was the general error term (34%), followed by the main effect of assessors (18%). Generalizability coefficients indicated that a sample of approximately 9 encounters was needed assuming a single, different assessor per encounter and different cases per encounter (the usual situation in real practice), 4 encounters when 2 raters were used, and 3 encounters when 3 raters were used. Unexplained general error and the leniency/stringency of assessors are the major causes of unreliability in the mini-CEX. To optimize reliability, rater training might have an effect.
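
    As a rough illustration of the decision-study logic behind these projections, the sketch below computes a generalizability coefficient from variance components and solves for the number of observations needed to reach a reliability threshold. The 28% universe score figure comes from the abstract; lumping all remaining variance into a single noise term, and the 0.80 threshold, are simplifying assumptions of this sketch, so its output only approximates the sample sizes reported above.

    import math

    # Illustrative variance components (proportions of total variance).
    # 28% universe score variance is quoted in the abstract; treating ALL
    # other variance (assessor effects, error terms) as one noise term
    # that averages out over observations is an assumption of this sketch.
    VAR_UNIVERSE = 0.28
    VAR_NOISE = 1.0 - VAR_UNIVERSE

    def g_coefficient(n_obs):
        # Projected coefficient when noise averages over n_obs independent
        # encounter/assessor observations (Spearman-Brown-style logic).
        return VAR_UNIVERSE / (VAR_UNIVERSE + VAR_NOISE / n_obs)

    def observations_needed(threshold=0.80):
        # Solve VAR_UNIVERSE / (VAR_UNIVERSE + VAR_NOISE / n) >= threshold.
        ratio = threshold / (1.0 - threshold)
        return math.ceil(ratio * VAR_NOISE / VAR_UNIVERSE)

    for n in (3, 6, 9, 12):
        print(f"{n:2d} observations -> Ep2 = {g_coefficient(n):.2f}")
    print("observations needed for 0.80:", observations_needed())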

    Programmatic assessment of competency-based workplace learning: when theory meets practice

    BACKGROUND: In competency-based medical education, emphasis has shifted towards outcomes, capabilities and learner-centredness. Together with a focus on sustained evidence of professional competence, this calls for new methods of teaching and assessment. Recently, medical educators have advocated a holistic, programmatic approach to assessment. Besides maximally facilitating learning, it should improve the validity and reliability of measurements and the documentation of competence development. We explored how, in a competency-based curriculum, current theories on programmatic assessment interacted with educational practice. METHODS: In a development study including evaluation, we investigated the implementation of a theory-based programme of assessment. Between April 2011 and May 2012, quantitative evaluation data were collected and used to guide group interviews that explored the experiences of students and clinical supervisors with the assessment programme. We coded the transcripts, and emerging topics were organised into a list of lessons learned. RESULTS: The programme mainly focuses on the integration of learning and assessment by motivating and supporting students to seek and accumulate feedback. The assessment instruments were aligned to cover predefined competencies, to enable aggregation of information in a structured and meaningful way. Assessments that were designed as formative learning experiences were increasingly perceived as summative by students. Peer feedback was experienced as a valuable method of formative feedback. Social interaction and external guidance seemed to be of crucial importance for scaffolding self-directed learning. Aggregating data from individual assessments into a holistic portfolio judgement required expertise and extensive training and supervision of judges. CONCLUSIONS: A programme of assessment in which low-stakes assessments simultaneously provide formative feedback and input for summative decisions proved difficult to implement. Careful preparation and guidance of the implementation process was crucial. Assessment for learning requires meaningful feedback with each assessment, so special attention should be paid to the quality of feedback at individual assessment moments. Comprehensive attention to faculty development and training for students is essential for the successful implementation of an assessment programme.