
    The role of deliberate practice in the acquisition of clinical skills

    BACKGROUND: The role of deliberate practice in medical students' development from novice to expert was examined for preclinical skill training. METHODS: Students in years 1-3 completed 34 Likert-type items, adapted from a questionnaire about the use of deliberate practice in cognitive learning. Exploratory factor analysis and reliability analysis were used to validate the questionnaire. Analysis of variance examined differences between years, and regression analysis the relationship between deliberate practice and skill test results. RESULTS: 875 students participated (90%). Factor analysis yielded four factors: planning, concentration/dedication, repetition/revision, and study style/self-reflection. Students' scores on 'planning' increased over time, while scores on the subscale 'repetition/revision' decreased. Students' results on the clinical skill test correlated positively with scores on the subscales 'planning' and 'concentration/dedication' in years 1 and 3, and with scores on the subscale 'repetition/revision' in year 1. CONCLUSIONS: The positive effects on test results suggest that the role of deliberate practice in medical education merits further study. The cross-sectional design is a limitation of the study; the large representative sample is a strength. The vanishing effect of repetition/revision may be attributable to inadequate feedback. Deliberate practice advocates sustained practice to address weaknesses identified by (self-)assessment and stimulated by feedback. Further studies should use a longitudinal prospective design and extend the scope to expertise development during residency and beyond.
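
    The statistical pipeline this abstract describes (exploratory factor analysis of Likert-type items, reliability analysis, analysis of variance across study years, and regression of test results on subscale scores) can be illustrated in code. The following is a minimal sketch, not the authors' analysis: the data are synthetic, the column names are invented, and it assumes the factor_analyzer, scipy, and statsmodels Python packages.

    import numpy as np
    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from scipy import stats
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 875  # sample size reported in the abstract

    # Synthetic stand-in for 34 Likert-type items (scored 1-5), study year,
    # and a clinical skill test score.
    items = pd.DataFrame(rng.integers(1, 6, size=(n, 34)),
                         columns=[f"item{i + 1}" for i in range(34)])
    df = items.assign(year=rng.integers(1, 4, size=n),
                      skill_test=rng.normal(70, 10, size=n))

    # Exploratory factor analysis extracting the four reported factors.
    fa = FactorAnalyzer(n_factors=4, rotation="varimax")
    fa.fit(items)
    scores = pd.DataFrame(fa.transform(items),
                          columns=["planning", "concentration",
                                   "repetition", "study_style"])
    df = pd.concat([df, scores], axis=1)

    # Cronbach's alpha as a simple reliability check for one (hypothetical) subscale.
    def cronbach_alpha(sub: pd.DataFrame) -> float:
        k = sub.shape[1]
        item_var = sub.var(axis=0, ddof=1).sum()
        total_var = sub.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    print("alpha, items 1-8:", cronbach_alpha(items.iloc[:, :8]))

    # One-way ANOVA: do 'planning' scores differ between years 1-3?
    print(stats.f_oneway(*(g["planning"] for _, g in df.groupby("year"))))

    # Regression of skill-test results on the four subscale scores.
    model = smf.ols("skill_test ~ planning + concentration + repetition + study_style",
                    data=df).fit()
    print(model.summary())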

    Ottawa 2020 consensus statement for programmatic assessment-1. Agreement on the principles

    INTRODUCTION: In the Ottawa 2018 consensus framework for good assessment, a set of criteria was presented for systems of assessment. Programmatic assessment is currently being established in an increasing number of programmes. In this Ottawa 2020 consensus statement for programmatic assessment, insights from practice and research are used to define the principles of programmatic assessment. METHODS: For fifteen programmes in health professions education affiliated with members of an expert group (n = 20), an inventory was completed of the perceived components, rationale, and importance of a programmatic assessment design. Input from attendees of a programmatic assessment workshop and symposium at the 2020 Ottawa conference was included. The outcome is discussed in relation to current theory and research. RESULTS AND DISCUSSION: Twelve principles are presented that are considered important and recognisable facets of programmatic assessment. Overall, these principles were used in curriculum and assessment design, albeit with a range of approaches and rigor, suggesting that programmatic assessment is an achievable education and assessment model, embedded in both practice and research. Sharing knowledge of how programmatic assessment is being operationalized may help support educators in charting their own implementation journey of programmatic assessment in their respective programmes.

    Assessment and feedback to facilitate self-directed learning in clinical practice of midwifery students

    BACKGROUND: Clinical workplaces are hectic and dynamic learning environments that require students to take charge of their own learning. Competency development during clinical internships is a continuous process that is facilitated and guided by feedback. Limited feedback, lack of supervision and problematic assessment of clinical competencies make it necessary to develop learning instruments that support self-directed learning. AIMS: To explore students' perceptions of a newly introduced integrated feedback and assessment instrument to support self-directed learning in clinical practice. Students collected feedback from clinical supervisors and recorded it in a competency-based format. This feedback was used for self-assessment, which had to be completed before the final assessment. METHODS: Four focus group discussions were conducted with second-year and final-year Midwifery students. The focus groups were audiotaped, transcribed verbatim and analysed thematically using ATLAS.ti for qualitative data analysis. RESULTS: The analysis of the transcripts suggested that integrating feedback and assessment supports participation and active involvement in learning through collecting, writing, asking for, reading and rereading feedback. Given training and dedicated time, these learning activities stimulate reflection and facilitate the development of strategies for improvement. The integration supports self-assessment and formative assessment, but its value for summative assessment is contested. The quality of feedback and empowerment by motivated supervisors are essential to maximise the learning effects. CONCLUSIONS: The integrated Midwifery Assessment and Feedback Instrument is a valuable tool for supporting formative learning and assessment in clinical practice, but its effect on students' self-directed learning depends on the feedback and support from supervisors.

    Quality assurance of assessment during major disruptions

    In this book we have presented discussions of the latest thinking on a range of issues relevant to the quality assurance of assessment in health professional education. The timing was interesting, because the book was commissioned just before the COVID-19 pandemic, and most chapters were written during its first wave. The chapters cover the rationale for assessment QA, the roles and responsibilities of QA assessors, assessment of knowledge and clinical competence, workplace-based assessment, programmatic assessment, the use of technology, and standardisation. The publisher challenged us with the question: how meaningful are these topics in an era of major disruption to ‘normal business’? This final chapter therefore draws together elements of these discussions in the context of the changes forced by the major disruption experienced by all. Are there lessons to be learned from these experiences? We address the question: to what degree will the disruption lead to longer-term changes to assessment practices and how they are quality assured? We begin with the recent pandemic as a case study and then present some comments on the international impact and responses.

    Programmatic assessment of competency-based workplace learning: when theory meets practice

    BACKGROUND: In competency-based medical education, emphasis has shifted towards outcomes, capabilities, and learner-centeredness. Together with a focus on sustained evidence of professional competence, this calls for new methods of teaching and assessment. Recently, medical educators have advocated the use of a holistic, programmatic approach to assessment. Besides maximally facilitating learning, it should improve the validity and reliability of measurements and the documentation of competence development. We explored how, in a competency-based curriculum, current theories on programmatic assessment interacted with educational practice. METHODS: In a development study including evaluation, we investigated the implementation of a theory-based programme of assessment. Between April 2011 and May 2012, quantitative evaluation data were collected and used to guide group interviews that explored the experiences of students and clinical supervisors with the assessment programme. We coded the transcripts, and emerging topics were organised into a list of lessons learned. RESULTS: The programme mainly focuses on the integration of learning and assessment by motivating and supporting students to seek and accumulate feedback. The assessment instruments were aligned to cover predefined competencies to enable aggregation of information in a structured and meaningful way. Assessments that were designed as formative learning experiences were increasingly perceived as summative by students. Peer feedback was experienced as a valuable method of formative feedback. Social interaction and external guidance seemed to be of crucial importance in scaffolding self-directed learning. Aggregating data from individual assessments into a holistic portfolio judgement required expertise and extensive training and supervision of judges. CONCLUSIONS: A programme of assessment in which low-stakes assessments simultaneously provide formative feedback and input for summative decisions proved not easy to implement. Careful preparation and guidance of the implementation process were crucial. Assessment for learning requires meaningful feedback with each assessment, and special attention should be paid to the quality of feedback at individual assessment moments. Comprehensive attention to faculty development and training for students is essential for the successful implementation of an assessment programme.

    Expert validation of fit-for-purpose guidelines for designing programmes of assessment

    BACKGROUND: An assessment programme, a purposeful mix of assessment activities, is necessary to achieve a complete picture of assessee competence. High-quality assessment programmes exist; however, the design requirements for such programmes are still unclear. We developed design guidelines based on an earlier developed framework that identified the areas to be covered. A fitness-for-purpose approach to defining quality was adopted to develop and validate the guidelines. METHODS: First, ideas were generated in a brainstorm, followed by structured interviews with nine international assessment experts. The guidelines were then fine-tuned through analysis of the interviews. Finally, validation was based on expert consensus via member checking. RESULTS: In total, 72 guidelines were developed, and the most salient of them are discussed in this paper. The guidelines are related and grouped per layer of the framework. Some guidelines were so generic that they apply to any design consideration: the principle of proportionality, the requirement that rationales underpin each decision, and the requirement of expertise. Logically, many guidelines focus on practical aspects of assessment. Some guidelines were found to be clear and concrete; others were less straightforward and were phrased more as issues for contemplation. CONCLUSIONS: The set of guidelines is comprehensive and not bound to a specific context or educational approach. Following the fitness-for-purpose principle, the guidelines are eclectic, requiring expert judgement to apply them appropriately in different contexts. Further validation studies to test their practicality are required.