    Learning while evaluating: the use of an electronic evaluation portfolio in a geriatric medicine clerkship

    BACKGROUND: Electronic evaluation portfolios may play a role in learning and evaluation in clinical settings and may complement other traditional evaluation methods (bedside evaluations, written exams and tutor-led evaluations). METHODS: 133 third-year medical students used the McGill Electronic Evaluation Portfolio (MEEP) during their one-month clerkship rotation in Geriatric Medicine between September 2002 and September 2003. Students were divided into two groups: one that received an introductory hands-on session about the electronic evaluation portfolio and one that did not. Students' portfolio marks were compared between the two groups. Additionally, students self-evaluated their performance and received feedback through the electronic portfolio during their mandatory clerkship rotation. Students were surveyed immediately after the rotation and at the end of the clerkship year; tutors' opinions about the method were surveyed once. Finally, the number of evaluations per month was quantified. All surveys used Likert scales, and responses were analyzed with Chi-square tests and t-tests to assess significant differences between surveyed groups. RESULTS: The introductory session had a significant effect on students' portfolio marks as well as on their comfort using the system. Both tutors and students reported positive perceptions of the method. Notably, an average (± SD) of 520 (± 70) evaluations per month was recorded, with 30 (± 5) evaluations per student per month. CONCLUSION: The MEEP had a significant, positive effect on both students' self-evaluations and tutors' evaluations, involving a substantial amount of self-reflection and feedback that may complement more traditional evaluation methods.
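    The group comparison described in this abstract (portfolio marks compared with a t-test, Likert-scale survey responses compared with a Chi-square test) can be sketched roughly as below. This is a minimal illustration only: the group sizes, marks and response counts are invented placeholders, not data from the study.

```python
# Hypothetical sketch of the analyses described above: an independent t-test
# on portfolio marks and a chi-square test on Likert responses for the two
# student groups (with vs. without the introductory session).
# All numbers below are invented for demonstration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Portfolio marks (percent) for each group -- made-up values.
marks_with_intro = rng.normal(loc=78, scale=6, size=65)
marks_without_intro = rng.normal(loc=73, scale=6, size=68)
t_stat, t_p = stats.ttest_ind(marks_with_intro, marks_without_intro)
print(f"t-test on marks: t = {t_stat:.2f}, p = {t_p:.4f}")

# Likert responses (1-5) to a hypothetical comfort item, tabulated as counts
# per response category for each group.
likert_counts = np.array([
    [2, 4, 10, 28, 21],   # group with introductory session
    [6, 12, 20, 22, 8],   # group without introductory session
])
chi2, chi_p, dof, _ = stats.chi2_contingency(likert_counts)
print(f"chi-square on Likert responses: chi2 = {chi2:.2f}, dof = {dof}, p = {chi_p:.4f}")
```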

    Student and tutor perceptions on attributes of effective problems in problem-based learning

    This study aimed to identify the attributes that students and tutors associated with effective PBL problems, and to assess the extent to which these attributes related to the actual effectiveness of problems. To this end, students and tutors in focus groups were asked to discuss possible attributes of effective problems. The same participants were then asked to individually and independently judge eight sample problems they had worked with. Text analysis of the focus group discussion transcripts identified eleven problem attributes. Participants' judgments of the sample problems were then frequency-scored on the eleven problem attributes. Correlating the participants' judgments with the entire student cohort's grades yielded high and significant correlations, suggesting that the eleven problem attributes reflect aspects of problem effectiveness.
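    The scoring-and-correlation step described here (frequency-scoring each sample problem on the eleven attributes, then correlating those scores with the cohort's grades) can be sketched as follows. The attribute names, frequency scores and grades are hypothetical placeholders, not the study's data, and Pearson correlation is assumed for illustration.

```python
# Hypothetical sketch: correlate per-problem attribute frequency scores with
# the cohort's mean grade on each of the eight sample problems.
import numpy as np
from scipy.stats import pearsonr

# Frequency scores (fraction of participants endorsing the attribute) for two
# illustrative attributes, one value per sample problem -- invented values.
clear_learning_goals = np.array([0.90, 0.40, 0.80, 0.30, 0.70, 0.60, 0.85, 0.50])
stimulates_discussion = np.array([0.80, 0.50, 0.75, 0.35, 0.65, 0.55, 0.90, 0.45])

# Mean cohort grade on each problem, used here as a proxy for effectiveness.
cohort_grades = np.array([82, 68, 79, 64, 75, 72, 84, 70])

for name, scores in [("clear_learning_goals", clear_learning_goals),
                     ("stimulates_discussion", stimulates_discussion)]:
    r, p = pearsonr(scores, cohort_grades)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```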

    What can we learn from facilitator and student perceptions of facilitation skills and roles in the first year of a problem-based learning curriculum?

    BACKGROUND: The small group tutorial is a cornerstone of problem-based learning. By implication, the role of the facilitator is of pivotal importance. The present investigation canvassed perceptions of facilitators with differing levels of experience regarding their roles and duties in the tutorial. METHODS: In January 2002, one year after problem-based learning was implemented at the Nelson R. Mandela School of Medicine, facilitators with the following levels of experience were canvassed: trained and about to facilitate, facilitated once only, and facilitated more than one six-week theme. Student comments regarding facilitator skills were obtained from a 2001 course survey. RESULTS: While facilitators generally agreed that the three-day training workshop provided sufficient insight into the facilitation process, they became more comfortable with increasing experience. Many facilitators found it difficult to refrain from providing content expertise; this too improved with increasing experience. Most facilitators saw students as colleagues. They agreed that they should be role models, but were less enthusiastic about being mentors. Students were critical of facilitators who were not up to date with curriculum implementation or who appeared uninterested. While facilitator responses suggest considerable intrinsic motivation, this might in fact not be the case. CONCLUSIONS: Even if they had facilitated all six themes, facilitators could still be considered novices. Faculty support is therefore critical for the first few years of problem-based learning, particularly for those who had facilitated once only. Since student and facilitator expectations in the small group tutorial may differ, the roles and duties of facilitators must be made explicit to both parties from the outset.

    Changing the culture of assessment: the dominance of the summative assessment paradigm

    Background: Despite growing evidence of the benefits of including assessment-for-learning strategies within programmes of assessment, practical implementation of these approaches is often problematic. Organisational culture change is often hindered by personal and collective beliefs that encourage adherence to the existing organisational paradigm. We aimed to explore how these beliefs influenced proposals to redesign a summative assessment culture in order to improve students' use of assessment-related feedback. Methods: Using the principles of participatory design, a mixed group comprising medical students, clinical teachers and senior faculty members was challenged to develop radical solutions to improve the use of post-assessment feedback. Follow-up interviews were conducted with individual members of the group to explore their personal beliefs about the proposed redesign. Data were analysed through a socio-cultural lens. Results: Proposed changes were dominated by a shared belief in the primacy of the summative assessment paradigm, which prevented radical redesign solutions from being accepted by group members. Participants' prior assessment experiences strongly influenced their proposals for change. Because participants had largely only experienced a summative assessment culture, they found it difficult to conceptualise radical change in that culture. Although all group members participated, students were less successful at persuading the group to adopt their ideas; faculty members and clinical teachers often used indirect techniques to close down discussions. The strength of individual beliefs became more apparent in the follow-up interviews. Conclusions: Naïve epistemologies and prior personal experiences were influential in the assessment redesign but were usually not expressed explicitly in a group setting, perhaps because of cultural conventions of politeness. To successfully implement a change in assessment culture, firmly held intuitive beliefs about summative assessment must first be clearly understood.