1,646 research outputs found

    Assessing Professionalism: A theoretical framework for defining clinical rotation assessment criteria

    Although widely accepted as an important graduate competence, professionalism is a challenging outcome to define and assess. Clinical rotations provide an excellent opportunity to develop student professionalism through experiential learning and effective feedback, but without appropriate theoretical frameworks, clinical teachers may find it difficult to identify appropriate learning outcomes. The adage “I know it when I see it” is unhelpful for providing feedback and guidance for student improvement, and more specifically defined criteria would help students direct their own development. This study sought first to identify how clinical faculty in one institution currently assess professionalism, using retrospective analysis of material obtained in undergraduate teaching and faculty development sessions. Subsequently, a faculty workshop was held in which a round-table discussion sought to develop these ideas and identify how professionalism assessment could be improved. The output of this session was a theoretical framework for teaching and assessing professionalism, providing example assessment criteria and ideas for clinical teaching. It includes categories such as client and colleague interaction, respect and trust, recognition of limitations, and understanding of different professional identities. Each category includes detailed descriptions of the knowledge, skills, and behaviors expected of students in these areas. Because the criteria were determined by engaging faculty in the development of the framework, they represent a focused development of criteria already used to assess professionalism rather than a novel and unfamiliar set of assessment guidelines. The faculty-led nature of this framework is expected to facilitate its implementation in clinical teaching.

    Factors affecting the composition of dates

    If we assess, will they learn? Students’ perspectives on the complexities of assessment-for-learning

    Introduction: Assessment can positively influence learning; however, designing effective assessment-for-learning interventions has proved challenging. We implemented a mandatory assessment-for-learning system in undergraduate medical education, comprising a workplace-based assessment of non-medical expert competencies and a progress test, and evaluated its impact. Methods: We conducted semi-structured interviews with year-3 and year-4 medical students at McGill University to explore how the assessment system had influenced their learning in year 3. We conducted a theory-informed thematic analysis of the data. Results: Eleven students participated, revealing that the assessment influenced learning through several mechanisms. Some required little student engagement (i.e., feed-up, test-enhanced learning, looking things up after an exam). Others required substantial engagement (e.g., studying for tests, selecting raters for quality feedback, using feedback). Student engagement was moderated by the perceived credibility of the system and by the perceived costs and benefits of engagement. Credibility was shaped by students’ goals-in-context: becoming a good doctor, contributing to the healthcare team, and succeeding in assessments. Discussion: Our assessment system failed to engage students enough to realize its full potential. We discuss the inherent flaws and external factors that hindered student engagement. Assessment designers should leverage easy-to-control mechanisms to support assessment-for-learning and anticipate significant collaborative work to modify learning cultures.

    Students as teachers: the value of peer-led teaching

    Developing professional identity in undergraduate pharmacy students: a role for self-determination theory

    Professional identity development, seen as essential in the transition from student to professional, needs to be owned by universities in order to ensure a workforce appropriately prepared to provide global health care in the future. The development of professional identity involves a focus on who the student is becoming, as well as what they know or can do, and requires authentic learning experiences such as practice exposure and interaction with pharmacist role models. This article examines conceptual frameworks aligned with professional identity development and explores the role of self-determination theory (SDT) in pharmacy professional education. SDT explains the concepts of competence, relatedness, and autonomy and the part they play in producing highly motivated individuals, leading to the development of one’s sense of self. Providing support for students in these three critical areas may, in accordance with the tenets of SDT, increase motivation levels and students’ sense of professional identity.

    Effects of partial sleep deprivation on food consumption and food choice

    Sleep deprivation alters food consumption in animals; however, little is known about the effects of partial sleep deprivation on food consumption and food choice in humans. We examined 50 undergraduate students who recorded sleep quality, food consumption, and food choice in daily diaries for four days. On the second night of the study, participants were instructed to sleep for 4 h or less, which served as a partial sleep deprivation manipulation. Following sleep loss, participants reported consuming fewer calories. They also reported altering food choice following deprivation, choosing foods less for health and weight concerns. The results provide initial evidence that sleep deprivation affects food consumption and choice, which may have subsequent health implications.