    Using script theory to cultivate illness script formation and clinical reasoning in health professions education

    Background: Script theory proposes an explanation for how information is stored in and retrieved from the human mind to influence individuals’ interpretation of events in the world. Applied to medicine, script theory focuses on knowledge organization as the foundation of clinical reasoning during patient encounters. According to script theory, medical knowledge is bundled into networks called ‘illness scripts’ that allow physicians to integrate new incoming information with existing knowledge, recognize patterns and irregularities in symptom complexes, identify similarities and differences between disease states, and make predictions about how diseases are likely to unfold. These knowledge networks become updated and refined through experience and learning. The implications of script theory for medical education are profound. Since clinician-teachers cannot simply transfer their customized collections of illness scripts into the minds of learners, they must create opportunities to help learners develop and fine-tune their own sets of scripts. In this essay, we provide a basic sketch of script theory, outline the role that illness scripts play in guiding reasoning during clinical encounters, and propose strategies for aligning teaching practices in the classroom and the clinical setting with the basic principles of script theory.
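
    As a deliberately simplified illustration of the knowledge-organisation idea described above, an illness script can be pictured as a small data structure linking a disease to predisposing conditions, typical findings, and an expected course, against which a patient’s findings are matched. The Python sketch below is a hypothetical toy model; the field names and the overlap-based ranking are assumptions made for illustration and are not proposed by the paper.

        # Toy model of an illness script as a data structure (illustrative only).
        from dataclasses import dataclass, field

        @dataclass
        class IllnessScript:
            disease: str
            enabling_conditions: set = field(default_factory=set)  # predisposing factors
            typical_findings: set = field(default_factory=set)     # expected signs and symptoms
            expected_course: str = ""                               # how the disease tends to unfold

        def rank_scripts(patient_findings: set, scripts: list) -> list:
            """Order scripts by how many of the patient's findings they account for."""
            return sorted(scripts,
                          key=lambda s: len(patient_findings & s.typical_findings),
                          reverse=True)

        # Fabricated example: two scripts, one set of presenting findings
        pneumonia = IllnessScript("pneumonia", {"smoking"},
                                  {"fever", "productive cough", "crackles"}, "acute, over days")
        embolism = IllnessScript("pulmonary embolism", {"recent immobilisation"},
                                 {"sudden dyspnoea", "pleuritic chest pain"}, "abrupt onset")
        print([s.disease for s in rank_scripts({"fever", "productive cough"}, [pneumonia, embolism])])
        # ['pneumonia', 'pulmonary embolism']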

    Self-organising in primary care in the first wave of Covid-19: a qualitative study conducted in Wallonia, Belgium

    Background: The first wave of the Covid-19 pandemic demanded rapid adaptation in the delivery of routine and Covid-19-related primary care. This study aims to describe the adaptations implemented at the local level in primary healthcare centres in Wallonia, Belgium in response to the first wave of the evolving crisis. Questions/Methods: Qualitative data were collected in the form of weekly semi-structured interviews with general practitioners, nurses and receptionists. Interviews focussed on the evolving changes taking place in their practices between April and June 2020. The participants worked at three community health centres in the province of Liège, Belgium. Data were analysed using thematic content analysis through the lens of a complex adaptive system model, focussing on the level of self-organisation. Results: Nine participants took part in the study and 90 interviews were conducted. Adaptations described by the participants included the transition from in-person to telephone consultations, managing stocks of protective clothing, setting up testing procedures, following up chronic patients, and collaborating with pharmacists, specialists and local government actors. Discussion: The results describe the way primary care in Wallonia, Belgium had to adapt to a fast-moving crisis at a time when little information and few directives were available. A complex adaptive system lens allows a deep analysis of how self-organising took place at the local level in a highly decentralised system, and contributes to our understanding of health system resilience and pandemic management, both as a whole and at the local level in Wallonia, Belgium.

    Multifaceted Assessment in a Family Medicine Clerkship: A Pilot Study

    Background and Objectives: Programs of assessment should reflect the multifaceted nature of medical competence. We experimented with new testing methods, i.e., script concordance testing (SCT) and clinical reasoning problems (CRPs), combined with the usual OSCE at the end of a family medicine clerkship. Our aims were to compare students’ scores with experts’ scores, to determine whether the new tests detected learning over a 3-month period, and to examine whether the tests were redundant. Methods: We conducted a longitudinal study of one cohort of family medicine clerks. Two formative testing sessions using both SCT and CRPs were held 3 months apart. Students’ scores were compared with those of the panel of experts used to score the tests. We examined the difference in students’ scores between the two testing sessions. Finally, we computed correlation coefficients between these scores and the summative OSCE. Results: Panelists’ scores were significantly higher than students’ scores. SCT scores did not change significantly over 3 months, whereas CRP scores improved (Wilcoxon z = -3.058, effect size 0.461, P=.002). Correlations between the OSCE and the written tests were low or non-significant. There were low correlations between the first CRP and both SCTs (Spearman’s rho 0.357 and 0.358) but not between the second CRP and any SCT. Conclusions: Written tests of clinical reasoning could provide relevant additional information to the evaluation of students’ competence over the course of a family medicine clerkship. Further research is needed to determine the potential educational consequences of such programs of assessment.
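
    For readers less familiar with the statistics reported above, the sketch below shows how paired scores from two testing sessions can be compared with a Wilcoxon signed-rank test and how written-test scores can be correlated with OSCE scores using Spearman’s rho. The scores and cohort size are invented for illustration; this is not the study’s analysis code.

        # Illustrative re-creation of the reported analyses with invented data.
        import numpy as np
        from scipy.stats import wilcoxon, spearmanr

        rng = np.random.default_rng(0)
        n_students = 40                                # hypothetical cohort size
        crp_1 = rng.normal(60, 10, n_students)         # CRP scores, first session (%)
        crp_2 = crp_1 + rng.normal(5, 8, n_students)   # CRP scores, 3 months later (%)
        osce = rng.normal(70, 8, n_students)           # summative OSCE scores (%)

        # Paired, non-parametric comparison of the two sessions
        w_stat, w_p = wilcoxon(crp_1, crp_2)
        print(f"Wilcoxon W = {w_stat:.1f}, p = {w_p:.3f}")

        # Monotonic association between a written test and the OSCE
        rho, rho_p = spearmanr(crp_2, osce)
        print(f"Spearman's rho = {rho:.2f}, p = {rho_p:.3f}")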

    If we assess, will they learn? Students’ perspectives on the complexities of assessment-for-learning

    Introduction: Assessment can positively influence learning; however, designing effective assessment-for-learning interventions has proved challenging. We implemented a mandatory assessment-for-learning system comprising a workplace-based assessment of non-medical expert competencies and a progress test in undergraduate medical education and evaluated its impact. Methods: We conducted semi-structured interviews with year-3 and year-4 medical students at McGill University to explore how the assessment system had influenced their learning in year 3. We conducted a theory-informed thematic analysis of the data. Results: Eleven students participated, revealing that the assessment influenced learning through several mechanisms. Some required little student engagement (i.e., feed-up, test-enhanced learning, looking things up after an exam). Others required substantial engagement (e.g., studying for tests, selecting raters for quality feedback, using feedback). Student engagement was moderated by the perceived credibility of the system and by the perceived costs and benefits of engagement. Credibility was shaped by students’ goals-in-context: becoming a good doctor, contributing to the healthcare team, and succeeding in assessments. Discussion: Our assessment system failed to engage students enough to leverage its full potential. We discuss the inherent flaws and external factors that hindered student engagement. Assessment designers should leverage easy-to-control mechanisms to support assessment-for-learning and anticipate significant collaborative work to modify learning cultures.

    Self-assessment in general practice trainees: insights into the profession and its postgraduate training (PhD reports)

    Introduction: General practice has produced new definitions of its tasks and implemented specialised postgraduate curricula. We explored the results of these endeavours by examining the perceptions of the next generation of general practitioners (GPs). In order to frame our questions, we developed a model of self-assessment as a multilayered construct, from the global notion of self-concept to self-perceived competence, self-monitoring, and metacognitive regulation [1]. Research questions addressed three levels of self-assessment: 1. Professional identity: How do GP trainees envision their future role? 2. Self-perceived competence (SPC): Do GP trainees feel prepared to practice autonomously? What are the determinants of SPC? 3. Metacognitive regulation: Do trainees possess adequate levels of usable (i.e. certain) knowledge? Are they able to assess their knowledge appropriately? Methods: Study 1 (professional identity and SPC framed as self-efficacy beliefs): We conducted 5 focus groups in Belgium and France with 28 participants. Transcripts were analysed by immersion-crystallisation. Study 2 (SPC and metacognitive regulation): We conducted a cross-sectional study of 127 postgraduate trainees in Belgium. Trainees assessed their competence in four clinical domains using ordinal scales and sat a written test on these domains. Confidence judgements were elicited for each test item. Results: Study 1: Trainees’ descriptions were consistent with the new definitions. Initially low self-efficacy beliefs usually increased through experiences of success, positive feedback, and sharing anxieties with peers. A few trainees developed avoidance strategies, continued to display low self-efficacy beliefs, and envisaged leaving the profession. Study 2: Depending on the domain, 20.5-76.4% of trainees described their competence as good or very good. The number of patient contacts and on-call duties was not linked to higher SPC, nor was the postgraduate year. Men were more likely to feel competent (OR 1.2-6.1). SPC was not correlated with test scores. The average proportion of knowledge that was usable (i.e. assigned a high degree of certainty) was 36.6%. The average proportion of ignorance that was hazardous (i.e. assigned a high degree of certainty) was 14.3%. Neither was linked to test performance. Discussion and conclusion: The new definitions are being embodied by future GPs. Most trainees feel confident enough to practice autonomously. There are, however, concerns about those who do not. Volume of practice does not seem influential. The impact of low SPC on future practice requires longitudinal studies. Self-assessment ability is low, even at the specific level of test-response confidence judgements. Teachers should be aware of the partial nature of much knowledge and of the significant number of misconceptions.
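
    The “usable knowledge” and “hazardous ignorance” proportions above combine answer correctness with confidence judgements. The sketch below shows one plausible way to compute such proportions; the confidence threshold, the exact definitions, and the data are assumptions for illustration rather than the method used in the thesis.

        # Illustrative scoring of confidence-weighted test responses (assumed definitions).
        from dataclasses import dataclass

        @dataclass
        class ItemResponse:
            correct: bool     # was the answer correct?
            confidence: int   # self-rated certainty on the item, e.g. 0-100

        HIGH_CONFIDENCE = 80  # assumed cut-off for a "high degree of certainty"

        def usable_knowledge(responses):
            """Share of correct answers that were given with high confidence."""
            correct = [r for r in responses if r.correct]
            return sum(r.confidence >= HIGH_CONFIDENCE for r in correct) / len(correct) if correct else 0.0

        def hazardous_ignorance(responses):
            """Share of incorrect answers that were nevertheless given with high confidence."""
            wrong = [r for r in responses if not r.correct]
            return sum(r.confidence >= HIGH_CONFIDENCE for r in wrong) / len(wrong) if wrong else 0.0

        # Fabricated example: four items answered by one trainee
        answers = [ItemResponse(True, 90), ItemResponse(True, 40),
                   ItemResponse(False, 85), ItemResponse(False, 20)]
        print(usable_knowledge(answers), hazardous_ignorance(answers))  # 0.5 0.5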

    What clinical supervisors think of their teaching role

    Individuals’ behaviour is linked to their beliefs about the consequences of that behaviour, their ability to carry it out, and the expectations of others. Studies have shown that university teachers’ instructional strategies are associated with their conceptions of teaching as, above all, a transmission of knowledge versus a facilitation of learning. Studies have also been conducted among clinical supervisors in the health sciences. Conceptions of teaching as a transmission phenomenon are common among them, and some studies have also shown the importance of their normative beliefs and their self-efficacy. These beliefs are important to consider in faculty development aimed at professionalising clinical supervisors in an increasingly complex role.

    Principles of a “good” assessment of students and practitioners in the health sciences

    Although their relative importance varies with the purpose of the assessment, the following quality criteria are the subject of a recent consensus. Validity refers to an assessment’s ability to measure what it is intended to measure: it involves choosing an instrument suited to the learning objectives and using as many items as possible that best represent the content being tested. Reproducibility is evaluated empirically using various psychometric models. Equivalence concerns the stability of scores when the same test is administered in different settings. Feasibility and acceptability are two further criteria to take into account. From an educational point of view, the quality of an assessment is judged, on the one hand, by its effect before testing and, on the other, by its retroactive, or catalytic, effect. Taking these last two criteria into account is what allows the leverage of assessment on learning to be optimised.
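
    As a minimal illustration of the empirical, psychometric evaluation of reproducibility mentioned above, the sketch below computes Cronbach’s alpha, one common internal-consistency index, on invented dichotomous item scores. The choice of index and the data are assumptions made for illustration; the article refers to psychometric models only in general terms.

        # Cronbach's alpha on simulated dichotomous test data (illustrative only).
        import numpy as np

        def cronbach_alpha(scores: np.ndarray) -> float:
            """scores: 2-D array, rows = examinees, columns = items."""
            k = scores.shape[1]
            item_variances = scores.var(axis=0, ddof=1).sum()
            total_variance = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_variances / total_variance)

        rng = np.random.default_rng(1)
        ability = rng.normal(0, 1, (200, 1))                        # simulated examinee ability
        items = (ability + rng.normal(0, 1, (200, 10)) > 0) * 1.0   # 10 dichotomous items
        print(round(cronbach_alpha(items), 2))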

    Medical education: a new column in your Louvain Médical
