108 research outputs found

    Expertise in performance assessment: assessors' perspectives

    The recent rise of interest among the medical education community in individual faculty making subjective judgments about medical trainee performance appears to be directly related to the introduction of notions of integrated competency-based education and assessment for learning. Although it is known that assessor expertise plays an important role in performance assessment, the roles played by different factors remain to be unraveled. We therefore conducted an exploratory study with the aim of building a preliminary model to gain a better understanding of assessor expertise. Using a grounded theory approach, we conducted seventeen semi-structured interviews with individual faculty members who differed in professional background and assessment experience. The interviews focused on participants' perceptions of how they arrived at judgments about student performance. The analysis resulted in three categories (assessor characteristics, assessors' perceptions of the assessment tasks, and the assessment context) and three themes that recurred across these categories: perceived challenges, coping strategies, and personal development. Central to understanding the key processes in performance assessment appear to be the dynamic interrelatedness of the different factors and the developmental nature of the processes. The results are supported by literature from the field of expertise development and are in line with findings from social cognition research. The conceptual framework has implications for faculty development and the design of programs of assessment.

    Even a little sleepiness influences neural activation and clinical reasoning in novices

    Funding: This study was funded by a grant from the Scottish Medical Education Research Consortium (SMERC). SMERC had no involvement in the study design; collection, analysis, and interpretation of data; writing of the report; or the decision to submit the report for publication. Acknowledgements: We thank the students who took part in this project, and the Institute of Education for Medical and Dental Sciences, University of Aberdeen, for supporting this project. We thank the American College of Physicians for the questions used in this study. We thank Professor Susan Jamieson, University of Glasgow, for her support at the stage of seeking funding for this work. Peer reviewed. Publisher PDF.

    A model of the pre-assessment learning effects of summative assessment in medical education

    It has become axiomatic that assessment impacts powerfully on student learning. However, surprisingly little research has been published emanating from authentic higher education settings about the nature and mechanism of the pre-assessment learning effects of summative assessment. Less still emanates from health sciences education settings. This study explored the pre-assessment learning effects of summative assessment in theoretical modules by exploring the variables at play in a multifaceted assessment system and the relationships between them. Using a grounded theory strategy, in-depth interviews were conducted with individual medical students and analyzed qualitatively. Respondents’ learning was influenced by task demands and system design. Assessment impacted on respondents’ cognitive processing activities and metacognitive regulation activities. Individually, our findings confirm findings from other studies in disparate non-medical settings and identify some new factors at play in this setting. Taken together, findings from this study provide, for the first time, some insight into how a whole assessment system influences student learning over time in a medical education setting. The findings from this authentic and complex setting paint a nuanced picture of how intricate and multifaceted interactions between various factors in an assessment system combine to influence student learning. A model linking the sources, mechanism and consequences of the pre-assessment learning effects of summative assessment is proposed that could help enhance the use of summative assessment as a tool to augment learning.

    Modelling the pre-assessment learning effects of assessment : evidence in the validity chain

    Publication of this article was funded by the Stellenbosch University Open Access Fund. The original publication is available at http://onlinelibrary.wiley.com/journal/10.1111/%28ISSN%291365-2923/. OBJECTIVES We previously developed a model of the pre-assessment learning effects of consequential assessment and started to validate it. The model comprises assessment factors, mechanism factors and learning effects. The purpose of this study was to continue the validation process. For stringency, we focused on a subset of assessment factor–learning effect associations that featured least commonly in a baseline qualitative study. Our aims were to determine whether these uncommon associations were operational in a broader but similar population to that in which the model was initially derived. METHODS A cross-sectional survey of 361 senior medical students at one medical school was undertaken using a purpose-made questionnaire based on a grounded theory and comprising pairs of written situational tests. In each pair, the manifestation of an assessment factor was varied. The frequencies at which learning effects were selected were compared for each item pair, using an adjusted alpha to assign significance. The frequencies at which mechanism factors were selected were calculated. RESULTS There were significant differences in the learning effect selected between the two scenarios of an item pair for 13 of this subset of 21 uncommon associations, even when a p-value of < 0.00625 was considered to indicate significance. Three mechanism factors were operational in most scenarios: agency, response efficacy, and response value. CONCLUSIONS For a subset of uncommon associations in the model, the roles of most assessment factor–learning effect associations and the mechanism factors involved were supported in a broader but similar population to that in which the model was derived. Although model validation is an ongoing process, these results move the model one step closer to the stage of usefully informing interventions. Results illustrate how factors not typically included in studies of the learning effects of assessment could confound the results of interventions aimed at using assessment to influence learning. Publishers' version.
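The adjusted alpha of 0.00625 used above is numerically consistent with a Bonferroni correction dividing a family-wise alpha of 0.05 by eight comparisons; the excerpt does not state the divisor, so the figure of eight is an assumption in this sketch, which only illustrates how such a threshold is derived and applied:

```python
def bonferroni_alpha(family_alpha, n_comparisons):
    """Bonferroni adjustment: divide the family-wise alpha by the
    number of comparisons tested within one family."""
    return family_alpha / n_comparisons

# 0.05 / 8 = 0.00625, matching the threshold reported in the abstract
# (the choice of 8 comparisons is an assumption for illustration).
alpha = bonferroni_alpha(0.05, 8)
print(alpha)

# A pairwise difference counts as significant only if its p-value
# clears the adjusted threshold, not the nominal 0.05.
p_values = [0.001, 0.004, 0.02, 0.3]  # hypothetical item-pair p-values
significant = [p for p in p_values if p < alpha]
print(significant)  # [0.001, 0.004]
```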

    Comparison of formula and number-right scoring in undergraduate medical training: a Rasch model analysis

    This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Background Progress testing is an assessment tool used to periodically assess all students at the end-of-curriculum level. Because students cannot know everything, it is important that they recognize their lack of knowledge. For that reason, the formula-scoring method has usually been used. However, where partial knowledge needs to be taken into account, the number-right scoring method is used. Research comparing both methods has yielded conflicting results. As far as we know, in all these studies, Classical Test Theory or Generalizability Theory was used to analyze the data. In contrast to these studies, we will explore the use of the Rasch model to compare both methods. Methods A 2 × 2 crossover design was used in a study where 298 students from four medical schools participated. A sample of 200 previously used questions from the progress tests was selected. The data were analyzed using the Rasch model, which provides fit parameters, reliability coefficients, and response option analysis. Results The fit parameters were in the optimal interval ranging from 0.50 to 1.50, and the means were around 1.00. The person and item reliability coefficients were higher in the number-right condition than in the formula-scoring condition. The response option analysis showed that the majority of dysfunctional items emerged in the formula-scoring condition.
    Conclusions The findings of this study support the use of number-right scoring over formula scoring. Rasch model analyses showed that tests with number-right scoring have better psychometric properties than tests with formula scoring. However, choosing the appropriate scoring method should depend not only on psychometric properties but also on self-directed test-taking strategies and metacognitive skills.
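The two scoring rules being compared can be stated concretely: number-right scoring counts correct answers only, while formula scoring applies the classic correction for guessing, R - W/(k - 1), where k is the number of answer options. A minimal sketch; the four-option item format and the specific correction formula are assumptions for illustration, not details given in the abstract:

```python
def number_right(responses):
    """Number-right score: one point per correct answer;
    wrong answers and omits both score zero."""
    return sum(1 for r in responses if r == "correct")

def formula_score(responses, k=4):
    """Formula score with a correction for guessing: R - W/(k-1),
    where k is the number of answer options. Omitted items are
    neither rewarded nor penalised, so omitting when unsure beats
    guessing blindly."""
    right = sum(1 for r in responses if r == "correct")
    wrong = sum(1 for r in responses if r == "wrong")
    return right - wrong / (k - 1)

# Hypothetical 20-item response pattern: 12 right, 6 wrong, 2 omitted.
answers = ["correct"] * 12 + ["wrong"] * 6 + ["omit"] * 2
print(number_right(answers))   # 12
print(formula_score(answers))  # 12 - 6/3 = 10.0
```

The divergence between the two scores grows with the number of wrong answers, which is why the scoring rule interacts with students' willingness to omit items they are unsure about.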

    Exploring implications of context specificity and cognitive load in residents

    Introduction: Context specificity (CS) refers to the variability in clinical reasoning across different presentations of the same diagnosis. Cognitive load (CL) refers to limitations in working memory that may impact clinicians’ clinical reasoning. CL might be one of the factors that lead to CS. Although CL during clinical reasoning would be expected to be higher in internal medicine residents, CL’s effect on CS in residents has not been studied. Methods: Internal medicine residents watched a series of three cases portrayed on videos. Following each case, participants filled out a post-encounter form and completed a validated measure of CL. Results: Fourteen residents completed all three cases. Across cases, self-reported CL was relatively high and there were small to moderate correlations between CL and performance in clinical reasoning (r’s = .43, -.33, -.23). In terms of changing CL across cases, the correlation between change in CL and change in total performance was statistically significant only in moving from case 1 to case 2 (r = -.54, p = .05). Discussion and Conclusion: Residents’ self-reported measurements of CL were relatively high across cases. However, higher CL was not consistently associated with poorer performance. We did observe the expected associations when looking at case-to-case change in CL. This relationship warrants further study.

    Contextual factors and clinical reasoning: differences in diagnostic and therapeutic reasoning in board certified versus resident physicians

    Background The impact of context on the complex process of clinical reasoning is not well understood. Using situated cognition as the theoretical framework and videos to provide the same contextual “stimulus” to all participants, we examined the relationship between specific contextual factors and diagnostic and therapeutic reasoning accuracy in board certified internists versus resident physicians. Methods Each participant viewed three videotaped clinical encounters portraying common diagnoses in internal medicine. We explicitly modified the context to assess its impact on performance (patient and physician contextual factors). Patient contextual factors, including English as a second language and emotional volatility, were portrayed in the videos. Physician participant contextual factors were self-rated sleepiness and burnout. The accuracy of diagnostic and therapeutic reasoning was compared with covariates using Fisher exact tests, Mann-Whitney U tests and Spearman’s rho correlations as appropriate. Results Fifteen board certified internists and 10 resident physicians participated from 2013 to 2014. Accuracy of diagnostic and therapeutic reasoning did not differ between groups despite residents reporting significantly higher rates of sleepiness (mean rank 20.45 vs 8.03, U = 0.5, p < .001) and burnout (mean rank 20.50 vs 8.00, U = 0.0, p < .001).
    Accuracy of diagnosis and accuracy of treatment were uncorrelated (r = 0.17, p = .65). In both groups, the proportion scoring correct responses for treatment was higher than the proportion scoring correct responses for diagnosis. Conclusions This study underscores that specific contextual factors appear to impact clinical reasoning performance. Further, the processes of diagnostic and therapeutic reasoning, although related, may not be interchangeable. This raises important questions about the impact that contextual factors have on clinical reasoning and provides insight into how clinical reasoning processes in more authentic settings may be explained by situated cognition theory.
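The group comparisons above rest on the Mann-Whitney U statistic, which counts how often a value from one group exceeds a value from the other; a U near zero (as reported for burnout) indicates near-complete separation of the two groups' ratings. A minimal pure-Python sketch with invented ratings; the numbers are hypothetical, and only the statistic itself mirrors the study:

```python
def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U for group_a: the number of (a, b) pairs with
    a > b, with ties counted as 0.5. Values near 0 or near
    len(group_a) * len(group_b) indicate the two groups barely
    overlap."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical 1-10 self-rated sleepiness scores, invented purely
# for illustration: 10 residents and 15 board certified internists.
residents = [7, 8, 6, 9, 7, 8, 6, 7, 9, 8]
internists = [2, 3, 1, 2, 4, 3, 2, 1, 3, 2, 4, 1, 2, 3, 2]

# Every internist here rated lower than every resident, so U for the
# internists is 0.0: complete separation, as in the burnout result.
print(mann_whitney_u(internists, residents))
```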