    "Do I really have to complete another evaluation?" exploring relationships among physicians' evaluative load, evaluative strain, and the quality of clinical clerkship evaluations

    Indiana University-Purdue University Indianapolis (IUPUI)

    Background. Despite widespread criticism of physician-performed evaluations of medical students' clinical skills, clinical clerkship evaluations (CCEs) remain the foremost means of assessing trainees' clinical prowess. Efforts to improve the quality of feedback students receive have ostensibly increased the assessment demands placed on physician faculty, the consequences of which remain unknown. Accordingly, this study investigated the extent to which physicians' evaluative responsibilities influenced the quality of CCEs and qualitatively explored physicians' perceptions of these evaluations.

    Methods. A questionnaire was delivered to physicians (n = 93) at Indiana University School of Medicine to gauge their perceived evaluative responsibilities. Evaluation records for each participant were obtained and used to calculate each physician's measurable quantity of CCEs, the timeliness of CCE submissions, and the quality of the Likert-scale and written feedback data included in each evaluation. A path analysis estimated the extent to which evaluative responsibilities affected the timeliness of CCE submissions and CCE quality. Semi-structured interviews with a subset of participants (n = 8) gathered perceptions of the evaluations and the evaluative process.

    Results. A physician's measurable quantity of evaluations did not influence perceptions of the evaluative task, but did directly influence the quality of the Likert-scale items. Moreover, perceptions of the evaluative task directly influenced the timeliness of CCE submissions and indirectly influenced the quality of the closed-ended CCE items. Tardiness in submitting CCEs had a positive effect on the amount of score differentiation among the Likert-scale data. Neither evaluative responsibilities nor the timeliness of CCE submissions influenced the quality of written feedback. Qualitative analysis revealed mixed opinions on the utility of CCEs and highlighted the temporal burden and practical limitations of completing them.

    Conclusions. These findings suggest that physicians' perceptions of CCEs are independent of their assigned evaluative quantity, yet influence both the timeliness of evaluation submissions and evaluative quality. Further elucidation of the mechanisms underlying the positive influence of evaluation quantity and timely CCE submissions on CCE quality is needed to fully rationalize these findings and improve the evaluative process. Continued research is needed to pinpoint which factors influence the quality of written feedback.
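
    As a rough illustration of the path-analytic approach described in the Methods section (not the authors' actual model, variables, or data), a path analysis of this kind can be estimated as a chain of regressions, with indirect effects computed as products of path coefficients. The sketch below uses Python with pandas and statsmodels and assumes hypothetical column names (eval_quantity, perceived_strain, timeliness, likert_quality) and a hypothetical data file standing in for the measured constructs.

        # Minimal sketch of a path analysis fit as a chain of OLS regressions.
        # All column names and the data file are hypothetical placeholders for the
        # constructs named in the abstract:
        #   eval_quantity    -> measurable quantity of CCEs assigned
        #   perceived_strain -> perceived evaluative responsibilities (questionnaire)
        #   timeliness       -> timeliness of CCE submissions
        #   likert_quality   -> quality of the Likert-scale (closed-ended) items
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("cce_records.csv")  # hypothetical data file

        # Path 1: does assigned evaluation quantity predict perceived strain?
        m1 = smf.ols("perceived_strain ~ eval_quantity", data=df).fit()

        # Path 2: do quantity and strain predict submission timeliness?
        m2 = smf.ols("timeliness ~ eval_quantity + perceived_strain", data=df).fit()

        # Path 3: do quantity, strain, and timeliness predict Likert-scale quality?
        m3 = smf.ols("likert_quality ~ eval_quantity + perceived_strain + timeliness",
                     data=df).fit()

        # Direct effects are the regression coefficients; an indirect effect
        # (e.g. strain -> timeliness -> likert_quality) is the product of the
        # corresponding path coefficients.
        indirect = m2.params["perceived_strain"] * m3.params["timeliness"]

        for model in (m1, m2, m3):
            print(model.summary())
        print("Indirect effect of strain on Likert quality via timeliness:", indirect)

    Dedicated structural equation modeling packages would fit all paths simultaneously and provide model-fit statistics, but the regression-by-regression view above conveys the same direct/indirect logic the abstract describes.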

    "Do I really have to complete another evaluation?" exploring relationships among physicians' evaluative load, evaluative strain, and the quality of clinical clerkship evaluations

    No full text
    Indiana University-Purdue University Indianapolis (IUPUI)Background. Despite widespread criticism of physician-performed evaluations of medical students’ clinical skills, clinical clerkship evaluations (CCEs) remain the foremost means by which to assess trainees’ clinical prowess. Efforts undertaken to improve the quality of feedback students receive have ostensibly led to higher assessment demands on physician faculty; the consequences of which remain unknown. Accordingly, this study investigated the extent to which physicians’ evaluative responsibilities influenced the quality of CCEs and qualitatively explored physicians’ perceptions of these evaluations. Methods. A questionnaire was delivered to physicians (n = 93) at Indiana University School of Medicine to gauge their perceived evaluative responsibilities. Evaluation records of each participant were obtained and were used to calculate one’s measurable quantity of CCEs, the timeliness of CCE submissions, and the quality of the Likert-scale and written feedback data included in each evaluation. A path analysis estimated the extent to which one’s evaluative responsibilities affected the timeliness of CCE submissions and CCE quality. Semi-structured interviews with a subset of participants (n = 8) gathered perceptions of the evaluations and the evaluative process. Results. One’s measurable quantity of evaluations did not influence one’s perceptions of the evaluative task, but did directly influence the quality of the Likert-scale items. Moreover, one’s perceptions of the evaluative task directly influenced the timeliness of CCE submissions and indirectly influenced the quality of the closed-ended CCE items. Tardiness in the submission of CCEs had a positive effect on the amount of score differentiation among the Likert-scale data. Neither evaluative responsibilities nor the timeliness of CCE submissions influenced the quality of written feedback. Qualitative analysis revealed mixed opinions on the utility of CCEs and highlighted the temporal burden and practical limitations of completing CCEs. Conclusions. These findings suggest physicians’ perceptions of CCEs are independent of their assigned evaluative quantity, yet influence both the timeliness of evaluation submissions and evaluative quality. Further elucidation of the mechanisms underlying the positive influence of evaluation quantity and timely CCE submissions on CCE quality are needed to fully rationalize these findings and improve the evaluative process. Continued research is needed to pinpoint which factors influence the quality of written feedback
    corecore