Clearing the confusion about self-directed learning and self-regulated learning
Self-Directed Learning (SDL) and Self-Regulated Learning (SRL) are often used without a clear distinction, leading to confusion in understanding and the use of inappropriate measurement tools. SDL is a general approach to learning and can be identified using ‘aptitude’ questionnaires, whereas SRL is a dynamic, context-specific learning process that requires ‘event’ measures, such as microanalysis. These differences have implications for research and remediation.
Application of Various Student Assessment Methods by Educational Departments in Tehran University of Medical Sciences, Iran
Background & Objective: In recent years, with increasing awareness of the limitations of traditional assessment methods in measuring learner capabilities, assessment methods have undergone many changes. This survey addresses the extent to which educational departments in Tehran University of Medical Sciences, Tehran, Iran, use various student assessment methods. Methods: The present cross-sectional study was conducted in 2012 using a researcher-developed tool to gather information about student assessment methods. Based on Miller’s pyramid of assessment, common student assessment methods were classified into written and oral assessment, clinical reasoning assessment, clinical skills assessment, and workplace-based assessment. The study sample consisted of all educational departments. Sampling was performed using the census method, which determined the use or lack of use of each assessment method at different educational levels. The data collected were analyzed using descriptive statistics. Results: The response rate was 70.43%; 81 of 115 departments completed the questionnaire. The methods most frequently used by departments were written and oral exams. Among them, the multiple choice test was the most widely applied assessment method. Patient management problem (PMP) was the most broadly used method to assess clinical reasoning. Moreover, among clinical skills assessment methods, the objective structured clinical examination (OSCE) was the most commonly applied in medical clinical courses. Conclusion: Graduates of medical universities must acquire capabilities far beyond theoretical knowledge, but the assessment methods used by departments do not necessarily assess these capabilities. The results of this study emphasize the need for the revision of medical student assessment programs.
Key Words: Student assessment methods, Miller’s pyramid, Medical education
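The reported response rate follows directly from the counts given in the abstract (81 of 115 departments); a minimal sketch checking that arithmetic, with the figures taken from the abstract above:

```python
# Response-rate check for the survey figures quoted in the abstract:
# 81 of 115 departments returned the questionnaire.
responded = 81
surveyed = 115

# Percentage, rounded to two decimal places as reported.
response_rate = round(responded / surveyed * 100, 2)
print(response_rate)  # 70.43, matching the reported 70.43%
```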
The challenge of understanding, evaluating and providing feedback on regulation during group learning
Learning in groups is commonly used in academic and clinical health professions education (HPE). There is growing recognition that regulation during learning is essential for both the individual learner and the group. In this article, the authors propose a practical approach for understanding, evaluating and providing feedback on regulation during group learning, informed by previous studies conducted in other areas of education. Three varieties of regulation during group learning are discussed: individual regulation, co-regulation and shared regulation. Each variety focuses on three essential activities during group learning: task, social and motivational activities. Illustrative scenarios describe how the approach can be used in HPE practice. This specific and additional focus on regulation can enhance current approaches to providing feedback on group learning, and the authors discuss recommendations for practical implementation and future research.
Students' Knowledge of Faculty Evaluation in Tehran University of Medical Sciences, Iran: Expectations, Challenges, Solutions
Background & Objective: Evaluation of faculty members' teaching activities by students is a commonly used method whose use has increased over time. Therefore, dynamic and meaningful participation of students in the process is imperative. The purpose of this study was to explore students' knowledge of the evaluation process, its challenges, and possible solutions.
Methods: This qualitative study was performed in 3 schools of Tehran University of Medical Sciences, Iran, in 2012. Students' opinions were collected during 3 focus group discussions with the participation of 19 students. Participants were selected through purposive sampling from among the student advisory committee. All group discussions were recorded and then transcribed. Data were analyzed using content analysis. To ensure the credibility and trustworthiness of the data, peer monitoring and review were used.
Results: Major themes included input challenges with 2 subthemes (barriers to participation, and factors affecting evaluation), process challenges with 2 subthemes (evaluation timing, and students' preferred format for evaluation), and impact challenges with 2 subthemes (publication of results, and reward and punishment in evaluation systems).
Conclusion: Students' perceptions and awareness of the evaluation process and its application are important for increasing their participation and the seriousness with which they answer questions. Thus, in order to reduce bias in the evaluation process, it is recommended that suitable and timely information be given to students and faculty members, that courses be held to familiarize them with the concept and application of evaluation, and that the evaluation questionnaires be revised.
Keywords: Student; Faculty evaluation; Perception
Developing Comprehensive Course Evaluation Guidelines: A Step towards Organizing Program Evaluation Activities in Tehran University of Medical Sciences, Iran
Background & Objective: One potential strategy for ensuring the quality of educational programs is adopting a systematic approach to their evaluation. Current evidence indicates a lack of high-quality program evaluation activities in the field of medical education. The aim of this study was to review the current status of program evaluation activities in Tehran University of Medical Sciences, Tehran, Iran, and to formulate guidelines to promote program evaluation activities at the university level.
Methods: A survey was conducted to investigate the current conditions of program evaluation using a questionnaire in 2012. Then, the comprehensive course evaluation guidelines, consisting of 22 items, were developed based on literature review, survey results, and experts’ opinions. Finally, each affiliated school developed its own evaluation plan. The evaluation taskforce reviewed evaluation plans using a checklist.
Results: Nine schools (90%) had conducted course evaluation at least once, using a single tool or resource. The views of students, faculty, staff, or alumni were used only occasionally. Moreover, 4 schools (40%) reported the evaluation results. After reviewing the 14 submitted course plans against the checklist, 51 feedback comments were provided. The most and fewest comments concerned evaluation design and implementation, and evaluation infrastructure, respectively.
Conclusion: The process of developing guidelines and plans resulted in stakeholders reaching a common understanding of course evaluation, and in turn, creating evaluation capacity and more accountability.
Keywords: Program evaluation; Ongoing evaluation; Evaluation system; Comprehensive evaluation
Developing a Microanalytic Self-regulated Learning Assessment Protocol for Biomedical Science Learning
Background & Objective: Self-regulated learning (SRL) is highly task- and context-dependent. The microanalytic assessment method measures students' SRL processes while they perform a particular learning task. The present study aimed to design a microanalytic SRL assessment protocol for biomedical science learning.
Methods: This mixed methods study was conducted in Tehran University of Medical Sciences, Iran, in 2013. The data collection tool was a microanalytic SRL assessment protocol designed based on a literature review, expert opinion, and cognitive interviews with medical students, and then piloted. The participants consisted of 13 second-year medical students. The subjects were interviewed while performing a biomedical science learning task. Interviews were recorded, transcribed, and coded based on a predetermined coding framework. Descriptive statistics were used to analyze the data.
Results: The microanalytic SRL assessment protocol was developed in three parts: an interview guide, a coding framework, and a biomedical science learning task. The interview guide consisted of 6 open-ended questions aimed at assessing 5 SRL sub-processes (goal setting, strategic planning, metacognitive monitoring, causal attribution, and adaptive inferences) and a close-ended question regarding self-efficacy. In the pilot study, most participants reported task-specific and task-general processes for the sub-processes of strategic planning (92%), metacognitive monitoring (77%), causal attribution (85%), and adaptive inferences (92%).
Conclusion: The developed protocol could capture the fine-grained nature of medical students' self-regulatory sub-processes in biomedical science learning. Therefore, it has potential application in modifying SRL processes in the early years of medical school.
Key Words: Self-regulated learning, Microanalytic assessment method, Biomedical science learning
Factors Influencing Mini-Clinical Evaluation Exercise Scores: A Review Article
Introduction: The Mini-Clinical Evaluation Exercise (Mini-CEX) is widely used to assess clinical performance because of its practicality. Although initial studies have supported its reliability and validity, the utility of the Mini-CEX is under serious debate due to inconsistencies in rater judgments. This study aimed to review the relevant literature on the utility of the Mini-CEX in order to identify the factors influencing Mini-CEX scores.
Methods: In this narrative review, the MEDLINE, EMBASE, SCOPUS, SID, and Magiran databases were searched to retrieve relevant articles published from 1995 to 2014, using key words such as ‘Mini-CEX’, ‘utility’, ‘validity’, ‘reliability’, and ‘score variability’. Subsequently, 19 relevant studies out of 51 retrieved articles were selected for review.
Results: Factors influencing Mini-CEX scores were related to the raters, the rating forms, and the clinical exposures, in order of importance. Variability in raters' stringency and leniency, and high intercorrelation between items, were the factors most strongly affecting test scores. The results of studies on factors such as rater training were inconclusive regarding their effect on Mini-CEX scores.
Conclusion: Considering the inconsistencies in Mini-CEX scores and the uncertainty about the circumstances influencing scoring, it is recommended that Mini-CEX scores not be used for summative assessment purposes, especially to rank students' performance.