
    Effects of COVID-19 Pandemic on Progress Test Performance in German-Speaking Countries

    Background. The COVID-19 pandemic has been the source of many challenges for medical students worldwide. The authors examined short-term effects on the knowledge gain of medical students in German-speaking countries.

    Methods. The development of the knowledge gain of medical students during the pandemic was measured by comparing the outcomes of shared questions within Berlin Progress Test (PT) pairs. The PT is a formative test of 200 multiple choice questions at the graduate level, which provides feedback to students on knowledge and knowledge gain during their course of study. It is provided to about 11,000 students in Germany and Austria around the beginning of each semester. We analyzed three successive test pairs: PT36-PT41 (both conducted before the pandemic), PT37-PT42 (PT37 took place before the pandemic; PT42 was conducted from April 2020 onwards), and PT38-PT43 (PT38 was administered before the pandemic; PT43 started in November 2020). The authors used mixed-effect regression models and compared the absolute variations in the percentage of correct answers per subject.

    Results. The most recent test of each PT pair showed a higher mean score compared to the previous test in the same pair (PT36-PT41: 2.53 (95% CI: 1.31-3.75), PT37-PT42: 3.72 (2.57-4.88), and PT38-PT43: 5.66 (4.63-6.69)). Analogously, an increase in the share of correct answers was observed for most medical disciplines, with Epidemiology showing the most remarkable upsurge.

    Conclusions. Overall, PT performance improved during the pandemic, which we take as an indication that the sudden shift to online learning did not have a negative effect on the knowledge gain of students. We consider that these results may be helpful in advancing innovative approaches to medical education.
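The study itself fit mixed-effect regression models; as a much simpler illustration of the kind of comparison reported above, the sketch below computes the change in mean percent-correct between two tests of a PT pair, with a normal-approximation 95% confidence interval for the difference of two independent means. The score lists are hypothetical, not data from the paper.

```python
import math

def score_change(pre, post):
    """Mean change in percent-correct between two tests that share
    questions, with a normal-approximation 95% CI.

    pre, post: per-student percent-correct scores on the shared
    questions of the earlier and later test (hypothetical here;
    the paper used mixed-effect regression instead).
    """
    diff = sum(post) / len(post) - sum(pre) / len(pre)

    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Standard error of the difference of two independent means.
    se = math.sqrt(var(pre) / len(pre) + var(post) / len(post))
    return diff, (diff - 1.96 * se, diff + 1.96 * se)

# Hypothetical percent-correct scores for two small cohorts:
pre = [52.0, 48.5, 61.0, 55.5, 50.0]
post = [56.0, 53.5, 63.0, 58.5, 54.0]
delta, ci = score_change(pre, post)  # delta = 3.6 for these numbers
```

With realistic cohort sizes (thousands of students per PT), the interval narrows sharply, which is why the paper can report CIs as tight as 1.31-3.75.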

    Die Leistungsfähigkeit einer KI im Medizinstudium: In welchem Semester wäre ChatGPT? [The performance of an AI in medical school: which semester would ChatGPT be in?]

    Friederichs H, Friederichs WJ, Roselló Atanet I, März M. Die Leistungsfähigkeit einer KI im Medizinstudium: In welchem Semester wäre ChatGPT? In: Walkenhorst U, Brandes C, eds. Jahrestagung der Gesellschaft für Medizinische Ausbildung (GMA), Osnabrück 14.09. – 16.09.2023, Abstractband. German Medical Science GMS Publishing House; 2023

    Discovering unknown response patterns in progress test data to improve the estimation of student performance

    Background. The Progress Test Medizin (PTM) is a 200-question formative test that is administered to approximately 11,000 students at medical universities (Germany, Austria, Switzerland) each term. Students receive feedback on their knowledge (development) mostly in comparison to their own cohort. In this study, we use the data of the PTM to find groups with similar response patterns.

    Methods. We performed k-means clustering with a dataset of 5,444 students, selected cluster number k = 5, and used answers as features. Subsequently, the data was passed to XGBoost with the cluster assignment as target, enabling the identification of cluster-relevant questions for each cluster with SHAP. Clusters were examined by total scores, response patterns, and confidence level. Relevant questions were evaluated for difficulty index, discriminatory index, and competence levels.

    Results. Three of the five clusters can be seen as “performance” clusters: cluster 0 (n = 761) consisted predominantly of students close to graduation. Relevant questions tend to be difficult, but students answered confidently and correctly. Students in cluster 1 (n = 1,357) were advanced; cluster 3 (n = 1,453) consisted mainly of beginners. Relevant questions for these clusters were rather easy, and the number of guessed answers increased. There were two “drop-out” clusters: students in cluster 2 (n = 384) dropped out of the test about halfway through after initially performing well; cluster 4 (n = 1,489) included students from the first semesters as well as “non-serious” students, both with mostly incorrect guesses or no answers.

    Conclusion. Clusters placed performance in the context of participating universities. Relevant questions served as good cluster separators and further supported our “performance” cluster groupings.
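To make the clustering step concrete, here is a toy, dependency-free sketch of k-means on binary answer vectors (1 = correct, 0 = incorrect). It stands in for only the first stage of the paper's pipeline (which clustered 5,444 students' answers with k = 5 and then explained the clusters with XGBoost + SHAP); the answer matrix, the value k = 2, and the deterministic first-k initialisation are illustrative assumptions, not the authors' setup.

```python
def kmeans_binary(data, k, iters=20):
    """Toy k-means on binary answer vectors.

    Centroids are initialised from the first k vectors so the run is
    reproducible (real k-means would use random or k-means++ seeding).
    Returns the cluster label of each student.
    """
    centers = [list(v) for v in data[:k]]
    labels = [0] * len(data)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared distance.
        for i, v in enumerate(data):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])),
            )
        # Update step: centroid = per-question share of correct answers.
        for c in range(k):
            members = [data[i] for i, lab in enumerate(labels) if lab == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Hypothetical 6-student x 4-question answer matrix: three strong
# responders interleaved with three weak ones.
answers = [
    [1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 1, 0],
    [1, 1, 0, 1], [0, 0, 0, 1], [0, 0, 1, 0],
]
labels = kmeans_binary(answers, k=2)  # separates strong from weak responders
```

In the paper's second stage, these labels become the prediction target for a classifier, and per-question SHAP values then reveal which questions drive each cluster assignment.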