13 research outputs found

    BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

    Full text link
    Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
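    Because the abstract above notes that the BLOOM weights are released openly, a minimal sketch of how one might load a released checkpoint follows. The Hugging Face Hub checkpoint names, the use of the Transformers library, and the choice of a smaller released variant are assumptions for illustration, not details stated in the abstract.

```python
# Minimal sketch of loading released BLOOM weights for text generation.
# Assumes the checkpoints are published on the Hugging Face Hub (e.g. "bigscience/bloom",
# "bigscience/bloom-560m") and that the Transformers library is installed; neither
# detail comes from the abstract above. The full 176B model needs multi-GPU or
# offloaded inference, so a smaller variant is used here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-560m"  # swap for "bigscience/bloom" with enough hardware
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Problem-based learning helps medical students"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```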

    An analysis of medical students’ reflective essays in problem-based learning

    No full text
    Purpose This study aimed to explore students’ learning experience in problem-based learning (PBL), particularly what they learned and how they learned it, at one Korean medical school by analyzing their reflective essays with qualitative research methods. Methods This study included 44 first-year medical students. They took three consecutive PBL courses and anonymously wrote reflective essays three times, on the last day of each course. The essays were analyzed using an inductive content analysis method. Results The coding process yielded 16 sub-categories, which were grouped into six categories according to the distinctive characteristics of the PBL learning experience: integrated knowledge base, clinical problem solving, collaboration, intrinsic motivation, self-directed learning, and professional attitude. Integrated knowledge base (34.68%) and professional attitude (2.31%) were the categories mentioned most and least frequently, respectively. Conclusion The findings provide an overall understanding of what and how Korean medical students learned during PBL, with rich descriptive commentaries from their perspectives, as well as several insights to help develop instructional strategies that enhance the effectiveness of PBL.

    Fostering clinical reasoning ability in preclinical students through an illness script worksheet approach in flipped learning: a quasi-experimental study

    No full text
    Background The consensus that clinical reasoning should be explicitly addressed throughout medical training is growing; however, studies on specific teaching methods, particularly for preclinical students, are lacking. This study investigated the effects of an illness script worksheet approach in flipped learning on the development of clinical reasoning abilities in preclinical students. It also explored whether the impact of the intervention differed by clinical reasoning ability, dividing students into high and low groups based on their pre-diagnostic thinking inventory (DTI) scores. Methods This study used a one-group pre-post test design and convenience sampling. Forty-two second-year medical students were invited to participate. The course, “clinical reasoning method,” was redesigned around an illness script worksheet approach in flipped learning and ran for eight weeks. The students met once or twice per week and worked through 15 clinical cases in small groups; at each session a different professor facilitated all seven groups in a single classroom. The effectiveness of the intervention was measured with the DTI before and after the intervention, and a learning experience survey was administered together with the post-DTI assessment. Results Thirty-six students participated in the survey and their data were analyzed. The mean pre-DTI score was 170.4 and the mean post-DTI score was 185.2, an 8.68% increase (p < .001). Significant pre-post differences were found in both the high and low groups; however, the low group improved considerably more than the high group and additionally showed a significant increase in one of the DTI subscales. The overall average score on the learning experience survey was 3.11 out of 4. Conclusion The findings indicate that the intervention was an effective instructional method for developing clinical reasoning in preclinical students and was more beneficial for students with a low level of clinical reasoning ability. The study demonstrates that the intervention is a feasible and scalable way to train clinical reasoning effectively and efficiently in a classroom setting.
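    The pre/post figures reported above can be checked with a short calculation; the sketch below recomputes the percentage increase from the two reported means and shows one standard way (a paired t-test) such a pre-post comparison can be run. The per-student scores are hypothetical placeholders; only the means 170.4 and 185.2 and the sample size of 36 come from the abstract, and the paper's exact statistical test is not specified there.

```python
# Recompute the reported percentage increase and illustrate a paired pre/post comparison.
# Only the two means (170.4, 185.2) and n = 36 come from the abstract above;
# the per-student scores are simulated placeholders, not the study's data.
import numpy as np
from scipy.stats import ttest_rel

pre_mean, post_mean = 170.4, 185.2
print(f"increase: {(post_mean - pre_mean) / pre_mean:.2%}")  # ~8.7%, consistent with the reported 8.68%

# Hypothetical paired DTI scores for 36 students, for illustration only.
rng = np.random.default_rng(0)
pre = rng.normal(pre_mean, 15.0, size=36)
post = pre + rng.normal(post_mean - pre_mean, 10.0, size=36)
t_stat, p_value = ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```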

    Validating the Korean shorter Diagnostic Thinking Inventory in medical education: a pilot study

    No full text
    Purpose Developing clinical reasoning across the medical curriculum requires valid, reliable, and feasible assessment tools. However, few validated tools are available for the convenient and efficient quantification of clinical reasoning. Thus, this study aimed to create a shorter version of the Diagnostic Thinking Inventory (DTI) and validate it in the Korean medical education context (DTI-SK). Methods The DTI-SK was constructed using content validity and a translation and back-translation process. It comprises two subcategories and 14 items. Its validity and reliability were explored using exploratory and confirmatory factor analyses, mean comparisons of four medical student groups (med 1 to med 4), and internal consistency using Cronbach’s α. Two hundred medical students were invited to participate through email, and the survey was administered for 2 weeks. Results Data from 136 students were analyzed. Exploratory factor analysis revealed two factors with eigenvalues greater than 1.0 that together explained 54.65% of the variance. Confirmatory factor analysis demonstrated that the model had an acceptable level of fit and convergent validity. Discriminant validity was confirmed using the heterotrait-monotrait (HTMT) criterion. Group comparisons demonstrated that the med 4 students scored significantly higher than the med 1 and med 2 students. The inventory exhibited strong internal consistency for all items (Cronbach’s α=0.906). Conclusion The findings indicate that the DTI-SK is a reliable and valid tool for measuring medical students’ clinical reasoning in the context of Korean medical education.
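    The reliability figure quoted above (Cronbach’s α = 0.906) is a standard internal-consistency statistic, and the sketch below shows how it can be computed for a 14-item inventory. The function and the response matrix are illustrative assumptions; random placeholder data will not reproduce the reported value, which requires the actual item responses.

```python
# Cronbach's alpha for a k-item scale:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
# The 14-item count and n = 136 respondents come from the abstract above; the
# response matrix is random placeholder data and will not reproduce alpha = 0.906.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(1)
responses = rng.integers(1, 7, size=(136, 14)).astype(float)  # hypothetical 6-point ratings
print(f"alpha = {cronbach_alpha(responses):.3f}")
```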

    Developing Clinical Reasoning Skills Through Argumentation With the Concept Map Method in Medical Problem-Based Learning

    Get PDF
    This study aims to explore the effects of argumentation with the concept map method during medical problem-based learning (PBL) on individual clinical reasoning. Individual clinical reasoning ability was assessed through problem-solving performance and the arguments that students constructed during individual clinical reasoning processes. Toulmin’s model of argument was utilized as a structure for arguments. The study also explored whether there would be any differences between first- and second-year medical students. Ninety-five medical students participated in this study, and they took two PBL modules. During PBL, they were asked as a group to construct concept maps based on their argumentation about a case under discussion. Before and after each PBL, they were asked to complete individual clinical problem-solving tests. One-way, within-subjects ANOVAs were conducted to examine the quality of arguments and clinical problem-solving performance across the three individual tests. The results provided evidence that utilizing argumentation with the concept map method during PBL positively affects the development of clinical reasoning skills by individual students.
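    The abstract above mentions one-way within-subjects ANOVAs across the three individual tests; the sketch below shows how such a repeated-measures analysis can be set up. The long-format layout, column names, and simulated scores are illustrative assumptions, since the abstract does not report the underlying data.

```python
# One-way within-subjects (repeated-measures) ANOVA over three test occasions.
# Scores are simulated placeholders; only the sample size (95 students) and the
# three-test design come from the abstract above.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
records = []
for student in range(95):
    baseline = rng.normal(60.0, 10.0)
    for occasion, test in enumerate(["test1", "test2", "test3"]):
        # Assume a modest improvement across tests, purely for illustration.
        records.append({"student": student, "test": test,
                        "score": baseline + 5.0 * occasion + rng.normal(0.0, 5.0)})

long_df = pd.DataFrame(records)
result = AnovaRM(data=long_df, depvar="score", subject="student", within=["test"]).fit()
print(result)
```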

    Needs assessment for developing teaching competencies of medical educators

    No full text
