39 research outputs found

    Application of the Personal Knowledge Management (PKM) Concept with the Social Web

    Full text link
    This study discusses the impact of social media on the development of personal knowledge management (PKM). The author describes the factual condition of a company that uses social media as a means of personal knowledge management and shows that these interaction patterns have a significant impact on the organization. The purpose of this article is to analyze the application of the personal knowledge management concept combined with the social media concept, focusing on social networks because they are widely used by the public; moreover, the emergence of social networking sites adds new value to the development of social media. The method used is a literature study of online journals, articles, and textbooks. The result of this study is expected to expand the use of social networking as a means of personal knowledge management in organizations.

    Electronic charts do not facilitate the recognition of patient hazards by advanced medical students: A randomized controlled study

    No full text
    Chart review is an important tool to identify patient hazards. Most advanced medical students perform poorly during chart review but can learn how to identify patient hazards context-independently. Many hospitals have implemented electronic health records, which enhance patient safety but also pose challenges. We investigated whether electronic charts impair advanced medical students' recognition of patient hazards compared with traditional paper charts. Fifth-year medical students were randomized into two equal groups. Both groups attended a lecture on patient hazards and a training session on handling electronic health records. One group reviewed an electronic chart with 12 standardized patient hazards and then reviewed another case in a paper chart; the other group reviewed the charts in reverse order. The two case scenarios (diabetes and gastrointestinal bleeding) were used as the first and second case equally often. After each case, the students were briefed about the patient safety hazards. In total, 78.5% of the students handed in their notes for evaluation. Two blinded raters independently assessed the number of patient hazards addressed in the students' notes. For the diabetes case, the students identified a median of 4.0 hazards [25%-75% quantiles (Q25-Q75): 2.0-5.5] in the electronic chart and 5.0 hazards (Q25-Q75: 3.0-6.75) in the paper chart (equivalence testing, p = 0.005). For the gastrointestinal bleeding case, the students identified a median of 5.0 hazards (Q25-Q75: 4.0-6.0) in the electronic chart and 5.0 hazards (Q25-Q75: 3.0-6.0) in the paper chart (equivalence testing, p < 0.001). We detected no improvement between the first case [median 5.0 (Q25-Q75: 3.0-6.0)] and the second case [median 5.0 (Q25-Q75: 3.0-6.0); p < 0.001, test for equivalence]. Electronic charts do not seem to facilitate advanced medical students' recognition of patient hazards during chart review and may impair expertise formation.
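
    The abstract reports equivalence testing on hazard counts between independent groups but does not name the exact procedure. As a rough illustration of how such a test works, here is a minimal two one-sided tests (TOST) sketch in Python/SciPy on group means; the function name, the ±1-hazard equivalence margin, and the sample data are illustrative assumptions, not the study's actual method or data.

```python
import numpy as np
from scipy import stats

def tost_equivalence(x, y, margin):
    """Two one-sided tests (TOST) for equivalence of two group means.

    Equivalence (difference within +/- margin) is supported only if
    both one-sided tests reject; the TOST p-value is their maximum.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    # H0a: mean(x) - mean(y) <= -margin; shift x up and test for a positive difference
    _, p_lower = stats.ttest_ind(x + margin, y, alternative="greater")
    # H0b: mean(x) - mean(y) >= +margin; shift x down and test for a negative difference
    _, p_upper = stats.ttest_ind(x - margin, y, alternative="less")
    return max(p_lower, p_upper)

# Hypothetical hazard counts per student for one case, reviewed in an
# electronic vs. a paper chart (invented numbers, not study data)
electronic = [4, 2, 5, 3, 6, 4, 5, 3]
paper = [5, 3, 6, 4, 5, 6, 4, 5]
print(f"TOST p (margin = 1 hazard): {tost_equivalence(electronic, paper, 1.0):.3f}")
```

    Note that this parametric sketch is a stand-in: a study reporting medians and quantiles, as this one does, may well have used a nonparametric equivalence procedure instead.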

    Medical students’ attitudes and perceived competence regarding medical cannabis and its suggestibility

    No full text
    Introduction: The global trend of legalizing medical cannabis (MC) is on the rise. In Germany, physicians have been able to prescribe MC at the expense of health insurers since 2017. However, teaching on MC has been scant in medical training. This study investigates medical students' attitudes and perceived competence regarding MC and evaluates how varying materials (videos/articles) impact their opinions. Methods: Fourth-year medical students were invited to participate in the cross-sectional study. During an online session, students viewed a video featuring a patient with somatoform pain discussing her medical history, plus one of four randomly assigned MC-related materials (an article or a video presenting a positive or negative perspective on MC). Students' opinions were measured at the beginning [T0] and the end of the course [T1] using a standardized questionnaire with a five-point Likert scale. We assessed the influence of the material on the students' opinions using paired-sample t-tests. One-way analysis of variance and Tukey post-hoc tests were conducted to compare the four groups. Pearson correlations were used to assess associations between variables. Results: 150 students participated in the course, with response rates of 75.3% [T0] and 72.7% [T1]. At T0, students felt only slightly competent regarding MC therapy (M = 1.80 ± 0.82). At T1, students in groups 1 (positive video) and 3 (positive article) rated themselves as more capable of managing MC therapy (t(28) = −3.816, p < 0.001; t(23) = −4.153, p < 0.001), and students in groups 3 (positive article) and 4 (negative article) felt more skilled in treating patients with chronic pain (t(23) = −2.251, p = 0.034; t(30) = −2.034, p = 0.051). Compared to the other groups, group 2 students (negative video) felt significantly less competent. They perceived cannabis as addictive, hazardous, and unsuitable for medical prescription. Discussion: This study showed that medical students lack knowledge and perceived competence in MC therapy. Materials influence their opinions in different ways, and students seek more training on MC. This underlines that integrating MC education into medical curricula is crucial to addressing this knowledge gap.
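
    For readers unfamiliar with the statistics named in the Methods (paired-sample t-tests, one-way ANOVA with Tukey post-hoc tests, Pearson correlations), the following Python/SciPy sketch shows how each would be computed; all group labels and Likert values below are invented placeholders, not the study's data.

```python
from scipy import stats

# Hypothetical five-point Likert ratings of perceived competence for one
# group at T0 (before) and T1 (after the course); invented values only
t0 = [2, 1, 2, 3, 2, 1, 2, 2, 3, 1]
t1 = [3, 2, 3, 4, 3, 2, 3, 3, 4, 2]

# Paired-sample t-test on the within-student change from T0 to T1
t_stat, p_value = stats.ttest_rel(t0, t1)
print(f"t({len(t0) - 1}) = {t_stat:.3f}, p = {p_value:.4f}")

# Hypothetical T1 ratings for the four material groups
g1 = [3, 4, 3, 3, 4]   # positive video
g2 = [2, 1, 2, 2, 1]   # negative video
g3 = [4, 3, 4, 3, 4]   # positive article
g4 = [3, 2, 3, 3, 2]   # negative article

# One-way ANOVA across the four groups, then Tukey's HSD post-hoc test
f_stat, p_anova = stats.f_oneway(g1, g2, g3, g4)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(stats.tukey_hsd(g1, g2, g3, g4))

# Pearson correlation between two rating scales
r, p_corr = stats.pearsonr(t0, t1)
print(f"Pearson r = {r:.3f}, p = {p_corr:.4f}")
```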

    Assessing ChatGPT’s Mastery of Bloom’s Taxonomy Using Psychosomatic Medicine Exam Questions: Mixed-Methods Study

    No full text
    Background: Large language models such as GPT-4 (Generative Pre-trained Transformer 4) are increasingly used in medicine and medical education. However, these models are prone to “hallucinations” (ie, outputs that seem convincing while being factually incorrect). It is currently unknown how these errors by large language models relate to the different cognitive levels defined in Bloom’s taxonomy. Objective: This study aims to explore how GPT-4 performs in terms of Bloom’s taxonomy using psychosomatic medicine exam questions. Methods: We used a large data set of psychosomatic medicine multiple-choice questions (N=307) with real-world results derived from medical school exams. GPT-4 answered the multiple-choice questions using 2 distinct prompt versions: detailed and short. The answers were analyzed using quantitative and qualitative approaches. Focusing on incorrectly answered questions, we categorized reasoning errors according to the hierarchical framework of Bloom’s taxonomy. Results: GPT-4’s performance in answering exam questions yielded a high success rate: 93% (284/307) for the detailed prompt and 91% (278/307) for the short prompt. Questions answered correctly by GPT-4 had a statistically significantly higher difficulty than questions answered incorrectly (P=.002 for the detailed prompt and P<.001 for the short prompt). Independent of the prompt, GPT-4’s lowest exam performance was 78.9% (15/19), thereby always surpassing the “pass” threshold. Our qualitative analysis of incorrect answers, based on Bloom’s taxonomy, showed that errors occurred primarily at the “remember” (29/68) and “understand” (23/68) cognitive levels; specific issues arose in recalling details, understanding conceptual relationships, and adhering to standardized guidelines. Conclusions: GPT-4 demonstrated a remarkable success rate when confronted with psychosomatic medicine multiple-choice exam questions, aligning with previous findings. When evaluated through Bloom’s taxonomy, our data revealed that GPT-4 occasionally ignored specific facts (remember), provided illogical reasoning (understand), or failed to apply concepts to a new situation (apply). These errors, which were confidently presented, could be attributed to inherent model biases and the tendency to generate outputs that maximize likelihood.
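
    As a concrete picture of how such an evaluation might be wired up, here is a minimal sketch using the OpenAI Python SDK to pose multiple-choice questions under two prompt versions and score the replies against an answer key. The prompt wording, item schema, and helper names are assumptions for illustration; they are not the study's actual materials or pipeline.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two prompt styles, loosely analogous to the study's "detailed" and
# "short" versions (the actual study prompts are not given in the abstract)
DETAILED_PROMPT = (
    "You are sitting a psychosomatic medicine exam. Read the question "
    "carefully, reason step by step, then answer with the letter of the "
    "single best option."
)
SHORT_PROMPT = "Answer the multiple-choice question with a single letter."

def ask(stem: str, options: dict[str, str], system_prompt: str) -> str:
    """Pose one multiple-choice question and return the model's reply."""
    question = stem + "\n" + "\n".join(f"{k}) {v}" for k, v in options.items())
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # reduce sampling noise so grading is reproducible
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

def success_rate(items: list[dict], system_prompt: str) -> float:
    """Fraction of items whose first answer letter matches the key."""
    hits = sum(
        ask(item["stem"], item["options"], system_prompt)[:1].upper() == item["key"]
        for item in items
    )
    return hits / len(items)
```

    Incorrectly answered items would then be reviewed by hand and assigned to a Bloom level (remember, understand, apply, and so on), mirroring the qualitative step the study describes.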