
    Virtual reality-based neurological examination teaching tool (VRNET) versus standardized patient in teaching neurological examinations for the medical students: a randomized, single-blind study

    Background: Conventional methods for teaching the neurological examination to medical students rely on real patients and are limited when a patient with the relevant symptom or disease is unavailable. We therefore developed a Virtual Reality-based Neurological Examination Teaching Tool (VRNET) and evaluated its usefulness in teaching neurological examinations to medical students. Methods: In this prospective, randomized, single-blind study, we recruited 98 medical students and divided them into two groups: 1) a standardized patient (SP) group, which took a clinical performance examination with SPs complaining of dizziness and received the neurological findings through conventional means such as verbal descriptions, photographs, and video clips; and 2) an SP with VRNET group, which received the neurological findings through the newly developed tool. Of the 98 students, 3 declined to participate, and 95 were enrolled: 39 in the SP group and 56 in the SP with VRNET group. Results: There were no statistically significant differences in perceived realness or student satisfaction between the SP and SP with VRNET groups. However, the SP with VRNET group had significantly higher Neurologic Physical Exam (NPE) scores (3.81 ± 0.92) than the SP group (3.40 ± 1.01; p = 0.043). Conclusions: VRNET is useful for teaching neurological examinations to senior (graduating) medical students together with SPs presenting a neurologic problem.

    The experience of remote videoconferencing to enhance emergency resident education using Google Hangouts

    Objective: It is difficult for emergency residents to attend all required lectures because of their limited duty hours. Since 2015, Google Hangouts has been used for remote videoconferencing to overcome this limitation, provide equal learning opportunities, and reduce time and costs. This article reports the authors' experience of running a residency education program using Google Hangouts. Methods: Beginning in 2015, lectures on emergency radiology were delivered to emergency residents at three different hospitals connected by Google Hangouts. From 2017, electrocardiography analysis, emergency radiology, ventilator application, and journal review were selected as topics for the remote videoconference. The residents' self-assessment scores and a post-education satisfaction questionnaire were collected. Results: Twenty-nine emergency residents responded to the questionnaire after using Google Hangouts. The number of participants increased significantly at the other two hospitals after the Hangouts sessions began. All residents reported higher self-assessed achievement of the learning goals after the videoconference lectures than before, and all rated the training program as more satisfactory with Google Hangouts than without. Conclusion: All emergency residents were satisfied with, and more confident after, remote videoconference education using Google Hangouts.

    Differential Biases and Variabilities of Deep Learning-Based Artificial Intelligence and Human Experts in Clinical Diagnosis: Retrospective Cohort and Survey Study

    Background: Deep learning (DL)-based artificial intelligence may have different diagnostic characteristics than human experts in medical diagnosis. As a data-driven knowledge system, DL is considered more susceptible than clinicians to bias from heterogeneous disease incidence in the clinical world. Conversely, having experienced only a limited number of cases, human experts may exhibit large interindividual variability. Understanding how the two groups classify the same data differently is therefore an essential step toward the cooperative use of DL in clinical applications. Objective: This study aimed to evaluate and compare the differential effects of clinical experience on otoendoscopic image diagnosis in both computers and physicians, exemplified by the class imbalance problem, and to guide clinicians in utilizing decision support systems. Methods: We used 22,707 digital otoendoscopic images of patients who visited the outpatient clinic of the Department of Otorhinolaryngology at Severance Hospital, Seoul, South Korea, from January 2013 to June 2019. After excluding similar images, 7500 otoendoscopic images were selected for labeling. We built a DL-based image classification model to classify a given image into 6 disease categories. Two test sets of 300 images each were populated: a balanced and an imbalanced test set. We included 14 clinicians (otolaryngologists and nonotolaryngology specialists, including general practitioners) and 13 DL-based models, and used accuracy (overall and per class) and kappa statistics to compare the results of individual physicians and the ML models. Results: Our ML models had consistently high accuracies (balanced test set: mean 77.14%, SD 1.83%; imbalanced test set: mean 82.03%, SD 3.06%), equivalent to those of otolaryngologists (balanced: mean 71.17%, SD 3.37%; imbalanced: mean 72.84%, SD 6.41%) and far better than those of nonotolaryngologists (balanced: mean 45.63%, SD 7.89%; imbalanced: mean 44.08%, SD 15.83%). However, the ML models suffered from the class imbalance problem; this was mitigated by data augmentation, particularly for low-incidence classes, but rare disease classes still had low per-class accuracies. Human physicians, although less affected by prevalence, showed high interphysician variability (ML models: kappa=0.83, SD 0.02; otolaryngologists: kappa=0.60, SD 0.07). Conclusions: Even though ML models deliver excellent performance in classifying ear disease, physicians and ML models each have their own strengths. ML models achieve consistent, high accuracy while considering only the given image, but show bias toward prevalent classes; human physicians have more variable performance but are not biased toward prevalence and can consider information beyond the image. To deliver the best patient care amid a shortage of otolaryngologists, our ML model can serve a cooperative role for clinicians with diverse expertise, provided it is kept in mind that models consider only images and may remain biased toward prevalent diseases even after data augmentation.
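
    The sketch below illustrates, in Python with scikit-learn, the evaluation pattern this abstract describes: per-class accuracy and Cohen's kappa computed on a balanced versus an imbalanced test set. The six class names, the class priors, and the stand-in "classifier" are hypothetical placeholders, not the study's data or model; only the metrics mirror those the abstract reports.

```python
# Hypothetical sketch: per-class accuracy and Cohen's kappa on balanced vs.
# imbalanced test sets, illustrating how overall accuracy can look strong on
# an imbalanced set while rare classes suffer. Requires numpy + scikit-learn.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Six hypothetical ear-disease categories (placeholders, not the study's labels).
CLASSES = ["normal", "otitis_media", "perforation",
           "cholesteatoma", "myringitis", "tumor"]

def per_class_accuracy(y_true, y_pred, n_classes):
    """Per-class recall: diagonal of the confusion matrix over row sums."""
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes)))
    return cm.diagonal() / cm.sum(axis=1).clip(min=1)

rng = np.random.default_rng(0)
y_balanced = np.repeat(np.arange(6), 50)        # 50 images per class, 300 total
y_imbalanced = rng.choice(6, size=300,          # skewed toward prevalent classes
                          p=[0.40, 0.30, 0.15, 0.08, 0.05, 0.02])

def simulate_predictions(y_true):
    """Stand-in classifier: 80% correct; errors favor prevalent classes."""
    pred = y_true.copy()
    wrong = rng.random(len(y_true)) > 0.80
    pred[wrong] = rng.choice(6, size=wrong.sum(),
                             p=[0.50, 0.30, 0.10, 0.05, 0.03, 0.02])
    return pred

for name, y in [("balanced", y_balanced), ("imbalanced", y_imbalanced)]:
    y_hat = simulate_predictions(y)
    print(f"{name}: overall acc={np.mean(y == y_hat):.2%}, "
          f"kappa={cohen_kappa_score(y, y_hat):.2f}")
    for cls, acc in zip(CLASSES, per_class_accuracy(y, y_hat, 6)):
        print(f"  {cls:>14s}: {acc:.2%}")
```

    Running this shows the effect the abstract names: on the imbalanced set the rare classes contribute few cases, so errors that default to prevalent classes barely dent overall accuracy even though per-class accuracy for the rare classes collapses.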

    Effect of deep learning-based assistive technology use on chest radiograph interpretation by emergency department physicians: a prospective interventional simulation-based study

    Background: Interpretation of chest radiographs (CRs) by emergency department (ED) physicians is inferior to that by radiologists. Recent studies have investigated deep learning-based assistive technology for CR interpretation (DLCR), although its relevance to ED physicians remains unclear. This study aimed to investigate whether DLCR supports the CR interpretation and clinical decision-making of ED physicians. Methods: We conducted a prospective interventional study using a web-based performance assessment system. Participants were recruited through an official notice targeting board-certified emergency physicians and residents working at the study ED. Of the eight ED physicians who volunteered, seven were included; one withdrew during the performance assessment. The seven physicians' CR interpretations and clinical decision-making were assessed on clinical data from 388 patients, including detection of the target lesion with DLCR. Participant performance was evaluated by area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, and accuracy; decision-making consistency was measured by kappa statistics. ED physicians with < 24 months of experience were defined as 'inexperienced'. Results: Among the 388 simulated cases, 259 (66.8%) had a CR abnormality. Their median abnormality score measured by DLCR was 59.3 (31.77, 76.25), compared with 3.35 (1.57, 8.89) for cases with a normal CR. ED physicians performed differently with and without DLCR (AUROC 0.801, P < 0.001); the diagnostic sensitivity and accuracy of CR interpretation were higher for all ED physicians working with DLCR than for those working without it. The overall kappa value for decision-making consistency was 0.902 (95% confidence interval [CI] 0.884-0.920); the kappa value for the experienced group was 0.956 (95% CI 0.934-0.979), and that for the inexperienced group was 0.862 (95% CI 0.835-0.889). Conclusions: This study presents preliminary evidence that ED physicians using DLCR in a clinical setting interpret CRs better than those who do not. DLCR use influenced the clinical decision-making of inexperienced physicians more strongly than that of experienced physicians. These findings require prospective validation before DLCR can be recommended for routine clinical practice.
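
    As a minimal illustration of the reader-performance metrics this abstract names, the Python sketch below computes AUROC over continuous abnormality scores, sensitivity/specificity/accuracy at an operating threshold, and Cohen's kappa for decision consistency. The score distributions, the 30-point threshold, and the 5% decision-flip rate are invented for the simulation; only the case mix (259 abnormal of 388) and the median scores echo the abstract.

```python
# Hypothetical sketch of AUROC, sensitivity/specificity/accuracy, and kappa,
# on simulated data echoing the abstract's case mix. Not the study's code.
# Requires numpy + scikit-learn.
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

rng = np.random.default_rng(1)

# 388 cases: 259 abnormal, 129 normal, matching the reported case mix.
y_true = np.array([1] * 259 + [0] * 129)

# Simulated abnormality scores centered near the reported medians (59.3 vs 3.35).
scores = np.where(y_true == 1,
                  rng.normal(59.3, 20.0, size=388),
                  rng.normal(3.35, 4.0, size=388)).clip(0, 100)

auroc = roc_auc_score(y_true, scores)

threshold = 30.0                                  # hypothetical operating point
y_pred = (scores >= threshold).astype(int)
sensitivity = ((y_pred == 1) & (y_true == 1)).sum() / (y_true == 1).sum()
specificity = ((y_pred == 0) & (y_true == 0)).sum() / (y_true == 0).sum()
accuracy = (y_pred == y_true).mean()

# Decision consistency between two reading sessions (e.g., with vs. without
# assistance): flip 5% of decisions to mimic imperfect agreement.
second_read = y_pred.copy()
flip = rng.random(388) < 0.05
second_read[flip] = 1 - second_read[flip]
kappa = cohen_kappa_score(y_pred, second_read)

print(f"AUROC={auroc:.3f}  sens={sensitivity:.2%}  spec={specificity:.2%}  "
      f"acc={accuracy:.2%}  kappa={kappa:.3f}")
```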