18 research outputs found

    From bench to bedside – current clinical and translational challenges in fibula free flap reconstruction

    Fibula free flaps (FFF) represent a workhorse for different reconstructive scenarios in facial surgery. While FFF were initially established for mandible reconstruction, advancements in planning and microsurgical techniques have paved the way toward a broader spectrum of indications, including maxillary defects. Essential factors to improve patient outcomes following FFF include minimal donor site morbidity, adequate bone length, and dual blood supply. Yet, persisting clinical and translational challenges hamper the effectiveness of FFF. In the preoperative phase, virtual surgical planning and artificial intelligence tools carry untapped potential, while the intraoperative role of individualized surgical templates and bioprinted prostheses remains to be summarized. Further, the integration of novel flap monitoring technologies into postoperative patient management has been the subject of translational and clinical research efforts. Overall, there is a paucity of studies condensing the body of knowledge on emerging technologies and techniques in FFF surgery. Herein, we aim to review current challenges and possible solutions in FFF reconstruction. This line of research may serve as a pocket guide on cutting-edge developments and facilitate future targeted research in FFF.

    Assessing the role of advanced artificial intelligence as a tool in multidisciplinary tumor board decision-making for primary head and neck cancer cases

    Background: Head and neck squamous cell carcinoma (HNSCC) is a complex malignancy that requires a multidisciplinary approach in clinical practice, especially in tumor board discussions. In recent years, artificial intelligence has emerged as a tool to assist healthcare professionals in making informed decisions. This study investigates the application of ChatGPT 3.5 and ChatGPT 4.0, natural language processing models, in tumor board decision-making. Methods: We conducted a pilot study in October 2023 on 20 consecutive head and neck cancer patients discussed in our multidisciplinary tumor board (MDT). Patients with a primary diagnosis of head and neck cancer were included. The MDT, ChatGPT 3.5, and ChatGPT 4.0 recommendations for each patient were compared by two independent reviewers, who graded the number of therapy options, the clinical recommendation, the explanation, and the summarization. Results: ChatGPT 3.5 provided mostly general answers for surgery, chemotherapy, and radiation therapy. For clinical recommendation, explanation, and summarization, ChatGPT 3.5 and 4.0 scored well but proved to be mostly assisting tools, suggesting significantly more therapy options than our MDT, while some of the recommended treatment modalities, such as primary immunotherapy, are not part of the current treatment guidelines. Conclusions: This research demonstrates that advanced AI models can currently merely assist in the MDT setting, since the current versions list common therapy options but sometimes recommend incorrect treatment options and, in the case of ChatGPT 3.5, lack information on the source material.
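    The abstract does not state which statistical test underlies the "significantly more therapy options" finding; as a rough sketch under that caveat, the per-patient counts of therapy options suggested by the MDT and by ChatGPT could be compared with a paired non-parametric test. All numbers below are invented placeholders, not study data.

```python
# Hypothetical sketch: comparing the number of therapy options per patient suggested
# by the MDT versus ChatGPT. The study abstract does not name the test used; a paired
# non-parametric comparison (Wilcoxon signed-rank) is shown as one plausible choice.
from scipy.stats import wilcoxon

# Illustrative counts for 20 patients (placeholder values, not the study's data)
mdt_options = [1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1]
gpt_options = [3, 3, 2, 4, 3, 2, 3, 4, 2, 3, 3, 2, 3, 3, 4, 2, 3, 3, 2, 3]

stat, p = wilcoxon(mdt_options, gpt_options)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.4f}")
```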

    Diagnosing lagophthalmos using artificial intelligence

    Lagophthalmos is the incomplete closure of the eyelids, posing a risk of corneal ulceration and blindness. It is a common symptom of various pathologies. We aimed to program a convolutional neural network (CNN) to automate lagophthalmos diagnosis. From June 2019 to May 2021, prospective data acquisition was performed on 30 patients seen at the Department of Plastic, Hand, and Reconstructive Surgery at the University Hospital Regensburg, Germany (IRB reference number: 20-2081-101). In addition, comparative data were gathered from 10 healthy individuals as a control group. The training set comprised 826 images, while the validation and testing sets consisted of 91 patient images each. Validation accuracy was 97.8% over the span of 64 epochs. The model was trained for 17.3 min. For training and validation, an average loss of 0.304 and 0.358 and a final loss of 0.276 and 0.157 were noted, respectively. The testing accuracy was 93.41% with a loss of 0.221. This study proposes a novel application for rapid and reliable lagophthalmos diagnosis. Our CNN-based approach combines effective anti-overfitting strategies, short training times, and high accuracy. Ultimately, this tool carries high translational potential to facilitate the physician’s workflow and improve overall lagophthalmos patient care.
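    The abstract reports dataset sizes, 64 training epochs, and loss/accuracy values, but not the network architecture or framework. The Keras sketch below only illustrates what such a binary eyelid-closure classifier could look like; the layer choices, input size, and dataset objects are assumptions rather than the published model.

```python
# Minimal sketch of a binary CNN classifier for eyelid photographs (lagophthalmos vs.
# complete closure). Architecture, input size, and framework are assumed for
# illustration; the publication does not disclose these details.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),                    # one simple anti-overfitting measure
    layers.Dense(1, activation="sigmoid"),  # binary output: lagophthalmos yes/no
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# train_ds and val_ds would be tf.data.Dataset objects built from the 826 training
# and 91 validation images described above (paths and loaders omitted here):
# history = model.fit(train_ds, validation_data=val_ds, epochs=64)
```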

    In-depth analysis of ChatGPT’s performance based on specific signaling words and phrases in the question stem of 2377 USMLE step 1 style questions

    ChatGPT has garnered attention as a multifaceted AI chatbot with potential applications in medicine. Despite intriguing preliminary findings in areas such as clinical management and patient education, there remains a substantial knowledge gap in comprehensively understanding the opportunities and limitations of ChatGPT’s capabilities, especially in medical test-taking and education. A total of n = 2,729 USMLE Step 1 practice questions were extracted from the Amboss question bank. After excluding 352 image-based questions, a total of 2,377 text-based questions were further categorized and entered manually into ChatGPT, and its responses were recorded. ChatGPT’s overall performance was analyzed based on question difficulty, category, and content with regard to specific signal words and phrases. ChatGPT achieved an overall accuracy rate of 55.8% across the n = 2,377 USMLE Step 1 preparation questions obtained from the Amboss online question bank. It demonstrated a significant inverse correlation between question difficulty and performance (rs = -0.306; p < 0.001), while maintaining accuracy comparable to the human user peer group across different levels of question difficulty. Notably, ChatGPT outperformed in serology-related questions (61.1% vs. 53.8%; p = 0.005) but struggled with ECG-related content (42.9% vs. 55.6%; p = 0.021). ChatGPT performed statistically significantly worse on pathophysiology-related question stems (signal phrase: “what is the most likely/probable cause”). Overall, ChatGPT performed consistently across various question categories and difficulty levels. These findings emphasize the need for further investigations to explore the potential and limitations of ChatGPT in medical examination and education.
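    The reported rs value corresponds to a Spearman rank correlation between question difficulty and answer correctness. A minimal sketch of that computation is shown below on synthetic placeholder data, not the study's records.

```python
# Sketch of the difficulty-vs-performance analysis: Spearman rank correlation between
# question difficulty tier and whether ChatGPT answered correctly. Data are simulated
# placeholders chosen only so that harder questions are answered correctly less often.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
difficulty = rng.integers(1, 6, size=200)              # assumed 1-5 difficulty tiers
p_correct = 0.75 - 0.08 * difficulty                   # harder -> lower hit rate
correct = (rng.random(200) < p_correct).astype(int)    # 1 = answered correctly

rho, p = spearmanr(difficulty, correct)
print(f"Spearman rho = {rho:.3f}, p = {p:.4f}")
```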

    30-Day Postoperative Outcomes in Adults with Obstructive Sleep Apnea Undergoing Upper Airway Surgery

    Background: Obstructive sleep apnea (OSA) is a chronic disorder of the upper airway. OSA surgery has oftentimes been studied based on outcomes from single institutions. We retrospectively analyzed a multi-institutional national database to investigate the outcomes of OSA surgery and identify risk factors for complications. Methods: We reviewed the American College of Surgeons National Surgical Quality Improvement Program (NSQIP) database (2008–2020) to identify patients who underwent OSA surgery. The postoperative outcomes of interest included 30-day surgical and medical complications, reoperation, readmission, and mortality. Additionally, we assessed risk-associated factors for complications, including comorbidities and preoperative blood values. Results: The study population included 4662 patients. Obesity (n = 2909; 63%) and hypertension (n = 1435; 31%) were the most frequent comorbidities. While two (0.04%) deaths were reported within the 30-day postoperative period, the total complication rate was 6.3% (n = 292). Increased BMI (p = 0.01), male sex (p = 0.03), history of diabetes (p = 0.002), hypertension requiring treatment (p = 0.03), inpatient setting (p < 0.0001), and American Society of Anesthesiologists (ASA) physical status classification scores ≥ 4 (p < 0.0001) were identified as risk-associated factors for any postoperative complication. Increased alkaline phosphatase (ALP) was identified as a risk-associated factor for the occurrence of any complication (p = 0.02) and of medical complications (p = 0.001). Conclusions: OSA surgery outcomes were analyzed at the national level, with complications shown to depend on ALP levels, male sex, increased BMI, and diabetes mellitus. While OSA surgery has demonstrated an overall positive safety profile, the implementation of these novel risk-associated variables into the perioperative workflow may further enhance patient care.
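    The abstract does not specify the statistical model behind the risk-factor analysis; multivariable logistic regression is a common choice for NSQIP-style outcome studies and is sketched below on a synthetic data frame. All variable names, values, and coefficients are invented for illustration only.

```python
# Hedged sketch of a multivariable risk-factor analysis for any 30-day complication.
# The data frame is synthetic; in the study, rows would come from the NSQIP extract.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "bmi": rng.normal(32, 6, n),
    "male": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),
    "alp": rng.normal(80, 25, n),            # alkaline phosphatase (U/L)
})
# Simulated outcome so the example runs end to end (coefficients are arbitrary)
logit = 0.03 * (df["bmi"] - 32) + 0.4 * df["diabetes"] + 0.01 * (df["alp"] - 80) - 3
df["complication"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["bmi", "male", "diabetes", "hypertension", "alp"]])
result = sm.Logit(df["complication"], X).fit(disp=False)
print(result.summary2().tables[1][["Coef.", "P>|z|"]])   # odds ratio = exp(Coef.)
```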

    Pure Wisdom or Potemkin Villages? A Comparison of ChatGPT 3.5 and ChatGPT 4 on USMLE Step 3 Style Questions: Quantitative Analysis

    Background: The United States Medical Licensing Examination (USMLE) has been critical in medical education since 1992, testing various aspects of a medical student’s knowledge and skills through different steps, based on their training level. Artificial intelligence (AI) tools, including chatbots like ChatGPT, are emerging technologies with potential applications in medicine. However, comprehensive studies analyzing ChatGPT’s performance on USMLE Step 3 in large-scale scenarios and comparing different versions of ChatGPT are limited. Objective: This paper aimed to analyze ChatGPT’s performance on USMLE Step 3 practice test questions to better elucidate the strengths and weaknesses of AI use in medical education and deduce evidence-based strategies to counteract AI cheating. Methods: A total of 2069 USMLE Step 3 practice questions were extracted from the AMBOSS study platform. After excluding 229 image-based questions, a total of 1840 text-based questions were further categorized and entered into ChatGPT 3.5, while a subset of 229 questions was entered into ChatGPT 4. Responses were recorded, and the accuracy of ChatGPT answers as well as its performance in different test question categories and for different difficulty levels were compared between both versions. Results: Overall, ChatGPT 4 demonstrated a statistically significant superior performance compared to ChatGPT 3.5, achieving an accuracy of 84.7% (194/229) and 56.9% (1047/1840), respectively. A noteworthy correlation was observed between the length of test questions and the performance of ChatGPT 3.5 (ρ=–0.069; P=.003), which was absent in ChatGPT 4 (P=.87). Additionally, the difficulty of test questions, as categorized by AMBOSS hammer ratings, showed a statistically significant correlation with performance for both ChatGPT versions, with ρ=–0.289 for ChatGPT 3.5 and ρ=–0.344 for ChatGPT 4. ChatGPT 4 surpassed ChatGPT 3.5 at all levels of test question difficulty, except for the 2 highest difficulty tiers (4 and 5 hammers), where statistical significance was not reached. Conclusions: In this study, ChatGPT 4 demonstrated remarkable proficiency in taking USMLE Step 3, with an accuracy rate of 84.7% (194/229), outshining ChatGPT 3.5 with an accuracy rate of 56.9% (1047/1840). Although ChatGPT 4 performed exceptionally, it encountered difficulties in questions requiring the application of theoretical concepts, particularly in cardiology and neurology. These insights are pivotal for the development of examination strategies that are resilient to AI and underline the promising role of AI in the realm of medical education and diagnostics.
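    Given the reported counts (194/229 correct for ChatGPT 4 versus 1047/1840 for ChatGPT 3.5), the accuracy comparison can be illustrated with a standard test on a 2x2 contingency table; the abstract does not specify which test was used, so the chi-square test below is only one plausible choice.

```python
# Sketch of the head-to-head accuracy comparison between ChatGPT 4 and ChatGPT 3.5
# using the correct/incorrect counts reported in the abstract.
from scipy.stats import chi2_contingency

table = [
    [194, 229 - 194],     # ChatGPT 4: correct, incorrect
    [1047, 1840 - 1047],  # ChatGPT 3.5: correct, incorrect
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2e}")
```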

    ChatGPT’s Response Consistency: A Study on Repeated Queries of Medical Examination Questions

    (1) Background: As the field of artificial intelligence (AI) evolves, tools like ChatGPT are increasingly integrated into various domains of medicine, including medical education and research. Given the critical nature of medicine, it is of paramount importance that AI tools offer a high degree of reliability in the information they provide. (2) Methods: A total of n = 450 medical examination questions were manually entered three times each into ChatGPT 3.5 and ChatGPT 4. The responses were collected, and their accuracy and consistency were statistically analyzed across the series of entries. (3) Results: ChatGPT 4 achieved a statistically significantly higher accuracy of 85.7%, compared to 57.7% for ChatGPT 3.5 (p < 0.001). Furthermore, ChatGPT 4 was more consistent, answering 77.8% of questions correctly across all rounds, a significant increase from the 44.9% observed for ChatGPT 3.5 (p < 0.001). (4) Conclusions: The findings underscore the increased accuracy and dependability of ChatGPT 4 in the context of medical education and potential clinical decision making. Nonetheless, the research emphasizes the indispensable nature of human-delivered healthcare and the vital role of continuous assessment in leveraging AI in medicine.
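    One way to operationalize the reported consistency figure is to count a question as consistently correct only when all three repeated entries yielded the right answer. The toy data frame below sketches that tally; it is illustrative and not the study's data.

```python
# Sketch of a per-question consistency tally across three repeated entries.
# Each row is one question; True means the model answered that round correctly.
import pandas as pd

runs = pd.DataFrame({
    "round_1": [True, True, False, True],
    "round_2": [True, False, False, True],
    "round_3": [True, True, False, True],
})

accuracy_per_round = runs.mean()                 # accuracy of each entry round
consistently_correct = runs.all(axis=1).mean()   # correct in all three rounds
print(accuracy_per_round)
print(f"Consistently correct: {consistently_correct:.1%}")
```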

    Postoperative free flap monitoring in reconstructive surgery—man or machine?

    Free tissue transfer is widely used for the reconstruction of complex tissue defects. The survival of free flaps depends on the patency and integrity of the microvascular anastomosis. Accordingly, the early detection of vascular compromise and prompt intervention are indispensable to increase flap survival rates. Such monitoring strategies are commonly integrated into the perioperative algorithm, with clinical examination still being considered the gold standard for routine free flap monitoring. Despite its widespread acceptance as state of the art, the clinical examination also has its pitfalls, such as the limited applicability in buried flaps and the risk of poor interrater agreement due to inconsistent flap (failure) appearances. To compensate for these shortcomings, a plethora of alternative monitoring tools have been proposed in recent years, each with inherent strengths and limitations. Given the ongoing demographic change, the number of older patients requiring free flap reconstruction, e.g., after cancer resection, is rising. Yet, age-related morphologic changes may complicate free flap evaluation in elderly patients and delay the prompt detection of clinical signs of flap compromise. In this review, we provide an overview of currently available and employed methods for free flap monitoring, with a special focus on elderly patients and how senescence may impact standard free flap monitoring strategies.