20 research outputs found

    Technical skill assessment in minimally invasive surgery using artificial intelligence: a systematic review.

    BACKGROUND Technical skill assessment in surgery relies on expert opinion. Therefore, it is time-consuming, costly, and often lacks objectivity. Analysis of intraoperative data by artificial intelligence (AI) has the potential for automated technical skill assessment. The aim of this systematic review was to analyze the performance, external validity, and generalizability of AI models for technical skill assessment in minimally invasive surgery. METHODS A systematic search of Medline, Embase, Web of Science, and IEEE Xplore was performed to identify original articles reporting the use of AI in the assessment of technical skill in minimally invasive surgery. Risk of bias (RoB) and quality of the included studies were analyzed according to the Quality Assessment of Diagnostic Accuracy Studies criteria and the modified Joanna Briggs Institute checklists, respectively. Findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. RESULTS In total, 1958 articles were identified; 50 met the eligibility criteria and were analyzed. Motion data extracted from surgical videos (n = 25) or kinematic data from robotic systems or sensors (n = 22) were the most frequent input data for AI. Most studies used deep learning (n = 34) and predicted technical skills using an ordinal assessment scale (n = 36) with good accuracy in simulated settings. However, all proposed models were in the development stage; only 4 studies were externally validated, and 8 showed a low RoB. CONCLUSION AI showed good performance in technical skill assessment in minimally invasive surgery. However, models often lacked external validity and generalizability. Therefore, models should be benchmarked using predefined performance metrics and tested in clinical implementation studies.

    Proposal and multicentric validation of a laparoscopic Roux-en-Y gastric bypass surgery ontology.

    BACKGROUND Phase and step annotation in surgical videos is a prerequisite for surgical scene understanding and for downstream tasks like intraoperative feedback or assistance. However, most ontologies are applied to small monocentric datasets and lack external validation. To overcome these limitations, an ontology for phases and steps of laparoscopic Roux-en-Y gastric bypass (LRYGB) is proposed and validated on a multicentric dataset in terms of inter- and intra-rater reliability (inter-/intra-RR). METHODS The proposed LRYGB ontology consists of 12 phase and 46 step definitions that are hierarchically structured. Two board-certified surgeons (raters) with > 10 years of clinical experience applied the proposed ontology to two datasets: (1) StraBypass40, consisting of 40 LRYGB videos from Nouvel Hôpital Civil, Strasbourg, France, and (2) BernBypass70, consisting of 70 LRYGB videos from Inselspital, Bern University Hospital, Bern, Switzerland. To assess inter-RR, the two raters' annotations of ten randomly chosen videos each from StraBypass40 and BernBypass70 were compared. To assess intra-RR, ten randomly chosen videos were annotated twice by the same rater and the annotations were compared. Inter-RR was calculated using Cohen's kappa. Additionally, for inter- and intra-RR, accuracy, precision, recall, F1-score, and application-dependent metrics were applied. RESULTS The mean ± SD video duration was 108 ± 33 min and 75 ± 21 min in StraBypass40 and BernBypass70, respectively. The proposed ontology shows an inter-RR of 96.8 ± 2.7% for phases and 85.4 ± 6.0% for steps on StraBypass40, and 94.9 ± 5.8% for phases and 76.1 ± 13.9% for steps on BernBypass70. The overall Cohen's kappa of inter-RR was 95.9 ± 4.3% for phases and 80.8 ± 10.0% for steps. Intra-RR showed an accuracy of 98.4 ± 1.1% for phases and 88.1 ± 8.1% for steps. CONCLUSION The proposed ontology shows excellent inter- and intra-RR and should therefore be implemented routinely in phase and step annotation of LRYGB.
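    The inter-rater agreement above is quantified with Cohen's kappa, which corrects the observed agreement between two raters for the agreement expected by chance. A minimal frame-wise sketch (illustrative only, not the study's implementation; the labels and function name are hypothetical):

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters' frame-wise labels."""
        assert len(rater_a) == len(rater_b) and rater_a
        n = len(rater_a)
        # Observed agreement: fraction of frames where raters match
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement: from each rater's marginal label frequencies
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        p_e = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Toy example: per-frame phase labels from two raters
    a = ["prep", "dissection", "dissection", "clipping", "clipping"]
    b = ["prep", "dissection", "clipping", "clipping", "clipping"]
    print(round(cohens_kappa(a, b), 3))
    ```

    Kappa of 1 means perfect agreement, 0 means agreement no better than chance, which is why it is a stricter summary than raw percentage agreement.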

    CholecTrack20: A Dataset for Multi-Class Multiple Tool Tracking in Laparoscopic Surgery

    Tool tracking in surgical videos is vital in computer-assisted intervention for tasks like surgeon skill assessment, safety zone estimation, and human-machine collaboration during minimally invasive procedures. The lack of large-scale datasets hampers the implementation of artificial intelligence in this domain. Current datasets exhibit an overly generic tracking formalization that often lacks surgical context: a deficiency that becomes evident when tools move out of the camera's scope, resulting in rigid trajectories that hinder realistic surgical representation. This paper addresses the need for a more precise and adaptable tracking formalization tailored to the intricacies of endoscopic procedures by introducing CholecTrack20, an extensive dataset meticulously annotated for multi-class multi-tool tracking across three perspectives representing the different ways of considering the temporal duration of a tool trajectory: (1) intraoperative, (2) intracorporeal, and (3) visibility within the camera's scope. The dataset comprises 20 laparoscopic videos with over 35,000 frames and 65,000 annotated tool instances with details on spatial location, category, identity, operator, phase, and surgical visual conditions. This detailed dataset caters to the evolving assistive requirements within a procedure.

    ClipAssistNet: bringing real-time safety feedback to operating rooms

    Purpose: Cholecystectomy is one of the most common laparoscopic procedures. A critical phase of laparoscopic cholecystectomy consists of clipping the cystic duct and artery before cutting them. Surgeons can improve clipping safety by ensuring full visibility of the clipper while enclosing the artery or the duct with the clip applier jaws. This can prevent unintentional interaction with neighboring tissues or clip misplacement. In this article, we present novel real-time feedback to ensure safe visibility of the instrument during this critical phase. This feedback prompts surgeons to keep the tip of their clip applier visible while operating. Methods: We present a new dataset of 300 laparoscopic cholecystectomy videos with frame-wise annotation of clipper tip visibility. We further present ClipAssistNet, a neural-network-based image classifier that detects clipper tip visibility in single frames. ClipAssistNet ensembles the predictions of 5 neural networks trained on different subsets of the dataset. Results: Our model learns to classify clipper tip visibility by detecting its presence in the image. Measured on a separate test set, ClipAssistNet classifies clipper tip visibility with an AUROC of 0.9107 and 66.15% specificity at 95% sensitivity. Additionally, it can perform real-time inference (16 FPS) on an embedded computing board, which enables its deployment in operating room settings. Conclusion: This work presents a new application of computer-assisted surgery for laparoscopic cholecystectomy, namely real-time feedback on adequate visibility of the clip applier. We believe this feedback can increase surgeons' attentiveness when departing from safe visibility during the critical clipping of the cystic duct and artery.
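    An operating point like "66.15% specificity at 95% sensitivity" is read off the ROC curve by lowering the classifier's decision threshold until the target sensitivity is reached and reporting the specificity there. A minimal sketch of that computation (illustrative only; the labels, scores, and function name are hypothetical, not ClipAssistNet's code):

    ```python
    def specificity_at_sensitivity(labels, scores, target_sens=0.95):
        """Specificity at the highest threshold reaching target sensitivity.

        labels: 1 = positive (e.g. clipper tip visible), 0 = negative.
        scores: classifier confidence for the positive class.
        """
        pos = sum(labels)
        neg = len(labels) - pos
        # Sweep candidate thresholds from strict (high) to lenient (low)
        for t in sorted(set(scores), reverse=True):
            preds = [s >= t for s in scores]
            tp = sum(p and l == 1 for p, l in zip(preds, labels))
            if tp / pos >= target_sens:
                tn = sum((not p) and l == 0 for p, l in zip(preds, labels))
                return tn / neg
        return 0.0

    labels = [1, 1, 1, 1, 0, 0, 0, 0]
    scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
    print(specificity_at_sensitivity(labels, scores))
    ```

    Fixing sensitivity high, as done here, makes the feedback miss few unsafe frames at the cost of some false alarms, which is the usual trade-off for safety alerts.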

    Surgical Phase Recognition: From Public Datasets to Real-World Data

    Automated recognition of surgical phases is a prerequisite for computer-assisted analysis of surgeries. Research on phase recognition has been mostly driven by publicly available datasets of laparoscopic cholecystectomy (Lap Chole) videos. Yet, videos observed in real-world settings may contain challenges, such as additional phases and longer videos, that are missing in curated public datasets. In this work, we study (i) the possible data distribution discrepancy between videos observed in a given medical center and videos from existing public datasets, and (ii) the potential impact of this distribution difference on model development. To this end, we gathered a large, private dataset of 384 Lap Chole videos. Our dataset contained all videos, including emergency surgeries and teaching cases, recorded over a continuous time frame of five years. We observed strong differences between our dataset and the most commonly used public dataset for surgical phase recognition, Cholec80. For instance, our videos were much longer, included additional phases, and had more complex transitions between phases. We further trained and compared several state-of-the-art phase recognition models on our dataset. The models' performances varied greatly across surgical phases and videos. In particular, our results highlighted the challenge of recognizing extremely underrepresented phases (usually missing in public datasets); the major phases were recognized with at least 76 percent recall. Overall, our results highlight the need to better understand the distribution of the video data that phase recognition models are trained on.
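    Per-phase recall, as reported above, is computed frame-wise for each ground-truth phase separately, which exposes underrepresented phases that a single overall accuracy would hide. A minimal sketch (illustrative only; the function and phase names are hypothetical, not the paper's implementation):

    ```python
    from collections import defaultdict

    def per_phase_recall(true_phases, pred_phases):
        """Frame-wise recall for each ground-truth surgical phase."""
        hits = defaultdict(int)
        total = defaultdict(int)
        for t, p in zip(true_phases, pred_phases):
            total[t] += 1          # frames belonging to phase t
            if t == p:
                hits[t] += 1       # frames of phase t recognized correctly
        return {phase: hits[phase] / total[phase] for phase in total}

    true_seq = ["prep", "prep", "dissection", "dissection", "dissection", "closure"]
    pred_seq = ["prep", "dissection", "dissection", "dissection", "closure", "closure"]
    print(per_phase_recall(true_seq, pred_seq))
    ```

    A rare phase with only a handful of frames can have near-zero recall while overall frame accuracy stays high, which is exactly the failure mode the study observed on real-world data.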

    Work Characteristics of Acute Care Surgeons at a Swiss Tertiary Care Hospital: A Prospective One-Month Snapshot Study.

    BACKGROUND Multiple acute care surgery (ACS) working models have been implemented. To optimize resources and on-call rosters, knowledge about work characteristics is required. Therefore, this study aimed to investigate the daily work characteristics of ACS surgeons at a Swiss tertiary care hospital. METHODS Single-center prospective snapshot study. In February 2020, ACS fellows prospectively recorded their work characteristics, case volume, and surgical case mix for 20 day shifts and 16 night shifts. Work characteristics were categorized into 11 different activities and documented in intervals of 30 min. Descriptive statistics were applied. RESULTS A total of 432.5 working hours (h) were documented and characterized. The three main activities, 'surgery,' 'patient consultations' and 'administrative work,' each accounted for 30.8 to 35.9% of the documented working time. A total of 46 surgical interventions were performed: 16 elective and 15 emergency interventions during day shifts, and 15 emergency interventions during night shifts. For surgery, two peaks were observed, between 10:00 a.m.-02:00 p.m. and 08:00 p.m.-11:00 p.m. A total of 225 patients were consulted, with a first peak between 08:00 a.m. and 11:00 a.m. and a second, wider peak between 02:00 p.m. and 02:00 a.m. CONCLUSION The three main activities 'surgery,' 'patient consultations' and 'administrative work' were comparable, at approximately one third of the working time each. There was a bimodal temporal distribution for both surgery and patient consultations. These results may help to improve hospital resources and on-call rosters of ACS services.

    From Bit to Bedside – Artificial Intelligence and its Potential in Surgery

    Many of today's scientific breakthroughs are enabled by artificial intelligence (AI). As most surgeons are not familiar with computer science, this review article explains basic concepts of AI and deep learning, how they are applied to surgery, and what potential can be leveraged through AI in surgery.

    Enhanced Recovery in Emergency Abdominal Surgery

    Enhanced recovery after surgery (ERAS) protocols define evidence-based treatment bundles intended to improve clinical outcomes and were originally established in elective surgery. Nevertheless, ERAS protocols have also been implemented in emergency abdominal surgery, resulting in reduced hospital length of stay (H-LOS) and morbidity. However, the implementation of ERAS in emergency abdominal surgery had no impact on mortality. Elderly patients undergoing emergency abdominal surgery are also expected to profit from ERAS programs; however, scientific evidence is currently lacking.