
    Situation Interpretation for Knowledge- and Model Based Laparoscopic Surgery

    To manage the influx of information into surgical practice, new man-machine interaction methods are necessary to prevent information overload. This work presents an approach that automatically segments surgeries into phases and selects the most appropriate pieces of information for the current situation. In this way, assistance systems can adapt themselves to the needs of the surgeon rather than the other way around.
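
    As a rough illustration of the phase-segmentation idea described above, the sketch below smooths frame-wise phase probabilities with a Viterbi pass so that downstream information selection can key off a stable current phase. This is a hypothetical minimal example, not the cited work's method; the phase names, probabilities, and transition matrix are made-up placeholders.

        # Hypothetical sketch: temporally smoothing per-frame phase estimates so an
        # assistance system can key displayed information to the current phase.
        import numpy as np

        PHASES = ["preparation", "dissection", "clipping", "retrieval"]  # illustrative only

        def viterbi_smooth(frame_probs: np.ndarray, transition: np.ndarray) -> list[int]:
            """Return the most likely phase index per frame under sticky transitions."""
            n_frames, n_phases = frame_probs.shape
            log_p = np.log(frame_probs + 1e-12)
            log_t = np.log(transition + 1e-12)
            score = np.empty((n_frames, n_phases))
            back = np.zeros((n_frames, n_phases), dtype=int)
            score[0] = log_p[0]
            for t in range(1, n_frames):
                cand = score[t - 1][:, None] + log_t      # previous phase -> current phase
                back[t] = cand.argmax(axis=0)
                score[t] = cand.max(axis=0) + log_p[t]
            path = [int(score[-1].argmax())]
            for t in range(n_frames - 1, 0, -1):
                path.append(int(back[t, path[-1]]))
            return path[::-1]

        frame_probs = np.random.dirichlet(np.ones(len(PHASES)), size=120)  # stand-in classifier output
        transition = np.full((4, 4), 0.02) + np.eye(4) * 0.92              # sticky transition prior
        phase_per_frame = viterbi_smooth(frame_probs, transition)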

    Toward a Knowledge-Driven Context-Aware System for Surgical Assistance

    Complications in complex surgeries are increasing, making efficient surgical assistance a real need. In this work, an ontology-based context-aware system was developed for surgical training and assistance during Thoracentesis, using image processing and semantic technologies. We evaluated the Thoracentesis ontology and implemented a paradigmatic test scenario to check the efficacy of the system by recognizing contextual information, e.g. the presence of surgical instruments on the table. The framework was able to retrieve contextual information about the current surgical activity along with information on the need for, or presence of, a surgical instrument.
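
    To make the ontology-based reasoning concrete, the sketch below asserts that an instrument was detected on the table and queries which activity it supports, using rdflib and SPARQL. The namespace, class, and property names are illustrative assumptions, not the actual Thoracentesis ontology.

        # Hypothetical sketch of the kind of query a context-aware system might run
        # after image processing reports an instrument on the instrument table.
        from rdflib import Graph, Literal, Namespace, RDF

        SURG = Namespace("http://example.org/thoracentesis#")  # made-up namespace
        g = Graph()
        g.add((SURG.Syringe, RDF.type, SURG.SurgicalInstrument))
        g.add((SURG.Syringe, SURG.usedInActivity, SURG.FluidAspiration))
        g.add((SURG.Syringe, SURG.detectedOnTable, Literal(True)))

        results = g.query("""
            PREFIX surg: <http://example.org/thoracentesis#>
            SELECT ?instrument ?activity WHERE {
                ?instrument a surg:SurgicalInstrument ;
                            surg:detectedOnTable true ;
                            surg:usedInActivity ?activity .
            }
        """)
        for instrument, activity in results:
            print(f"{instrument} present -> supports activity {activity}")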

    Kontextsensitivität für den Operationssaal der Zukunft (Context Awareness for the Operating Room of the Future)

    The operating room of the future is a topic of high interest. In this thesis, which is among the first in the recently defined field of Surgical Data Science, three major topics for automated context awareness in the OR of the future are examined: improved surgical workflow analysis, the newly developed event impact factors, and, as an application that combines these and other concepts, the unified surgical display.

    Image-Based Scene Analysis for Computer-Assisted Laparoscopic Surgery

    This thesis is concerned with image-based scene analysis for computer-assisted laparoscopic surgery. The focus lies on how to extract different types of information from laparoscopic video data. Methods for semantic analysis can be used to determine which instruments and organs are currently visible and where they are located. Quantitative analysis provides numerical information on the size and distances of structures. Workflow analysis uses information from previously seen images to estimate the progression of surgery. To demonstrate that the proposed methods function in real-world scenarios, multiple evaluations were performed on actual laparoscopic image data recorded from surgeries. The proposed methods for semantic and quantitative analysis were successfully evaluated in live phantom and animal studies and were also used during a live gastric bypass on a human patient.
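
    As one small example of what quantitative analysis can mean in this setting, the sketch below converts the pixel width of a binary instrument mask into an approximate metric width using the pinhole camera model. It is a simplified illustration under assumed intrinsics and depth, not the pipeline of the thesis.

        # Minimal sketch: rough metric width of a segmented instrument from its mask,
        # a depth estimate (e.g. from stereo), and the camera focal length in pixels.
        import numpy as np

        def pixel_width_to_mm(mask: np.ndarray, depth_mm: float, fx: float) -> float:
            """Convert the widest row of a binary mask into an approximate width in mm."""
            rows = np.where(mask.any(axis=1))[0]
            if rows.size == 0:
                return 0.0
            widths_px = mask[rows].sum(axis=1)     # occupied pixels per row
            max_width_px = float(widths_px.max())
            return max_width_px * depth_mm / fx    # pinhole model: X = x * Z / fx

        # toy example: a 12-pixel-wide blob seen at 80 mm depth, focal length 550 px
        mask = np.zeros((480, 640), dtype=bool)
        mask[200:260, 300:312] = True
        print(round(pixel_width_to_mm(mask, depth_mm=80.0, fx=550.0), 2), "mm")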

    Why Deep Surgical Models Fail?: Revisiting Surgical Action Triplet Recognition through the Lens of Robustness

    Surgical action triplet recognition provides a better understanding of the surgical scene. This task is of high relevance, as it provides the surgeon with context-aware support and safety. The current go-to strategy for improving performance is the development of new network mechanisms. However, the performance of current state-of-the-art techniques is substantially lower than that on other surgical tasks. Why is this happening? This is the question that we address in this work. We present the first study to understand the failure of existing deep learning models through the lens of robustness and explainability. First, we study existing models under weak and strong ÎŽ-perturbations via an adversarial optimisation scheme. We then provide the failure modes via feature-based explanations. Our study reveals that the key to improving performance and increasing reliability lies in the core and spurious attributes. Our work opens the door to more trustworthy and reliable deep learning models in surgical science.
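
    For readers unfamiliar with ÎŽ-perturbation probes, the sketch below shows an FGSM-style attack bounded in the L-infinity norm, which is one common way to generate such perturbations. The model, loss, and epsilon are placeholders; the paper's actual optimisation scheme and perturbation strengths are not reproduced here.

        # Sketch of a bounded delta-perturbation: find frames + delta, ||delta||_inf <= eps,
        # that increases the classification loss of a (placeholder) recognition model.
        import torch
        import torch.nn.functional as F

        def fgsm_perturb(model: torch.nn.Module, frames: torch.Tensor,
                         labels: torch.Tensor, epsilon: float) -> torch.Tensor:
            frames = frames.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(frames), labels)   # placeholder loss
            loss.backward()
            delta = epsilon * frames.grad.sign()            # one signed gradient step
            return (frames + delta).detach().clamp(0.0, 1.0)

        # toy usage with a placeholder classifier over 8 hypothetical "triplet" classes
        model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 8))
        frames = torch.rand(4, 3, 64, 64)
        labels = torch.randint(0, 8, (4,))
        perturbed = fgsm_perturb(model, frames, labels, epsilon=8 / 255)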

    SAGES consensus recommendations on an annotation framework for surgical video

    Background: The growing interest in analysis of surgical video through machine learning has led to increased research efforts; however, common methods of annotating video data are lacking. There is a need to establish recommendations on the annotation of surgical video data to enable assessment of algorithms and multi-institutional collaboration. Methods: Four working groups were formed from a pool of participants that included clinicians, engineers, and data scientists. The working groups were focused on four themes: (1) temporal models, (2) actions and tasks, (3) tissue characteristics and general anatomy, and (4) software and data structure. A modified Delphi process was utilized to create a consensus survey based on suggested recommendations from each of the working groups. Results: After three Delphi rounds, consensus was reached on recommendations for annotation within each of these domains. A hierarchy for annotation of temporal events in surgery was established. Conclusions: While additional work remains to achieve accepted standards for video annotation in surgery, the consensus recommendations on a general framework for annotation presented here lay the foundation for standardization. This type of framework is critical to enabling diverse datasets, performance benchmarks, and collaboration.
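
    To illustrate what a hierarchical temporal annotation record could look like in practice, the sketch below nests actions inside steps inside phases and serializes the result to JSON. The field names and layout are assumptions for illustration only, not the schema recommended by the SAGES consensus.

        # Illustrative annotation record in the spirit of a phase > step > action hierarchy.
        from dataclasses import dataclass, field, asdict
        import json

        @dataclass
        class ActionAnnotation:
            label: str        # e.g. "grasp", "cut" (hypothetical vocabulary)
            start_s: float    # onset in seconds from video start
            end_s: float

        @dataclass
        class StepAnnotation:
            label: str
            start_s: float
            end_s: float
            actions: list[ActionAnnotation] = field(default_factory=list)

        @dataclass
        class PhaseAnnotation:
            label: str
            start_s: float
            end_s: float
            steps: list[StepAnnotation] = field(default_factory=list)

        phase = PhaseAnnotation("dissection", 310.0, 845.5, steps=[
            StepAnnotation("expose_structure", 310.0, 512.0, actions=[
                ActionAnnotation("grasp", 315.2, 330.8),
            ]),
        ])
        print(json.dumps(asdict(phase), indent=2))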