
    Inductive learning of the surgical workflow model through video annotations

    Surgical workflow modeling is becoming increasingly useful for training surgical residents in complex surgical procedures. Rule-based surgical workflows have been shown to be useful for building context-aware systems; however, manually constructing production rules is a time-intensive and laborious task. With the expansion of new technologies, large video archives can be created and annotated, capturing and storing the expert’s knowledge. This paper presents a prototypical study of the automatic generation of production rules, in Horn-clause form, using the First Order Inductive Learner (FOIL) algorithm applied to annotated surgical videos of the Thoracentesis procedure, and of its feasibility for use in a context-aware system framework. The algorithm learned 18 rules for the surgical workflow model with 0.88 precision and 0.94 F1 score on the standard video annotation data representing entities of the surgical workflow, which was used to retrieve contextual information on the Thoracentesis workflow for application to surgical training.
    Nakawala, Hirenkumar Chandrakant; De Momi, Elena; Pescatori, Erica Laura; Morelli, Anna; Ferrigno, Giancarlo
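    As a rough illustration of the rule-induction idea described above (a minimal sketch, not the paper's implementation), the snippet below runs a propositional simplification of FOIL's greedy covering step on toy annotation facts; the atom names, the toy frames and the single-rule stopping criterion are assumptions made only for the example.

```python
# Hypothetical sketch: a propositional simplification of FOIL's covering step
# that builds one Horn-clause-like rule body from annotated video frames.
# Each annotation is a set of atoms observed in a frame; the label marks
# whether the frame belongs to the target phase (here: "aspiration").
from math import log2

annotations = [  # toy data standing in for the annotated Thoracentesis videos
    ({"syringe_in_hand", "site_disinfected", "needle_inserted"}, True),
    ({"syringe_in_hand", "needle_inserted"}, True),
    ({"swab_in_hand", "site_disinfected"}, False),
    ({"syringe_in_hand", "site_disinfected"}, False),
]

def foil_gain(body, literal, examples):
    """FOIL's information-gain heuristic for adding `literal` to `body`."""
    def stats(b):
        covered = [label for atoms, label in examples if b <= atoms]
        p = sum(covered)
        return p, len(covered) - p
    p0, n0 = stats(body)
    p1, n1 = stats(body | {literal})
    if p1 == 0:
        return float("-inf")
    return p1 * (log2(p1 / (p1 + n1)) - log2(p0 / (p0 + n0)))

def learn_rule(examples):
    body = set()
    candidates = set().union(*(atoms for atoms, _ in examples))
    # keep adding the highest-gain literal until no negative example is covered
    while any(body <= atoms and not label for atoms, label in examples):
        best = max(candidates - body, key=lambda l: foil_gain(body, l, examples))
        body.add(best)
    return body

print("aspiration :-", ", ".join(sorted(learn_rule(annotations))))
```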

    Development of an intelligent surgical training system for Thoracentesis

    Surgical training improves patient care, helps to reduce surgical risks, increases the surgeon’s confidence, and thus enhances overall patient safety. Current surgical training systems focus on developing the technical skills of surgeons, e.g. dexterity, while lacking context-awareness and intra-operative real-time guidance. Context-aware intelligent training systems interpret the current surgical situation and help surgeons train on surgical tasks. As a prototypical scenario, we chose the Thoracentesis procedure. We designed a context-aware software framework using a surgical process model encompassing an ontology and production rules, based on procedure descriptions obtained from textbooks and interviews, together with ontology-based and marker-based object recognition, where the system tracked and recognised surgical instruments and materials in the surgeon’s hands and recognised surgical instruments on the surgical stand. The ontology was validated using annotated surgical videos, where the system identified the “Anaesthesia” and “Aspiration” phases with 100% relative frequency and the “Penetration” phase with 65% relative frequency. The system tracked the surgical swab and the 50 mL syringe with approximately 88.23% and 100% accuracy in the surgeon’s hands and recognised surgical instruments with approximately 90% accuracy on the surgical stand. Surgical workflow training with the proposed system showed results equivalent to the traditional mentor-based training regime; this work is thus a step toward a new tool for context awareness and decision-making during surgical training.
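    The production-rule side of such a framework can be pictured with a small, hypothetical sketch: facts emitted by the recognition modules are matched against rule conditions to infer the current phase. The rule names, conditions and facts below are illustrative assumptions, not the system's actual ontology or rule base.

```python
# Hypothetical production rules for phase inference in a context-aware trainer:
# observed facts (recognised instruments, completed steps) are matched against
# rule conditions to infer the current Thoracentesis phase.
RULES = [
    # (conditions that must all hold, inferred phase)
    ({"syringe_in_hand", "lidocaine_drawn"}, "Anaesthesia"),
    ({"catheter_needle_in_hand", "site_anaesthetised"}, "Penetration"),
    ({"syringe_attached", "needle_in_pleural_space"}, "Aspiration"),
]

def infer_phase(observed_facts):
    """Return the first phase whose conditions are all satisfied, else None."""
    for conditions, phase in RULES:
        if conditions <= observed_facts:
            return phase
    return None

# Example: facts produced by a marker-based instrument-recognition module
facts = {"syringe_in_hand", "lidocaine_drawn", "patient_positioned"}
print(infer_phase(facts))  # -> "Anaesthesia"
```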

    Interpretable task planning and learning for autonomous robotic surgery with logic programming

    This thesis addresses the long-term goal of full (supervised) autonomy in surgery, characterized by dynamic environmental (anatomical) conditions, unpredictable workflows of execution and workspace constraints. The scope is to reach autonomy at the level of sub-tasks of a surgical procedure, i.e. repetitive yet tedious operations (e.g., dexterous manipulation of small objects in a constrained environment, such as needle and wire for suturing). This will help reduce execution time, hospital costs and surgeon fatigue during the whole procedure, while further improving recovery time for patients. A novel framework for autonomous surgical task execution is presented in the first part of this thesis, based on answer set programming (ASP), a logic programming paradigm, for task planning (i.e., the coordination of elementary actions and motions). Logic programming allows surgical task knowledge to be encoded directly, representing a plan reasoning methodology rather than a set of pre-defined plans. This solution introduces several key advantages, such as reliable, human-interpretable plan generation and real-time monitoring of the environment and the workflow for ready adaptation and failure recovery. Moreover, an extended review of logic programming for robotics is presented, motivating the choice of ASP for surgery and providing a useful guide for robotic designers. In the second part of the thesis, a novel framework based on inductive logic programming (ILP) is presented for surgical task knowledge learning and refinement. ILP guarantees fast learning from very few examples, addressing the data scarcity that is common in surgery. In addition, a novel action identification algorithm is proposed, based on automatic environmental feature extraction from videos, dealing for the first time with small and noisy datasets that collect different workflows of execution under environmental variations. This allows a systematic methodology for unsupervised ILP to be defined. All the results in this thesis are validated on a non-standard version of the benchmark ring transfer training task for surgeons, which mimics some of the challenges of real surgery, e.g. constrained bimanual motion in a small space.
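    To make the ASP-based planning idea concrete, here is a minimal sketch assuming a toy encoding of the ring transfer task and the clingo Python API; the predicates, constraints and horizon are invented for the example and are not the thesis framework.

```python
# A sketch under assumptions (not the thesis framework): a toy ASP encoding of
# a single ring transfer, solved through the clingo Python API (pip install clingo).
import clingo

PROGRAM = r"""
#const horizon = 3.
time(1..horizon).
ring(red). peg(grey).

% exactly one elementary action per time step
1 { do(grasp(R), T)      : ring(R)
  ; do(move(R, P), T)    : ring(R), peg(P)
  ; do(release(R, P), T) : ring(R), peg(P) } 1 :- time(T).

released(R, T)  :- do(release(R, P), T).
holding(R, T)   :- do(grasp(R), T).
holding(R, T+1) :- holding(R, T), time(T+1), not released(R, T).

:- do(move(R, P), T), not holding(R, T).     % must hold the ring to move it
:- do(release(R, P), T), not holding(R, T).  % ...and to release it
goal :- released(R, horizon).
:- not goal.                                 % the ring must be released at the end

#show do/2.
"""

def on_model(model):
    # Each stable model is one admissible, human-readable plan.
    print("Plan:", sorted(str(atom) for atom in model.symbols(shown=True)))

ctl = clingo.Control()               # default: report one stable model
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=on_model)
```

    The appeal sketched here is the one the thesis argues for: the knowledge is encoded as readable rules and constraints, so the resulting plan can be inspected and adapted when the monitored environment changes.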

    Object-Centric Multiple Object Tracking

    Unsupervised object-centric learning methods allow the partitioning of scenes into entities without additional localization information and are excellent candidates for reducing the annotation burden of multiple-object tracking (MOT) pipelines. Unfortunately, they lack two key properties: objects are often split into parts and are not consistently tracked over time. In fact, state-of-the-art models achieve pixel-level accuracy and temporal consistency by relying on supervised object detection with additional ID labels for the association through time. This paper proposes a video object-centric model for MOT. It consists of an index-merge module that adapts the object-centric slots into detection outputs and an object memory module that builds complete object prototypes to handle occlusions. Benefiting from object-centric learning, we only require sparse detection labels (0%-6.25%) for object localization and feature binding. Relying on our self-supervised Expectation-Maximization-inspired loss for object association, our approach requires no ID labels. Our experiments significantly narrow the gap between the existing object-centric model and the fully supervised state of the art and outperform several unsupervised trackers. (ICCV 2023 camera-ready version)
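    As a rough sketch of the kind of EM-style association the paper's self-supervised loss builds on (an illustration under assumptions, not the proposed index-merge or memory modules), the snippet below soft-assigns per-frame slots to object prototypes and updates the prototypes from the responsibilities; the dimensions and temperature are arbitrary.

```python
# Illustrative, self-contained sketch of EM-style soft association between
# per-frame object-centric slots and a small memory of object prototypes.
import numpy as np

rng = np.random.default_rng(0)
slots = rng.normal(size=(8, 16))        # 8 slots per frame, 16-dim features
prototypes = rng.normal(size=(3, 16))   # memory of 3 object prototypes

def l2_normalise(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

for _ in range(10):
    s, p = l2_normalise(slots), l2_normalise(prototypes)
    # E-step: soft assignment of each slot to each prototype (softmax over cosine sim.)
    logits = s @ p.T / 0.1                       # temperature 0.1 (assumed)
    resp = np.exp(logits - logits.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)      # responsibilities, shape (8, 3)
    # M-step: update each prototype as the responsibility-weighted mean of its slots
    prototypes = (resp.T @ slots) / resp.sum(axis=0, keepdims=True).T

# Hardening the assignment yields slot-to-object associations without ID labels.
print(resp.argmax(axis=1))
```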

    Kontextsensitivität für den Operationssaal der Zukunft

    The operating room of the future is a topic of high interest. This thesis, which is among the first in the recently defined field of Surgical Data Science, examines three major topics for automated context awareness in the OR of the future: improved surgical workflow analysis, the newly developed event impact factors, and, as an application combining these and other concepts, the unified surgical display.

    Automatic extraction of robotic surgery actions from text and kinematic data

    The latest generation of robotic systems is becoming increasingly autonomous thanks to technological advancements and artificial intelligence. The medical field, particularly surgery, is also interested in these technologies, because automation would benefit surgeons and patients. While the research community is active in this direction, commercial surgical robots do not currently operate autonomously, due to the risks involved in dealing with human patients: it is still considered safer to rely on human surgeons' intelligence for decision-making. This means that robots must possess human-like intelligence, including various reasoning capabilities and extensive knowledge, to become more autonomous and credible. Indeed, as demonstrated by current research in the field, one of the most critical aspects of developing autonomous systems is the acquisition and management of knowledge. In particular, a surgical robot must base its actions on solid procedural surgical knowledge to operate autonomously, safely, and expertly. This thesis investigates different possibilities for automatically extracting and managing knowledge from text and kinematic data. In the first part, we investigated the possibility of extracting procedural surgical knowledge from real intervention descriptions available in textbooks and academic papers on the robotic-surgical domain, by exploiting Transformer-based pre-trained language models. In particular, we released SurgicBERTa, a RoBERTa-based pre-trained language model for surgical literature understanding. It has been used to detect procedural sentences in books and to extract procedural elements from them. Then, through some use cases, we explored the possibility of translating written instructions into logical rules usable for robotic planning. Since not all the knowledge required to automate a procedure is written in texts, we introduce the concept of surgical commonsense, showing how it relates to different autonomy levels. In the second part of the thesis, we analyzed surgical procedures at a lower granularity level, showing how each surgical gesture is associated with a given combination of kinematic data.
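    A hedged sketch of the procedural-sentence detection step is given below, assuming a RoBERTa-style classifier derived from SurgicBERTa; the checkpoint path, label order and example sentences are placeholders, since the released model is a pre-trained language model rather than a ready-made classifier and a classification head would still need to be trained on labelled sentences.

```python
# Minimal sketch: scoring sentences as procedural vs. non-procedural with a
# RoBERTa-style sequence classifier via the Hugging Face transformers API.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL_ID = "path/to/surgicberta-procedural-classifier"  # hypothetical checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

sentences = [
    "Insert the needle above the rib to avoid the neurovascular bundle.",
    "Thoracentesis was first described in the nineteenth century.",
]

inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

for sentence, p in zip(sentences, probs):
    label = "procedural" if p[1] > p[0] else "non-procedural"  # assumed label order
    print(f"{label:15s} {sentence}")
```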
