750 research outputs found

    Context-aware learning for robot-assisted endovascular catheterization

    Endovascular intervention has become a mainstream treatment for cardiovascular diseases. However, multiple challenges remain, such as unwanted radiation exposure, limited two-dimensional image guidance, and insufficient force perception and haptic cues. Fast-evolving robot-assisted platforms improve the stability and accuracy of instrument manipulation, and the master-slave configuration also removes radiation exposure to the operator. However, the integration of robotic systems into the current surgical workflow is still debated, since there is little value in executing repetitive, easy tasks through robotic teleoperation. Current systems offer very low autonomy; additional autonomous features could bring further benefits such as reduced cognitive workload and human error, safer and more consistent instrument manipulation, and the ability to incorporate various medical imaging and sensing modalities. This research proposes frameworks for automated catheterisation based on different machine learning algorithms, including Learning-from-Demonstration, Reinforcement Learning, and Imitation Learning. These frameworks focus on integrating task context into the skill-learning process, thereby achieving better adaptation to different situations and safer tool-tissue interactions. Furthermore, the autonomous features were applied to a next-generation, MR-safe robotic catheterisation platform. The results provide important insights into improving catheter navigation in the form of autonomous task planning and self-optimization with clinically relevant factors, and motivate the design of intelligent, intuitive, and collaborative robots under non-ionizing imaging modalities.

    Surgical skills modeling in cardiac ablation using deep learning

    Cardiovascular diseases, a leading global cause of death, can be treated for various heart conditions using Minimally Invasive Surgery (MIS). Cardiac ablation is one such MIS procedure, treating heart rhythm disorders such as atrial fibrillation, and its outcomes are highly dependent on the surgeon's skill. This procedure utilizes catheters, flexible endovascular devices inserted into the patient's blood vessels through a small incision. Traditionally, novice surgeons' performance is assessed in the Operating Room (OR) through surgical tasks, and unskilled behavior can lead to longer operations and inferior surgical outcomes. An alternative approach, however, is to capture surgeons' maneuvers and use them as input to an AI model that evaluates their skills outside the OR. To this end, two experimental setups were proposed to study skill modelling for surgical behaviours. The first simulates the ablation procedure using a mechanical system with a synthetic heartbeat mechanism that measures contact forces between the catheter's tip and the tissue. The second simulates the cardiac catheterization procedure for the surgeon's practice while recording the user's maneuvers. The first task involved maintaining the force within a safe range while the tip of the catheter touches the surface. The second task involved passing the catheter's tip through curves and level intersections on a transparent blood-vessel phantom. To evaluate the participants' demonstrations, it is crucial to extract maneuver models for both expert and novice surgeons. Data from novice and expert participants performing the tasks on the experimental setups were compiled. Deep recurrent neural networks are employed to extract the skill models by solving a binary classification problem, distinguishing between expert and novice maneuvers. The results demonstrate the proposed networks' ability to accurately distinguish between novice and expert surgical skills, achieving an accuracy of over 92%.
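    The abstract does not specify the network architecture in detail; below is a minimal, illustrative sketch of the kind of recurrent binary classifier it describes, written in PyTorch. The input dimensionality, hidden size, and training loop are assumptions for demonstration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): a minimal LSTM-based
# binary classifier for expert-vs-novice skill from force/maneuver time series.
# Input shape, hidden size, and training details are assumptions for demonstration.
import torch
import torch.nn as nn

class SkillClassifier(nn.Module):
    def __init__(self, n_features=3, hidden_size=64, num_layers=2):
        super().__init__()
        # Recurrent encoder over the time dimension of the force signal
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers, batch_first=True)
        # Map the final hidden state to a single logit (expert vs. novice)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                      # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)             # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1]).squeeze(-1)  # one logit per sequence

if __name__ == "__main__":
    model = SkillClassifier()
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Synthetic stand-in for recorded maneuvers: 8 sequences, 200 timesteps,
    # 3 force components; labels 1 = expert, 0 = novice.
    x = torch.randn(8, 200, 3)
    y = torch.randint(0, 2, (8,)).float()

    for _ in range(5):                         # a few toy training steps
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    print("training loss:", loss.item())
```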

    Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning

    Intraluminal procedures have opened up a new sub-field of minimally invasive surgery that uses flexible instruments to navigate through complex luminal structures of the body, resulting in reduced invasiveness and improved patient benefits. One of the major challenges in this field is the accurate and precise control of the instrument inside the human body. Robotics has emerged as a promising solution to this problem. However, to achieve successful robotic intraluminal interventions, the control of the instrument needs to be automated to a large extent. The thesis first examines the state of the art in intraluminal surgical robotics and identifies the key challenges in this field, which include the need for safe and effective tool manipulation and the ability to adapt to unexpected changes in the luminal environment. To address these challenges, the thesis proposes several levels of autonomy that enable the robotic system to perform individual subtasks autonomously, while still allowing the surgeon to retain overall control of the procedure. The approach facilitates the development of specialized algorithms such as Deep Reinforcement Learning (DRL) for subtasks like navigation and tissue manipulation to produce robust surgical gestures. Additionally, the thesis proposes a safety framework that provides formal guarantees to prevent risky actions. The presented approaches are evaluated through a series of experiments using simulation and robotic platforms. The experiments demonstrate that subtask automation can improve the accuracy and efficiency of tool positioning and tissue manipulation, while also reducing the cognitive load on the surgeon. The results of this research have the potential to improve the reliability and safety of intraluminal surgical interventions, ultimately leading to better outcomes for patients and surgeons.
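    As an illustration of how a navigation subtask might be cast as a DRL problem, the sketch below wraps a highly simplified catheter-tip navigation task in a Gymnasium-style environment. The 2-D geometry, action scaling, and reward shaping are assumptions for demonstration only and are not taken from the thesis.

```python
# Illustrative sketch (not the thesis implementation): a toy 2-D navigation
# subtask exposed as a Gymnasium environment, so an off-the-shelf DRL agent
# could be trained on it. Geometry, action scale, and reward are assumptions.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class LumenNavigationEnv(gym.Env):
    """Toy navigation: drive a catheter-tip position toward a target waypoint."""

    def __init__(self):
        super().__init__()
        # Action: small displacement of the tip in x/y (normalised)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)
        # Observation: current tip position and target position
        self.observation_space = spaces.Box(-10.0, 10.0, shape=(4,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.tip = self.np_random.uniform(-5, 5, size=2).astype(np.float32)
        self.target = self.np_random.uniform(-5, 5, size=2).astype(np.float32)
        return np.concatenate([self.tip, self.target]), {}

    def step(self, action):
        self.tip = self.tip + 0.1 * np.asarray(action, dtype=np.float32)
        dist = float(np.linalg.norm(self.tip - self.target))
        reward = -dist                       # dense shaping toward the waypoint
        terminated = dist < 0.2              # waypoint reached
        obs = np.concatenate([self.tip, self.target])
        return obs, reward, terminated, False, {}

if __name__ == "__main__":
    env = LumenNavigationEnv()
    obs, _ = env.reset(seed=0)
    for _ in range(100):                     # random policy as a placeholder agent
        obs, reward, terminated, truncated, _ = env.step(env.action_space.sample())
        if terminated:
            break
```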

    Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions

    PURPOSE: Advances in technology and computing play an increasingly important role in the evolution of modern surgical techniques and paradigms. This article reviews the current role of machine learning (ML) techniques in the context of surgery, with a focus on surgical robotics (SR). We also provide a perspective on future possibilities for enhancing the effectiveness of procedures by integrating ML in the operating room. METHODS: The review is focused on ML techniques directly applied to surgery, surgical robotics, and surgical training and assessment. The widespread use of ML methods in diagnosis and medical image computing is beyond the scope of the review. Searches were performed on PubMed and IEEE Xplore using combinations of the keywords: ML, surgery, robotics, surgical and medical robotics, skill learning, skill analysis and learning to perceive. RESULTS: Studies making use of ML methods in the context of surgery are increasingly being reported. In particular, there is growing interest in using ML to develop tools for understanding and modelling surgical skill and competence or for extracting surgical workflow. Many researchers are beginning to integrate this understanding into the control of recent surgical robots and devices. CONCLUSION: ML is an expanding field. It is popular because it allows efficient processing of vast amounts of data for interpretation and real-time decision making. Already widely used in imaging and diagnosis, it is believed that ML will also play an important role in surgery and interventional treatments. In particular, ML could become a game changer in the conception of cognitive surgical robots. Such robots, endowed with cognitive skills, would also assist the surgical team on a cognitive level, for example by lowering the mental load of the team. ML could help extract surgical skill, learned through demonstration by human experts, and transfer it to robotic skills. Such intelligent surgical assistance would significantly surpass the state of the art in surgical robotics. Current devices possess no intelligence whatsoever and are merely advanced and expensive instruments.

    Artificial intelligence surgery: how do we get to autonomous actions in surgery?

    Most surgeons are skeptical as to the feasibility of autonomous actions in surgery. Interestingly, many examples of autonomous actions already exist and have been around for years. Since the beginning of this millennium, the field of artificial intelligence (AI) has grown exponentially with the development of machine learning (ML), deep learning (DL), computer vision (CV) and natural language processing (NLP). All of these facets of AI will be fundamental to the development of more autonomous actions in surgery; unfortunately, only a limited number of surgeons have or seek expertise in this rapidly evolving field. As opposed to AI in medicine, AI surgery (AIS) involves autonomous movements. Fortuitously, as the field of robotics in surgery has improved, more surgeons are becoming interested in technology and the potential of autonomous actions in procedures such as interventional radiology, endoscopy and surgery. The lack of haptics, or the sensation of touch, has hindered the wider adoption of robotics by many surgeons; however, now that the true potential of robotics can be comprehended, the embracing of AI by the surgical community is more important than ever before. Although current complete surgical systems are mainly examples of tele-manipulation, haptics is perhaps not the most important aspect for surgeons to get to more autonomously functioning robots. If the goal is for robots to ultimately become more and more independent, perhaps research should not focus on the concept of haptics as it is perceived by humans; the focus should instead be on haptics as it is perceived by robots/computers. This article discusses aspects of ML, DL, CV and NLP as they pertain to the modern practice of surgery, with a focus on current AI issues and advances that will enable us to get to more autonomous actions in surgery. Ultimately, a paradigm shift may need to occur in the surgical community, as more surgeons with expertise in AI may be needed to fully unlock the potential of AIS in a safe, efficacious and timely manner.

    The Use of Tactile Sensors in Oral and Maxillofacial Surgery: An Overview

    Background: This overview aimed to characterize the type, development, and use of haptic technologies for maxillofacial surgical purposes. The aim of this work is to summarize and evaluate the current advantages, drawbacks, and design choices of the presented technologies for each field of application, in order to guide and promote future research as well as to provide a global view of the issue. Methods: Relevant manuscripts were searched electronically through the Scopus, MEDLINE/PubMed, and Cochrane Library databases until 1 November 2022. Results: After analyzing the available literature, 31 articles regarding tactile sensors and interfaces, sensorized tools, haptic technologies, and integrated platforms in oral and maxillofacial surgery were included. Moreover, a quality rating is provided for each article following appropriate evaluation metrics. Discussion: Many efforts have been made to overcome the technological limits of computer-assisted diagnosis, surgery, and teaching. Nonetheless, a research gap is evident between dental/maxillofacial surgery and other specialties such as endovascular, laparoscopic, and microsurgery, especially concerning electrical and optical sensors for instrumented tools and sensorized tools for contact force detection. The application of existing technologies is mainly focused on digital simulation purposes, and their integration into Computer Assisted Surgery (CAS) is far from being widely adopted. Virtual reality, increasingly adopted in various fields of surgery (e.g., sino-nasal, traumatology, implantology), has shown interesting results and has the potential to revolutionize teaching and learning. A major concern regarding the current state of the art is the absence of randomized controlled trials and the prevalence of case reports, retrospective cohorts, and experimental studies. Nonetheless, as research in this area is fast growing, we can expect many of these developments to be incorporated into maxillofacial surgery practice after adequate evaluation by the scientific community.

    Transumbilical Laparoscopically Assisted Pediatric Surgery


    A Deep Learning Approach to Classify Surgical Skill in Microsurgery Using Force Data from a Novel Sensorised Surgical Glove

    Microsurgery serves as the foundation for numerous operative procedures. Given its highly technical nature, the assessment of surgical skill becomes an essential component of clinical practice and microsurgery education. The interaction forces between surgical tools and tissues play a pivotal role in surgical success, making them a valuable indicator of surgical skill. In this study, we employ six distinct deep learning architectures (LSTM, GRU, Bi-LSTM, CLDNN, TCN, Transformer) specifically designed for the classification of surgical skill levels. We use force data obtained from a novel sensorized surgical glove utilized during a microsurgical task. To enhance the performance of our models, we propose six data augmentation techniques. The proposed frameworks are accompanied by a comprehensive analysis, both quantitative and qualitative, including experiments conducted with two cross-validation schemes and interpretable visualizations of the network's decision-making process. Our experimental results show that CLDNN and TCN are the top-performing models, achieving impressive accuracy rates of 96.16% and 97.45%, respectively. This not only underscores the effectiveness of our proposed architectures, but also serves as compelling evidence that the force data obtained through the sensorized surgical glove contain valuable information regarding surgical skill.
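    The specific augmentation techniques are not detailed in the abstract; the sketch below shows two generic time-series augmentations (jittering and magnitude scaling) of the kind commonly applied to force recordings, purely as an illustration rather than the paper's methods.

```python
# Illustrative sketch: two common time-series augmentations (jittering and
# magnitude scaling) that could be applied to force recordings. These are
# generic examples, not necessarily the six techniques proposed in the paper.
import numpy as np

def jitter(signal: np.ndarray, sigma: float = 0.03) -> np.ndarray:
    """Add small Gaussian noise to each sample of a (time, channels) signal."""
    return signal + np.random.normal(0.0, sigma, size=signal.shape)

def magnitude_scale(signal: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Scale each channel by a random factor drawn around 1.0."""
    factors = np.random.normal(1.0, sigma, size=(1, signal.shape[1]))
    return signal * factors

if __name__ == "__main__":
    force_trace = np.random.rand(500, 3)        # 500 timesteps, 3 force axes
    augmented = magnitude_scale(jitter(force_trace))
    print(force_trace.shape, augmented.shape)   # shapes are preserved
```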