    Patient-specific simulation for autonomous surgery

    An Autonomous Robotic Surgical System (ARSS) has to interact with a complex anatomical environment, which deforms and whose properties are often uncertain. In this context, an ARSS can benefit from the availability of a patient-specific simulation of the anatomy. For example, simulation can provide a safe and controlled environment for the design, testing and validation of autonomous capabilities. Moreover, it can be used to generate large amounts of patient-specific data that can be exploited to learn models and/or tasks. The aim of this Thesis is to investigate the different ways in which simulation can support an ARSS and to propose solutions that facilitate its adoption in robotic surgery. We first address all the phases needed to create such a simulation, from model selection in the pre-operative phase based on the available knowledge to its intra-operative update to compensate for inaccurate parametrization. We propose to rely on deep neural networks trained with synthetic data both to generate a patient-specific model and to design a strategy to update the model parametrization directly from intra-operative sensor data. Afterwards, we test how simulation can assist the ARSS, both for task learning and during task execution. We show that simulation can be used to efficiently train approaches that require multiple interactions with the environment, avoiding the risks of acquiring data on real surgical robotic systems. Finally, we propose a modular framework for autonomous surgery that includes deliberative functions to handle real anatomical environments with uncertain parameters. The integration of a personalized simulation proves fundamental both for optimal task planning and for enhancing and monitoring real execution. The contributions presented in this Thesis have the potential to introduce significant step changes in the development and actual performance of autonomous robotic surgical systems, bringing them closer to applicability in real clinical conditions.

    Autonomous tissue retraction with a biomechanically informed logic based framework

    Autonomy in robot-assisted surgery is essential to reduce surgeons’ cognitive load and eventually improve the overall surgical outcome. A key requirement for autonomy in a safety-critical scenario such as surgery lies in the generation of interpretable plans that rely on expert knowledge. Moreover, the Autonomous Robotic Surgical System (ARSS) must be able to reason about the dynamic and unpredictable anatomical environment and quickly adapt the surgical plan in case of unexpected situations. In this paper, we present a modular Framework for Robot-Assisted Surgery (FRAS) in deformable anatomical environments. Our framework integrates a logic module for task-level interpretable reasoning, a biomechanical simulation that complements data from real sensors, and a situation awareness module for context interpretation. The framework's performance is evaluated on simulated soft tissue retraction, a common surgical task that removes the tissue hiding a region of interest. Results show that the framework has the adaptability required to successfully accomplish the task, handling dynamic environmental conditions and possible failures, while guaranteeing the computational efficiency required in a real surgical scenario. The framework is made publicly available.
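
    The abstract does not detail how the three modules exchange information; the following is a minimal sketch in Python of one plausible wiring, with all class and method names hypothetical (the actual implementation is in the publicly released framework).

        def fras_loop(planner, sim, awareness, robot):
            # Hypothetical sketch: plan at the task level, execute action by
            # action, keep the biomechanical simulation synchronized with
            # sensor data, and replan when an unexpected context is flagged.
            plan = planner.make_plan(sim.current_state())
            while plan:
                action = plan.pop(0)
                robot.execute(action)
                sim.step(robot.sensor_data())      # simulation complements real sensing
                context = awareness.interpret(sim, robot)
                if context.unexpected:             # e.g., grasp failure, tissue slip
                    plan = planner.replan(sim.current_state(), context)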

    Deliberation in autonomous robotic surgery: a framework for handling anatomical uncertainty

    Autonomous robotic surgery requires deliberation, i.e., the ability to plan and execute a task while adapting to uncertain and dynamic environments. Uncertainty in the surgical domain is mainly related to the partial pre-operative knowledge of patient-specific anatomical properties. In this paper, we introduce a logic-based framework for surgical tasks with deliberative functions of monitoring and learning. The DEliberative Framework for Robot-Assisted Surgery (DEFRAS) estimates a pre-operative patient-specific plan and executes it while continuously estimating the applied force from a pre-operative biomechanical model. The monitoring module compares this model with the actual situation reconstructed from sensor data. In case of a significant mismatch, the learning module is invoked to update the model, thus improving the estimate of the exerted force. DEFRAS is validated in both simulated and real environments, with the da Vinci Research Kit executing soft tissue retraction. Compared with state-of-the-art related works, the success rate of the task is improved while minimizing the interaction with the tissue to prevent unintentional damage.
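
    As a rough illustration of the monitor/learn cycle described above (not the paper's actual code; every name below is hypothetical), the loop might look like:

        def monitored_execution(plan, model, sensors, learn, force_tol=0.5):
            # Hypothetical sketch of DEFRAS-style monitoring: compare the force
            # predicted by the pre-operative biomechanical model with the
            # situation reconstructed from sensors; on a significant mismatch,
            # invoke the learning module to update the model.
            for action in plan:
                action.execute()
                predicted = model.predicted_force(action)
                estimated = sensors.reconstructed_force()
                if abs(predicted - estimated) > force_tol:
                    learn(model, sensors)          # update model parametrization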

    Autonomous Robotic System for Breast Biopsy With Deformation Compensation

    Image-guided biopsy is the most common technique for breast cancer diagnosis. Although magnetic resonance imaging (MRI) has the highest sensitivity in breast lesion detection, ultrasound (US) biopsy guidance is generally preferred due to its non-invasiveness and real-time image feedback during the insertion. In this work, we propose an autonomous robotic system for US-guided biopsy of breast lesions identified on pre-operative MRI. After initial MRI-to-breast registration, the US probe attached to the robotic manipulator compresses the breast tissues until a pre-determined force level is reached. This technique, known as preloading, minimizes lesion displacement during needle insertion. Our workflow integrates a deformation compensation strategy based on a patient-specific biomechanical model, which updates the US probe orientation to keep the target lesion in the US image plane during compression. By relying on a deformation model, the proposed system does not require lesion visibility on US. Experimental evaluation assesses the performance of the system on a realistic breast phantom with 15 internal lesions, considering different preloading forces. The deformation compensation strategy improves localization accuracy, and consequently final probe positioning, for all considered lesions. With a preload of 2 N, which guarantees both minimal needle insertion error and minimal tissue stress, the median lesion localization error is 3.3 mm, lower than the median lesion radius.
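
    The abstract does not state the compensation law explicitly; below is a minimal geometric sketch of one plausible step, assuming the biomechanical model supplies the predicted (displaced) lesion position and that tilting about the probe's lateral axis steers the image plane.

        import numpy as np

        def tilt_to_recover_lesion(probe_origin, plane_normal, depth_axis, lesion):
            # Signed distance of the predicted lesion position from the image
            # plane, and its depth along the beam direction; the arctangent is
            # the tilt (about the probe's lateral axis) that brings the lesion
            # back onto the image plane.
            v = lesion - probe_origin
            out_of_plane = np.dot(v, plane_normal)
            depth = np.dot(v, depth_axis)
            return np.arctan2(out_of_plane, depth)   # tilt angle in radians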

    Biomechanical modelling of probe to tissue interaction during ultrasound scanning

    Purpose: Biomechanical simulation of anatomical deformations caused by ultrasound probe pressure is of outstanding importance for several applications, from the testing of robotic acquisition systems to multi-modal image fusion and the development of ultrasound training platforms. Different approaches can be exploited for modelling the probe–tissue interaction, each achieving a different trade-off among accuracy, computation time and stability. Methods: We assess the performance of different strategies based on the finite element method for modelling the interaction between the rigid probe and soft tissues. Probe–tissue contact is modelled using (i) penalty forces, (ii) constraint forces, and (iii) prescribed displacements of the mesh surface nodes. These methods are tested in the challenging context of ultrasound scanning of the breast, an organ undergoing large nonlinear deformations during the procedure. Results: The obtained results are evaluated against those of a non-physically based method. While all methods achieve similar accuracy, performance in terms of stability and speed shows high variability, especially for those methods modelling the contacts explicitly. Overall, prescribing surface displacements is the best-performing approach, but it requires prior knowledge of the contact area and probe trajectory. Conclusions: In this work, we present different strategies for modelling probe–tissue interaction, each able to achieve a different compromise among accuracy, speed and stability. The choice of the preferred approach highly depends on the requirements of the specific clinical application. Since the presented methodologies can be applied to describe general tool–tissue interactions, this work can serve as a reference for researchers seeking the most appropriate strategy to model anatomical deformation induced by interaction with medical tools.
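
    Of the three contact strategies, the penalty approach (i) is the simplest to sketch: nodes penetrating the rigid probe receive a restoring force proportional to penetration depth. The snippet below assumes a spherical proxy for the probe tip; stiffness and geometry are illustrative, not the paper's configuration.

        import numpy as np

        def penalty_contact_forces(nodes, probe_center, probe_radius, stiffness):
            # Nodes inside the (spherical) rigid probe tip are pushed back
            # along the outward normal, with force proportional to penetration.
            d = nodes - probe_center                  # (N, 3) node-to-center vectors
            dist = np.linalg.norm(d, axis=1)
            penetration = np.maximum(probe_radius - dist, 0.0)
            normals = d / np.maximum(dist, 1e-12)[:, None]
            return stiffness * penetration[:, None] * normals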

    UnityFlexML: Training Reinforcement Learning Agents in a Simulated Surgical Environment

    Sim-to-real Deep Reinforcement Learning (DRL) has shown promise in subtask automation for surgical robotic systems, since it makes it possible to safely perform all the trial-and-error attempts needed to learn the optimal control policy. However, a realistic simulation environment is essential to guarantee direct transfer of the learnt policy from the simulated to the real system. In this work, we introduce UnityFlexML, an open-source framework providing support for soft-body simulation and state-of-the-art DRL methods. We demonstrate that a DRL agent can be successfully trained within UnityFlexML to manipulate deformable fat tissues for tumor exposure during a nephrectomy procedure. Furthermore, we show that the learned policy can be directly deployed on the da Vinci Research Kit, which is able to execute the trajectories generated by the DRL agent. The proposed framework represents an essential component for the development of autonomous robotic systems that interact with the deformable anatomical environment.
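
    UnityFlexML itself is Unity-based; purely for illustration, the gym-style interface a DRL agent needs can be sketched in Python with a toy kinematic stand-in for the soft-body simulation (everything below is hypothetical and is not the UnityFlexML API):

        import numpy as np
        import gymnasium as gym
        from gymnasium import spaces

        class ToyRetractionEnv(gym.Env):
            # Toy stand-in: a real environment would replace the kinematic
            # update in step() with a soft-body simulation of the fat tissue.
            def __init__(self):
                self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)
                self.observation_space = spaces.Box(-np.inf, np.inf, shape=(6,), dtype=np.float32)

            def reset(self, seed=None, options=None):
                super().reset(seed=seed)
                self.ee = self.np_random.uniform(-1, 1, size=3)    # end-effector position
                self.goal = self.np_random.uniform(-1, 1, size=3)  # retraction target
                return self._obs(), {}

            def step(self, action):
                self.ee = self.ee + 0.05 * np.clip(action, -1.0, 1.0)
                dist = float(np.linalg.norm(self.ee - self.goal))
                # Dense reward: progress toward the target; done when reached.
                return self._obs(), -dist, dist < 0.05, False, {}

            def _obs(self):
                return np.concatenate([self.ee, self.goal]).astype(np.float32)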

    Soft Tissue Simulation Environment to Learn Manipulation Tasks in Autonomous Robotic Surgery

    Reinforcement Learning (RL) methods have demonstrated promising results for the automation of subtasks in surgical robotic systems. Since many trial-and-error attempts are required to learn the optimal control policy, RL agent training can be performed in simulation and the learned behavior can then be deployed in real environments. In this work, we introduce an open-source simulation environment providing support for position-based dynamics soft-body simulation and state-of-the-art RL methods. We demonstrate the capabilities of the proposed framework by training an RL agent based on Proximal Policy Optimization to manipulate fat tissue for tumor exposure during a nephrectomy procedure. Leveraging a preliminary optimization of the simulation parameters, we show that our agent is able to learn the task on a virtual replica of the anatomical environment. The learned behavior is robust to changes in the initial end-effector position. Furthermore, we show that the learned policy can be directly deployed on the da Vinci Research Kit, which is able to execute the trajectories generated by the RL agent. The proposed simulation environment represents an essential component for the development of next-generation robotic systems that interact with the deformable anatomical environment.
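
    With a gym-style environment in place, training with Proximal Policy Optimization is a few lines, e.g. via the stable-baselines3 library (shown here with the toy stand-in environment sketched above; the hyperparameters are illustrative, not the paper's):

        from stable_baselines3 import PPO

        env = ToyRetractionEnv()                  # stand-in from the previous sketch
        model = PPO("MlpPolicy", env, verbose=0)  # PPO, as used in the paper
        model.learn(total_timesteps=100_000)      # safe trial-and-error in simulation
        model.save("retraction_policy")           # reuse the learned policy later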

    Data-driven Intra-operative Estimation of Anatomical Attachments for Autonomous Tissue Dissection

    The execution of surgical tasks by an Autonomous Robotic System (ARS) requires an up-to-date model of the current surgical environment, which has to be deduced from measurements collected during task execution. In this work, we propose to automate tissue dissection tasks by introducing a convolutional neural network, called BA-Net, to predict the location of attachment points between adjacent tissues. BA-Net identifies the attachment areas from a single partial view of the deformed surface, without any a priori knowledge of their location. The proposed method guarantees a very fast prediction time, which makes it ideal for intra-operative applications. Experimental validation is carried out on both simulated and real-world phantom data of soft tissue manipulation performed with the da Vinci Research Kit (dVRK). The obtained results demonstrate that BA-Net provides robust predictions across varying geometric configurations, material properties, distributions of attachment points and grasping point locations. The estimation of attachment points provided by BA-Net improves the simulation of the anatomical environment where the system is acting, leading to a median simulation error below 5 mm in all the tested conditions. BA-Net can thus further support an ARS by providing a more robust intra-operative test bench for robotic actions, in particular when replanning is needed. The method and collected dataset are available at https://gitlab.com/altairLab/banet
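
    The actual BA-Net architecture is described in the paper and repository; as a purely illustrative stand-in, a fully convolutional network mapping a grid-sampled view of the deformed surface to per-cell attachment probabilities could look like:

        import torch
        import torch.nn as nn

        class AttachmentNetSketch(nn.Module):
            # Illustrative only (not BA-Net): input is a (B, 3, H, W) grid of
            # surface point coordinates from a single partial view; output is
            # a (B, 1, H, W) map of per-cell attachment probabilities.
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 1, 1),
                )

            def forward(self, surface_grid):
                return torch.sigmoid(self.net(surface_grid))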

    Intra-operative Update of Boundary Conditions for Patient-specific Surgical Simulation

    Patient-specific Biomechanical Models (PBMs) can enhance computer-assisted surgical procedures with critical information. Although pre-operative data make it possible to parametrize such PBMs based on each patient's properties, they are not able to fully characterize them. In particular, simulation boundary conditions cannot be determined from pre-operative modalities, yet their correct definition is essential to improve the PBM's predictive capability. In this work, we introduce a pipeline that provides an up-to-date estimate of the boundary conditions, starting from the pre-operative model of the patient's anatomy and the displacement undergone by points visible from an intra-operative vision sensor. The presented pipeline is experimentally validated in realistic conditions on ex vivo pararenal fat tissue manipulation. We demonstrate its capability to update a PBM with clinically acceptable performance, both in terms of accuracy and intra-operative time constraints.
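
    At its core, such an update is an inverse problem; a generic sketch (not the paper's specific pipeline) fits boundary-condition parameters so that the simulated displacements of the visible points match the observed ones, given an assumed handle to the biomechanical model:

        import numpy as np
        from scipy.optimize import minimize

        def update_boundary_conditions(simulate, observed_disp, k0):
            # simulate(k) is an assumed callable running the patient-specific
            # model with boundary-condition parameters k and returning the
            # (N, 3) displacements of the points visible to the vision sensor.
            def mismatch(k):
                return float(np.sum((simulate(k) - observed_disp) ** 2))
            result = minimize(mismatch, k0, method="Nelder-Mead")
            return result.x                       # updated boundary-condition estimate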