
    A Continuum Robot and Control Interface for Surgical Assist in Fetoscopic Interventions

    Twin-twin transfusion syndrome requires interventional treatment using a fetoscopically introduced laser to sever the shared blood supply between the fetuses. This delicate procedure relies on small instrumentation with limited articulation to guide the laser tip, and a narrow field of view makes it difficult to visualize all relevant vascular connections. In this letter, we report on a mechatronic design for a comanipulated instrument that combines concentric tube actuation with a larger manipulator constrained by a remote centre of motion. A stereoscopic camera is mounted at the distal tip and used for imaging. Our mechanism provides enhanced dexterity and stability of the imaging device. We demonstrate that the imaging system can be used for computing geometry and enhancing the view at the operating site. Results using electromagnetic sensors for verification, and comparison with visual odometry from the distal sensor, show that our system is promising and can be developed further to address multiple clinical needs in fetoscopic procedures.

    ToolNet: Holistically-Nested Real-Time Segmentation of Robotic Surgical Tools

    Real-time tool segmentation from endoscopic videos is an essential part of many computer-assisted robotic surgical systems and of critical importance in robotic surgical data science. We propose two novel deep learning architectures for automatic segmentation of non-rigid surgical instruments. Both methods take advantage of automated deep-learning-based multi-scale feature extraction while trying to maintain accurate segmentation quality at all resolutions. The two proposed methods encode the multi-scale constraint inside the network architecture: the first enforces it by cascaded aggregation of predictions, and the second by means of a holistically-nested architecture where the loss at each scale is taken into account in the optimization process. As the proposed methods are intended for real-time semantic labeling, both have a reduced number of parameters. We propose the use of parametric rectified linear units for semantic labeling in these small architectures to increase the regularization ability of the design and maintain segmentation accuracy without overfitting the training sets. We compare the proposed architectures against state-of-the-art fully convolutional networks and validate our methods using existing benchmark datasets, including ex vivo cases with phantom tissue and different robotic surgical instruments present in the scene. Our results show a statistically significant improvement in Dice Similarity Coefficient over previous instrument segmentation methods. We analyze our design choices and discuss the key drivers for improving accuracy. Comment: Paper accepted at IROS 201
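The Dice Similarity Coefficient used for evaluation above can be computed from binary masks as follows. This is the standard definition, not code from the paper; the function name and smoothing epsilon are ours:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks (values 0/1).

    DSC = 2|P ∩ T| / (|P| + |T|); eps guards against empty masks.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A perfect prediction scores 1.0 and a fully disjoint one scores (near) 0, which is why DSC is preferred over pixel accuracy for thin, sparse structures like instrument shafts.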

    Deep Sequential Mosaicking of Fetoscopic Videos

    Twin-to-twin transfusion syndrome treatment requires fetoscopic laser photocoagulation of placental vascular anastomoses to regulate blood flow to both fetuses. The limited field-of-view (FoV) and low visual quality during fetoscopy make it challenging to identify all vascular connections. Mosaicking can align multiple overlapping images to generate an image with increased FoV; however, existing techniques apply poorly to fetoscopy because of the low visual quality and texture paucity, and fail on longer sequences due to the drift accumulated over time. Deep learning techniques can help overcome these challenges. We therefore present a new generalized Deep Sequential Mosaicking (DSM) framework for fetoscopic videos captured in different settings, such as simulation, phantom, and real environments. DSM extends an existing deep image-based homography model to sequential data by proposing controlled data augmentation and outlier rejection methods. Unlike existing methods, DSM can handle visual variations due to specular highlights and reflections across adjacent frames, hence reducing the accumulated drift. We perform experimental validation and comparison using 5 diverse fetoscopic videos to demonstrate the robustness of our framework. Comment: Accepted at MICCAI 201
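The drift mentioned above arises because sequential mosaicking chains pairwise homographies frame by frame, so small per-frame estimation errors compound over the sequence. A minimal sketch of this accumulation; the convention that each pairwise homography maps frame i+1 into frame i is our assumption, not necessarily the paper's:

```python
import numpy as np

def accumulate_homographies(pairwise_H):
    """Chain pairwise homographies into frame-0 (reference) coordinates.

    pairwise_H[i] is a 3x3 homography mapping points in frame i+1 into
    frame i. Returns one transform per frame mapping it into frame 0.
    Any error in a single pairwise estimate propagates to every later
    frame -- this is the accumulated drift that DSM aims to reduce.
    """
    H_to_ref = [np.eye(3)]
    for H in pairwise_H:
        H_acc = H_to_ref[-1] @ H
        H_to_ref.append(H_acc / H_acc[2, 2])  # keep scale normalised
    return H_to_ref
```

With the accumulated transforms, every frame can be warped into the reference view and blended into one expanded-FoV mosaic.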

    FetReg: Placental Vessel Segmentation and Registration in Fetoscopy Challenge Dataset

    Fetoscopic laser photocoagulation is a widely used procedure for the treatment of Twin-to-Twin Transfusion Syndrome (TTTS), which occurs in monochorionic multiple pregnancies due to placental vascular anastomoses. This procedure is particularly challenging due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility due to fluid turbidity, variability in the light source, and unusual position of the placenta. These factors may lead to increased procedural time and incomplete ablation, resulting in persistent TTTS. Computer-assisted intervention may help overcome these challenges by expanding the fetoscopic field of view through video mosaicking and providing better visualization of the vessel network. However, research and development in this domain remain limited due to the unavailability of high-quality data that encode the intra- and inter-procedure variability. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg) challenge, we present a large-scale multi-centre dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms for the fetal environment, with a focus on creating drift-free mosaics from long-duration fetoscopy videos. In this paper, we provide an overview of the FetReg dataset, challenge tasks, evaluation metrics, and baseline methods for both segmentation and registration. Baseline results on the FetReg dataset show that our dataset poses interesting challenges, offering ample opportunity for the creation of novel methods and models through a community effort guided by the FetReg challenge.

    Investigating exploration for deep reinforcement learning of concentric tube robot control

    PURPOSE: Concentric tube robots are composed of multiple concentric, pre-curved, super-elastic, telescopic tubes that are compliant and have a small diameter, making them suitable for interventions that must be minimally invasive, such as fetal surgery. Combinations of rotation and extension of the tubes can alter the robot's shape, but the inverse kinematics are complex to model due to the challenge of incorporating friction, other tube interactions, and manufacturing imperfections. We propose a model-free reinforcement learning approach to form the inverse kinematics solution and directly obtain a control policy. METHOD: Three exploration strategies are evaluated for deep deterministic policy gradient with hindsight experience replay for concentric tube robots in simulation environments. The aim is to overcome the joint-to-Cartesian sampling bias and to scale with the number of robotic tubes. To compare strategies, the trained policy network is evaluated on selected Cartesian goals and the associated errors are analyzed. The learned control policy is demonstrated on trajectory-following tasks. RESULTS: Separation of extension and rotation joints for Gaussian exploration is required to overcome the Cartesian sampling bias. Parameter noise and Ornstein-Uhlenbeck noise were found to be the optimal strategies, with less than 1 mm error in all simulation environments. Various trajectories can be followed at high joint extension values with the policy learned under the optimal exploration strategy. In evaluation, our inverse kinematics solver has 0.44 mm extension and [Formula: see text] rotation error. CONCLUSION: We demonstrate the feasibility of effective model-free control for concentric tube robots. Arbitrary trajectories can be followed directly using the control policy, an important step towards overcoming the challenge of concentric tube robot control for clinical use in minimally invasive interventions.
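The separated Gaussian exploration the results describe can be illustrated by adding noise at different scales to the extension joints (millimetre range) and the rotation joints (radian range) of the action vector. The action layout and noise scales below are our assumptions for illustration, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def explore(action, n_tubes, sigma_ext=0.001, sigma_rot=0.1):
    """Add Gaussian exploration noise with separate scales per joint type.

    Assumed action layout for an n-tube concentric tube robot:
    [ext_1 .. ext_n, rot_1 .. rot_n], extensions in metres, rotations
    in radians. Using a single shared scale would either swamp the
    millimetre-scale extensions or barely perturb the rotations.
    """
    noise = np.concatenate([
        rng.normal(0.0, sigma_ext, n_tubes),   # extension joints
        rng.normal(0.0, sigma_rot, n_tubes),   # rotation joints
    ])
    return np.asarray(action) + noise
```

The same separation applies to the other strategies compared (parameter noise, Ornstein-Uhlenbeck): each needs per-joint-type scaling because the two joint types live on very different numeric ranges.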

    Robot-assisted Optical Ultrasound Scanning

    Optical ultrasound (OpUS), where ultrasound is both generated and received using light, can be integrated into very small diameter instruments, making it ideally suited to minimally invasive interventions. One-dimensional information can be obtained using a single pair of optical fibres comprising a source and a detector, but this can be difficult to interpret clinically. In this paper, we present a robot-assisted scanning solution in which a concentric tube robot manipulates an optical ultrasound probe along a consistent trajectory. A torque coil is utilised as a buffer between the curved nitinol tube and the probe to prevent torsion and maintain the probe's axial orientation while the tube is rotating. The design and control of the scanning mechanism are presented, along with the integration of the mechanism with a fibre-based imaging probe. Trajectory repeatability is assessed using electromagnetic tracking, and a technique to calibrate the transformation between imaging and robot coordinates using a known model is presented. Finally, we show example images of 3D printed phantoms generated by collecting multiple OpUS A-scans within the same 3D scene, illustrating how robot-assisted scanning can expand the field of view.

    Haptic Guidance Based on All-Optical Ultrasound Distance Sensing for Safer Minimally Invasive Fetal Surgery

    By intervening during the early stage of gestation, fetal surgeons aim to correct or minimize the effects of congenital disorders. Compared to postnatal treatment of these disorders, such early interventions can save the life of the fetus and improve the quality of life of the newborn. However, fetal surgery is considered one of the most challenging disciplines within Minimally Invasive Surgery (MIS), owing to factors such as the fragility of the anatomic features, poor visibility, limited manoeuvrability, and extreme requirements in terms of instrument handling with precise positioning. This work is centred on a fetal laser surgery procedure treating placental disorders. It proposes the use of haptic guidance to enhance the overall safety of this procedure and to simplify instrument handling. A method is described that provides effective guidance by installing a forbidden-region virtual fixture over the placenta, thereby safeguarding adequate clearance between the instrument tip and the placenta. Through a novel application of all-optical ultrasound distance sensing, in which transmission and reception are performed with fibre optics, this method can rely solely on intraoperatively acquired data. The added value of the guidance approach, in terms of safety and performance, is demonstrated in a series of experiments with a robotic platform.
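A forbidden-region virtual fixture of the kind described can be sketched as a spring-like repulsion that engages when the sensed tip-to-placenta distance drops below a safety threshold. The gain and threshold below are illustrative placeholders, not the paper's parameters:

```python
def guidance_force(distance, d_safe, k=50.0):
    """Repulsive haptic force (N) from a forbidden-region virtual fixture.

    `distance` is the ultrasound-sensed tip-to-tissue distance (m) and
    `d_safe` the minimum allowed clearance (m). The force is zero while
    the tip stays outside the forbidden region and grows linearly with
    penetration depth once the boundary is crossed (stiffness k, N/m).
    """
    penetration = d_safe - distance
    return k * penetration if penetration > 0 else 0.0
```

Rendering this force on the surgeon's handle pushes the instrument back toward the safe region without restricting motion elsewhere, which is the "guidance rather than constraint" behaviour virtual fixtures aim for.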

    Design and Modeling of Multi-Arm Continuum Robots

    Continuum robots are snake-like systems able to deliver optimal therapies to pathologies deep inside the human body by following complex 3D paths. They show promise when anatomical pathways need to be traversed, thanks to their enhanced flexibility and dexterity, and offer advantages when deployed in single-port surgery. This PhD thesis concerns the development and modelling of multi-arm and hybrid continuum robots for medical interventions. The flexibility and steerability of the robot’s end-effector are achieved through concentric tube technology and push/pull technology. Medical robotic prototypes have been designed as proofs of concept and testbeds for the proposed theoretical works. System design considers the limitations and constraints of the surgical procedures for which the systems were proposed. Specifically, two surgical applications are considered. Our first prototype was designed to deliver multiple tools to the eye cavity for deep orbital interventions, focusing on a currently invasive intervention named Optic Nerve Sheath Fenestration (ONSF). This thesis presents the end-to-end design, engineering, and modelling of the prototype. The developed prototype is the first system proposed to tackle the challenges arising in ONSF: the limited workspace, the need for enhanced flexibility and dexterity, the danger of harming tissue with rigid instruments, and extensive manipulation of the eye. It was designed taking into account the clinical requirements and constraints, while theoretical work employing Cosserat rod theory predicts the shape of the continuum end-effector. Experimental runs, including ex vivo evaluations, mock-up surgical scenarios, and tests with and without loading conditions, prove the concept of accessing the eye cavity. Moreover, a continuum robot for thoracic interventions employing push/pull technology was designed and manufactured. The developed system can reach deep-seated pathologies in the lungs and access regions of the bronchial tree that are inaccessible with rigid, straight instruments, whether robotically or manually actuated. A geometrically exact model of the robot that considers both its geometry and the mechanical properties of its backbones is presented. It can predict the shape of the bronchoscope without the constant curvature assumption. The proposed model can also accurately predict the robot’s shape and micro-scale movements, in contrast to the classic geometric model, which provides an accurate description of the robot’s differential kinematics only for large-scale movements.
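For reference, geometrically exact models of this kind are typically built on the static Cosserat rod equations, commonly written as follows (standard notation; not necessarily the exact formulation used in the thesis):

```latex
\begin{aligned}
\dot{p}(s) &= R(s)\,v(s), &\qquad \dot{R}(s) &= R(s)\,\hat{u}(s),\\
\dot{n}(s) &= -f(s), &\qquad \dot{m}(s) &= -\dot{p}(s)\times n(s) - l(s),
\end{aligned}
```

with constitutive laws $n = R\,K_{se}(v - v^{*})$ and $m = R\,K_{bt}(u - u^{*})$, where $p(s)$ and $R(s)$ give the backbone pose along arclength $s$, $v$ and $u$ are the linear and angular rates of change of the frame, $n$ and $m$ are the internal force and moment, $f$ and $l$ are distributed loads, $K_{se}$ and $K_{bt}$ are stiffness matrices, and $v^{*}, u^{*}$ describe the precurved reference shape. Integrating these equations yields the backbone shape without assuming constant curvature.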

    Combining Differential Kinematics and Optical Flow for Automatic Labeling of Continuum Robots in Minimally Invasive Surgery

    The segmentation of continuum robots in medical images can be of interest for analyzing surgical procedures or for controlling the robots. However, the automatic segmentation of continuous and flexible shapes is not an easy task: on the one hand, conventional approaches are not adapted to the specificities of these instruments, such as imprecise kinematic models; on the other hand, deep-learning-based techniques have shown interesting capabilities but need many manually labeled images. In this article, we propose a novel approach for segmenting continuum robots in endoscopic images that requires no prior on the instrument's visual appearance and no manual annotation of images. The method relies on combining the kinematic and differential kinematic models of the robot with an analysis of optical flow in the images. A cost function aggregating information from the acquired image, from optical flow, and from the robot encoders is optimized using particle swarm optimization, providing estimated pose parameters of the continuum instrument and a mask defining the instrument in the image. In addition, temporal consistency is assessed in order to improve the stochastic optimization and reject outliers. The proposed approach has been tested on the robotic instruments of a flexible endoscopy platform, both for benchtop acquisitions and for an in vivo video. The results show the ability of the technique to correctly segment the instruments without a prior and under challenging conditions. The obtained segmentation can be used for several applications, for instance providing automatic labels for machine learning techniques.
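The particle swarm optimization step named above can be sketched in a few lines: each particle is a candidate pose-parameter vector, scored by the aggregated image/flow/encoder cost. The hyper-parameters and the simple box bounds below are illustrative defaults, not the paper's settings:

```python
import numpy as np

def pso_minimise(cost, dim, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0), seed=0):
    """Minimal particle swarm optimisation over `dim` parameters.

    `cost` maps a parameter vector to a scalar, e.g. an aggregated
    residual built from the image, the optical flow, and the encoders.
    Returns the best parameter vector found and its cost.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))  # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # per-particle best
    pbest_f = np.array([cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()        # swarm-wide best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # inertia + pull toward personal best + pull toward global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```

Because PSO only needs cost evaluations, it tolerates the non-smooth, multi-modal cost surfaces that arise when comparing a rendered instrument mask against image and flow data.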

    Hand-eye calibration with a remote centre of motion

    In the eye-in-hand robot configuration, hand-eye calibration plays a vital role in completing the link between the robot and camera coordinate systems. Calibration algorithms are mature and provide accurate transformation estimates for an effective camera-robot link, but they rely on a sufficiently wide range of calibration data to avoid errors and degenerate configurations. Such data can be difficult to obtain in the context of keyhole surgical robots, because they are mechanically constrained to move around a remote centre of motion (RCM) located at the trocar port. The trocar limits the range of feasible calibration poses and results in ill-conditioned hand-eye constraints. In this letter, we propose a new approach that deals with this problem by incorporating the RCM constraints into the hand-eye formulation. We show that this not only avoids ill-conditioned constraints but is also more accurate than classic hand-eye calibration with free 6DoF motion, because it solves simpler equations that take advantage of the reduced DoF. We validate our method using simulation to test numerical stability and a physical implementation on an RCM-constrained KUKA LBR iiwa 14 R820 equipped with a NanEye stereo camera.
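Classic hand-eye calibration, the baseline the letter improves on, estimates the camera-to-hand transform X from the relation A_i X = X B_i, where A_i and B_i are paired relative motions of the robot flange and the camera. A small residual check of that relation (function name ours, using 4x4 homogeneous transforms):

```python
import numpy as np

def handeye_residual(A_list, B_list, X):
    """Sum of Frobenius-norm residuals of the hand-eye relation A X = X B.

    A_list: relative robot (hand) motions as 4x4 homogeneous transforms.
    B_list: corresponding relative camera motions.
    X: candidate 4x4 hand-eye transform; residual is ~0 for the true X
    given noise-free motion pairs.
    """
    return sum(np.linalg.norm(A @ X - X @ B)
               for A, B in zip(A_list, B_list))
```

With RCM-constrained motion, the A_i span too narrow a range for this generic residual to constrain X well in all directions, which is the ill-conditioning the letter addresses by building the RCM constraint into the formulation itself.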