109 research outputs found
Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation
In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is
required for subsurface visualisation to characterise the state of the tissue.
However, scanning of large tissue surfaces in the presence of deformation is a
challenging task for the surgeon. Recently, robot-assisted local tissue
scanning has been investigated for motion stabilisation of imaging probes to
facilitate the capturing of good quality images and reduce the surgeon's
cognitive load. Nonetheless, these approaches require the tissue surface to be
static or deform with periodic motion. To eliminate these assumptions, we
propose a visual servoing framework for autonomous tissue scanning, able to
deal with free-form tissue deformation. The 3D structure of the surgical scene
is recovered and a feature-based method is proposed to estimate the motion of
the tissue in real-time. A desired scanning trajectory is manually defined on a
reference frame and continuously updated using projective geometry to follow
the tissue motion and control the movement of the robotic arm. The advantage of
the proposed method is that it does not require the learning of the tissue
motion prior to scanning and can deal with free-form deformation. We deployed
this framework on the da Vinci surgical robot using the da Vinci Research Kit
(dVRK) for ultrasound tissue scanning. Since the framework does not rely on
information from the ultrasound data, it can be easily extended to other
probe-based imaging modalities. Comment: 7 pages, 5 figures, ICRA 202
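The trajectory-update step described above can be sketched with projective geometry: a scanning path defined on a reference frame is re-mapped into the current frame by a homography estimated from the tracked tissue motion. This is a minimal illustration, not the authors' implementation; the function name and the use of a plain homography are assumptions.

```python
import numpy as np

def update_trajectory(points_ref, H):
    """Map a scanning trajectory defined on the reference frame into the
    current frame using a homography H (projective geometry).

    points_ref : (N, 2) array of pixel coordinates on the reference frame.
    H          : (3, 3) homography from reference frame to current frame.
    Returns an (N, 2) array of updated pixel coordinates.
    """
    n = points_ref.shape[0]
    homo = np.hstack([points_ref, np.ones((n, 1))])   # to homogeneous coords
    mapped = (H @ homo.T).T                           # apply projective map
    return mapped[:, :2] / mapped[:, 2:3]             # back to Euclidean

# Example: a pure-translation homography shifting the path by (5, -3) pixels
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0,  1.0]])
traj = np.array([[10.0, 10.0], [20.0, 15.0]])
updated = update_trajectory(traj, H)
```

In the full system the homography (or a more general warp) would be re-estimated every frame from the tracked surface features, and the updated path sent to the robot controller.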
Registration-free simultaneous catheter and environment modelling
© Springer International Publishing AG 2016. Endovascular procedures are challenging to perform due to the complexity and difficulty in catheter manipulation. The simultaneous recovery of the 3D structure of the vasculature and the catheter position and orientation intra-operatively is necessary for catheter control and navigation. State-of-the-art Simultaneous Catheter and Environment Modelling provides robust and real-time 3D vessel reconstruction based on real-time intravascular ultrasound (IVUS) imaging and electromagnetic (EM) sensing, but still relies on accurate registration between EM and pre-operative data. In this paper, a registration-free vessel reconstruction method is proposed for endovascular navigation. In the optimisation framework, the EM-CT registration is estimated and updated intra-operatively together with the 3D vessel reconstruction from IVUS, EM and pre-operative data, and thus does not require explicit registration. The proposed algorithm can also deal with global (patient) motion and periodic deformation caused by cardiac motion. Phantom and in vivo experiments validate the accuracy of the algorithm and the results demonstrate the potential clinical value of the technique.
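The EM-CT registration that the framework updates intra-operatively can be illustrated with a standard least-squares rigid alignment between corresponding point sets (the Kabsch/SVD method). This is a generic sketch of the registration sub-problem, not the paper's joint optimisation.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t, via the
    Kabsch/SVD method. src, dst: (N, 3) corresponding points, e.g.
    EM-tracked catheter positions vs. the pre-operative CT centreline."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In a registration-free setting, an update like this would be folded into the same objective as the vessel reconstruction rather than solved once up front.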
Online tracking and retargeting with applications to optical biopsy in gastrointestinal endoscopic examinations
With recent advances in biophotonics, techniques such as narrow band imaging, confocal laser endomicroscopy, fluorescence spectroscopy, and optical coherence tomography, can be combined with normal white-light endoscopes to provide in vivo microscopic tissue characterisation, potentially avoiding the need for offline histological analysis. Despite the advantages of these techniques to provide online optical biopsy in situ, it is challenging for gastroenterologists to retarget the optical biopsy sites during endoscopic examinations. This is because optical biopsy does not leave any mark on the tissue. Furthermore, typical endoscopic cameras only have a limited field-of-view and the biopsy sites often enter or exit the camera view as the endoscope moves. In this paper, a framework for online tracking and retargeting is proposed based on the concept of tracking-by-detection. An online detection cascade is proposed where a random binary descriptor using Haar-like features is included as a random forest classifier. For robust retargeting, we have also proposed a RANSAC-based location verification component that incorporates shape context. The proposed detection cascade can be readily integrated with other temporal trackers. Detailed performance evaluation on in vivo gastrointestinal video sequences demonstrates the performance advantage of the proposed method over the current state-of-the-art.
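A random binary descriptor built from Haar-like rectangle comparisons, of the kind used inside such a detection cascade, can be sketched as follows; the sampling scheme and function names here are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_haar_tests(patch_size, n_tests):
    """Sample random pairs of rectangles inside a square patch; each binary
    test compares the mean intensities of the two rectangles."""
    tests = []
    for _ in range(n_tests):
        boxes = []
        for _ in range(2):
            y0, x0 = rng.integers(0, patch_size - 2, size=2)
            h, w = rng.integers(1, patch_size - max(y0, x0), size=2)
            boxes.append((int(y0), int(x0), int(h), int(w)))
        tests.append(tuple(boxes))
    return tests

def describe(patch, tests):
    """Binary descriptor: bit i is 1 iff rectangle A is brighter than B."""
    bits = []
    for (ay, ax, ah, aw), (by, bx, bh, bw) in tests:
        a = patch[ay:ay + ah, ax:ax + aw].mean()
        b = patch[by:by + bh, bx:bx + bw].mean()
        bits.append(int(a > b))
    return np.array(bits, dtype=np.uint8)
```

Because each bit is a cheap intensity comparison, descriptors like this can be evaluated fast enough for online detection, and the bits can feed a random-forest style classifier directly.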
SCEM+: Real-Time Robust Simultaneous Catheter and Environment Modeling for Endovascular Navigation
© 2016 IEEE. Endovascular procedures are characterised by significant challenges mainly due to the complexity in catheter control and navigation. Real-time recovery of the 3-D structure of the vasculature is necessary to visualise the interaction between the catheter and its surrounding environment to facilitate catheter manipulations. State-of-the-art intraoperative vessel reconstruction approaches are increasingly relying on nonionising imaging techniques such as optical coherence tomography (OCT) and intravascular ultrasound (IVUS). To enable accurate recovery of vessel structures and to deal with sensing errors and abrupt catheter motions, this letter presents a robust and real-time vessel reconstruction scheme for endovascular navigation based on IVUS and electromagnetic (EM) tracking. It is formulated as a nonlinear optimisation problem, which considers the uncertainty in both the IVUS contour and the EM pose, as well as vessel morphology provided by preoperative data. Detailed phantom validation is performed and the results demonstrate the potential clinical value of the technique.
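How uncertainty in the IVUS contour and the EM pose can be balanced is illustrated below with simple inverse-variance fusion; the paper's scheme is a full nonlinear optimisation, so this is only a sketch of the weighting idea behind it.

```python
import numpy as np

def fuse(measurements, variances):
    """Inverse-variance weighted fusion of several estimates of the same
    quantity (e.g. a vessel-wall point from the IVUS contour vs. the
    EM-predicted position). Less certain estimates get smaller weights.

    measurements : (K, D) array of K estimates of a D-dimensional point.
    variances    : (K,) array of scalar uncertainties, one per estimate.
    Returns the fused point and its (reduced) variance.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    x = np.asarray(measurements, dtype=float)
    fused = (w[:, None] * x).sum(axis=0) / w.sum()
    fused_var = 1.0 / w.sum()
    return fused, fused_var
```

With equal variances this reduces to a plain average; as one sensor becomes noisier its contribution shrinks, which is the behaviour a robust reconstruction scheme needs under sensing errors.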
Pulmonary vasospasm in systemic sclerosis: noninvasive techniques for detection
In a subgroup of patients with systemic sclerosis (SSc), vasospasm affecting the pulmonary circulation may contribute to worsening respiratory symptoms, including dyspnea. Noninvasive assessment of pulmonary blood flow (PBF), utilizing inert-gas rebreathing (IGR) and dual-energy computed-tomography pulmonary angiography (DE-CTPA), may be useful for identifying pulmonary vasospasm. Thirty-one participants (22 SSc patients and 9 healthy volunteers) underwent PBF assessment with IGR and DE-CTPA at baseline and after provocation with a cold-air inhalation challenge (CACh). Before the study investigations, participants were assigned to subgroups: group A included SSc patients who reported increased breathlessness after exposure to cold air (n = 11), group B included SSc patients without cold-air sensitivity (n = 11), and group C included the healthy volunteers. Median change in PBF from baseline was compared between groups A, B, and C after CACh. Compared with groups B and C, in group A there was a significant decline in median PBF from baseline at 10 minutes (−10%; range: −52.2% to 4.0%; P < 0.01), 20 minutes (−17.4%; −27.9% to 0.0%; P < 0.01), and 30 minutes (−8.5%; −34.4% to 2.0%; P < 0.01) after CACh. There was no significant difference in median PBF change between groups B or C at any time point and no change in pulmonary perfusion on DE-CTPA. Reduction in pulmonary blood flow following CACh suggests that pulmonary vasospasm may be present in a subgroup of patients with SSc and may contribute to worsening dyspnea on exposure to cold.
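The headline statistic, the median percentage change in PBF from baseline within a subgroup, can be computed as follows (the values shown are illustrative, not study data):

```python
import numpy as np

def median_pbf_change(baseline, followup):
    """Median percentage change in pulmonary blood flow from baseline,
    per participant, as compared between subgroups after the cold-air
    inhalation challenge (CACh)."""
    baseline = np.asarray(baseline, dtype=float)
    followup = np.asarray(followup, dtype=float)
    change = 100.0 * (followup - baseline) / baseline   # % change per subject
    return float(np.median(change))

# Illustrative subgroup: three subjects measured 10 minutes after CACh
example = median_pbf_change([10.0, 10.0, 10.0], [9.0, 8.0, 11.0])
```

The median (rather than the mean) is the appropriate summary here because the reported ranges are wide and skewed.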
H-Net: unsupervised attention-based stereo depth estimation leveraging epipolar geometry
Depth estimation from a stereo image pair has become one of the most explored applications in computer vision, with most previous methods relying on fully supervised learning settings. However, due to the difficulty in acquiring accurate and scalable ground truth data, the training of fully supervised methods is challenging. As an alternative, self-supervised methods are becoming more popular to mitigate this challenge. In this paper, we introduce the H-Net, a deep-learning framework for unsupervised stereo depth estimation that leverages epipolar geometry to refine stereo matching. For the first time, a Siamese autoencoder architecture is used for depth estimation which allows mutual information between rectified stereo images to be extracted. To enforce the epipolar constraint, the mutual epipolar attention mechanism has been designed which gives more emphasis to correspondences of features that lie on the same epipolar line while learning mutual information between the input stereo pair. Stereo correspondences are further enhanced by incorporating semantic information into the proposed attention mechanism. More specifically, the optimal transport algorithm is used to suppress attention and eliminate outliers in areas not visible in both cameras. Extensive experiments on KITTI2015 and Cityscapes show that the proposed modules are able to improve the performance of the unsupervised stereo depth estimation methods while closing the gap with the fully supervised approaches.
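The core of the mutual epipolar attention idea, restricting attention to the epipolar line shared by a rectified stereo pair (which, after rectification, is simply the same image row), can be sketched as follows. This is a simplified numpy illustration, not the H-Net implementation.

```python
import numpy as np

def epipolar_attention(feat_l, feat_r):
    """Row-wise ("epipolar") attention for a rectified stereo pair: each
    left-image feature attends only to right-image features on the same
    row, because rectification aligns epipolar lines with image rows.

    feat_l, feat_r : (H, W, C) feature maps from the two views.
    Returns (H, W, C): right features aggregated for each left position.
    """
    H, W, C = feat_l.shape
    out = np.empty_like(feat_l, dtype=float)
    for y in range(H):                                   # one epipolar line per row
        scores = feat_l[y] @ feat_r[y].T / np.sqrt(C)    # (W, W) similarities
        scores -= scores.max(axis=1, keepdims=True)      # numerically stable softmax
        attn = np.exp(scores)
        attn /= attn.sum(axis=1, keepdims=True)
        out[y] = attn @ feat_r[y]                        # weighted right features
    return out
```

In the full model this constraint is applied inside the Siamese encoder, and the optimal-transport step then suppresses attention mass for pixels with no valid counterpart in the other view.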
Regularising disparity estimation via multi task learning with structured light reconstruction
3D reconstruction is a useful tool for surgical planning and guidance.
However, the lack of available medical data stunts research and development in
this field, as supervised deep learning methods for accurate disparity
estimation rely heavily on large datasets containing ground truth information.
Alternative approaches to supervision have been explored, such as
self-supervision, which can reduce or remove entirely the need for ground
truth. However, no proposed alternatives have demonstrated performance
capabilities close to what would be expected from a supervised setup. This work
aims to alleviate this issue. In this paper, we investigate the learning of
structured light projections to enhance the development of direct disparity
estimation networks. We show for the first time that it is possible to
accurately learn the projection of structured light on a scene, implicitly
learning disparity. Secondly, we explore the use of a multi-task
learning (MTL) framework for the joint training of structured light and
disparity. We present results which show that MTL with structured light
improves disparity training without increasing the number of model parameters.
Our MTL setup outperformed the single-task learning (STL) network in every
validation test. Notably, in the medical generalisation test, the STL error was
1.4 times worse than that of the best MTL performance. The benefit of using MTL
is emphasised when the training data is limited. A dataset containing
stereoscopic images, disparity maps and structured light projections on medical
phantoms and ex vivo tissue was created for evaluation together with virtual
scenes. This dataset will be made publicly available in the future.
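The joint MTL objective described above can be sketched as a weighted sum of a disparity term and a structured-light reconstruction term computed from two decoder heads sharing one encoder. The L1 losses and the balancing weight `alpha` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def l1(pred, target):
    """Mean absolute error between a prediction and its target."""
    return float(np.mean(np.abs(pred - target)))

def mtl_loss(disp_pred, disp_gt, light_pred, light_gt, alpha=0.5):
    """Joint multi-task objective: the disparity error plus the
    structured-light reconstruction error, balanced by alpha. Because the
    two heads share an encoder, the auxiliary structured-light task adds
    supervision without substantially increasing model parameters."""
    return alpha * l1(disp_pred, disp_gt) + (1.0 - alpha) * l1(light_pred, light_gt)
```

During training, gradients from both terms flow into the shared encoder, which is where the regularising effect on disparity estimation comes from.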
Caveats on the first-generation da Vinci Research Kit: latent technical constraints and essential calibrations
Telesurgical robotic systems provide a well established form of assistance in
the operating theater, with evidence of growing uptake in recent years. Until
now, the da Vinci surgical system (Intuitive Surgical Inc, Sunnyvale,
California) has been the most widely adopted robot of this kind, with more than
6,700 systems in current clinical use worldwide [1]. To accelerate research on
robotic-assisted surgery, the retired first-generation da Vinci robots have
been redeployed for research use as "da Vinci Research Kits" (dVRKs), which
have been distributed to research institutions around the world to support both
training and research in the sector. In the past ten years, a great amount of
research on the dVRK has been carried out across a vast range of research
topics. During this extensive and distributed process, common technical issues
have been identified that are buried deep within the dVRK research and
development architecture, and were found to be common among dVRK user feedback,
regardless of the breadth and disparity of research directions identified. This
paper gathers and analyzes the most significant of these, with a focus on the
technical constraints of the first-generation dVRK, which both existing and
prospective users should be aware of before embarking onto dVRK-related
research. The hope is that this review will aid users in identifying and
addressing common limitations of the systems promptly, thus helping to
accelerate progress in the field. Comment: 15 pages, 7 figures
Towards autonomous control of surgical instruments using adaptive-fusion tracking and robot self-calibration
The ability to track surgical instruments in real time is crucial for autonomous Robotic Assisted Surgery (RAS). Recently, the fusion of visual and kinematic data has been proposed to track surgical instruments. However, these methods assume that both sensors are equally reliable, and cannot successfully handle cases where there are significant perturbations in one of the sensors' data. In this paper, we address this problem by proposing an enhanced fusion-based method. The main advantage of our method is that it can adjust fusion weights to adapt to sensor perturbations and failures. Another problem is that before performing an autonomous task, these robots have to be repeatedly recalibrated by a human for each new patient to estimate the transformations between the different robotic arms. To address this problem, we propose a self-calibration algorithm that empowers the robot to autonomously calibrate the transformations by itself at the beginning of the surgery. We applied our fusion and self-calibration algorithms to autonomous ultrasound tissue scanning and we showed that the robot achieved stable ultrasound imaging when using our method. Our performance evaluation shows that our proposed method outperforms the state-of-the-art both in normal and challenging situations.
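The adaptive-fusion idea, adjusting fusion weights when one sensor is perturbed, can be sketched by down-weighting whichever estimate deviates most from a short history of fused positions. The Gaussian weighting, the fallback rule, and the `sigma` value are illustrative assumptions, not the authors' method.

```python
import numpy as np

def adaptive_fuse(visual, kinematic, history, sigma=5.0):
    """Fuse visual and kinematic tool-tip position estimates, with each
    sensor weighted by how plausible its reading is relative to the mean
    of recent fused positions (a simple proxy for perturbation/failure).

    visual, kinematic : (D,) current position estimates from each sensor.
    history           : (K, D) recent fused positions.
    """
    ref = np.mean(history, axis=0)
    wv = np.exp(-np.sum((visual - ref) ** 2) / (2.0 * sigma ** 2))
    wk = np.exp(-np.sum((kinematic - ref) ** 2) / (2.0 * sigma ** 2))
    if wv + wk < 1e-12:            # both implausible: fall back to kinematics
        return np.asarray(kinematic, dtype=float)
    return (wv * np.asarray(visual) + wk * np.asarray(kinematic)) / (wv + wk)
```

When both sensors agree with recent history the result is close to an average; when one sensor jumps (e.g. a vision dropout or a kinematic offset), its weight collapses and the fused estimate follows the other sensor.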
Detecting the Sensing Area of A Laparoscopic Probe in Minimally Invasive Cancer Surgery
In surgical oncology, it is challenging for surgeons to identify lymph nodes
and completely resect cancer even with pre-operative imaging systems like PET
and CT, because of the lack of reliable intraoperative visualization tools.
Endoscopic radio-guided cancer detection and resection has recently been
evaluated whereby a novel tethered laparoscopic gamma detector is used to
localize a preoperatively injected radiotracer. This can both enhance the
endoscopic imaging and complement preoperative nuclear imaging data. However,
gamma activity visualization is challenging to present to the operator because
the probe is non-imaging and it does not visibly indicate the activity
origination on the tissue surface. Initial attempts using segmentation or
geometric methods failed, but led to the discovery that the problem could be
resolved by leveraging high-dimensional image features and probe position information. To
demonstrate the effectiveness of this solution, we designed and implemented a
simple regression network that successfully addressed the problem. To further
validate the proposed solution, we acquired and publicly released two datasets
captured using a custom-designed, portable stereo laparoscope system. Through
intensive experimentation, we demonstrated that our method can successfully and
effectively detect the sensing area, establishing a new performance benchmark.
Code and data are available at
https://github.com/br0202/Sensing_area_detection.git Comment: Accepted by MICCAI 202
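The "simple regression network" can be sketched as a small MLP mapping concatenated image features and probe position information to a 2D image location. The architecture, dimensions, and the random (untrained) weights here are assumptions for illustration; the released code linked above contains the actual model.

```python
import numpy as np

rng = np.random.default_rng(42)

class SensingAreaRegressor:
    """Minimal two-layer regression network mapping a concatenated
    image-feature vector and probe-pose vector to a 2D image location
    (the estimated sensing area of the gamma probe). Weights here are
    random; in practice they would be learned from the released datasets."""

    def __init__(self, feat_dim, pose_dim, hidden=32):
        d = feat_dim + pose_dim
        self.W1 = rng.normal(0.0, 1.0 / np.sqrt(d), (d, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), (hidden, 2))
        self.b2 = np.zeros(2)

    def predict(self, feat, pose):
        x = np.concatenate([feat, pose])
        h = np.maximum(0.0, x @ self.W1 + self.b1)   # ReLU hidden layer
        return h @ self.W2 + self.b2                 # (u, v) pixel estimate
```

The key design point the abstract highlights is the input, image features combined with probe position, rather than any sophistication in the regressor itself.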