
    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, enhancing the surgeon's navigation capabilities by allowing observation beyond exposed tissue surfaces and providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
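
    As a concrete illustration of the passive-stereo branch of the techniques such a review covers, the sketch below computes a disparity map from a rectified laparoscopic stereo pair with OpenCV's semi-global matcher and reprojects it to a 3D surface point cloud. It is a minimal sketch only: the file names, matcher parameters and reprojection matrix Q are assumed inputs, not values taken from any of the reviewed methods.

```python
# Minimal passive-stereo surface reconstruction sketch (illustrative only).
import cv2
import numpy as np

left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; parameter choices are illustrative defaults.
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,        # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

# Q is the 4x4 disparity-to-depth matrix produced by stereo rectification
# (cv2.stereoRectify); here it is assumed to be available from calibration.
Q = np.load("Q_matrix.npy")
points_3d = cv2.reprojectImageTo3D(disparity, Q)
valid = disparity > 0
surface = points_3d[valid]     # Nx3 soft-tissue surface points in camera frame
```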

    Sliding to predict: vision-based beating heart motion estimation by modeling temporal interactions

    Purpose: Technical advancements have become part of modern medical solutions, enabling better surgical alternatives that benefit patients. In cardiovascular surgery in particular, robotic surgical systems enable surgeons to perform delicate procedures on a beating heart, avoiding the complications of cardiac arrest. This advantage comes at the price of dealing with a dynamic target, which presents technical challenges for the surgical system. In this work, we propose a solution for cardiac motion estimation. Methods: Our estimation approach uses a variational framework that guarantees preservation of the complex anatomy of the heart. An advantage of our approach is that it accounts for different disturbances, such as specular reflections and occlusion events. This is achieved by a preprocessing step that eliminates specular highlights and a prediction step, based on a conditional restricted Boltzmann machine, that recovers information missing due to partial occlusions. Results: We carried out exhaustive experiments on two datasets, one from a phantom and the other from an in vivo procedure. The results show that our visual approach reaches an average minimum on the order of 10^-7 while preserving the heart's anatomical structure and providing stable values for the Jacobian determinant, ranging from 0.917 to 1.015. We also show that our specular elimination approach reaches an accuracy of 99% compared to a ground truth. In terms of prediction, our approach compared favorably against two well-known predictors, NARX and EKF, giving the lowest average RMSE of 0.071. Conclusion: Our approach avoids the risks of using mechanical stabilizers and can also be effective for acquiring the motion of organs other than the heart, such as the lung, or of other deformable objects.
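
    The following minimal sketch illustrates two of the ingredients described above in generic form, not the authors' implementation: specular-highlight removal by thresholding and inpainting, and a per-pixel Jacobian-determinant check on an estimated displacement field (values near 1 indicate the locally invertible, anatomy-preserving behaviour the abstract reports). The thresholds and the displacement representation are assumptions.

```python
# Illustrative sketch of specular-highlight removal and a Jacobian check.
import cv2
import numpy as np

def remove_specular_highlights(bgr, value_thresh=230, sat_thresh=40):
    """Mask bright, low-saturation pixels and inpaint them."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = ((hsv[..., 2] >= value_thresh) &
            (hsv[..., 1] <= sat_thresh)).astype(np.uint8) * 255
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8))
    return cv2.inpaint(bgr, mask, 3, cv2.INPAINT_TELEA)

def jacobian_determinant(u, v):
    """Determinant of the Jacobian of the mapping (x, y) -> (x + u, y + v).

    u, v: dense displacement components (H x W). Values close to 1 indicate a
    locally invertible, roughly area-preserving deformation.
    """
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    return (1.0 + du_dx) * (1.0 + dv_dy) - du_dy * dv_dx
```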

    Towards retrieving force feedback in robotic-assisted surgery: a supervised neuro-recurrent-vision approach

    Robotic-assisted minimally invasive surgeries have gained considerable popularity over conventional procedures, as they offer many benefits to both surgeons and patients. Nonetheless, they still suffer from limitations that affect their outcome. One of them is the lack of force feedback, which restricts the surgeon's sense of touch and might reduce precision during a procedure. To overcome this limitation, we propose a novel force estimation approach that combines a vision-based solution with supervised learning to estimate the applied force and provide the surgeon with a suitable representation of it. The proposed solution starts by extracting the geometry of motion of the heart's surface, minimizing an energy functional to recover its 3D deformable structure. A deep network based on an LSTM-RNN architecture is then used to learn the relationship between the extracted visual-geometric information and the applied force, and to find an accurate mapping between the two. Our force estimation solution avoids the drawbacks usually associated with force-sensing devices, such as biocompatibility and integration issues. We evaluate our approach on phantom and realistic tissues and report an average root-mean-square error of 0.02 N.
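
    A minimal sketch of the kind of sequence model described above is given below, assuming a PyTorch LSTM that maps a window of visual-geometric feature vectors to a 3D force estimate. The feature dimension, layer sizes and training loss are illustrative assumptions, not the authors' architecture.

```python
# Sketch of an LSTM-based mapping from visual-geometric features to force.
import torch
import torch.nn as nn

class ForceEstimator(nn.Module):
    def __init__(self, feature_dim=64, hidden_dim=128, force_dim=3):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_dim, force_dim)

    def forward(self, features):          # features: (batch, time, feature_dim)
        out, _ = self.lstm(features)
        return self.head(out[:, -1, :])   # force estimate at the last time step

# Training would minimize the RMSE between predicted and measured forces:
model = ForceEstimator()
x = torch.randn(8, 30, 64)               # dummy batch: 8 sequences of 30 frames
target = torch.zeros(8, 3)               # placeholder force-sensor readings
rmse = torch.sqrt(nn.functional.mse_loss(model(x), target))
```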

    MIS-SLAM: Real-Time Large-Scale Dense Deformable SLAM System in Minimal Invasive Surgery Based on Heterogeneous Computing

    © 2016 IEEE. Real-time simultaneous localization and dense mapping is very helpful for providing virtual and augmented reality to surgeons or even surgical robots. In this letter, we propose MIS-SLAM: a complete real-time large-scale dense deformable SLAM system with a stereoscope in minimally invasive surgery (MIS), based on heterogeneous computing that makes full use of both CPU and GPU. The otherwise idle CPU runs ORB-SLAM to provide a robust global pose, and strategies are adopted to integrate the CPU and GPU modules. We solve the key problem raised in previous work, namely that fast scope movement and blurry images cause scope tracking to fail. Benefiting from the improved localization, MIS-SLAM achieves large-scale scope localization and dense mapping in real time. It transforms and deforms the current model and incrementally fuses new observations while keeping vivid texture. In vivo experiments conducted on publicly available datasets, presented in the form of videos, demonstrate the feasibility and practicality of MIS-SLAM for potential clinical purposes.
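
    MIS-SLAM couples a full ORB-SLAM pipeline on the CPU with GPU-based deformable fusion; the sketch below only illustrates the sparse ORB front end conceptually, detecting and matching ORB features between consecutive scope frames as the kind of data a CPU pose tracker consumes. It is not the MIS-SLAM code, and the frame file names are placeholders.

```python
# Illustrative ORB feature extraction and matching between consecutive frames.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

kp_prev, des_prev = orb.detectAndCompute(prev, None)
kp_curr, des_curr = orb.detectAndCompute(curr, None)

matches = sorted(matcher.match(des_prev, des_curr), key=lambda m: m.distance)

# The matched keypoint pairs would feed pose estimation (e.g. essential-matrix
# or PnP solving) in a full SLAM system.
good = [(kp_prev[m.queryIdx].pt, kp_curr[m.trainIdx].pt) for m in matches[:200]]
```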

    EndoAbS dataset: Endoscopic abdominal stereo image dataset for benchmarking 3D stereo reconstruction algorithms

    Background: 3D reconstruction algorithms are of fundamental importance for augmented reality applications in computer-assisted surgery. However, few datasets of endoscopic stereo images with associated 3D surface references are currently openly available, preventing the proper validation of such algorithms. This work presents a new and rich dataset of endoscopic stereo images (EndoAbS dataset). Methods: The dataset includes (i) endoscopic stereo images of phantom abdominal organs, (ii) a 3D organ surface reference (RF) generated with a laser scanner and (iii) camera calibration parameters. A detailed description of the phantom generation and the camera–laser calibration method is also provided. Results: An estimate of the overall error in the creation of the dataset is reported (camera–laser calibration error 0.43 mm), and the performance of a 3D reconstruction algorithm is evaluated using EndoAbS, resulting in an accuracy error in accordance with state-of-the-art results (<2 mm). Conclusions: The EndoAbS dataset contributes to increasing the number and variety of openly available datasets of surgical stereo images, including a highly accurate RF and different surgical conditions.
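
    A minimal sketch of the kind of accuracy evaluation such a dataset enables is shown below: comparing a reconstructed point cloud against the laser-scanned reference by nearest-neighbour distance. The file names and reported statistics are illustrative assumptions, not the evaluation protocol shipped with EndoAbS.

```python
# Illustrative accuracy check of a reconstruction against a reference scan.
import numpy as np
from scipy.spatial import cKDTree

reconstruction = np.loadtxt("reconstructed_points.xyz")   # N x 3, millimetres
reference = np.loadtxt("laser_scan_reference.xyz")        # M x 3, millimetres

# Distance from each reconstructed point to its closest reference point.
distances, _ = cKDTree(reference).query(reconstruction)

print(f"mean error:   {distances.mean():.2f} mm")
print(f"median error: {np.median(distances):.2f} mm")
print(f"RMS error:    {np.sqrt(np.mean(distances**2)):.2f} mm")
```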

    Sight to touch: 3D diffeomorphic deformation recovery with mixture components for perceiving forces in robotic-assisted surgery

    Robotic-assisted minimally invasive surgical systems suffer from one major limitation, which is the lack of interaction force feedback. The restricted sense of touch hinders surgeons' performance and reduces their dexterity and precision during a procedure. In this work, we present a sensory substitution approach that relies on visual stimuli to transmit the tool-tissue interaction forces to the operating surgeon. Our approach combines a 3D diffeomorphic deformation mapping with a generative model to precisely label the force level. The main highlights of our approach are that the use of a diffeomorphic transformation ensures anatomical structure preservation and that the label assignment is based on a parametric form of several mixture elements. We performed experiments on both ex vivo and in vivo datasets and provide careful numerical results evaluating our approach. The results show that our solution has an error measure of less than 1 mm in all directions and an average labeling error of 2.05%. It is also applicable to other scenarios that require force feedback, such as microsurgery, knot tying or needle-based procedures.
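
    The sketch below illustrates only the mixture-based labelling idea in generic form, assuming a Gaussian mixture fitted to per-frame deformation-magnitude features whose component assignment serves as a discrete force-level label. The feature choice, component count and scaling are assumptions, not the authors' parametric model.

```python
# Illustrative mixture-model labelling of force levels from deformation features.
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical per-frame features, e.g. mean and peak tissue displacement (mm).
features = np.random.rand(500, 2) * [2.0, 5.0]

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(features)

labels = gmm.predict(features)            # 0, 1, 2 -> e.g. low / medium / high force
confidence = gmm.predict_proba(features).max(axis=1)
```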

    Artificial intelligence and automation in endoscopy and surgery

    Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video data capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as events, activity and action logs of the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can leverage data from endoscopic procedures to develop computer-assisted intervention systems that enable better navigation, automated image interpretation and robotically assisted tool manipulation. In this Perspective, we summarize the state of the art in artificial intelligence for computer-assisted interventions in gastroenterology and surgery.