Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
Robot Assisted Object Manipulation for Minimally Invasive Surgery
Robotic systems have an increasingly important role in facilitating minimally invasive surgical treatments. In robot-assisted minimally invasive surgery, surgeons remotely control instruments from a console to perform operations inside the patient. However, despite the advanced technological status of surgical robots, fully autonomous systems, with decision-making capabilities, are not yet available.
In 2017, Yang et al. proposed a structure to classify research efforts toward the autonomy achievable with surgical robots. Six levels were identified: no autonomy, robot assistance, task autonomy,
conditional autonomy, high autonomy, and full autonomy. All commercially available platforms in robot-assisted
surgery are still at level 0 (no autonomy). Although increasing the level of autonomy remains an open challenge, its adoption could potentially introduce multiple benefits, such as decreasing surgeons' workload and fatigue and ensuring a consistent
quality of procedures. Ultimately, allowing surgeons to interpret the rich,
intelligently processed information from the system will enhance surgical outcomes and
reflect positively on both patients and society. Three main aspects are required to
introduce automation into surgery: the surgical robot must move with high precision,
have motion-planning capabilities, and understand the surgical scene. Beyond
these main factors, depending on the type of surgery, other aspects,
such as compliance and stiffness, may play a fundamental role. This
thesis addresses three technological challenges encountered when trying to achieve
the aforementioned goals, in the specific case of robot-object interaction. First,
how to overcome the inaccuracy of cable-driven systems when executing fine and
precise movements. Second, planning different tasks in dynamically changing environments.
Lastly, how the understanding of a surgical scene can be used to solve
more than one manipulation task.
To address the first challenge, a control scheme relying on accurate calibration is
implemented to execute the pick-up of a surgical needle. Regarding the planning of
surgical tasks, two approaches are explored: one is learning from demonstration to
pick and place a surgical object, and the second is using a gradient-based approach
to trigger a smoother object repositioning phase during intraoperative procedures.
Finally, to improve scene understanding, this thesis focuses on developing a simulation
environment where multiple tasks can be learned based on the surgical scene
and then transferred to the real robot. Experiments showed that automation of the pick-and-place task of different surgical objects is possible. The robot was successfully
able to autonomously pick up a suturing needle, position a surgical device for
intraoperative ultrasound scanning and manipulate soft tissue for intraoperative organ
retraction. Although automation of surgical subtasks has been demonstrated in
this work, several challenges remain open, such as the ability of the generated
algorithms to generalise over different environmental conditions and different patients.
Multispectral Stereo-Image Fusion for 3D Hyperspectral Scene Reconstruction
Spectral imaging enables the analysis of optical material properties that are
invisible to the human eye. Different spectral capturing setups, e.g., based on
filter-wheel, push-broom, line-scanning, or mosaic cameras, have been
introduced in recent years to support a wide range of applications in
agriculture, medicine, and industrial surveillance. However, these systems
often suffer from different disadvantages, such as lack of real-time
capability, limited spectral coverage or low spatial resolution. To address
these drawbacks, we present a novel approach combining two calibrated
multispectral real-time capable snapshot cameras, covering different spectral
ranges, into a stereo system. As a result, a hyperspectral data cube can be
continuously captured. The combined use of different multispectral snapshot
cameras enables both 3D reconstruction and spectral analysis. Both captured
images are demosaicked without loss of spatial resolution. We fuse the spectral
data from one camera into the other to obtain a spatially and spectrally
high-resolution video stream. Experiments demonstrate the feasibility of this
approach and the system is investigated with regard to its applicability for
surgical assistance monitoring. (VISAPP 2024, 19th International Conference on Computer Vision Theory and Applications)
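The fusion step described above (bringing the spectral data of one camera into the other's view via the stereo geometry) can be sketched as a disparity-based band warp. The function name and the rectified-stereo, horizontal-disparity assumption are illustrative, not the paper's actual pipeline:

```python
import numpy as np

def fuse_spectral_bands(bands_b, disparity):
    """Warp spectral bands from camera B into camera A's view using a
    per-pixel horizontal disparity map (rectified stereo assumed).

    bands_b   : (H, W, C) demosaicked spectral bands of camera B
    disparity : (H, W) disparity of camera A w.r.t. camera B, in pixels
    """
    h, w, c = bands_b.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Corresponding column in camera B for each pixel of camera A
    xb = np.clip(np.round(xs - disparity).astype(int), 0, w - 1)
    return bands_b[ys, xb]  # (H, W, C) bands re-projected into camera A
```

Stacking the warped bands with camera A's own bands then yields the extended per-pixel spectrum of the combined data cube.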
Surgical Guidance for Removal of Cholesteatoma Using a Multispectral 3D-Endoscope
We develop a stereo-multispectral endoscopic prototype in which a filter-wheel is used for surgical guidance to remove cholesteatoma tissue in the middle ear. Cholesteatoma is a destructive, proliferating tissue, and the only treatment for this disease is surgery. Removal is a very demanding task, even for experienced surgeons, as it is very difficult to distinguish between bone and cholesteatoma. In addition, the disease can recur if not all tissue particles of the cholesteatoma are removed, which leads to undesirable follow-up operations. Therefore, we propose an image-based method that combines multispectral tissue classification and 3D reconstruction to identify all parts of the removed tissue and determine their metric dimensions intraoperatively. The designed multispectral filter-wheel 3D-endoscope prototype can switch between narrow-band spectral and broad-band white illumination, and is technically evaluated in terms of its optical system properties. It is further tested and evaluated on three patients. The wavelengths 400 nm and 420 nm are identified as most suitable for the differentiation task. The stereoscopic image acquisition allows accurate 3D surface reconstruction of the enhanced image information. The first results are promising, as the cholesteatoma can be easily highlighted, correctly identified, and visualized as a true-to-scale 3D model showing the patient-specific anatomy.
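In the simplest case, two-band differentiation at the identified wavelengths could take the form of a per-pixel band-ratio threshold. This is a toy illustration only; the function and the threshold value are hypothetical, not the classification scheme used in the paper:

```python
import numpy as np

def classify_tissue(img_400, img_420, ratio_thresh=1.15):
    """Toy per-pixel tissue discrimination from two narrow-band images
    (400 nm and 420 nm). The ratio threshold is purely illustrative.
    Returns a boolean mask flagging candidate cholesteatoma pixels."""
    eps = 1e-6  # avoid division by zero in dark regions
    ratio = (img_420 + eps) / (img_400 + eps)
    return ratio > ratio_thresh
```

A real system would calibrate such a decision rule (or a learned classifier) against histologically confirmed ground truth.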
Spectrally encoded fiber-based structured lighting probe for intraoperative 3D imaging
Three-dimensional quantification of organ shape and structure during minimally invasive surgery (MIS) could enhance precision by allowing the registration of multi-modal or pre-operative image data (US/MRI/CT) with the live optical image. Structured illumination is one technique to obtain 3D information through the projection of a known pattern onto the tissue, although currently these systems tend to be used only for macroscopic imaging or open procedures rather than in endoscopy. To account for occlusions, where a projected feature may be hidden from view and/or confused with a neighboring point, a flexible multispectral structured illumination probe has been developed that labels each projected point with a specific wavelength using a supercontinuum laser. When imaged by a standard endoscope camera, the points can then be segmented using their RGB values, and their 3D coordinates calculated after camera calibration. The probe itself is sufficiently small (1.7 mm diameter) to allow it to be used in the biopsy channel of commonly used medical endoscopes. Surgical robots could therefore also employ this technology to solve navigation and visualization problems in MIS, and help to develop advanced surgical procedures such as natural orifice translumenal endoscopic surgery.
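Once a projected spot has been identified by its wavelength, its 3D position follows from standard two-ray triangulation between the camera back-projection ray and the known probe ray for that wavelength. A minimal midpoint-method sketch, not the authors' implementation:

```python
import numpy as np

def triangulate_point(cam_origin, cam_dir, probe_origin, probe_dir):
    """Midpoint triangulation of one spectrally labelled spot: the 3D
    point closest to both the camera back-projection ray and the known
    probe ray (directions need not be unit length)."""
    o1, d1 = np.asarray(cam_origin, float), np.asarray(cam_dir, float)
    o2, d2 = np.asarray(probe_origin, float), np.asarray(probe_dir, float)
    # Solve for ray parameters s, t minimising |(o1 + s d1) - (o2 + t d2)|
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    s, t = np.linalg.solve(A, b)
    # Return the midpoint of the closest-approach segment
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))
```

For exactly intersecting rays the midpoint coincides with the intersection; with calibration noise it gives a least-squares compromise between the two rays.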
Intraoperative Navigation Systems for Image-Guided Surgery
Recent technological advancements in medical imaging equipment have resulted in
a dramatic improvement in image accuracy, now capable of providing useful information
previously unavailable to clinicians. In the surgical context, intraoperative
imaging is of crucial value for the success of the operation.
Many nontrivial scientific and technical problems need to be addressed in order to
efficiently exploit the different information sources nowadays available in advanced
operating rooms. In particular, it is necessary to provide: (i) accurate tracking of
surgical instruments, (ii) real-time matching of images from different modalities, and
(iii) reliable guidance toward the surgical target. All of these requisites
must be satisfied to realize effective intraoperative navigation systems for image-guided
surgery.
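As an illustration of requisite (ii), a common building block for matching coordinate systems is least-squares rigid registration of paired fiducial points (the Kabsch algorithm). This generic sketch is not taken from the thesis:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform aligning paired fiducial points
    src -> dst via the Kabsch algorithm.

    src, dst : (N, 3) corresponding points in the two frames.
    Returns R (3x3 rotation) and t (3,) such that dst ~= R @ src + t.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, c_dst - R @ c_src
```

In practice the residual fiducial registration error after this step is one of the standard accuracy measures for a navigation setup.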
Various solutions have been proposed and successfully tested in the field of image-guided
navigation systems over the last ten years; nevertheless, several problems still arise in
most applications regarding the precision, usability, and capabilities of existing
systems. Identifying and solving these issues represents an urgent scientific challenge.
This thesis investigates the current state of the art in the field of intraoperative
navigation systems, focusing in particular on the challenges related to efficient and
effective usage of ultrasound imaging during surgery.
The main contributions of this thesis to the state of the art are:
Techniques for automatic motion compensation and therapy monitoring applied
to a novel ultrasound-guided surgical robotic platform in the context of
abdominal tumor thermoablation.
Novel image-fusion based navigation systems for ultrasound-guided neurosurgery
in the context of brain tumor resection, highlighting their applicability
as off-line surgical training instruments.
The proposed systems, which were designed and developed in the framework of
two international research projects, have been tested in real or simulated surgical
scenarios, showing promising results toward their application in clinical practice.
Comparative validation of single-shot optical techniques for laparoscopic 3-D surface reconstruction
Intra-operative imaging techniques for obtaining the shape and morphology of soft-tissue surfaces in vivo are a key enabling technology for advanced surgical systems. Different optical techniques for 3-D surface reconstruction in laparoscopy have been proposed; however, so far no quantitative and comparative validation has been performed. Furthermore, the robustness of the methods to clinically important factors like smoke or bleeding has not yet been assessed. To address these issues, we have formed a joint international initiative with the aim of validating different state-of-the-art passive and active reconstruction methods in a comparative manner. In this comprehensive in vitro study, we investigated reconstruction accuracy using different organs with various shapes and textures, and also tested reconstruction robustness with respect to a number of factors such as the pose of the endoscope and the amount of blood or smoke present in the scene. The study suggests complementary advantages of the different techniques with respect to accuracy, robustness, point density, hardware complexity, and computation time. While reconstruction accuracy under ideal conditions was generally high, robustness is a remaining issue to be addressed. Future work should include sensor fusion and in vivo validation studies in a specific clinical context. To trigger further research in surface reconstruction, stereoscopic data of the study will be made publicly available at www.open-CAS.com upon publication of the paper.
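Reconstruction accuracy in studies of this kind is typically summarised by distances between the reconstructed surface and a ground-truth reference. A minimal sketch of such a metric (brute-force nearest neighbours for clarity; a KD-tree would be used at realistic point counts, and the study's exact protocol may differ):

```python
import numpy as np

def reconstruction_error(recon, truth):
    """Accuracy summary for a reconstructed point cloud against a
    ground-truth cloud: nearest-neighbour distance per reconstructed
    point, reported as mean / RMS / max.

    recon, truth : (N, 3) and (M, 3) arrays of 3D points.
    """
    # Pairwise distances (N, M); brute force for illustration only
    d = np.linalg.norm(recon[:, None, :] - truth[None, :, :], axis=2)
    nn = d.min(axis=1)  # distance to closest ground-truth point
    return {"mean": nn.mean(),
            "rms": np.sqrt((nn ** 2).mean()),
            "max": nn.max()}
```

Reporting mean, RMS, and maximum together captures both the typical accuracy and the worst-case outliers that matter clinically.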
Ultrasound-Augmented Laparoscopy
Laparoscopic surgery is perhaps the most common minimally invasive procedure for many diseases in the abdomen. Since the laparoscopic camera provides only the surface view of the internal organs, in many procedures surgeons use laparoscopic ultrasound (LUS) to visualize deep-seated surgical targets. Conventionally, the 2D LUS image is visualized in a display spatially separate from the one that displays the laparoscopic video. Therefore, reasoning about the geometry of hidden targets requires mentally solving the spatial alignment and resolving the modality differences, which is cognitively very challenging. Moreover, the mental representation of hidden targets in space acquired through such cognitive mediation may be error-prone and cause incorrect actions to be performed.
To remedy this, advanced visualization strategies are required in which the US information is visualized in the context of the laparoscopic video. To this end, efficient computational methods are required to accurately align the US image coordinate system with the camera-centred coordinate system, and to render the registered image information in the context of the camera such that surgeons perceive the geometry of hidden targets accurately. In this thesis, such a visualization pipeline is described. A novel method to register US images with a camera-centric coordinate system is detailed, with an experimental investigation into its accuracy bounds. An improved method to blend US information with the surface view is also presented, with an experimental investigation into the accuracy of perception of the target locations in space.
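The described registration ultimately reduces to a transform chain that maps US pixels into the camera-centric frame. A minimal sketch, where the names and the 4x4 homogeneous-transform convention are assumptions rather than the thesis's notation:

```python
import numpy as np

def us_pixel_to_camera(px, scale_mm, T_cam_us):
    """Map a 2D ultrasound pixel into the camera-centric frame.

    px       : (u, v) pixel coordinates in the US image
    scale_mm : (sx, sy) mm-per-pixel scaling of the US image
    T_cam_us : 4x4 homogeneous transform taking points in the US image
               plane (z = 0) into the camera frame, as produced by the
               US-to-camera registration
    """
    u, v = px
    # US pixels live on the z = 0 plane of the US image frame
    p_us = np.array([u * scale_mm[0], v * scale_mm[1], 0.0, 1.0])
    return (T_cam_us @ p_us)[:3]
```

Projecting the resulting 3D point through the calibrated camera model then places the US information in the laparoscopic view.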
Dense 3D Reconstruction Through Lidar: A Comparative Study on Ex-vivo Porcine Tissue
New sensing technologies and more advanced processing algorithms are
transforming computer-integrated surgery. While researchers are actively
investigating depth sensing and 3D reconstruction for vision-based surgical
assistance, it remains difficult to achieve real-time, accurate, and robust 3D
representations of the abdominal cavity for minimally invasive surgery. Thus,
this work uses quantitative testing on fresh ex-vivo porcine tissue to
thoroughly characterize the quality with which a 3D laser-based time-of-flight
sensor (lidar) can perform anatomical surface reconstruction. Ground-truth
surface shapes are captured with a commercial laser scanner, and the resulting
signed error fields are analyzed using rigorous statistical tools. When
compared to modern learning-based stereo matching from endoscopic images,
time-of-flight sensing demonstrates higher precision, lower processing delay,
higher frame rate, and superior robustness against sensor distance and poor
illumination. Furthermore, we report on the potential negative effect of
near-infrared light penetration on the accuracy of lidar measurements across
different tissue samples, identifying a significant measured depth offset for
muscle in contrast to fat and liver. Our findings highlight the potential of
lidar for intraoperative 3D perception and point toward new methods that
combine complementary time-of-flight and spectral imaging.
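The signed error fields mentioned above can be sketched as point-to-surface residuals signed by the ground-truth normals; a systematic non-zero mean then reveals a depth bias such as the reported NIR-penetration offset. Names and the brute-force nearest-neighbour search are illustrative:

```python
import numpy as np

def signed_error_field(points, surf_pts, surf_normals):
    """Signed point-to-surface error: for each lidar point, the residual
    to its nearest ground-truth surface point, projected onto that
    point's outward normal. A non-zero mean indicates a systematic
    depth offset (e.g. light penetrating "into" the tissue)."""
    # Nearest ground-truth surface point (brute force for illustration)
    d = np.linalg.norm(points[:, None, :] - surf_pts[None, :, :], axis=2)
    idx = d.argmin(axis=1)
    diff = points - surf_pts[idx]
    # Projection onto the outward normal gives the sign and magnitude
    signed = np.einsum("ij,ij->i", diff, surf_normals[idx])
    return signed  # > 0 above the surface, < 0 measured into the tissue
```

Statistical tests on the per-sample means of these fields are what would distinguish a tissue-dependent bias (e.g. muscle vs. fat) from random noise.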
Intraoperative Endoscopic Augmented Reality in Third Ventriculostomy
In neurosurgery, as a result of brain shift, the preoperative patient models used as an intraoperative reference change during the operation. Meaningful use of the preoperative virtual models during the operation therefore requires a model update. The NEAR project (Neuroendoscopy towards Augmented Reality) describes a new camera calibration model for highly distorted lenses and introduces the concept of active endoscopes endowed with navigation, camera calibration, augmented reality, and triangulation modules.
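Calibration models for highly distorted lenses typically extend the standard polynomial radial distortion of normalised image coordinates. The sketch below shows only that generic baseline; the coefficients are hypothetical and the NEAR model itself is more elaborate:

```python
def distort(xn, yn, k):
    """Apply a polynomial radial distortion model to normalised image
    coordinates (xn, yn), the usual starting point for calibrating
    highly distorted endoscope lenses.

    k : (k1, k2, k3) radial coefficients; illustrative values only.
    """
    r2 = xn ** 2 + yn ** 2
    # Radial scaling factor grows with distance from the optical centre
    f = 1.0 + k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3
    return xn * f, yn * f
```

Calibration estimates the coefficients k (together with the intrinsics) by minimising the reprojection error of a known target; highly distorted optics generally need higher-order or alternative (e.g. fisheye) models on top of this baseline.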