Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft-tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
Comparative validation of single-shot optical techniques for laparoscopic 3-D surface reconstruction
Intra-operative imaging techniques for obtaining the shape and morphology of soft-tissue surfaces in vivo are a key enabling technology for advanced surgical systems. Different optical techniques for 3-D surface reconstruction in laparoscopy have been proposed; however, so far no quantitative and comparative validation has been performed. Furthermore, the robustness of the methods to clinically important factors like smoke or bleeding has not yet been assessed. To address these issues, we have formed a joint international initiative with the aim of validating different state-of-the-art passive and active reconstruction methods in a comparative manner. In this comprehensive in vitro study, we investigated reconstruction accuracy using different organs with various shapes and textures, and also tested reconstruction robustness with respect to a number of factors, such as the pose of the endoscope and the amount of blood or smoke present in the scene. The study suggests complementary advantages of the different techniques with respect to accuracy, robustness, point density, hardware complexity and computation time. While reconstruction accuracy under ideal conditions was generally high, robustness is a remaining issue to be addressed. Future work should include sensor fusion and in vivo validation studies in a specific clinical context. To trigger further research in surface reconstruction, stereoscopic data from the study will be made publicly available at www.open-CAS.com upon publication of the paper.
Monitoring mouse brain perfusion with hybrid magnetic resonance optoacoustic tomography
Progress in brain research critically depends on the development of next-generation multi-modal imaging tools capable of capturing transient functional events and multiplexed contrasts noninvasively and concurrently, thus enabling a holistic view of dynamic events in vivo. Here we report on a hybrid magnetic resonance and optoacoustic tomography (MROT) system for murine brain imaging, which incorporates an MR-compatible spherical matrix array transducer and fiber-based light illumination into a 9.4 T small-animal scanner. An optimized radiofrequency coil has further been devised for whole-brain interrogation. The system's utility is showcased by acquiring complementary angiographic and soft-tissue anatomical contrast along with simultaneous dual-modality visualization of contrast agent dynamics in vivo.
A novel MRA-based framework for the detection of changes in cerebrovascular blood pressure.
Background: High blood pressure (HBP) affects 75 million adults and is the primary or contributing cause of mortality in 410,000 adults each year in the United States. Chronic HBP leads to cerebrovascular changes and is a significant contributor to strokes, dementia, and cognitive impairment. Non-invasive measurement of changes in cerebral vasculature and blood pressure (BP) may enable physicians to optimally treat HBP patients. This manuscript describes a method to non-invasively quantify changes in cerebral vasculature and BP using Magnetic Resonance Angiography (MRA) imaging.
Methods: MRA images and BP measurements were obtained from patients (n=15, M=8, F=7, age = 49.2 ± 7.3 years) over a span of 700 days. A novel segmentation algorithm was developed to identify brain vasculature from surrounding tissue. The data were processed to calculate the vascular probability distribution function (PDF), a measure of the vascular diameters in the brain. The initial (day 0) and final (day 700) PDFs were used to correlate the changes in cerebral vasculature and BP. Correlation was determined by a mixed-effects linear model analysis.
Results: The segmentation algorithm had 99.9% specificity and 99.7% sensitivity in identifying and delineating cerebral vasculature. The PDFs had a statistically significant correlation with BP changes below the circle of Willis (p-value = 0.0007), but not above the circle of Willis (p-value = 0.53), owing to the smaller blood vessels there.
Conclusion: Changes in cerebral vasculature and pressure can be non-invasively obtained through MRA image analysis, which may be a useful tool for clinicians to optimize the medical management of HBP.
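The vascular PDF described in this abstract can be approximated from a binary vessel mask with a Euclidean distance transform, whose value at each vessel voxel estimates the local radius. The sketch below is an illustrative approximation, not the paper's exact algorithm; the function name and binning are assumptions, and restricting the distances to the vessel medial axis would weight large and small vessels more faithfully.

```python
import numpy as np
from scipy import ndimage

def vessel_diameter_pdf(mask, voxel_size=1.0, bins=16):
    """Approximate the PDF of vessel diameters from a binary vessel mask.

    mask: 3D boolean array (True = vessel); voxel_size: edge length in mm.
    Returns (density, bin_edges), with the density integrating to 1.
    """
    # Distance from each vessel voxel to the nearest background voxel
    # approximates the local vessel radius (in mm).
    dist = ndimage.distance_transform_edt(mask, sampling=voxel_size)
    radii = dist[mask]
    # Histogram of estimated diameters, normalized to a probability density.
    density, edges = np.histogram(2.0 * radii, bins=bins, density=True)
    return density, edges
```

PDFs computed from the day-0 and day-700 scans could then be compared bin-wise, for instance within a mixed-effects model as in the study.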
Deep Learning for Vascular Segmentation and Applications in Phase Contrast Tomography Imaging
Automated blood vessel segmentation is vital for biomedical imaging, as vessel changes indicate many pathologies. Still, precise segmentation is difficult due to the complexity of vascular structures, anatomical variations across patients, the scarcity of annotated public datasets, and image quality. We present a thorough literature review, highlighting the state of machine learning techniques across diverse organs. Our goal is to provide a foundation on the topic and identify a robust baseline model for application to vascular segmentation in a new imaging modality, Hierarchical Phase-Contrast Tomography (HiP-CT). Introduced in 2020 at the European Synchrotron Radiation Facility, HiP-CT enables 3D imaging of complete organs at an unprecedented resolution of ca. 20 µm per voxel, with the capability for localized zooms in selected regions down to 1 µm per voxel without sectioning. We have created a training dataset with vascular data from three kidneys imaged with HiP-CT in the context of the Human Organ Atlas Project, validated by two annotators. Finally, utilising the nnU-Net model, we conduct experiments to assess the model's performance on both familiar and unseen samples, employing vessel-specific metrics. Our results show that while segmentations yielded reasonably high scores, with clDice values ranging from 0.82 to 0.88, certain errors persisted. Large vessels that collapsed due to the lack of hydrostatic pressure (HiP-CT is an ex vivo technique) were segmented poorly. Moreover, decreased connectivity in finer vessels and higher segmentation errors at vessel boundaries were observed. Such errors obstruct the understanding of the structures by interrupting vascular tree connectivity. Through our review and outputs, we aim to set a benchmark for subsequent model evaluations using various modalities, especially with the HiP-CT imaging database.
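The clDice scores reported in this abstract combine topology precision and sensitivity computed on vessel skeletons. A minimal sketch of the published clDice definition, assuming the volumes and their skeletons are given as boolean arrays (in practice the skeletons would come from a skeletonization step, omitted here to keep the sketch dependency-free):

```python
import numpy as np

def cl_dice(v_pred, v_true, s_pred, s_true):
    """centerlineDice: harmonic mean of topology precision and sensitivity.

    v_*: boolean segmentation volumes; s_*: boolean skeletons of those volumes.
    """
    # Fraction of the predicted skeleton lying inside the true segmentation.
    tprec = (s_pred & v_true).sum() / max(s_pred.sum(), 1)
    # Fraction of the true skeleton covered by the predicted segmentation.
    tsens = (s_true & v_pred).sum() / max(s_true.sum(), 1)
    if tprec + tsens == 0:
        return 0.0
    return 2.0 * tprec * tsens / (tprec + tsens)
```

The connectivity losses described above reduce the skeleton-coverage term, which is why clDice penalizes interrupted vascular trees more directly than voxel-wise Dice does.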
Multi-Modal Partial Surface Matching for Intra-Operative Registration
An important task for computer-assisted surgical interventions is the alignment of pre- and intra-operative spaces, allowing the transfer of pre-operative information to the current patient situation; this is known as intra-operative registration. Registration is usually performed using markers or image-based techniques. Another approach is the intra-operative acquisition of organ surfaces by 3D range scanners, which are then matched to pre-operatively generated surfaces. However, this approach is not trivial, as methods for intra-operative surface matching must be able to deal with noise, distortions, deformations, and the availability of only partially overlapping, nearly flat surfaces. For these reasons, surface matching for intra-operative registration has so far only been used to account for displacements that occur at local scales, while the actual alignment is still performed manually. The main contributions of this thesis are two different approaches for automatic surface matching in intra-operative environments. The focus here is the registration of surfaces acquired by different modalities, dealing with the aforementioned issues without relying on unique landmarks. In the first approach, surfaces are converted to graph representations and correspondences between them are identified by means of graph matching. Graphs are obtained automatically by segmenting the surfaces into regions with similar properties. As the graph matching problem is known to be NP-hard, it is solved by iteratively computing node similarity scores and converting it to a linear assignment problem. In the second approach, correspondences are identified by selecting two spatial configurations of landmarks that can best be fitted to each other according to an error metric. This error metric incorporates not only a fitting error but also a new measure of spatial configuration reliability. The optimization problem is solved by means of a greedy algorithm.
Evaluation of the two approaches was performed with several experiments simulating intra-operative conditions. While the graph matching approach proved to be robust for the registration of small partial data, the point-based approach proved to be more reliable for noisy surfaces. Apart from being a significant contribution to the field of feature-less partial surface matching, this work represents an important step towards a fully automatic, marker-less registration system for computer-assisted surgery guidance.
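The graph-matching step this abstract describes, iterating node similarity scores and then solving a linear assignment problem, can be illustrated with a much-simplified sketch. This is not the thesis's exact formulation: the iteration below is a basic neighborhood-propagation scheme on plain adjacency matrices (a node's own score is included to damp oscillations on bipartite graphs), region attributes are omitted, and the assignment is solved with the Hungarian method from SciPy.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_graphs(A, B, iters=20):
    """Match nodes of two region-adjacency graphs.

    A: (n, n) and B: (m, m) symmetric adjacency matrices.
    Returns a list of (node_in_A, node_in_B) correspondences.
    """
    n, m = A.shape[0], B.shape[0]
    S = np.ones((n, m))  # uniform initial node-similarity scores
    for _ in range(iters):
        # Nodes are similar when their neighborhoods are similar; the
        # identity term keeps a node's own score in the propagation.
        S = (A + np.eye(n)) @ S @ (B + np.eye(m)).T
        S /= np.linalg.norm(S)  # renormalize to keep scores bounded
    # Convert to a linear assignment problem: maximize total similarity.
    rows, cols = linear_sum_assignment(-S)
    return list(zip(rows, cols))
```

For isomorphic graphs this recovers correspondences up to graph automorphisms; on real segmented surfaces, region attributes such as area or curvature would be folded into the similarity scores.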
Medical SLAM in an autonomous robotic system
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft-tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This thesis addresses the ambitious goal of achieving surgical autonomy through the study of the anatomical environment, starting from the technology needed to analyze the scene: vision sensors. The first part of the thesis presents a novel endoscope for autonomous surgical task execution, which combines a standard stereo camera with a depth sensor. This solution introduces several key advantages, such as the possibility of reconstructing the 3D surface at a greater distance than traditional endoscopes allow. The problem of hand-eye calibration is then tackled, uniting the vision system and the robot in a single reference frame and increasing the accuracy of the surgical work plan. The second part of the thesis addresses the problem of 3D reconstruction and the algorithms currently in use. In MIS, simultaneous localization and mapping (SLAM) can be used to localize the pose of the endoscopic camera and build a 3D model of the tissue surface. Another key element for MIS is to have real-time knowledge of the pose of surgical tools with respect to the surgical camera and the underlying anatomy. Starting from the ORB-SLAM algorithm, we have modified the architecture to make it usable in an anatomical environment by registering the pre-operative information of the intervention to the map obtained from the SLAM.
Once the SLAM algorithm was shown to be usable in an anatomical environment, it was improved by adding semantic segmentation to distinguish dynamic features from static ones. All the results in this thesis are validated on training setups that mimic some of the challenges of real surgery and on setups that simulate the human body, within the Autonomous Robotic Surgery (ARS) and Smart Autonomous Robotic Assistant Surgeon (SARAS) projects.
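The dynamic-feature filtering this abstract describes can be illustrated with a small sketch: given keypoint locations and a per-pixel semantic label map, keypoints falling on classes known to move (e.g., instruments) are discarded before tracking. The label IDs and function name below are illustrative assumptions, not the thesis's actual interface.

```python
import numpy as np

# Hypothetical label ids for classes treated as dynamic (e.g., instruments).
DYNAMIC_IDS = (1, 2)

def filter_static_keypoints(keypoints, seg, dynamic_ids=DYNAMIC_IDS):
    """Keep only keypoints that lie on static anatomy.

    keypoints: (N, 2) integer array of (row, col) pixel coordinates.
    seg: (H, W) integer semantic label map aligned with the camera frame.
    """
    # Look up the semantic class under each keypoint.
    labels = seg[keypoints[:, 0], keypoints[:, 1]]
    # Discard keypoints whose class is flagged as dynamic.
    keep = ~np.isin(labels, dynamic_ids)
    return keypoints[keep]
```

Excluding dynamic keypoints in this way keeps moving instruments from corrupting both the camera pose estimate and the reconstructed tissue map.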
Rapid Segmentation Techniques for Cardiac and Neuroimage Analysis
Recent technological advances in medical imaging have allowed for the quick acquisition of highly resolved data to aid in diagnosis and characterization of diseases or to guide interventions. In order to be integrated into a clinical workflow, accurate and robust methods of analysis must be developed which manage this increase in data. Recent improvements in inexpensive commercially available graphics hardware and General-Purpose Programming on Graphics Processing Units (GPGPU) have allowed many large-scale data analysis problems to be addressed in meaningful time and will continue to do so as parallel computing technology improves. In this thesis we propose methods to tackle two clinically relevant image segmentation problems: a user-guided segmentation of myocardial scar from Late-Enhancement Magnetic Resonance Images (LE-MRI) and a multi-atlas segmentation pipeline to automatically segment and partition brain tissue from multi-channel MRI. Both methods are based on recent advances in computer vision, in particular max-flow optimization, which aims at solving the segmentation problem in continuous space. This allows (approximately) globally optimal solvers to be employed in multi-region segmentation problems without the particular drawbacks of their discrete counterparts, graph cuts, which typically present metrication artefacts. Max-flow solvers are generally able to produce robust results, but are known for being computationally expensive, especially with large datasets such as volume images. Additionally, we propose two new deformable registration methods based on Gauss-Newton optimization and smooth the resulting deformation fields via total-variation regularization to guarantee that the problem is mathematically well-posed.
We compare the performance of these two methods against four highly ranked and well-known deformable registration methods on four publicly available databases and are able to demonstrate highly accurate performance with low run times. The best-performing variant is subsequently used in a multi-atlas segmentation pipeline for the segmentation of brain tissue and facilitates fast run times for this computationally expensive approach. All proposed methods are implemented using GPGPU for a substantial increase in computational performance and thus facilitate deployment into clinical workflows. We evaluate all proposed algorithms in terms of run times, accuracy, repeatability and errors arising from user interactions, and we demonstrate that these methods are able to outperform established methods. The presented approaches demonstrate high performance in comparison with established methods in terms of accuracy and repeatability while largely reducing run times due to the use of GPU hardware.
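The continuous max-flow formulation this abstract discusses can be illustrated, in a much-reduced form, through its dual: a convex, total-variation-regularized min-cut over a relaxed labeling, solved by a primal-dual iteration. The sketch below is a generic two-region version on a 2D image, not the thesis's GPU implementation; the data term, step sizes, and iteration count are assumptions.

```python
import numpy as np

def tv_min_cut(f, lam=0.1, iters=200):
    """Binary segmentation by convex relaxation: minimize <u, f> + lam*TV(u)
    over u in [0, 1] with a primal-dual (Chambolle-Pock style) iteration.

    f: (H, W) data term, negative where a pixel favors foreground.
    Thresholding the relaxed labeling u at 0.5 yields the cut.
    """
    u = np.zeros_like(f)
    u_bar = u.copy()
    px = np.zeros_like(f)  # dual "flow" field, x component
    py = np.zeros_like(f)  # dual "flow" field, y component
    tau = sigma = 1.0 / np.sqrt(8.0)  # steps with tau*sigma*||grad||^2 <= 1
    for _ in range(iters):
        # Dual ascent on the flow field (forward differences of u_bar),
        # then projection onto the capacity constraint |p| <= lam.
        px[:-1, :] += sigma * (u_bar[1:, :] - u_bar[:-1, :])
        py[:, :-1] += sigma * (u_bar[:, 1:] - u_bar[:, :-1])
        scale = np.maximum(1.0, np.sqrt(px**2 + py**2) / lam)
        px /= scale
        py /= scale
        # Primal descent on the labeling, clipped to [0, 1].
        div = px.copy()
        div[1:, :] -= px[:-1, :]
        div += py
        div[:, 1:] -= py[:, :-1]
        u_new = np.clip(u + tau * (div - f), 0.0, 1.0)
        u_bar = 2.0 * u_new - u
        u = u_new
    return u
```

The projection step is where the max-flow reading appears: the dual field p is a flow whose capacity is bounded by the TV weight, and saturating it carves the cut.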