3,569 research outputs found

    Convergence Stability of Depth-Depth-Matching-Based Steepest Descent Method in Simulated Liver Surgery

    We recently established that our digital potential function is globally stable at the point where a virtual liver coincides with its real counterpart. Because the three rotational degrees of freedom are used frequently during surgery on a real liver, the stability of the potential function with respect to these rotations was first verified carefully in the laboratory, under fluorescent lamps and sunlight. We then achieved the same stability for several simulated liver operations, using a 3D-printed viscoelastic liver, in an operating room equipped with two light-emitting-diode (LED) shadowless lamps. Stability of our depth-depth matching under the steepest descent algorithm improved as the number of lamps increased, because the LED lamps do not emit light in the infrared spectrum used by our depth camera. Furthermore, the slower the angular velocity in a surgical sequence, the more the overall stability improved.
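The steepest-descent pose correction described above can be sketched with a toy one-dimensional potential; the actual depth-depth matching cost, step size, and parameterization are not given in the abstract, so every name and constant here is illustrative:

```python
import math

def potential(theta, theta_true=0.0):
    """Toy 1-D stand-in for the depth-depth matching cost: minimal when
    the virtual pose matches the real one (theta = theta_true)."""
    return 1.0 - math.cos(theta - theta_true)

def steepest_descent(theta0, lr=0.1, iters=200):
    """Plain steepest descent; the gradient is taken by central differences."""
    theta, h = theta0, 1e-6
    for _ in range(iters):
        grad = (potential(theta + h) - potential(theta - h)) / (2.0 * h)
        theta -= lr * grad
    return theta

print(abs(steepest_descent(0.8)))  # converges toward the stable point at 0
```

The abstract's stability claim corresponds to this minimum being the unique attractor over the relevant range of rotations.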

    Image-Aligned Dynamic Liver Reconstruction Using Intra-Operative Field of Views for Minimal Invasive Surgery

    Available online 30 November 2018. Author's post-print available in an open-access repository after an embargo period of 12 months (2019-11-3).

    Fusion of interventional ultrasound & X-ray

    In an ageing population, the treatment of structural heart disease is becoming more and more important. Constant improvements in medical imaging and the introduction of new catheter devices have created a trend to replace conventional open-heart surgery with minimally invasive interventions. These advanced interventions need to be guided by different medical imaging modalities, principally X-ray fluoroscopy and transesophageal echocardiography (TEE). While X-ray provides a good visualization of inserted catheters, which is essential for catheter navigation, TEE can display soft tissue, especially anatomical structures such as heart valves. Both modalities provide real-time imaging and are necessary for successful minimally invasive heart surgery. Usually, however, the two systems are detached and not connected.
It is conceivable that a fusion of both worlds can create a strong benefit for the physicians: it can lead to better communication within the clinical team and may enable new surgical workflows. Because of the completely different characteristics of the image data, a direct fusion seems impossible. Therefore, an indirect registration of ultrasound and X-ray images is used. The TEE probe is usually visible in the X-ray image during the described minimally invasive interventions, so it becomes possible to register the TEE probe in the fluoroscopic images and to establish its 3D position. The relationship of the ultrasound image to the ultrasound probe is known from calibration. To register the TEE probe on 2D X-ray images, a 2D-3D registration approach is chosen in this thesis. Several contributions are presented which improve the common 2D-3D registration algorithm, both for the task of ultrasound/X-ray fusion and for general 2D-3D registration problems. One approach is the introduction of planar parameters, which increase robustness and speed during the registration of an object on two non-orthogonal views. Another approach replaces the conventional generation of digitally reconstructed radiographs, an integral part of 2D-3D registration but also a performance bottleneck, with fast triangular mesh rendering, resulting in a significant speed-up. It is also shown that combining fast learning-based detection algorithms with 2D-3D registration increases the accuracy and the capture range compared with employing either solely for the registration/detection of a TEE probe. Finally, a first clinical prototype employing the presented approaches is described, and first clinical results are shown.
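The core of the indirect 2D-3D registration — projecting a 3D model of the probe and minimizing the reprojection error against the X-ray view — can be sketched as follows. The pinhole model, the three-point "probe" geometry, and the grid-search optimizer are illustrative stand-ins, not the thesis's actual algorithm:

```python
import numpy as np

def project(pts, f=1000.0):
    """Pinhole projection of 3-D points (camera frame, z > 0) to 2-D pixels."""
    pts = np.asarray(pts, dtype=float)
    return f * pts[:, :2] / pts[:, 2:3]

def reprojection_error(model, observed, t):
    """Mean 2-D distance between the projected, translated model points and
    the detected 2-D points -- the quantity a 2D-3D registration minimizes."""
    return float(np.mean(np.linalg.norm(project(model + t) - observed, axis=1)))

# Toy 3-point "probe" model and a synthetic observation shifted by a known offset.
model = np.array([[0.0, 0.0, 100.0], [10.0, 0.0, 100.0], [0.0, 10.0, 110.0]])
observed = project(model + np.array([5.0, -3.0, 0.0]))

# A coarse grid search over in-plane translation stands in for the optimizer.
best = min(((reprojection_error(model, observed, np.array([x, y, 0.0])),
             (float(x), float(y)))
            for x in np.arange(-10.0, 10.5, 0.5)
            for y in np.arange(-10.0, 10.5, 0.5)),
           key=lambda e: e[0])
print(best[1])  # recovers the simulated offset (5.0, -3.0)
```

A real system would optimize all six pose parameters and compare rendered probe appearance (DRRs or mesh renderings) rather than sparse points, but the objective has the same shape.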

    Models and estimators for markerless human motion tracking

    In this work, we analyze the different components of a model-based motion tracking system. The system consists of a human body model, an estimator, and a likelihood or cost function.

    EndoSLAM Dataset and An Unsupervised Monocular Visual Odometry and Depth Estimation Approach for Endoscopic Videos: Endo-SfMLearner

    Deep learning techniques hold promise for developing dense topography reconstruction and pose estimation methods for endoscopic videos. However, currently available datasets do not support effective quantitative benchmarking. In this paper, we introduce a comprehensive endoscopic SLAM dataset consisting of 3D point cloud data for six porcine organs, capsule and standard endoscopy recordings, and synthetically generated data. A Panda robotic arm, two commercially available capsule endoscopes, two conventional endoscopes with different camera properties, and two high-precision 3D scanners were employed to collect data from 8 ex-vivo porcine gastrointestinal (GI) tract organs. In total, 35 sub-datasets are provided with 6D pose ground truth for the ex-vivo part: 18 sub-datasets for the colon, 12 for the stomach, and 5 for the small intestine; four of these contain polyp-mimicking elevations created by an expert gastroenterologist. Synthetic capsule endoscopy frames from the GI tract with both depth and pose annotations are included to facilitate the study of simulation-to-real transfer learning algorithms. Additionally, we propose Endo-SfMLearner, an unsupervised monocular depth and pose estimation method that combines residual networks with a spatial attention module in order to force the network to focus on distinguishable and highly textured tissue regions. The proposed approach makes use of a brightness-aware photometric loss to improve robustness under fast frame-to-frame illumination changes. To exemplify the use-case of the EndoSLAM dataset, the performance of Endo-SfMLearner is extensively compared with the state of the art. The code and the link to the dataset are publicly available at https://github.com/CapsuleEndoscope/EndoSLAM. A video demonstrating the experimental setup and procedure is accessible through https://www.youtube.com/watch?v=G_LCe0aWWdQ. Comment: 27 pages, 16 figures.
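The idea behind a brightness-aware photometric loss can be illustrated by fitting a global affine brightness model between two frames before measuring the residual, so that pure illumination changes do not register as geometric error. This is a simplified stand-in; Endo-SfMLearner's actual loss formulation is not reproduced here:

```python
import numpy as np

def brightness_aware_photometric_loss(img_a, img_b):
    """Fit a global affine brightness model img_b ~ a * img_a + b by least
    squares, then return the mean squared residual. Pure illumination
    changes (gain/offset) therefore contribute ~zero loss."""
    x = img_a.ravel().astype(float)
    y = img_b.ravel().astype(float)
    A = np.stack([x, np.ones_like(x)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)[0:1] + (None,)
    return float(np.mean((a * x + b - y) ** 2))

rng = np.random.default_rng(0)
frame = rng.random((8, 8))
brighter = 1.5 * frame + 0.2          # same scene, illumination change only

naive = float(np.mean((frame - brighter) ** 2))   # plain photometric loss
aware = brightness_aware_photometric_loss(frame, brighter)
print(naive > 0.01, aware < 1e-9)
```

A plain photometric loss penalizes the brightness change heavily; the brightness-aware variant correctly reports the two frames as photometrically consistent.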

    Ultrasound and photoacoustic methods for anatomic and functional imaging in image guided radiation therapy

    (MATERIALS and METHODS) First, we define the physical principles and optimal protocols that provide contrast when imaging with US, and the transducer properties contributing to resolution limits. The US field of view (FOV) was characterized to determine the optimal settings with regard to imaging depth, focal region, with and without harmonic imaging, and artifact identification. This allows us to determine the minimum errors expected when registering multimodal volumes (CT, US, CBCT). Next, we designed an in-house integrated US manipulator and platform to relate the CT, 3D-US, and linear-accelerator coordinate systems. To validate the platform, an agar-based phantom with measured densities and speed of sound consistent with the tissues surrounding the bladder was fabricated. This phantom was rotated relative to the CT and US coordinate systems and imaged with both modalities. The CT and 3D-US images were imported into the treatment planning system, where US-to-US and US-to-CT images were co-registered and the registration matrix was used to re-align the phantom relative to the linear accelerator. The measured precision of the phantom setup, defined as the standard deviation of the transformation-matrix components, met and exceeded the tolerance for acceptable clinical patient re-alignment (2 mm). Statistical errors from US-US registrations for different patient orientations ranged from 0.06-1.66 mm for the x, y, and z translational components and 0.00-1.05 degrees for the rotational components. Statistical errors from US-CT registrations were 0.23-1.18 mm for the translational components and 0.08-2.52 degrees for the rotational components. This high precision in the multimodal registrations suggests that US can be used for patient positioning when targeting abdominal structures. We are now testing this on a canine patient to obtain both inter- and intra-fractional positional errors.
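The precision metric described above — the standard deviation of the transformation components across repeated registrations — can be computed as follows; the repeated-run numbers here are hypothetical, not the study's data:

```python
import statistics

def registration_precision(transform_components):
    """Per-component standard deviation across repeated registrations; the
    abstract uses this spread as the setup-precision metric."""
    return [statistics.stdev(axis) for axis in zip(*transform_components)]

# Hypothetical repeated US-to-CT translation results (mm) for x, y, z.
runs = [(0.9, -0.2, 1.1), (1.2, 0.1, 0.8), (0.7, -0.1, 1.3), (1.0, 0.0, 1.0)]
prec = registration_precision(runs)
print(all(p < 2.0 for p in prec))  # within the 2 mm clinical tolerance
```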
The objective of this experiment is to confirm Hill’s equation describing the relationship between hemoglobin saturation (SaO2) and the partial pressure of dissolved oxygen (pO2). The relationship is modeled as a sigmoidal curve that is a function of two parameters – the Hill coefficient, n, and the net association constant of HbO2, K (or pO2 at 50% SaO2). The goal is to noninvasively measure SaO2 in breast tumors in mice using photoacoustic computed tomographic (PCT) imaging and compare those measurements to a gold standard for pO2 using the OxyLite probe. First, a calibration study was performed to measure the SaO2 (co-oximeter) and pO2 (Oxylite probe) in blood using Hill’s equation (P50=23.2 mmHg and n=2.26). Next, non-invasive localized measurements of SaO2 in MDA-MD-231 and MCF7 breast tumors using PCT spectroscopic methods were compared to pO 2 levels using Oxylite probe. The fitted results for MCF7 and MDA-MD-231 data resulted in a P50 of 17.2 mmHg and 20.7 mmHg and a n of 1.76 and 1.63, respectively. The lower value of the P50 is consistent with tumors being more acidic than healthy tissue. Current work applying photon fluence corrections and image artifact reduction is expected to improve the quality of the results. In summary, this study demonstrates that photoacoustic imaging can be used to monitor tumor oxygenation, and its potential use to investigate the effectiveness of radiation therapy and the ability to adapt therapeutic protocols

    Airborne Navigation by Fusing Inertial and Camera Data

    Get PDF
    Unmanned aircraft systems (UASs) are often used as measuring systems, so precise knowledge of their position and orientation is required. This thesis covers the conception and realization of a system that combines GPS-assisted inertial navigation with advances in camera-based navigation, and shows how these complementary approaches can be used in a joint framework. In contrast to widely used concepts utilizing only one of the two approaches, a more robust overall system is realized. The presented algorithms are based on the mathematical concepts of rigid-body motions. After derivation of the underlying equations, the methods are evaluated in numerical studies and simulations; based on the results, real-world systems are used to collect data, which is evaluated and discussed. Two approaches are proposed for the system calibration, which describes the offsets between the coordinate systems of the sensors. The first integrates the calibration parameters into the classical bundle adjustment; the optimization is presented descriptively in a graph-based formulation. It requires a high-precision INS and data from a measurement flight, but in contrast to classical methods a flexible flight course can be used and no cost-intensive ground control points are required. The second approach enables the calibration of inertial navigation systems with low positional accuracy: line observations are used to optimize the rotational part of the offsets. Knowing the offsets between the sensors' coordinate systems allows measurements to be transformed bidirectionally. This is the basis for a fusion concept combining measurements from the inertial navigation system with a visual navigation approach. As a result, more robust estimates of the vehicle's own position and orientation are achieved, and the map created from the camera images is georeferenced.
It is shown how this map can be used to navigate an unmanned aerial system back to its starting position when GPS reception is disturbed or fails. The high precision of the map allows navigation through previously unexplored areas, taking into account the maximal drift of the camera-only navigation. The evaluated concept provides insight into the possibility of robust navigation of unmanned aerial systems with complementary sensors. Constantly increasing computing power allows the evaluation of large amounts of data and the development of new concepts for fusing the information; future navigation systems will use the data of all available sensors to achieve the best navigation solution at any time.
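The fusion idea — a smooth but drifting relative sensor (visual odometry) corrected by drift-free but noisy absolute fixes (GPS/INS) — can be sketched with a one-dimensional complementary filter. The filter form and every constant here are illustrative, not the thesis's estimator:

```python
import random

def complementary_fuse(gps_fixes, vo_deltas, alpha=0.98):
    """Integrate the visual-odometry increments for smoothness while pulling
    the estimate toward the absolute GPS fixes to cancel long-term drift."""
    est = gps_fixes[0]
    track = [est]
    for fix, delta in zip(gps_fixes[1:], vo_deltas):
        est = alpha * (est + delta) + (1.0 - alpha) * fix
        track.append(est)
    return track

# 1-D simulation: true position advances 1 m/step; GPS is noisy but unbiased,
# visual odometry is smooth but carries a 0.05 m/step bias (drift).
random.seed(0)
steps = 500
truth = [float(k) for k in range(steps + 1)]
gps = [p + random.gauss(0.0, 2.0) for p in truth]
vo = [1.0 + 0.05 for _ in range(steps)]

vo_only = sum(vo)                      # dead-reckoned endpoint drifts by 25 m
fused = complementary_fuse(gps, vo)
print(abs(vo_only - truth[-1]), abs(fused[-1] - truth[-1]))
```

The fused endpoint error stays bounded while the dead-reckoned error grows with flight time, which is the qualitative behavior the joint framework exploits.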

    Optical Coherence Tomography guided Laser-Cochleostomy

    Despite the high precision of lasers, it remains challenging to control laser-bone ablation without injuring the underlying critical structures. Providing an axial resolution on the micrometre scale, optical coherence tomography (OCT) is a promising candidate for imaging microstructures beneath the bone surface and monitoring the ablation process. In this work, a bridge connecting these two technologies is established: a closed-loop control of laser-bone ablation under OCT monitoring has been successfully realised.
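The closed-loop idea — measure the ablated depth after each pulse and stop before overshooting the target — can be sketched as follows; the per-pulse removal rate and all numbers are assumed for illustration, not taken from the work:

```python
def ablate_to_depth(target_um, per_pulse_um=25.0):
    """Fire laser pulses one at a time; after each pulse the (simulated) OCT
    depth reading is checked, and firing stops before the next pulse would
    overshoot the target depth."""
    depth, pulses = 0.0, 0
    while depth + per_pulse_um <= target_um:
        depth += per_pulse_um   # one pulse removes ~25 um (assumed rate)
        pulses += 1             # a real system would take an OCT scan here
    return depth, pulses

print(ablate_to_depth(500.0))  # → (500.0, 20)
```

The point of the feedback loop is that the stopping decision depends on the measured depth, not on a precomputed pulse count, so variations in ablation rate cannot drive the laser into the underlying structures.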

    Medical SLAM in an autonomous robotic system

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for registering multi-modal patient-specific data, enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and providing intelligent control of robot-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This thesis addresses the ambitious goal of achieving surgical autonomy through the study of the anatomical environment, beginning with the technology needed to analyze the scene: vision sensors. The first part of the thesis presents a novel endoscope for autonomous surgical task execution that combines a standard stereo camera with a depth sensor. This solution introduces several key advantages, such as the possibility of reconstructing 3D structure at a greater distance than traditional endoscopes allow. The problem of hand-eye calibration is then tackled, uniting the vision system and the robot in a single reference system and increasing the accuracy of the surgical work plan. The second part of the thesis addresses the problem of 3D reconstruction and the algorithms currently in use. In MIS, simultaneous localization and mapping (SLAM) can be used to localize the pose of the endoscopic camera and build a 3D model of the tissue surface. Another key element for MIS is real-time knowledge of the pose of surgical tools with respect to the surgical camera and the underlying anatomy. Starting from the ORB-SLAM algorithm, we modified the architecture to make it usable in an anatomical environment by registering the pre-operative information of the intervention to the map obtained from SLAM.
Once the SLAM algorithm was proven usable in an anatomical environment, it was improved by adding semantic segmentation to distinguish dynamic features from static ones. All the results in this thesis are validated on training setups that mimic some of the challenges of real surgery, and on setups that simulate the human body, within the Autonomous Robotic Surgery (ARS) and Smart Autonomous Robotic Assistant Surgeon (SARAS) projects.
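Using semantic segmentation to reject dynamic features can be sketched as a simple mask filter over detected keypoints; the mask layout, coordinates, and function name are illustrative, not the thesis's implementation:

```python
def filter_dynamic_features(keypoints, dynamic_mask):
    """Keep only keypoints lying on static anatomy: discard any feature whose
    pixel falls inside the segmented dynamic region (e.g. a moving tool)."""
    return [(x, y) for (x, y) in keypoints
            if not dynamic_mask[y][x]]

# 4x4 mask: right half marked dynamic (True = moving-instrument pixels).
mask = [[False, False, True, True] for _ in range(4)]
kps = [(0, 1), (1, 2), (3, 0), (2, 3)]
print(filter_dynamic_features(kps, mask))  # → [(0, 1), (1, 2)]
```

Keeping only static-region features prevents moving instruments from corrupting the camera-pose estimate and the tissue map.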
    • 

    corecore