12 research outputs found

    Optical Coherence Tomography guided Laser-Cochleostomy

    Despite the high precision of lasers, it remains challenging to control laser-bone ablation without injuring the underlying critical structures. Providing axial resolution on the micrometre scale, OCT is a promising candidate for imaging microstructures beneath the bone surface and monitoring the ablation process. In this work, a bridge connecting these two technologies is established: closed-loop control of laser-bone ablation under OCT monitoring has been successfully realised.
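
The closed-loop idea can be reduced to a short control loop: estimate the residual bone thickness from the OCT signal, fire a pulse only while a safety margin remains. The sketch below is a hypothetical simplification, not the system described here; the thickness estimator, pulse depth, and margin are illustrative assumptions.

```python
def residual_thickness_um(depth_profile):
    """Stand-in for OCT processing: count samples above a reflectivity
    threshold along one A-scan, times the axial pixel size (2 um here)."""
    return sum(1 for v in depth_profile if v > 0.5) * 2.0

def ablate(depth_profile, pulse_depth_um=10):
    """Simulated laser pulse: remove material from the top of the profile."""
    removed = int(pulse_depth_um / 2.0)
    return [0.0] * removed + depth_profile[removed:]

def closed_loop_cochleostomy(depth_profile, safety_margin_um=50.0):
    """Fire pulses while OCT reports enough bone above the safety margin."""
    pulses = 0
    while residual_thickness_um(depth_profile) > safety_margin_um:
        depth_profile = ablate(depth_profile)
        pulses += 1
    return pulses, residual_thickness_um(depth_profile)

# 400 um of bone, sampled every 2 um:
bone = [1.0] * 200
pulses, remaining = closed_loop_cochleostomy(bone)
```

The point of the loop is that the stop condition is driven by the image, not by a pre-computed pulse count, which is what makes the ablation robust to unknown bone thickness.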

    Image Registration to Map Endoscopic Video to Computed Tomography for Head and Neck Radiotherapy Patients

    The purpose of this work was to explore the feasibility of registering endoscopic video to radiotherapy treatment plans for patients with head and neck cancer without physical tracking of the endoscope during the examination. Endoscopy-CT registration would provide a clinical tool that could be used to enhance the treatment planning process and would allow for new methods to study the incidence of radiation-related toxicity. Endoscopic video frames were registered to CT by optimizing virtual endoscope placement to maximize the similarity between the frame and the virtual image. Virtual endoscopic images were rendered using a polygonal mesh created by segmenting the airways of the head and neck with a density threshold. The optical properties of the virtual endoscope were matched to a calibrated model of the real endoscope. A novel registration algorithm was developed that takes advantage of physical constraints on the endoscope to effectively search the airways of the head and neck for the desired virtual endoscope coordinates. This algorithm was tested on rigid phantoms with embedded point markers and protruding bolus material. In these tests, the median registration accuracy was 3.0 mm for point measurements and 3.5 mm for surface measurements. The algorithm was also tested on four endoscopic examinations of three patients, in which it achieved a median registration accuracy of 9.9 mm. The uncertainties caused by the non-rigid anatomy of the head and neck and differences in patient positioning between endoscopic examinations and CT scans were examined by taking repeated measurements after placing the virtual endoscope in surface meshes created from different CT scans. Non-rigid anatomy introduced errors on the order of 1-3 mm. Patient positioning had a larger impact, introducing errors on the order of 3.5-4.5 mm. Endoscopy-CT registration in the head and neck is possible, but large registration errors were found in patients. 
The uncertainty analyses suggest a lower limit of 3-5 mm on the registration accuracy achievable with this approach. Further development is required to reach an accuracy suitable for clinical use.
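
The core of the method is a pose search: render a virtual endoscopic image at a candidate pose, score its similarity against the video frame, and refine. The sketch below is a toy illustration under strong assumptions (a one-parameter pose, a stand-in renderer, normalized cross-correlation as the similarity metric), not the thesis algorithm.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-length intensity lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def render(pose):
    """Toy renderer: the image is a Gaussian bump whose position tracks the
    pose (a real system renders the segmented airway mesh instead)."""
    return [math.exp(-((i - pose) ** 2) / 50.0) for i in range(100)]

def register(frame, pose0, step=8.0):
    """Coarse-to-fine hill climb on one pose parameter, maximizing NCC."""
    pose = pose0
    while step > 0.25:
        candidates = [pose - step, pose, pose + step]
        pose = max(candidates, key=lambda p: ncc(render(p), frame))
        step /= 2.0
    return pose

frame = render(63.0)             # "real" frame whose pose is unknown
estimate = register(frame, pose0=50.0)
```

In the real problem the search runs over six pose parameters constrained to the airway lumen, which is what the physical constraints on the endoscope provide.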

    Fast widefield techniques for fluorescence and phase endomicroscopy

    Thesis (Ph.D.)--Boston University. Endomicroscopy is a recent development in biomedical optics that gives researchers and physicians microscope-resolution views of intact tissue to complement macroscopic visualization during endoscopy screening. This thesis presents HiLo endomicroscopy and oblique back-illumination endomicroscopy, fast widefield imaging techniques with fluorescence and phase contrast, respectively. Fluorescence imaging in thick tissue is often hampered by strong out-of-focus background signal. Laser scanning confocal endomicroscopy has been developed for optically-sectioned imaging free from background, but reliance on mechanical scanning fundamentally limits the frame rate and adds significant complexity and expense. HiLo is a fast, simple, widefield fluorescence imaging technique which rejects out-of-focus background signal without the need for scanning. It works by acquiring two images of the sample under uniform and structured illumination and synthesizing an optically sectioned result with real-time image processing. Oblique back-illumination microscopy (OBM) is a label-free technique which allows, for the first time, phase gradient imaging of sub-surface morphology in thick scattering tissue with a reflection geometry. OBM works by back-illuminating the sample with the oblique diffuse reflectance from light delivered via off-axis optical fibers. The use of two diametrically opposed illumination fibers allows simultaneous and independent measurement of phase gradients and absorption contrast. Video-rate single-exposure operation using wavelength multiplexing is demonstrated.
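
Both techniques come down to simple per-pixel arithmetic, which is why they run at video rate. The 1-D sketch below is an assumed simplification of the principles described, not the thesis implementation: HiLo weights the low-pass of the uniform image by the local modulation contrast recovered from the structured image and adds back the high-pass; OBM takes the normalized difference and the sum of the two opposed-illumination images for phase-gradient and absorption contrast. The filter radius and the blending factor eta are illustrative.

```python
def box_filter(signal, radius):
    """Simple moving average, shrinking the window at the edges."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def hilo(uniform, structured, radius=4, eta=1.0):
    """1-D HiLo: the illumination modulation survives only where the sample
    is in focus, so its local amplitude gates the low-pass content."""
    diff = [u - s for u, s in zip(uniform, structured)]
    weight = box_filter([abs(d) for d in diff], radius)
    lowpass = box_filter(uniform, radius)
    highpass = [u - l for u, l in zip(uniform, lowpass)]
    return [h + eta * w * l for h, w, l in zip(highpass, weight, lowpass)]

def obm_contrast(img_left, img_right):
    """OBM sketch: normalized difference of the two opposed-fiber images
    gives phase-gradient contrast; their sum gives absorption contrast."""
    grad = [(l - r) / (l + r) for l, r in zip(img_left, img_right)]
    absorb = [l + r for l, r in zip(img_left, img_right)]
    return grad, absorb

# In-focus bump on a flat out-of-focus background; only the bump is modulated.
uniform = [1.0] * 64
for i in range(28, 36):
    uniform[i] = 2.0
structured = list(uniform)
for i in range(28, 36):
    structured[i] -= 0.5 * ((-1) ** i)
sectioned = hilo(uniform, structured)
```

In this toy example the unmodulated background is rejected entirely while the modulated in-focus feature survives, which is the sectioning effect described above.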

    Multimodal optical systems for clinical oncology

    This thesis presents three multimodal optical (light-based) systems designed to improve the capabilities of existing optical modalities for cancer diagnostics and theranostics. Optical diagnostic and therapeutic modalities have seen tremendous success in improving the detection, monitoring, and treatment of cancer. For example, optical spectroscopies can accurately distinguish between healthy and diseased tissues, fluorescence imaging can light up tumours for surgical guidance, and laser systems can treat many epithelial cancers. However, despite these advances, prognoses for many cancers remain poor, positive margin rates following resection remain high, and visual inspection and palpation remain crucial for tumour detection. The synergistic combination of multiple optical modalities, as presented here, offers a promising solution. The first multimodal optical system (Chapter 3) combines Raman spectroscopic diagnostics with photodynamic therapy using a custom-built multimodal optical probe. Crucially, this system demonstrates the feasibility of nanoparticle-free theranostics, which could simplify the clinical translation of cancer theranostic systems without sacrificing diagnostic or therapeutic benefit. The second system (Chapter 4) applies computer vision to Raman spectroscopic diagnostics to achieve spatial spectroscopic diagnostics. It provides an augmented reality display of the surgical field-of-view, overlaying spatially co-registered spectroscopic diagnoses onto imaging data. This enables the translation of Raman spectroscopy from a 1D technique to a 2D diagnostic modality and overcomes the trade-off between diagnostic accuracy and field-of-view that has limited optical systems to date. The final system (Chapter 5) integrates fluorescence imaging and Raman spectroscopy for fluorescence-guided spatial spectroscopic diagnostics. 
This facilitates macroscopic tumour identification to guide accurate spectroscopic margin delineation, enabling the spectroscopic examination of suspicious lesions across large tissue areas. Together, these multimodal optical systems demonstrate that the integration of multiple optical modalities has the potential to improve patient outcomes through enhanced tumour detection and precision-targeted therapies.
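
The co-registration step that turns point spectroscopy into a 2D overlay can be sketched as: project the tracked probe tip into the camera image and accumulate each classification at its pixel. This is an illustrative assumption (a plain pinhole model with made-up intrinsics), not the thesis's registration pipeline.

```python
def project(point3d, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-frame 3-D point to pixel coordinates.
    The intrinsics here are placeholders, not a real calibration."""
    x, y, z = point3d
    return int(round(fx * x / z + cx)), int(round(fy * y / z + cy))

def build_overlay(measurements, width=640, height=480):
    """measurements: list of (probe_tip_xyz, label) pairs, label in {0, 1}.
    Returns a sparse overlay {pixel: label} to blend onto the video frame."""
    overlay = {}
    for tip, label in measurements:
        u, v = project(tip)
        if 0 <= u < width and 0 <= v < height:
            overlay[(u, v)] = label
    return overlay

measurements = [((0.00, 0.00, 0.10), 0),   # healthy reading on the optical axis
                ((0.01, 0.00, 0.10), 1)]   # suspicious reading 1 cm to the right
overlay = build_overlay(measurements)
```

A real augmented-reality display would additionally dilate and colour-code these sparse points before blending them with the live surgical view.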

    OPTICAL NAVIGATION TECHNIQUES FOR MINIMALLY INVASIVE ROBOTIC SURGERIES

    Minimally invasive surgery (MIS) involves small incisions in a patient's body, leading to reduced medical risk and shorter hospital stays compared to open surgeries. For these reasons, MIS has experienced increased demand across different types of surgery. MIS sometimes utilizes robotic instruments to complement human surgical manipulation to achieve higher precision than can be obtained with traditional surgeries. Modern surgical robots perform within a master-slave paradigm, in which a robotic slave replicates the control gestures emanating from a master tool manipulated by a human surgeon. Presently, certain human errors due to hand tremors or unintended acts are moderately compensated at the tool manipulation console. However, errors due to robotic vision and display to the surgeon are not equivalently addressed. Current vision capabilities within the master-slave robotic paradigm are supported by perceptual vision through a limited binocular view, which considerably impacts the hand-eye coordination of the surgeon and provides no quantitative geometric localization for robot targeting. These limitations lead to unexpected surgical outcomes, and longer operating times compared to open surgery. To improve vision capabilities within an endoscopic setting, we designed and built several image guided robotic systems, which obtained sub-millimeter accuracy. With this improved accuracy, we developed a corresponding surgical planning method for robotic automation. As a demonstration, we prototyped an autonomous electro-surgical robot that employed quantitative 3D structural reconstruction with near infrared registering and tissue classification methods to localize optimal targeting and suturing points for minimally invasive surgery. 
Results from validating the cooperative control and the registration of the vision system in a series of in vivo and in vitro experiments are presented, and the potential for our technique to enhance autonomous robotic minimally invasive surgery is discussed.
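
One concrete piece of such a planner is spacing suturing targets along a localized incision. The sketch below is an assumed simplification, not the dissertation's method: given an incision polyline (e.g. recovered from the NIR-registered 3D reconstruction), drop a target every fixed arc length.

```python
import math

def place_sutures(polyline, spacing_mm):
    """Walk the incision polyline and emit a suturing target every
    spacing_mm of arc length, carrying the remainder across segments."""
    targets = []
    carry = spacing_mm            # distance left until the next target
    for p, q in zip(polyline, polyline[1:]):
        seg = math.dist(p, q)
        walked = 0.0
        while carry <= seg - walked + 1e-9:
            walked += carry
            t = walked / seg      # interpolation parameter on this segment
            targets.append(tuple(p[k] + t * (q[k] - p[k]) for k in range(3)))
            carry = spacing_mm
        carry -= seg - walked
    return targets

incision = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0)]   # straight 100 mm incision
targets = place_sutures(incision, spacing_mm=25.0)
```

Even spacing is the natural objective here because it distributes closure tension uniformly; an autonomous system would then servo the tool to each target under visual feedback.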

    Fluorescence and Diffuse Reflectance Spectroscopy and Endoscopy for Tissue Analysis

    Biophotonics techniques are showing great potential for practical tissue diagnosis, enabling both localised optical spectroscopy and wide-field imaging. Many of them are based on the same concept: the spectral information they acquire contains clues about tissue biochemistry and biostructure, and these clues carry diagnostic information. Biophotonics techniques have the added advantage of easily miniaturised hardware, allowing several modalities to be combined in the same system and permitting their use during minimally invasive surgery (MIS) procedures. The work presented in this thesis builds on these advantages to design biophotonics instruments for tissue diagnosis. Fluorescence and diffuse reflectance, the two modalities of interest in this work, were implemented in both their single-point spectroscopic and imaging variants. Two "platforms", a spectroscopic probe setup and an optical imaging laparoscope, were built; each included one of the two aforementioned modalities or both together. The spectroscopic probe system was assembled to detect lesions in the digestive tract. In its first version, the setup included a dual-laser illumination system to carry out an ex vivo fluorescence study of non-alcoholic fatty liver disease (NAFLD) in a mouse model. The study demonstrated that healthy livers could be distinguished from NAFLD livers with high classification accuracy. The same fluorescence probe, inserted in a force-adaptive robotic endoscope, was then applied to a fluorescence phantom and a liver specimen to demonstrate the feasibility of recording spectra at multiple points with a controlled scanning pattern and controlled probe/sample pressure (known to affect spectral shape). This approach therefore offers a convincing method for intraoperative fluorescence measurements.
The fluorescence setup was subsequently modified into a combined fluorescence/diffuse reflectance spectroscopic probe and demonstrated as an efficient method to separate normal and diseased tissue samples from the human gastrointestinal tract. Following the single-point spectroscopy work, imaging studies were conducted with a spectrally resolved laparoscope. The system, featuring a CCD/filter-wheel unit clipped onto a traditional laparoscope, was validated on fluorescence phantoms and employed in two experiments. The first, building on the spectroscopy study of the gastrointestinal tract, was originally aimed at locating tumours in the oesophagus, but a lack of tissue availability prevented this; the system design and validation on fluorophore phantoms are nevertheless described. In the second, the underarm of a pig was imaged after injection of a nerve contrast agent to test the feasibility of in vivo nerve delineation. Fluorescence was detected from the region of interest, but no clear contrast between the nerve and the surrounding muscle tissue could be observed. Finally, the fluorescence imaging laparoscope was modified into a hyperspectral reflectance imaging laparoscope for tissue vasculature studies. It was first characterised and tested on haemoglobin phantoms with varying concentrations and oxygen saturations, and then employed in vivo to follow the temporal evolution of haemoglobin concentration and oxygen saturation in a porcine intestine following the pig's termination. A decrease in oxygen saturation was observed. The last experiment monitored the tissue re-oxygenation of a rabbit uterus transplant in the recipient animal; successful tissue re-perfusion after the graft was observed.
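
The haemoglobin concentration and saturation estimates behind such vasculature studies typically come from a Beer-Lambert unmixing of attenuation at two or more wavelengths. The sketch below uses an assumed two-wavelength model with illustrative extinction coefficients, not the calibration or wavelengths used in the thesis.

```python
# Molar extinction [cm^-1 / M] at (660 nm, 940 nm): (Hb, HbO2).
# Illustrative order-of-magnitude values, not a validated calibration.
EPS = {660: (3226.0, 319.0), 940: (693.0, 1214.0)}

def unmix(att660, att940, pathlength_cm=1.0):
    """Solve att = pathlength * (eps_hb * [Hb] + eps_hbo2 * [HbO2])
    at two wavelengths for the two concentrations (Cramer's rule)."""
    a, b = EPS[660]          # eps_hb, eps_hbo2 at 660 nm
    c, d = EPS[940]          # eps_hb, eps_hbo2 at 940 nm
    det = (a * d - b * c) * pathlength_cm
    hb = (att660 * d - att940 * b) / det
    hbo2 = (att940 * a - att660 * c) / det
    return hb, hbo2

def sto2(att660, att940):
    """Oxygen saturation: oxygenated fraction of total haemoglobin."""
    hb, hbo2 = unmix(att660, att940)
    return hbo2 / (hb + hbo2)

# Synthetic check: 70% saturation, 1 mM total haemoglobin.
hb_true, hbo2_true = 0.3e-3, 0.7e-3
att660 = EPS[660][0] * hb_true + EPS[660][1] * hbo2_true
att940 = EPS[940][0] * hb_true + EPS[940][1] * hbo2_true
```

Applied per pixel of a hyperspectral laparoscope stack, the same solve yields the concentration and saturation maps whose temporal evolution the experiments above follow.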

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data from analysis of such data or simulations. The advent of new imaging technologies, such as lightsheet microscopy, has resulted in the users being confronted with an ever-growing amount of data, with even terabytes of imaging data created within a day. With the possibility of gentler and more high-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed or embedded into full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers are becoming invaluable tools. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh and large volumetric data, containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features, and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies, where scenery can provide tangible benefit in developmental and systems biology: With Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets via tracking eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. 
We further introduce ideas for moving towards virtual reality-based laser ablation and perform a user study to gain insight into performance, acceptance, and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation, and finally, we present sciview, an ImageJ2/Fiji plugin making the features of scenery available to a wider audience.

Table of contents: Abstract; Foreword and Acknowledgements; Overview and Contributions. Part I - Introduction: 1 Fluorescence Microscopy; 2 Introduction to Visual Processing; 3 A Short Introduction to Cross Reality; 4 Eye Tracking and Gaze-based Interaction. Part II - VR and AR for Systems Biology: 5 scenery — VR/AR for Systems Biology; 6 Rendering; 7 Input Handling and Integration of External Hardware; 8 Distributed Rendering; 9 Miscellaneous Subsystems; 10 Future Development Directions. Part III - Case Studies: 11 Bionic Tracking: Using Eye Tracking for Cell Tracking; 12 Towards Interactive Virtual Reality Laser Ablation; 13 Rendering the Adaptive Particle Representation; 14 sciview — Integrating scenery into ImageJ2 & Fiji. Part IV - Conclusion: 15 Conclusions and Outlook. Backmatter & Appendices: A Questionnaire for VR Ablation User Study; B Full Correlations in VR Ablation Questionnaire; C Questionnaire for Bionic Tracking User Study; List of Tables; List of Figures; Bibliography; Selbstständigkeitserklärung
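
The gaze-to-track idea behind Bionic Tracking can be sketched very simply (an assumed simplification, not scenery's implementation): at each timepoint, the 3-D gaze sample is snapped to the nearest detected cell centroid, and the snapped positions form the cell's track.

```python
import math

def nearest_centroid(point, centroids):
    """Snap a gaze sample to the closest detected cell centre."""
    return min(centroids, key=lambda c: math.dist(point, c))

def track_from_gaze(gaze_samples, centroids_per_timepoint):
    """gaze_samples[t] is the 3-D gaze position at timepoint t;
    centroids_per_timepoint[t] lists the detected cell centres at t."""
    return [nearest_centroid(g, cs)
            for g, cs in zip(gaze_samples, centroids_per_timepoint)]

# Two timepoints, two cells each; the user follows the cell near the origin.
centroids = [[(0.0, 0.0, 0.0), (5.0, 5.0, 5.0)],
             [(0.5, 0.0, 0.0), (5.0, 5.5, 5.0)]]
gaze = [(0.2, 0.1, 0.0), (0.6, 0.1, 0.1)]
track = track_from_gaze(gaze, centroids)
```

Because the user merely has to keep looking at the cell while the timepoints play back, this turns a click-per-timepoint annotation task into a single continuous gesture, which is where the claimed order-of-magnitude speed-up comes from.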