10 research outputs found

    The Inherent Power: An Obscure Doctrine Confronts Due Process

    Higher Order of Motion Magnification for Vessel Identification in Surgical Video

    Locating vessels during surgery is critical for avoiding inadvertent damage, yet vasculature can be difficult to identify. Video motion magnification can potentially highlight vessels by exaggerating subtle motion embedded within the video so that it becomes perceivable to the surgeon. In this paper, we explore a physiological model of artery distension to extend motion magnification to higher orders of motion, leveraging the change in acceleration over time (jerk) in pulsatile motion to highlight the vascular pulse wave. Our method is compared to first- and second-order motion-based Eulerian video magnification algorithms. Using surgical video recorded during a robotic prostatectomy, we show that our method can accentuate cardio-physiological features and produce a more succinct and clearer motion-magnified video, with greater similarity to the source video in motion-free areas at large magnifications. We validate the approach with a Structural Similarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR) assessment of three videos at an increasing working distance, using three different levels of optical magnification. Spatio-temporal cross sections are presented to show the effectiveness of our proposal, and video samples are provided to demonstrate our results qualitatively.
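    As a rough illustration of the jerk-based idea (not the authors' implementation), the sketch below applies a temporal band-pass filter per pixel, takes a third temporal derivative to emphasise jerk, and adds the amplified result back onto the video. The cardiac band edges, amplification factor and frame rate are assumed placeholder values.

```python
# Minimal per-pixel sketch of jerk-emphasising Eulerian magnification.
# All parameters (fps, band edges, alpha) are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_jerk(frames, fps=25.0, low=0.8, high=2.0, alpha=20.0):
    """frames: (T, H, W) grayscale video, assumed normalised to [0, 1]."""
    # Temporal band-pass around an assumed cardiac frequency band.
    b, a = butter(2, [low / (fps / 2), high / (fps / 2)], btype="band")
    pulse = filtfilt(b, a, frames, axis=0)

    # Emphasise the third temporal derivative (jerk) of the filtered signal,
    # so sharp pulsatile transitions dominate the magnified term.
    jerk = np.gradient(np.gradient(np.gradient(pulse, axis=0), axis=0), axis=0)

    # Add the amplified jerk component back onto the original video.
    return np.clip(frames + alpha * jerk, 0.0, 1.0)
```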

    Augmented Reality needle ablation guidance tool for Irreversible Electroporation in the pancreas

    Irreversible electroporation (IRE) is a soft tissue ablation technique suitable for treatment of inoperable tumours in the pancreas. The process involves applying a high-voltage electric field to the tissue containing the mass using needle electrodes, leaving cancerous cells irreversibly damaged and vulnerable to apoptosis. Efficacy of the treatment depends heavily on the accuracy of needle placement and requires a high degree of skill from the operator. In this paper, we describe an Augmented Reality (AR) system designed to overcome the challenges associated with planning and guiding the needle insertion process. Our solution, based on the HoloLens (Microsoft, USA) platform, tracks the position of the headset, needle electrodes and ultrasound (US) probe in space. The proof-of-concept implementation of the system uses this tracking data to render real-time holographic guides on the HoloLens, giving the user insight into the current progress of needle insertion and an indication of the target needle trajectory. The operator's field of view is augmented with visual guides and a real-time US feed rendered on a holographic plane, eliminating the need to consult external monitors. Based on these early prototypes, we are aiming to develop a system that will lower the skill level required for IRE while increasing the overall accuracy of needle insertion and, hence, the likelihood of successful treatment.
    Comment: 6 pages, 5 figures. Proc. SPIE 10576 (2018). Copyright 2018 Society of Photo-Optical Instrumentation Engineers (SPIE). One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this publication for a fee or for commercial purposes, or modification of the contents of the publication are prohibited.
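    At its core, the guidance problem described above amounts to composing tracked rigid-body poses into a common frame and measuring deviation from a planned trajectory. The sketch below is a minimal, hypothetical illustration of that step only; the frame names and tracking interface are assumptions and not taken from the HoloLens implementation.

```python
# Illustrative sketch (not the authors' HoloLens code): express the tracked
# needle tip and a planned target in a common world frame and report how far
# the tip sits from the planned insertion line. All 4x4 transforms and frame
# names are hypothetical.
import numpy as np

def to_world(T_world_tracker, T_tracker_tool, p_tool):
    """Map a 3-D point from tool coordinates into world coordinates."""
    p = np.append(p_tool, 1.0)                      # homogeneous coordinates
    return (T_world_tracker @ T_tracker_tool @ p)[:3]

def trajectory_error(tip_world, entry_world, target_world):
    """Distance of the needle tip from the planned entry->target line."""
    d = target_world - entry_world
    d = d / np.linalg.norm(d)                       # unit trajectory direction
    v = tip_world - entry_world
    # Component of v perpendicular to the planned trajectory.
    return np.linalg.norm(v - np.dot(v, d) * d)
```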

    Synthetic white balancing for intra-operative hyperspectral imaging

    Hyperspectral imaging shows promise for surgical applications to non-invasively provide spatially-resolved, spectral information. For calibration purposes, a white reference image of a highly-reflective Lambertian surface should be obtained under the same imaging conditions. Standard white references are not sterilizable, and so are unsuitable for surgical environments. We demonstrate the necessity for in situ white references and address this by proposing a novel, sterile, synthetic reference construction algorithm. The use of references obtained at different distances from the subject and under different lighting conditions was examined. Spectral and color reconstructions were compared with standard measurements qualitatively and quantitatively, using ΔE and normalised RMSE respectively. The algorithm forms a composite image from a video of a standard sterile ruler, whose imperfect reflectivity is compensated for. The reference is modelled as the product of independent spatial and spectral components, and a scalar factor accounting for gain, exposure, and light intensity. Evaluation of synthetic references against ideal but non-sterile references is performed using the same metrics alongside pixel-by-pixel errors. Finally, intraoperative integration is assessed through cadaveric experiments. Improper white balancing leads to increases in all quantitative and qualitative errors. Synthetic references achieve median pixel-by-pixel errors lower than 6.5% and produce similar reconstructions and errors to an ideal reference. The algorithm integrated well into the surgical workflow, achieving median pixel-by-pixel errors of 4.77%, while maintaining good spectral and color reconstruction.
    Comment: 22 pages, 10 figures
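    The abstract models the white reference as the product of independent spatial and spectral components and a scalar factor. Below is a minimal sketch of that model and of the standard flat-field correction it would feed into; the rank-1 outer-product construction and the variable names are illustrative assumptions, not the published algorithm.

```python
# Sketch of white balancing under the stated model: reference = spatial map
# x spectral curve x scalar gain. Construction details are assumptions.
import numpy as np

def synthetic_white(spatial, spectrum, k=1.0):
    """spatial: (H, W) map, spectrum: (B,) curve -> (H, W, B) reference cube."""
    return k * spatial[:, :, None] * spectrum[None, None, :]

def white_balance(raw, white, dark):
    """Standard flat-field correction: reflectance = (raw - dark) / (white - dark)."""
    denom = np.clip(white - dark, 1e-6, None)       # avoid division by zero
    return np.clip((raw - dark) / denom, 0.0, None)
```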

    Integrated multi-modality image-guided navigation for neurosurgery: open-source software platform using state-of-the-art clinical hardware.

    PURPOSE: Image-guided surgery (IGS) is an integral part of modern neuro-oncology surgery. Navigated ultrasound provides the surgeon with reconstructed views of ultrasound data, but no commercial system presently permits its integration with other essential non-imaging-based intraoperative monitoring modalities, such as intraoperative neuromonitoring. Such a system would be particularly useful in skull base neurosurgery. METHODS: We established the functional and technical requirements of an integrated multi-modality IGS system tailored for skull base surgery with the ability to incorporate: (1) preoperative MRI data and associated 3D volume reconstructions, (2) real-time intraoperative neurophysiological data and (3) live reconstructed 3D ultrasound. We created an open-source software platform to integrate with readily available commercial hardware. We tested the accuracy of the system's ultrasound navigation and reconstruction using a polyvinyl alcohol phantom model and simulated the use of the complete navigation system in a clinical operating room using a patient-specific phantom model. RESULTS: Experimental validation of the system's navigated ultrasound component demonstrated an accuracy of [Formula: see text] and a frame rate of 25 frames per second. Clinical simulation confirmed that system assembly was straightforward, could be achieved in a clinically acceptable time of [Formula: see text] and performed with a clinically acceptable level of accuracy. CONCLUSION: We present an integrated open-source research platform for multi-modality IGS. The present prototype system was tailored for neurosurgery and met all minimum design requirements focused on skull base surgery. Future work aims to optimise the system further by addressing the remaining target requirements.
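    For readers unfamiliar with navigated ultrasound, the core operation such a platform performs is mapping ultrasound image pixels through a probe calibration and a tracker pose into patient/world space. The sketch below shows that transform chain under assumed 4x4 matrices and pixel spacing; it is a generic illustration, not this platform's actual API.

```python
# Hedged sketch of the navigated-ultrasound transform chain: pixel -> image
# plane (mm) -> probe frame (calibration) -> world frame (tracker pose).
# Matrix names and pixel scaling are placeholders.
import numpy as np

def us_pixel_to_world(px, py, scale_mm, T_image_to_probe, T_probe_to_world):
    """Map an ultrasound pixel (px, py) to 3-D world coordinates in mm."""
    # The US image is treated as a plane at z = 0 in its own frame.
    p_image = np.array([px * scale_mm[0], py * scale_mm[1], 0.0, 1.0])
    return (T_probe_to_world @ T_image_to_probe @ p_image)[:3]
```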

    Surgical Video Motion Magnification with Suppression of Instrument Artefacts

    Video motion magnification could directly highlight subsurface blood vessels in endoscopic video in order to prevent inadvertent damage and bleeding. Applying motion filters to the full surgical image is, however, sensitive to residual motion from the surgical instruments and can impede practical application due to aberrant motion artefacts. By storing the temporal filter response from local spatial frequency information over a single cardiovascular cycle before instruments are introduced to the scene, a filter can be built that determines whether motion magnification should be active for each spatial region of the surgical image. In this paper, we propose this strategy to reduce aberration due to non-physiological motion in surgical video motion magnification. We present promising results on endoscopic transnasal transsphenoidal pituitary surgery, with a quantitative comparison to recent methods using Structural Similarity (SSIM), as well as qualitative analysis comparing spatio-temporal cross sections of the videos and individual frames.
    Comment: Early accept to the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2020. Presentation available here: https://www.youtube.com/watch?v=kKI_Ygny76Q Supplementary video available here: https://www.youtube.com/watch?v=8DUkcHI149
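    A minimal sketch of the gating idea, assuming a simple per-pixel band-pass response as the stored baseline: magnification is suppressed wherever the live filter response exceeds the envelope recorded before instruments entered the scene. The band edges, margin and amplification factor are assumed values, not those used in the paper.

```python
# Illustrative gating sketch (not the authors' exact filter): compare the live
# band-pass response against a baseline envelope captured tool-free, and only
# magnify regions whose motion stays within that physiological baseline.
import numpy as np
from scipy.signal import butter, filtfilt

def _bandpass(frames, fps, band):
    b, a = butter(2, [band[0] / (fps / 2), band[1] / (fps / 2)], btype="band")
    return filtfilt(b, a, frames, axis=0)

def baseline_envelope(frames, fps=25.0, band=(0.8, 2.0)):
    """Per-pixel response magnitude over one tool-free cardiac cycle."""
    return np.abs(_bandpass(frames, fps, band)).max(axis=0)   # (H, W)

def gated_magnification(frames, baseline, alpha=20.0, margin=2.0,
                        fps=25.0, band=(0.8, 2.0)):
    resp = _bandpass(frames, fps, band)
    # Suppress regions whose response exceeds the physiological baseline,
    # indicating non-physiological (instrument) motion.
    mask = (np.abs(resp).max(axis=0) <= margin * baseline).astype(frames.dtype)
    return frames + alpha * resp * mask
```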

    Synthetic white balancing for intra-operative hyperspectral imaging

    PURPOSE: Hyperspectral imaging shows promise for surgical applications to non-invasively provide spatially resolved, spectral information. For calibration purposes, a white reference image of a highly reflective Lambertian surface should be obtained under the same imaging conditions. Standard white references are not sterilizable and so are unsuitable for surgical environments. We demonstrate the necessity for in situ white references and address this by proposing a novel, sterile, synthetic reference construction algorithm. APPROACH: The use of references obtained at different distances from the subject and under different lighting conditions was examined. Spectral and color reconstructions were compared with standard measurements qualitatively and quantitatively, using ΔE and normalized RMSE, respectively. The algorithm forms a composite image from a video of a standard sterile ruler, whose imperfect reflectivity is compensated for. The reference is modeled as the product of independent spatial and spectral components, and a scalar factor accounting for gain, exposure, and light intensity. Evaluation of synthetic references against ideal but non-sterile references is performed using the same metrics alongside pixel-by-pixel errors. Finally, intraoperative integration is assessed through cadaveric experiments. RESULTS: Improper white balancing leads to increases in all quantitative and qualitative errors. Synthetic references achieve median pixel-by-pixel errors lower than 6.5% and produce similar reconstructions and errors to an ideal reference. The algorithm integrated well into the surgical workflow, achieving median pixel-by-pixel errors of 4.77% while maintaining good spectral and color reconstruction. CONCLUSIONS: We demonstrate the importance of in situ white referencing and present a novel synthetic referencing algorithm. This algorithm is suitable for surgery while maintaining the quality of classical data reconstruction.
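    For completeness, the two reported error metrics can be sketched as follows. The CIE76 form of ΔE and the choice to normalise the RMSE by the reference range are assumptions made here for illustration; the abstract does not state which variants were used.

```python
# Sketch of the evaluation metrics: CIE76 Delta-E between two colours in
# CIELAB space, and RMSE normalised by the reference range for spectra.
import numpy as np

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two CIELAB colours (each a 3-vector)."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

def normalised_rmse(spectrum, reference):
    """RMSE between two 1-D spectra, normalised by the reference range."""
    spectrum, reference = np.asarray(spectrum), np.asarray(reference)
    rmse = np.sqrt(np.mean((spectrum - reference) ** 2))
    return rmse / (reference.max() - reference.min())
```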