Imaging biomarkers associated with extra-axial intracranial tumors: a systematic review
Extra-axial brain tumors are extra-cerebral tumors and are usually benign. The choice of treatment for extra-axial tumors often depends on tumor growth, and imaging plays a significant role in monitoring growth and informing clinical decision-making. This motivates the investigation of imaging biomarkers for these tumors that may be incorporated into clinical workflows to inform treatment decisions. The PubMed, Web of Science, Embase, and Medline databases were searched from 1 January 2000 to 7 March 2022 to systematically identify relevant publications in this area. All studies that used an imaging tool and found an association with a growth-related factor, including molecular markers, grade, survival, growth/progression, recurrence, and treatment outcomes, were included in this review. We included 44 studies: 22 (50%) of patients with meningioma; 17 (38.6%) of patients with pituitary tumors; three (6.8%) of patients with vestibular schwannomas; and two (4.5%) of patients with solitary fibrous tumors. The included studies were analyzed narratively according to tumor type and imaging tool. The risk of bias and concerns regarding applicability were assessed using QUADAS-2. Most studies (41/44) used statistics-based analysis methods, and a small number (3/44) used machine learning. Our review highlights an opportunity for future work to focus on machine learning-based deep feature identification as biomarkers, combining various feature classes such as size, shape, and intensity.
Systematic Review Registration: PROSPERO, CRD4202230692
Spatial gradient consistency for unsupervised learning of hyperspectral demosaicking: Application to surgical imaging
Hyperspectral imaging has the potential to improve intraoperative decision
making if tissue characterisation is performed in real time and at high
resolution. Hyperspectral snapshot mosaic sensors offer a promising
approach due to their fast acquisition speed and compact size. However, a
demosaicking algorithm is required to fully recover the spatial and spectral
information of the snapshot images. Most state-of-the-art demosaicking
algorithms require ground-truth training data with paired snapshot and
high-resolution hyperspectral images, but such imagery pairs with the exact
same scene are physically impossible to acquire in intraoperative settings. In
this work, we present a fully unsupervised hyperspectral image demosaicking
algorithm which only requires exemplar snapshot images for training purposes.
We regard hyperspectral demosaicking as an ill-posed linear inverse problem
which we solve using a deep neural network. We take advantage of the spectral
correlation occurring in natural scenes to design a novel inter-spectral-band
regularisation term based on spatial gradient consistency. By combining our
proposed term with standard regularisation techniques and exploiting a standard
data fidelity term, we obtain an unsupervised loss function for training deep
neural networks, which allows us to achieve real-time hyperspectral image
demosaicking. Quantitative results on hyperspectral image datasets show that our
unsupervised demosaicking approach can achieve similar performance to its
supervised counterpart, and significantly outperform linear demosaicking. A
qualitative user study on real snapshot hyperspectral surgical images confirms
the results from the quantitative analysis. Our results suggest that the
proposed unsupervised algorithm can achieve promising hyperspectral
demosaicking in real-time, thus advancing the suitability of the modality for
intraoperative use.
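The inter-band regularisation described in the abstract can be illustrated with a minimal sketch. The function names and the simple unnormalised gradient penalty below are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def spatial_gradients(band):
    """Forward-difference spatial gradients of a 2-D spectral band."""
    gx = np.diff(band, axis=1)[:-1, :]  # horizontal gradients, cropped to a common shape
    gy = np.diff(band, axis=0)[:, :-1]  # vertical gradients
    return gx, gy

def gradient_consistency_loss(cube):
    """Penalise disagreement between the spatial gradients of adjacent
    spectral bands, exploiting the spectral correlation of natural scenes.
    `cube` is a demosaicked hyperspectral cube of shape (bands, H, W)."""
    n_bands = cube.shape[0]
    loss = 0.0
    for b in range(n_bands - 1):
        gx0, gy0 = spatial_gradients(cube[b])
        gx1, gy1 = spatial_gradients(cube[b + 1])
        loss += np.mean((gx0 - gx1) ** 2) + np.mean((gy0 - gy1) ** 2)
    return loss / (n_bands - 1)
```

In training, a term of this kind would be combined with a data-fidelity term on the measured mosaic pixels and standard regularisers to form the unsupervised loss: a cube whose bands share edge structure incurs near-zero penalty, while band-inconsistent artefacts are penalised.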
Deep Reinforcement Learning Based System for Intraoperative Hyperspectral Video Autofocusing
Hyperspectral imaging (HSI) captures a greater level of spectral detail than
traditional optical imaging, making it a potentially valuable intraoperative
tool when precise tissue differentiation is essential. Hardware limitations of
current optical systems used for handheld real-time video HSI result in a
limited focal depth, thereby posing usability issues for integration of the
technology into the operating room. This work integrates a focus-tunable liquid
lens into a video HSI exoscope, and proposes novel video autofocusing methods
based on deep reinforcement learning. A first-of-its-kind robotic focal-time
scan was performed to create a realistic and reproducible testing dataset. We
benchmarked our proposed autofocus algorithm against traditional policies, and
found our novel approach to perform significantly better than traditional
techniques in terms of mean absolute focal error. In addition, we performed a
blinded usability trial by having
two neurosurgeons compare the system with different autofocus policies, and
found our novel approach to be the most favourable, making our system a
desirable addition for intraoperative HSI.
Comment: To be presented at MICCAI 202
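As a rough illustration of framing autofocus as a sequential decision problem, the toy environment below exposes discrete lens actions and rewards sharpness gains. The class, its analytic sharpness peak, and the Tenengrad-style metric are illustrative assumptions, not the paper's system:

```python
import numpy as np

def sharpness(image):
    """Gradient-energy focus measure (a Tenengrad-style proxy one would
    compute on real video frames)."""
    gx = np.diff(image, axis=1)
    gy = np.diff(image, axis=0)
    return float((gx ** 2).sum() + (gy ** 2).sum())

class FocusEnv:
    """Toy autofocus environment: actions nudge a (hypothetical)
    focus-tunable liquid-lens setting; reward is the sharpness gain."""
    def __init__(self, target=0.0, setting=1.0, step=0.1):
        self.target, self.setting, self.step = target, setting, step

    def _metric(self):
        # Analytic stand-in for a focus measure: peaks when the
        # lens setting matches the in-focus target.
        return 1.0 / (1.0 + (self.setting - self.target) ** 2)

    def act(self, action):  # action in {-1, 0, +1}
        before = self._metric()
        self.setting += action * self.step
        after = self._metric()
        return after, after - before  # observation, reward
```

A reinforcement-learning agent trained against such an interface learns a policy mapping focus observations to lens actions, which is the general shape of the problem the paper's deep-RL autofocus addresses.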
Synthetic white balancing for intra-operative hyperspectral imaging
Hyperspectral imaging shows promise for surgical applications to
non-invasively provide spatially-resolved, spectral information. For
calibration purposes, a white reference image of a highly-reflective Lambertian
surface should be obtained under the same imaging conditions. Standard white
references are not sterilizable, and so are unsuitable for surgical
environments. We demonstrate the necessity for in situ white references and
address this by proposing a novel, sterile, synthetic reference construction
algorithm. The use of references obtained at different distances from, and
under different lighting conditions than, the subject was examined. Spectral
and color reconstructions were compared with standard measurements
qualitatively and quantitatively, using normalised RMSE. The algorithm forms a
composite image from a video of a standard sterile ruler, whose imperfect
reflectivity is compensated for. The reference is modelled as the product of
independent spatial and spectral components, and a scalar factor accounting for
gain, exposure, and light intensity. Evaluation of synthetic references against
ideal but non-sterile references is performed using the same metrics alongside
pixel-by-pixel errors. Finally, intraoperative integration is assessed through
cadaveric experiments. Improper white balancing leads to increases in all
quantitative and qualitative errors. Synthetic references achieve median
pixel-by-pixel errors lower than 6.5% and produce similar reconstructions and
errors to an ideal reference. The algorithm integrated well into surgical
workflow, achieving median pixel-by-pixel errors of 4.77%, while maintaining
good spectral and color reconstruction.
Comment: 22 pages, 10 figures
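The reference model described above, a scalar gain times independent spatial and spectral components, can be sketched as a rank-1 factorisation, alongside the usual flat-field correction. The function names and the `ruler_reflectance` value are hypothetical, not taken from the paper:

```python
import numpy as np

def rank_one_reference(composite):
    """Model the white reference as (scalar gain) x (spatial map) x
    (spectral curve): the best rank-1 factorisation of a composite
    reference cube of shape (H, W, bands), via SVD."""
    h, w, c = composite.shape
    m = composite.reshape(-1, c)
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    gain = s[0]
    spatial = u[:, 0].reshape(h, w)
    spectral = vt[0]
    return gain, spatial, spectral

def calibrate_reflectance(raw, white, dark, ruler_reflectance=0.85):
    """Flat-field correction against a white reference. The
    `ruler_reflectance` factor compensates for the imperfect
    reflectivity of a sterile ruler used as a synthetic reference
    (hypothetical value)."""
    denom = np.clip(white - dark, 1e-6, None)
    return np.clip((raw - dark) / denom * ruler_reflectance, 0.0, None)
```

A genuinely rank-1 reference is recovered exactly by the factorisation (up to a joint sign flip of the two factors, which leaves the product unchanged), which matches the modelling assumption of independent spatial and spectral components.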
Lightfield hyperspectral imaging in neuro-oncology surgery: an IDEAL 0 and 1 study
Introduction: Hyperspectral imaging (HSI) has shown promise in the field of intra-operative imaging and tissue differentiation, as it carries the capability to provide real-time information invisible to the naked eye whilst remaining label-free. Previous iterations of intra-operative HSI systems have shown limitations: carrying a large footprint that limits ease of use within the confines of a neurosurgical theater environment, having a slow image acquisition time, or compromising spatial/spectral resolution in favor of improvements to the surgical workflow. Lightfield hyperspectral imaging is a novel technique with the potential to facilitate video-rate image acquisition whilst maintaining a high spectral resolution. Our pre-clinical and first-in-human studies (IDEAL 0 and 1, respectively) demonstrate the necessary steps leading to the first in-vivo use of a real-time lightfield hyperspectral system in neuro-oncology surgery.
Methods: A lightfield hyperspectral camera (Cubert Ultris X50) was integrated in a bespoke imaging system setup so that it could be safely adopted into the open neurosurgical workflow whilst maintaining sterility. Our system allowed the surgeon to capture in-vivo hyperspectral data (155 bands, 350–1,000 nm) at 1.5 Hz. Following successful implementation in a pre-clinical setup (IDEAL 0), our system was evaluated during brain tumor surgery in a single patient to remove a posterior fossa meningioma (IDEAL 1). Feedback from the theater team was analyzed and incorporated in a follow-up design aimed at implementing an IDEAL 2a study.
Results: Focusing on our IDEAL 1 study results, hyperspectral information was acquired from the cerebellum and associated meningioma with minimal disruption to the neurosurgical workflow. To the best of our knowledge, this is the first demonstration of HSI acquisition with 100+ spectral bands at a frame rate over 1 Hz in surgery.
Discussion: This work demonstrated that a lightfield hyperspectral imaging system not only meets the design criteria and specifications outlined in an IDEAL 0 (pre-clinical) study, but also that it can translate into clinical practice, as illustrated by a successful first-in-human study (IDEAL 1). This opens doors for further development and optimisation, given the increasing evidence that hyperspectral imaging can provide live, wide-field, and label-free intra-operative imaging and tissue differentiation.