49 research outputs found
A clinically translatable hyperspectral endoscopy (HySE) system for imaging the gastrointestinal tract.
Hyperspectral imaging (HSI) enables visualisation of morphological and biochemical information, which could improve disease diagnostic accuracy. Unfortunately, the wide range of image distortions that arise during flexible endoscopy in the clinic has made integration of HSI challenging. To address this challenge, we demonstrate a hyperspectral endoscope (HySE) that simultaneously records intrinsically co-registered hyperspectral and standard-of-care white-light images, which allows image distortions to be compensated computationally and an accurate hyperspectral data cube to be reconstructed as the endoscope moves in the lumen. Evaluation of HySE performance shows excellent spatial, spectral and temporal resolution and high colour fidelity. Application of HySE enables: quantification of blood oxygenation levels in tissue-mimicking phantoms; differentiation of spectral profiles from normal and pathological ex vivo human tissues; and recording of hyperspectral data under freehand motion within an intact ex vivo pig oesophagus model. HySE therefore shows potential for enabling HSI in clinical endoscopy.
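The blood-oxygenation estimate mentioned above rests on linear spectral unmixing of haemoglobin absorbance. The following is a rough, self-contained illustration of that principle only, not the HySE pipeline; the extinction values are placeholders, where a real analysis would use tabulated molar extinction coefficients.

```python
import numpy as np

# Illustrative sketch: estimate blood oxygen saturation (sO2) by linear
# spectral unmixing of an absorbance spectrum into oxy- (HbO2) and
# deoxy-haemoglobin (Hb) components. The extinction values below are
# PLACEHOLDERS, not tabulated coefficients.

wavelengths = np.array([500, 530, 560, 590, 620, 650])  # nm
eps_hbo2 = np.array([0.8, 1.2, 1.0, 0.3, 0.05, 0.04])   # placeholder
eps_hb   = np.array([0.9, 1.0, 1.1, 0.5, 0.30, 0.20])   # placeholder

def estimate_so2(absorbance):
    """Least-squares fit of absorbance = c_hbo2*eps_hbo2 + c_hb*eps_hb."""
    E = np.column_stack([eps_hbo2, eps_hb])
    coeffs, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
    c_hbo2, c_hb = np.clip(coeffs, 0, None)  # concentrations are non-negative
    return c_hbo2 / (c_hbo2 + c_hb)

# Synthetic test spectrum: 70% oxygenated blood
spectrum = 0.7 * eps_hbo2 + 0.3 * eps_hb
print(round(estimate_so2(spectrum), 2))  # → 0.7
```

In practice the fit is applied per pixel of the reconstructed hyperspectral cube, after correcting for illumination and scattering.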
Cable-driven parallel robot for transoral laser phonosurgery
Transoral laser phonosurgery (TLP) is a common surgical procedure in otolaryngology.
Currently, two techniques are commonly used: free-beam and fibre delivery. Free-beam delivery, combined with laser scanning, can achieve accurate laser pattern scanning; however, it requires a line of sight to the target. A suspension laryngoscope is used to create a straight working channel for the scanning laser beam, which can cause lesions in the patient and offers poor manipulability and ergonomics. In the fibre delivery approach, a flexible fibre transmits the laser beam, and the distal tip of the laser fibre is manipulated by a flexible robotic tool, avoiding the line-of-sight limitation. However, the laser scanning function is lost in this approach, and its performance is inferior to that of the laser scanning technique used in free-beam delivery.
A novel cable-driven parallel robot (CDPR), LaryngoTORS, has been developed for TLP.
By using a curved laryngeal blade, the robot eliminates the need for a straight suspension laryngoscope, which is expected to be less traumatic for the patient. Semi-autonomous free-path scanning can be executed with high precision and high repeatability. The performance has been verified in various bench and ex vivo tests. The technical
feasibility of the LaryngoTORS robot for TLP was considered and evaluated in this thesis.
The LaryngoTORS robot has demonstrated the potential to offer an acceptable and feasible
solution to be used in real-world clinical applications of TLP.
Furthermore, the LaryngoTORS robot can combine with fibre-based optical biopsy
techniques. Experiments of probe-based confocal laser endomicroscopy (pCLE) and
hyperspectral fibre-optic sensing were performed. The LaryngoTORS robot shows potential for delivering fibre-based optical biopsy of the larynx.
Deep learning applied to hyperspectral endoscopy for online spectral classification
Abstract: Hyperspectral imaging (HSI) is being explored in endoscopy as a tool to extract biochemical information that may improve contrast for early cancer detection in the gastrointestinal tract. Motion artefacts during medical endoscopy have traditionally limited HSI application; however, recent developments in the field have led to real-time HSI deployments. Unfortunately, traditional HSI analysis methods remain unable to process the volume of hyperspectral data rapidly enough to provide real-time feedback to the operator. Here, a convolutional neural network (CNN) is proposed to enable online classification of data obtained during HSI endoscopy. A five-layered CNN was trained and fine-tuned on a dataset of 300 hyperspectral endoscopy images acquired from a planar Macbeth ColorChecker chart and was able to distinguish between its 18 constituent colors with an average accuracy of 94.3%, achieved at 8.8 fps. Performance was then tested on a set of images simulating an endoscopy environment, consisting of color charts warped inside a rigid tube mimicking a lumen. The algorithm proved robust to these variations, with classification accuracies over 90% maintained throughout and an average drop in accuracy of only 2.4% at the points of longest working distance and greatest inclination. For further validation of the color-based classification system, ex vivo videos of a methylene-blue-dyed pig esophagus and images of different disease stages in the human esophagus were analyzed, showing spatially distinct color classifications. These results suggest that the CNN has potential to provide color-based classification during real-time HSI in endoscopy.
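As a toy illustration of the data flow such a network implements (not the paper's five-layer architecture, and with random rather than trained weights), the following sketch runs one pixel's spectrum through a 1D convolution, ReLU, global average pooling, and a softmax over 18 color classes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-pixel spectral classifier: forward pass only, random weights.
# Each pixel's spectrum -> conv features -> pooled features -> class scores.
N_BANDS, N_FILTERS, N_CLASSES, K = 32, 8, 18, 5

W_conv = rng.normal(0, 0.1, (N_FILTERS, K))      # spectral conv kernels
W_fc = rng.normal(0, 0.1, (N_FILTERS, N_CLASSES))

def conv1d_valid(x, w):
    """Valid-mode 1D correlation of spectrum x with kernel w."""
    L = len(x) - len(w) + 1
    return np.array([x[i:i + len(w)] @ w for i in range(L)])

def classify_pixel(spectrum):
    feats = np.maximum(0, [conv1d_valid(spectrum, w) for w in W_conv])  # ReLU
    pooled = np.mean(feats, axis=1)              # global average pool
    logits = pooled @ W_fc
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                   # softmax over the 18 classes

probs = classify_pixel(rng.random(N_BANDS))
print(probs.shape, round(probs.sum(), 6))  # → (18,) 1.0
```

Running this per pixel over a full hyperspectral frame yields the spatial classification maps described in the abstract; the real-time figure of 8.8 fps depends on a trained, optimised implementation.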
Automatic Recognition of Colon and Esophagogastric Cancer with Machine Learning and Hyperspectral Imaging
There are approximately 1.8 million diagnoses of colorectal cancer, 1 million diagnoses of stomach cancer, and 0.6 million diagnoses of esophageal cancer each year globally. An automatic computer-assisted diagnostic (CAD) tool to rapidly detect colorectal and esophagogastric cancer tissue in optical images would be hugely valuable to a surgeon during an intervention. Based on a colon dataset with 12 patients and an esophagogastric dataset of 10 patients, several state-of-the-art machine learning methods have been trained to detect cancer tissue using hyperspectral imaging (HSI), including Support Vector Machines (SVM) with radial basis function kernels, Multi-Layer Perceptrons (MLP) and 3D Convolutional Neural Networks (3DCNN). A leave-one-patient-out cross-validation (LOPOCV) was performed with and without combining these datasets. The ROC-AUC score of the 3DCNN was slightly higher than those of the MLP and SVM, by 0.04 AUC. The best performance was achieved with the 3DCNN for both colon and esophagogastric cancer detection, with a high ROC-AUC of 0.93. The 3DCNN also achieved the best DICE scores of 0.49 and 0.41 on the colon and esophagogastric datasets, respectively. These scores improved significantly, to 0.58 and 0.51 respectively, when a patient-specific decision threshold was used. This indicates that, in practical use, an HSI-based CAD system using an interactive decision threshold is likely to be valuable. Experiments measuring the benefit of combining the colorectal and esophagogastric datasets (22 patients) yielded significantly better results with the MLP and SVM models.
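The leave-one-patient-out protocol is the key evaluation detail here: each fold holds out every sample from one patient, so test pixels never share a patient with the training set. A minimal sketch of that splitting logic follows, with a nearest-centroid classifier standing in for the study's SVM/MLP/3DCNN and synthetic data in place of the clinical datasets:

```python
import numpy as np

# Leave-one-patient-out cross-validation (LOPOCV): every fold holds out
# all spectra from one patient. The classifier here is a simple
# nearest-centroid stand-in, not the models used in the study.

def lopocv_accuracy(X, y, patient_ids):
    accs = []
    for pid in np.unique(patient_ids):
        test = patient_ids == pid
        Xtr, ytr, Xte, yte = X[~test], y[~test], X[test], y[test]
        centroids = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        classes = np.array(sorted(centroids))
        dists = np.stack([np.linalg.norm(Xte - centroids[c], axis=1)
                          for c in classes], axis=1)
        pred = classes[dists.argmin(axis=1)]     # nearest centroid wins
        accs.append(np.mean(pred == yte))
    return float(np.mean(accs))

# Synthetic data: 4 patients, 2 well-separated "tissue" classes, 16 bands
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(c, 0.1, (40, 16)) for c in (0.0, 1.0)])
y = np.repeat([0, 1], 40)
pids = np.tile(np.repeat([0, 1, 2, 3], 10), 2)
print(lopocv_accuracy(X, y, pids))  # → 1.0 on this separable toy data
```

Splitting by patient rather than by pixel is what prevents the optimistic bias that per-pixel random splits would introduce, since spectra from one patient are strongly correlated.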
Surgical spectral imaging
Recent technological developments have resulted in the availability of miniaturised spectral imaging sensors capable of operating in the multi- (MSI) and hyperspectral imaging (HSI) regimes. Simultaneous advances in image-processing techniques and artificial intelligence (AI), especially in machine learning and deep learning, have made these data-rich modalities highly attractive as a means of extracting biological information non-destructively. Surgery in particular is poised to benefit from this, as spectrally-resolved tissue optical properties can offer enhanced contrast as well as diagnostic and guidance information during interventions. This is particularly relevant for procedures where inherent contrast is low under standard white light visualisation. This review summarises recent work in surgical spectral imaging (SSI) techniques, drawn from PubMed, Google Scholar and arXiv searches spanning the period 2013–2019. New hardware, optimised for use in both open and minimally-invasive surgery (MIS), is described, and recent commercial activity is summarised. Computational approaches to extract spectral information from conventional colour images are reviewed, as tip-mounted cameras become more commonplace in MIS. Model-based and machine learning methods of data analysis are discussed in addition to simulation, phantom and clinical validation experiments. A wide variety of surgical pilot studies are reported, but it is apparent that further work is needed to quantify the clinical value of MSI/HSI. The current trend toward data-driven analysis emphasises the importance of widely-available, standardised spectral imaging datasets, which will aid understanding of variability across organs and patients, and drive clinical translation.
Hyperspectral Data Analysis in R: The hsdar Package
Hyperspectral remote sensing is a promising tool for a variety of applications including ecology, geology, analytical chemistry and medical research. This article presents the new hsdar package for the R statistical software, which performs many of the analysis steps in a typical hyperspectral remote sensing workflow. The package introduces a new class for efficiently storing large hyperspectral data sets, such as hyperspectral cubes, within R. It includes several important hyperspectral analysis tools such as continuum removal and normalized ratio indices, and integrates two widely used radiative transfer models. In addition, the package provides methods to use the functionality of the caret package directly for machine learning tasks. Two case studies demonstrate the package's range of functionality: first, plant leaf chlorophyll content is estimated, and second, cancer in the human larynx is detected from hyperspectral data.
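One of the analysis steps named above, continuum removal, divides a reflectance spectrum by its upper convex hull so that absorption features appear as dips below 1. The following Python sketch illustrates that standard algorithm; it is an illustration of the concept, not hsdar's own R implementation:

```python
import numpy as np

# Continuum removal: fit the upper convex hull over a reflectance
# spectrum, then divide reflectance by the hull. Bands on the hull map
# to exactly 1; absorption features fall below 1.

def continuum_removed(wl, refl):
    pts = list(zip(wl, refl))
    hull = []
    for p in pts:                          # upper convex hull, left to right
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) >= 0:
                hull.pop()                 # middle point lies below the hull
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    continuum = np.interp(wl, hx, hy)      # hull evaluated at every band
    return refl / continuum

wl = np.linspace(400, 900, 6)
refl = np.array([0.2, 0.35, 0.25, 0.5, 0.55, 0.6])   # toy spectrum
cr = continuum_removed(wl, refl)
print(cr.round(2))   # hull bands are 1.0; the 600 nm dip falls below 1
```

Normalising out the continuum in this way makes absorption-feature depth comparable across spectra with different overall brightness, which is why it is a common preprocessing step before index calculation or classification.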
Multimodal endoscopic system based on multispectral and photometric stereo imaging and analysis
We propose a multimodal endoscopic system based on white light (WL), multispectral (MS), and photometric stereo (PS) imaging for the examination of colorectal cancer (CRC). Recently, the enhancement of the diagnostic accuracy of CRC colonoscopy has been reported; however, tumor diagnosis for a variety of lesion types remains challenging using current endoscopy. In this study, we demonstrate that our developed system can simultaneously discriminate tumor distributions and provide three-dimensional (3D) morphological information about the colon surface using the WL, MS, and PS imaging modalities. The results demonstrate that the proposed system has considerable potential for CRC diagnosis. © 2019, OSA - The Optical Society. All rights reserved.
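The PS channel relies on the classic Lambertian photometric stereo relation I = albedo · (L · n): with three or more known light directions, per-pixel intensities determine the surface normal and albedo by least squares. A minimal single-pixel sketch of that principle (the underlying textbook method, not this system's actual implementation):

```python
import numpy as np

# Lambertian photometric stereo for one pixel: stack k >= 3 intensity
# measurements under known unit light directions and solve for
# g = albedo * n by least squares; |g| gives albedo, g/|g| the normal.

def recover_normal(L, I):
    """L: (k,3) unit light directions, I: (k,) intensities for one pixel."""
    g, *_ = np.linalg.lstsq(L, I, rcond=None)   # g = albedo * n
    albedo = np.linalg.norm(g)
    return g / albedo, albedo

# Synthetic pixel: known normal and albedo, three unit-length lights
n_true = np.array([0.0, 0.6, 0.8])
L = np.array([[0, 0, 1.0], [0.8, 0, 0.6], [0, 0.8, 0.6]])
I = 0.5 * L @ n_true                            # Lambertian image formation
n_est, albedo = recover_normal(L, I)
print(n_est.round(3), round(albedo, 3))  # → [0.  0.6 0.8] 0.5
```

Applying this at every pixel yields the normal map from which the 3D surface morphology of the colon wall can be integrated.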
A Review on Advances in Intra-operative Imaging for Surgery and Therapy: Imagining the Operating Room of the Future
Zaffino, Paolo; Moccia, Sara; De Momi, Elena; Spadea, Maria Francesca