135 research outputs found

    Towards real-time detection of squamous pre-cancers from oesophageal endoscopic videos

    This study investigates the feasibility of applying state-of-the-art deep learning techniques to detect precancerous stages of squamous cell carcinoma (SCC) in real time, addressing both the subtle appearance changes that make SCC difficult to diagnose and the demands of video processing speed. Two deep learning models are implemented: one determines whether a video frame contains artefacts, and the other detects, segments and classifies the artefact-free frames. For detection of SCC, both Mask R-CNN and YOLOv3 architectures are implemented. In addition, to ensure that a single bounding box is detected for each region of interest instead of multiple duplicated boxes, a fast non-maximum suppression (NMS) technique is applied on top of the predictions. As a result, the developed system can process videos at 16-20 frames per second. Three classes of SCC are classified: ‘suspicious’, ‘high grade’ and ‘cancer’. For videos with a resolution of 1920x1080 pixels, the average processing time when applying YOLOv3 is in the range of 0.064-0.101 seconds per frame, i.e. 10-15 frames per second, while running under the Windows 10 operating system with one GPU (GeForce GTX 1060). The average accuracies for classification and detection are 85% and 74%, respectively. Since YOLOv3 only provides bounding boxes, Mask R-CNN is also evaluated in order to delineate lesioned regions. While it achieves a better detection result with 77% accuracy, its classification accuracy of 84% is similar to that of YOLOv3. However, the processing speed is more than 10 times slower, with an average of 1.2 seconds per frame, due to the creation of masks. The segmentation accuracy of Mask R-CNN is 63%. These results are based on data sets of 350 images; further improvement by collecting, annotating or augmenting more data is therefore needed in future work.
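
    The abstract does not include implementation details, but the non-maximum suppression step it describes is a standard procedure. The sketch below is a minimal, generic greedy NMS in Python (NumPy), not the authors' code; the box format, score ordering and the IoU threshold of 0.5 are illustrative assumptions.

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box and
    drop any remaining box that overlaps it by more than iou_threshold.
    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # indices sorted by descending confidence
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Overlap of the current best box with every remaining candidate
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Discard candidates that duplicate the kept box
        order = order[1:][iou <= iou_threshold]
    return keep
```

    Applied to the raw YOLOv3 or Mask R-CNN outputs for one frame, this leaves a single bounding box per region of interest, as described in the abstract.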

    Seeing the Big Picture: System Architecture Trends in Endoscopy and LED-Based Hyperspectral Subsystem Integration

    Early-stage colorectal lesions remain difficult to detect. Early neoplasia tends to be small (less than 10 mm) and flat, and difficult to distinguish from the surrounding mucosa. Additionally, optical diagnosis of neoplasia as benign or malignant is problematic. Low detection rates for these lesions allow continued growth in the colorectum and increase the risk of cancer formation. It is therefore crucial to detect neoplasia and other non-neoplastic lesions to determine risk and guide future treatment. Technology for detection needs to enhance the contrast of subtle tissue differences in the colorectum and track multiple biomarkers simultaneously. This work implements one such technology with the potential to achieve the desired multi-contrast outcome for endoscopic screenings: hyperspectral imaging. Traditional endoscopic imaging uses a white light source and an RGB detector to visualize the colorectum using reflected light. Hyperspectral imaging (HSI) acquires an image over a range of individual wavelength bands to create an image hypercube whose wavelength dimension is much deeper and more sensitive than that of an RGB image. A hypercube can consist of reflectance or fluorescence (or both) spectra depending on the filtering optics involved. Prior studies using HSI in endoscopy have typically involved ex vivo tissues or optics that forced a trade-off between spatial resolution, spectral discrimination and temporal sampling. This dissertation describes the systems design of an alternative HSI endoscopic imaging technology that can provide high spatial resolution, high spectral distinction and video-rate acquisition in vivo. The hyperspectral endoscopic system consists of a novel spectral illumination source for image acquisition dependent on fluorescence excitation (instead of emission). This work therefore represents a novel contribution to the field of endoscopy in combining excitation-scanning hyperspectral imaging and endoscopy. This dissertation describes: 1) the systems architecture of the endoscopic system, reviewing previous iterations and theoretical next-generation options, 2) feasibility testing of an LED-based hyperspectral endoscope system, and 3) another LED-based spectral illuminator on a microscope platform to test multi-spectral contrast imaging. The results of the architecture review point towards an endoscopic system with more complex imaging and increased computational capabilities. The hyperspectral endoscope platform proved the feasibility of an LED-based spectral light source with a multi-furcated solid light guide. Another LED-based design was tested successfully on a microscope platform with a dual mirror array similar to telescope designs. Both feasibility tests emphasized optimization of coupling optics and combining multiple diffuse light sources into a common output. These results should lead to enhanced imagery for endoscopic tissue discrimination and future optical diagnosis in routine colonoscopy.
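
    As a minimal illustration of the hypercube concept described above (not the dissertation's acquisition code), the Python sketch below stacks one monochrome frame per wavelength band into a three-dimensional array and reads out the spectrum at a single pixel; the band list, frame size and random placeholder data are assumptions for illustration only.

```python
import numpy as np

# Hypothetical excitation bands (nm) for an LED-based spectral illuminator
bands = [390, 405, 420, 435, 450, 470, 490, 510]

def build_hypercube(frames):
    """Stack one 2-D frame per wavelength band into a
    (height, width, n_bands) hypercube."""
    return np.stack(frames, axis=-1)

# Placeholder frames standing in for camera captures at each band
frames = [np.random.rand(480, 640) for _ in bands]
cube = build_hypercube(frames)

# The spectrum at one pixel is a 1-D slice along the wavelength axis;
# classification or spectral unmixing would operate on these per-pixel spectra.
spectrum = cube[240, 320, :]
print(dict(zip(bands, np.round(spectrum, 3))))
```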

    Experimental Evaluation and Analysis of LED Illumination Source for Endoscopy Imaging

    Minimally invasive surgery uses a small instrument with a camera and light that fits through a tiny incision in the skin. The selection of the light source depends on the power and driving current of the circuit. It can also help in the standardization of the camera and in capturing a true-colour image of the tissue. This paper presents an analysis of the LED sources used in clinical endoscopes for surgery and medical examination of the human body. Initially, an LED source selection mechanism generating intense illuminance in the visible band is proposed. A low-cost prototype model is developed to analyze the wavelength and illuminance of three different LED types. The effect of variation in LED illumination is investigated by changing the distance between the borescope and the LED source. True-colour image generation and tissue contrast are particularly important in medical diagnostics; therefore, a sigmoid function that improves the overall contrast ratio of the captured image in real time is presented. The spectrum and wavelength results for a current variation are presented. The type-3 LED produces higher illumination (i.e., 395 klux) and a higher peak wavelength (i.e., 622.05 nm) than the other LEDs, while the type-2 LED has a better FWHM for the blue colour spectrum. The modification of the sigmoid function enhances the image with a peak PSNR of 34.25, producing a true-colour image.
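
    The abstract does not give the paper's exact sigmoid formulation; the sketch below shows one common form of sigmoid (logistic) contrast enhancement on an intensity image normalised to [0, 1]. The gain and midpoint values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def sigmoid_contrast(image, gain=8.0, midpoint=0.5):
    """Remap intensities in [0, 1] through a logistic curve centred on
    `midpoint`; `gain` controls how strongly contrast is stretched."""
    img = np.asarray(image, dtype=np.float64)
    out = 1.0 / (1.0 + np.exp(-gain * (img - midpoint)))
    # Rescale so the output again spans the full [0, 1] range
    lo = 1.0 / (1.0 + np.exp(gain * midpoint))
    hi = 1.0 / (1.0 + np.exp(-gain * (1.0 - midpoint)))
    return (out - lo) / (hi - lo)

# Example: enhance a synthetic low-contrast frame
frame = np.clip(np.random.normal(0.5, 0.08, size=(480, 640)), 0.0, 1.0)
enhanced = sigmoid_contrast(frame)
```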

    Fusion of colour contrasted images for early detection of oesophageal squamous cell dysplasia from endoscopic videos in real time

    Standard white light (WL) endoscopy often misses precancerous oesophageal changes because they differ only subtly from the surrounding normal mucosa. While deep learning (DL) based decision support systems help to a large extent, they face two challenges: limited annotated data sets and insufficient generalisation. This paper aims to fuse a DL system with human perception by exploiting computational enhancement of colour contrast. Instead of employing conventional data augmentation techniques that alter the RGB values of an image, this study employs a human colour appearance model, CIECAM, to enhance the colours of an image. When testing on a frame of an endoscopic video, the developed system first generates its contrast-enhanced image, then processes the original and enhanced images one after another to create initial segmentation masks. Finally, fusion takes place on the assembled list of masks obtained from both images to determine the final bounding boxes, segments and class labels that are rendered on the original video frame, through the application of the non-maximum suppression (NMS) technique. The deep learning system is built upon the real-time instance segmentation network Yolact. In comparison with the same system without fusion, the sensitivity and specificity for detecting the early stage of oesophageal cancer, i.e. low-grade dysplasia (LGD), increased from 75% and 88% to 83% and 97%, respectively. The video processing/playback speed is 33.46 frames per second. The main contributions include alleviating the data source dependency of existing deep learning systems and the fusion of human perception for data augmentation.
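
    As a rough sketch of the fusion step described above (not the authors' Yolact pipeline; the detection dictionary format and the IoU threshold are assumptions), the logic can be outlined as: run the detector on both the original and the contrast-enhanced frame, pool all candidate detections, and keep one box per lesion and class via greedy NMS.

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def fuse_detections(dets_original, dets_enhanced, iou_thr=0.5):
    """Pool detections from the original and colour-enhanced frames, then
    apply per-class greedy NMS so each lesion keeps a single box and label.
    Each detection is assumed to be a dict:
    {"box": [x1, y1, x2, y2], "score": float, "label": str}."""
    pooled = dets_original + dets_enhanced
    kept = []
    for label in {d["label"] for d in pooled}:
        group = sorted((d for d in pooled if d["label"] == label),
                       key=lambda d: d["score"], reverse=True)
        while group:
            best = group.pop(0)
            kept.append(best)
            group = [d for d in group if iou(best["box"], d["box"]) <= iou_thr]
    return kept
```

    The kept boxes, segments and labels would then be rendered onto the original video frame, as the abstract describes.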

    Using Fluorescence-Polarization Endoscopy in Detection of Precancerous and Cancerous Lesions in Colon and Pancreatic Cancer

    Colitis-associated cancer (CAC) arises from premalignant flat lesions of the colon, which are difficult to detect with current endoscopic screening approaches. We have developed a complementary fluorescence and polarization reporting strategy that combines the unique biochemical and physical properties of dysplasia and cancer for real-time detection of these lesions. Utilizing a new thermoresponsive sol-gel formulation with a targeted molecular probe allowed topical application and detection of precancerous and cancerous lesions during endoscopy. Incorporation of nanowire-filtered polarization imaging into NIR fluorescence endoscopy served as a validation strategy prior to obtaining biopsies. In order to reduce repeat surgeries arising from incomplete tumor resection, we demonstrated the efficacy of the targeted molecular probe towards margins of sporadic colorectal cancer (SCC). Fluorescence-polarization microscopy using circularly polarized (CP) light served as a rapid, supplementary tool for assessment and validation of excised tissue, ensuring complete tumor resection by examining tumor margins prior to H&E-based pathological diagnosis. We extended our platform towards non-invasive directed detection of pancreatic cancer utilizing fluorescence molecular tomography (FMT) and NIR laparoscopy with the identified targeted molecular probe. We were able to non-invasively distinguish between pancreatitis and pancreatic cancer and to guide pancreatic tumor resection using NIR laparoscopy.

    Assessing generalisability of deep learning-based polyp detection and segmentation methods through a computer vision challenge

    Polyps are well-known cancer precursors identified by colonoscopy. However, variability in their size, appearance, and location makes the detection of polyps challenging. Moreover, colonoscopy surveillance and removal of polyps are highly operator-dependent procedures that take place in a highly complex organ topology, and there is a high missed-detection rate and incomplete removal of colonic polyps. To assist in clinical procedures and reduce miss rates, automated methods for detecting and segmenting polyps using machine learning have been developed in recent years. However, the major drawback of most of these methods is their limited ability to generalise to out-of-sample, unseen datasets from different centres, populations, modalities, and acquisition systems. To test this hypothesis rigorously, we, together with expert gastroenterologists, curated a multi-centre and multi-population dataset acquired from six different colonoscopy systems and challenged computational expert teams to develop robust automated detection and segmentation methods in a crowd-sourced Endoscopic computer vision challenge. This work puts forward rigorous generalisability tests and assesses the usability of the devised deep learning methods in dynamic and real clinical colonoscopy procedures. We analyse the results of the four top-performing teams for the detection task and the five top-performing teams for the segmentation task. Our analyses demonstrate that the top-ranking teams concentrated mainly on accuracy over the real-time performance required for clinical applicability. We further dissect the devised methods and provide an experiment-based hypothesis that reveals the need for improved generalisability to tackle the diversity present in multi-centre datasets and routine clinical procedures.

    An Investigation of the Diagnostic Potential of Autofluorescence Lifetime Spectroscopy and Imaging for Label-Free Contrast of Disease

    The work presented in this thesis aimed to study the application of fluorescence lifetime spectroscopy (FLS) and fluorescence lifetime imaging microscopy (FLIM) and to investigate their potential for diagnostic contrast of diseased tissue, with a particular emphasis on autofluorescence (AF) measurements of gastrointestinal (GI) disease. Initially, an ex vivo study utilising confocal FLIM was undertaken with 420 nm excitation to characterise the fluorescence lifetime (FL) images obtained from 71 GI samples from 35 patients. A significant decrease in FL was observed between normal colon and polyps (p = 0.024), and between normal colon and inflammatory bowel disease (IBD) (p = 0.015). Confocal FLIM was also performed on 23 bladder samples; a longer, although not significant, FL for cancer was observed in paired specimens (n = 5) instilled with a photosensitizer. The first in vivo study was a clinical investigation of skin cancer using a fibre-optic FL spectrofluorometer and involved the interrogation of 27 lesions from 25 patients. A significant decrease in the FL of basal cell carcinomas compared to healthy tissue was observed (p = 0.002) with 445 nm excitation. A novel clinically viable FLS fibre-optic probe was then applied ex vivo to measure 60 samples collected from 23 patients. In a paired analysis of neoplastic polyps and normal colon obtained from the same region of the colon in the same patient (n = 12), a significant decrease in FL was observed (p = 0.021) with 435 nm excitation. In contrast, with 375 nm excitation, the mean FL of IBD specimens (n = 4) was found to be longer than that of normal tissue, although not statistically significantly so. Finally, the FLS system was applied in vivo in 17 patients, with initial data indicating that 435 nm excitation results in AF lifetimes that are broadly consistent with the ex vivo studies, although no diagnostically significant differences were observed in the signals obtained in vivo.

    Vision-based retargeting for endoscopic navigation

    Endoscopy is a standard procedure for visualising the human gastrointestinal tract. With the advances in biophotonics, imaging techniques such as narrow band imaging, confocal laser endomicroscopy, and optical coherence tomography can be combined with normal endoscopy to assist the early diagnosis of diseases such as cancer. In the past decade, optical biopsy has emerged as an effective tool for tissue analysis, allowing in vivo and in situ assessment of pathological sites with real-time feature-enhanced microscopic images. However, the non-invasive nature of optical biopsy leads to an intra-examination retargeting problem, which is associated with the difficulty of re-localising a biopsied site consistently throughout the whole examination. In addition to intra-examination retargeting, retargeting of a pathological site is even more challenging across examinations, due to tissue deformation and changing tissue morphologies and appearances. The purpose of this thesis is to address both the intra- and inter-examination retargeting problems associated with optical biopsy. We propose a novel vision-based framework for intra-examination retargeting. The proposed framework is based on combining visual tracking and detection with online learning of the appearance of the biopsied site. Furthermore, a novel cascaded detection approach based on random forests and structured support vector machines is developed to achieve efficient retargeting. To cater for reliable inter-examination retargeting, the solution provided in this thesis is achieved by solving an image retrieval problem, for which an online scene association approach is proposed to summarise an endoscopic video collected in the first examination into distinctive scenes. A hashing-based approach is then used to learn the intrinsic representations of these scenes, such that retargeting can be achieved in subsequent examinations by retrieving the relevant images using the learnt representations. For performance evaluation of the proposed frameworks, extensive phantom, ex vivo and in vivo experiments have been conducted, with results demonstrating the robustness and potential clinical value of the methods proposed.
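
    The thesis summarises the first examination into distinctive scenes and learns hash codes so that subsequent examinations can retrieve matching views. The thesis's hashing is learnt from data; the sketch below substitutes a generic random-projection (LSH) hashing scheme with placeholder feature vectors, purely to illustrate how retargeting-by-retrieval in Hamming space can work.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_hasher(feature_dim, n_bits=64):
    """Return a function mapping a feature vector to an n_bits binary code
    via random hyperplane projections (generic LSH, for illustration only)."""
    planes = rng.normal(size=(n_bits, feature_dim))
    return lambda feat: (planes @ feat > 0).astype(np.uint8)

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return int(np.count_nonzero(a != b))

# Index scene descriptors from the first examination (placeholder features)
hasher = make_hasher(feature_dim=256)
scene_codes = {name: hasher(rng.normal(size=256))
               for name in ["scene_01", "scene_02", "scene_03"]}

# At re-examination, hash the current frame's descriptor and retrieve the
# closest indexed scene by Hamming distance
query_code = hasher(rng.normal(size=256))
best_scene = min(scene_codes, key=lambda s: hamming(scene_codes[s], query_code))
print(best_scene)
```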