
    Triplet network for classification of benign and pre-malignant polyps

    Colorectal polyps are critical indicators of colorectal cancer (CRC). Classification of polyps during colonoscopy is still a challenge for which many medical experts have come up with visual models, albeit with limited success. Early detection of CRC prevents further complications in the colon, which makes identification of abnormal tissue a crucial step during routine colonoscopy. In this paper, a classification approach is proposed to differentiate between benign and pre-malignant polyps using features learned from a Triplet Network architecture. The study includes a total of 154 patients, with 203 different polyps. For each polyp, an image is acquired with White Light (WL) and additionally with two recent endoscopic modalities: Blue Laser Imaging (BLI) and Linked Color Imaging (LCI). The network is trained with the associated triplet loss, allowing the learning of non-linear features that form a highly discriminative embedding and lead to excellent results with simple linear classifiers. Additionally, acquiring each polyp with WL, BLI and LCI enables the combination of the posterior probabilities, yielding a more robust classification result. Threefold cross-validation is employed as the validation method, and accuracy, sensitivity, specificity and area under the curve (AUC) are computed as evaluation metrics. While our approach achieves a classification performance similar to state-of-the-art methods, it has a much lower inference time (from hours down to seconds, on a single GPU). The increased robustness and much faster execution facilitate future advances towards patient safety and may avoid time-consuming and costly histological assessment.
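
    To make the triplet-learning idea concrete, the sketch below shows how an embedding network can be trained with a triplet loss and how a simple linear classifier can then be fitted on the resulting embeddings. It is only an illustrative outline under assumed choices (ResNet-18 backbone, 128-dimensional embedding, margin 0.2, random placeholder tensors), not the authors' implementation.

    ```python
    import torch
    import torch.nn as nn
    import torchvision.models as models
    from sklearn.linear_model import LogisticRegression

    class EmbeddingNet(nn.Module):
        """Maps a polyp image to a compact, L2-normalised embedding vector."""
        def __init__(self, dim=128):
            super().__init__()
            backbone = models.resnet18(weights=None)          # assumed backbone, not the paper's
            backbone.fc = nn.Linear(backbone.fc.in_features, dim)
            self.net = backbone

        def forward(self, x):
            return nn.functional.normalize(self.net(x), dim=1)

    model = EmbeddingNet()
    triplet_loss = nn.TripletMarginLoss(margin=0.2)           # the triplet loss mentioned above
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # One illustrative training step on a batch of (anchor, positive, negative) triplets.
    anchor = torch.randn(8, 3, 224, 224)      # e.g. benign polyp images (placeholder tensors)
    positive = torch.randn(8, 3, 224, 224)    # same class as the anchor
    negative = torch.randn(8, 3, 224, 224)    # pre-malignant polyp images
    loss = triplet_loss(model(anchor), model(positive), model(negative))
    loss.backward()
    optimizer.step()

    # After training, a simple linear classifier is fitted on the learned embeddings.
    with torch.no_grad():
        emb = model(torch.randn(16, 3, 224, 224)).numpy()
    labels = [0] * 8 + [1] * 8                # placeholder benign / pre-malignant labels
    clf = LogisticRegression().fit(emb, labels)
    ```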

    Polyp malignancy classification with CNN features based on Blue Laser and Linked Color Imaging

    In-vivo classification of benign and pre-malignant polyps is a laborious task that requires histopathology confirmation. In an effort to improve the quality of clinical diagnosis, medical experts have come up with visual models, with only limited success. In this paper, a classification approach is proposed to differentiate polyp malignancy, using features extracted from the Global Average Pooling (GAP) layer of a pre-trained Convolutional Neural Network (CNN). Two recently developed endoscopic modalities are used to improve the pipeline prediction: Blue Laser Imaging (BLI) and Linked Color Imaging (LCI). Furthermore, a new strategy of per-class data augmentation is adopted to tackle the unbalanced class distribution. The results are compared with a more general approach, showing how artificial examples can improve results on highly unbalanced problems. For the same reason, the combined features for each patient are extracted and used to train several machine learning classifiers without CNNs. Moreover, to speed up computation, a recent GPU-based Support Vector Machine (SVM) scheme is employed to substantially decrease the overhead during training. The presented methodology shows the feasibility of using the LCI and BLI techniques for automatic polyp malignancy classification and facilitates future advances to limit the need for time-consuming and costly histopathological assessment.
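
    As a rough illustration of the pipeline described above, the sketch below extracts GAP features from a pre-trained CNN and trains a classical SVM on them. The backbone, image sizes and labels are placeholder assumptions; the GPU-based SVM mentioned in the abstract is only noted in a comment.

    ```python
    import torch
    import torchvision.models as models
    from sklearn.svm import SVC

    # Pre-trained backbone; dropping the classification head keeps the 2048-d GAP features.
    cnn = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    cnn.fc = torch.nn.Identity()
    cnn.eval()

    def gap_features(images):
        """images: float tensor of shape (N, 3, 224, 224) -> (N, 2048) GAP feature matrix."""
        with torch.no_grad():
            return cnn(images).numpy()

    # Placeholder data; in the paper these would be WL/BLI/LCI polyp images.
    X = gap_features(torch.randn(20, 3, 224, 224))
    y = [0] * 10 + [1] * 10                   # benign vs. pre-malignant labels

    # A GPU-accelerated SVM (e.g. cuML's SVC) could replace this to cut training time,
    # as the abstract suggests; the standard scikit-learn SVM keeps the sketch self-contained.
    clf = SVC(kernel="rbf", probability=True).fit(X, y)
    ```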

    Ensemble of Deep Convolutional Neural Networks for Classification of Early Barrett’s Neoplasia Using Volumetric Laser Endomicroscopy

    Barrett's esophagus (BE) is a known precursor of esophageal adenocarcinoma (EAC). Patients with BE undergo regular surveillance to detect early stages of EAC. Volumetric laser endomicroscopy (VLE) is a novel technology incorporating a second-generation form of optical coherence tomography and is capable of imaging the inner tissue layers of the esophagus over a 6 cm length scan. However, interpretation of full VLE scans is still a challenge for human observers. In this work, we train an ensemble of deep convolutional neural networks to detect neoplasia in 45 BE patients, using a dataset of images acquired with VLE in a multi-center study. We achieve an area under the receiver operating characteristic curve (AUC) of 0.96 on the unseen test dataset and compare our results with previous work on VLE analysis, where an AUC of only 0.90 was achieved via cross-validation on 18 BE patients. Our method for detecting neoplasia in BE patients facilitates future advances in patient treatment and provides clinicians with new assisting solutions to process and better understand VLE data.
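
    The ensembling step can be illustrated in a few lines: average the per-model neoplasia probabilities and score the result with ROC-AUC. The numbers below are simulated placeholders, not VLE results, and simple averaging is an assumed combination rule.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=100)                   # 0 = non-neoplastic, 1 = neoplastic

    # Suppose each CNN in the ensemble outputs a neoplasia probability per VLE image.
    probs_per_model = [rng.random(100) for _ in range(5)]   # 5 hypothetical ensemble members

    ensemble_prob = np.mean(probs_per_model, axis=0)        # simple probability averaging
    print("ensemble AUC:", roc_auc_score(y_true, ensemble_prob))
    ```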

    A CNN CADx System for Multimodal Classification of Colorectal Polyps Combining WL, BLI, and LCI Modalities

    Colorectal polyps are critical indicators of colorectal cancer (CRC). Blue Laser Imaging and Linked Color Imaging are two modalities that allow improved visualization of the colon. In conjunction with the Blue Laser Imaging (BLI) Adenoma Serrated International Classification (BASIC), endoscopists are capable of distinguishing benign and pre-malignant polyps. Despite these advancements, this classification still suffers from a high misclassification rate for pre-malignant colorectal polyps. This work proposes a computer-aided diagnosis (CADx) system that exploits the additional information contained in two novel imaging modalities, enabling more informative decision-making during colonoscopy. We train and benchmark six commonly used CNN architectures and compare the results with 19 endoscopists who employed the standard clinical classification model (BASIC). The proposed CADx system for classifying colorectal polyps achieves an area under the curve (AUC) of 0.97. Furthermore, we incorporate visual explanatory information together with a probability score, jointly computed from White Light, Blue Laser Imaging, and Linked Color Imaging. Our CADx system for automatic polyp malignancy classification facilitates future advances towards patient safety and may reduce time-consuming and costly histology assessment.
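
    A minimal sketch of the multi-modality fusion mentioned above, assuming a simple average of the per-modality posteriors; the actual combination rule used in the paper may differ.

    ```python
    import numpy as np

    def fuse_modalities(p_wl, p_bli, p_lci):
        """Each input is the CNN's pre-malignancy probability for one modality of the same polyp."""
        return np.mean([p_wl, p_bli, p_lci])

    # Example: three modality-specific predictions for a single polyp (invented numbers).
    score = fuse_modalities(0.62, 0.81, 0.74)
    label = "pre-malignant" if score >= 0.5 else "benign"
    print(f"fused probability {score:.2f} -> {label}")
    ```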

    Manifold learning for cardiac image analysis: application to temporal enhancement and 3D heart reconstruction from freehand ultrasound

    Bachelor's thesis in Audiovisual Systems. Tutor: Gemma Piella Fenoy. Manifold learning is increasingly being used to recover the underlying structure of medical image data. In this work, manifold learning algorithms are applied to extract the non-linear relationship between the frames of one cycle of a beating heart. The use of these techniques allows the characterization of the images according to their cardiac phase and their position, which can be useful for computer-aided detection, diagnosis and therapy. Two ways of using this non-linear embedded information from 2D echocardiography images are presented: on the one hand, to increase the temporal resolution of the sequence and therefore allow for a better analysis; on the other hand, to provide a 3D visualization of the heart.
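
    A brief sketch of the general idea, assuming an off-the-shelf manifold-learning method (Isomap from scikit-learn) applied to flattened echocardiography frames; the thesis may use a different algorithm and pre-processing.

    ```python
    import numpy as np
    from sklearn.manifold import Isomap

    # Pretend we have 60 echocardiography frames of 64x64 pixels, flattened to vectors.
    frames = np.random.rand(60, 64 * 64)

    # Non-linear embedding of the frames; nearby points should correspond to nearby cardiac phases.
    embedding = Isomap(n_neighbors=8, n_components=2).fit_transform(frames)

    # Ordering frames along the embedded curve approximates their cardiac phase, which can then be
    # used to interpolate extra frames (temporal enhancement) or to place the 2D slices for a 3D view.
    phase_order = np.argsort(embedding[:, 0])
    ```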

    Spectral and temporal features as the estimators of the irrelevant speech effect

    The distractive effects on cognitive processes ascribed to the nature of sound have been studied in the paradigm of "irrelevant sound," where test participants perform cognitive tasks in the presence of background noise. By comparing the test scores for different acoustic stimulus conditions in such experiments, the "irrelevant sound (speech) effect" (ISE) can be quantified. The ISE is often explained by the changing-state hypothesis: the distinctive segmentation of sound tokens, where tokens may be understood as sound segments that can be distinguished from each other in temporal and/or spectral characteristics. A sequence of sounds consisting of differing tokens produces much more disruption than a steady-state sound. The present work investigates the relationship between features from both the temporal and spectral domains and the ISE, predicting the magnitude of the effect separately with two estimators: the Average Modulation Transfer Function (AMTF) and the Frequency Domain Correlation Coefficient (FDCC). The first parameter is a measure of temporal variations in a sound, whilst the latter measures spectral variability in the sounds. Background stimuli are synthesized from a pulse train in which modified and unmodified pulses alternate. In order to manipulate the temporal and spectral features in the stimuli, a numerical optimization method was used to generate two sets of background stimuli in which one of the two descriptors was always kept constant and the other was varied in a systematic way. Therefore, the stimulus sets used in this study allow the separate estimation of the role of the two estimators on cognitive performance in tasks involving serial ordering of short-term memory content.
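
    For orientation only, the sketch below computes two simplified stand-ins for the descriptors discussed above: a spectrum-correlation measure in the spirit of the FDCC, and a crude envelope-based measure of temporal variation. The exact AMTF and FDCC definitions follow the cited literature and are not reproduced here; the signals are toy sinusoidal tokens.

    ```python
    import numpy as np

    def spectral_correlation(token_a, token_b):
        """Pearson correlation between the magnitude spectra of two sound tokens (FDCC-like)."""
        spec_a = np.abs(np.fft.rfft(token_a))
        spec_b = np.abs(np.fft.rfft(token_b))
        return np.corrcoef(spec_a, spec_b)[0, 1]

    def envelope_variation(signal, frame=256):
        """Crude stand-in for a temporal-variation measure: spread of the frame-wise RMS envelope."""
        n = len(signal) // frame
        env = np.array([np.sqrt(np.mean(signal[i * frame:(i + 1) * frame] ** 2)) for i in range(n)])
        return env.std() / env.mean()

    fs = 16000
    t = np.arange(1024) / fs
    token_a = np.sin(2 * np.pi * 500 * t)                   # two tokens with different spectra
    token_b = np.sin(2 * np.pi * 900 * t)
    sequence = np.concatenate([token_a, token_b] * 4)       # alternating "changing-state" sequence

    print("FDCC-like spectral correlation:", spectral_correlation(token_a, token_b))
    print("temporal variation of the sequence:", envelope_variation(sequence))
    ```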

    Multi-modal classification of polyp malignancy using CNN features with balanced class augmentation

    Colorectal polyps are an indicator of colorectal cancer (CRC). Classification of polyps during colonoscopy is still a challenge for which many medical experts have come up with visual models, albeit with limited success. In this paper, a classification approach is proposed to differentiate polyp malignancy, using features extracted from the Global Average Pooling (GAP) layer of a Convolutional Neural Network (CNN). Two recent endoscopic modalities are used to improve the algorithm's prediction: Blue Laser Imaging (BLI) and Linked Color Imaging (LCI). Furthermore, a new strategy of per-class data augmentation is adopted to tackle the unbalanced class distribution and to improve the decisions of the classifiers. As a result, we increase the performance compared to state-of-the-art methods (0.97 vs. 0.90 AUC). Our method for automatic polyp malignancy classification facilitates future advances towards patient safety and may avoid time-consuming and costly histopathological assessment.
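
    A hedged sketch of per-class augmentation for balancing a skewed two-class polyp dataset follows; the transforms and counts are illustrative assumptions, not the paper's exact recipe.

    ```python
    import random
    import numpy as np

    def augment(image):
        """Very simple augmentation: random horizontal flip and a random 90-degree rotation."""
        if random.random() < 0.5:
            image = np.fliplr(image)
        return np.rot90(image, k=random.randint(0, 3))

    def balance_by_augmentation(images_by_class):
        """images_by_class: dict label -> list of HxWx3 arrays; returns a class-balanced dict."""
        target = max(len(v) for v in images_by_class.values())
        balanced = {}
        for label, imgs in images_by_class.items():
            extra = [augment(random.choice(imgs)) for _ in range(target - len(imgs))]
            balanced[label] = imgs + extra
        return balanced

    # Placeholder data: 30 benign vs. 10 pre-malignant images.
    data = {"benign": [np.zeros((64, 64, 3))] * 30, "pre-malignant": [np.zeros((64, 64, 3))] * 10}
    balanced = balance_by_augmentation(data)
    print({k: len(v) for k, v in balanced.items()})         # both classes now hold 30 images
    ```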

    Towards ultrahigh resolution OCT based endoscopical pituitary gland and adenoma screening: a performance parameter evaluation

    Ultrahigh resolution optical coherence tomography (UHR-OCT) for differentiating pituitary gland versus adenoma tissue has been investigated for the first time, indicating more than 80% accuracy. For biomarker identification, OCT images of paraffin-embedded tissue are correlated with histopathological slices. The identified biomarkers are verified on fresh biopsies. Additionally, an approach based on resolution-modified UHR-OCT ex vivo data is presented and evaluated, investigating the optical performance parameters required for realization in an in vivo endoscope. The identified morphological features detectable with UHR-OCT, namely cell groups with reticulin framework, show a promising differentiation ability, encouraging endoscopic OCT probe development for in vivo application.
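
    The resolution-modification step mentioned above could, in principle, be emulated as below: blur and decimate a high-resolution ex vivo B-scan to approximate a lower-resolution endoscopic acquisition. The degradation model and parameters are assumptions for illustration only.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def degrade_resolution(bscan, axial_sigma=2.0, lateral_sigma=3.0, downsample=2):
        """Blur an OCT B-scan anisotropically, then decimate and interpolate back to size."""
        blurred = gaussian_filter(bscan, sigma=(axial_sigma, lateral_sigma))
        low = blurred[::downsample, ::downsample]
        return zoom(low, downsample, order=1)

    bscan = np.random.rand(512, 1024)          # placeholder for a UHR-OCT B-scan
    degraded = degrade_resolution(bscan)
    print(bscan.shape, "->", degraded.shape)
    ```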

    Automatic image and text-based description for colorectal polyps using BASIC classification

    Colorectal polyps (CRP) are precursor lesions of colorectal cancer (CRC). Correct identification of CRPs during in-vivo colonoscopy is supported by the endoscopist's expertise and medical classification models. A recently developed classification model is the Blue Light Imaging Adenoma Serrated International Classification (BASIC), which describes the differences between non-neoplastic and neoplastic lesions acquired with blue light imaging (BLI). Computer-aided detection (CADe) and diagnosis (CADx) systems are efficient at visually assisting with medical decisions but fall short at translating decisions into relevant clinical information. The communication between machine and medical expert is of crucial importance to improve diagnosis of CRP during in-vivo procedures. In this work, the combination of a polyp image classification model and a language model is proposed to develop a CADx system that automatically generates text comparable to the human language employed by endoscopists. The developed system generates sentences equivalent to the human reference and describes CRP images acquired with white light (WL), blue light imaging (BLI) and linked color imaging (LCI). An image feature encoder and a BERT module are employed to build the AI model, and an external test set is used to evaluate the results and compute the linguistic metrics. The experimental results show the construction of complete sentences with established metric scores of BLEU-1 = 0.67, ROUGE-L = 0.83 and METEOR = 0.50. The developed CADx system for automatic CRP image captioning facilitates future advances towards automatic reporting and may help reduce time-consuming histology assessment.
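
    The linguistic evaluation can be reproduced in spirit with standard libraries, as sketched below for BLEU-1 (NLTK) and ROUGE-L (rouge-score); the sentences are invented examples, not data from the study.

    ```python
    from nltk.translate.bleu_score import sentence_bleu
    from rouge_score import rouge_scorer

    # Invented example: clinician reference description vs. model-generated caption.
    reference = "the polyp shows a regular surface pattern and is most likely benign"
    generated = "the polyp has a regular surface pattern and appears benign"

    # BLEU-1: unigram precision only (weights put all mass on 1-grams).
    bleu1 = sentence_bleu([reference.split()], generated.split(), weights=(1, 0, 0, 0))

    # ROUGE-L: longest-common-subsequence F-measure between reference and candidate.
    rougeL = rouge_scorer.RougeScorer(["rougeL"]).score(reference, generated)["rougeL"].fmeasure

    print(f"BLEU-1: {bleu1:.2f}  ROUGE-L: {rougeL:.2f}")
    ```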