
    Histopathological image analysis: a review

    Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging, which complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the state of the art in CAD technology for digitized histopathology. The paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe.

    Assessment of algorithms for mitosis detection in breast cancer histopathology images

    The proliferative activity of breast tumors, which is routinely estimated by counting mitotic figures in hematoxylin and eosin stained histology sections, is considered one of the most important prognostic markers. However, mitosis counting is laborious, subjective, and may suffer from low inter-observer agreement. With the wider acceptance of whole slide images in pathology labs, automatic image analysis has been proposed as a potential solution to these issues. In this paper, the results of the Assessment of Mitosis Detection Algorithms 2013 (AMIDA13) challenge are described. The challenge was based on a data set consisting of 12 training and 11 testing subjects, with more than one thousand mitotic figures annotated by multiple observers. Short descriptions and results from the evaluation of eleven methods are presented. The top performing method has an error rate comparable to the inter-observer agreement among pathologists.
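Challenges of this kind typically match detected centroids to annotated ones within a distance tolerance before computing precision, recall, and F-measure. A minimal sketch of such distance-based matching, using a simplified greedy variant with illustrative coordinates and tolerance (not the AMIDA13 evaluation code):

```python
import numpy as np

def evaluate_detections(pred, gt, max_dist):
    """Greedy one-to-one matching of predicted vs. annotated mitosis
    centroids: a prediction within max_dist of a still-unmatched
    ground-truth point counts as a true positive."""
    remaining = list(gt)
    tp = 0
    for p in pred:
        if not remaining:
            break
        d = [np.hypot(p[0] - g[0], p[1] - g[1]) for g in remaining]
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            tp += 1
            remaining.pop(j)  # each annotation can be matched only once
    fp = len(pred) - tp
    fn = len(remaining)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical centroid coordinates in pixels; max_dist is the tolerance
p, r, f1 = evaluate_detections([(10, 10), (50, 50), (90, 90)],
                               [(12, 11), (52, 49)], max_dist=8)
```

The greedy nearest-neighbour assignment is a simplification; a full evaluation would use optimal (e.g. Hungarian) matching.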

    Input significance analysis: feature ranking through synaptic weights manipulation for ANNs-based classifiers

    Owing to the ANNs' architecture, the input significance analysis (ISA) methods selected for manipulating synaptic weights are Connection Weights (CW) and Garson’s Algorithm (GA). The ANNs-based classifiers that support such manipulation are the Multi-Layer Perceptron (MLP) and Evolving Fuzzy Neural Networks (EFuNNs). The goals of this work are, first, to identify which of the two classifiers works best with the filtered/ranked data; second, to test the FR method using a selected dataset taken from the UCI Machine Learning Repository in an online environment; and lastly, to confirm the FR results using another dataset from the same source in the same environment. Three groups of experiments were conducted to accomplish these goals. The results are promising: when FR is applied, gains in efficiency and accuracy are noticeable compared to the original data.

    Keywords: artificial neural networks; input significance analysis; feature selection; feature ranking; connection weights; Garson’s algorithm; multi-layer perceptron; evolving fuzzy neural networks
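Garson’s Algorithm, one of the two ISA methods above, ranks inputs by partitioning the absolute synaptic weights of a single-hidden-layer MLP. A minimal sketch under that assumption; the function name and toy weight values are illustrative, not taken from the paper:

```python
import numpy as np

def garson_importance(w_ih, w_ho):
    """Garson's algorithm for one output unit of a single-hidden-layer MLP.
    w_ih: (n_inputs, n_hidden) input-to-hidden weights
    w_ho: (n_hidden,) hidden-to-output weights
    Returns relative importance per input, summing to 1."""
    w_ih = np.abs(w_ih)
    w_ho = np.abs(w_ho)
    # Share of each input within each hidden unit, scaled by that unit's
    # contribution to the output
    contrib = w_ih / w_ih.sum(axis=0, keepdims=True) * w_ho
    importance = contrib.sum(axis=1)
    return importance / importance.sum()

# Toy example: input 0 carries the largest weights, so it should rank first
w_ih = np.array([[2.0, 3.0], [0.5, 0.2], [0.1, 0.4]])
w_ho = np.array([1.0, 1.5])
imp = garson_importance(w_ih, w_ho)
ranking = imp.argsort()[::-1]  # indices of inputs, most important first
```

Because only absolute weight magnitudes are used, Garson's measure discards the sign of each connection, which is one reason the paper pairs it with the Connection Weights method.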

    A Survey on Deep Learning in Medical Image Analysis

    Full text link
    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.

    Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201

    Quantification of glands in histopathological images of gastric cancer

    Automatic detection and quantification of glands in gastric cancer may contribute to objectively measuring lesion severity, to developing strategies for early diagnosis, and, most importantly, to improving patient categorization. However, gland quantification is a highly subjective task, prone to error due to the high biopsy traffic and the varying experience of each expert. This master’s dissertation comprises three chapters that lead to an objective identification of glands. The first chapter presents a new approach for segmentation of glandular nuclei based on nuclear local and contextual (neighborhood) information (NLCI). A Gradient-Boosted-Regression-Tree classifier is trained to distinguish between glandular and non-glandular nuclei. Validation was carried out using 45,702 manually annotated nuclei from 90 fields of view (patches) extracted from whole slide images of patients diagnosed with gastric cancer. NLCI achieved an accuracy of 0.977 and an F-measure of 0.955, while fast R-CNN yielded a corresponding accuracy and F-measure of 0.923 and 0.719, respectively. The second chapter presents an entire framework for automatic detection of glands in gastric cancer images. Gland candidates are selected from a binarized version of the hematoxylin channel; next, each gland’s shape and nuclei are characterized using local features, which feed a classifier previously trained, under random cross-validation, on images manually annotated by an expert. Validation was carried out on a dataset of 1,330 patches from seven fields of view extracted from whole slide images of patients diagnosed with gastric cancer. Results showed an accuracy of 93% using a linear classifier. Finally, the third chapter analyzes the most relevant features of glands and their glandular nuclei in order to predict whether a patient will survive more than a year after being diagnosed with gastric cancer. A feature selection based on mutual information (the max-dependency, max-relevance, min-redundancy "mRMR" approach) selects the features that correlate best with patient survival. A dataset of 668 fields of view (FoV) with 2,076 glandular structures was extracted from 14 whole slide images of patients diagnosed with gastric cancer. Results showed an accuracy of 78.57% using Quadratic Discriminant Analysis (QDA) trained with a leave-one-out scheme, i.e., training on thirteen cases and leaving one case out for validation.

    Master’s in Biomedical Engineering. Research line: Signal processing
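The leave-one-out scheme described above (train on thirteen cases, validate on the held-out one) can be sketched with scikit-learn; the synthetic two-class features below merely stand in for the mRMR-selected gland descriptors, and the sample counts mirror the 14-case setup for illustration only:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for per-case gland features: two survival classes,
# 14 "cases" total, 4 features each
X = np.vstack([rng.normal(0, 1, (7, 4)), rng.normal(2, 1, (7, 4))])
y = np.array([0] * 7 + [1] * 7)

# Each fold trains on 13 cases and tests on the single held-out case
scores = cross_val_score(QuadraticDiscriminantAnalysis(), X, y,
                         cv=LeaveOneOut())
accuracy = scores.mean()
```

With only one test case per fold, each fold score is 0 or 1; the reported accuracy is their mean over all 14 folds.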

    Deep weakly-supervised learning methods for classification and localization in histology images: a survey

    Using state-of-the-art deep learning models for cancer diagnosis presents several challenges related to the nature and availability of labeled histology images. In particular, cancer grading and localization in these images normally rely on both image- and pixel-level labels, the latter requiring a costly annotation process. In this survey, deep weakly-supervised learning (WSL) models are investigated to identify and locate diseases in histology images, without the need for pixel-level annotations. Given training data with global image-level labels, these models can simultaneously classify histology images and yield pixel-wise localization scores, thereby identifying the corresponding regions of interest (ROI). Since relevant WSL models have mainly been investigated within the computer vision community, and validated on natural scene images, we assess the extent to which they apply to histology images, which have challenging properties, e.g. very large size, similarity between foreground and background, highly unstructured regions, stain heterogeneity, and noisy/ambiguous labels. The most relevant models for deep WSL are compared experimentally in terms of accuracy (classification and pixel-wise localization) on several public benchmark histology datasets for breast and colon cancer -- BACH ICIAR 2018, BreaKHis, CAMELYON16, and GlaS. Furthermore, for large-scale evaluation of WSL models on histology images, we propose a protocol to construct WSL datasets from Whole Slide Imaging. Results indicate that several deep learning models can provide a high level of classification accuracy, although accurate pixel-wise localization of cancer regions remains an issue for such images. Code is publicly available.

    Comment: 35 pages, 18 figures
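A common mechanism in the kind of WSL model surveyed here is class activation mapping: the image-level classifier weights re-weight the final convolutional feature maps to produce pixel-wise localization scores without any pixel-level labels. A minimal sketch under that assumption, with illustrative shapes and random stand-in tensors:

```python
import numpy as np

def class_activation_map(features, weights, cls):
    """CAM-style localization from image-level supervision only.
    features: (C, H, W) final conv activations for one image
    weights:  (n_classes, C) global-average-pooling classifier weights
    Returns an (H, W) score map for class `cls`, rescaled to [0, 1]."""
    cam = np.tensordot(weights[cls], features, axes=1)  # weighted sum over C
    cam -= cam.min()
    return cam / cam.max() if cam.max() > 0 else cam

rng = np.random.default_rng(0)
feats = rng.random((8, 16, 16))   # hypothetical conv features
w = rng.random((2, 8))            # hypothetical classifier weights
heatmap = class_activation_map(feats, w, cls=1)
roi = heatmap > 0.5               # thresholded region-of-interest mask
```

For histology, the (H, W) map would be computed per patch and stitched back over the whole slide, which is where the very large image sizes noted in the abstract become the practical bottleneck.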

    Texture Analysis and Machine Learning to Predict Pulmonary Ventilation from Thoracic Computed Tomography

    Chronic obstructive pulmonary disease (COPD) leads to persistent airflow limitation, placing a large burden on patients and the health care system. Thoracic CT provides an opportunity to observe the structural pathophysiology of COPD, whereas hyperpolarized gas MRI provides images of the consequent ventilation heterogeneity. However, hyperpolarized gas MRI is currently limited to research centres, due to the high cost of gas and polarization equipment. Therefore, I developed a pipeline using texture analysis and machine learning methods to create predicted ventilation maps based on non-contrast enhanced, single-volume thoracic CT. In a COPD cohort, predicted ventilation maps were qualitatively and quantitatively related to ground-truth MRI ventilation, and both maps were related to important patient lung function and quality-of-life measures. This study is the first to demonstrate the feasibility of predicting hyperpolarized MRI-based ventilation from single-volume, breath-hold thoracic CT, which has the potential to translate pulmonary ventilation information to widely available thoracic CT imaging.
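The pipeline described above can be sketched in miniature: per-voxel texture features computed from the CT volume feed a regressor trained against co-registered ventilation values. Everything below (the feature set, window size, and synthetic images) is an illustrative stand-in, not the thesis's actual texture-analysis pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def patch_features(ct, i, j, k=2):
    """Toy texture features for one pixel of a 2D CT slice:
    mean and std of the (2k+1)x(2k+1) neighbourhood, plus the pixel value."""
    win = ct[i - k:i + k + 1, j - k:j + k + 1]
    return [win.mean(), win.std(), ct[i, j]]

rng = np.random.default_rng(1)
ct = rng.normal(-700, 150, (32, 32))     # hypothetical HU values
vent = (ct - ct.min()) / np.ptp(ct)      # hypothetical ventilation map

# Build a voxel-wise regression dataset (interior pixels only)
coords = [(i, j) for i in range(2, 30) for j in range(2, 30)]
X = np.array([patch_features(ct, i, j) for i, j in coords])
y = np.array([vent[i, j] for i, j in coords])

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
pred_map = model.predict(X)              # predicted ventilation scores
```

A real pipeline would use richer texture descriptors (e.g. grey-level co-occurrence statistics), train and test on separate patients, and work in 3D.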

    Medical Image Classification using Deep Learning Techniques and Uncertainty Quantification

    The emergence of medical image analysis using deep learning techniques has introduced multiple challenges in terms of developing robust and trustworthy systems for automated grading and diagnosis. Several works have been presented to improve classification performance. However, these methods lack the capacity to capture different levels of contextual information among image regions, strategies to introduce diversity in learning through ensemble-based techniques, or uncertainty measures for the predictions generated by automated systems. Consequently, the presented methods provide sub-optimal results, which are not sufficient for clinical practice. To enhance classification performance and introduce trustworthiness, deep learning techniques and uncertainty quantification methods are required to provide diversity in contextual learning and an initial stage of explainability, respectively. This thesis aims to explore and develop novel deep learning techniques accompanied by uncertainty quantification for developing actionable automated grading and diagnosis systems. More specifically, the thesis provides the following three main contributions. First, it introduces a novel entropy-based elastic ensemble of Deep Convolutional Neural Networks (DCNNs), termed 3E-Net, for classifying grades of invasive breast carcinoma microscopic images. 3E-Net is based on a patch-wise network for feature extraction and image-wise networks for final image classification, and uses an elastic ensemble based on Shannon entropy as an uncertainty quantification method for measuring the level of randomness in image predictions. As the second contribution, the thesis presents a novel multi-level context- and uncertainty-aware deep learning architecture, named MCUa, for the classification of breast cancer microscopic images.
    MCUa consists of multiple feature extractors and multi-level context-aware models in a dynamic ensemble fashion, to learn the spatial dependencies among image patches and enhance learning diversity. The architecture also uses Monte Carlo (MC) dropout to measure the uncertainty of image predictions and to decide, based on the generated uncertainty score, whether the prediction for an input image can be accepted. The third contribution of the thesis introduces a novel model-agnostic method, AUQantO, that establishes an actionable strategy for optimising uncertainty quantification for deep learning architectures. AUQantO works by optimising a hyperparameter threshold, which is compared against uncertainty scores from Shannon entropy and MC-dropout. The optimal threshold is obtained from single- and multi-objective functions, which are optimised using multiple optimisation methods. A comprehensive set of experiments was conducted using multiple medical imaging datasets and multiple novel evaluation metrics to demonstrate the effectiveness of the three contributions for clinical practice. First, the 3E-Net versions achieved accuracies of 96.15% and 99.50% on the invasive breast carcinoma dataset. The second contribution, MCUa, achieved an accuracy of 98.11% on the breast cancer histology image dataset. Lastly, AUQantO showed significant improvements in the performance of state-of-the-art deep learning models, with average accuracy improvements of 1.76% and 2.02% on the breast cancer histology image dataset and of 5.67% and 4.24% on the skin cancer dataset using the two uncertainty quantification techniques. AUQantO also demonstrated the ability to determine the optimal number of excluded images for a particular dataset.
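The entropy-based exclusion idea behind these contributions can be sketched as follows: compute the Shannon entropy of each softmax output and refer high-entropy cases to a human expert rather than acting on them. The threshold and batch values below are illustrative, and AUQantO optimises the threshold rather than fixing it by hand:

```python
import numpy as np

def shannon_entropy(probs):
    """Predictive entropy of a softmax vector; higher means more uncertain."""
    p = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
    return float(-(p * np.log(p)).sum())

def triage(prob_batch, threshold):
    """Keep predictions whose entropy is below the threshold; refer the
    rest to an expert. `threshold` stands in for the optimised
    hyperparameter."""
    keep, refer = [], []
    for i, p in enumerate(prob_batch):
        (keep if shannon_entropy(p) < threshold else refer).append(i)
    return keep, refer

batch = np.array([[0.98, 0.02],   # confident prediction
                  [0.55, 0.45]])  # uncertain prediction
keep, refer = triage(batch, threshold=0.3)
```

The same triage logic applies unchanged when the uncertainty score comes from MC-dropout (variance over stochastic forward passes) instead of entropy.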