
    Broadband hyperspectral imaging for breast tumor detection using spectral and spatial information

    Complete tumor removal during breast-conserving surgery remains challenging due to the lack of optimal intraoperative margin assessment techniques. Here, we use hyperspectral imaging for tumor detection in fresh breast tissue. We evaluated different wavelength ranges and two classification algorithms: a pixel-wise classification algorithm and a convolutional neural network that combines spectral and spatial information. The highest classification performance was obtained using the full wavelength range (450-1650 nm). Adding spatial information mainly improved the differentiation of tissue classes within the malignant and healthy classes. High sensitivity and specificity were accomplished, which offers potential for hyperspectral imaging as a margin assessment technique to improve surgical outcome. (C) 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement.
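To make the pixel-wise route concrete, the sketch below classifies each pixel's reflectance spectrum independently with a linear SVM. The cube dimensions, label map, and classifier choice are placeholders for illustration, not the paper's actual model; the abstract's second approach would additionally bring in spatial context with a convolutional network.

```python
# Minimal sketch of pixel-wise spectral classification (not the paper's exact model).
# Assumes a hyperspectral cube `cube` of shape (H, W, B) with B wavelength bands
# covering 450-1650 nm, and a per-pixel label map `labels` of shape (H, W).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

H, W, B = 64, 64, 200                      # hypothetical cube dimensions
cube = np.random.rand(H, W, B)             # placeholder reflectance spectra
labels = np.random.randint(0, 2, (H, W))   # 0 = healthy, 1 = tumor (placeholder)

X = cube.reshape(-1, B)                    # one spectrum per pixel
y = labels.reshape(-1)

clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
clf.fit(X, y)                              # train on annotated pixels

pred_map = clf.predict(X).reshape(H, W)    # per-pixel tumor/healthy map
```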

    MILD-Net: Minimal Information Loss Dilated Network for Gland Instance Segmentation in Colon Histology Images

    The analysis of glandular morphology within colon histopathology images is an important step in determining the grade of colon cancer. Despite the importance of this task, manual segmentation is laborious, time-consuming and can suffer from subjectivity among pathologists. The rise of computational pathology has led to the development of automated methods for gland segmentation that aim to overcome the challenges of manual segmentation. However, this task is non-trivial due to the large variability in glandular appearance and the difficulty in differentiating between certain glandular and non-glandular histological structures. Furthermore, a measure of uncertainty is essential for diagnostic decision making. To address these challenges, we propose a fully convolutional neural network that counters the loss of information caused by max-pooling by re-introducing the original image at multiple points within the network. We also use atrous spatial pyramid pooling with varying dilation rates for preserving the resolution and multi-level aggregation. To incorporate uncertainty, we introduce random transformations during test time for an enhanced segmentation result that simultaneously generates an uncertainty map, highlighting areas of ambiguity. We show that this map can be used to define a metric for disregarding predictions with high uncertainty. The proposed network achieves state-of-the-art performance on the GlaS challenge dataset and on a second independent colorectal adenocarcinoma dataset. In addition, we perform gland instance segmentation on whole-slide images from two further datasets to highlight the generalisability of our method. As an extension, we introduce MILD-Net+ for simultaneous gland and lumen segmentation, to increase the diagnostic power of the network. Comment: Initial version published at Medical Imaging with Deep Learning (MIDL) 2018.
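The test-time uncertainty idea can be illustrated with a small sketch: run the network on a few transformed copies of the input, map the predictions back to the original frame, and use their spread as a per-pixel uncertainty map. Flip-based transformations and the `model` / `predict_with_uncertainty` names are assumptions for illustration, not the MILD-Net implementation.

```python
# Minimal sketch of test-time augmentation for segmentation uncertainty,
# in the spirit of MILD-Net's uncertainty map (not the authors' implementation).
# `model` is assumed to be any segmentation network returning per-pixel
# foreground logits for an input of shape (1, C, H, W).
import torch

def predict_with_uncertainty(model, image, n_samples=4):
    """Average predictions over flipped inputs; per-pixel std serves as uncertainty."""
    model.eval()
    preds = []
    with torch.no_grad():
        for i in range(n_samples):
            flip_h = bool(i % 2)            # simple flip-based augmentations
            flip_v = bool((i // 2) % 2)
            x = image
            if flip_h:
                x = torch.flip(x, dims=[-1])
            if flip_v:
                x = torch.flip(x, dims=[-2])
            p = torch.sigmoid(model(x))     # predict on transformed input
            if flip_h:                      # map prediction back to original frame
                p = torch.flip(p, dims=[-1])
            if flip_v:
                p = torch.flip(p, dims=[-2])
            preds.append(p)
    stack = torch.stack(preds)              # (n_samples, *prediction_shape)
    mean_pred = stack.mean(dim=0)           # enhanced segmentation
    uncertainty = stack.std(dim=0)          # high values = ambiguous regions
    return mean_pred, uncertainty
```

Predictions whose uncertainty exceeds a chosen threshold could then be disregarded, which is the role the abstract describes for the uncertainty map.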

    Structure Prediction for Gland Segmentation with Hand-Crafted and Deep Convolutional Features

    We present a novel method to segment instances of glandular structures from colon histopathology images. We use a structure learning approach which represents local spatial configurations of class labels, capturing structural information normally ignored by sliding-window methods. This allows us to reveal different spatial structures of pixel labels (e.g., locations between adjacent glands, or far from glands), and to correctly identify neighboring glandular structures as separate instances. Exemplars of label structures are obtained via clustering and used to train support vector machine classifiers. The predicted label structures are then combined and post-processed to obtain segmentation maps. We combine hand-crafted, multi-scale image features with features computed by a deep convolutional network trained to map images to segmentation maps. We evaluate the proposed method on the public domain GlaS data set, which allows extensive comparisons with recent, alternative methods. Using the GlaS contest protocol, our method achieves the overall best performance.
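A minimal sketch of the structure-learning step, under assumed patch sizes and placeholder features: cluster local label patches into exemplar label structures, then train an SVM to predict the exemplar class from image features. This is illustrative only, not the authors' exact pipeline.

```python
# Sketch: exemplar label structures via clustering, predicted by an SVM.
# Patch size, feature dimensionality, and data are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

n_patches, patch, n_feat = 500, 7, 64
label_patches = np.random.randint(0, 2, (n_patches, patch * patch))  # local label configurations
img_features = np.random.rand(n_patches, n_feat)  # hand-crafted + CNN features per patch

# 1) Cluster label patches: each cluster centre is an exemplar label structure.
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(label_patches)
structure_ids = kmeans.labels_

# 2) Train an SVM to predict which exemplar structure a patch's features imply.
svm = SVC(kernel="rbf").fit(img_features, structure_ids)

# At test time, the predicted exemplar structures would be stitched together and
# post-processed into an instance segmentation map.
pred_structures = kmeans.cluster_centers_[svm.predict(img_features)].reshape(-1, patch, patch)
```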

    Evaluation of a pipeline for simulation, reconstruction, and classification in ultrasound-aided diffuse optical tomography of breast tumors

    Significance: Diffuse optical tomography is an ill-posed problem. Combination with ultrasound can improve the results of diffuse optical tomography applied to the diagnosis of breast cancer and allow for classification of lesions. Aim: To provide a simulation pipeline for the assessment of reconstruction and classification methods for diffuse optical tomography with concurrent ultrasound information. Approach: A set of digital breast phantoms with benign and malignant lesions was simulated building on the software VICTRE. Acoustic and optical properties were assigned to the phantoms for the generation of B-mode images and optical data. A reconstruction algorithm based on a two-region nonlinear fitting and incorporating the ultrasound information was tested. Machine learning classification methods were applied to the reconstructed values to classify lesions as benign or malignant. Results: The approach allowed us to generate realistic ultrasound and optical data and to test a two-region reconstruction method on a large number of realistic simulations. When information is extracted from ultrasound images, at least 75% of lesions are correctly classified. With ideal two-region separation, the accuracy is higher than 80%. Conclusions: A pipeline for the generation of realistic ultrasound and diffuse optics data was implemented. Machine learning methods applied to an optical reconstruction with a nonlinear optical model and morphological information make it possible to discriminate malignant lesions from benign ones.
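The final classification stage could look roughly like the sketch below, where reconstructed two-region optical properties serve as features for a standard classifier evaluated with cross-validation. The feature layout, classifier, and data are placeholders, not necessarily the methods evaluated in the paper.

```python
# Sketch of classifying lesions from reconstructed optical properties.
# Feature names and data are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

n_lesions, n_wavelengths = 200, 8
# e.g. lesion absorption and reduced scattering coefficients per wavelength
features = np.random.rand(n_lesions, 2 * n_wavelengths)
is_malignant = np.random.randint(0, 2, n_lesions)      # placeholder ground truth

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, features, is_malignant, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```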
