Suggestive Annotation: A Deep Active Learning Framework for Biomedical Image Segmentation
Image segmentation is a fundamental problem in biomedical image analysis.
Recent advances in deep learning have achieved promising results on many
biomedical image segmentation benchmarks. However, due to large variations in
biomedical images (different modalities, image settings, objects, noise, etc),
to utilize deep learning on a new application, it usually needs a new set of
training data. This can incur a great deal of annotation effort and cost,
because only biomedical experts can annotate effectively, and often there are
too many instances in images (e.g., cells) to annotate. In this paper, we aim
to address the following question: With limited effort (e.g., time) for
annotation, what instances should be annotated in order to attain the best
performance? We present a deep active learning framework that combines fully
convolutional network (FCN) and active learning to significantly reduce
annotation effort by making judicious suggestions on the most effective
annotation areas. We utilize uncertainty and similarity information provided by
FCN and formulate a generalized version of the maximum set cover problem to
determine the most representative and uncertain areas for annotation. Extensive
experiments using the 2015 MICCAI Gland Challenge dataset and a lymph node
ultrasound image segmentation dataset show that, using annotation suggestions
by our method, state-of-the-art segmentation performance can be achieved by
using only 50% of training data.
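The suggestion step can be pictured with a minimal sketch in the style below, assuming a hypothetical list of candidate patches, an ensemble of FCN prediction functions and a feature extractor; uncertainty is taken as disagreement across the ensemble, and representative patches are then picked greedily in the spirit of maximum set cover. This is an illustration of the idea, not the authors' implementation.

    import numpy as np

    def suggest_annotations(patches, predict_fns, embed_fn, n_uncertain=16, n_suggest=4):
        # Uncertainty of each patch: mean pixel-wise variance across an FCN ensemble.
        uncertainty = []
        for p in patches:
            preds = np.stack([f(p) for f in predict_fns])   # (K, H, W) foreground probabilities
            uncertainty.append(preds.var(axis=0).mean())
        # Keep the most uncertain candidates.
        candidates = np.argsort(uncertainty)[-n_uncertain:]
        # Represent each patch by a cosine-normalised FCN feature descriptor.
        feats = np.stack([embed_fn(p) for p in patches])
        feats /= np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8
        sim = feats[candidates] @ feats.T                    # candidate-to-pool similarity
        # Greedy maximum-set-cover style selection: each new patch should best
        # improve how well the chosen set "covers" the whole unlabelled pool.
        chosen, covered = [], np.full(len(patches), -np.inf)
        remaining = list(range(len(candidates)))
        for _ in range(n_suggest):
            gains = [np.maximum(covered, sim[i]).sum() for i in remaining]
            best = remaining.pop(int(np.argmax(gains)))
            chosen.append(int(candidates[best]))
            covered = np.maximum(covered, sim[best])
        return chosen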
Detecting and Classifying Nuclei on a Budget
The benefits of deep neural networks can be hard to realise in medical imaging tasks because training sample sizes are often modest. Pre-training on large data sets and subsequent transfer learning to specific tasks with limited labelled training data has proved a successful strategy in other domains. Here, we implement and test this idea for detecting and classifying nuclei in histology, important tasks that enable quantifiable characterisation of prostate cancer. We pre-train a convolutional neural network for nucleus detection on a large colon histology dataset, and examine the effects of fine-tuning this network with different amounts of prostate histology data. Results show promise for clinical translation. However, we find that transfer learning is not always a viable option when training deep neural networks for nucleus classification. As such, we also demonstrate that semi-supervised ladder networks are a suitable alternative for learning a nucleus classifier with limited data
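A minimal PyTorch sketch of the transfer-learning recipe described above, assuming a detector pre-trained on colon histology and a small prostate data loader (all names here are hypothetical); early layers are frozen and only the later, task-specific layers are fine-tuned.

    import torch
    import torch.nn as nn

    def finetune_on_prostate(model, prostate_loader, frozen_prefixes=("conv1", "conv2"),
                             lr=1e-4, epochs=5):
        # Freeze early, generic feature layers; adapt only the later ones.
        for name, param in model.named_parameters():
            if name.startswith(frozen_prefixes):             # assumed layer names
                param.requires_grad = False
        optimiser = torch.optim.Adam(
            (p for p in model.parameters() if p.requires_grad), lr=lr)
        criterion = nn.BCEWithLogitsLoss()                   # detection map as per-pixel classification
        for _ in range(epochs):
            for images, targets in prostate_loader:          # small labelled prostate set
                optimiser.zero_grad()
                loss = criterion(model(images), targets)
                loss.backward()
                optimiser.step()
        return model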
A New Hybrid Method for Gland Segmentation in Histology Images
Gland segmentation has become an important task in biomedical image analysis. Accurate gland segmentation could be instrumental in the design of personalised treatments, potentially leading to improved patient survival rates. Different gland instance segmentation architectures have been tested in the work reported here, and a hybrid method that combines two-level classification is described. The proposed method achieved very good image-level classification results, with 100% classification accuracy on the available test data; the overall performance of the hybrid method therefore depends largely on the results of the pixel-level classification. Diverse image features reflecting the various morphological gland structures visible in histology images have been tested in order to improve the performance of the gland instance segmentation. Based on the reported experimental results, the hybrid approach combining two-level classification achieved the best overall results among the tested methods
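As a rough illustration of the two-level idea (not the authors' exact pipeline), an image-level classifier can first decide the tissue class and a class-specific pixel-level model can then produce the gland mask; helper names and class labels below are assumptions.

    from scipy import ndimage

    def hybrid_gland_segmentation(image, image_classifier, pixel_segmenters):
        tissue_class = image_classifier(image)             # e.g. 'benign' / 'malignant' (assumed labels)
        prob_map = pixel_segmenters[tissue_class](image)   # pixel-level classification for that class
        instances, n_glands = ndimage.label(prob_map > 0.5)  # split the mask into gland instances
        return tissue_class, instances, n_glands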
Nuclei Detection Using Mixture Density Networks
Nuclei detection is an important task in the histology domain as it is a main
step toward further analysis such as cell counting, cell segmentation, study of
cell connections, etc. This is a challenging task due to the complex texture of
histology images, variation in shape, and touching cells. To tackle these hurdles, many approaches have been proposed in the literature, among which deep learning methods achieve the best performance. Hence, in this paper, we
propose a novel framework for nuclei detection based on Mixture Density
Networks (MDNs). These networks are suitable to map a single input to several
possible outputs and we utilize this property to detect multiple seeds in a
single image patch. A new modified form of a cost function is proposed for
training and handling patches with missing nuclei. The probability maps of the
nuclei in the individual patches are next combined to generate the final
image-wide result. The experimental results show state-of-the-art performance on a complex colorectal adenocarcinoma dataset.
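A hedged PyTorch sketch of the core idea: the network outputs mixture parameters over 2-D seed coordinates, and patches without annotated nuclei are masked out of the likelihood term. This is a simplification of the paper's modified cost function, handling one target seed per patch.

    import math
    import torch
    import torch.nn.functional as F

    def mdn_nll(pi_logits, mu, log_sigma, seeds, has_nucleus):
        # pi_logits: (B, K) mixture logits; mu: (B, K, 2) component means;
        # log_sigma: (B, K) isotropic log std; seeds: (B, 2) target seed per patch;
        # has_nucleus: (B,) float mask that zeroes patches with no annotated nucleus.
        log_pi = F.log_softmax(pi_logits, dim=-1)                      # (B, K)
        sigma2 = torch.exp(2 * log_sigma)                              # (B, K)
        d2 = ((seeds.unsqueeze(1) - mu) ** 2).sum(-1)                  # (B, K) squared distances
        log_prob = log_pi - d2 / (2 * sigma2) - 2 * log_sigma - math.log(2 * math.pi)
        nll = -torch.logsumexp(log_prob, dim=-1)                       # (B,) mixture NLL
        return (nll * has_nucleus).sum() / has_nucleus.sum().clamp(min=1)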
Learning to Segment Microscopy Images with Lazy Labels
The need for labour-intensive pixel-wise annotation is a major limitation of
many fully supervised learning methods for segmenting bioimages that can
contain numerous object instances with thin separations. In this paper, we
introduce a deep convolutional neural network for microscopy image
segmentation. Annotation issues are circumvented by training the network on coarse labels combined with only a very small number of images with pixel-wise annotations. We call this new labelling strategy 'lazy' labels.
Image segmentation is stratified into three connected tasks: rough inner region
detection, object separation and pixel-wise segmentation. These tasks are
learned in an end-to-end multi-task learning framework. The method is
demonstrated on two microscopy datasets, where we show that the model gives
accurate segmentation results even if exact boundary labels are missing for a
majority of annotated data. This brings more flexibility and efficiency to the training of data-hungry deep neural networks, and it is applicable to biomedical images with poor contrast at the object boundaries or with diverse
textures and repeated patterns
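A minimal sketch of such a multi-task loss, assuming per-image flags indicating which of the three annotation types is available; tasks whose labels are missing are simply switched off, which is what lets coarse 'lazy' labels drive most of the training. This is illustrative only, not the authors' exact objective.

    import torch.nn.functional as F

    def lazy_label_loss(out_region, out_edge, out_pixel,
                        lbl_region, lbl_edge, lbl_pixel,
                        has_region, has_edge, has_pixel, w=(1.0, 1.0, 1.0)):
        # Per-image cross-entropy for rough inner-region detection, object separation
        # and full pixel-wise segmentation; the has_* masks (one float per image)
        # disable tasks with no annotation.
        l_region = F.cross_entropy(out_region, lbl_region, reduction='none').mean((1, 2))
        l_edge   = F.cross_entropy(out_edge,   lbl_edge,   reduction='none').mean((1, 2))
        l_pixel  = F.cross_entropy(out_pixel,  lbl_pixel,  reduction='none').mean((1, 2))
        total = (w[0] * l_region * has_region +
                 w[1] * l_edge   * has_edge +
                 w[2] * l_pixel  * has_pixel)
        return total.mean()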
Structure Preserving Stain Normalization of Histopathology Images Using Self Supervised Semantic Guidance
Although generative adversarial network (GAN) based style transfer is the state of the art in histopathology color-stain normalization, such methods do not explicitly integrate the structural information of tissues. We propose a self-supervised approach that incorporates semantic guidance into a GAN-based stain normalization framework and preserves detailed structural information. Our method does not require manual segmentation maps, which is a significant advantage over existing methods. We integrate semantic information at different layers between a pre-trained semantic network and the stain color normalization network. The proposed scheme outperforms other color normalization methods, leading to better classification and segmentation performance
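One way to picture the semantic guidance (an illustrative sketch, not the paper's exact objective) is an extra consistency term in the generator loss: a frozen, pre-trained semantic network should produce similar feature maps for the input image and for its stain-normalised output.

    import torch
    import torch.nn.functional as F

    def generator_loss(generator, discriminator, semantic_net, source,
                       adv_weight=1.0, sem_weight=10.0):
        fake = generator(source)                       # stain-normalised output
        # Standard adversarial term: the output should look like the reference stain.
        logits = discriminator(fake)
        adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
        # Semantic guidance: L1 distance between feature maps of input and output,
        # computed with a frozen pre-trained network (no manual segmentation maps).
        with torch.no_grad():
            ref_feats = semantic_net(source)
        sem = F.l1_loss(semantic_net(fake), ref_feats)
        return adv_weight * adv + sem_weight * sem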
Evaluation of Colour Pre-processing on Patch-Based Classification of H&E-Stained Images
This paper compares the effects of colour pre-processing on the classification performance of H&E-stained images. Variations in tissue preparation procedures, acquisition systems, stain conditions and reagents are all sources of artifacts that can negatively affect computer-based classification. Pre-processing methods such as colour constancy, transfer and deconvolution have been proposed to compensate for these artifacts. In this paper we quantitatively compare the combined effect of six colour pre-processing procedures and 12 colour texture descriptors on patch-based classification of H&E-stained images. We found that colour pre-processing had negative effects on accuracy in most cases, particularly when used with colour descriptors. However, some pre-processing procedures proved beneficial when employed in conjunction with classic texture descriptors such as co-occurrence matrices, Gabor filters and Local Binary Patterns
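For reference, a minimal numpy sketch of one of the compared pre-processing steps, colour deconvolution via Beer-Lambert optical densities; the stain vectors below are the commonly quoted Ruifrok-Johnston values and should be treated as an assumption, not the paper's exact settings.

    import numpy as np

    # Rows: haematoxylin, eosin, residual stain vectors (commonly quoted values).
    STAINS = np.array([[0.65, 0.70, 0.29],
                       [0.07, 0.99, 0.11],
                       [0.27, 0.57, 0.78]])
    STAINS /= np.linalg.norm(STAINS, axis=1, keepdims=True)

    def colour_deconvolution(rgb):
        # rgb: (H, W, 3) image with values in 0..255.
        od = -np.log10(np.clip(rgb.astype(float), 1, 255) / 255.0)   # Beer-Lambert optical density
        conc = od.reshape(-1, 3) @ np.linalg.inv(STAINS)             # per-pixel stain concentrations
        return conc.reshape(rgb.shape)                               # channel 0 ≈ haematoxylin, 1 ≈ eosin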
Bayesian hierarchical clustering for studying cancer gene expression data with unknown statistics
Clustering analysis is an important tool in studying gene expression data. The Bayesian hierarchical clustering (BHC) algorithm can automatically infer the number of clusters and uses Bayesian model selection to improve clustering quality. In this paper, we present an extension of the BHC algorithm. Our Gaussian BHC (GBHC) algorithm represents data as a mixture of Gaussian distributions, using a normal-gamma distribution as the conjugate prior on the mean and precision of each Gaussian component. We tested GBHC on 11 cancer and 3 synthetic datasets. The results on the cancer datasets show that, in sample clustering, GBHC on average produces a clustering partition that is more concordant with the ground truth than those obtained from other commonly used algorithms. Furthermore, GBHC frequently infers a number of clusters close to the ground truth. In gene clustering, GBHC also produces a clustering partition that is more biologically plausible than several other state-of-the-art methods. This suggests GBHC as an alternative tool for studying gene expression data. The implementation of GBHC is available at https://sites.google.com/site/gaussianbhc
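The normal-gamma conjugacy that GBHC relies on gives a closed-form marginal likelihood for a candidate cluster; a 1-D sketch using the standard conjugate result is shown below (hyperparameter defaults are illustrative, not the paper's settings).

    import numpy as np
    from scipy.special import gammaln

    def log_marginal_likelihood(x, mu0=0.0, kappa0=1.0, alpha0=1.0, beta0=1.0):
        # Log marginal likelihood of 1-D data x under a Gaussian likelihood with a
        # normal-gamma prior on (mean, precision); this is the quantity a BHC-style
        # algorithm evaluates when scoring candidate merges.
        x = np.asarray(x, dtype=float)
        n, xbar = len(x), x.mean()
        kappa_n = kappa0 + n
        alpha_n = alpha0 + n / 2.0
        beta_n = (beta0 + 0.5 * ((x - xbar) ** 2).sum()
                  + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n))
        return (gammaln(alpha_n) - gammaln(alpha0)
                + alpha0 * np.log(beta0) - alpha_n * np.log(beta_n)
                + 0.5 * (np.log(kappa0) - np.log(kappa_n))
                - (n / 2.0) * np.log(2.0 * np.pi))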
Uncertainty driven pooling network for microvessel segmentation in routine histology images
Lymphovascular invasion (LVI) and tumor angiogenesis are correlated with metastasis, cancer recurrence and poor patient survival. In most cases, LVI quantification and angiogenic analysis are based on microvessel segmentation and density estimation in immunohistochemically (IHC) stained tissues. However, in routine H&E stained images, microvessels display a high level of heterogeneity in size, shape, morphology and texture, which makes microvessel segmentation a non-trivial task. Manual delineation of microvessels for biomarker analysis is labor-intensive, time-consuming, irreproducible and can suffer from subjectivity among pathologists. Moreover, it is often beneficial to account for the uncertainty of a prediction when making a diagnosis. To address these challenges, we propose a framework for microvessel segmentation in H&E stained histology images. The framework extends DeepLabV3+ with an improved dice coefficient based custom loss function and an uncertainty prediction mechanism. The proposed method uses an aligned Xception model, followed by atrous spatial pyramid pooling for feature extraction at multiple scales. This architecture counters the challenge of segmenting blood vessels of varying morphological appearance. To incorporate uncertainty, random transformations are introduced at test time, yielding a superior segmentation result and a simultaneous uncertainty map that highlights ambiguous regions. The method is evaluated using 1167 images of size 512×512 pixels, extracted from 13 whole-slide images (WSIs) of oral squamous cell carcinoma (OSCC) tissue at 20x magnification. The proposed network achieves state-of-the-art performance compared to current semantic segmentation deep neural networks (FCN-8, U-Net, SegNet and DeepLabV3+)
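A minimal sketch of the test-time augmentation idea, assuming a hypothetical model_predict function that returns a per-pixel foreground probability map: flipped copies of the input are predicted, the flips are undone, the mean becomes the segmentation and the per-pixel variance serves as the uncertainty map.

    import numpy as np

    def predict_with_uncertainty(model_predict, image):
        flips = [(False, False), (True, False), (False, True), (True, True)]
        preds = []
        for fh, fv in flips:
            x = image[::-1] if fv else image            # vertical flip
            x = x[:, ::-1] if fh else x                 # horizontal flip
            p = model_predict(x.copy())                 # (H, W) foreground probability
            p = p[:, ::-1] if fh else p                 # undo the flips on the prediction
            p = p[::-1] if fv else p
            preds.append(p)
        preds = np.stack(preds)
        return preds.mean(axis=0), preds.var(axis=0)    # segmentation, uncertainty map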