291 research outputs found

    Noise-Tolerant Deep Learning for Histopathological Image Segmentation

    Get PDF
    Developing an effective algorithm based on handcrafted features from histological images (histo-images) is difficult due to the complexity of histo-images. Deep network models have achieved promising performance, as they are capable of capturing high-level features. However, a major hurdle hindering the application of deep learning to histo-image segmentation is obtaining large ground-truth datasets for training. Taking the segmentations produced by simple off-the-shelf algorithms as training data is a new way to address this hurdle. The output of the off-the-shelf segmentations is considered noisy data, which requires a new learning scheme for deep-learning segmentation. Existing work on noisy-label deep learning is largely for image classification. In this thesis, we study whether and how integrating imperfect or noisy “ground truth” from off-the-shelf segmentation algorithms may help achieve better performance, so that deep learning can be applied to histo-image segmentation with manageable effort. Two noise-tolerant deep learning architectures are proposed in this thesis. One is based on the Noisy at Random (NAR) model, and the other on the Noisy Not at Random (NNAR) model. The key difference between the two is that the NNAR-based architecture assumes the label noise depends on features of the image. Unlike most existing work, we study how to integrate multiple types of noisy data into one model. The proposed method has broad application when segmentations from multiple off-the-shelf algorithms are available. The implementation of the NNAR-based architecture demonstrates its effectiveness and superiority over off-the-shelf and other existing deep-learning-based image segmentation algorithms.
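    The abstract does not spell out the two noise models, but the distinction it draws can be illustrated with a minimal sketch (all names, dimensions, and weights below are hypothetical): under NAR the label-corruption process is a single class-conditional transition matrix, while under NNAR the transition matrix is predicted from the image features themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

# NAR: one fixed class-conditional transition matrix T, where T[i, j] is
# p(observed label = j | true label = i), independent of the image.
T_nar = np.array([[0.9, 0.1],
                  [0.2, 0.8]])

def nar_noisy_posterior(clean_probs):
    # Fold the clean-label posterior through the fixed transition matrix.
    return clean_probs @ T_nar

# NNAR: the transition matrix depends on the pixel's features, here via a
# hypothetical linear map from a 4-d feature vector to 2x2 logits.
W = rng.normal(size=(4, 4)) * 0.1

def nnar_noisy_posterior(clean_probs, features):
    logits = (features @ W).reshape(2, 2)
    T_x = np.exp(logits)
    T_x /= T_x.sum(axis=1, keepdims=True)  # each row is a valid distribution
    return clean_probs @ T_x
```

    In both cases the network is trained against the noisy labels through the transition matrix; only the NNAR variant lets the assumed corruption vary with image content.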


    Asymmetric Co-Training with Explainable Cell Graph Ensembling for Histopathological Image Classification

    Full text link
    Convolutional neural networks (CNNs) excel at histopathological image classification, yet their pixel-level focus hampers explainability. Conversely, emerging graph convolutional networks (GCNs) spotlight cell-level features and their medical implications. However, limited by their shallowness and suboptimal use of high-dimensional pixel data, GCNs underperform in multi-class histopathological image classification. To make full use of pixel-level and cell-level features dynamically, we propose an asymmetric co-training framework combining a deep graph convolutional network and a convolutional neural network for multi-class histopathological image classification. To improve the explainability of the entire framework by embedding the morphological and topological distribution of cells, we build a 14-layer deep graph convolutional network to handle cell-graph data. To further exploit the dynamic interactions between pixel-level and cell-level information, we also design a co-training strategy to integrate the two asymmetric branches. Notably, we collect a private clinically acquired dataset termed LUAD7C, including seven subtypes of lung adenocarcinoma, which is rare and more challenging. We evaluated our approach on the private LUAD7C and public colorectal cancer datasets, showcasing its superior performance, explainability, and generalizability in multi-class histopathological image classification.
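    The paper's exact training objective is not given in the abstract; a common way to realize such a co-training strategy, shown here as a hedged sketch (the `lam` weighting and all function names are hypothetical), is per-branch cross-entropy plus a symmetric consistency term that lets the CNN and GCN branches teach each other.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL divergence between rows of two probability matrices.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def co_training_loss(cnn_logits, gcn_logits, labels, lam=0.1):
    """Cross-entropy on each branch plus a symmetric consistency term so the
    two asymmetric branches co-train; lam is a hypothetical weighting."""
    p_cnn, p_gcn = softmax(cnn_logits), softmax(gcn_logits)
    idx = np.arange(len(labels))
    ce = -np.log(p_cnn[idx, labels] + 1e-12) - np.log(p_gcn[idx, labels] + 1e-12)
    consistency = 0.5 * (kl(p_cnn, p_gcn) + kl(p_gcn, p_cnn))
    return float(np.mean(ce + lam * consistency))
```

    When the two branches agree, the consistency term vanishes and only the supervised losses remain; disagreement is penalized in both directions.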

    Deep Learning Techniques for Multi-Dimensional Medical Image Analysis

    Get PDF


    Deep Learning-based Approach for the Semantic Segmentation of Bright Retinal Damage

    Full text link
    Regular screening for the development of diabetic retinopathy is imperative for early diagnosis and timely treatment, thus preventing further progression of the disease. Conventional screening techniques based on manual observation by qualified physicians can be very time-consuming and prone to error. In this paper, a novel automated screening model based on deep learning for the semantic segmentation of exudates in color fundus images is proposed, implemented as an end-to-end convolutional neural network built upon the U-Net architecture. This encoder-decoder network combines a contracting path and a symmetrical expansive path to obtain precise localization with the use of context information. The proposed method was validated on the E-OPHTHA and DIARETDB1 public databases, achieving promising results compared to current state-of-the-art methods.
    Silva, C.; Colomer, A.; Naranjo Ornedo, V. (2018). Deep Learning-based Approach for the Semantic Segmentation of Bright Retinal Damage. In Intelligent Data Engineering and Automated Learning – IDEAL 2018. Springer, 164–173. https://doi.org/10.1007/978-3-030-03493-1_18
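    The contracting/expansive symmetry the abstract describes can be made concrete with a small shape-bookkeeping sketch (the depth of 4 and base width of 64 channels are the usual U-Net defaults, assumed here): each decoder stage upsamples back to the spatial size of a matching encoder stage, which is why the saved encoder feature map can be concatenated as a skip connection.

```python
def unet_shapes(h, w, c0=64, depth=4):
    """Track (height, width, channels) through a U-Net-style encoder-decoder.
    The contracting path halves spatial size and doubles channels; the
    expansive path mirrors it, reusing each encoder map as a skip connection."""
    enc = []
    ch = c0
    for _ in range(depth):
        enc.append((h, w, ch))          # feature map kept for the skip
        h, w, ch = h // 2, w // 2, ch * 2
    bottleneck = (h, w, ch)
    dec = []
    for skip_h, skip_w, skip_ch in reversed(enc):
        h, w, ch = skip_h, skip_w, skip_ch
        # after concatenating the upsampled map with the skip, the stage
        # sees 2*ch input channels and convolves back down to ch
        dec.append((h, w, ch))
    return enc, bottleneck, dec
```

    For a 256x256 input this gives a 16x16x1024 bottleneck and a final decoder map back at 256x256x64, to which a 1x1 convolution assigns per-pixel class scores.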

    Encrypted federated learning for secure decentralized collaboration in cancer image analysis.

    Get PDF
    Artificial intelligence (AI) has a multitude of applications in cancer research and oncology. However, the training of AI systems is impeded by the limited availability of large datasets due to data protection requirements and other regulatory obstacles. Federated and swarm learning represent possible solutions to this problem by collaboratively training AI models while avoiding data transfer. However, in these decentralized methods, weight updates are still transferred to the aggregation server for merging the models. This leaves open the possibility of a breach of data privacy, for example through model-inversion or membership-inference attacks by untrusted servers. Somewhat-homomorphically-encrypted federated learning (SHEFL) is a solution to this problem because only encrypted weights are transferred, and model updates are performed in the encrypted space. Here, we demonstrate the first successful implementation of SHEFL in a range of clinically relevant tasks in cancer image analysis on multicentric datasets in radiology and histopathology. We show that SHEFL enables the training of AI models which outperform locally trained models and perform on par with centrally trained models. In the future, SHEFL can enable multiple institutions to co-train AI models without forsaking data governance and without ever transmitting any decryptable data to untrusted servers.
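    Real SHEFL relies on a somewhat-homomorphic encryption scheme, which is beyond a short sketch; the toy below is a deliberately simplified additive-masking stand-in (not homomorphic encryption, and cryptographically insecure on its own) that only illustrates the key property the abstract relies on: the server can compute the aggregate of the clients' quantized weight updates without ever seeing any individual update in the clear. The modulus and all names are hypothetical.

```python
import random

MOD = 2**32  # hypothetical quantization modulus for integer-encoded updates

def aggregate_without_seeing_updates(updates, seed=0):
    """Each client adds a random mask to its update; the masks are chosen to
    cancel in the sum, so summing the 'ciphertexts' recovers only the
    aggregate, never an individual client's contribution."""
    rng = random.Random(seed)
    n = len(updates)
    masks = [rng.randrange(MOD) for _ in range(n - 1)]
    masks.append((-sum(masks)) % MOD)              # masks sum to 0 mod MOD
    ciphertexts = [(u + m) % MOD for u, m in zip(updates, masks)]
    return sum(ciphertexts) % MOD                  # == sum(updates) mod MOD
```

    In SHEFL proper, the masking is replaced by public-key somewhat-homomorphic encryption, so the server adds ciphertexts directly and only the clients can decrypt the merged model.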

    Segmentation-aware Image Denoising Without Knowing True Segmentation

    Get PDF
    Recent works have discussed application-driven image restoration neural networks capable of not only removing noise from images but also preserving their semantic-aware details, making them suitable as a pre-processing step for various high-level computer vision tasks. However, such approaches require extra annotations for their high-level vision tasks in order to train the joint pipeline using hybrid losses, yet the availability of those annotations is often limited to a few image sets, thereby restricting the general applicability of these methods to simply denoising more unseen and unannotated images. Motivated by this, we propose a segmentation-aware image denoising model dubbed U-SAID, based on a novel unsupervised approach with a pixel-wise uncertainty loss. U-SAID does not require any ground-truth segmentation map, and thus can be applied to any image dataset. It is capable of generating denoised images with comparable or even better quality than that of its supervised counterpart and even more general “application-agnostic” denoisers, and its denoised results show stronger robustness for subsequent semantic segmentation tasks. Moreover, plugging in its “universal” denoiser without fine-tuning, we demonstrate the superior generalizability of U-SAID in three ways: (1) denoising unseen types of images; (2) denoising as preprocessing for segmenting unseen noisy images; and (3) denoising for unseen high-level tasks. Extensive experiments were conducted to assess the effectiveness and robustness of the proposed U-SAID model against various popular image sets.
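    The abstract does not define its pixel-wise uncertainty loss precisely; one plausible reading, sketched here with hypothetical names and a hypothetical weighting `lam`, is to combine a standard restoration loss with the mean per-pixel entropy of a segmentation head's softmax output on the denoised image, so the denoiser is rewarded for producing images on which segmentation is confident, without needing any ground-truth segmentation map.

```python
import numpy as np

def pixel_uncertainty_loss(seg_probs, eps=1e-12):
    """Mean per-pixel entropy of segmentation softmax outputs (H x W x K).
    Lower entropy means the segmentation head is more confident on the
    denoised image; no ground-truth segmentation map is needed."""
    return float(np.mean(-np.sum(seg_probs * np.log(seg_probs + eps), axis=-1)))

def usaid_style_loss(denoised, clean, seg_probs, lam=0.1):
    # Restoration fidelity plus the unsupervised uncertainty penalty.
    mse = float(np.mean((denoised - clean) ** 2))
    return mse + lam * pixel_uncertainty_loss(seg_probs)
```

    Because the uncertainty term needs only the segmentation head's own outputs, such a loss can in principle be evaluated on any unannotated image set, which matches the generalizability claims above.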