Longitudinal detection of radiological abnormalities with time-modulated LSTM
Convolutional neural networks (CNNs) have been successfully employed in
recent years for the detection of radiological abnormalities in medical images
such as plain x-rays. To date, most studies use CNNs on individual examinations
in isolation and discard previously available clinical information. In this
study we set out to explore whether Long-Short-Term-Memory networks (LSTMs) can
be used to improve classification performance when modelling the entire
sequence of radiographs that may be available for a given patient, including
their reports. A limitation of traditional LSTMs, though, is that they
implicitly assume equally-spaced observations, whereas the radiological exams
are event-based, and therefore irregularly sampled. Using both a simulated
dataset and a large-scale chest x-ray dataset, we demonstrate that a simple
modification of the LSTM architecture, which explicitly takes into account the
time lag between consecutive observations, can boost classification
performance. Our empirical results demonstrate improved detection of commonly
reported abnormalities on chest x-rays such as cardiomegaly, consolidation,
pleural effusion and hiatus hernia. Comment: Submitted to 4th MICCAI Workshop on Deep Learning in Medical Imaging
Analysis
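One simple way to make an LSTM step explicitly aware of the time lag between consecutive exams, in the spirit described above, is to decay the carried cell state by the elapsed time before applying the usual gate updates. The NumPy sketch below is an assumption about the form such a modification could take (including the decay constant `tau`), not the authors' exact architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def time_lstm_step(x, h_prev, c_prev, dt, params, tau=1.0):
    """One LSTM step with an exponential time-decay on the carried cell
    state: exp(-dt / tau) down-weights memory from observations that are
    far apart in time. dt is the lag between consecutive exams."""
    Wf, Wi, Wo, Wc, bf, bi, bo, bc = params
    c_prev = c_prev * np.exp(-dt / tau)   # discount old memory by elapsed time
    z = np.concatenate([x, h_prev])
    f = sigmoid(Wf @ z + bf)              # forget gate
    i = sigmoid(Wi @ z + bi)              # input gate
    o = sigmoid(Wo @ z + bo)              # output gate
    c = f * c_prev + i * np.tanh(Wc @ z + bc)
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
params = [rng.normal(size=(n_hid, n_in + n_hid)) for _ in range(4)] + \
         [np.zeros(n_hid) for _ in range(4)]
h0, c0 = np.zeros(n_hid), np.ones(n_hid)
# A long gap (dt=10) erases more memory than a short one (dt=0.1).
h_long, c_long = time_lstm_step(np.zeros(n_in), h0, c0, dt=10.0, params=params)
h_short, c_short = time_lstm_step(np.zeros(n_in), h0, c0, dt=0.1, params=params)
```

With zero inputs and biases the gates sit at 0.5, so the surviving cell state is driven entirely by the time-decay factor, which makes the effect of the lag easy to see.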
Zoom-in-Net: Deep Mining Lesions for Diabetic Retinopathy Detection
We propose a convolution neural network based algorithm for simultaneously
diagnosing diabetic retinopathy and highlighting suspicious regions. Our
contributions are twofold: 1) a network termed Zoom-in-Net which mimics the
zoom-in process of a clinician examining retinal images. Trained with only
image-level supervision, Zoom-in-Net can generate attention maps that
highlight suspicious regions, and predicts the disease level accurately based
on both the whole image and its high-resolution suspicious patches. 2) Only
four bounding boxes generated from the automatically learned attention maps are
enough to cover 80% of the lesions labeled by an experienced ophthalmologist,
which shows good localization ability of the attention maps. By clustering
features at high response locations on the attention maps, we discover
meaningful clusters which contain potential lesions in diabetic retinopathy.
Experiments show that our algorithm outperforms the state-of-the-art methods on
two datasets, EyePACS and Messidor. Comment: accepted by MICCAI 201
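The box-extraction step in contribution 2) can be illustrated by thresholding an attention map and taking the bounding boxes of the connected high-response regions, strongest first. The sketch below is a generic illustration; the threshold and `top_k` values are assumptions, not taken from the paper:

```python
import numpy as np
from scipy import ndimage

def boxes_from_attention(att, thresh=0.5, top_k=4):
    """Threshold an attention map at a fraction of its peak and return up
    to top_k bounding boxes (r0, c0, r1, c1) around the connected
    high-response regions, ordered by mean response."""
    mask = att >= thresh * att.max()
    labels, n = ndimage.label(mask)     # connected-component labelling
    boxes = []
    for lab in range(1, n + 1):
        rs, cs = np.where(labels == lab)
        score = att[rs, cs].mean()
        boxes.append((score, (int(rs.min()), int(cs.min()),
                              int(rs.max()), int(cs.max()))))
    boxes.sort(key=lambda t: -t[0])     # strongest region first
    return [b for _, b in boxes[:top_k]]

att = np.zeros((8, 8))
att[1:3, 1:3] = 1.0   # one strong response region
att[5:7, 5:8] = 0.8   # a second, weaker region
boxes = boxes_from_attention(att)
```

Each returned box tightly encloses one high-attention blob, which is the kind of candidate region the abstract reports covering 80% of expert-labeled lesions with only four boxes.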
Lesion Detection and Grading of Diabetic Retinopathy via Two-stage Deep Convolutional Neural Networks
We propose an automatic diabetic retinopathy (DR) analysis algorithm based on
a two-stage deep convolutional neural network (DCNN). Compared to existing
DCNN-based DR detection methods, the proposed algorithm has the following
advantages: (1) Our method can point out the location and type of lesions in
the fundus images, as well as give the severity grade of DR. Moreover, since
retinal lesions and DR severity appear at different scales in fundus images,
the integration of local and global networks learns more complete and
specific features for DR analysis. (2) By introducing an imbalanced weighting
map, more attention is given to lesion patches for DR grading, which
significantly improves the performance of the proposed algorithm. In this
study, we label 12,206 lesion patches and re-annotate the DR grades of 23,595
fundus images from the Kaggle competition dataset under the guidance of
clinical ophthalmologists. The experimental results show that our local
lesion detection net achieves performance comparable to trained human
observers, and the proposed imbalanced weighting scheme is also shown to
significantly improve the capability of our DCNN-based DR grading algorithm.
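The imbalanced weighting idea in (2) can be sketched as a weighted cross-entropy in which lesion patches receive a larger weight than background patches. The weight value and loss form below are assumed placeholders for illustration, not the paper's settings:

```python
import numpy as np

def weighted_patch_loss(probs, labels, lesion_mask, lesion_weight=5.0):
    """Binary cross-entropy over patches where patches flagged as lesions
    get a larger weight, so grading errors on lesion areas cost more."""
    eps = 1e-7
    w = np.where(lesion_mask, lesion_weight, 1.0)
    ce = -(labels * np.log(probs + eps)
           + (1 - labels) * np.log(1 - probs + eps))
    return float((w * ce).sum() / w.sum())

# Two patches: patch 0 is a lesion (label 1), patch 1 is background (label 0).
# Same-sized prediction error on the lesion patch yields a larger loss.
loss_lesion_err = weighted_patch_loss(np.array([0.2, 0.01]),
                                      np.array([1.0, 0.0]),
                                      np.array([True, False]))
loss_bg_err = weighted_patch_loss(np.array([0.99, 0.8]),
                                  np.array([1.0, 0.0]),
                                  np.array([True, False]))
```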
Combining Fine- and Coarse-Grained Classifiers for Diabetic Retinopathy Detection
Visual artefacts of early diabetic retinopathy in retinal fundus images are
usually small, inconspicuous, and scattered all over the retina. Detecting
diabetic retinopathy requires physicians to look at the whole image and fixate
on specific regions to locate potential biomarkers of the disease.
Therefore, taking inspiration from ophthalmologists, we propose to combine
coarse-grained classifiers that detect discriminating features from the whole
images, with a recent breed of fine-grained classifiers that discover and pay
particular attention to pathologically significant regions. To evaluate the
performance of this proposed ensemble, we used publicly available EyePACS and
Messidor datasets. Extensive experimentation for binary, ternary and quaternary
classification shows that this ensemble largely outperforms individual image
classifiers as well as most of the published works in most training setups for
diabetic retinopathy detection. Furthermore, the performance of fine-grained
classifiers is found to be notably superior to that of coarse-grained image
classifiers, encouraging the development of task-oriented fine-grained
classifiers modelled after specialist ophthalmologists. Comment: 12 pages, figures
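A minimal form of such a coarse/fine ensemble is a weighted average of the two classifiers' class probabilities; the blending weight below is an illustrative assumption, not the paper's configuration:

```python
import numpy as np

def ensemble_predict(coarse_probs, fine_probs, fine_weight=0.6):
    """Blend class probabilities from a coarse whole-image classifier and a
    fine-grained region-attentive classifier. The fine model gets the
    larger weight here, since the abstract reports it as the stronger
    member."""
    p = (1 - fine_weight) * coarse_probs + fine_weight * fine_probs
    return p.argmax(axis=-1), p

coarse = np.array([[0.6, 0.4]])   # whole-image classifier leans healthy
fine = np.array([[0.3, 0.7]])     # region-attentive classifier leans DR
cls, p = ensemble_predict(coarse, fine)
```

Because the fine-grained member dominates the blend, a confident region-level signal can override an ambiguous whole-image score.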
Orientation-dependent solid solution strengthening in zirconium: a nanoindentation study
Orientation-dependent solid solution strengthening was explored through a combined microtexture plus nanoindentation study. Pure zirconium (6N purity crystal-bar Zr) and commercial Zircaloy-2 were investigated for comparison. Local mechanical properties were estimated through finite element (FE) simulations of the unloading part of the nanoindentation load–displacement response. Combinations of ‘averaging’ scheme and constitutive relationship were used to resolve uncertainty of FE-extracted mechanical properties. Comparing the two grades, non-basal oriented grains showed an overall hardening and increase in elastic modulus. In contrast, insignificant change was observed for basal (or near-basal) oriented grains. The strengthening of non-basal orientations appeared via elimination of the lowest hardness/stiffness values without a shift in the peak value. Such asymmetric development brought out the clear picture of orientation-dependent solid solution strengthening in zirconium
Synthesis of Positron Emission Tomography (PET) Images via Multi-channel Generative Adversarial Networks (GANs)
Positron emission tomography (PET) image synthesis plays an important role in
boosting the training data available to computer-aided diagnosis systems.
However, existing image synthesis methods struggle to synthesize
low-resolution PET images. To address these limitations, we propose a
multi-channel generative adversarial network (M-GAN) based PET image
synthesis method. Unlike existing methods, which rely on low-level features,
the proposed M-GAN can represent features at a high semantic level through
the adversarial learning concept. In addition, M-GAN can take input from the
annotation (label) to synthesize the high-uptake regions, e.g., tumors, and
from the computed tomography (CT) images to constrain the appearance
consistency, outputting the synthetic PET images directly. Our results on 50
lung cancer PET-CT studies indicate that our method produces images much
closer to the real PET images than the existing methods. Comment: 9 pages, 2 figures
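The multi-channel conditioning described above, with the annotation and the CT image feeding the generator together, can be sketched as simple channel stacking. The helper below is hypothetical, not the authors' code:

```python
import numpy as np

def make_generator_input(label_map, ct_image):
    """Stack the lesion annotation and the CT slice as separate input
    channels for the generator, so the synthesized PET image can place
    high-uptake regions where the labels indicate while the CT constrains
    the anatomy."""
    assert label_map.shape == ct_image.shape
    return np.stack([label_map, ct_image], axis=0)  # (channels, H, W)

label_map = np.zeros((4, 4))
label_map[1, 1] = 1.0          # a marked high-uptake location
ct = np.ones((4, 4))           # placeholder CT slice
x = make_generator_input(label_map, ct)
```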
An Explainable AI-Based Computer Aided Detection System for Diabetic Retinopathy Using Retinal Fundus Images
Diabetic patients have a high risk of developing diabetic retinopathy (DR), which is one of the major causes of blindness. With early detection and the right treatment, patients may be spared from losing their vision. We propose a computer-aided detection system, which uses retinal fundus images as input and detects all types of lesions that define diabetic retinopathy. The aim of our system is to assist eye specialists by automatically detecting healthy retinas and referring the images of the unhealthy ones. For the latter cases, the system offers an interactive tool where the doctor can examine the local lesions that our system marks as suspicious. The final decision remains in the hands of the ophthalmologists. Our approach consists of a multi-class detector that is able to locate and recognize all candidate DR-defining lesions. If the system detects at least one lesion, then the image is marked as unhealthy. The lesion detector is built on the Faster R-CNN ResNet 101 architecture, which we train by transfer learning. We evaluate our approach on three benchmark data sets, namely Messidor-2, IDRiD, and E-Ophtha, by measuring the sensitivity (SE) and specificity (SP) based on the binary classification of healthy and unhealthy images. The results that we obtain for Messidor-2 and IDRiD are (SE: 0.965, SP: 0.843) and (SE: 0.83, SP: 0.94), respectively. For the E-Ophtha data set we follow the literature and perform two experiments, one where we detect only microaneurysms (SE: 0.939, SP: 0.82) and the other where we detect only exudates (SE: 0.851, SP: 0.971). Besides the high effectiveness that we achieve, the other important contribution of our work is the interactive tool, which we offer to the medical experts, highlighting all suspicious lesions detected by the proposed system.
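The image-level decision rule described above (an image is unhealthy if at least one lesion is detected) maps directly to the sensitivity/specificity evaluation. A small sketch with made-up counts, not the paper's data:

```python
def sensitivity_specificity(lesion_counts, is_unhealthy):
    """Flag an image unhealthy when the detector finds at least one
    lesion, then compute sensitivity and specificity against the
    reference labels."""
    pred = [n >= 1 for n in lesion_counts]
    tp = sum(p and y for p, y in zip(pred, is_unhealthy))
    tn = sum((not p) and (not y) for p, y in zip(pred, is_unhealthy))
    fp = sum(p and (not y) for p, y in zip(pred, is_unhealthy))
    fn = sum((not p) and y for p, y in zip(pred, is_unhealthy))
    return tp / (tp + fn), tn / (tn + fp)

# Four images: detected lesion counts and reference healthy/unhealthy labels.
se, sp = sensitivity_specificity([2, 0, 1, 0], [True, False, True, True])
```

The third unhealthy image with zero detections becomes the single false negative, which is what pulls sensitivity below 1.0 here.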
Hybrid Deep Learning Gaussian Process for Diabetic Retinopathy Diagnosis and Uncertainty Quantification
Diabetic Retinopathy (DR) is one of the microvascular complications of
Diabetes Mellitus and remains one of the leading causes of blindness
worldwide. Computational models based on Convolutional Neural Networks
represent the state of the art for the automatic detection of DR using eye
fundus images. Most of the current work addresses this problem as a binary
classification task. However, including the grade estimation and quantification
of predictions uncertainty can potentially increase the robustness of the
model. In this paper, a hybrid Deep Learning-Gaussian process method for DR
diagnosis and uncertainty quantification is presented. This method combines
the representational power of deep learning with the ability of Gaussian
process models to generalize from small datasets. The results show that uncertainty
quantification in the predictions improves the interpretability of the method
as a diagnostic support tool. The source code to replicate the experiments is
publicly available at https://github.com/stoledoc/DLGP-DR-Diagnosis
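The uncertainty quantification supplied by the Gaussian process component can be illustrated with plain GP regression on feature vectors: the predictive variance grows for inputs far from the training data. The NumPy sketch below shows the general mechanism only; it is not the repository's implementation:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between rows of a and b."""
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / ls ** 2)

def gp_predict(X, y, Xs, noise=1e-2):
    """GP regression on (deep) feature vectors: returns predictive mean
    and variance; the variance is the per-input uncertainty estimate."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(Xs, X), rbf(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - (v ** 2).sum(0)
    return mean, var

X = np.array([[0.0], [1.0], [2.0]])   # toy 1-D "features"
y = np.array([0.0, 1.0, 2.0])         # toy targets (e.g., DR grades)
Xs = np.array([[1.0], [5.0]])         # one seen input, one far-away input
mean, var = gp_predict(X, y, Xs)
# variance is small at the training input and large far from the data
```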
Improving Lesion Segmentation for Diabetic Retinopathy using Adversarial Learning
Diabetic Retinopathy (DR) is a leading cause of blindness in working age
adults. DR lesions can be challenging to identify in fundus images, and
automatic DR detection systems can offer strong clinical value. Of the publicly
available labeled datasets for DR, the Indian Diabetic Retinopathy Image
Dataset (IDRiD) presents retinal fundus images with pixel-level annotations of
four distinct lesions: microaneurysms, hemorrhages, soft exudates and hard
exudates. We utilize the HEDNet edge detector to solve a semantic segmentation
task on this dataset, and then propose an end-to-end system for pixel-level
segmentation of DR lesions by incorporating HEDNet into a Conditional
Generative Adversarial Network (cGAN). We design a loss function that adds
adversarial loss to segmentation loss. Our experiments show that the addition
of the adversarial loss improves the lesion segmentation performance over the
baseline. Comment: Accepted to International Conference on Image Analysis and
Recognition, ICIAR 2019. Published at
https://doi.org/10.1007/978-3-030-27272-2_29 Code:
https://github.com/zoujx96/DR-segmentatio
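The loss design described above, a segmentation loss plus a weighted adversarial term, can be sketched as follows. The binary cross-entropy form and the weight `lam` are assumptions for illustration, not the paper's exact loss:

```python
import numpy as np

def combined_loss(pred_mask, true_mask, disc_score_on_pred, lam=0.1):
    """Total loss for the segmenter: pixel-wise binary cross-entropy plus
    a weighted adversarial term that rewards predicted masks the
    discriminator scores as realistic (score in (0, 1))."""
    eps = 1e-7
    seg = -np.mean(true_mask * np.log(pred_mask + eps)
                   + (1 - true_mask) * np.log(1 - pred_mask + eps))
    adv = -np.log(disc_score_on_pred + eps)   # low when discriminator is fooled
    return seg + lam * adv

pred = np.array([0.9, 0.1])
true = np.array([1.0, 0.0])
# Same segmentation quality; only the discriminator's verdict differs.
loss_fooled = combined_loss(pred, true, disc_score_on_pred=0.9)
loss_caught = combined_loss(pred, true, disc_score_on_pred=0.1)
```

Holding the segmentation term fixed makes the role of the adversarial term visible: masks the discriminator finds unrealistic incur a strictly higher total loss.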
Deep Learning-based Approach for the Semantic Segmentation of Bright Retinal Damage
Regular screening for the development of diabetic retinopathy is imperative for an early diagnosis and timely treatment, thus preventing further progression of the disease. Conventional screening techniques based on manual observation by qualified physicians can be very time-consuming and prone to error. In this paper, a novel automated screening model based on deep learning for the semantic segmentation of exudates in color fundus images is proposed, implemented as an end-to-end convolutional neural network built upon the U-Net architecture. This encoder-decoder network is characterized by the combination of a contracting path and a symmetrical expansive path to obtain precise localization with the use of context information. The proposed method was validated on the E-OPHTHA and DIARETDB1 public databases, achieving promising results compared to current state-of-the-art methods.
Silva, C.; Colomer, A.; Naranjo Ornedo, V. (2018). Deep Learning-based Approach for the Semantic Segmentation of Bright Retinal Damage. In Intelligent Data Engineering and Automated Learning – IDEAL 2018, Springer, pp. 164–173. https://doi.org/10.1007/978-3-030-03493-1_18
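The contracting/expansive combination in the U-Net-style network relies on skip connections that merge upsampled decoder features with same-resolution encoder features. A minimal sketch of one such connection follows; nearest-neighbour upsampling is an assumed choice for illustration:

```python
import numpy as np

def up_and_concat(decoder_feat, encoder_feat):
    """One U-Net-style skip connection: upsample the decoder feature map
    2x (nearest neighbour) and concatenate the same-resolution encoder
    features along the channel axis, combining coarse context with the
    precise localization kept in the encoder."""
    up = decoder_feat.repeat(2, axis=1).repeat(2, axis=2)
    assert up.shape[1:] == encoder_feat.shape[1:]
    return np.concatenate([encoder_feat, up], axis=0)

dec = np.ones((8, 4, 4))    # low-resolution decoder features (C, H, W)
enc = np.zeros((4, 8, 8))   # matching-resolution encoder features
out = up_and_concat(dec, enc)
```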