298 research outputs found

    Incorporating spatial information for microaneurysm detection in retinal images

    The presence of microaneurysms (MAs) in retinal images is a pathognomonic sign of diabetic retinopathy (DR), one of the leading causes of blindness in the working-age population worldwide. This paper introduces a novel algorithm that combines information from spatial views of the retina for the purpose of MA detection. Most published research has addressed the problem of detecting MAs from single retinal images; this work incorporates information from two spatial views during the detection process. The algorithm is evaluated on 160 images, containing 207 MAs, from 40 patients seen as part of a UK diabetic eye screening programme. Compared with detection from a single image, the proposed method improves the ROC score by 2%, demonstrating its potential.
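    A minimal sketch of how evidence from two registered spatial views could be combined during MA detection, assuming a per-pixel candidate score map for each view and a known view-to-view registration; the combination rule (averaging co-located scores) is an illustrative assumption, not the paper's algorithm.

```python
# Illustrative sketch: combine MA candidate scores from two registered views
# by averaging co-located evidence, so candidates supported by both views
# are favoured over spurious single-view responses.
import cv2
import numpy as np

def combine_views(score_view1, score_view2, affine_2_to_1):
    """score_view1/2: float32 MA probability maps; affine_2_to_1: 2x3 matrix
    mapping view-2 coordinates into view 1 (assumed known from registration)."""
    h, w = score_view1.shape
    score_2_in_1 = cv2.warpAffine(score_view2, affine_2_to_1, (w, h),
                                  flags=cv2.INTER_LINEAR)
    # Only average where view 2 actually overlaps view 1 after warping.
    overlap = cv2.warpAffine(np.ones_like(score_view2), affine_2_to_1, (w, h)) > 0
    combined = score_view1.copy()
    combined[overlap] = 0.5 * (score_view1[overlap] + score_2_in_1[overlap])
    return combined
```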

    Automated fundus image quality assessment and segmentation of optic disc using convolutional neural networks

    Automated fundus image analysis is used as a tool for the diagnosis of common retinal diseases. A good-quality fundus image leads to a better diagnosis, so discarding degraded fundus images at the time of screening provides an opportunity to retake adequate photographs, saving both time and resources. In this paper, we propose a novel fundus image quality assessment (IQA) model using a convolutional neural network (CNN), based on the quality of optic disc (OD) visibility. We localize the OD by transfer learning with the Inception-v3 model. Precise segmentation of the OD is done using the GrabCut algorithm. Contour operations are applied to the segmented OD to approximate it to the nearest circle and find its center and diameter. For training the model, we use publicly available fundus databases and a private hospital database. We attain excellent classification accuracy for fundus IQA on the DRIVE, CHASE-DB, and HRF databases. For OD segmentation, we evaluate our method on the DRINS-DB, DRISHTI-GS, and RIM-ONE v.3 databases and compare the results with existing state-of-the-art methods. Our proposed method outperforms existing methods for OD segmentation in terms of the Jaccard index and F-score metrics.
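    A minimal sketch of the OD refinement-and-measurement step described above, assuming a bounding box around the optic disc has already been produced by a fine-tuned Inception-v3 localizer (not shown); the GrabCut refinement and circle approximation use standard OpenCV calls, and the parameter choices are illustrative.

```python
# Sketch only: refine an optic-disc bounding box with GrabCut and
# approximate the segmented region by its minimum enclosing circle.
# `od_rect` = (x, y, w, h) is assumed to come from a CNN localizer.
import cv2
import numpy as np

def od_center_and_diameter(image_bgr, od_rect):
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)

    # GrabCut initialized from the rectangle predicted by the localizer.
    cv2.grabCut(image_bgr, mask, od_rect, bgd_model, fgd_model,
                5, cv2.GC_INIT_WITH_RECT)
    od_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                       255, 0).astype(np.uint8)

    # Largest foreground contour, approximated by the nearest circle to
    # obtain the OD centre and diameter.
    contours, _ = cv2.findContours(od_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), radius = cv2.minEnclosingCircle(largest)
    return (cx, cy), 2 * radius
```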

    A novel automated approach of multi-modality retinal image registration and fusion

    Biomedical image registration and fusion are usually scene dependent and require intensive computational effort. A novel automated approach to feature-based control point detection and area-based registration and fusion of retinal images has been designed and developed. The new algorithm, which is reliable and time-efficient, adapts automatically from frame to frame with few tunable threshold parameters. The reference and the to-be-registered images come from two different modalities, i.e. grayscale angiogram images and color fundus images. Studying the two together enhances the information in the fundus image by superimposing information contained in the angiogram. Through this thesis research, two new contributions have been made to the biomedical image registration and fusion area. The first contribution is automatic control point detection at global direction-change pixels using an adaptive exploratory algorithm; shape similarity criteria are employed to match the control points. The second contribution is a heuristic optimization algorithm that maximizes a Mutual-Pixel-Count (MPC) objective function. The initially selected control points are adjusted during the optimization at the sub-pixel level. A result equivalent to the global maximum is achieved by calculating MPC local maxima at an efficient computational cost. The iteration stops either when MPC reaches its maximum value or when the maximum allowable loop count is reached. To our knowledge, this is the first time the MPC concept has been introduced into the biomedical image fusion area as the criterion for fusion accuracy. The fused image is generated from the control point coordinates at the time the iteration stops. A comparative study of the presented automatic registration and fusion scheme against the Centerline Control Point Detection Algorithm, a Genetic Algorithm, the RMSE objective function, and other existing data fusion approaches has shown the advantage of the new approach in terms of accuracy, efficiency, and novelty.
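    A minimal sketch of what a Mutual-Pixel-Count-style objective could look like, under the assumption that MPC counts pixels that are simultaneously "on" in the registered angiogram and in the fundus reference; the thesis' exact definition, binarization step, and heuristic optimizer are not reproduced here.

```python
# Illustrative sketch (not the thesis implementation): an MPC-style objective
# that counts pixels lit in both the reference fundus mask and the angiogram
# mask after applying a candidate affine transform.
import cv2
import numpy as np

def mutual_pixel_count(fundus_mask, angio_mask, affine_2x3):
    """Both masks are binary uint8 images of the same size."""
    h, w = fundus_mask.shape
    warped = cv2.warpAffine(angio_mask, affine_2x3, (w, h),
                            flags=cv2.INTER_NEAREST)
    return int(np.count_nonzero((fundus_mask > 0) & (warped > 0)))

# The registration loop would perturb the control-point-derived transform at
# sub-pixel steps and keep the candidate with the largest MPC, stopping when
# MPC no longer improves or a maximum loop count is reached (as described above).
```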

    Retinal Image Quality Classification Using Neurobiological Models of the Human Visual System

    Retinal image quality assessment (IQA) algorithms use different hand-crafted features without considering the important role of the human visual system (HVS). We solve the IQA problem using the principles behind the working of the HVS. Unsupervised information from local saliency maps and supervised information from trained convolutional neural networks (CNNs) are combined to make a final decision on image quality. A novel algorithm is proposed that calculates saliency values for every image pixel at multiple scales to capture global and local image information. This extracts generalized image information in an unsupervised manner, while CNNs provide a principled approach to feature learning without the need to define hand-crafted features. The individual classification decisions are fused by weighting them according to their confidence scores. Experimental results on real datasets demonstrate the superior performance of our proposed algorithm over competing methods.
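    A minimal sketch of the confidence-weighted decision fusion described above, assuming each branch (saliency-based and CNN-based) outputs a class-probability vector whose maximum is taken as its confidence score; the weighting rule is an assumption for illustration, not the paper's exact scheme.

```python
# Illustrative sketch: fuse two per-image quality decisions by weighting
# each branch's class probabilities with its own confidence (max probability).
import numpy as np

def fuse_decisions(p_saliency, p_cnn):
    """p_saliency, p_cnn: class-probability vectors, e.g. [p_bad, p_good]."""
    p_saliency = np.asarray(p_saliency, dtype=float)
    p_cnn = np.asarray(p_cnn, dtype=float)
    w_sal, w_cnn = p_saliency.max(), p_cnn.max()   # confidence scores
    fused = (w_sal * p_saliency + w_cnn * p_cnn) / (w_sal + w_cnn)
    return int(np.argmax(fused)), fused

# Example: the saliency branch is unsure while the CNN branch is confident
# the image is gradable, so the fused decision follows the CNN.
label, probs = fuse_decisions([0.55, 0.45], [0.10, 0.90])
```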

    Deep Learning-based Approach for the Semantic Segmentation of Bright Retinal Damage

    Regular screening for the development of diabetic retinopathy is imperative for an early diagnosis and timely treatment, thus preventing further progression of the disease. Conventional screening techniques based on manual observation by qualified physicians can be very time consuming and prone to error. In this paper, a novel automated screening model based on deep learning for the semantic segmentation of exudates in color fundus images is proposed, implemented as an end-to-end convolutional neural network built upon the U-Net architecture. This encoder-decoder network combines a contracting path with a symmetrical expansive path to obtain precise localization while using context information. The proposed method was validated on the E-OPHTHA and DIARETDB1 public databases, achieving promising results compared to current state-of-the-art methods.
    This paper was supported by the European Union’s Horizon 2020 research and innovation programme under the Project GALAHAD [H2020-ICT2016-2017, 732613]. The work of Adrián Colomer has been supported by the Spanish Government under an FPI Grant [BES-2014-067889]. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
    Silva, C.; Colomer, A.; Naranjo Ornedo, V. (2018). Deep Learning-based Approach for the Semantic Segmentation of Bright Retinal Damage. In: Intelligent Data Engineering and Automated Learning – IDEAL 2018. Springer, pp. 164–173. https://doi.org/10.1007/978-3-030-03493-1_18
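    A compact sketch of a U-Net-style encoder-decoder of the kind the paper describes, written in PyTorch for illustration; the depth, channel widths, and single-channel exudate output are assumptions, not the authors' exact configuration.

```python
# Illustrative sketch (PyTorch): a small U-Net-style encoder-decoder with a
# contracting path, a symmetrical expansive path, and skip connections.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, out_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)  # per-pixel exudate logit

    def forward(self, x):
        e1 = self.enc1(x)                      # contracting path
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# Example: a batch of 256x256 RGB fundus crops -> per-pixel exudate logits.
logits = TinyUNet()(torch.randn(2, 3, 256, 256))  # shape (2, 1, 256, 256)
```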

    Bright Lesion Detection in Color Fundus Images Based on Texture Features

    In this paper, a computer-aided screening system for the detection of bright lesions (exudates) in color fundus images is proposed. The system first identifies regions suspicious for bright lesions. A texture feature extraction method is then demonstrated to describe the characteristics of each region of interest. In the final stage, normal and abnormal images are classified using a support vector machine (SVM) classifier. The proposed system achieves effective detection performance compared with state-of-the-art methods.
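    A minimal sketch of a texture-feature-plus-SVM pipeline of the kind described above, assuming gray-level co-occurrence matrix (GLCM) statistics as the texture descriptor; the feature set, candidate-region extraction, and classifier settings are assumptions for illustration, not the paper's exact method.

```python
# Illustrative sketch: GLCM texture features for a candidate region, fed to
# an SVM that labels the image as normal or abnormal (exudates present).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(region_gray_u8):
    """region_gray_u8: 2-D uint8 patch around a candidate bright region."""
    glcm = graycomatrix(region_gray_u8, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Training on precomputed data (hypothetical names): training_patches is a
# list of uint8 patches, training_labels is 0 = normal, 1 = abnormal.
X = np.vstack([glcm_features(p) for p in training_patches])
clf = SVC(kernel="rbf").fit(X, training_labels)
```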

    Deep Learning in Cardiology

    The medical field is creating a large amount of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal, and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply to medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables