11 research outputs found

    Classification of neovascularization using convolutional neural network model

    Neovascularization is the growth of new blood vessels in the retina, distinct from the normal arteries and veins. It can appear on the optic disc or anywhere on the surface of the retina. A retina showing neovascularization is categorized as Proliferative Diabetic Retinopathy (PDR), the severe stage of Diabetic Retinopathy (DR). An image classification system that distinguishes normal retinas from neovascularization is presented here. Classification uses a Convolutional Neural Network (CNN) model combined with classification methods such as Support Vector Machine, k-Nearest Neighbor, Naïve Bayes, Discriminant Analysis, and Decision Tree. To date, there are no public data patches of neovascularization available for classification. The data consist of normal patches, New Vessels on the Disc (NVD), and New Vessels Elsewhere (NVE). Images are taken from two databases, MESSIDOR and the Retina Image Bank. The patches are manually cropped from image regions marked by experts as neovascularization, giving a dataset of 100 patches. Tests using three scenarios achieved a classification accuracy of 90%-100% with a linear loss cross-validation of 0%-26.67%. The tests were performed on a single Graphical Processing Unit (GPU).
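As a sketch of the patch-classification idea described above (features from image patches scored by a classical classifier such as k-Nearest Neighbor), here is a minimal k-NN majority vote in pure Python; the 2-D feature vectors and labels are toy stand-ins, not the paper's data:

```python
from collections import Counter
import math

def knn_classify(train, labels, query, k=3):
    """Label a query feature vector by majority vote among its k nearest
    training vectors (Euclidean distance)."""
    dists = sorted(
        (math.dist(vec, query), lab) for vec, lab in zip(train, labels)
    )
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

# Toy stand-ins for feature vectors extracted from image patches.
train = [(0.1, 0.2), (0.0, 0.3), (0.9, 0.8), (1.0, 0.7)]
labels = ["normal", "normal", "neovascularization", "neovascularization"]
print(knn_classify(train, labels, (0.95, 0.75)))  # prints "neovascularization"
```

In the paper's setting the feature vectors would come from the CNN rather than being hand-picked 2-D points; only the voting mechanics are shown here.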

    Performance Evaluation of Color Spaces for Diabetic Retinopathy Classification Using a Convolutional Neural Network

    The increasing number of people with diabetes is one of the factors behind the high incidence of diabetic retinopathy. One of the images used by ophthalmologists to identify diabetic retinopathy is the retinal photograph. In this research, diabetic retinopathy is recognized automatically using retinal fundus images and the Convolutional Neural Network (CNN) algorithm, a variant of Deep Learning. An obstacle in the recognition process is that the color of the retina tends to be yellowish red, so the RGB color space does not produce optimal accuracy. Therefore, various color spaces were tested to obtain better results. In trials using 1000 images, the RGB, HSI, YUV, and L*a*b* color spaces all gave suboptimal results on balanced data, where the best accuracy was still below 50%. On unbalanced data, however, accuracy was fairly high: 83.53% on the training data in the YUV color space and 74.40% on the test data in all color spaces.
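The color-space comparison above hinges on simple per-pixel transforms. For instance, an RGB-to-YUV conversion (standard BT.601 analog form; the exact constants used in the study are not stated) can be sketched as:

```python
def rgb_to_yuv(r, g, b):
    """Convert one 8-bit RGB pixel to YUV (BT.601 analog form)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma
    u = 0.492 * (b - y)                    # chrominance: blue minus luma
    v = 0.877 * (r - y)                    # chrominance: red minus luma
    return y, u, v

# A yellowish-red fundus pixel: luma is dominated by the red and green channels.
y, u, v = rgb_to_yuv(200, 120, 40)
```

Separating luma from chrominance in this way is one plausible reason YUV coped better with the reddish cast of fundus images than raw RGB.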

    Detection of Neovascularization Based on Fractal and Texture Analysis with Interaction Effects in Diabetic Retinopathy

    Diabetic retinopathy is a major cause of blindness. Proliferative diabetic retinopathy is a result of severe vascular complications and is visible as neovascularization of the retina. Automatic detection of such new vessels would be useful for severity grading of diabetic retinopathy, and it is an important part of the screening process to identify those who may require immediate treatment. We propose a novel new-vessel detection method combining statistical texture analysis (STA), high order spectrum analysis (HOS), and fractal analysis (FA); most importantly, we show that incorporating their associated interactions greatly improves the accuracy of new-vessel detection. To assess performance, sensitivity, specificity, and accuracy (with area under the curve, AUC) were obtained: 96.3%, 99.1%, and 98.5% (AUC 99.3%), respectively. The proposed method improves the accuracy of new-vessel detection significantly over previous methods. The algorithm can be automated and is valuable for detecting relatively severe cases of diabetic retinopathy among diabetes patients.
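Of the three feature families above, fractal analysis is the easiest to sketch: a box-counting estimate counts occupied boxes at two scales and takes the log-ratio. The implementation below is a generic illustration, not the paper's FA method:

```python
import math

def box_count(points, size):
    """Count boxes of side `size` that contain at least one point."""
    return len({(x // size, y // size) for x, y in points})

def box_dimension(points, s1=1, s2=2):
    """Two-scale box-counting estimate of fractal dimension:
    D = log(N(s1) / N(s2)) / log(s2 / s1)."""
    n1, n2 = box_count(points, s1), box_count(points, s2)
    return math.log(n1 / n2) / math.log(s2 / s1)

# Sanity check: a filled 64x64 square is two-dimensional.
square = [(x, y) for x in range(64) for y in range(64)]
print(box_dimension(square))  # 2.0
```

In practice many scales are used with a least-squares fit of log N(s) against log(1/s); tortuous new vessels tend to raise the estimated dimension relative to normal vasculature, which is what makes FA a useful feature.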

    The automated detection of proliferative diabetic retinopathy using dual ensemble classification

    Objective: Diabetic retinopathy (DR) is a retinal vascular disease caused by complications of diabetes. Proliferative diabetic retinopathy (PDR) is the advanced stage of the disease and carries a high risk of severe visual impairment. This stage is characterized by the growth of abnormal new vessels. We aim to develop a method for the automated detection of new vessels in retinal images. Methods: The method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology, gradient, and intensity features are measured from each binary vessel map to produce two separate 21-D feature vectors. Independent classification is performed for each feature vector using an ensemble system of bagged decision trees, and the two independent outcomes are then combined to produce a final decision. Results: Sensitivity and specificity on a dataset of 60 images are 1.0000 and 0.9500 on a per-image basis. Conclusions: The described automated system is capable of detecting the presence of new vessels.
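The abstract does not spell out the rule used to combine the two branch outcomes, but the dual-classification idea can be illustrated with a simple sensitivity-favoring OR-fusion of two hypothetical branch scores:

```python
def dual_decision(score_a, score_b, thresh=0.5):
    """Flag an image as containing new vessels if either independent
    classification branch is confident. OR-fusion is an assumption here,
    not the paper's stated rule; it trades specificity for sensitivity."""
    return score_a >= thresh or score_b >= thresh

print(dual_decision(0.7, 0.2))  # True: branch A alone triggers detection
```

Other fusion rules (AND, averaging, a meta-classifier over both scores) are equally plausible readings of "combined to produce a final decision".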

    Genetic algorithm based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy

    Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is the growth of abnormal new vessels. In this paper, an automated method for the detection of new vessels in retinal images is presented. The method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology features are measured from each binary vessel map to produce two separate 4-D feature vectors. Independent classification is performed for each feature vector using a support vector machine (SVM) classifier, and the system combines these individual outcomes to produce a final decision. This is followed by the creation of additional features to generate 21-D feature vectors, which feed into a genetic algorithm based feature selection approach with the objective of finding feature subsets that improve classification performance. Sensitivity and specificity on a dataset of 60 images are 0.9138 and 0.9600, respectively, on a per-patch basis and 1.000 and 0.975, respectively, on a per-image basis.
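A genetic algorithm for feature selection evolves a population of binary masks (one bit per feature) under a fitness function tied to classifier performance. The sketch below substitutes a toy fitness that rewards two "informative" features for the real cross-validated classifier, so the set-up is illustrative only:

```python
import random

random.seed(0)

INFORMATIVE = {1, 3}  # indices of the "useful" features in this toy set-up

def fitness(mask):
    """Toy fitness: reward selecting informative features, penalize subset
    size (a stand-in for cross-validated classifier accuracy)."""
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & INFORMATIVE) - 0.1 * len(chosen)

def evolve(n_feats=6, pop_size=20, gens=30):
    pop = [[random.randint(0, 1) for _ in range(n_feats)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_feats)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:           # bit-flip mutation
                child[random.randrange(n_feats)] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()  # a 6-bit mask selecting a feature subset
```

The population sizes, rates, and operators here are arbitrary defaults; the paper's GA configuration is not given in the abstract.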

    Automated detection of proliferative diabetic retinopathy from retinal images

    Diabetic retinopathy (DR) is a retinal vascular disease associated with diabetes and one of the most common causes of blindness worldwide. Diabetic patients regularly attend retinal screening, in which digital retinal images are captured. These images undergo thorough analysis by trained individuals, which can be a very time-consuming and costly task given the large diabetic population. This is therefore a field that would greatly benefit from the introduction of automated detection systems. This project aims to automatically detect proliferative diabetic retinopathy (PDR), the most advanced stage of the disease, which poses a high risk of severe visual impairment. The hallmark of PDR is neovascularisation, the growth of abnormal new vessels. Their tortuous, convoluted and obscure appearance can make them difficult to detect. In this thesis, we present a methodology based on the novel approach of creating two different segmented vessel maps. Segmentation methods include a standard line operator approach and a novel modified line operator approach. The former targets the accurate segmentation of new vessels and the latter targets the reduction of false responses to non-vessel edges. Both generated binary vessel maps hold vital information, which is processed separately within a dual classification framework. Features are measured from each binary vessel map to produce two separate feature sets. Independent classification is performed for each feature set using a support vector machine (SVM) classifier. The system then combines these individual classification outcomes to produce a final decision. The proposed methodology, using a dataset of 60 images, achieves a sensitivity of 100.00% and a specificity of 92.50% on a per-image basis and a sensitivity of 87.93% and a specificity of 94.40% on a per-patch basis. The thesis also presents an investigation into the most suitable features for the classification of PDR.
This entails the expansion of the feature vector, followed by feature selection using a genetic algorithm based approach. This improves the results, which now stand at a sensitivity and specificity of 100.00% and 97.50%, respectively, on a per-image basis and 91.38% and 96.00%, respectively, on a per-patch basis. A final extension to the project explores the dual classification framework further, comparing the results of dual SVM classification with dual ensemble classification. The results of the dual ensemble approach are deemed inferior, achieving a sensitivity and specificity of 100.00% and 95.00%, respectively, on a per-image basis and 81.03% and 95.20%, respectively, on a per-patch basis.
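The per-image figures quoted throughout follow directly from confusion-matrix counts. As a worked example, with 60 images a hypothetical split of 20 PDR and 40 normal images (the actual class split is not stated in the abstract) is consistent with the reported 100.00%/97.50% operating point:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with the reported per-image results:
# all 20 PDR images detected, 39 of 40 normal images correctly cleared.
sens, spec = sens_spec(tp=20, fn=0, tn=39, fp=1)
print(sens, spec)  # 1.0 0.975
```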

    Automatic segmentation of DNA fiber images for the quantification of replicative stress

    DNA replication is tightly regulated by a great number of molecular interactions that ensure accurate transmission of genetic information to daughter cells. Replicative stress refers to all the processes undermining the fidelity of DNA replication by slowing down or stalling DNA replication forks. Stalled replication forks may "collapse" into highly genotoxic double-strand breaks (DSBs), which engender chromosomal rearrangements and genomic instability. Thus, replicative stress can be a critical determinant in both cancer development and treatment; it is also implicated in the molecular pathogenesis of aging and neurodegenerative disease, as well as developmental disorders. Several fluorescence imaging techniques enable the evaluation of replication fork progression at the level of individual DNA molecules.
These techniques rely on the incorporation of exogenous nucleotide analogs, such as chloro- (CldU), iodo- (IdU), or bromo-deoxyuridine (BrdU), into nascent DNA at replication forks in living cells. In a typical experiment, two nucleotide analogs (e.g., IdU and CldU) are incorporated sequentially. Following cell lysis and spreading of the DNA on microscopy slides, DNA molecules are imaged by immunofluorescence. The resulting image is made up of two colors, one for each nucleotide analog. Measuring the respective lengths of these labeled stretches of DNA permits quantification of replication fork progression and hence evaluation of the effects of replicative stress. Evaluation of DNA fiber length is generally performed manually. This procedure is laborious and subject to inter- and intra-user variability, stemming in part from unintended bias in the choice of fibers to be measured. DNA fiber extraction is difficult because strands are often fragmented into many separated pieces and can be tangled in clusters. Moreover, fibers can be hard to distinguish from background noise caused by non-specific binding of fluorescent antibodies. Despite the large number of segmentation algorithms dedicated to curvilinear structures (blood vessels, neural networks, roads, cracks in concrete, ...), few studies address the processing of DNA fiber images. We developed an algorithm called ADFA (Automated DNA Fiber Analysis) which automatically segments DNA fibers and measures their respective lengths. Our approach can be divided into three parts: (i) object extraction by robust contour detection, based on two classical gradient analyses (Marr-Hildreth and Canny); (ii) fusion of adjacent fragmented fibers by analyzing their continuity, using a tracking approach based on the orientation and continuity of adjacent objects; (iii) detection of the nucleotide analog label (IdU or CldU) by analyzing the color profile of both channels (green and red) along each fiber.
ADFA was tested on a database of images of varying quality, signal-to-noise ratio, and fiber length, acquired from two different microscopes. The comparison between ADFA and manual segmentations shows a high correlation, both at the scale of individual fibers and at the scale of whole images. We further validated the algorithm by comparing samples subjected to replicative stress with controls, and we studied the impact of the incubation time of the second nucleotide analog pulse. The algorithm performs best on images containing relatively short, minimally fragmented DNA fibers; the tracking method can fail to correctly merge highly fragmented fibers superimposed on other strands. We therefore recommend reducing the incubation time of each nucleotide analog to about 20-30 minutes, so as to obtain short fibers, and diluting the DNA on the microscope slide to limit the formation of fiber clusters that are hard to disentangle. ADFA is freely available as open-source software and is intended to serve as a reference tool for DNA fiber measurement, mitigating inter- and intra-user variability.
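Step (ii) of ADFA, fusing fragmented fibers by orientation and continuity, can be sketched as a pairwise test on line segments; the gap and angle thresholds below are arbitrary illustrative values, not ADFA's actual parameters:

```python
import math

def can_merge(seg_a, seg_b, max_gap=10.0, max_angle=15.0):
    """Decide whether two fiber fragments should be fused: the gap between
    the end of one segment and the start of the next must be small, and
    their orientations must be nearly collinear. Segments are given as
    (x0, y0, x1, y1) endpoint tuples."""
    (ax0, ay0, ax1, ay1), (bx0, by0, bx1, by1) = seg_a, seg_b
    gap = math.hypot(bx0 - ax1, by0 - ay1)
    ang_a = math.degrees(math.atan2(ay1 - ay0, ax1 - ax0))
    ang_b = math.degrees(math.atan2(by1 - by0, bx1 - bx0))
    diff = abs(ang_a - ang_b) % 180  # orientation is direction-independent
    return gap <= max_gap and min(diff, 180 - diff) <= max_angle

# Two nearly collinear horizontal fragments separated by a 5-pixel gap:
print(can_merge((0, 0, 20, 1), (25, 1, 45, 2)))  # True
```

A full tracker would apply such a test greedily or globally over all fragment pairs and chain the merges; only the pairwise criterion is shown.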

    Detection and Classification of Diabetic Retinopathy Pathologies in Fundus Images

    Diabetic Retinopathy (DR) is a disease that affects up to 80% of diabetics around the world. It is the second greatest cause of blindness in the Western world, and one of the leading causes of blindness in the U.S. Many studies have demonstrated that early treatment can reduce the number of sight-threatening DR cases, mitigating the medical and economic impact of the disease. Accurate, early detection of eye disease is important because of its potential to reduce rates of blindness worldwide. Retinal photography for DR has been promoted for decades for its utility in both disease screening and clinical research studies. In recent years, several research centers have presented systems to detect pathology in retinal images. However, these approaches apply specialized algorithms to detect specific types of lesion in the retina. In order to detect multiple lesions, these systems generally implement multiple algorithms. Furthermore, some of these studies evaluate their algorithms on a single dataset, thus avoiding potential problems associated with the differences in fundus imaging devices, such as camera resolution. These methodologies primarily employ bottom-up approaches, in which the accurate segmentation of all the lesions in the retina is the basis for correct determination. A disadvantage of bottom-up approaches is that they rely on the accurate segmentation of all lesions in order to measure performance. On the other hand, top-down approaches do not depend on the segmentation of specific lesions. Thus, top-down methods can potentially detect abnormalities not explicitly used in their training phase. A disadvantage of these methods is that they cannot identify specific pathologies and require large datasets to build their training models. In this dissertation, I merged the advantages of the top-down and bottom-up approaches to detect DR with high accuracy. 
    First, I developed an algorithm based on a top-down approach to detect abnormalities in the retina due to DR. By doing so, I was able to evaluate DR pathologies other than microaneurysms and exudates, which are the main focus of most current approaches. In addition, I demonstrated good generalization capacity of this algorithm by applying it to other eye diseases, such as age-related macular degeneration. Because high accuracy is required for sight-threatening conditions, I developed two bottom-up approaches, since it has been proven that bottom-up approaches produce more accurate results than top-down approaches for particular structures. Consequently, I developed an algorithm to detect exudates in the macula. The presence of this pathology is considered a surrogate for clinically significant macular edema (CSME), a sight-threatening condition of DR. The analysis of the optic disc is usually not taken into account in DR screening systems; however, the pathology called neovascularization is present in advanced stages of DR, making its detection of crucial clinical importance. To address this problem, I developed an algorithm to detect neovascularization in the optic disc. These algorithms are based on amplitude-modulation and frequency-modulation (AM-FM) representations, morphological image processing methods, and classification algorithms. The methods were tested on a diverse set of large databases and are considered state of the art in this field.
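The morphological processing mentioned above is not detailed in the abstract; as a generic illustration, binary dilation with a 3x3 square structuring element (one of the basic operations such pipelines build on) looks like this in pure Python:

```python
def dilate(img):
    """Binary dilation with a 3x3 square structuring element: a pixel is
    set if any pixel in its 3x3 neighborhood is set. `img` is a list of
    rows of 0/1 values."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(img[ny][nx]
                   for ny in range(max(0, y - 1), min(h, y + 2))
                   for nx in range(max(0, x - 1), min(w, x + 2))):
                out[y][x] = 1
    return out

img = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
print(dilate(img))  # the single pixel grows to fill the 3x3 block
```

Erosion is the dual operation (a pixel survives only if its whole neighborhood is set); composing the two gives the openings and closings typically used to suppress small lesions or background clutter before classification.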