
    Towards PACE-CAD Systems

    Despite phenomenal advancements in the availability of medical image datasets and the development of modern classification algorithms, Computer-Aided Diagnosis (CAD) has had limited practical exposure in real-world clinical workflows. This is primarily because of the inherently demanding and sensitive nature of medical diagnosis, where misdiagnosis can have far-reaching and serious repercussions. In this work, a paradigm called PACE (Pragmatic, Accurate, Confident, & Explainable) is presented as a set of must-have features for any CAD. Diagnosis of glaucoma using Retinal Fundus Images (RFIs) is taken as the primary use case for the development of various methods that may enrich an ordinary CAD system with PACE. However, depending on the specific requirements of different methods, other application areas in ophthalmology and dermatology have also been explored. A pragmatic CAD system is one that performs reliably in the day-to-day clinical setup. This research addresses two of the possibly many aspects of a pragmatic CAD. Firstly, observing that existing medical image datasets are small and not representative of images taken in the real world, a large RFI dataset for glaucoma detection is curated and published. Secondly, recognising that a salient attribute of a reliable and pragmatic CAD is its ability to perform in a range of clinically relevant scenarios, classification of 622 unique cutaneous diseases is successfully performed on one of the largest publicly available datasets of skin lesions. Accuracy is one of the most essential metrics of any CAD system's performance. Domain knowledge relevant to three types of diseases, namely glaucoma, Diabetic Retinopathy (DR), and skin lesions, is utilised in an attempt to improve accuracy. 
For glaucoma, a two-stage framework for automatic Optic Disc (OD) localisation and glaucoma detection is developed, which set a new state of the art for both glaucoma detection and OD localisation. To identify DR, a model is proposed that combines coarse-grained with fine-grained classifiers and grades the disease in four stages of severity. Lastly, different methods of modelling and incorporating metadata are examined, and their effect on a model's classification performance is studied. Confidence in a diagnosis is as important as the diagnosis itself. One of the biggest obstacles to the successful deployment of CAD in the real world is that a medical diagnosis cannot be readily decided based on an algorithm's output alone. Therefore, a hybrid CNN architecture is proposed, with the convolutional feature extractor trained using point estimates and the dense classifier trained using Bayesian estimates. Evaluation on 13 publicly available datasets shows that this method achieves superior classification accuracy while also providing an uncertainty estimate for every prediction. Explainability of AI-driven algorithms has become a legal requirement since Europe's General Data Protection Regulation came into effect. This research presents a framework for easy-to-understand textual explanations of skin lesion diagnosis, called ExAID (Explainable AI for Dermatology), which relies upon two fundamental modules. The first module takes any deep skin lesion classifier and performs a detailed analysis of its latent space to map human-understandable, disease-related concepts to the latent representation learnt by the deep model. The second module proposes Concept Localisation Maps, which extend Concept Activation Vectors by locating the significant regions corresponding to a learned concept in the latent space of a trained image classifier. This thesis probes many viable solutions to equip a CAD system with PACE. 
However, it is noted that some of these methods require specific attributes in datasets and, therefore, not all methods may be applied to a single dataset. Regardless, this work anticipates that consolidating PACE into a CAD system can not only increase the confidence of medical practitioners in such tools but also serve as a stepping stone for the further development of AI-driven technologies in healthcare.
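The hybrid architecture described above (a point-estimate feature extractor feeding a Bayesian dense classifier) is not reproduced here, but its key behaviour can be sketched with Monte Carlo dropout: several stochastic forward passes through the dense head yield a mean prediction plus a per-sample uncertainty. All weights, shapes, and the dropout rate below are hypothetical stand-ins, not the thesis's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained weights standing in for the dense (Bayesian) head.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mc_dropout_predict(x, T=50, p_drop=0.5):
    """Run T stochastic forward passes with dropout kept active at test time."""
    probs = []
    for _ in range(T):
        h = np.maximum(x @ W1 + b1, 0.0)              # ReLU hidden layer
        mask = rng.random(h.shape) >= p_drop          # dropout at inference
        h = h * mask / (1.0 - p_drop)
        probs.append(softmax(h @ W2 + b2))
    probs = np.stack(probs)                           # (T, n, classes)
    mean = probs.mean(axis=0)                         # averaged prediction
    entropy = -(mean * np.log(mean + 1e-12)).sum(-1)  # predictive uncertainty
    return mean, entropy

features = rng.normal(size=(4, 16))  # stand-in CNN feature vectors
mean, unc = mc_dropout_predict(features)
```

High entropy flags predictions a clinician should double-check, which is the practical point of pairing every diagnosis with an uncertainty estimate.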

    Técnicas de análise de imagens para detecção de retinopatia diabética (Image analysis techniques for diabetic retinopathy detection)

    Advisors: Anderson de Rezende Rocha and Jacques Wainer. Doctoral thesis (Doutorado em Ciência da Computação), Universidade Estadual de Campinas, Instituto de Computação. CAPE
Diabetic Retinopathy (DR) is a long-term complication of diabetes and the leading cause of blindness among working-age adults. A regular eye examination is necessary to diagnose DR at an early stage, when it can be treated with the best prognosis and the visual loss delayed or deferred. Driven by the growing prevalence of diabetes and by the increased risk diabetics have of developing eye diseases, several works with well-established and promising approaches have been proposed for automatic DR screening. However, most existing art focuses on lesion detection using visual characteristics specific to each type of lesion. Additionally, handcrafted solutions for referable DR detection and DR stage identification still depend too much on the lesions, whose repetitive detection is complex and cumbersome to implement, even when a unified detection scheme is adopted. The current art for automated referral assessment consists of highly abstract data-driven approaches. Usually, those approaches receive an image and output a response (which may come from a single model or an ensemble) and are not easily explainable. Hence, this work aims at enhancing lesion detection and reinforcing referral decisions with advanced handcrafted two-tiered image representations. We also intended to compose sophisticated data-driven models for referable DR detection and to incorporate supervised learning of features with saliency-oriented mid-level image representations, yielding a robust yet accountable automated screening approach. Ultimately, we aimed at integrating our software solutions with simple retinal imaging devices. 
For the lesion detection task, we proposed advanced handcrafted image characterization approaches to detect different lesions effectively. Our leading advances centre on designing a novel coding technique for retinal images and on preserving information in the pooling process. Automatically deciding whether or not a patient should be referred to an ophthalmic specialist is a more difficult and still hotly debated research aim. We designed a simple and robust method for referral decisions that does not rely upon lesion detection stages. We also proposed a novel and effective data-driven model that significantly improves DR screening performance. This accountable model produces a reliable (local and global) response along with a heatmap/saliency map that enables comprehension of each pixel's importance to the decision. We explored this methodology to create a local descriptor that is encoded into a rich mid-level representation. Data-driven methods are the state of the art for diabetic retinopathy screening; however, saliency maps are essential not only to interpret the learning in terms of pixel importance but also to reinforce small discriminative characteristics that have the potential to enhance the diagnosis.
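A heatmap of per-pixel importance like the one described above can be approximated model-agnostically by occlusion: mask each image patch in turn and record how much the model's score drops. The scoring function below is a toy stand-in (mean intensity of a fixed "lesion" region), not the thesis's model.

```python
import numpy as np

def occlusion_saliency(image, score_fn, patch=8):
    """Heatmap of how much masking each patch lowers the model's score."""
    base = score_fn(image)
    h, w = image.shape
    sal = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = image.mean()  # grey out one patch
            sal[i // patch, j // patch] = base - score_fn(masked)
    return sal

# Toy scorer: mean intensity of a hypothetical "lesion" region (top-left corner).
score = lambda img: img[:8, :8].mean()

img = np.zeros((32, 32))
img[:8, :8] = 1.0                      # bright "lesion" in the corner
heat = occlusion_saliency(img, score)  # only the lesion patch matters
```

Only the patch covering the scored region produces a large drop, so the heatmap localises what the (toy) model relies on, mirroring the accountability argument in the abstract.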

    MULTISTAGE CLASSIFICATION OF DIABETIC RETINOPATHY USING FUZZY-NEURAL NETWORK CLASSIFIER

    Diabetic Retinopathy (DR) is a complicated disorder of the human retina, attributed to an increasing amount of insulin in the blood, that results in vision impairment. Early detection of DR helps patients avoid blindness and become aware of the disease. This paper proposes a novel technique for detecting DR using hybrid classifiers. It includes pre-processing of the image, segmentation of regions of interest, feature extraction, and classification. Retinal structures such as microaneurysms, exudates, hemorrhages, and blood vessels are segmented. Classification is performed by integrating a fuzzy logic system with a Neural Network (NN), which improves classification accuracy. Experimentation is carried out on the MESSIDOR dataset, and results are compared against performance metrics such as accuracy, sensitivity, and specificity. An accuracy close to 100 percent and a low average error rate of 0.012 are obtained with the proposed method. The results are encouraging.
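The fuzzy/neural integration is described only at a high level. One common pattern is to fuzzify an extracted lesion feature into linguistic memberships ("low", "medium", "high") and feed those to a neural layer. Everything below (the feature, the triangular membership ranges, and the weights) is an illustrative assumption, not the paper's classifier.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership on [a, c], peaking at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def fuzzify(exudate_area):
    # Hypothetical linguistic terms for a single normalised lesion feature.
    return np.array([
        tri(exudate_area, -0.5, 0.0, 0.5),   # "low"
        tri(exudate_area,  0.0, 0.5, 1.0),   # "medium"
        tri(exudate_area,  0.5, 1.0, 1.5),   # "high"
    ])

# Stand-in "neural" layer: weights map memberships to a DR severity score.
w = np.array([0.0, 0.5, 1.0])
severity = lambda x: float(w @ fuzzify(x))
```

In a trained system the weights would be learned by the NN; the fuzzification keeps the intermediate representation interpretable.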

    The Reading of Components of Diabetic Retinopathy: An Evolutionary Approach for Filtering Normal Digital Fundus Imaging in Screening and Population Based Studies

    In any diabetic retinopathy screening program, about two-thirds of patients have no retinopathy. However, on average, it takes a human expert about one and a half times longer to decide that an image is normal than to recognize an abnormal case with obvious features. In this work, we present an automated system for filtering out normal cases to facilitate a more effective use of grading time. The key aim of any such tool is to achieve high sensitivity and specificity to ensure patients' safety and service efficiency. There are many challenges to overcome, given the variation of images and characteristics to identify. The system combines computed evidence obtained from various processing stages, including segmentation of candidate regions, classification, and contextual analysis through Hidden Markov Models. Furthermore, evolutionary algorithms are employed to optimize the Hidden Markov Models, feature selection, and heterogeneous ensemble classifiers. In addition, population-based studies collect large numbers of images from subjects expected to have no abnormality and require timely and cost-effective grading. To evaluate the system's ability to identify normal images across diverse populations, a population-oriented study was undertaken comparing the software's output to grading by humans. Altogether, 9954 previously unseen images taken from various populations were tested; all test images were masked, so the automated system had not been exposed to them before. The system was trained using image subregions taken from about 400 sample images. Sensitivities of 92.2% and specificities of 90.4% were achieved, varying between populations and population clusters. Of all images the automated system decided were normal, 98.2% were truly normal when compared to the manual grading results. These results demonstrate the scalability and strong potential of such an integrated computational intelligence system as an effective tool to assist a grading service.
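The contextual analysis via Hidden Markov Models can be illustrated with the standard forward algorithm, which scores how well a sequence of region-level observations fits a model of normal images. The two-state model and all probabilities below are invented for illustration; the paper's HMMs are tuned by evolutionary algorithms, which is not shown here.

```python
import numpy as np

# Toy 2-state HMM over candidate-region sequences: states = {normal, abnormal}.
A = np.array([[0.9, 0.1],    # state transition probabilities
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],    # P(observation | state); obs 0 = clean, 1 = suspicious
              [0.3, 0.7]])
pi = np.array([0.7, 0.3])    # initial state distribution

def forward_loglik(obs):
    """Log-likelihood of an observation sequence (scaled forward algorithm)."""
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        logp += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return logp

# A run of "clean" regions should score higher than a suspicious run.
clean, suspicious = [0, 0, 0, 0], [1, 1, 1, 1]
ll_clean, ll_susp = forward_loglik(clean), forward_loglik(suspicious)
```

Thresholding such a likelihood is one way a system can decide an image is safe to filter out as normal.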

    Detection of Neovascularization Based on Fractal and Texture Analysis with Interaction Effects in Diabetic Retinopathy

    Diabetic retinopathy is a major cause of blindness. Proliferative diabetic retinopathy is the result of severe vascular complications and is visible as neovascularization of the retina. Automatic detection of such new vessels would be useful for severity grading of diabetic retinopathy, and it is an important part of the screening process for identifying those who may require immediate treatment. We propose a novel new-vessel detection method that includes statistical texture analysis (STA), high order spectrum analysis (HOS), and fractal analysis (FA); most importantly, we show that incorporating their associated interaction effects greatly improves the accuracy of new-vessel detection. To assess its performance, the sensitivity, specificity, and accuracy (AUC) are obtained: 96.3%, 99.1%, and 98.5% (99.3%), respectively. The proposed method improves the accuracy of new-vessel detection significantly over previous methods. The algorithm can be automated and is valuable for detecting relatively severe cases of diabetic retinopathy among diabetes patients.
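The paper's key point, that interaction effects between the texture, spectral, and fractal feature groups improve accuracy, corresponds to augmenting the feature vector with cross-terms before classification. A minimal sketch, with made-up feature values:

```python
import numpy as np
from itertools import combinations

def with_interactions(X):
    """Append pairwise products so a classifier can use feature interactions."""
    pairs = [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack([X] + pairs)

# Toy rows: one [texture, spectrum, fractal] vector per candidate region.
X = np.array([[1.0, 2.0, 3.0],
              [0.5, 0.5, 2.0]])
Xi = with_interactions(X)  # columns: f1, f2, f3, f1*f2, f1*f3, f2*f3
```

A linear classifier on `Xi` can then model effects that only appear when two feature groups vary together, which is what "interaction effects" buys over using each group independently.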

    Genetic algorithm based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy

    Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is the growth of abnormal new vessels. In this paper, an automated method for the detection of new vessels from retinal images is presented. The method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology features are measured from each binary vessel map to produce two separate 4-D feature vectors. Independent classification is performed for each feature vector using a support vector machine (SVM) classifier, and the system combines these individual outcomes to produce a final decision. This is followed by the creation of additional features to generate 21-D feature vectors, which feed into a genetic algorithm based feature selection approach with the objective of finding feature subsets that improve classification performance. Sensitivity and specificity results using a dataset of 60 images are 0.9138 and 0.9600, respectively, on a per-patch basis, and 1.000 and 0.975, respectively, on a per-image basis.
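Genetic-algorithm feature selection as described above treats each candidate subset as a binary mask over the 21 features and evolves masks toward higher classifier performance. In this sketch the fitness function is a toy stand-in for SVM cross-validation accuracy (it simply rewards four hypothetical "informative" features and mildly penalises extras); the GA mechanics (tournament selection, one-point crossover, bit-flip mutation) are standard but not necessarily the paper's exact operators.

```python
import numpy as np

rng = np.random.default_rng(1)
N_FEATURES = 21  # matches the 21-D feature vectors in the paper

def fitness(mask):
    """Toy stand-in for cross-validated SVM accuracy of a feature subset."""
    if mask.sum() == 0:
        return 0.0
    return mask[:4].sum() - 0.05 * mask[4:].sum()

def ga_select(pop_size=30, generations=40, p_mut=0.05):
    pop = rng.random((pop_size, N_FEATURES)) < 0.5       # random binary masks
    for _ in range(generations):
        fit = np.array([fitness(m) for m in pop])
        # Tournament selection: better of two random individuals survives.
        idx = rng.integers(0, pop_size, (pop_size, 2))
        winners = np.where(fit[idx[:, 0]] >= fit[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # One-point crossover between consecutive parent pairs.
        cuts = rng.integers(1, N_FEATURES, pop_size // 2)
        children = parents.copy()
        for k, c in enumerate(cuts):
            a, b = 2 * k, 2 * k + 1
            children[a, c:], children[b, c:] = parents[b, c:], parents[a, c:]
        # Bit-flip mutation.
        children ^= rng.random(children.shape) < p_mut
        pop = children
    fit = np.array([fitness(m) for m in pop])
    return pop[fit.argmax()]

best = ga_select()  # boolean mask selecting a feature subset
```

In the real system, `fitness` would retrain and validate the SVM on the masked features, which is what makes GA selection expensive but effective.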

    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method consisting of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signals, and imaging modalities in cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply to medicine in general, and propose certain directions as the most viable for clinical use. (27 pages, 2 figures, 10 tables)

    Level-set based adaptive-active contour segmentation technique with long short-term memory for diabetic retinopathy classification

    Diabetic Retinopathy (DR) is a major type of eye defect caused by abnormalities in the blood vessels within the retinal tissue. Early detection by automatic approaches using modern methodologies helps prevent consequences such as vision loss. This research therefore develops an effective segmentation approach, Level-set Based Adaptive-active Contour Segmentation (LBACS), which segments images by improving the boundary conditions and detecting edges using a Level Set Method with Improved Boundary Indicator Function (LSMIBIF) and an Adaptive-Active Contour Model (AACM). For evaluating the DR system, data are collected from the publicly available Indian Diabetic Retinopathy Image Dataset (IDRiD) and Diabetic Retinopathy Database 1 (DIARETDB 1). The collected images are pre-processed using a Gaussian filter, edge-detection sharpening, contrast enhancement, and luminosity enhancement to eliminate the noise/interference and data imbalance present in the available datasets. After that, the noise-free data are segmented using the level-set-based active contour technique. The segmented images are then passed to the feature extraction stage, where the Gray Level Co-occurrence Matrix (GLCM) and local ternary and local binary patterns are employed to extract features from the segmented image. Finally, the extracted features are input to the classification stage, where Long Short-Term Memory (LSTM) is utilised to categorise the various classes of DR. The result analysis shows that the proposed LBACS-LSTM achieves better results across all metrics. 
The accuracy of the proposed LBACS-LSTM on the IDRiD and DIARETDB 1 datasets is 99.43% and 97.39%, respectively, which is higher than existing approaches such as the three-dimensional semantic model, Delimiting Segmentation Approach Using Knowledge Learning (DSA-KL), K-Nearest Neighbor (KNN), a computer-aided method, and the Chronological Tunicate Swarm Algorithm with Stacked Auto Encoder (CTSA-SAE).
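Of the features mentioned above, the GLCM is the easiest to sketch: count co-occurring quantised grey levels at a fixed pixel offset, normalise, then derive texture statistics such as contrast, energy, and homogeneity. The quantisation level and offset below are arbitrary choices for illustration, not the paper's settings.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalised Grey-Level Co-occurrence Matrix for one pixel offset."""
    q = (img * levels).clip(0, levels - 1).astype(int)  # quantise to `levels` bins
    M = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[q[y, x], q[y + dy, x + dx]] += 1          # count grey-level pairs
    return M / M.sum()

def glcm_features(P):
    """Standard Haralick-style statistics from a normalised GLCM."""
    i, j = np.indices(P.shape)
    return {
        "contrast":    ((i - j) ** 2 * P).sum(),
        "energy":      (P ** 2).sum(),
        "homogeneity": (P / (1.0 + np.abs(i - j))).sum(),
    }

img = np.tile(np.linspace(0, 1, 8), (8, 1))  # smooth horizontal gradient patch
feats = glcm_features(glcm(img))
```

On this gradient patch every horizontal neighbour pair differs by exactly one grey level, so contrast is 1 and the energy is spread evenly over the seven occupied cells; lesion textures would produce very different statistics, which is what the classifier exploits.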

    Retinal vessel segmentation using textons

    Segmenting vessels from retinal images, like segmentation in many other medical image domains, is a challenging task, as there is no unified way to extract the vessels accurately. However, it is the most critical stage in the automatic assessment of various diseases (e.g. glaucoma, age-related macular degeneration, diabetic retinopathy, and cardiovascular disease). Our research investigates retinal image segmentation approaches based on textons, as textons provide a compact description of texture that can be learnt from a training set. This thesis presents a brief review of those diseases, including their current situation, future trends, and the techniques used for their automatic diagnosis in routine clinical applications. The importance of retinal vessel segmentation is particularly emphasized in such applications. An extensive review of previous work on retinal vessel segmentation and salient texture analysis methods is presented. Five automatic retinal vessel segmentation methods are proposed in this thesis. The first method focuses on removing pathological anomalies (drusen, exudates) for retinal vessel segmentation, which other researchers have identified as a common source of error. The results show that the modified method improves on a previously published method. The second, novel supervised segmentation method employs textons. We propose a new filter bank (MR11) that includes bar detectors for vascular feature extraction and other kernels to detect edges and photometric variations in the image. The k-means clustering algorithm is adopted for texton generation based on the vessel and non-vessel elements identified by ground truth. 
The third, improved supervised method is developed from the second: textons are generated by k-means clustering, and texton maps representing vessels are derived by back-projecting pixel clusters onto hand-labelled ground truth. A further step ensures that the best combinations of textons are represented in the map and subsequently used to identify vessels in the test set. Experimental results on two benchmark datasets show that our proposed method performs well compared to other published work and to human experts. A further test of our system on an independent set of optical fundus images verified its consistent performance. Statistical analysis of the experimental results also reveals that it is possible to train unified textons for retinal vessel segmentation. In the fourth method, a novel scheme using a Gabor filter bank for vessel feature extraction is proposed. The method is inspired by the human visual system, and machine learning is used to optimize the Gabor filter parameters. The experimental results demonstrate that our method significantly enhances the true positive rate while maintaining a level of specificity comparable with other approaches. Finally, we propose a new unsupervised texton-based retinal vessel segmentation method using derivatives of SIFT and multi-scale Gabor filters. The lack of sufficient quantities of hand-labelled ground truth and the high variability of ground truth labels amongst experts motivate this approach. The evaluation results reveal that our unsupervised segmentation method is comparable with the best supervised methods and other state-of-the-art approaches.
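The texton pipeline described above (per-pixel filter-bank responses, clustered by k-means into a texton dictionary, then a texton map assigning each pixel its nearest texton) can be sketched end-to-end. The three-filter "bank" (raw intensity plus two derivatives) and the toy image below are drastic simplifications of the MR11 bank and real fundus images.

```python
import numpy as np

def filter_responses(img):
    """Per-pixel response vector: raw intensity plus x/y derivatives
    (a tiny stand-in for a real filter bank such as MR11)."""
    gx = np.gradient(img, axis=1)
    gy = np.gradient(img, axis=0)
    return np.stack([img, gx, gy], axis=-1).reshape(-1, 3)

def kmeans(X, k, iters=20):
    """Plain k-means; centres initialised from distinct response vectors."""
    centers = np.unique(X, axis=0)[:k].astype(float)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return centers, labels

# Toy fundus patch: dark background with a bright vertical "vessel".
img = np.zeros((16, 16))
img[:, 7:9] = 1.0

textons, labels = kmeans(filter_responses(img), k=4)
texton_map = labels.reshape(16, 16)  # each pixel tagged with its texton id
```

In the supervised variants above, textons would be learned separately from vessel and non-vessel pixels identified by ground truth, so that the texton id itself carries class information.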

    Computational Intelligence in Healthcare

    This book is a printed edition of the Special Issue Computational Intelligence in Healthcare that was published in Electronics.