
    Machine Learning Approaches for Automated Glaucoma Detection using Clinical Data and Optical Coherence Tomography Images

    Glaucoma is a multifactorial, progressive, blinding optic neuropathy involving genetic, vascular, anatomical, and immune factors. Worldwide, more than 80 million people are affected by glaucoma, including around 300,000 in Australia, where 50% of cases remain undiagnosed. Untreated glaucoma can lead to blindness. Early detection, which artificial intelligence (AI) can accelerate, is crucial to preventing further vision loss. Many proposed AI systems have shown promising performance for automated glaucoma detection using two-dimensional (2D) data; however, only a few studies have reported encouraging outcomes for both glaucoma detection and staging. Moreover, automated AI systems still fall short of clinician-level diagnosis, owing to the limited interpretability of machine learning (ML) algorithms and the difficulty of integrating multiple sources of clinical data. AI technology would be welcomed by doctors and patients if the "black box" problem were overcome by developing explainable, transparent systems that rely on the same pathological markers clinicians use as signs of early glaucomatous damage and its progression. Therefore, this thesis aimed to develop a comprehensive AI model to detect and stage glaucoma by incorporating a variety of clinical data and utilising advanced data analysis and ML techniques. The research first focuses on optimising glaucoma diagnostic features by combining structural, functional, demographic, risk-factor, and optical coherence tomography (OCT) features. The significant features were evaluated using statistical analysis and used to train ML algorithms to assess detection performance. Three crucial structural optic nerve head (ONH) OCT features, namely cross-sectional 2D radial B-scans, 3D vascular angiography, and temporal-superior-nasal-inferior-temporal (TSNIT) B-scans, were analysed and used to train explainable deep learning (DL) models for automated glaucoma prediction.
    The reasoning behind the DL models' decisions was demonstrated through feature visualisation: the affected structural regions of TSNIT OCT scans were precisely localised for glaucoma patients. This is consistent with the concept of explainable DL, which aims to make the decision-making processes of DL models transparent and interpretable to humans. However, artifacts and speckle noise often lead to misinterpretation of TSNIT OCT scans, so this research also developed an automated DL model to remove artifacts and noise from OCT scans, facilitating error-free retinal layer segmentation, accurate tissue thickness estimation, and image interpretation. Moreover, to monitor and grade glaucoma severity, clinicians commonly rely on the visual field (VF) test for treatment and management; this research therefore uses functional features extracted from VF images to train ML algorithms for staging glaucoma from early to advanced/severe stages. Finally, the selected significant features were used to design and develop a comprehensive AI model that detects and grades glaucoma stages according to data quantity and availability. In the first stage, a DL model was trained on TSNIT OCT scans, and its output was combined with significant structural and functional features to train ML models. The best-performing ML model achieved an area under the curve (AUC) of 0.98, an accuracy of 97.2%, a sensitivity of 97.9%, and a specificity of 96.4% for detecting glaucoma, and an overall accuracy of 90.7% with an F1 score of 84.0% for classifying normal, early, moderate, and advanced-stage glaucoma. In conclusion, this thesis developed a comprehensive, evidence-based AI model that can address the screening problem for large populations and relieve experts from manually analysing large volumes of patient data, with the associated risk of misinterpretation.
    Moreover, this thesis demonstrated that three structural OCT features could serve as excellent diagnostic markers for precise glaucoma diagnosis.
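    The detection figures reported above (AUC, accuracy, sensitivity, specificity) are standard binary-classification metrics. As a minimal illustration of how they are computed from model scores (the function and toy data below are illustrative, not taken from the thesis):

```python
import numpy as np

def classification_metrics(y_true, y_score, threshold=0.5):
    """Sensitivity, specificity, accuracy and AUC for a binary classifier."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    y_pred = (y_score >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    sensitivity = tp / (tp + fn)           # true positive rate
    specificity = tn / (tn + fp)           # true negative rate
    accuracy = (tp + tn) / len(y_true)
    # AUC via the Mann-Whitney statistic: probability that a randomly
    # chosen positive sample scores higher than a randomly chosen negative.
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    auc = np.mean([(p > n) + 0.5 * (p == n) for p in pos for n in neg])
    return sensitivity, specificity, accuracy, auc
```

    Computing AUC from pairwise score comparisons is exact but quadratic; for large evaluation sets a rank-based implementation is preferable.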

    Rank-based Decomposable Losses in Machine Learning: A Survey

    Recent works have revealed an essential paradigm in designing loss functions that differentiates individual losses from aggregate losses. An individual loss measures the quality of the model on a single sample, while an aggregate loss combines the individual losses/scores over the training samples. Both share a common procedure that aggregates a set of individual values into a single numerical value, and the ranking order is the most fundamental relation among individual values in designing losses. In addition, decomposability, in which a loss can be decomposed into an ensemble of individual terms, is a significant property for organizing losses/scores. This survey provides a systematic and comprehensive review of rank-based decomposable losses in machine learning. Specifically, we provide a new taxonomy of loss functions that follows the perspectives of aggregate loss and individual loss. We identify the aggregator that forms such losses, which are examples of set functions, and organize the rank-based decomposable losses into eight categories. Following these categories, we review the literature on rank-based aggregate losses and rank-based individual losses, describe general formulas for these losses, and connect them with existing research topics. We also suggest future research directions spanning unexplored, remaining, and emerging issues in rank-based decomposable losses. (Comment: Accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).)
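    A concrete instance of a rank-based decomposable aggregate loss is the average top-k loss, which aggregates only the k largest individual losses. A minimal numpy sketch (one of many aggregators of the kind the survey categorises):

```python
import numpy as np

def average_top_k_loss(individual_losses, k):
    """Rank-based aggregate loss: mean of the k largest individual losses.

    k=1 recovers the maximum loss; k=n recovers the usual average loss,
    so this aggregator interpolates between the two extremes.
    """
    losses = np.sort(np.asarray(individual_losses))[::-1]  # descending order
    return losses[:k].mean()
```

    Because the aggregator depends on the individual losses only through their ranking, it is robust to how the losses are computed per sample, which is the decomposability property the survey organises losses around.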

    Automatic detection of glaucoma via fundus imaging and artificial intelligence: A review.

    Glaucoma is a leading cause of irreversible vision impairment globally, and cases are continuously rising worldwide. Early detection is crucial, allowing timely intervention that can prevent further visual field loss. To detect glaucoma, examination of the optic nerve head via fundus imaging can be performed, at the center of which is the assessment of the optic cup and disc boundaries. Fundus imaging is non-invasive and low-cost; however, the image examination relies on subjective, time-consuming, and costly expert assessments. A timely question to ask is: "Can artificial intelligence mimic glaucoma assessments made by experts?" Specifically, can artificial intelligence automatically find the boundaries of the optic cup and disc (providing a so-called segmented fundus image) and then use the segmented image to identify glaucoma with high accuracy? We conducted a comprehensive review of artificial intelligence-enabled glaucoma detection frameworks that produce and use segmented fundus images and summarized their advantages and disadvantages. We identified 36 relevant papers from 2011-2021 and 2 main approaches: 1) logical rule-based frameworks, based on a set of rules; and 2) machine learning/statistical modelling-based frameworks. We critically evaluated the state of the art of the 2 approaches, identified gaps in the literature, and pointed to areas for future research.
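    The cup and disc assessment this review centres on is commonly summarised by the vertical cup-to-disc ratio (vCDR). Given binary masks from any segmentation stage, it can be computed as follows (an illustrative sketch, not a method from any specific reviewed paper):

```python
import numpy as np

def vertical_cup_to_disc_ratio(cup_mask, disc_mask):
    """vCDR from binary segmentation masks of the optic cup and disc.

    Larger ratios are a common (rule-of-thumb) indicator of glaucoma risk.
    """
    def vertical_extent(mask):
        rows = np.where(mask.any(axis=1))[0]   # rows containing the structure
        return rows.max() - rows.min() + 1
    return vertical_extent(cup_mask) / vertical_extent(disc_mask)
```

    A rule-based framework of the first kind surveyed here would threshold this ratio, while a machine-learning framework would feed it (with other features) into a trained classifier.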

    Towards PACE-CAD Systems

    Despite phenomenal advancements in the availability of medical image datasets and the development of modern classification algorithms, Computer-Aided Diagnosis (CAD) has had limited practical exposure in the real-world clinical workflow. This is primarily because of the inherently demanding and sensitive nature of medical diagnosis, which can have far-reaching and serious repercussions in case of misdiagnosis. In this work, a paradigm called PACE (Pragmatic, Accurate, Confident, & Explainable) is presented as a set of must-have features for any CAD system. Diagnosis of glaucoma using Retinal Fundus Images (RFIs) is taken as the primary use case for the development of various methods that may enrich an ordinary CAD system with PACE. However, depending on the specific requirements of different methods, other application areas in ophthalmology and dermatology have also been explored. Pragmatic CAD systems refer to solutions that can perform reliably in the day-to-day clinical setup. In this research, two of possibly many aspects of a pragmatic CAD are addressed. Firstly, observing that existing medical image datasets are small and not representative of images taken in the real world, a large RFI dataset for glaucoma detection is curated and published. Secondly, realising that a salient attribute of a reliable and pragmatic CAD is its ability to perform in a range of clinically relevant scenarios, classification of 622 unique cutaneous diseases in one of the largest publicly available datasets of skin lesions is successfully performed. Accuracy is one of the most essential metrics of any CAD system's performance. Domain knowledge relevant to three types of diseases, namely glaucoma, Diabetic Retinopathy (DR), and skin lesions, is utilised in an attempt to improve accuracy.
    For glaucoma, a two-stage framework for automatic Optic Disc (OD) localisation and glaucoma detection is developed, which set a new state of the art for glaucoma detection and OD localisation. To identify DR, a model is proposed that combines coarse-grained classifiers with fine-grained classifiers and grades the disease in four stages of severity. Lastly, different methods of modelling and incorporating metadata are examined, and their effect on a model's classification performance is studied. Confidence in diagnosing a disease is as important as the diagnosis itself. One of the biggest obstacles to the successful deployment of CAD in the real world is that a medical diagnosis cannot be readily decided based on an algorithm's output alone. Therefore, a hybrid CNN architecture is proposed with the convolutional feature extractor trained using point estimates and a dense classifier trained using Bayesian estimates. Evaluation on 13 publicly available datasets shows the superiority of this method in terms of classification accuracy, and it also provides an estimate of uncertainty for every prediction. Explainability of AI-driven algorithms has become a legal requirement since Europe's General Data Protection Regulation came into effect. This research presents a framework for easy-to-understand textual explanations of skin lesion diagnosis. The framework, called ExAID (Explainable AI for Dermatology), relies upon two fundamental modules. The first module uses any deep skin lesion classifier and performs a detailed analysis of its latent space to map human-understandable disease-related concepts to the latent representation learnt by the deep model. The second module proposes Concept Localisation Maps, which extend Concept Activation Vectors by locating the significant regions corresponding to a learned concept in the latent space of a trained image classifier. This thesis probes many viable solutions to equip a CAD system with PACE.
    However, it is noted that some of these methods require specific attributes in datasets and, therefore, not all methods may be applied to a single dataset. Regardless, this work anticipates that consolidating PACE into a CAD system can not only increase the confidence of medical practitioners in such tools but also serve as a stepping stone for the further development of AI-driven technologies in healthcare.
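    The idea behind the hybrid architecture's "Confident" component is that a classifier whose weights have a posterior distribution can be sampled repeatedly at prediction time: the spread of the sampled outputs is an uncertainty estimate. A hypothetical single-layer numpy sketch (the Gaussian posteriors, names, and data are illustrative, not the thesis's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def bayesian_dense_predict(features, w_mean, w_std, n_samples=200):
    """Sample a dense classifier's weights from (assumed) Gaussian posteriors.

    Each forward pass draws one weight sample; the mean of the sampled
    probabilities is the prediction, their standard deviation its uncertainty.
    """
    probs = []
    for _ in range(n_samples):
        w = rng.normal(w_mean, w_std)                 # one weight sample
        logit = features @ w
        probs.append(1.0 / (1.0 + np.exp(-logit)))    # sigmoid output
    probs = np.array(probs)
    return probs.mean(), probs.std()
```

    With zero posterior variance this collapses to an ordinary point-estimate classifier, which is why only the dense head needs to be Bayesian while the convolutional feature extractor can stay deterministic.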

    Intelligent optic disc segmentation using improved particle swarm optimization and evolving ensemble models

    In this research, we propose Particle Swarm Optimization (PSO)-enhanced ensemble deep neural networks for optic disc (OD) segmentation using retinal images. An improved PSO algorithm with six search mechanisms to diversify the search process is introduced. It consists of an accelerated super-ellipse action, a refined super-ellipse operation, a modified PSO operation, a random leader-based search operation, an average leader-based search operation and a spherical random walk mechanism for swarm leader enhancement. Owing to the superior segmentation capabilities of Mask R-CNN, transfer learning with a PSO-based hyper-parameter identification method is employed to generate the fine-tuned segmenters for OD segmentation. Specifically, we optimize the learning parameters of the transfer learning process, namely the learning rate and momentum, using the proposed PSO algorithm. To overcome the bias of single networks, an ensemble segmentation model is constructed. It combines the results of distinctive base segmenters using a pixel-level majority voting mechanism to generate the final segmentation outcome. The proposed ensemble network is evaluated using the Messidor and DRIONS data sets and is found to significantly outperform other deep ensemble networks and hybrid ensemble clustering models incorporating both the original and state-of-the-art PSO variants. Additionally, the proposed method statistically outperforms existing studies on OD segmentation, as well as other search methods, on diverse unimodal and multimodal benchmark optimization functions and on the detection of Diabetic Macular Edema.
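    The pixel-level majority voting step that fuses the base segmenters can be sketched in a few lines of numpy (an illustrative sketch of the voting rule only, not the full PSO-tuned ensemble):

```python
import numpy as np

def majority_vote(masks):
    """Fuse binary segmentation masks by strict pixel-level majority voting.

    Each base segmenter contributes one vote per pixel; a pixel is foreground
    in the final mask only if more than half of the segmenters agree.
    """
    masks = np.stack(masks).astype(int)     # shape: (n_segmenters, H, W)
    votes = masks.sum(axis=0)               # per-pixel vote counts
    return (votes * 2 > len(masks)).astype(int)
```

    Using an odd number of base segmenters avoids ties; with an even number, the strict-majority rule above resolves ties toward background.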

    Computational Analysis of Fundus Images: Rule-Based and Scale-Space Models

    Fundus images are one of the most important imaging examinations in modern ophthalmology because they are simple, inexpensive and, above all, non-invasive. Nowadays, the acquisition and storage of high-resolution fundus images is relatively easy and fast. Therefore, fundus imaging has become a fundamental investigation in retinal lesion detection, ocular health monitoring and screening programmes. Given the large volume and clinical complexity associated with these images, their analysis and interpretation by trained clinicians becomes a time-consuming task that is prone to human error. Therefore, there is growing interest in developing automated approaches that are affordable and have high sensitivity and specificity. These automated approaches need to be robust if they are to be used in the general population to diagnose and track retinal diseases. To be effective, automated systems must be able to recognize normal structures and distinguish them from pathological clinical manifestations. The main objective of the research leading to this thesis was to develop automated systems capable of recognizing and segmenting retinal anatomical structures and the pathological clinical manifestations associated with the most common retinal diseases. In particular, these automated algorithms were developed on the premise of robustness and efficiency, to deal with the difficulties and complexity inherent in these images. Four objectives were considered in the analysis of fundus images: segmentation of exudates; localization of the optic disc; detection of the blood vessel midlines and segmentation of the vascular network; and detection of microaneurysms. In addition, we also evaluated the detection of diabetic retinopathy on fundus images using the microaneurysm detection method.
    An overview of the state of the art is presented to compare the performance of the developed approaches with the main methods described in the literature for each of the objectives described above. To facilitate the comparison of methods, the state of the art has been divided into rule-based methods and machine learning-based methods. In the research reported here, rule-based methods built on image processing were preferred over machine learning-based methods. In particular, scale-space methods proved effective in achieving the set goals. Two different approaches to exudate segmentation were developed. The first is based on scale-space curvature in combination with the local maxima of a scale-space blob detector and dynamic thresholds. The second is based on the analysis of the distribution function of the maximum values of the noise map in combination with morphological operators and adaptive thresholds. Both approaches perform a correct segmentation of the exudates and cope well with the uneven illumination and contrast variations in fundus images. Optic disc localization was achieved using a new technique called cumulative sum fields, combined with a vascular enhancement method. The algorithm proved to be reliable and efficient, especially for pathological images, and its robustness was tested on 8 datasets. Detection of the blood vessel midlines was achieved using a modified corner detector in combination with binary filters and dynamic thresholding. Segmentation of the vascular network was achieved using a new scale-space blood vessel enhancement method. The developed methods proved effective in detecting vessel midlines and segmenting vascular networks. The microaneurysm detection method relies on a scale-space microaneurysm detection and labelling system; a new approach based on the neighbourhood of the microaneurysms was used for labelling.
    Microaneurysm detection enabled the assessment of diabetic retinopathy detection. The microaneurysm detection method proved competitive with other methods, especially on high-resolution images, and diabetic retinopathy detection with this method showed performance similar to other methods and to human experts. The results of this work show that it is possible to develop reliable and robust scale-space methods that can detect various anatomical structures and pathological features of the retina. Furthermore, the results show that although recent research has focused on machine learning methods, scale-space methods can achieve very competitive results and are typically more independent of the image acquisition setup. The methods developed in this work may also be relevant for the future definition of new descriptors and features that can significantly improve the results of automated methods.
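    The scale-space blob detection underlying the exudate and microaneurysm methods is commonly approximated by a difference of Gaussians (DoG). A numpy-only sketch of the idea (kernel sizes and the scale ratio are illustrative, not the thesis's actual operators):

```python
import numpy as np

def gaussian_kernel1d(sigma):
    """Normalised 1D Gaussian kernel truncated at three standard deviations."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: convolve each row, then each column."""
    k = gaussian_kernel1d(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, rows)

def dog_blob_response(img, sigma, ratio=1.6):
    """Difference of Gaussians: approximates the scale-space Laplacian.

    Bright blobs of size comparable to sigma give strong negative responses,
    so candidate blobs appear as local minima of the response map.
    """
    return gaussian_blur(img, sigma * ratio) - gaussian_blur(img, sigma)
```

    Running the detector over several values of sigma yields the scale-space stack from which blob candidates and their characteristic scales are extracted.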

    On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator

    Deployed image classification pipelines typically depend on images captured in real-world environments, which means the images may be affected by different sources of perturbation (e.g. sensor noise in low-light environments). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification, and it has hence attracted wide interest within the computer vision community. We propose a transformation step that attempts to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are determined using the CORF push-pull inhibition operator. This operation transforms an input image into a space that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST data set with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classification model without CORF delineation maps, but it consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
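    The intuition behind push-pull inhibition is that a filter's excitatory ("push") response is suppressed by the response of a filter of opposite polarity ("pull") in its neighbourhood: noise excites both polarities and is damped, while a coherent edge excites only one. The toy 1D sketch below conveys this with a simple derivative filter; it is not the actual CORF operator, which uses oriented, biologically inspired filters:

```python
import numpy as np

def push_pull_edge_response(img, alpha=1.0):
    """Toy push-pull inhibition on a horizontal-derivative filter.

    Edges produce an isolated positive response that survives; impulse noise
    produces adjacent opposite-polarity responses that inhibit each other.
    """
    deriv = np.diff(img, axis=1)            # response of the filter [-1, 1]
    push = np.maximum(deriv, 0)             # half-wave rectified response
    pull = np.maximum(-deriv, 0)            # opposite-polarity response
    # Inhibit each push response by the pull response in a 3-pixel neighbourhood.
    k = np.ones(3) / 3.0
    pull_nbhd = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, pull)
    return np.maximum(push - alpha * pull_nbhd, 0)
```

    In the paper's pipeline the analogous (CORF) delineation map replaces the raw image as the CNN's input, which is what makes the downstream classifier less sensitive to unseen noise.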