
    Deep convolutional neural networks for segmenting 3D in vivo multiphoton images of vasculature in Alzheimer disease mouse models

    The health and function of tissue rely on its vasculature network to provide reliable blood perfusion. Volumetric imaging approaches, such as multiphoton microscopy, are able to generate detailed 3D images of blood vessels that could contribute to our understanding of the role of vascular structure in normal physiology and in disease mechanisms. The segmentation of vessels, a core image analysis problem, is a bottleneck that has prevented the systematic comparison of 3D vascular architecture across experimental populations. We explored the use of convolutional neural networks to segment 3D vessels within volumetric in vivo images acquired by multiphoton microscopy. We evaluated different network architectures and machine learning techniques in the context of this segmentation problem. We show that our optimized convolutional neural network architecture, which we call DeepVess, yielded a segmentation accuracy that was better than both the current state-of-the-art and a trained human annotator, while also being orders of magnitude faster. To explore the effects of aging and Alzheimer's disease on capillaries, we applied DeepVess to 3D images of cortical blood vessels in young and old mouse models of Alzheimer's disease and wild type littermates. We found little difference in the distribution of capillary diameter or tortuosity between these groups, but did note a decrease in the number of longer capillary segments (>75 μm) in aged animals as compared to young, in both wild type and Alzheimer's disease mouse models. Comment: 34 pages, 9 figures
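    The voxelwise segmentation idea the abstract describes can be illustrated at a small scale: a learned 3D filter bank slides over the image volume and each voxel's response is squashed to a vessel probability, then thresholded into a binary mask. The sketch below is a minimal, numpy-only stand-in, not the published DeepVess model; the single smoothing kernel and the 0.5 threshold are illustrative assumptions.

```python
# Minimal sketch of voxelwise vessel segmentation with one 3D
# convolution-like layer followed by a sigmoid. A real CNN segmenter
# such as DeepVess stacks many learned filters; here a single
# hand-picked smoothing kernel stands in for the learned weights.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def segment_volume(volume, kernel, threshold=0.5):
    """Return a binary vessel mask for a 3D image volume (valid borders)."""
    windows = sliding_window_view(volume.astype(float), kernel.shape)
    response = np.einsum("...ijk,ijk->...", windows, kernel)  # cross-correlation
    prob = 1.0 / (1.0 + np.exp(-response))  # sigmoid -> per-voxel probability
    return prob > threshold

rng = np.random.default_rng(0)
volume = rng.random((16, 16, 16))    # toy stand-in for a multiphoton stack
kernel = np.ones((3, 3, 3)) / 27.0   # illustrative smoothing kernel
mask = segment_volume(volume, kernel)
print(mask.shape, mask.dtype)        # (14, 14, 14) bool
```

The output volume shrinks by the kernel margin because only "valid" windows are evaluated; padding, as a real network would use, preserves the input shape.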

    Neuropathy Classification of Corneal Nerve Images Using Artificial Intelligence

    Nerve variations in the human cornea have been associated with alterations in the neuropathy state of a patient suffering from chronic diseases. For some diseases, such as diabetes, detection of neuropathy prior to visible symptoms is important, whereas for others, such as multiple sclerosis, early prediction of disease worsening is crucial. Because current methods fail to provide early diagnosis of neuropathy, in vivo corneal confocal microscopy is valuable: it enables very early insight into nerve damage by illuminating and magnifying the human cornea. This non-invasive method captures a sequence of images from the corneal sub-basal nerve plexus. Current practices of manual nerve tracing and classification impede the advancement of medical research in this domain. Since corneal nerve analysis for neuropathy is in its initial stages, there is a dire need for process automation. To address this limitation, we seek to automate the two stages of this process: nerve segmentation and neuropathy classification of images. For nerve segmentation, we compare the performance of two existing solutions on multiple datasets to select the appropriate method and proceed to the classification stage. Consequently, we approach neuropathy classification of the images through artificial intelligence using Adaptive Neuro-Fuzzy Inference System (ANFIS), Support Vector Machines, Naïve Bayes and k-nearest neighbors. We further compare the performance of machine learning classifiers with deep learning. We ascertained that nerve segmentation using convolutional neural networks provided a significant improvement in sensitivity and false negative rate by at least 5% over the state-of-the-art software. For classification, ANFIS yielded the best classification accuracy of 93.7% compared to other classifiers. Furthermore, for this problem, machine learning approaches performed better in terms of classification accuracy than deep learning.
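    Of the classical classifiers the abstract compares, k-nearest neighbors is the simplest to sketch: a new sample is assigned the majority label of its k closest training samples in feature space. The feature vectors and labels below are synthetic placeholders, not real corneal-nerve measurements, and the two toy features are hypothetical names.

```python
# A minimal k-nearest-neighbours classifier: majority vote over the
# k training samples closest to the query in Euclidean distance.
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    dists = np.linalg.norm(train_X - x, axis=1)   # distance to each sample
    nearest = train_y[np.argsort(dists)[:k]]      # labels of k closest
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]              # majority label

# Toy 2D features, e.g. (nerve density, tortuosity); label 1 = neuropathy.
train_X = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]])
train_y = np.array([0, 0, 1, 1])
print(knn_predict(train_X, train_y, np.array([0.85, 0.15])))  # -> 0
```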

    Diabetic Retinopathy Image Classification with Neural Networks

    The world is experiencing increased life expectancy, which brings a natural increase in the chance of developing disease. The main concern is that some methods for diagnosing a condition are slow and require trained experts. It is therefore necessary to create new low-cost diagnostic mechanisms that give us faster and better results. Recent studies have used well-known architectures and achieved high accuracy scores. In this work, an experimental classification model with a custom neural network architecture was implemented using Python libraries. This work contrasts the results of a model based on AlexNet against my experimental architecture. There are two main reasons to compare my work against AlexNet: during my investigation of the state of the art I did not find research solving the DR categorization problem with this architecture, and choosing another architecture would have required more powerful computing. In the end, AlexNet was not a good solution. This work will help the healthcare industry to have a less expensive and non-invasive way to determine whether a person is affected by diabetic retinopathy, depending on the damage shown on their retinas.

    CAD system for early diagnosis of diabetic retinopathy based on 3D extracted imaging markers.

    This dissertation makes significant contributions to the field of ophthalmology, addressing the segmentation of retinal layers and the diagnosis of diabetic retinopathy (DR). The first contribution is a novel 3D segmentation approach that leverages the patient-specific anatomy of retinal layers. This approach demonstrates superior accuracy in segmenting all retinal layers from a 3D retinal image compared to current state-of-the-art methods. It also offers enhanced speed, enabling potential clinical applications. The proposed segmentation approach holds great potential for supporting surgical planning and guidance in retinal procedures such as retinal detachment repair or macular hole closure. Surgeons can benefit from the accurate delineation of retinal layers, enabling better understanding of the anatomical structure and more effective surgical interventions. Moreover, real-time guidance systems can be developed to assist surgeons during procedures, improving overall patient outcomes. The second contribution of this dissertation is the introduction of a novel computer-aided diagnosis (CAD) system for precise identification of diabetic retinopathy. The CAD system utilizes 3D-OCT imaging and employs an innovative approach that extracts two distinct features: first-order reflectivity and 3D thickness. These features are then fused and used to train and test a neural network classifier. The proposed CAD system exhibits promising results, surpassing other machine learning and deep learning algorithms commonly employed in DR detection. This demonstrates the effectiveness of the comprehensive analysis approach employed by the CAD system, which considers both low-level and high-level data from the 3D retinal layers. The CAD system presents a groundbreaking contribution to the field, as it goes beyond conventional methods, optimizing backpropagated neural networks to integrate multiple levels of information effectively.
By achieving superior performance, the proposed CAD system showcases its potential in accurately diagnosing DR and aiding in the prevention of vision loss. In conclusion, this dissertation presents novel approaches for the segmentation of retinal layers and the diagnosis of diabetic retinopathy. The proposed methods exhibit significant improvements in accuracy, speed, and performance compared to existing techniques, opening new avenues for clinical applications and advancements in the field of ophthalmology. By addressing future research directions, such as testing on larger datasets, exploring alternative algorithms, and incorporating user feedback, the proposed methods can be further refined and developed into robust, accurate, and clinically valuable tools for diagnosing and monitoring retinal diseases.
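    The fusion step the abstract describes, combining first-order reflectivity features with 3D thickness features before a neural network classifier, can be sketched as a simple concatenation followed by a small feed-forward pass. The weights below are random placeholders (the real system trains them with backpropagation), and the feature dimensions are illustrative assumptions.

```python
# Sketch of feature fusion for DR diagnosis: two per-layer feature
# vectors are concatenated and passed through a 2-layer MLP ending in
# a softmax over {normal, DR}. Untrained random weights for shape only.
import numpy as np

def fuse_and_classify(reflectivity, thickness, W1, b1, W2, b2):
    x = np.concatenate([reflectivity, thickness])   # fused feature vector
    h = np.maximum(0.0, W1 @ x + b1)                # ReLU hidden layer
    logits = W2 @ h + b2
    return np.exp(logits) / np.exp(logits).sum()    # softmax probabilities

rng = np.random.default_rng(1)
refl = rng.random(12)        # hypothetical per-layer reflectivity stats
thick = rng.random(12)       # hypothetical per-layer thickness stats
W1, b1 = rng.standard_normal((8, 24)), np.zeros(8)
W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)
probs = fuse_and_classify(refl, thick, W1, b1, W2, b2)
print(probs.shape, round(float(probs.sum()), 6))    # (2,) 1.0
```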

    Detection and Classification of Diabetic Retinopathy using Deep Learning Algorithms for Segmentation to Facilitate Referral Recommendation for Test and Treatment Prediction

    This research paper addresses the critical challenge of diabetic retinopathy (DR), a severe complication of diabetes leading to potential blindness. The proposed methodology leverages transfer learning with convolutional neural networks (CNNs) for automatic DR detection using a single fundus photograph, demonstrating high effectiveness with a quadratic weighted kappa score of 0.92546 in the APTOS 2019 Blindness Detection Competition. The paper reviews existing literature on DR detection, spanning classical computer vision methods to deep learning approaches, particularly focusing on CNNs. It identifies gaps in the research, emphasizing the lack of exploration in integrating pretrained large language models with segmented image inputs for generating recommendations and understanding dynamic interactions within a web application context. Objectives include developing a comprehensive DR detection methodology, exploring model integration, evaluating performance through competition ranking, contributing significantly to DR detection methodologies, and identifying research gaps. The methodology involves data preprocessing, data augmentation, and the use of a U-Net neural network architecture for segmentation. The U-Net model efficiently segments retinal structures, including blood vessels, hard and soft exudates, haemorrhages, microaneurysms, and the optic disc. High evaluation scores in Jaccard, F1, recall, precision, and accuracy underscore the model's potential for enhancing diagnostic capabilities in retinal pathology assessment. The outcomes of this research hold promise for improving patient outcomes through timely diagnosis and intervention in the fight against diabetic retinopathy, marking a significant contribution to the field of medical image analysis.
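    The defining feature of the U-Net architecture used above is the skip connection: each decoder level concatenates the upsampled bottleneck features with the matching encoder feature map, so fine spatial detail survives the downsampling path. The shape-level sketch below simplifies pooling and upsampling to 2x striding/repetition; a real U-Net uses learned convolutions at every step.

```python
# Shape-level sketch of one U-Net level: downsample to a bottleneck,
# upsample back, and fuse the saved encoder features channel-wise.
import numpy as np

def downsample(x):
    return x[:, ::2, ::2]                          # 2x2 pooling stand-in

def upsample(x):
    return x.repeat(2, axis=1).repeat(2, axis=2)   # nearest-neighbour upsample

def unet_level(x):
    skip = x                                       # encoder features to keep
    bottleneck = downsample(x)                     # coarse representation
    up = upsample(bottleneck)                      # back to input resolution
    return np.concatenate([skip, up], axis=0)      # skip connection: channel concat

x = np.zeros((4, 32, 32))                          # (channels, H, W) feature map
out = unet_level(x)
print(out.shape)                                   # (8, 32, 32)
```

The channel count doubles after fusion, which is why real U-Net decoders follow each concatenation with convolutions that reduce channels again.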

    Medinoid : computer-aided diagnosis and localization of glaucoma using deep learning

    Glaucoma is a leading eye disease, causing vision loss by gradually affecting peripheral vision if left untreated. Current diagnosis of glaucoma is performed by ophthalmologists, human experts who typically need to analyze different types of medical images generated by different types of medical equipment: fundus, Retinal Nerve Fiber Layer (RNFL), Optical Coherence Tomography (OCT) disc, OCT macula, perimetry, and/or perimetry deviation. Capturing and analyzing these medical images is labor intensive and time consuming. In this paper, we present a novel approach for glaucoma diagnosis and localization, only relying on fundus images that are analyzed by making use of state-of-the-art deep learning techniques. Specifically, our approach towards glaucoma diagnosis and localization leverages Convolutional Neural Networks (CNNs) and Gradient-weighted Class Activation Mapping (Grad-CAM), respectively. We built and evaluated different predictive models using a large set of fundus images, collected and labeled by ophthalmologists at Samsung Medical Center (SMC). Our experimental results demonstrate that our most effective predictive model is able to achieve a high diagnosis accuracy of 96%, as well as a high sensitivity of 96% and a high specificity of 100% for Dataset-Optic Disc (OD), a set of center-cropped fundus images highlighting the optic disc. Furthermore, we present Medinoid, a publicly-available prototype web application for computer-aided diagnosis and localization of glaucoma, integrating our most effective predictive model in its back-end
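    The localization half of Medinoid relies on Grad-CAM, whose core computation is compact: the gradients of the class score with respect to the last convolutional layer's feature maps are spatially averaged into per-channel weights, the feature maps are summed with those weights, and a ReLU keeps only positive evidence. The feature maps and gradients below are synthetic placeholders standing in for a real CNN's activations.

```python
# Sketch of the Grad-CAM heat map computation used for localization.
import numpy as np

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: (channels, H, W) arrays from the last conv layer."""
    weights = gradients.mean(axis=(1, 2))              # alpha_k: pooled gradients
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over channels
    return np.maximum(cam, 0.0)                        # ReLU: positive evidence only

rng = np.random.default_rng(2)
feats = rng.random((16, 7, 7))           # placeholder conv activations
grads = rng.standard_normal((16, 7, 7))  # placeholder class-score gradients
heatmap = grad_cam(feats, grads)
print(heatmap.shape, bool((heatmap >= 0).all()))  # (7, 7) True
```

In practice the 7x7 map is upsampled to the fundus image resolution and overlaid to highlight the optic-disc region driving the glaucoma prediction.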

    Towards PACE-CAD Systems

    Despite phenomenal advancements in the availability of medical image datasets and the development of modern classification algorithms, Computer-Aided Diagnosis (CAD) has had limited practical exposure in the real-world clinical workflow. This is primarily because of the inherently demanding and sensitive nature of medical diagnosis, which can have far-reaching and serious repercussions in case of misdiagnosis. In this work, a paradigm called PACE (Pragmatic, Accurate, Confident, & Explainable) is presented as a set of must-have features for any CAD. Diagnosis of glaucoma using Retinal Fundus Images (RFIs) is taken as the primary use case for development of various methods that may enrich an ordinary CAD system with PACE. However, depending on specific requirements for different methods, other application areas in ophthalmology and dermatology have also been explored. A pragmatic CAD system is one that can perform reliably in a day-to-day clinical setup. In this research, two of possibly many aspects of a pragmatic CAD are addressed. Firstly, observing that the existing medical image datasets are small and not representative of images taken in the real world, a large RFI dataset for glaucoma detection is curated and published. Secondly, realising that a salient attribute of a reliable and pragmatic CAD is its ability to perform in a range of clinically relevant scenarios, classification of 622 unique cutaneous diseases in one of the largest publicly available datasets of skin lesions is successfully performed. Accuracy is one of the most essential metrics of any CAD system's performance. Domain knowledge relevant to three types of diseases, namely glaucoma, Diabetic Retinopathy (DR), and skin lesions, is industriously utilised in an attempt to improve the accuracy.
For glaucoma, a two-stage framework for automatic Optic Disc (OD) localisation and glaucoma detection is developed, which marked a new state-of-the-art for glaucoma detection and OD localisation. To identify DR, a model is proposed that combines coarse-grained classifiers with fine-grained classifiers and grades the disease in four stages with respect to severity. Lastly, different methods of modelling and incorporating metadata are also examined and their effect on a model's classification performance is studied. Confidence in diagnosing a disease is equally important as the diagnosis itself. One of the biggest reasons hampering the successful deployment of CAD in the real world is that medical diagnosis cannot be readily decided based on an algorithm's output. Therefore, a hybrid CNN architecture is proposed with the convolutional feature extractor trained using point estimates and a dense classifier trained using Bayesian estimates. Evaluation on 13 publicly available datasets shows the superiority of this method in terms of classification accuracy and also provides an estimate of uncertainty for every prediction. Explainability of AI-driven algorithms has become a legal requirement after Europe's General Data Protection Regulation came into effect. This research presents a framework for easy-to-understand textual explanations of skin lesion diagnosis. The framework is called ExAID (Explainable AI for Dermatology) and relies upon two fundamental modules. The first module uses any deep skin lesion classifier and performs detailed analysis on its latent space to map human-understandable disease-related concepts to the latent representation learnt by the deep model. The second module proposes Concept Localisation Maps, which extend Concept Activation Vectors by locating significant regions corresponding to a learned concept in the latent space of a trained image classifier. This thesis probes many viable solutions to equip a CAD system with PACE.
However, it is noted that some of these methods require specific attributes in datasets and, therefore, not all methods may be applied to a single dataset. Regardless, this work anticipates that consolidating PACE into a CAD system can not only increase the confidence of medical practitioners in such tools but also serve as a stepping stone for the further development of AI-driven technologies in healthcare.
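    The per-prediction uncertainty idea behind the hybrid architecture, a deterministic feature extractor feeding a stochastic classifier head, can be illustrated by sampling the head many times and reporting the spread of its outputs. The dropout-style head below is an illustrative stand-in for the thesis's Bayesian dense classifier, and all weights and dimensions are placeholder assumptions.

```python
# Sketch of sampled-head uncertainty: fixed features, a stochastic
# classifier head sampled repeatedly, mean prediction plus spread.
import numpy as np

def stochastic_head(features, W, rng, drop_p=0.5):
    mask = rng.random(features.shape) >= drop_p        # randomly drop units
    logits = W @ (features * mask / (1.0 - drop_p))    # rescaled masked pass
    return np.exp(logits) / np.exp(logits).sum()       # softmax

rng = np.random.default_rng(3)
features = rng.random(32)            # fixed feature-extractor output
W = rng.standard_normal((2, 32))     # placeholder head weights
samples = np.stack([stochastic_head(features, W, rng) for _ in range(100)])
mean_pred = samples.mean(axis=0)     # average class probabilities
uncertainty = samples.std(axis=0)    # spread across stochastic passes
print(mean_pred.shape, bool((uncertainty >= 0).all()))  # (2,) True
```

A wide spread flags predictions the model is unsure about, which is exactly the information a clinician needs before trusting an algorithm's output.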