
    Joint Segmentation and Uncertainty Visualization of Retinal Layers in Optical Coherence Tomography Images using Bayesian Deep Learning

    Optical coherence tomography (OCT) is commonly used to analyze retinal layers for the assessment of ocular diseases. In this paper, we propose a method for retinal layer segmentation and quantification of uncertainty based on Bayesian deep learning. Our method not only performs end-to-end segmentation of retinal layers but also gives a pixel-wise uncertainty measure of the segmentation output. The generated uncertainty map can be used to identify erroneously segmented image regions, which is useful in downstream analysis. We have validated our method on a dataset of 1487 images obtained from 15 subjects (OCT volumes) and compared it against state-of-the-art segmentation algorithms that do not take uncertainty into account. The proposed uncertainty-based segmentation method yields comparable or improved performance and, most importantly, is more robust against noise.
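    The abstract does not spell out how the pixel-wise uncertainty is computed; a common Bayesian deep learning approach (and only an illustrative assumption here, not necessarily this paper's exact method) is Monte Carlo dropout: run T stochastic forward passes and take the predictive entropy of the averaged per-class probabilities as the uncertainty map.

```python
import numpy as np

def predictive_entropy(prob_samples):
    """Per-pixel uncertainty from T stochastic forward passes.

    prob_samples: array of shape (T, H, W, C) holding per-class
    probabilities from T Monte Carlo dropout passes.
    Returns an (H, W) entropy map: high values flag pixels whose
    segmentation is likely erroneous.
    """
    mean_p = prob_samples.mean(axis=0)  # average over MC samples -> (H, W, C)
    eps = 1e-12                         # avoid log(0)
    return -(mean_p * np.log(mean_p + eps)).sum(axis=-1)

# Toy example: 2 classes, one confident pixel and one ambiguous pixel.
samples = np.array([
    [[[0.90, 0.10], [0.6, 0.4]]],
    [[[0.95, 0.05], [0.4, 0.6]]],
])  # shape (T=2, H=1, W=2, C=2)
unc = predictive_entropy(samples)  # ambiguous pixel gets higher entropy
```

    Thresholding such a map is one way to flag regions for the downstream review the abstract mentions.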

    Is attention all you need in medical image analysis? A review

    Medical imaging is a key component in clinical diagnosis, treatment planning and clinical trial design, accounting for almost 90% of all healthcare data. CNNs have achieved performance gains in medical image analysis (MIA) in recent years. CNNs can efficiently model local pixel interactions and can be trained on small-scale MI data. The main disadvantage of typical CNN models is that they ignore global pixel relationships within images, which limits their generalisation ability on out-of-distribution data with different 'global' information. The recent progress of Artificial Intelligence gave rise to Transformers, which can learn global relationships from data. However, full Transformer models need to be trained on large-scale data and involve tremendous computational complexity. Attention and Transformer components (Transf/Attention), which retain the ability to model global relationships, have been proposed as lighter alternatives to full Transformers. Recently, there has been an increasing trend to cross-pollinate complementary local-global properties from CNN and Transf/Attention architectures, which has led to a new era of hybrid models. The past years have witnessed substantial growth in hybrid CNN-Transf/Attention models across diverse MIA problems. In this systematic review, we survey existing hybrid CNN-Transf/Attention models, review and unravel key architectural designs, analyse breakthroughs, and evaluate current and future opportunities as well as challenges. We also introduce a comprehensive analysis framework on generalisation opportunities with scientific and clinical impact, from which new data-driven domain generalisation and adaptation methods can be stimulated.
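    The local-global complementarity the review describes can be made concrete with a minimal sketch: a CNN backbone supplies per-position feature tokens, and a single self-attention head lets every spatial position attend to every other. This is a generic illustration of the hybrid idea, not an architecture from the review; the weight matrices and shapes below are assumptions.

```python
import numpy as np

def self_attention(feats, Wq, Wk, Wv):
    """Single-head self-attention over flattened CNN feature tokens.

    feats: (N, d) array, one row per spatial position (N = H * W),
    e.g. the output of a CNN backbone. Each output row is a mixture
    of information from ALL positions -- the 'global' modelling that
    plain convolutions lack.
    """
    q, k, v = feats @ Wq, feats @ Wk, feats @ Wv
    scores = (q @ k.T) / np.sqrt(k.shape[-1])     # scaled dot-product
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax: rows sum to 1
    return attn @ v, attn

# Toy "feature map": 4 spatial tokens with 3-dim CNN features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 3))
Wq, Wk, Wv = (rng.normal(size=(3, 3)) for _ in range(3))
out, attn = self_attention(feats, Wq, Wk, Wv)
```

    Hybrid models typically interleave a few such attention blocks with convolutional stages, keeping the data efficiency of CNNs while adding global context.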

    Open Source Software for Automatic Detection of Cone Photoreceptors in Adaptive Optics Ophthalmoscopy Using Convolutional Neural Networks

    Imaging with an adaptive optics scanning light ophthalmoscope (AOSLO) enables direct visualization of the cone photoreceptor mosaic in the living human retina. Quantitative analysis of AOSLO images typically requires manual grading, which is time-consuming and subjective; thus, automated algorithms are highly desirable. Previously developed automated methods are often reliant on ad hoc rules that may not be transferable between different imaging modalities or retinal locations. In this work, we present a convolutional neural network (CNN) based method for cone detection that learns features of interest directly from training data. This cone-identifying algorithm was trained and validated on separate data sets of confocal and split detector AOSLO images, with results showing performance that closely mimics the gold standard manual process. Further, without any need for algorithmic modifications for a specific AOSLO imaging system, our fully automated multi-modality CNN-based cone detection method achieved results comparable to previous automatic cone segmentation methods that utilized ad hoc rules for different applications. We have made free open-source software for the proposed method, along with the corresponding training and testing datasets, available online.

    A new technique for cataract eye disease diagnosis in deep learning

    Automated diagnosis of eye diseases using fundus images is challenging because manual analysis is time-consuming, error-prone, and complicated. Thus, computer-aided tools for automatically detecting various ocular disorders from fundus images are needed. Deep learning algorithms enable improved image classification, making automated targeted ocular disease detection feasible. This study employed state-of-the-art deep learning image classifiers, such as VGG-19, to categorize the highly imbalanced ODIR-5K (Ocular Disease Intelligent Recognition) dataset of 5000 fundus images across eight disease classes, including cataract, glaucoma, diabetic retinopathy, and age-related macular degeneration. To address this imbalance, the multiclass problem is converted into binary classification tasks with equal samples in each category. The dataset was preprocessed and augmented to generate balanced datasets. The binary classifiers were trained on flat data using the VGG-19 (Visual Geometry Group) model. This approach achieved an accuracy of 95% for distinguishing normal versus cataract cases in only 15 epochs, outperforming previous methods. Precision and recall were high for both classes (Normal and Cataract), with F1 scores of 0.95-0.96. Balancing the dataset and using deep VGG-19 classifiers significantly improved the accuracy of automated eye disease diagnosis from fundus images. With further research, this approach could lead to deploying AI (Artificial Intelligence)-assisted tools for ophthalmologists to screen patients and support clinical decision-making.
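    The balancing step the abstract describes (equal samples per class in each binary task) can be sketched in a few lines. This is one common way to equalize classes, downsampling the majority class; the study also uses augmentation, which is not shown here, and the helper below is a hypothetical illustration rather than the authors' code.

```python
import numpy as np

def balance_binary(labels, rng=None):
    """Return indices of a class-balanced subset of a binary dataset.

    labels: 1-D array of 0/1 class labels (e.g. Normal vs Cataract).
    Downsamples the majority class so both classes contribute the
    same number of samples (illustrative; augmentation-based
    upsampling is an alternative).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    idx_pos = np.flatnonzero(labels == 1)
    idx_neg = np.flatnonzero(labels == 0)
    n = min(len(idx_pos), len(idx_neg))   # size of the smaller class
    keep = np.concatenate([
        rng.choice(idx_pos, n, replace=False),
        rng.choice(idx_neg, n, replace=False),
    ])
    rng.shuffle(keep)                     # mix classes for training
    return keep

# Toy imbalanced label set: 10 positives, 3 negatives.
labels = np.array([1] * 10 + [0] * 3)
keep = balance_binary(labels)  # 6 indices, 3 per class
```

    Training each one-vs-rest binary classifier on such balanced subsets is what lets per-class precision and recall stay high despite the original imbalance.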