190 research outputs found

    Dual-Tree Complex Wavelet Input Transform for Cyst Segmentation in OCT Images Based on a Deep Learning Framework

    Optical coherence tomography (OCT) is a non-invasive, high-resolution cross-sectional imaging modality. Macular edema is swelling of the macular region. Segmenting fluid or cyst regions in OCT images is essential to provide useful information for clinicians and prevent visual impairment, but manual segmentation of fluid regions is time-consuming and subjective. Traditional and off-the-shelf deep learning methods fail to extract the exact location of boundaries under complicated conditions, such as high noise levels and blurred edges. Developing a tailored automatic image segmentation method with good numerical and visual performance is therefore essential for clinical application. The dual-tree complex wavelet transform (DTCWT) extracts rich information from different orientations of image boundaries and captures details that improve OCT fluid semantic segmentation under difficult conditions. This paper presents a comparative study of using DTCWT subbands for fluid segmentation. To the best of our knowledge, no previous studies have examined the various combinations of wavelet transforms and the role of each subband in OCT cyst segmentation. We propose a composite semantic segmentation architecture based on a novel U-Net and information from DTCWT subbands, compare different combination schemes to exploit the hidden information in the subbands, and evaluate the methods under both original and noise-added conditions. The Dice score, Jaccard index, and qualitative results are used to assess the performance of the subbands. The combination of subbands yielded high Dice and Jaccard values, outperforming the other methods, especially in the presence of high noise levels.
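The input-stacking idea behind this abstract can be sketched in a few lines: oriented wavelet-subband magnitudes are upsampled back to the image grid and concatenated with the B-scan as extra input channels for the segmentation network. The subbands below are random placeholders standing in for real DTCWT coefficients (which a dedicated wavelet library would supply), so this is a minimal sketch of the data layout, not the paper's actual pipeline.

```python
import numpy as np

def stack_subband_channels(image, subbands):
    """Stack an OCT B-scan with upsampled wavelet-subband magnitudes
    as extra input channels for a segmentation network.

    image    : (H, W) grayscale B-scan
    subbands : list of (H//2, W//2) complex arrays, e.g. the six
               oriented level-1 DTCWT highpass subbands (placeholders
               here; a real pipeline would compute them with a DTCWT
               library).
    Returns a (1 + len(subbands), H, W) float32 tensor.
    """
    h, w = image.shape
    channels = [image.astype(np.float32)]
    for sb in subbands:
        mag = np.abs(sb)                               # orientation energy
        # nearest-neighbour upsample back to the image grid
        up = np.repeat(np.repeat(mag, 2, axis=0), 2, axis=1)[:h, :w]
        # normalise each channel to [0, 1] for stable training
        rng = up.max() - up.min()
        channels.append((up - up.min()) / rng if rng > 0 else up)
    return np.stack(channels, axis=0).astype(np.float32)

# toy example: a 64x64 "B-scan" plus six fake oriented subbands
rng = np.random.default_rng(0)
img = rng.random((64, 64))
fake_subbands = [rng.random((32, 32)) + 1j * rng.random((32, 32))
                 for _ in range(6)]
x = stack_subband_channels(img, fake_subbands)
print(x.shape)  # (7, 64, 64)
```

The network then consumes a 7-channel input instead of a single grayscale channel; which subbands to include is exactly the combination question the paper studies.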

    Loss-Modified Transformer-Based U-Net for Accurate Segmentation of Fluids in Optical Coherence Tomography Images of Retinal Diseases.

    Optical coherence tomography (OCT) imaging contributes significantly to ophthalmology in the diagnosis of retinal disorders such as age-related macular degeneration and diabetic macular edema. Both diseases involve abnormal accumulations of fluid, whose location and volume are vitally informative for assessing disease severity. Automated, accurate fluid segmentation in OCT images could improve current clinical diagnosis, especially given that manual fluid segmentation is time-consuming and error-prone. Deep learning techniques have been applied to various image processing tasks, and their performance has already been explored for fluid segmentation in OCT. This article proposes a novel automated deep learning method built on the U-Net structure. The modifications consist of applying transformers in the encoder path of the U-Net for more concentrated feature extraction, together with a custom loss function tailored empirically to handle class imbalance and noisy images: a weighted combination of Dice loss, focal Tversky loss, and weighted binary cross-entropy. Several metrics are reported. The results show high accuracy (Dice coefficient of 95.52) and robustness of the proposed method compared with other methods after extra noise is added to the images (Dice coefficient of 92.79). Segmenting fluid regions in retinal OCT images is critical because it helps clinicians diagnose macular edema and carry out therapeutic operations more quickly. This study presents a deep learning framework and a novel loss function for automated fluid segmentation of retinal OCT images with excellent accuracy and rapid convergence. [Abstract copyright: © 2023 Journal of Medical Signals & Sensors.]
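The custom loss named in the abstract — a weighted combination of Dice loss, focal Tversky loss, and weighted binary cross-entropy — can be sketched on plain probability maps. The weights and hyperparameters below (`alpha`, `beta`, `gamma`, `pos_weight`, the mixing weights `w`) are illustrative assumptions, not the values tuned in the paper.

```python
import numpy as np

def dice_loss(p, y, eps=1e-7):
    """Soft Dice loss on probability map p and binary target y."""
    inter = (p * y).sum()
    return 1.0 - (2.0 * inter + eps) / (p.sum() + y.sum() + eps)

def focal_tversky_loss(p, y, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss: weights false negatives (alpha) against
    false positives (beta), then focuses on hard cases via gamma."""
    tp = (p * y).sum()
    fn = ((1 - p) * y).sum()
    fp = (p * (1 - y)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma

def weighted_bce(p, y, pos_weight=5.0, eps=1e-7):
    """Binary cross-entropy with extra weight on the rare fluid class."""
    p = np.clip(p, eps, 1 - eps)
    return -(pos_weight * y * np.log(p) + (1 - y) * np.log(1 - p)).mean()

def combined_loss(p, y, w=(0.4, 0.4, 0.2)):
    """Weighted sum of the three terms (illustrative weights)."""
    return (w[0] * dice_loss(p, y)
            + w[1] * focal_tversky_loss(p, y)
            + w[2] * weighted_bce(p, y))

y = np.zeros((32, 32)); y[10:20, 10:20] = 1.0   # toy fluid mask
good = np.where(y == 1, 0.95, 0.05)             # confident, correct
bad = np.where(y == 1, 0.05, 0.95)              # confident, wrong
print(combined_loss(good, y) < combined_loss(bad, y))  # True
```

Mixing overlap-based terms (Dice, Tversky) with a pixel-wise term (weighted BCE) is a common way to balance region-level accuracy against per-pixel class imbalance.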

    COVID TV-UNet: Segmenting COVID-19 Chest CT Images Using Connectivity Imposed U-Net

    The novel coronavirus disease (COVID-19) pandemic has caused a major outbreak in more than 200 countries around the world, severely affecting the health and lives of many people globally. As of mid-July 2020, more than 12 million people were infected and more than 570,000 deaths had been reported. Computed tomography (CT) images can be used as an alternative to the time-consuming RT-PCR test to detect COVID-19. In this work we propose a segmentation framework to detect chest regions in CT images that are infected by COVID-19. We use an architecture similar to the U-Net model and train it to detect ground-glass regions at the pixel level. As the infected regions tend to form a connected component (rather than randomly distributed pixels), we add a suitable regularization term to the loss function to promote connectivity of the segmentation map for COVID-19 pixels. 2D anisotropic total variation is used for this purpose, and the proposed model is therefore called "TV-UNet". Through experimental results on a relatively large-scale CT segmentation dataset of around 900 images, we show that adding this new regularization term yields a 2% gain in overall segmentation performance over the U-Net model. Our experimental analysis, ranging from visual evaluation of the predicted segmentation results to quantitative assessment of segmentation performance (precision, recall, Dice score, and mIoU), demonstrated great ability to identify COVID-19-associated regions of the lungs, achieving a mIoU rate of over 99% and a Dice score of around 86%.
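The 2D anisotropic total-variation term that gives TV-UNet its name penalises neighbour-to-neighbour differences in the predicted probability map, so scattered false-positive pixels cost more than one connected blob of the same area. A minimal numpy sketch (the regularization weight `lam` is an illustrative assumption, not the paper's value):

```python
import numpy as np

def anisotropic_tv(p):
    """2D anisotropic total variation of a probability map p:
    the sum of absolute horizontal and vertical neighbour differences.
    Low TV favours spatially connected, piecewise-constant maps."""
    dv = np.abs(np.diff(p, axis=0)).sum()   # vertical differences
    dh = np.abs(np.diff(p, axis=1)).sum()   # horizontal differences
    return dv + dh

def tv_regularised_loss(base_loss, p, lam=0.01):
    """Add the TV term to any pixel-wise base loss (illustrative lam)."""
    return base_loss + lam * anisotropic_tv(p)

# a connected blob has far lower TV than scattered pixels of equal area
blob = np.zeros((32, 32)); blob[10:20, 10:20] = 1.0
rng = np.random.default_rng(0)
scattered = np.zeros(32 * 32)
scattered[rng.choice(1024, 100, replace=False)] = 1.0
scattered = scattered.reshape(32, 32)
print(anisotropic_tv(blob), anisotropic_tv(scattered))  # 40.0, much larger
```

Both masks contain exactly 100 foreground pixels; only the connectivity differs, which is precisely what this regularizer rewards.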

    A new convolutional neural network based on combination of circlets and wavelets for macular OCT classification

    Artificial intelligence (AI) algorithms, encompassing machine learning and deep learning, can assist ophthalmologists in early detection of various ocular abnormalities through the analysis of retinal optical coherence tomography (OCT) images. Despite considerable progress in these algorithms, several limitations persist in medical imaging fields, where a lack of data is a common issue. Accordingly, specific image processing techniques, such as time–frequency transforms, can be employed in conjunction with AI algorithms to enhance diagnostic accuracy. This research investigates the influence of non-data-adaptive time–frequency transforms, specifically X-lets, on the classification of OCT B-scans. For this purpose, each B-scan was transformed using every considered X-let individually, and all the sub-bands were utilized as the input for a designed 2D Convolutional Neural Network (CNN) to extract optimal features, which were subsequently fed to the classifiers. Evaluating per-class accuracy shows that the use of the 2D Discrete Wavelet Transform (2D-DWT) yields superior outcomes for normal cases, whereas the circlet transform outperforms other X-lets for abnormal cases characterized by circles in their retinal structure (due to the accumulation of fluid). As a result, we propose a novel transform named CircWave by concatenating all sub-bands from the 2D-DWT and the circlet transform. The objective is to enhance the per-class accuracy of both normal and abnormal cases simultaneously. Our findings show that classification results based on the CircWave transform outperform those derived from original images or any individual transform. Furthermore, Grad-CAM class activation visualization for B-scans reconstructed from CircWave sub-bands highlights a greater emphasis on circular formations in abnormal cases and straight lines in normal cases, in contrast to the focus on irrelevant regions in original B-scans. 
To assess the generalizability of our method, we applied it to a second dataset obtained from a different imaging system, achieving promising accuracies of 94.5% and 90% for the first and second datasets, respectively, which are comparable with results from previous studies. The proposed CNN based on CircWave sub-bands (CircWaveNet) not only produces superior outcomes but also offers more interpretable results, with a heightened focus on features crucial for ophthalmologists.
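The core construction of CircWave is channel-wise concatenation of 2D-DWT sub-bands with circlet sub-bands. Below is a minimal numpy sketch of that concatenation, using a hand-rolled one-level Haar DWT; the circlet transform itself is not reproduced here, so its sub-bands are passed in as precomputed (placeholder) arrays.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT: returns the four sub-bands
    (approximation LL, horizontal LH, vertical HL, diagonal HH)."""
    a = img[0::2, :] + img[1::2, :]       # row-pair sums
    d = img[0::2, :] - img[1::2, :]       # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def circwave_input(img, circlet_subbands):
    """Concatenate 2D-DWT sub-bands with circlet sub-bands along the
    channel axis, in the spirit of the proposed CircWave input.
    circlet_subbands: precomputed (here: placeholder) (H/2, W/2) arrays."""
    dwt = np.stack(haar_dwt2(img), axis=0)        # (4, H/2, W/2)
    circ = np.stack(circlet_subbands, axis=0)     # (k, H/2, W/2)
    return np.concatenate([dwt, circ], axis=0)

rng = np.random.default_rng(0)
bscan = rng.random((64, 64))
fake_circlets = [rng.random((32, 32)) for _ in range(3)]
x = circwave_input(bscan, fake_circlets)
print(x.shape)  # (7, 32, 32)
```

The resulting multi-channel tensor is what the designed 2D CNN would consume; the number of circlet channels (three here) is an arbitrary choice for the sketch.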

    Diagnosis of multiple sclerosis by detecting asymmetry within the retina using a similarity-based neural network

    Multiple sclerosis (MS) is a chronic neurological disorder that targets the central nervous system, causing demyelination and neural disruption, which can include retinal nerve damage leading to visual disturbances. The purpose of this study is to demonstrate the capability to automatically diagnose MS by detecting asymmetry within the retina, using a similarity-based neural network trained on optical coherence tomography images. This work investigates the feasibility of a learning-based system that accurately detects the presence of MS from pairs of left and right retina images. We also justify the suitability of a Siamese neural network for this task and present its strengths through experimental evaluation. We train a Siamese neural network to detect MS and assess its performance using a test dataset from the same distribution as well as an out-of-distribution dataset, which simulates an external dataset captured under different environmental conditions. Our experimental results demonstrate that a Siamese neural network can attain accuracy levels of up to 0.932 on both an in-distribution test dataset and a simulated external dataset. Our model detects MS more accurately than standard neural network architectures, demonstrating its feasibility in medical applications for the early, cost-effective detection of MS.
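The Siamese idea — one shared-weight branch embedding each eye, with the distance between the two embeddings measuring inter-eye asymmetry — can be sketched with a toy embedding. The embedding function, its weights, and the decision threshold below are all placeholder assumptions standing in for the paper's trained CNN branch.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 256))  # shared embedding weights

def embed(x):
    """Shared-weight embedding branch (toy stand-in for a CNN):
    both retinas pass through the SAME weights W."""
    h = np.tanh(W @ x.ravel())
    return h / (np.linalg.norm(h) + 1e-9)

def asymmetry_score(left, right):
    """Euclidean distance between embeddings of the left and right
    retina; a large distance means high inter-eye asymmetry."""
    return float(np.linalg.norm(embed(left) - embed(right)))

def predict_ms(left, right, threshold=0.5):
    """Illustrative decision rule; the threshold is an assumption,
    not a value reported in the study."""
    return asymmetry_score(left, right) > threshold

left = rng.random((16, 16))
sym_right = left + rng.normal(scale=0.01, size=(16, 16))  # near-symmetric pair
asym_right = rng.random((16, 16))                         # unrelated scan
print(asymmetry_score(left, sym_right)
      < asymmetry_score(left, asym_right))  # True
```

Because both inputs share one set of weights, the network learns a similarity metric rather than a per-class decision boundary, which is why this architecture suits asymmetry detection between paired images.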

    Application of ImageJ in Optical Coherence Tomography Angiography (OCT-A): A Literature Review

    Background. This study aimed to review the literature on the application of ImageJ to optical coherence tomography angiography (OCT-A) images. Methods. A general search was performed in the PubMed, Google Scholar, and Scopus databases. The authors evaluated each of the selected articles to assess how ImageJ was implemented for OCT-A images. Results. ImageJ can aid in reducing artifacts, enhancing image quality to increase the accuracy of processing and analysis, processing and analyzing images, establishing diagnostic criteria, and generating comparable parameters that are widely used in retinal and choroidal studies: parameters that assess perfusion of the layers (vessel density (VD), skeletonized density (SD), and vessel length density (VLD)), parameters that evaluate the structure of the layers (fractal dimension (FD), vessel density index (VDI), and lacunarity (LAC)), and the foveal avascular zone (FAZ). With its numerous plugins and options for image processing and analysis, it can save time on large datasets while producing reliable results. However, different studies have implemented distinct binarization and thresholding techniques, resulting in disparate outcomes and incomparable parameters; uniform methodology is required to obtain comparable data across studies that employ diverse processing and analysis techniques. Conclusion. Researchers and professionals might benefit from using ImageJ because of how quickly and correctly it processes and analyzes images. It is highly adaptable and powerful software that allows users to evaluate images in a variety of ways. A diverse range of methodologies exists for analyzing OCT-A images with ImageJ, but a standardized strategy is imperative to ensure the reliability and consistency of the method for research purposes.
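The review's central caveat — that vessel density depends heavily on the chosen binarization method — is easy to see in code. Below is a sketch of one common pipeline (Otsu thresholding, one of ImageJ's auto-threshold methods, followed by vessel density as the foreground fraction); the toy slab and its geometry are illustrative, not data from any reviewed study.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: pick the grey level that maximises the
    between-class variance of the resulting two pixel classes."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability
    centers = (edges[:-1] + edges[1:]) / 2.0
    mu = np.cumsum(p * centers)               # class-0 cumulative mean
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return centers[np.argmax(sigma_b)]

def vessel_density(octa_slab):
    """Binarise an en-face OCT-A slab and report vessel density (VD):
    the fraction of pixels classified as vessel after thresholding."""
    binary = octa_slab > otsu_threshold(octa_slab)
    return binary.mean()

# toy slab: bright "vessels" on a dark background
rng = np.random.default_rng(0)
slab = rng.normal(0.2, 0.05, size=(64, 64))
slab[::8, :] = rng.normal(0.8, 0.05, size=(8, 64))  # horizontal vessels
vd = vessel_density(slab)
print(round(vd, 3))  # 0.125 (8 of 64 rows are vessel)
```

Swapping Otsu for another thresholding rule changes `binary`, and therefore VD, on the same image — which is exactly why the review calls for a standardized methodology.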