
    A Deep Learning Approach to Denoise Optical Coherence Tomography Images of the Optic Nerve Head

    Purpose: To develop a deep learning approach to denoise optical coherence tomography (OCT) B-scans of the optic nerve head (ONH). Methods: Volume scans consisting of 97 horizontal B-scans were acquired through the center of the ONH using a commercial OCT device (Spectralis) for both eyes of 20 subjects. For each eye, single-frame (without signal averaging) and multi-frame (75x signal averaging) volume scans were obtained. A custom deep learning network was then designed and trained with 2,328 "clean B-scans" (multi-frame B-scans) and their corresponding "noisy B-scans" (clean B-scans + Gaussian noise) to denoise the single-frame B-scans. The performance of the denoising algorithm was assessed qualitatively, and quantitatively on 1,552 B-scans using the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and mean structural similarity index (MSSIM) metrics. Results: The proposed algorithm successfully denoised unseen single-frame OCT B-scans. The denoised B-scans were qualitatively similar to their corresponding multi-frame B-scans, with enhanced visibility of the ONH tissues. The mean SNR increased from 4.02 ± 0.68 dB (single-frame) to 8.14 ± 1.03 dB (denoised). For all the ONH tissues, the mean CNR increased from 3.50 ± 0.56 (single-frame) to 7.63 ± 1.81 (denoised). The MSSIM increased from 0.13 ± 0.02 (single-frame) to 0.65 ± 0.03 (denoised) when compared with the corresponding multi-frame B-scans. Conclusions: Our deep learning algorithm can denoise a single-frame OCT B-scan of the ONH in under 20 ms, thus offering a framework to obtain superior-quality OCT B-scans with reduced scanning times and minimal patient discomfort.
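    The training-pair construction (clean scan + additive Gaussian noise) and an SNR-in-dB metric can be sketched as follows. The ROI positions, noise level, and the exact `snr_db` definition are illustrative assumptions, not the paper's protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

def snr_db(signal_roi, background_roi):
    # One common SNR definition: mean signal over background std, in decibels
    return 20 * np.log10(signal_roi.mean() / background_roi.std())

# Hypothetical "clean" multi-frame B-scan: a bright tissue band on a dark background
clean = np.zeros((96, 96))
clean[40:60, :] = 1.0

# Training pair: the "noisy B-scan" is the clean scan plus additive Gaussian noise
noisy = clean + rng.normal(0.0, 0.2, size=clean.shape)

tissue = (slice(40, 60), slice(None))     # signal ROI
background = (slice(0, 20), slice(None))  # noise-only ROI

print(f"single-frame-style SNR: {snr_db(noisy[tissue], noisy[background]):.1f} dB")
```

    A denoising network trained on such pairs would be evaluated by computing this metric before and after denoising, as in the reported 4.02 dB to 8.14 dB improvement.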

    Advancement in Denoising MRI Images via 3D-GAN Model with Direction Coupled Magnitude Histogram Consistency Loss

    Diagnostic interpretation of medical images is essential for recognizing and understanding a wide range of medical conditions. This work introduces the Direction Coupled Magnitude Histogram (DCMH), a novel structural image descriptor designed to improve diagnostic accuracy. A distinguishing feature of DCMH is its ability to capture edge information at any orientation within a frame, allowing subtle detail to be expressed through a variety of gradient features. The proposed method applies a cartoon-texture-based textural loss and a DCMH-based structural loss to identify and analyse structural and textural information during denoising, a major contribution that improves image interpretability by emphasizing structural aspects inherent to the image. The proposed DCMH_3D_GAN achieves exceptional average results, with an SSIM of 0.972995 and a PSNR of 48.74, highlighting the effectiveness of the DCMH-based method in enhancing medical image diagnosis. The capacity of the structural loss to improve image interpretability and support more precise diagnosis is a clear advantage. The DCMH-based approach, which combines texture and structural loss components, is a promising development in healthcare image processing that can enable better patient care through enhanced diagnostic capability.
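    One plausible reading of a direction-coupled magnitude histogram — accumulating gradient magnitudes into bins indexed by gradient direction — can be sketched as below. This is an interpretation for illustration only; the paper's exact DCMH definition may differ:

```python
import numpy as np

def direction_magnitude_histogram(img, n_bins=8):
    # Gradients via central differences; np.gradient returns (d/d-rows, d/d-cols)
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)  # direction in (-pi, pi]
    # Map each direction to a bin, then accumulate gradient magnitude per bin
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-12)  # normalize to a distribution

# Synthetic frame with a single vertical edge: all gradients point along +x
img = np.zeros((32, 32))
img[:, 16:] = 1.0
h = direction_magnitude_histogram(img)
print(h)  # nearly all mass falls in the bin containing direction 0 rad
```

    A structural loss could then compare such histograms between the denoised output and the reference image, penalizing loss of oriented edge content.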

    Breast Tumor Identification in Ultrafast MRI Using Temporal and Spatial Information

    Purpose: To investigate the feasibility of using deep learning methods to differentiate benign from malignant breast lesions in ultrafast MRI with both temporal and spatial information. Methods: A total of 173 single breasts of 122 women (151 examinations) with lesions above 5 mm were retrospectively included. A total of 109 out of 173 lesions were benign. Maximum intensity projection (MIP) images were generated from each of the 14 contrast-enhanced T1-weighted acquisitions in the ultrafast MRI scan. A 2D convolutional neural network (CNN) and a long short-term memory (LSTM) network were employed to extract morphological and temporal features, respectively. The 2D CNN model was trained with the MIPs from the last four acquisitions to ensure the visibility of the lesions, while the LSTM model took MIPs of an entire scan as input. The performance of each model and their combination were evaluated with 100-times repeated stratified four-fold cross-validation. Those models were then compared with models developed with standard DCE-MRI, following the same data split. Results: In the differentiation between benign and malignant lesions, the ultrafast MRI-based 2D CNN achieved a mean AUC of 0.81 ± 0.06, and the LSTM network achieved a mean AUC of 0.78 ± 0.07; their combination showed a mean AUC of 0.83 ± 0.06 in the cross-validation. The mean AUC values were significantly higher for ultrafast MRI-based models than standard DCE-MRI-based models. Conclusion: Deep learning models developed with ultrafast breast MRI achieved higher performance than standard DCE-MRI for malignancy discrimination. The improved AUC values of the combined models indicate an added value of temporal information extracted by the LSTM model in breast lesion characterization.
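    The MIP preprocessing and the two-branch input split described above can be sketched with numpy; the array shapes and variable names are illustrative, not the study's actual data dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ultrafast series: 14 contrast-enhanced volumes of shape (slices, H, W)
series = rng.random((14, 32, 64, 64))

# One MIP per time point: project the maximum intensity along the slice axis
mips = series.max(axis=1)  # shape (14, 64, 64)

# The morphology branch (2D CNN) would see only the last four MIPs,
# where lesion enhancement is most visible ...
cnn_input = mips[-4:]
# ... while the temporal branch (LSTM) consumes the full 14-step sequence
lstm_input = mips

print(cnn_input.shape, lstm_input.shape)  # (4, 64, 64) (14, 64, 64)
```

    Feature vectors from the two branches would then be fused for the final benign/malignant prediction.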

    Joint Frequency and Image Space Learning for Fourier Imaging

    We demonstrate that neural network layers that explicitly combine frequency and image feature representations are a versatile building block for analysis of imaging data acquired in the frequency space. Our work is motivated by the challenges arising in MRI acquisition, where the signal is a corrupted Fourier transform of the desired image. The joint learning schemes proposed and analyzed in this paper enable both correction of artifacts native to the frequency space and manipulation of image space representations to reconstruct coherent image structures. This is in contrast to most current deep learning approaches for image reconstruction, which apply learned data manipulations solely in the frequency space or solely in the image space. We demonstrate the advantages of joint convolutional learning on three diverse tasks: image reconstruction from undersampled acquisitions, motion correction, and image denoising in brain and knee MRI. We further demonstrate advantages of the joint learning approaches across training schemes using a wide variety of loss functions. Unlike purely image based and purely frequency based architectures, the joint models produce consistently high quality output images across all tasks and datasets. Joint image and frequency space feature representations promise to significantly improve modeling and reconstruction of images acquired in the frequency space. Our code is available at https://github.com/nalinimsingh/interlacer.
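    The two domains such a joint layer connects can be illustrated with a minimal numpy sketch: an undersampled k-space and its zero-filled image-space view, assuming a simple every-other-row sampling mask (the learned convolutions that a joint layer interleaves between these transforms are omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical image and its fully sampled k-space (2D Fourier transform)
img = rng.random((64, 64))
kspace = np.fft.fft2(img)

# Undersample: keep every other k-space row (a crude acceleration-2 mask)
mask = np.zeros_like(kspace, dtype=bool)
mask[::2, :] = True
undersampled = np.where(mask, kspace, 0)

# Image-space view of the corrupted data: the zero-filled inverse FFT
# exhibits the aliasing that image-space layers must learn to remove
zero_filled = np.fft.ifft2(undersampled).real

# Transforming back recovers the same k-space data: the two domains are
# consistent views that a joint layer can alternate between
back_to_k = np.fft.fft2(zero_filled)
print(np.allclose(back_to_k, undersampled))  # True
```

    A joint layer applies learned convolutions in both representations rather than committing to one domain, which is the core idea the abstract describes.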

    Automated Diagnosis of Cardiovascular Diseases from Cardiac Magnetic Resonance Imaging Using Deep Learning Models: A Review

    In recent years, cardiovascular diseases (CVDs) have become one of the leading causes of mortality globally. CVDs appear with minor symptoms and progressively worsen. Most people experience symptoms such as exhaustion, shortness of breath, ankle swelling, and fluid retention at the onset of CVD. Coronary artery disease (CAD), arrhythmia, cardiomyopathy, congenital heart defect (CHD), mitral regurgitation, and angina are the most common CVDs. Clinical methods such as blood tests, electrocardiography (ECG) signals, and medical imaging are the most effective methods for detecting CVDs. Among these diagnostic methods, cardiac magnetic resonance imaging (CMR) is increasingly used to diagnose and monitor disease, plan treatment, and predict CVDs. Despite the advantages of CMR data, CVD diagnosis remains challenging for physicians because of the large number of slices per scan, low contrast, and other factors. To address these issues, deep learning (DL) techniques have been applied to the diagnosis of CVDs from CMR data, and much research is currently being conducted in this field. This review provides an overview of the studies on CVD detection using CMR images and DL techniques. The introduction examines CVD types, diagnostic methods, and the most important medical imaging techniques. Subsequent sections present investigations into detecting CVDs from CMR images with the most significant DL methods, and discuss the challenges of diagnosing CVDs from CMR data. The discussion section then reviews the results of this survey, and future work on CVD diagnosis from CMR images with DL techniques is outlined. The most important findings of this study are presented in the conclusion.

    An Interactive Automation for Human Biliary Tree Diagnosis Using Computer Vision

    The biliary tree is a network of tubes that connects the liver to the gallbladder, an organ located just beneath it. The bile duct is the major tube in the biliary tree. Dilatation of a bile duct is a key indicator of more serious problems in the human body, such as stones and tumors, which are frequently caused by the pancreas or the papilla of Vater. Detecting bile duct dilatation can be challenging for inexperienced or untrained medical personnel in many circumstances, and even professionals cannot detect bile duct dilatation with the naked eye. This research presents a unique vision-based model for initial diagnosis of the biliary tree. To segment the biliary tree from magnetic resonance imaging (MRI) scans, the framework used several image processing approaches. After the image's region of interest was segmented, numerous calculations were performed on it to extract 10 features, including major and minor axes, bile duct area, biliary tree area, compactness, and some textural features (contrast, mean, variance, and correlation). This study used a database of 200 MRI images from King Hussein Medical Center in Amman, Jordan: 100 normal cases and 100 patients with dilated bile ducts. After the features were extracted, various classifiers were used to determine the patients' condition (normal or dilated). The findings demonstrate that the extracted features perform well with all classifiers in terms of accuracy and area under the curve. This study is unique in that it uses an automated approach to segment the biliary tree from MRI images and scientifically correlates the retrieved features with biliary tree status, which has not been done before in the literature.
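    A few of the listed features (area, mean, variance, and ellipse-equivalent major/minor axes) can be computed from a binary segmentation mask as follows. The definitions here are standard illustrative choices, not the study's exact formulas:

```python
import numpy as np

def region_features(img, mask):
    # Area: pixel count of the segmented region
    area = int(mask.sum())
    roi = img[mask]
    # First-order texture features over the region's intensities
    mean, variance = roi.mean(), roi.var()
    # Major/minor axes from eigenvalues of the pixel-coordinate covariance,
    # scaled to the axis lengths of an equivalent ellipse
    ys, xs = np.nonzero(mask)
    cov = np.cov(np.stack([ys, xs]))
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    major, minor = 4 * np.sqrt(evals)
    return {"area": area, "mean": mean, "variance": variance,
            "major_axis": major, "minor_axis": minor}

# Hypothetical segmented region: a filled 20 x 40 rectangle
img = np.random.default_rng(3).random((64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 10:50] = True
print(region_features(img, mask))
```

    Such a feature vector, extended with compactness, contrast, and correlation, would then be fed to the classifiers for the normal/dilated decision.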

    Intelligent Imaging of Perfusion Using Arterial Spin Labelling

    Arterial spin labelling (ASL) is a powerful magnetic resonance imaging technique, which can be used to noninvasively measure perfusion in the brain and other organs of the body. Promising research results show how ASL might be used in stroke, tumours, dementia and paediatric medicine, in addition to many other areas. However, significant obstacles remain to prevent widespread use: ASL images have an inherently low signal to noise ratio, and are susceptible to corrupting artifacts from motion and other sources. The objective of the work in this thesis is to move towards an "intelligent imaging" paradigm: one in which the image acquisition, reconstruction and processing are mutually coupled, and tailored to the individual patient. This thesis explores how ASL images may be improved at several stages of the imaging pipeline. We review the relevant ASL literature, exploring details of ASL acquisitions, parameter inference and artifact post-processing. We subsequently present original work: we use the framework of Bayesian experimental design to generate optimised ASL acquisitions, we present original methods to improve parameter inference through anatomically-driven modelling of spatial correlation, and we describe a novel deep learning approach for simultaneous denoising and artifact filtering. Using a mixture of theoretical derivation, simulation results and imaging experiments, the work in this thesis presents several new approaches for ASL, and hopefully will shape future research and future ASL usage