
    Using CycleGANs for effectively reducing image variability across OCT devices and improving retinal fluid segmentation

    Optical coherence tomography (OCT) has become the most important imaging modality in ophthalmology. A substantial amount of research has recently been devoted to developing machine learning (ML) models for the identification and quantification of pathological features in OCT images. Among the several sources of variability that ML models have to deal with, a major factor is the acquisition device, which can limit a model's generalizability. In this paper, we propose to reduce the image variability across different OCT devices (Spectralis and Cirrus) by using CycleGAN, an unsupervised, unpaired image transformation algorithm. The usefulness of this approach is evaluated in the setting of retinal fluid segmentation, namely intraretinal cystoid fluid (IRC) and subretinal fluid (SRF). First, we train a segmentation model on images acquired with a source OCT device. Then we evaluate the model on (1) source, (2) target, and (3) transformed versions of the target OCT images. The presented transformation strategy achieves an F1 score of 0.4 (0.51) for IRC (SRF) segmentation. Compared with traditional transformation approaches, this is an F1 score gain of 0.2 (0.12).
    Comment: * Contributed equally (order was defined by flipping a coin). Accepted for publication in the IEEE International Symposium on Biomedical Imaging (ISBI) 2019.
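    A minimal sketch of the evaluation protocol described above, with illustrative names only (the paper's own code is not shown): score a segmentation model with F1 on source, target, and transformed target volumes.

```python
import numpy as np

def f1_score(pred, truth):
    """F1 (equivalently Dice) overlap between two binary masks."""
    tp = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * tp / denom if denom else 1.0

def evaluate(segment, volumes, masks):
    """Mean F1 of a segmentation model over a set of annotated volumes.

    segment: callable mapping a volume to a predicted binary mask;
    in the paper's setting it would be the model trained on the
    source device, applied to source, target, or CycleGAN-transformed
    target OCT images.
    """
    scores = [f1_score(segment(v), m) for v, m in zip(volumes, masks)]
    return float(np.mean(scores))
```

    Comparing `evaluate` on the raw target volumes against the CycleGAN-transformed ones would reproduce the kind of F1 gap the abstract reports.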

    Synthetic Data Augmentation using GAN for Improved Liver Lesion Classification

    In this paper, we present a data augmentation method that generates synthetic medical images using Generative Adversarial Networks (GANs). We propose a training scheme that first uses classical data augmentation to enlarge the training set and then further enlarges the data size and its diversity by applying GAN techniques for synthetic data augmentation. Our method is demonstrated on a limited dataset of computed tomography (CT) images of 182 liver lesions (53 cysts, 64 metastases and 65 hemangiomas). The classification performance using only classic data augmentation yielded 78.6% sensitivity and 88.4% specificity. By adding the synthetic data augmentation, the results significantly increased to 85.7% sensitivity and 92.4% specificity.
    Comment: To be presented at IEEE International Symposium on Biomedical Imaging (ISBI), 201
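    The two-stage enlargement scheme above can be sketched as follows; the "generator" here is a noise-perturbation stand-in for a trained GAN, and all names and sizes are illustrative, not from the paper:

```python
import numpy as np

def classic_augment(images):
    """Stage 1, classical augmentation: original plus flips and a rotation."""
    out = []
    for img in images:
        out.extend([img, np.fliplr(img), np.flipud(img), np.rot90(img)])
    return out

def synthetic_augment(images, n_synthetic, rng):
    """Stage 2, synthetic augmentation.

    A trained GAN generator would map noise vectors to new lesion
    images; as a placeholder we draw real images and perturb them.
    """
    picks = rng.choice(len(images), size=n_synthetic)
    return [images[i] + rng.normal(0.0, 0.05, images[i].shape) for i in picks]

rng = np.random.default_rng(0)
real = [np.zeros((64, 64)) for _ in range(10)]    # stand-in lesion ROIs
enlarged = classic_augment(real)                  # stage 1: 10 -> 40
enlarged += synthetic_augment(enlarged, 100, rng) # stage 2: 40 -> 140
```

    The classifier would then be trained on `enlarged`, mirroring the classic-then-synthetic ordering the abstract describes.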

    Deep Multi-Modal Classification of Intraductal Papillary Mucinous Neoplasms (IPMN) with Canonical Correlation Analysis

    Pancreatic cancer has the poorest prognosis among all cancer types. Intraductal Papillary Mucinous Neoplasms (IPMNs) are radiographically identifiable precursors to pancreatic cancer; hence, early detection and precise risk assessment of IPMN are vital. In this work, we propose a Convolutional Neural Network (CNN) based computer-aided diagnosis (CAD) system to perform IPMN diagnosis and risk assessment using multi-modal MRI. In our proposed approach, we use minimum and maximum intensity projections to ease the annotation variations among different slices and types of MRI. Then, we present a CNN to obtain a deep feature representation for each MRI modality (T1-weighted and T2-weighted). In the final step, we employ canonical correlation analysis (CCA) to perform fusion at the feature level, yielding discriminative canonical correlation features. The extracted features are used for classification. Our results indicate significant improvements over other potential approaches to this important problem. The proposed approach does not require explicit sample balancing when positive and negative examples are imbalanced. To the best of our knowledge, our study is the first to automatically diagnose IPMN using multi-modal MRI.
    Comment: Accepted for publication in IEEE International Symposium on Biomedical Imaging (ISBI) 201
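    A compact, numpy-only sketch of CCA-based feature-level fusion as described above. The CNN feature matrices are random stand-ins, and this minimal CCA (SVD of the whitened cross-covariance) is an assumption for illustration, not the paper's implementation:

```python
import numpy as np

def cca(x, y, n_components):
    """Minimal CCA: whiten each view via SVD, then the SVD of the
    whitened cross-covariance gives canonical directions and the
    canonical correlations."""
    xc = x - x.mean(axis=0)
    yc = y - y.mean(axis=0)
    ux, sx, vxt = np.linalg.svd(xc, full_matrices=False)
    uy, sy, vyt = np.linalg.svd(yc, full_matrices=False)
    u, corr, vt = np.linalg.svd(ux.T @ uy)
    a = vxt.T @ np.diag(1.0 / sx) @ u[:, :n_components]
    b = vyt.T @ np.diag(1.0 / sy) @ vt.T[:, :n_components]
    return xc @ a, yc @ b, corr[:n_components]

rng = np.random.default_rng(0)
t1_feats = rng.normal(size=(100, 32))  # hypothetical T1-weighted CNN features
t2_feats = rng.normal(size=(100, 32))  # hypothetical T2-weighted CNN features
x_c, y_c, corr = cca(t1_feats, t2_feats, n_components=8)
fused = np.concatenate([x_c, y_c], axis=1)  # canonical correlation features
```

    The `fused` matrix plays the role of the discriminative canonical correlation features fed to the final classifier.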

    Features of the NIH ATLAS small animal PET scanner and its use with a coaxial small animal volume CT scanner

    Proceeding of: 2002 IEEE International Symposium on Biomedical Imaging, Washington, D.C., USA, July 7-10, 2002.
    ATLAS (Advanced Technology Laboratory Animal Scanner), a small animal PET scanner designed to image animals the size of rats and mice, is about to enter service on the NIH campus in Bethesda, Maryland. This system is the first small animal PET scanner with a depth-of-interaction capability and the first to use iterative resolution recovery algorithms, rather than conventional filtered back projection, for "production" image reconstruction. ATLAS is also proximate to, and coaxial with, a high resolution small animal CT scanner. When fully integrated, spatially registered PET and CT images of each animal will be used to correct the emission data for radiation attenuation and to aid in target identification. In this report we describe some of the technical and functional features of this system and illustrate how these features are used in an actual small animal imaging study.
    This work was supported in part by projects FIS 00/0036 and 11I PRICIT Comunidad de Madrid, Spain.
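    The CT-based attenuation correction mentioned above can be illustrated with the standard exponential attenuation law; the water coefficient at 511 keV is a textbook value, and the function name and discretization are illustrative, not from this report:

```python
import math

def attenuation_correction_factor(mu_values, step_cm):
    """Attenuation correction factor (ACF) for one PET line of response:
    exp of the line integral of CT-derived linear attenuation
    coefficients, approximated here by a Riemann sum."""
    return math.exp(sum(mu_values) * step_cm)

# e.g. 10 cm of water-equivalent tissue sampled every 1 mm,
# with mu ~ 0.096 /cm at 511 keV
acf = attenuation_correction_factor([0.096] * 100, 0.1)
```

    Each measured coincidence count along that line of response would be multiplied by `acf` before reconstruction.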

    Quasi-Exact Helical Cone Beam Reconstruction for Micro CT

    A cone beam micro-CT system is set up to collect truncated helical cone beam data. This system includes a micro-focal X-ray source, a precision computer-controlled X-Y-Z-theta stage, and an image intensifier coupled to a large-format CCD detector. The helical scanning mode is implemented by rotating and translating the stage while keeping the X-ray source and detector stationary. A chunk of bone and a mouse leg are scanned, and quasi-exact reconstruction is performed using the approach proposed in J. Hu et al. (2001). This approach introduced the original idea of accessory paths with upper and lower virtual detectors having infinite axial extent. It has a filtered backprojection structure, which is desirable in practice, and possesses the advantages of being simple to implement and computationally efficient compared to other quasi-exact helical cone beam algorithms for the long object problem.
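    A small sketch of the helical geometry described above, assuming the stage's rotation-plus-translation is expressed as an equivalent source helix in the object frame (all parameter names illustrative):

```python
import math

def helical_source_positions(n_views, radius_mm, pitch_mm, n_turns):
    """Equivalent source positions for a helical scan implemented by
    rotating and translating the object stage: in the object frame the
    source traces a helix of the given radius and pitch (axial advance
    per full turn)."""
    positions = []
    for i in range(n_views):
        theta = 2.0 * math.pi * n_turns * i / n_views
        positions.append((radius_mm * math.cos(theta),
                          radius_mm * math.sin(theta),
                          pitch_mm * theta / (2.0 * math.pi)))
    return positions
```

    A reconstruction algorithm with a filtered backprojection structure would filter each projection and backproject it along the rays from these positions.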

    Automatic Segmentation of the Left Ventricle in Cardiac CT Angiography Using Convolutional Neural Network

    Accurate delineation of the left ventricle (LV) is an important step in evaluation of cardiac function. In this paper, we present an automatic method for segmentation of the LV in cardiac CT angiography (CCTA) scans. Segmentation is performed in two stages. First, a bounding box around the LV is detected using a combination of three convolutional neural networks (CNNs). Subsequently, to obtain the segmentation of the LV, voxel classification is performed within the defined bounding box using a CNN. The study included CCTA scans of sixty patients: fifty scans were used to train the CNNs for LV localization, five were used to train LV segmentation, and the remaining five were used for testing the method. Automatic segmentation resulted in an average Dice coefficient of 0.85 and a mean absolute surface distance of 1.1 mm. The results demonstrate that automatic segmentation of the LV in CCTA scans using voxel classification with convolutional neural networks is feasible.
    Comment: This work has been published as: Zreik, M., Leiner, T., de Vos, B. D., van Hamersvelt, R. W., Viergever, M. A., Išgum, I. (2016, April). Automatic segmentation of the left ventricle in cardiac CT angiography using convolutional neural networks. In Biomedical Imaging (ISBI), 2016 IEEE 13th International Symposium on (pp. 40-43). IEEE.
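    The two-stage pipeline above (bounding-box detection, then voxel classification inside the box) and the Dice metric can be sketched as follows; the CNN stages are replaced by placeholder masks, and all names are illustrative:

```python
import numpy as np

def bounding_box(mask):
    """Tight 3-D bounding box around the nonzero voxels of a mask
    (the role the three localization CNNs play in the paper)."""
    return tuple(slice(int(a.min()), int(a.max()) + 1)
                 for a in np.nonzero(mask))

def dice(a, b):
    """Dice overlap between two binary segmentations."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Stage 1: crop the scan to the detected box;
# Stage 2: a voxel-classification CNN would run inside the crop.
volume = np.zeros((8, 8, 8), dtype=bool)
volume[2:5, 3:6, 1:4] = True      # stand-in LV region
box = bounding_box(volume)
cropped = volume[box]
```

    Restricting the second-stage classifier to `cropped` is what keeps the expensive voxel classification confined to the neighborhood of the LV.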