
    Towards Adversarial Retinal Image Synthesis

    Synthesizing images of the eye fundus is a challenging task that has previously been approached by formulating complex models of the anatomy of the eye, from which new images can be generated by sampling a suitable parameter space. In this work, we propose a method that learns to synthesize eye fundus images directly from data. For that, we pair true eye fundus images with their respective vessel trees by means of a vessel segmentation technique. These pairs are then used to learn a mapping from a binary vessel tree to a new retinal image. For this purpose, we use a recent image-to-image translation technique based on the idea of adversarial learning. Experimental results show that the original and the generated images are visually different in terms of their global appearance, in spite of sharing the same vessel tree. Additionally, a quantitative quality analysis of the synthetic retinal images confirms that the produced images retain a high proportion of the quality of the true image set.
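    The image-to-image translation technique referenced here is a pix2pix-style conditional GAN, whose generator is trained with an adversarial term plus a weighted L1 reconstruction term. A minimal numpy sketch of that combined objective follows; the function names and the weight `lam=100.0` are illustrative assumptions, not the paper's actual implementation.

    ```python
    import numpy as np

    def l1_loss(generated, target):
        """Pixel-wise L1 term that keeps the output close to the paired target image."""
        return np.mean(np.abs(generated - target))

    def adversarial_loss(d_scores):
        """Non-saturating generator loss: push the discriminator's
        scores on generated images toward 1 (i.e. 'real')."""
        eps = 1e-8  # numerical guard against log(0)
        return -np.mean(np.log(d_scores + eps))

    def generator_objective(generated, target, d_scores, lam=100.0):
        """Combined pix2pix-style objective: adversarial term + lambda * L1 term."""
        return adversarial_loss(d_scores) + lam * l1_loss(generated, target)
    ```

    In training, the vessel tree would be the generator's input and the paired real fundus image the `target`; here only the loss arithmetic is shown.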

    ScarGAN: Chained Generative Adversarial Networks to Simulate Pathological Tissue on Cardiovascular MR Scans

    Medical images with specific pathologies are scarce, but a large amount of data is usually required for a deep convolutional neural network (DCNN) to achieve good accuracy. We consider the problem of segmenting the left ventricular (LV) myocardium on late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) scans, of which only some have scar tissue. We propose ScarGAN to simulate scar tissue on healthy myocardium using chained generative adversarial networks (GANs). Our novel approach factorizes the simulation process into three steps: 1) a mask generator to simulate the shape of the scar tissue; 2) a domain-specific heuristic to produce the initial simulated scar tissue from the simulated shape; 3) a refining generator to add details to the simulated scar tissue. Unlike other approaches that generate samples from scratch, we simulate scar tissue on normal scans, resulting in highly realistic samples. We show that experienced radiologists are unable to distinguish between real and simulated scar tissue. Training a U-Net with additional scans with scar tissue simulated by ScarGAN increases the percentage of scar pixels correctly included in LV myocardium prediction from 75.9% to 80.5%.
    Comment: 12 pages, 5 figures. To appear in MICCAI DLMIA 201
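    The three-step factorization above can be sketched as a simple chain of callables. All names and the toy stand-in functions below are illustrative placeholders, not the paper's API; in ScarGAN the first and third stages are trained GANs and the second is a hand-crafted heuristic.

    ```python
    import numpy as np

    def simulate_scar(scan, mask_generator, heuristic, refiner):
        """Chain the three ScarGAN-style stages on a healthy scan.

        mask_generator: simulates the scar shape as a mask      (step 1)
        heuristic:      paints an initial scar from that mask   (step 2)
        refiner:        adds realistic detail to the scar       (step 3)
        """
        mask = mask_generator(scan)
        initial = heuristic(scan, mask)
        return refiner(initial, mask)

    # Toy stand-ins to exercise the chain on a 4x4 "scan":
    scan = np.zeros((4, 4))
    out = simulate_scar(
        scan,
        mask_generator=lambda s: np.ones_like(s),    # everywhere-scar mask
        heuristic=lambda s, m: s + 0.5 * m,          # flat initial intensity
        refiner=lambda x, m: x + 0.1 * m,            # small detail adjustment
    )
    ```

    Because the chain operates on real healthy scans rather than generating images from scratch, everything outside the mask is left untouched, which is what makes the simulated samples realistic.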

    Generative Modeling for Retinal Fundus Image Synthesis

    Medical imaging datasets typically do not contain many training images and are usually deficient for training deep learning networks. We propose a deep residual variational auto-encoder and a generative adversarial network that can generate a synthetic retinal fundus image dataset with corresponding blood vessel annotations. Our initial experiments produce results with higher scores than the state of the art, verifying that the structural statistics of our generated images are compatible with real fundus images. The successful application of generative models to synthetic medical data will not only help to mitigate the small-dataset problem but will also address the privacy concerns associated with medical datasets.
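    The variational auto-encoder half of such a hybrid relies on the reparameterization trick and a KL regularizer on the latent code. A minimal numpy sketch of those two pieces follows, assuming a diagonal-Gaussian encoder; this is generic VAE machinery, not the specific architecture of the paper.

    ```python
    import numpy as np

    def reparameterize(mu, log_var, rng):
        """Sample z = mu + sigma * eps, so the sampling step stays
        differentiable with respect to the encoder outputs mu, log_var."""
        eps = rng.standard_normal(mu.shape)
        return mu + np.exp(0.5 * log_var) * eps

    def kl_divergence(mu, log_var):
        """KL(q(z|x) || N(0, I)) for a diagonal Gaussian encoder;
        zero exactly when the encoder outputs a standard normal."""
        return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
    ```

    In the full model, the sampled `z` would be decoded into a fundus image (and its vessel annotation), while an adversarial discriminator sharpens the decoder's outputs.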