
    SESAME: Semantic Editing of Scenes by Adding, Manipulating or Erasing Objects

    Recent advances in image generation gave rise to powerful tools for semantic image editing. However, existing approaches either operate on a single image or require an abundance of additional information; they are not capable of handling the complete set of editing operations, that is, the addition, manipulation, or removal of semantic concepts. To address these limitations, we propose SESAME, a novel generator-discriminator pair for Semantic Editing of Scenes by Adding, Manipulating or Erasing objects. In our setup, the user provides the semantic labels of the areas to be edited and the generator synthesizes the corresponding pixels. In contrast to previous methods that employ a discriminator that trivially concatenates semantics and image as an input, the SESAME discriminator is composed of two input streams that independently process the image and its semantics, using the latter to manipulate the results of the former. We evaluate our model on a diverse set of datasets and report state-of-the-art performance on two tasks: (a) image manipulation and (b) image generation conditioned on semantic labels.
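
    A minimal sketch of how such a two-stream discriminator could be organized, assuming PyTorch; the layer sizes, the sigmoid-gated fusion, and the class count are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative two-stream discriminator in the spirit of SESAME: one stream
# encodes the image, one encodes the semantic layout, and the semantics
# modulate the image features instead of naive input concatenation.
import torch
import torch.nn as nn

class TwoStreamDiscriminator(nn.Module):
    def __init__(self, img_channels=3, num_classes=35, width=64):
        super().__init__()
        # Image stream: plain strided convolutions.
        self.img_stream = nn.Sequential(
            nn.Conv2d(img_channels, width, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(width, width * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
        )
        # Semantics stream: processes the one-hot label map independently.
        self.sem_stream = nn.Sequential(
            nn.Conv2d(num_classes, width, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(width, width * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
        )
        self.head = nn.Conv2d(width * 2, 1, 3, padding=1)

    def forward(self, image, semantics):
        f_img = self.img_stream(image)
        f_sem = self.sem_stream(semantics)
        # Semantics manipulate the image features; element-wise sigmoid
        # gating is chosen here purely for illustration.
        return self.head(f_img * torch.sigmoid(f_sem))

# Usage: patch-level real/fake scores for a 256x256 image and label map.
d = TwoStreamDiscriminator()
scores = d(torch.randn(1, 3, 256, 256), torch.randn(1, 35, 256, 256))
```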

    GAN-based multiple adjacent brain MRI slice reconstruction for unsupervised Alzheimer's disease diagnosis

    Unsupervised learning can discover various unseen diseases, relying on large-scale unannotated medical images of healthy subjects. To this end, unsupervised methods reconstruct a single medical image and detect outliers either in the learned feature space or from a high reconstruction loss. However, without considering continuity between multiple adjacent slices, they cannot directly discriminate diseases composed of an accumulation of subtle anatomical anomalies, such as Alzheimer's Disease (AD). Moreover, no study has shown how unsupervised anomaly detection is associated with disease stages. Therefore, we propose a two-step method using Generative Adversarial Network-based multiple adjacent brain MRI slice reconstruction to detect AD at various stages: (Reconstruction) Wasserstein loss with Gradient Penalty + L1 loss, trained on 3 healthy slices to reconstruct the next 3, reconstructs unseen healthy/AD cases; (Diagnosis) average/maximum loss (e.g., L2 loss) per scan discriminates them by comparing the reconstructed and ground-truth images. The results show that we can reliably detect AD at a very early stage with an Area Under the Curve (AUC) of 0.780, while detecting AD at a late stage much more accurately with an AUC of 0.917; since our method is fully unsupervised, it should also discover and alert on any anomalies, including rare diseases. (10 pages, 4 figures; accepted to Lecture Notes in Bioinformatics (LNBI) as a volume in the Springer series.)
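
    A minimal sketch of the diagnosis step, assuming NumPy and scikit-learn; the array shapes, the scan_score helper, and the toy data are illustrative assumptions, with the reconstruction step (WGAN-GP + L1) taken as already done.

```python
# Score each scan by the average or maximum L2 reconstruction error over its
# slices, then measure how well that score separates AD from healthy scans.
import numpy as np
from sklearn.metrics import roc_auc_score

def scan_score(reconstructed, ground_truth, reduce="max"):
    """reconstructed, ground_truth: (n_slices, H, W) arrays for one scan."""
    per_slice_l2 = np.mean((reconstructed - ground_truth) ** 2, axis=(1, 2))
    return per_slice_l2.max() if reduce == "max" else per_slice_l2.mean()

# Toy data for illustration: one score per scan; labels 1 = AD, 0 = healthy.
rng = np.random.default_rng(0)
scores = [scan_score(rng.normal(size=(30, 64, 64)),
                     rng.normal(size=(30, 64, 64))) for _ in range(20)]
labels = [0] * 10 + [1] * 10
print("AUC:", roc_auc_score(labels, scores))
```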

    A Novel Deep Learning Approach for Liver MRI Classification and HCC Detection

    This work proposes a deep learning algorithm based on the Convolutional Neural Network (CNN) architecture to detect HepatoCellular Carcinoma (HCC) from liver DCE-MRI (Dynamic Contrast-Enhanced MRI) sequences. Deep learning is an artificial intelligence (AI) technique that aims to imitate how the human brain works by training on data and creating models used for decision-making; it is now widely used for various clinical problems. To diagnose HCC, radiologists consider three different phases of contrast injection (before injection, the arterial phase, and the portal phase). This paper presents an approach that offers a parallel preprocessing algorithm and performs HCC detection and localization in MRI images via a CNN. The resulting CNN model reached an accuracy of 90% in both the arterial and portal phases using MRI patches of 64×64 pixels. It also decreases false detections compared with our previous works. We aim to further improve this accuracy in future work.
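
    A minimal sketch of a CNN patch classifier for 64×64 patches, assuming PyTorch; the layer configuration is an assumption for illustration, not the authors' network.

```python
# Small CNN that maps a grayscale 64x64 DCE-MRI patch to two class logits
# (HCC vs. non-HCC patch).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 16 -> 8
    nn.Flatten(),
    nn.Linear(128 * 8 * 8, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

# One 64x64 patch in, two class logits out.
logits = model(torch.randn(1, 1, 64, 64))
```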

    Semi-supervised Medical Image Segmentation via Learning Consistency Under Transformations

    The scarcity of labeled data often limits the application of supervised deep learning techniques for medical image segmentation. This has motivated the development of semi-supervised techniques that learn from a mixture of labeled and unlabeled images. In this paper, we propose a novel semi-supervised method that, in addition to supervised learning on labeled training images, learns to predict segmentations that are consistent under a given class of transformations on both labeled and unlabeled images. More specifically, in this work we explore learning equivariance to elastic deformations. We implement this through: 1) a Siamese architecture with two identical branches, each of which receives a differently transformed image, and 2) a composite loss function with a supervised segmentation loss term and an unsupervised term that encourages segmentation consistency between the predictions of the two branches. We evaluate the method on a public dataset of chest radiographs with segmentations of anatomical structures using 5-fold cross-validation. The proposed method reaches significantly higher segmentation accuracy than supervised learning alone. This is due to learning transformation consistency on both labeled and unlabeled images, with the latter contributing the most. We achieve performance comparable to state-of-the-art chest X-ray segmentation methods while using substantially fewer labeled images.
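
    A minimal sketch of the composite loss, assuming PyTorch; `net` (the shared-weight segmentation network) and `transform` (one pre-sampled elastic deformation, applied identically in both branches) are placeholder names.

```python
# Composite loss: a supervised term on labeled images plus an unsupervised
# consistency (equivariance) term on both labeled and unlabeled images.
import torch
import torch.nn.functional as F

def composite_loss(net, transform, x_labeled, y_labeled, x_unlabeled, lam=1.0):
    # Supervised segmentation loss on labeled images only.
    sup = F.cross_entropy(net(x_labeled), y_labeled)

    # Consistency term: predicting a deformed image should match deforming
    # the prediction of the original image. `transform` must apply the SAME
    # pre-sampled elastic deformation in both branches.
    cons = 0.0
    for x in (x_labeled, x_unlabeled):
        p1 = net(transform(x))   # branch 1: deform input, then predict
        p2 = transform(net(x))   # branch 2: predict, then deform output
        cons = cons + F.mse_loss(p1.softmax(dim=1), p2.softmax(dim=1))

    return sup + lam * cons
```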

    Infinite Brain MR Images: PGGAN-Based Data Augmentation for Tumor Detection

    Due to the lack of available annotated medical images, accurate computer-assisted diagnosis requires intensive data augmentation (DA) techniques, such as geometric/intensity transformations of original images; however, those transformed images intrinsically have a similar distribution to the original ones, leading to limited performance improvement. To fill this gap in the real image distribution, we synthesize brain contrast-enhanced magnetic resonance (MR) images, realistic but completely different from the original ones, using generative adversarial networks (GANs). This study exploits progressive growing of GANs (PGGANs), a multistage generative training method, to generate original-sized 256 × 256 MR images for convolutional neural network-based brain tumor detection, which is challenging via conventional GANs; difficulties arise from unstable GAN training at high resolution and the variety of tumors in size, location, shape, and contrast. Our preliminary results show that this novel PGGAN-based DA method can achieve a promising performance improvement, when combined with classical DA, in tumor detection and also in other medical imaging tasks.
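
    A minimal sketch of the augmentation recipe the abstract describes (classical DA combined with GAN-synthesized images), assuming PyTorch/torchvision; `generator` stands in for a trained PGGAN and its `latent_dim` attribute is a placeholder.

```python
# Build a training batch from classically augmented real MR slices plus
# slices sampled from a trained GAN generator.
import torch
import torchvision.transforms as T

classical_da = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomRotation(10),
])

def training_batch(real_images, generator, n_synthetic):
    """real_images: (B, 1, 256, 256) tensor of real MR slices."""
    augmented = classical_da(real_images)            # classical DA
    with torch.no_grad():
        z = torch.randn(n_synthetic, generator.latent_dim)
        synthetic = generator(z)                     # GAN-synthesized slices
    return torch.cat([augmented, synthetic], dim=0)
```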