Towards Realistic Ultrasound Fetal Brain Imaging Synthesis
Prenatal ultrasound imaging is the first-choice modality for assessing fetal health. Medical image datasets for AI and ML methods must be diverse (e.g. diagnoses, diseases, pathologies, scanners, demographics, etc.); however, there are few public fetal ultrasound imaging datasets, owing to insufficient amounts of clinical data, patient privacy, the rare occurrence of abnormalities in general practice, and the limited number of experts available for data collection and validation. To address
such data scarcity, we proposed generative adversarial network (GAN)-based models, a diffusion-super-resolution-GAN and a transformer-based-GAN, to synthesise images of fetal ultrasound brain planes from a single public dataset. We reported that the GAN-based methods can generate 256x256-pixel fetal ultrasound trans-cerebellum brain image planes with stable training losses, with the diffusion-super-resolution-GAN achieving lower FID values (average 7.04, lowest 5.09 at epoch 10) than the transformer-based-GAN (average 36.02, lowest 28.93 at epoch 60). These results illustrate the potential of GAN-based methods to synthesise realistic high-resolution ultrasound images, motivating future work on other fetal brain planes, anatomies, and devices, as well as the need for a pool of experts to evaluate synthesised images. Code, data and other resources to reproduce this work are available at \url{https://github.com/budai4medtech/midl2023}.
Comment: 3 pages, 1 figure
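The FID values reported above compare Gaussian statistics of Inception-network features extracted from real and synthesised images. A minimal sketch of the metric is given below; it assumes the feature means and covariances have already been computed (e.g. from Inception-v3 activations) and is illustrative rather than the authors' evaluation code.

    import numpy as np
    from scipy import linalg

    def frechet_inception_distance(mu_real, cov_real, mu_fake, cov_fake):
        """FID between two Gaussians fitted to Inception features of
        real and synthesised images."""
        diff = mu_real - mu_fake
        # Matrix square root of the product of the two covariance matrices.
        covmean, _ = linalg.sqrtm(cov_real.dot(cov_fake), disp=False)
        # Numerical error can introduce a small imaginary component.
        if np.iscomplexobj(covmean):
            covmean = covmean.real
        return diff.dot(diff) + np.trace(cov_real + cov_fake - 2.0 * covmean)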
Ea-GANs: Edge-Aware Generative Adversarial Networks for Cross-Modality MR Image Synthesis
Magnetic resonance (MR) imaging is a widely used medical imaging protocol that can be configured to provide different contrasts between the tissues of the human body. By setting different scanning parameters, each MR imaging modality reflects the unique visual characteristics of the scanned body part, benefiting the subsequent analysis from multiple perspectives. To exploit the complementary information from multiple imaging modalities, cross-modality MR image synthesis has attracted increasing research interest recently. However, most existing methods only focus on minimizing pixel/voxel-wise intensity differences and ignore the textural details of image content structure, which degrades the quality of synthesized images. In this paper, we propose edge-aware generative adversarial networks (Ea-GANs) for cross-modality MR image synthesis. Specifically, we integrate edge information, which reflects the textural structure of image content and depicts the boundaries of different objects in images, to address this limitation. Corresponding to different learning strategies, two frameworks are proposed, i.e., a generator-induced Ea-GAN (gEa-GAN) and a discriminator-induced Ea-GAN (dEa-GAN). The gEa-GAN incorporates the edge information via its generator, while the dEa-GAN does so in both the generator and the discriminator, so that edge similarity is also adversarially learned. In addition, the proposed Ea-GANs are 3D-based and utilize hierarchical features to capture contextual information. The experimental results demonstrate that the proposed Ea-GANs, especially the dEa-GAN, outperform multiple state-of-the-art methods for cross-modality MR image synthesis in both qualitative and quantitative measures. Moreover, the dEa-GAN also shows excellent generality on generic image synthesis tasks using benchmark datasets of facades, maps, and cityscapes.
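As a rough illustration of how edge information can enter the generator objective, the sketch below adds a Sobel-edge L1 term to a standard adversarial plus voxel-wise L1 loss. It is a 2D, single-channel simplification with hypothetical loss weights, not the paper's implementation; the actual Ea-GANs operate in 3D, and the dEa-GAN additionally feeds edge maps to the discriminator so that edge similarity is learned adversarially.

    import torch
    import torch.nn.functional as F

    # Fixed 3x3 Sobel kernels used to extract edge maps from images.
    SOBEL_X = torch.tensor([[-1., 0., 1.],
                            [-2., 0., 2.],
                            [-1., 0., 1.]]).view(1, 1, 3, 3)
    SOBEL_Y = SOBEL_X.transpose(2, 3)

    def sobel_edges(img):
        """Gradient-magnitude edge map of a single-channel batch (N, 1, H, W)."""
        gx = F.conv2d(img, SOBEL_X.to(img), padding=1)
        gy = F.conv2d(img, SOBEL_Y.to(img), padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

    def generator_loss(fake, target, adv_score, lambda_l1=100.0, lambda_edge=100.0):
        """Adversarial term plus voxel-wise L1 and edge-map L1 terms."""
        adv = F.binary_cross_entropy_with_logits(adv_score, torch.ones_like(adv_score))
        l1 = F.l1_loss(fake, target)
        edge = F.l1_loss(sobel_edges(fake), sobel_edges(target))
        return adv + lambda_l1 * l1 + lambda_edge * edge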