Anatomy-Aware Self-supervised Fetal MRI Synthesis from Unpaired Ultrasound Images
Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the
developing brain but is not suitable for anomaly screening; for this,
ultrasound (US) is employed. While expert sonographers are adept at reading US
images, MR images are much easier for non-experts to interpret. Hence in this
paper we
seek to produce images with MRI-like appearance directly from clinical US
images. Our own clinical motivation is to seek a way to communicate US findings
to patients or clinical professionals unfamiliar with US, but in medical image
analysis such a capability is potentially useful, for instance, for US-MRI
registration or fusion. Our model is self-supervised and end-to-end trainable.
Specifically, based on an assumption that the US and MRI data share a similar
anatomical latent space, we first utilise an extractor to determine shared
latent features, which are then used for data synthesis. Since paired data was
unavailable for our study (and rare in practice), we propose to enforce the
distributions to be similar instead of employing pixel-wise constraints, by
adversarial learning in both the image domain and latent space. Furthermore, we
propose an adversarial structural constraint to regularise the anatomical
structures between the two modalities during the synthesis. A cross-modal
attention scheme is proposed to leverage non-local spatial correlations. The
feasibility of the approach to produce realistic-looking MR images is
demonstrated quantitatively and through a qualitative evaluation against real
fetal MR images.

Comment: MICCAI-MLMI 201
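The adversarial distribution matching in both the image domain and the latent space can be sketched with a least-squares GAN objective. This is one common choice of adversarial loss and an illustration only; the abstract does not specify the exact loss, and the function names and scores below are hypothetical:

```python
import numpy as np

def lsgan_losses(d_real, d_fake):
    # Least-squares GAN objectives: the discriminator pushes real scores
    # toward 1 and fake scores toward 0; the generator pushes fake scores
    # toward 1, matching the two distributions without pixel-wise pairing.
    d_loss = 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)
    g_loss = 0.5 * np.mean((d_fake - 1.0) ** 2)
    return d_loss, g_loss

# Distribution matching is enforced twice: once on synthesised images and
# once on the shared latent features (the scores here are illustrative).
d_img, g_img = lsgan_losses(np.array([0.9, 1.1]), np.array([0.2, 0.1]))
d_lat, g_lat = lsgan_losses(np.array([1.0, 0.8]), np.array([0.3, 0.0]))
total_g = g_img + g_lat
```

Because the constraint acts on discriminator scores rather than pixels, it only requires that synthesised MR images be statistically indistinguishable from real ones, which is what makes unpaired training possible.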
DC-cycleGAN: Bidirectional CT-to-MR Synthesis from Unpaired Data
Magnetic resonance (MR) and computed tomography (CT) images are two typical
types of medical images that provide mutually-complementary information for
accurate clinical diagnosis and treatment. However, obtaining both images may
be limited due to some considerations such as cost, radiation dose and modality
missing. Recently, medical image synthesis has attracted growing research
interest as a way to cope with this limitation. In this paper, we propose a
bidirectional learning model, denoted as dual contrast cycleGAN (DC-cycleGAN),
to synthesize medical images from unpaired data. Specifically, a dual contrast
loss is introduced into the discriminators to indirectly build constraints
between real source and synthetic images, using samples from the source domain
as negatives and enforcing the synthetic images to fall far away from the
source domain. In addition, cross-entropy and structural
similarity index (SSIM) are integrated into the DC-cycleGAN in order to
consider both the luminance and structure of samples when synthesizing images.
The experimental results indicate that DC-cycleGAN is able to produce promising
results as compared with other cycleGAN-based medical image synthesis methods
such as cycleGAN, RegGAN, DualGAN, and NiceGAN. The code will be available at
https://github.com/JiayuanWang-JW/DC-cycleGAN
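The dual contrast idea, treating source-domain samples as negatives so that synthetic images are pushed away from the source domain, can be sketched roughly as follows. This is a margin-based illustration, not the paper's exact formulation; the function name and the `margin` hyperparameter are assumptions:

```python
import numpy as np

def dual_contrast_term(d_synth, d_source, margin=1.0):
    # Pull synthetic-image discriminator scores toward the "real" target
    # (1.0 here, LSGAN-style)...
    pull = np.mean((d_synth - 1.0) ** 2)
    # ...while pushing them at least `margin` away from the scores the
    # discriminator assigns to source-domain samples used as negatives.
    push = np.mean(np.maximum(0.0, margin - np.abs(d_synth - d_source)))
    return pull + push
```

In the full model this contrastive term would be combined with the SSIM and cross-entropy terms the abstract mentions, so that both the luminance/structure of samples and the domain separation are constrained.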
TPSDicyc: Improved deformation invariant cross-domain medical image synthesis
Cycle-consistent generative adversarial network (CycleGAN) has been widely
used for cross-domain medical image synthesis tasks, particularly because of
its ability to deal with unpaired data. However, most CycleGAN-based synthesis
methods cannot achieve good alignment between the synthesized images and data
from the source domain, even with additional image alignment losses. This is
because the CycleGAN generator network can encode the relative deformations
and noise associated with different domains. This can be detrimental for
downstream applications that rely on the synthesized images, such as
generating pseudo-CT for PET-MR attenuation correction. In this paper, we
present a deformation-invariant model based on the deformation-invariant
CycleGAN (DicycleGAN) architecture and a spatial transformation network (STN)
using thin-plate splines (TPS). The proposed method can be trained with
unpaired and unaligned data, and generates synthesised images aligned with the
source data. Robustness to the presence of relative deformations between data
from the source and target domains has been evaluated through experiments on
multi-sequence brain MR data and multi-modality abdominal CT and MR data.
Experimental results demonstrate that our method achieves better alignment
between the source and target data while maintaining superior image quality
compared with several state-of-the-art CycleGAN-based methods.
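A thin-plate-spline warp, the interpolant behind the STN mentioned above, can be sketched in 2D as follows. This is a minimal numerical illustration under standard TPS theory, not the paper's network; one spline is fitted per displacement component, and the control points and displacements below are hypothetical:

```python
import numpy as np

def _tps_kernel(d2):
    # Radial basis U(r) proportional to r^2 log r^2, with U(0) = 0.
    return np.where(d2 == 0.0, 0.0, d2 * np.log(np.maximum(d2, 1e-12)))

def tps_fit(ctrl, vals):
    # Solve [[K, P], [P.T, 0]] [w; a] = [vals; 0] for the spline that
    # interpolates `vals` at the 2-D control points `ctrl` (shape (n, 2)).
    n = len(ctrl)
    K = _tps_kernel(((ctrl[:, None] - ctrl[None]) ** 2).sum(-1))
    P = np.hstack([np.ones((n, 1)), ctrl])
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.concatenate([vals, np.zeros(3)])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]          # radial weights w, affine part a

def tps_eval(pts, ctrl, w, a):
    # Evaluate the fitted spline: affine term plus radial contributions.
    d2 = ((pts[:, None] - ctrl[None]) ** 2).sum(-1)
    return a[0] + pts @ a[1:] + _tps_kernel(d2) @ w

# One displacement component defined at four hypothetical control points.
ctrl = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dx = np.array([0.0, 0.1, -0.1, 0.05])
w, a = tps_fit(ctrl, dx)
```

By construction the spline reproduces the control displacements exactly while staying smooth in between; in an STN these evaluations would be computed on a dense grid to resample the moving image.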
Self-Supervised Ultrasound to MRI Fetal Brain Image Synthesis
Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the
developing brain but is not suitable for second-trimester anomaly screening,
for which ultrasound (US) is employed. Although expert sonographers are adept
at reading US images, MR images which closely resemble anatomical images are
much easier for non-experts to interpret. Thus in this paper we propose to
generate MR-like images directly from clinical US images. In medical image
analysis such a capability is potentially useful as well, for instance for
automatic US-MRI registration and fusion. The proposed model is end-to-end
trainable and self-supervised without any external annotations. Specifically,
based on an assumption that the US and MRI data share a similar anatomical
latent space, we first utilise a network to extract the shared latent features,
which are then used for MRI synthesis. Since paired data is unavailable for our
study (and rare in practice), pixel-level constraints are infeasible to apply.
We instead propose to enforce the distributions to be statistically
indistinguishable, by adversarial learning in both the image domain and feature
space. To regularise the anatomical structures between US and MRI during
synthesis, we further propose an adversarial structural constraint. A new
cross-modal attention technique is proposed to utilise non-local spatial
information, by encouraging multi-modal knowledge fusion and propagation. We
extend the approach to consider the case where 3D auxiliary information (e.g.,
3D neighbours and a 3D location index) from volumetric data is also available,
and show that this improves image synthesis. The proposed approach is evaluated
quantitatively and qualitatively against real fetal MR images and other
synthesis approaches, demonstrating the feasibility of synthesising realistic
MR images.

Comment: IEEE Transactions on Medical Imaging 202
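The cross-modal attention idea, letting every spatial location in one modality attend over all locations in the other, can be sketched as scaled dot-product attention over flattened feature maps. This is a generic non-local formulation, not necessarily the paper's exact scheme, and the feature shapes are toy values:

```python
import numpy as np

def cross_modal_attention(us_feat, mr_feat):
    # us_feat, mr_feat: (N, C) feature vectors at N flattened spatial
    # positions. Each US position attends over all MR positions, so the
    # fusion is non-local: information can propagate across the whole map.
    scores = us_feat @ mr_feat.T / np.sqrt(us_feat.shape[1])
    scores -= scores.max(axis=1, keepdims=True)     # softmax stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)         # each row sums to 1
    return us_feat + attn @ mr_feat                 # residual fusion

rng = np.random.default_rng(0)
us = rng.standard_normal((16, 8))    # 16 positions, 8 channels (toy sizes)
mr = rng.standard_normal((16, 8))
fused = cross_modal_attention(us, mr)
```

The residual form keeps the US features intact and adds an attention-weighted summary of the other modality, which is one simple way to realise the "knowledge fusion and propagation" described in the abstract.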