Acquiring images of the same anatomy with multiple contrasts
increases the diversity of diagnostic information available in an MR exam. Yet,
scan time limitations may prohibit acquisition of certain contrasts, and images
for some contrasts may be corrupted by noise and artifacts. In such cases, the
ability to synthesize unacquired or corrupted contrasts from the remaining
contrasts can improve diagnostic utility. For multi-contrast synthesis, current
methods learn a nonlinear intensity transformation between the source and
target images, either via nonlinear regression or deterministic neural
networks. These methods, however, can suffer from loss of high-spatial-frequency
information in synthesized images. Here we propose a new approach for
multi-contrast MRI synthesis based on conditional generative adversarial
networks. The proposed approach preserves high-frequency details via an
adversarial loss; and it offers enhanced synthesis performance via a pixel-wise
loss for registered multi-contrast images and a cycle-consistency loss for
unregistered images; a hedged sketch of these loss terms is given below.
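In illustrative notation, writing G for the generator, D for the discriminator, F for an assumed reverse mapping, and \lambda_{pix}, \lambda_{cyc} for assumed weighting hyperparameters (none of these symbols are taken from the text), the loss terms may take the following form:

% Adversarial loss: G maps source contrast x toward target contrast y;
% D scores (source, target) pairs as real or synthesized.
\mathcal{L}_{adv} = \mathbb{E}_{x,y}\big[\log D(x, y)\big] + \mathbb{E}_{x}\big[\log\big(1 - D(x, G(x))\big)\big]
% Pixel-wise loss for registered multi-contrast pairs, weighted by \lambda_{pix}:
\mathcal{L}_{pix} = \lambda_{pix}\,\mathbb{E}_{x,y}\big[\lVert y - G(x)\rVert_{1}\big]
% Cycle-consistency loss for unregistered images, with reverse mapping F:
\mathcal{L}_{cyc} = \lambda_{cyc}\Big(\mathbb{E}_{x}\big[\lVert F(G(x)) - x\rVert_{1}\big] + \mathbb{E}_{y}\big[\lVert G(F(y)) - y\rVert_{1}\big]\Big)

Information from neighboring cross-sections is utilized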
to further improve synthesis quality, as illustrated by the minimal sketch below.
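As a minimal PyTorch-style sketch of this idea, assuming a stack of three neighboring source slices as input channels and a hypothetical weight lambda_pix for the pixel-wise term (the architectures below are illustrative placeholders, not the networks used in the paper):

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a stack of neighboring source-contrast slices to one target slice."""
    def __init__(self, in_slices=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_slices, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores (source stack, target slice) pairs as real or synthesized."""
    def __init__(self, in_slices=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_slices + 1, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, kernel_size=4, stride=1, padding=1),  # patch-level logits
        )

    def forward(self, source, target):
        return self.net(torch.cat([source, target], dim=1))

G, D = Generator(), Discriminator()
adv_loss, pix_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_pix = 100.0  # assumed weight, not taken from the paper

# Toy batch: three neighboring T1 slices in, the corresponding T2 slice out.
source = torch.randn(4, 3, 64, 64)
target = torch.randn(4, 1, 64, 64)

fake = G(source)
logits = D(source, fake)
# Generator update: adversarial term (preserves high-frequency detail) plus
# a pixel-wise L1 term for registered image pairs.
loss_G = adv_loss(logits, torch.ones_like(logits)) + lambda_pix * pix_loss(fake, target)
loss_G.backward()

Feeding adjacent slices as extra input channels is one simple way to expose cross-sectional context to a two-dimensional network.

Demonstrations on T1- and T2-weighted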
images from healthy subjects and patients clearly indicate the superior
performance of the proposed approach compared to previous state-of-the-art
methods. Our synthesis approach can help improve quality and versatility of
multi-contrast MRI exams without the need for prolonged examinations.