Training deep learning models on brain MRI is often plagued by small sample
size, which can lead to biased training or overfitting. One potential solution
is to synthetically generate realistic MRIs via generative models such as
Generative Adversarial Network (GAN). However, existing GANs for synthesizing
realistic brain MRIs largely rely on image-to-image conditioned transformations
requiring extensive, well-curated pairs of MRI samples for training. On the
other hand, unconditioned GAN models (i.e., those generating MRI from random
noise) are unstable during training and tend to produce blurred images during
inference. Here, we propose an efficient strategy that generates high-fidelity
3D brain MRI via Diffusion Probabilistic Model (DPM). To this end, we train a
conditional DPM with attention to generate an MRI sub-volume (a set of slices
at arbitrary locations) conditioned on another subset of slices from the same
MRI. By computing attention weights from slice indices and using a mask to
encode the target and conditional slices, the model is able to learn the
long-range dependency across distant slices with limited computational
resources. After training, the model can progressively synthesize a new 3D
brain MRI by generating the first subset of slices from random noise and
conditionally generating subsequent slices. Based on 1262 t1-weighted MRIs from
three neuroimaging studies, our experiments demonstrate that the proposed
method can generate high-quality 3D MRIs that share the same distribution as
real MRIs and are more realistic than those produced by GAN-based models.
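The conditioning scheme described above (a binary mask distinguishing conditional from target slices, with attention weights derived from slice indices) can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the function name `build_subvolume`, the sinusoidal index encoding, and the dot-product attention over index encodings are all assumptions chosen for concreteness.

```python
import numpy as np

def build_subvolume(volume, cond_idx, target_idx):
    """Assemble a training sub-volume: slices at arbitrary locations,
    a binary mask marking conditional (1) vs. target (0) slices, and
    attention weights computed from the slices' index encodings.
    Hypothetical sketch; the paper's exact encoding may differ."""
    idx = np.array(sorted(cond_idx + target_idx))
    slices = volume[idx]                                   # (S, H, W)
    mask = np.isin(idx, cond_idx).astype(np.float32)       # (S,) 1 = conditional
    # Sinusoidal positional encoding of each absolute slice index (dim 8)
    dims = np.arange(4)
    freqs = 1.0 / (100.0 ** (dims / 4.0))
    ang = idx[:, None] * freqs[None, :]                    # (S, 4)
    pos = np.concatenate([np.sin(ang), np.cos(ang)], axis=1)  # (S, 8)
    # Slice-to-slice attention weights from the index encodings (softmax rows)
    logits = pos @ pos.T
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return slices, mask, pos, attn

# Toy 32-slice volume: condition on every 8th slice, predict the ones between
volume = np.random.rand(32, 64, 64).astype(np.float32)
slices, mask, pos, attn = build_subvolume(
    volume, cond_idx=[0, 8, 16, 24], target_idx=[4, 12, 20, 28])
```

Because the attention weights depend only on slice indices, the same mechanism relates slices at arbitrary distances without attending over the full volume, which is what keeps the computational cost bounded.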
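The progressive inference procedure (first subset sampled from pure noise, later subsets conditioned on already-synthesized slices) can be outlined as below. The loop structure, the chunk size, and the stand-in `sample_fn` are assumptions for illustration; in the actual method `sample_fn` would run the trained DPM's reverse diffusion process.

```python
import numpy as np

def progressive_synthesis(sample_fn, depth=32, chunk=4):
    """Synthesize a full volume slice-subset by slice-subset.
    The first chunk is generated with no conditioning (from noise);
    each subsequent chunk is conditioned on the previous one.
    `sample_fn(cond, idx)` is a placeholder for reverse diffusion."""
    volume = np.zeros((depth, 64, 64), dtype=np.float32)
    first = list(range(chunk))
    volume[first] = sample_fn(cond=None, idx=first)        # unconditional start
    for start in range(chunk, depth, chunk):
        idx = list(range(start, min(start + chunk, depth)))
        cond = volume[start - chunk:start]                 # previously generated
        volume[idx] = sample_fn(cond=cond, idx=idx)        # conditional step
    return volume

# Placeholder "sampler": returns noise shaped like the requested slices,
# standing in for the trained conditional DPM.
rng = np.random.default_rng(0)
def dummy_sample(cond, idx):
    return rng.standard_normal((len(idx), 64, 64)).astype(np.float32)

vol = progressive_synthesis(dummy_sample)
```

The design choice here is that each chunk only ever conditions on its immediate predecessor, so memory stays constant in the volume depth while coherence propagates through the chain of conditional generations.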