An automated localization, segmentation and reconstruction framework for fetal brain MRI

Abstract

Reconstructing a high-resolution (HR) volume from motion-corrupted and sparsely acquired stacks plays an increasingly important role in fetal brain Magnetic Resonance Imaging (MRI) studies. Existing reconstruction methods are time-consuming and often require user interaction to localize and extract the brain from several stacks of 2D slices. In this paper, we propose a fully automatic framework for fetal brain reconstruction that consists of three stages: (1) brain localization based on a coarse segmentation of a down-sampled input image by a Convolutional Neural Network (CNN), (2) fine segmentation by a second CNN trained with a multi-scale loss function, and (3) a novel single-parameter, outlier-robust super-resolution reconstruction (SRR) for HR visualization in the standard anatomical space. We validate our framework on images from fetuses with varying degrees of ventriculomegaly associated with spina bifida. Experiments show that each step of the proposed pipeline outperforms state-of-the-art methods in both segmentation and reconstruction. Overall, we report automatic SRR results that compare favorably with those obtained from manual, labor-intensive brain segmentations. This potentially unlocks the use of automatic fetal brain reconstruction in clinical practice.
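
To make the multi-scale loss mentioned in stage (2) more concrete, the following is a minimal PyTorch sketch of one plausible formulation: the segmentation network emits brain-mask logits at several resolutions, and each scale is compared against a correspondingly down-sampled ground-truth mask. The function name, per-scale weights, and choice of binary cross-entropy are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def multi_scale_loss(preds, target, weights=(1.0, 0.5, 0.25)):
    """Hypothetical multi-scale segmentation loss.

    preds   -- list of logit tensors, shape (N, 1, D_i, H_i, W_i), finest first
    target  -- ground-truth brain mask, shape (N, 1, D, H, W), values in {0, 1}
    weights -- assumed per-scale weighting (not taken from the paper)
    """
    total = 0.0
    for pred, w in zip(preds, weights):
        # Down-sample the reference mask to the resolution of this prediction.
        scaled = F.interpolate(target, size=pred.shape[2:], mode="nearest")
        total = total + w * F.binary_cross_entropy_with_logits(pred, scaled)
    return total

if __name__ == "__main__":
    # Toy usage with random 3D patches predicted at three scales.
    target = (torch.rand(2, 1, 32, 32, 32) > 0.5).float()
    preds = [torch.randn(2, 1, s, s, s) for s in (32, 16, 8)]
    print(multi_scale_loss(preds, target))
```

Supervising the coarser scales in this way encourages the network to capture the overall brain extent early in the decoder, while the full-resolution term refines the boundary; the stage (1) localization CNN could use a similar loss on heavily down-sampled inputs.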