7 research outputs found

    Towards segmentation and spatial alignment of the human embryonic brain using deep learning for atlas-based registration

    We propose an unsupervised deep learning method for atlas-based registration that achieves segmentation and spatial alignment of the embryonic brain in a single framework. Our approach consists of two sequential networks with a specifically designed loss function to address the challenges of 3D first-trimester ultrasound. The first network learns the affine transformation and the second learns the voxelwise non-rigid deformation between the target image and the atlas. We trained the networks end-to-end and validated them against a ground truth on synthetic datasets designed to resemble the challenges present in 3D first-trimester ultrasound. The method was tested on a dataset of human embryonic ultrasound volumes acquired at 9 weeks gestational age; it aligned the brain in some cases and gave insight into the open challenges of the proposed method. We conclude that our method is a promising approach towards fully automated spatial alignment and segmentation of embryonic brains in 3D ultrasound.
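    The two-stage design described above (an affine network followed by a voxelwise non-rigid network, trained end-to-end against an atlas) can be sketched with differentiable resampling as below. This is a minimal illustration, not the authors' implementation; the layer sizes and module names are hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineStage(nn.Module):
    """Predicts a 3x4 affine matrix and warps the atlas towards the target."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(               # hypothetical small 3D encoder
            nn.Conv3d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, 12),
        )
        nn.init.zeros_(self.encoder[-1].weight)     # start at the identity transform
        self.encoder[-1].bias.data = torch.tensor(
            [1., 0., 0., 0.,  0., 1., 0., 0.,  0., 0., 1., 0.])

    def forward(self, atlas, target):
        theta = self.encoder(torch.cat([atlas, target], dim=1)).view(-1, 3, 4)
        grid = F.affine_grid(theta, atlas.shape, align_corners=False)
        return F.grid_sample(atlas, grid, align_corners=False)

class NonRigidStage(nn.Module):
    """Predicts a dense voxelwise displacement field and applies it."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(                   # hypothetical stand-in for a U-Net
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 3, 3, padding=1),
        )

    def forward(self, warped_atlas, target):
        flow = self.cnn(torch.cat([warped_atlas, target], dim=1))
        n = warped_atlas.shape[0]
        identity = F.affine_grid(
            torch.eye(3, 4, device=flow.device).unsqueeze(0).repeat(n, 1, 1),
            warped_atlas.shape, align_corners=False)
        grid = identity + flow.permute(0, 2, 3, 4, 1)   # normalised sampling coordinates
        return F.grid_sample(warped_atlas, grid, align_corners=False)

# End-to-end composition: affine alignment first, then non-rigid refinement.
affine, nonrigid = AffineStage(), NonRigidStage()
atlas = torch.rand(1, 1, 32, 32, 32)                # toy volumes; real inputs are 3D US
target = torch.rand(1, 1, 32, 32, 32)
warped = nonrigid(affine(atlas, target), target)
```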

    Anatomy-Aware Self-supervised Fetal MRI Synthesis from Unpaired Ultrasound Images

    Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the developing brain but is not suitable for anomaly screening; for this, ultrasound (US) is employed. While expert sonographers are adept at reading US images, MR images are much easier for non-experts to interpret. Hence, in this paper we seek to produce images with MRI-like appearance directly from clinical US images. Our own clinical motivation is to find a way to communicate US findings to patients or clinical professionals unfamiliar with US, but in medical image analysis such a capability is potentially useful, for instance, for US-MRI registration or fusion. Our model is self-supervised and end-to-end trainable. Specifically, based on the assumption that the US and MRI data share a similar anatomical latent space, we first utilise an extractor to determine shared latent features, which are then used for data synthesis. Since paired data was unavailable for our study (and is rare in practice), we propose to enforce the distributions to be similar instead of employing pixel-wise constraints, by adversarial learning in both the image domain and the latent space. Furthermore, we propose an adversarial structural constraint to regularise the anatomical structures between the two modalities during synthesis. A cross-modal attention scheme is proposed to leverage non-local spatial correlations. The feasibility of the approach to produce realistic-looking MR images is demonstrated quantitatively and with a qualitative evaluation compared to real fetal MR images. Comment: MICCAI-MLMI 201
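    The distribution-level training idea (a shared anatomical latent space with adversarial losses in both the image domain and the latent space) can be sketched roughly as follows. This is an illustrative sketch only, not the paper's model: the encoder, decoder, and discriminator modules are hypothetical placeholders, and the structural constraint and cross-modal attention described above are omitted.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def training_step(us, mr, enc, dec, d_img, d_lat, opt_g, opt_d):
    """One unpaired training step; us and mr are unrelated batches."""
    real = lambda x: torch.ones_like(x)
    fake = lambda x: torch.zeros_like(x)

    # Generator/encoder update: synthesise an MR-like image from US and
    # try to fool both the image-domain and latent-space discriminators.
    z_us, z_mr = enc(us), enc(mr)        # shared anatomical latent features
    fake_mr = dec(z_us)
    p_img, p_lat = d_img(fake_mr), d_lat(z_us)
    g_loss = bce(p_img, real(p_img)) + bce(p_lat, real(p_lat))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Discriminator update: real vs. synthesised MR in the image domain,
    # MR-derived vs. US-derived features in the latent space.
    r_img, f_img = d_img(mr), d_img(fake_mr.detach())
    r_lat, f_lat = d_lat(z_mr.detach()), d_lat(z_us.detach())
    d_loss = (bce(r_img, real(r_img)) + bce(f_img, fake(f_img))
              + bce(r_lat, real(r_lat)) + bce(f_lat, fake(f_lat)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    return g_loss.item(), d_loss.item()
```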

    Multi-channel groupwise registration to construct an ultrasound-specific fetal brain atlas

    In this paper, we describe a method to construct a 3D atlas from fetal brain ultrasound (US) volumes. A multi-channel groupwise Demons registration is proposed to simultaneously register a set of images from a population to a common reference space, thereby representing the population average. Similar to the standard Demons formulation, our approach takes as input an intensity image, but with an additional channel that contains phase-based features extracted from the intensity channel. The proposed multi-channel atlas construction method is evaluated using a groupwise Dice overlap and is shown to outperform standard (single-channel) groupwise diffeomorphic Demons registration. This method is then used to construct an atlas from US brain volumes collected from a population of 39 healthy fetal subjects at 23 gestational weeks. The resulting atlas exhibits high structural overlap, and correspondence is observed between the US-based atlas and an age-matched fetal MRI-based atlas.
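    A simplified, single-channel stand-in for the groupwise scheme (each volume registered to the current group mean, the warped volumes averaged, and the process repeated) could look like the sketch below using SimpleITK's diffeomorphic Demons filter. The paper's method additionally drives the registration with a phase-feature channel, which this sketch omits; the iteration counts and smoothing parameters are assumptions.

```python
import numpy as np
import SimpleITK as sitk

def groupwise_demons(volumes, n_rounds=3, demons_iters=50):
    """Iteratively register all volumes to the evolving group mean."""
    # Work in a common floating-point type so the filters accept all inputs.
    volumes = [sitk.Cast(v, sitk.sitkFloat32) for v in volumes]
    atlas = volumes[0]                           # initial reference / mean estimate
    for _ in range(n_rounds):
        warped = []
        for vol in volumes:
            moving = sitk.HistogramMatching(vol, atlas)   # Demons assumes comparable intensities
            demons = sitk.DiffeomorphicDemonsRegistrationFilter()
            demons.SetNumberOfIterations(demons_iters)
            demons.SetStandardDeviations(1.0)             # Gaussian smoothing of the update field
            field = demons.Execute(atlas, moving)         # fixed image = current atlas estimate
            tx = sitk.DisplacementFieldTransform(field)
            warped.append(sitk.Resample(moving, atlas, tx, sitk.sitkLinear, 0.0))
        # Voxelwise average of the warped volumes becomes the new atlas.
        mean = np.mean([sitk.GetArrayFromImage(w) for w in warped], axis=0)
        new_atlas = sitk.GetImageFromArray(mean.astype(np.float32))
        new_atlas.CopyInformation(atlas)
        atlas = new_atlas
    return atlas
```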

    Multi-task CNN for structural semantic segmentation in 3D fetal brain ultrasound

    The fetal brain undergoes extensive morphological changes throughout pregnancy, which are visible in ultrasound acquisitions. We explore the use of convolutional neural networks (CNNs) for the segmentation of multiple fetal brain structures in 3D ultrasound images. Accurate automatic segmentation of brain structures in fetal ultrasound images can track brain development through gestation and can provide useful information for predicting fetal health outcomes. We propose a multi-task CNN to produce automatic segmentations from atlas-generated labels of the white matter, thalamus, brainstem, and cerebellum. The network, trained on 480 volumes, produced accurate 3D segmentations on 48 test volumes, with a Dice coefficient of 0.93 for the white matter and over 0.77 for the thalamus, brainstem, and cerebellum.
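    For reference, the Dice coefficient reported above measures the overlap between predicted and ground-truth labels; a soft multi-class version, which can also serve as a training loss for such a CNN, is sketched below (hypothetical code, not the authors' implementation).

```python
import torch

def soft_dice_per_class(probs, target_onehot, eps=1e-6):
    """probs, target_onehot: (N, C, D, H, W); returns one Dice score per class."""
    dims = (0, 2, 3, 4)                       # sum over batch and spatial axes
    intersection = (probs * target_onehot).sum(dims)
    denom = probs.sum(dims) + target_onehot.sum(dims)
    return (2 * intersection + eps) / (denom + eps)

def dice_loss(logits, target_onehot):
    """Training objective: one minus the mean per-class soft Dice."""
    probs = torch.softmax(logits, dim=1)
    return 1.0 - soft_dice_per_class(probs, target_onehot).mean()
```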

    Calibrated Bayesian neural networks to estimate gestational age and its uncertainty on fetal brain ultrasound images

    We present an original automated framework for estimating gestational age (GA) from fetal ultrasound head biometry plane images. A novelty of our approach is the use of a Bayesian Neural Network (BNN), which quantifies the uncertainty of the estimated GA. Knowledge of the estimated uncertainty is useful in clinical decision-making, and is especially important in ultrasound image analysis, where image appearance and quality can vary considerably. A further novelty of our approach is that the neural network is not provided with the image pixel size, making it rely on anatomical appearance characteristics rather than size. We train the network using 9,299 scans from the INTERGROWTH-21st [22] dataset, ranging from 14+0 weeks to 42+6 weeks GA. We achieve an average MAE and RMSE of 9.6 and 12.5 days, respectively, over the GA range. We explore the robustness of the BNN architecture to invalid input images by testing it with (i) a different dataset derived from routine anomaly scanning and (ii) scans of a different fetal anatomy.
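    One common way to obtain the kind of predictive uncertainty described above is Monte Carlo dropout, sketched below for plane-image GA regression. This is illustrative only and is not necessarily the BNN formulation or calibration procedure used in the paper; the architecture and names are hypothetical.

```python
import torch
import torch.nn as nn

class GARegressor(nn.Module):
    """Toy 2D CNN that regresses gestational age (in days) from a plane image."""
    def __init__(self, p_drop=0.2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Dropout2d(p_drop),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Dropout2d(p_drop),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Dropout(p_drop), nn.Linear(32, 1))

    def forward(self, x):
        return self.head(self.features(x))

@torch.no_grad()
def predict_with_uncertainty(model, image, n_samples=30):
    """Predictive mean and spread from repeated stochastic forward passes."""
    model.train()                                 # keep dropout active at test time
    preds = torch.stack([model(image) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)            # estimate and its uncertainty
```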