
    Fully-automated deep learning pipeline for 3D fetal brain ultrasound

    Three-dimensional ultrasound (3D US) imaging has shown significant potential for in-utero assessment of the development of the fetal brain. However, in spite of the potential benefits of this modality over its two-dimensional (2D) counterpart, its widespread adoption remains largely limited by the difficulty associated with its analysis. While more established 3D neuroimaging modalities, such as Magnetic Resonance Imaging (MRI), have circumvented similar challenges thanks to reliable, automated neuroimage analysis pipelines, there is currently no comparable pipeline solution for 3D neurosonography. With the goal of facilitating medical research and encouraging the adoption of 3D US for clinical assessment, the main objective of my doctoral thesis is to design, develop, and validate a set of fundamental automated modules that comprise a fast, robust, fully automated, general-purpose pipeline for the neuroimage analysis of fetal 3D US scans. For the first module, I propose the fetal Brain Extraction Network (fBEN), a fully-automated, end-to-end 3D Convolutional Neural Network (CNN) with an encoder-decoder architecture. It predicts an accurate binary brain mask for the automated extraction of the fetal brain from standard clinical 3D US scans. For the second module, I propose the fetal Brain Alignment Network (fBAN), a fully-automated, end-to-end regression network with a cascade architecture that accurately predicts the alignment parameters required to rigidly align standard clinical 3D US scans to a canonical reference space. Finally, for the third module, I propose the fetal Brain Fingerprinting Network (fBFN), a fully-automated, end-to-end network based on a Variational AutoEncoder (VAE) architecture, that encodes the entire structural information of the 3D brain into a relatively small set of parameters in a continuously distributed latent space.
It is a general-purpose solution aimed at facilitating the assessment of 3D US scans by recharacterising the fetal brain into a representation that is easier to analyse. After exhaustive analysis, each module of this pipeline has proven to achieve state-of-the-art performance that is consistent across a wide gestational range, as well as robust to image quality, while requiring minimal pre-processing. Additionally, this pipeline has been designed to be modular, and easy to modify and expand upon, with the purpose of making it as easy as possible for other researchers to develop new tools and adapt it to their needs. This combination of performance, flexibility, and ease of use may help 3D US become the preferred imaging modality for researching and assessing fetal development.
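The fBFN's VAE-style "fingerprinting" rests on encoding a volume into a small latent vector via the standard reparameterisation trick. The sketch below is a generic illustration of that trick and its KL regulariser, not the thesis implementation; the latent dimension and toy encoder outputs are assumptions for demonstration.

```python
import math
import random

def reparameterise(mu, log_var, rng):
    """Sample z = mu + sigma * eps, with eps ~ N(0, 1).

    The standard VAE trick that keeps sampling differentiable; the latent
    vector z plays the role of the compact 'fingerprint' of the input volume.
    """
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior."""
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var))

# Toy 8-dimensional latent space (a real network would produce mu/log_var
# from an encoder; these values are illustrative only).
rng = random.Random(0)
mu = [0.1 * i for i in range(8)]
log_var = [-1.0] * 8
z = reparameterise(mu, log_var, rng)
print(len(z), round(kl_divergence(mu, log_var), 4))
```

The KL term is what pushes the latent space to be continuously distributed, which is the property the abstract highlights for downstream analysis.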

    Towards segmentation and spatial alignment of the human embryonic brain using deep learning for atlas-based registration

    We propose an unsupervised deep learning method for atlas-based registration to achieve segmentation and spatial alignment of the embryonic brain in a single framework. Our approach consists of two sequential networks with a specifically designed loss function to address the challenges in 3D first trimester ultrasound. The first part learns the affine transformation and the second part learns the voxelwise nonrigid deformation between the target image and the atlas. We trained this network end-to-end and validated it against a ground truth on synthetic datasets designed to resemble the challenges present in 3D first trimester ultrasound. The method was tested on a dataset of human embryonic ultrasound volumes acquired at 9 weeks gestational age, which showed alignment of the brain in some cases and gave insight into open challenges for the proposed method. We conclude that our method is a promising approach towards fully automated spatial alignment and segmentation of embryonic brains in 3D ultrasound.
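The two-stage design described above, an affine network followed by a voxelwise nonrigid one, amounts to composing two transforms: a global pose correction and then a local displacement. The sketch below (illustrative, pure Python, not the authors' code) shows the order of composition on a single 3D coordinate:

```python
def apply_affine(A, b, p):
    """Apply x -> A @ x + b to a 3D point p (A is a 3x3 nested list)."""
    return [sum(A[i][j] * p[j] for j in range(3)) + b[i] for i in range(3)]

def apply_deformation(p, displacement):
    """Add a voxelwise displacement vector to an affinely aligned point."""
    return [p[i] + displacement[i] for i in range(3)]

# Identity rotation plus a translation (global stage), then a small
# nonrigid nudge at this voxel (local stage). Values are toy examples.
A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
b = [2.0, 0.0, -1.0]
p = [10.0, 5.0, 3.0]
aligned = apply_affine(A, b, p)                          # affine network
warped = apply_deformation(aligned, [0.3, -0.2, 0.0])    # nonrigid network
print(warped)
```

Composing in this order means the nonrigid stage only has to model residual anatomical variation, which is why the affine stage comes first.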

    A review of image processing methods for fetal head and brain analysis in ultrasound images

    Background and objective: Examination of head shape and brain during the fetal period is paramount to evaluate head growth, predict neurodevelopment, and diagnose fetal abnormalities. Prenatal ultrasound is the most used imaging modality to perform this evaluation. However, manual interpretation of these images is challenging, and thus image processing methods to aid this task have been proposed in the literature. This article aims to present a review of these state-of-the-art methods. Methods: In this work, we analyze and categorize the different image processing methods used to evaluate the fetal head and brain in ultrasound imaging. To that end, a total of 109 articles published since 2010 were analyzed. Different applications are covered in this review, namely analysis of head shape and inner structures of the brain, standard clinical plane identification, fetal development analysis, and methods for image processing enhancement. Results: For each application, the reviewed techniques are categorized according to their theoretical approach, and the more suitable image processing methods to accurately analyze the head and brain are identified. Furthermore, future research needs are discussed. Finally, topics whose research is lacking in the literature are outlined, along with new fields of application. Conclusions: A multitude of image processing methods has been proposed for fetal head and brain analysis. In summary, techniques from different categories have shown their potential to improve clinical practice. Nevertheless, further research must be conducted to strengthen the current methods, especially for 3D imaging analysis and acquisition and for abnormality detection. (c) 2022 Elsevier B.V.
All rights reserved. This work was funded by projects NORTE-01-0145-FEDER-000059, NORTE-01-0145-FEDER-024300, and NORTE-01-0145-FEDER-000045, supported by the Northern Portugal Regional Operational Programme (Norte2020), under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund (FEDER). It was also funded by national funds through the FCT – Fundação para a Ciência e a Tecnologia within the R&D Units Project Scope UIDB/00319/2020, and by FCT and FCT/MCTES in the scope of the projects UIDB/05549/2020 and UIDP/05549/2020. The authors also acknowledge support from FCT and the European Social Fund, through Programa Operacional Capital Humano (POCH), in the scope of the PhD grants SFRH/BD/136670/2018 and SFRH/BD/136721/2018.

    Automated fetal brain extraction from clinical Ultrasound volumes using 3D Convolutional Neural Networks

    To improve the performance of most neuroimage analysis pipelines, brain extraction is used as a fundamental first step in image processing. However, in the case of the developing fetal brain, a reliable US-specific tool is still needed. In this work we propose a fully automated 3D CNN approach to fetal brain extraction from 3D US clinical volumes with minimal preprocessing. Our method accurately and reliably extracts the brain regardless of the large data variation inherent in this imaging modality. It also performs consistently throughout a gestational age range between 14 and 31 weeks, regardless of the pose variation of the subject, the scale, and even partial feature obstruction in the image, outperforming all current alternatives. Comment: 13 pages, 7 figures, MIUA conference
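Brain-extraction quality in work like this is typically scored with the Dice overlap between the predicted and a manual binary mask. The abstract does not state the metric, so the snippet below is a generic sketch of that standard evaluation, not the paper's code; the flat 0/1 lists stand in for flattened 3D volumes.

```python
def dice_score(pred, truth):
    """Dice overlap between two binary masks given as flat 0/1 lists.

    Dice = 2|P n T| / (|P| + |T|); 1.0 means perfect overlap. Two empty
    masks are treated as a perfect match.
    """
    inter = sum(p and t for p, t in zip(pred, truth))
    denom = sum(pred) + sum(truth)
    return 2.0 * inter / denom if denom else 1.0

# Toy flattened 'volumes': prediction over-segments by one voxel.
pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 0, 0]
print(dice_score(pred, truth))
```

Because Dice weights agreement by region size, it is robust to the large background typical of 3D US volumes, which is one reason it dominates segmentation benchmarks.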

    Anatomy-Aware Self-supervised Fetal MRI Synthesis from Unpaired Ultrasound Images

    Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the developing brain but is not suitable for anomaly screening. For this, ultrasound (US) is employed. While expert sonographers are adept at reading US images, MR images are much easier for non-experts to interpret. Hence, in this paper we seek to produce images with MRI-like appearance directly from clinical US images. Our own clinical motivation is to seek a way to communicate US findings to patients or clinical professionals unfamiliar with US, but in medical image analysis such a capability is potentially useful, for instance, for US-MRI registration or fusion. Our model is self-supervised and end-to-end trainable. Specifically, based on the assumption that the US and MRI data share a similar anatomical latent space, we first utilise an extractor to determine shared latent features, which are then used for data synthesis. Since paired data was unavailable for our study (and is rare in practice), we propose to enforce the distributions to be similar instead of employing pixel-wise constraints, by adversarial learning in both the image domain and the latent space. Furthermore, we propose an adversarial structural constraint to regularise the anatomical structures between the two modalities during synthesis. A cross-modal attention scheme is proposed to leverage non-local spatial correlations. The feasibility of the approach to produce realistic-looking MR images is demonstrated quantitatively and with a qualitative evaluation compared to real fetal MR images. Comment: MICCAI-MLMI 2019

    Self-Supervised Ultrasound to MRI Fetal Brain Image Synthesis

    Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the developing brain but is not suitable for second-trimester anomaly screening, for which ultrasound (US) is employed. Although expert sonographers are adept at reading US images, MR images, which closely resemble anatomical images, are much easier for non-experts to interpret. Thus, in this paper we propose to generate MR-like images directly from clinical US images. In medical image analysis such a capability is potentially useful as well, for instance for automatic US-MRI registration and fusion. The proposed model is end-to-end trainable and self-supervised, without any external annotations. Specifically, based on the assumption that the US and MRI data share a similar anatomical latent space, we first utilise a network to extract the shared latent features, which are then used for MRI synthesis. Since paired data is unavailable for our study (and rare in practice), pixel-level constraints are infeasible to apply. We instead propose to enforce the distributions to be statistically indistinguishable, by adversarial learning in both the image domain and the feature space. To regularise the anatomical structures between US and MRI during synthesis, we further propose an adversarial structural constraint. A new cross-modal attention technique is proposed to utilise non-local spatial information, by encouraging multi-modal knowledge fusion and propagation. We extend the approach to consider the case where 3D auxiliary information (e.g., 3D neighbours and a 3D location index) from volumetric data is also available, and show that this improves image synthesis. The proposed approach is evaluated quantitatively and qualitatively, with comparison to real fetal MR images and other approaches to synthesis, demonstrating the feasibility of synthesising realistic MR images. Comment: IEEE Transactions on Medical Imaging 2020
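Cross-modal attention of the kind described in these two synthesis papers follows the familiar scaled dot-product pattern: queries from one modality re-weight value features from the other. The toy sketch below is a generic, pure-Python illustration of that pattern with made-up 2-dimensional features, not the papers' implementation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.

    In a cross-modal setting, queries could come from US features and
    keys/values from MRI features, so each output row mixes non-local
    information from the other modality.
    """
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # one weight per key, summing to 1
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Toy features: each query attends mostly to its matching key.
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
out = attention(Q, K, V)
print(out)
```

Because the weights sum to 1, each output row is a convex combination of the value rows, which is what lets attention propagate information from spatially distant but anatomically related regions.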

    Deep learning-based plane pose regression in obstetric ultrasound

    PURPOSE: In obstetric ultrasound (US) scanning, the learner's ability to mentally build a three-dimensional (3D) map of the fetus from a two-dimensional (2D) US image represents a major challenge in skill acquisition. We aim to build a US plane localisation system for 3D visualisation, training, and guidance without integrating additional sensors. METHODS: We propose a regression convolutional neural network (CNN) using image features to estimate the six-dimensional pose of arbitrarily oriented US planes relative to the fetal brain centre. The network was trained on synthetic images acquired from phantom 3D US volumes and fine-tuned on real scans. Training data was generated by slicing US volumes into imaging planes in Unity at random coordinates and more densely around the standard transventricular (TV) plane. RESULTS: With phantom data, the median errors are 0.90 mm/1.17° and 0.44 mm/1.21° for random planes and planes close to the TV one, respectively. With real data, using a different fetus with the same gestational age (GA), these errors are 11.84 mm/25.17°. The average inference time is 2.97 ms per plane. CONCLUSION: The proposed network reliably localises US planes within the fetal brain in phantom data and successfully generalises pose regression for an unseen fetal brain from a similar GA as in training. Future development will expand the prediction to volumes of the whole fetus and assess its potential for vision-based, freehand US-assisted navigation when acquiring standard fetal planes.

    Learning ultrasound plane pose regression: assessing generalized pose coordinates in the fetal brain

    In obstetric ultrasound (US) scanning, the learner's ability to mentally build a three-dimensional (3D) map of the fetus from a two-dimensional (2D) US image represents a significant challenge in skill acquisition. We aim to build a US plane localization system for 3D visualization, training, and guidance without integrating additional sensors. This work builds on our previous work, which predicts the six-dimensional (6D) pose of arbitrarily oriented US planes slicing the fetal brain with respect to a normalized reference frame using a convolutional neural network (CNN) regression model. Here, we analyze in detail the assumptions of the normalized fetal brain reference frame and quantify its accuracy with respect to the acquisition of the transventricular (TV) standard plane (SP) for fetal biometry. We investigate the impact of registration quality in the training and testing data and its subsequent effect on trained models. Finally, we introduce data augmentations and larger training sets that improve the results of our previous work, achieving median errors of 3.53 mm and 6.42 degrees for translation and rotation, respectively. Comment: 12 pages, 9 figures, 2 tables. This work has been submitted to the IEEE for possible publication (IEEE TMRB). Copyright may be transferred without notice, after which this version may no longer be accessible.
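Median translation/rotation errors like the ones quoted in these two pose-regression abstracts are typically computed as a Euclidean distance between plane centres and a geodesic angle between orientations. The abstracts do not give the exact parameterisation, so the quaternion-based sketch below is an assumption for illustration, not the papers' evaluation code:

```python
import math

def translation_error(t_pred, t_true):
    """Euclidean distance between predicted and true plane centres (mm)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t_pred, t_true)))

def rotation_error_deg(q_pred, q_true):
    """Geodesic angle between two unit quaternions, in degrees.

    q and -q encode the same rotation, hence the absolute value.
    """
    dot = abs(sum(a * b for a, b in zip(q_pred, q_true)))
    dot = min(1.0, dot)  # guard against floating-point overshoot
    return math.degrees(2.0 * math.acos(dot))

# Identity orientation vs. a 90-degree rotation about the z-axis
# (quaternion order w, x, y, z).
q_id = (1.0, 0.0, 0.0, 0.0)
q_90z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(round(translation_error((0, 0, 0), (3, 4, 0)), 3),
      round(rotation_error_deg(q_id, q_90z), 3))
```

Reporting the two components separately, as the abstracts do (mm for translation, degrees for rotation), avoids mixing units into a single opaque score.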