A review of image processing methods for fetal head and brain analysis in ultrasound images
Background and objective: Examination of the head shape and brain during the fetal period is paramount to evaluate head growth, predict neurodevelopment, and diagnose fetal abnormalities. Prenatal ultrasound is the most widely used imaging modality for this evaluation. However, manual interpretation of these images is challenging, and image processing methods to aid this task have therefore been proposed in the literature. This article presents a review of these state-of-the-art methods. Methods: This work analyzes and categorizes the different image processing methods used to evaluate the fetal head and brain in ultrasound imaging. To that end, a total of 109 articles published since 2010 were analyzed. Different applications are covered in this review, namely analysis of the head shape and inner structures of the brain, identification of standard clinical planes, fetal development analysis, and methods for image processing enhancement. Results: For each application, the reviewed techniques are categorized according to their theoretical approach, and the image processing methods most suitable for accurately analyzing the head and brain are identified. Furthermore, future research needs are discussed. Finally, topics whose research is lacking in the literature are outlined, along with new fields of application. Conclusions: A multitude of image processing methods has been proposed for fetal head and brain analysis. In summary, techniques from different categories have shown their potential to improve clinical practice. Nevertheless, further research must be conducted to strengthen the current methods, especially for 3D image acquisition and analysis and for abnormality detection. (c) 2022 Elsevier B.V.
All rights reserved. This work was funded by projects NORTE-01-0145-FEDER-000059, NORTE-01-0145-FEDER-024300, and NORTE-01-0145-FEDER-000045, supported by the Northern Portugal Regional Operational Programme (Norte2020), under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund (FEDER). It was also funded by national funds, through the FCT - Fundação para a Ciência e a Tecnologia, within the R&D Units Project Scope UIDB/00319/2020, and by FCT and FCT/MCTES in the scope of the projects UIDB/05549/2020 and UIDP/05549/2020. The authors also acknowledge support from FCT and the European Social Fund, through Programa Operacional Capital Humano (POCH), in the scope of the PhD grants SFRH/BD/136670/2018 and SFRH/BD/136721/2018.
Fully-automated deep learning pipeline for 3D fetal brain ultrasound
Three-dimensional ultrasound (3D US) imaging has shown significant potential for in-utero assessment of the development of the fetal brain. However, in spite of the potential benefits of this modality over its two-dimensional (2D) counterpart, its widespread adoption remains largely limited by the difficulty associated with its analysis.
While more established 3D neuroimaging modalities, such as Magnetic Resonance Imaging (MRI), have circumvented similar challenges thanks to reliable, automated neuroimage analysis pipelines, there is currently no comparable pipeline solution for 3D neurosonography.
With the goal of facilitating medical research and encouraging the adoption of 3D US for clinical assessment, the main objective of my doctoral thesis is to design, develop, and validate a set of fundamental automated modules that comprise a fast, robust, fully automated, general-purpose pipeline for the neuroimage analysis of fetal 3D US scans.
For the first module, I propose the fetal Brain Extraction Network (fBEN), a fully-automated, end-to-end 3D Convolutional Neural Network (CNN) with an encoder-decoder architecture. It predicts an accurate binary brain mask for the automated extraction of the fetal brain from standard clinical 3D US scans.
For the second module, I propose the fetal Brain Alignment Network (fBAN), a fully-automated, end-to-end regression network with a cascade architecture that accurately predicts the alignment parameters required to rigidly align standard clinical 3D US scans to a canonical reference space.
Finally, for the third module, I propose the fetal Brain Fingerprinting Network (fBFN), a fully-automated, end-to-end network based on a Variational AutoEncoder (VAE) architecture that encodes the entire structural information of the 3D brain into a relatively small set of parameters in a continuously distributed latent space. It is a general-purpose solution aimed at facilitating the assessment of 3D US scans by recharacterising the fetal brain into a representation that is easier to analyse.
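The fBFN module described above compresses a 3D brain volume into a small latent vector via a VAE. As a hedged illustration only (this is not the author's fBFN code; the latent dimension and values are invented for the toy example), the core reparameterisation step shared by all VAEs can be sketched in plain NumPy:

```python
import numpy as np

def reparameterize(mu, log_var, rng=None):
    """VAE reparameterisation trick: sample z = mu + sigma * eps.

    mu, log_var: latent mean and log-variance as an encoder would predict.
    Sampling through this form keeps the latent code differentiable
    with respect to mu and log_var during training.
    """
    rng = rng or np.random.default_rng(0)
    sigma = np.exp(0.5 * log_var)          # log-variance -> std deviation
    eps = rng.standard_normal(mu.shape)    # noise from N(0, 1)
    return mu + sigma * eps

# Toy example: a 16-dimensional latent code for one scan.
mu = np.zeros(16)
log_var = np.full(16, -2.0)  # small variance -> samples stay near mu
z = reparameterize(mu, log_var)
print(z.shape)  # (16,)
```

Because the latent space is continuously distributed, nearby codes decode to similar brains, which is what makes downstream analysis of the representation tractable.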
After exhaustive analysis, each module of this pipeline has proven to achieve state-of-the-art performance that is consistent across a wide gestational range, as well as robust to variations in image quality, while requiring minimal pre-processing. Additionally, this pipeline has been designed to be modular and easy to modify and expand upon, with the purpose of making it as easy as possible for other researchers to develop new tools and adapt it to their needs. This combination of performance, flexibility, and ease of use may have the potential to help 3D US become the preferred imaging modality for researching and assessing fetal development.
A framework for analysis of linear ultrasound videos to detect fetal presentation and heartbeat.
Confirmation of pregnancy viability (presence of fetal cardiac activity) and diagnosis of fetal presentation (head or buttock in the maternal pelvis) are the first essential components of ultrasound assessment in obstetrics. The former is useful in assessing the presence of an on-going pregnancy and the latter is essential for labour management. We propose an automated framework for detection of fetal presentation and heartbeat from a predefined free-hand ultrasound sweep of the maternal abdomen. Our method exploits the presence of key anatomical sonographic image patterns in carefully designed scanning protocols to develop, for the first time, an automated framework allowing novice sonographers to detect fetal breech presentation and heartbeat from an ultrasound sweep. The framework consists of a classification regime for a frame by frame categorization of each 2D slice of the video. The classification scores are then regularized through a conditional random field model, taking into account the temporal relationship between the video frames. Subsequently, if consecutive frames of the fetal heart are detected, a kernelized linear dynamical model is used to identify whether a heartbeat can be detected in the sequence. In a dataset of 323 predefined free-hand videos, covering the mother's abdomen in a straight sweep, the fetal skull, abdomen, and heart were detected with a mean classification accuracy of 83.4%. Furthermore, for the detection of the heartbeat an overall classification accuracy of 93.1% was achieved
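The abstract above regularizes per-frame classification scores with a conditional random field so that labels respect temporal continuity. A minimal stand-in for that idea (not the paper's CRF; the scores and switch penalty here are invented) is Viterbi decoding over a chain model that penalises label changes between consecutive frames:

```python
import numpy as np

def viterbi_smooth(frame_scores, switch_penalty=1.0):
    """Pick the best label per frame under a chain model that
    penalises label changes between consecutive frames.

    frame_scores: (T, K) array of per-frame, per-class log-scores.
    Returns a length-T array of smoothed label indices.
    """
    T, K = frame_scores.shape
    trans = -switch_penalty * (1 - np.eye(K))  # cost of switching label
    best = frame_scores[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = best[:, None] + trans           # cand[prev, cur]
        back[t] = np.argmax(cand, axis=0)      # best predecessor per label
        best = np.max(cand, axis=0) + frame_scores[t]
    # Backtrack the highest-scoring label path.
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(best))
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

# A single noisy frame flip (frame 2) gets smoothed away:
scores = np.log(np.array([[.9, .1], [.9, .1], [.4, .6], [.9, .1], [.9, .1]]))
print(viterbi_smooth(scores, switch_penalty=1.0))  # [0 0 0 0 0]
```

A CRF learns the transition weights from data rather than fixing a single penalty, but the decoding principle, trading per-frame evidence against temporal smoothness, is the same.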
How to Acquire Cardiac Volumes for Sonographic Examination of the Fetal Heart
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/135409/1/jum20163551043.pd
Recent Advances in Artificial Intelligence-Assisted Ultrasound Scanning
Funded by the Spanish Ministry of Economic Affairs and Digital Transformation (Project MIA.2021.M02.0005 TARTAGLIA, from the Recovery, Resilience, and Transformation Plan financed by the European Union through Next Generation EU funds). TARTAGLIA takes place under the R&D Missions in Artificial Intelligence program, which is part of the Spain Digital 2025 Agenda and the Spanish National Artificial Intelligence Strategy. Ultrasound (US) is a flexible imaging modality used globally as a first-line medical exam procedure in many different clinical cases. It benefits from the continued evolution of ultrasonic technologies and a well-established US-based digital health system. Nevertheless, its diagnostic performance still presents challenges due to the inherent characteristics of US imaging, such as manual operation and significant operator dependence. Artificial intelligence (AI) has proven capable of recognizing complicated scan patterns and providing quantitative assessments of imaging data. Therefore, AI technology has the potential to help physicians obtain more accurate and repeatable outcomes with US. In this article, we review the recent advances in AI-assisted US scanning. We have identified the main areas where AI is being used to facilitate US scanning, such as standard plane recognition and organ identification, the extraction of standard clinical planes from 3D US volumes, and the scanning guidance of US acquisitions performed by humans or robots. In general, the lack of standardization and reference datasets in this field makes it difficult to perform comparative studies among the different proposed methods. More open-access repositories of large US datasets with detailed information about the acquisition are needed to facilitate the development of this very active research field, which is expected to have a very positive impact on US imaging. Depto. de Estructura de la Materia, Física Térmica y Electrónica, Fac. de Ciencias Físicas
Automated fetal brain extraction from clinical Ultrasound volumes using 3D Convolutional Neural Networks
To improve the performance of most neuroimage analysis pipelines, brain
extraction is used as a fundamental first step in the image processing. But in
the case of fetal brain development, there is a need for a reliable US-specific
tool. In this work we propose a fully automated 3D CNN approach to fetal brain
extraction from 3D US clinical volumes with minimal preprocessing. Our method
accurately and reliably extracts the brain regardless of the large data
variation inherent in this imaging modality. It also performs consistently
throughout a gestational age range between 14 and 31 weeks, regardless of the
pose variation of the subject, the scale, and even partial feature-obstruction
in the image, outperforming all current alternatives. Comment: 13 pages, 7 figures, MIUA conference
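Once the CNN described above has predicted a binary brain mask, the extraction step itself reduces to elementwise masking of the volume. A hedged sketch of that final step (the mask here is hand-made, standing in for a network prediction):

```python
import numpy as np

def extract_brain(volume, mask):
    """Zero out every voxel outside the predicted brain mask.

    volume: 3D US intensity array.
    mask: binary array of the same shape, as a segmentation CNN
          would produce (here constructed by hand for the demo).
    """
    return volume * (mask > 0)

# Toy 4x4x4 volume with a 2x2x2 'brain' region in the centre.
vol = np.random.default_rng(1).uniform(0.1, 1.0, size=(4, 4, 4))
mask = np.zeros((4, 4, 4))
mask[1:3, 1:3, 1:3] = 1
brain = extract_brain(vol, mask)
print(int((brain > 0).sum()))  # 8 voxels survive the masking
```

In practice the masked volume (often cropped to the mask's bounding box) is what downstream alignment and biometry modules consume, which is why extraction is treated as the fundamental first step.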
Automatic linear measurements of the fetal brain on MRI with deep neural networks
Timely, accurate and reliable assessment of fetal brain development is
essential to reduce short- and long-term risks to fetus and mother. Fetal MRI is
increasingly used for fetal brain assessment. Three key biometric linear
measurements important for fetal brain evaluation are Cerebral Biparietal
Diameter (CBD), Bone Biparietal Diameter (BBD), and Trans-Cerebellum Diameter
(TCD), obtained manually by expert radiologists on reference slices, which is
time-consuming and prone to human error. The aim of this study was to develop a
fully automatic method computing the CBD, BBD and TCD measurements from fetal
brain MRI. The input is fetal brain MRI volumes which may include the fetal
body and the mother's abdomen. The outputs are the measurement values and
reference slices on which the measurements were computed. The method, which
follows the manual measurements principle, consists of five stages: 1)
computation of a Region Of Interest that includes the fetal brain with an
anisotropic 3D U-Net classifier; 2) reference slice selection with a
Convolutional Neural Network; 3) slice-wise fetal brain structures segmentation
with a multiclass U-Net classifier; 4) computation of the fetal brain
midsagittal line and fetal brain orientation, and; 5) computation of the
measurements. Experimental results on 214 volumes for CBD, BBD and TCD
measurements yielded a mean difference of 1.55mm, 1.45mm and 1.23mm
respectively, and a Bland-Altman 95% confidence interval of 3.92mm,
3.98mm and 2.25mm respectively. These results are similar to the manual
inter-observer variability. The proposed automatic method for computing
biometric linear measurements of the fetal brain from MR imaging achieves human
level performance. It has the potential of being a useful method for the
assessment of fetal brain biometry in normal and pathological cases, and of
improving routine clinical practice. Comment: 15 pages, 8 figures, presented at CARS 2020, submitted to IJCARS
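Stage 5 of the pipeline above computes linear measurements from the segmented structures. As a simplified, hedged illustration (the paper measures along the midsagittal-derived orientation; this sketch measures along an image axis, and the mask, spacing, and ellipse are invented), a diameter can be taken as the maximal extent of a binary 2D mask scaled by the pixel spacing:

```python
import numpy as np

def linear_measurement_mm(mask_2d, spacing_mm, axis=0):
    """Maximal extent of a binary 2D mask along one image axis, in mm.

    A simplified stand-in for a biometric diameter such as the BBD:
    find the occupied pixel range along `axis` and scale by spacing.
    """
    coords = np.argwhere(mask_2d > 0)
    if coords.size == 0:
        return 0.0
    extent_px = coords[:, axis].max() - coords[:, axis].min() + 1
    return float(extent_px * spacing_mm)

# An elliptical 'head' mask with a 15-pixel semi-axis, at 0.5 mm/pixel.
yy, xx = np.mgrid[:64, :64]
mask = ((yy - 32) ** 2 / 15 ** 2 + (xx - 32) ** 2 / 10 ** 2) <= 1
print(linear_measurement_mm(mask, 0.5, axis=0))  # 15.5
```

The real method must first find the correct reference slice and brain orientation (stages 2 and 4), precisely because a measurement along an arbitrary image axis, as here, would not match the clinical definition.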