A regression framework to head-circumference delineation from US fetal images
Background and Objectives: Measuring head-circumference (HC) length from ultrasound (US) images is a crucial clinical task for assessing fetal growth. To lower intra- and inter-operator variability in HC measurement, several computer-assisted solutions have been proposed over the years. Recently, a large number of deep-learning approaches have addressed HC delineation by segmenting the whole fetal head with convolutional neural networks (CNNs). Since the task is an edge-delineation problem, we propose a different strategy based on regression CNNs. Methods: The proposed framework consists of a region-proposal CNN for head localization and centering, and a regression CNN for accurately delineating the HC. The first CNN is trained by exploiting transfer learning, while for the regression CNN we propose a training strategy based on distance fields. Results: The framework was tested on the HC18 Challenge dataset, which consists of 999 training and 335 testing images. A mean absolute difference of 1.90 (± 1.76) mm and a Dice similarity coefficient of 97.75 (± 1.32) % were achieved, outperforming approaches in the literature. Conclusions: The experimental results showed the effectiveness of the proposed framework, proving its potential to support clinicians in clinical practice.
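The distance-field training strategy mentioned above can be illustrated with a minimal sketch: given a binary contour mask, each pixel's regression target is its Euclidean distance to the nearest contour pixel. The function and variable names below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch, assuming the regression target is a (clipped) Euclidean
# distance field computed from a binary head-contour mask. Names are
# illustrative, not taken from the paper.
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_field_target(contour_mask: np.ndarray, clip_val: float = 20.0) -> np.ndarray:
    """Per-pixel distance (in pixels) to the nearest contour pixel,
    clipped so the regression target stays bounded."""
    # distance_transform_edt measures distance to the nearest zero,
    # so invert the mask: contour pixels become the zeros.
    dist = distance_transform_edt(contour_mask == 0)
    return np.clip(dist, 0.0, clip_val)

# Toy example: a 5x5 image with a single contour pixel at the centre.
mask = np.zeros((5, 5), dtype=np.uint8)
mask[2, 2] = 1
field = distance_field_target(mask)
```

A regression CNN trained against such targets predicts, for every pixel, how far it lies from the head boundary; the delineation is then recovered from the field's zero-level region.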
Segmentation of fetal 2D images with deep learning: a review
Image segmentation plays a vital role in providing sustainable medical care as biomedical image-processing technology evolves, and it is now considered one of the most important research directions in the computer vision field. Over the last decade, deep learning-based medical image processing has become a research hotspot due to its exceptional performance. In this paper, we present a review of different deep learning techniques used to segment fetal 2D images. First, we explain the basic ideas of each approach and then thoroughly investigate the methods used for the segmentation of fetal images. Second, the results and accuracy of the different approaches are discussed, and the datasets used for assessing the performance of each method are documented. Based on the reviewed studies, the challenges and future work are pointed out at the end. As a result, it is shown that deep learning techniques are very effective in the segmentation of fetal 2D images.
FUSQA: Fetal Ultrasound Segmentation Quality Assessment
Deep learning models have been effective for various fetal ultrasound segmentation tasks. However, generalization to new unseen data has raised questions about their effectiveness for clinical adoption. Normally, a transition to new unseen data requires time-consuming and costly quality assurance processes to validate the segmentation performance post-transition. Segmentation quality assessment efforts have focused on natural images, where the problem has typically been formulated as a Dice score regression task. In this paper, we propose a simplified Fetal Ultrasound Segmentation Quality Assessment (FUSQA) model to tackle segmentation quality assessment when no masks exist to compare with. We formulate the segmentation quality assessment process as an automated classification task to distinguish between good- and poor-quality segmentation masks for more accurate gestational age estimation. We validate the performance of our proposed approach on two datasets collected from two hospitals using different ultrasound machines. We compare different architectures, with our best-performing architecture achieving over 90% classification accuracy in distinguishing between good- and poor-quality segmentation masks from an unseen dataset. Additionally, there was only a 1.45-day difference between the gestational age reported by doctors and that estimated from CRL measurements using well-segmented masks. On the other hand, this difference increased and reached up to 7.73 days when we calculated CRL from the poorly segmented masks. As a result, AI-based approaches can potentially aid fetal ultrasound segmentation quality assessment and might detect poor segmentation in real-time screening in the future.
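The reformulation described above, from Dice-score regression to good/poor classification, can be sketched in a few lines: when reference masks are available at training time, each predicted mask is labelled by thresholding its Dice score, and a classifier is then trained on the masks alone. The 0.9 threshold and all names below are illustrative assumptions, not details from FUSQA.

```python
# Hedged sketch of labelling segmentation masks good/poor via a Dice
# threshold, assuming reference masks exist at training time. The 0.9
# threshold is an illustrative assumption, not the paper's value.
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

def quality_label(pred: np.ndarray, ref: np.ndarray, thresh: float = 0.9) -> int:
    """1 = good-quality mask, 0 = poor-quality mask."""
    return int(dice_score(pred, ref) >= thresh)

ref = np.zeros((8, 8), dtype=int)
ref[2:6, 2:6] = 1                      # 16-pixel square reference mask
good = ref.copy()                      # perfect overlap -> label 1
poor = np.roll(ref, 3, axis=0)         # badly shifted   -> label 0
```

At deployment time no reference mask exists, which is exactly why the classifier must learn to judge quality from the predicted mask itself.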
Machine Learning in Fetal Cardiology: What to Expect
In fetal cardiology, imaging (especially echocardiography) has been shown to help in the diagnosis and monitoring of fetuses with a compromised cardiovascular system potentially associated with several fetal conditions. Different ultrasound approaches are currently used to evaluate fetal cardiac structure and function, including conventional 2-D imaging and M-mode and tissue Doppler imaging, among others. However, assessment of the fetal heart is still challenging, mainly due to involuntary movements of the fetus, the small size of the heart, and the lack of expertise in fetal echocardiography of some sonographers. Therefore, the use of new technologies to improve the primary acquired images, to help extract measurements, or to aid in the diagnosis of cardiac abnormalities is of great importance for optimal assessment of the fetal heart. Machine learning (ML) is a computer science discipline focused on teaching a computer to perform tasks with specific goals without explicitly programming the rules on how to perform them. In this review we provide a brief overview of the potential of ML techniques to improve the evaluation of fetal cardiac function by optimizing image acquisition and quantification/segmentation, as well as to aid in improving the prenatal diagnosis of fetal cardiac remodeling and abnormalities.
AutoFB: Automating Fetal Biometry Estimation from Standard Ultrasound Planes
During pregnancy, ultrasound examination in the second trimester can assess fetal size according to standardized charts. To achieve a reproducible and accurate measurement, a sonographer needs to identify three standard 2D planes of the fetal anatomy (head, abdomen, femur) and manually mark the key anatomical landmarks on the image for accurate biometry and fetal weight estimation. This can be a time-consuming, operator-dependent task, especially for a trainee sonographer. Computer-assisted techniques can help automate the fetal biometry computation process. In this paper, we present a unified automated framework for estimating all measurements needed for fetal weight assessment. The proposed framework semantically segments the key fetal anatomies using state-of-the-art segmentation models, followed by region fitting and scale recovery for the biometry estimation. We present an ablation study of segmentation algorithms to show their robustness through 4-fold cross-validation on a dataset of 349 ultrasound standard plane images from 42 pregnancies. Moreover, we show that the network with the best segmentation performance tends to be more accurate for biometry estimation. Furthermore, we demonstrate that the error between clinically measured and predicted fetal biometry is lower than the permissible error during routine clinical measurements.
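The region-fitting and scale-recovery stage described above can be sketched for the head plane: an ellipse is fitted to the segmented boundary, and head circumference follows from the ellipse's semi-axes once the pixel spacing is known. This illustrates the generic pipeline stage under assumed names and values, not AutoFB's exact implementation.

```python
# Hedged sketch: head circumference from fitted-ellipse semi-axes via
# Ramanujan's first approximation. Pixel spacing (mm/px) is assumed to
# be recovered separately; all names here are illustrative.
import math

def ellipse_circumference_mm(a_px: float, b_px: float, mm_per_px: float) -> float:
    """Perimeter of an ellipse with semi-axes a, b (in pixels),
    scaled to millimetres using the known pixel spacing."""
    a, b = a_px * mm_per_px, b_px * mm_per_px
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1.0 + 3.0 * h / (10.0 + math.sqrt(4.0 - 3.0 * h)))

# Sanity check: for a circle (a == b) the formula reduces to 2*pi*r.
hc_circle = ellipse_circumference_mm(50.0, 50.0, 0.2)   # r = 10 mm
```

The same fit-then-measure pattern applies to the abdomen plane; femur length is instead taken from the major axis of the fitted region.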
Symbiotic deep learning for medical image analysis with applications in real-time diagnosis for fetal ultrasound screening
The last hundred years have seen a monumental rise in the power and capability of machines to perform intelligent tasks in the stead of previously human operators. This rise is not expected to slow down any time soon, and what it means for society and humanity as a whole remains to be seen. The overwhelming notion is that, with the right goals in mind, the growing influence of machines on our everyday tasks will enable humanity to give more attention to the truly groundbreaking challenges that we all face together. This will usher in a new age of human-machine collaboration in which humans and machines may work side by side to achieve greater heights for all of humanity. Intelligent systems are useful in isolation, but their true benefits come to the fore in complex systems where the interaction between humans and machines can be made seamless, and it is this goal of symbiosis between human and machine, which may democratise complex knowledge, that motivates this thesis. In the recent past, data-driven methods have come to the fore and now represent the state of the art in many different fields. Alongside the shift from rule-based towards data-driven methods, we have also seen a shift in how humans interact with these technologies. Human-computer interaction is changing in response to data-driven methods, and new techniques must be developed to enable the same symbiosis between man and machine for data-driven methods as for previous formula-driven technology.
We address five key challenges which need to be overcome for data-driven human-in-the-loop computing to reach maturity. These are (1) the 'Categorisation Challenge', where we examine existing work and form a taxonomy of the different methods being utilised for data-driven human-in-the-loop computing; (2) the 'Confidence Challenge', where data-driven methods must communicate interpretable beliefs in how confident their predictions are; (3) the 'Complexity Challenge', where the aim of reasoned communication becomes increasingly important as the complexity of tasks, and of the methods to solve them, also increases; (4) the 'Classification Challenge', in which we look at how complex methods can be separated in order to provide greater reasoning in complex classification tasks; and finally (5) the 'Curation Challenge', where we challenge the assumptions around bottleneck creation for the development of supervised learning methods.