
    EchoFusion: Tracking and Reconstruction of Objects in 4D Freehand Ultrasound Imaging without External Trackers

    Ultrasound (US) is the most widely used fetal imaging technique. However, US images have a limited capture range and suffer from view-dependent artefacts such as acoustic shadows. Compounding overlapping 3D US acquisitions into a high-resolution volume can extend the field of view and remove image artefacts, which is useful for retrospective analysis including population-based studies. However, such volume reconstructions require information about the relative transformations between the probe positions from which the individual volumes were acquired. In prenatal US scans, the fetus can move independently from the mother, so external trackers such as electromagnetic or optical tracking cannot capture the motion between the probe position and the moving fetus. We present a novel methodology for image-based tracking and volume reconstruction that combines recent advances in deep learning and simultaneous localisation and mapping (SLAM). Tracking semantics are established through a Residual 3D U-Net, and its output is fed to the SLAM algorithm. As a proof of concept, experiments are conducted on US volumes taken from a whole-body fetal phantom and from the heads of real fetuses. For the fetal head segmentation, we also introduce a novel weak annotation approach to minimise the manual effort required for ground-truth annotation. We evaluate our method qualitatively, and quantitatively with respect to tissue discrimination accuracy and tracking robustness. Comment: MICCAI Workshop on Perinatal, Preterm and Paediatric Image Analysis (PIPPI), 201
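    As a rough illustration of the segmentation backbone named above, the sketch below shows a residual 3D convolution block of the kind typically used in a Residual 3D U-Net. It is a minimal, assumed layout in PyTorch, not the authors' actual network; the class name `ResidualBlock3D` and the channel sizes are hypothetical.

```python
# Minimal sketch (assumption, not the paper's code): a residual 3D convolution
# block of the kind used inside a Residual 3D U-Net for ultrasound volumes.
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    """Two 3D convolutions with a skip connection."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv1 = nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv3d(out_channels, out_channels, kernel_size=3, padding=1)
        self.norm1 = nn.InstanceNorm3d(out_channels)
        self.norm2 = nn.InstanceNorm3d(out_channels)
        self.act = nn.ReLU(inplace=True)
        # 1x1x1 projection so the skip connection matches the output channels
        self.skip = (nn.Conv3d(in_channels, out_channels, kernel_size=1)
                     if in_channels != out_channels else nn.Identity())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.skip(x)
        out = self.act(self.norm1(self.conv1(x)))
        out = self.norm2(self.conv2(out))
        return self.act(out + residual)

if __name__ == "__main__":
    # A single-channel 3D US sub-volume: (batch, channel, depth, height, width)
    volume = torch.randn(1, 1, 32, 64, 64)
    block = ResidualBlock3D(1, 16)
    print(block(volume).shape)  # torch.Size([1, 16, 32, 64, 64])
```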

    FUSQA: Fetal Ultrasound Segmentation Quality Assessment

    Deep learning models have been effective for various fetal ultrasound segmentation tasks. However, generalization to new unseen data has raised questions about their effectiveness for clinical adoption. Normally, a transition to new unseen data requires time-consuming and costly quality assurance processes to validate the segmentation performance post-transition. Segmentation quality assessment efforts have focused on natural images, where the problem has typically been formulated as a Dice score regression task. In this paper, we propose a simplified Fetal Ultrasound Segmentation Quality Assessment (FUSQA) model to tackle segmentation quality assessment when no ground-truth masks exist for comparison. We formulate the segmentation quality assessment process as an automated classification task that distinguishes between good- and poor-quality segmentation masks for more accurate gestational age estimation. We validate the performance of our proposed approach on two datasets collected from two hospitals using different ultrasound machines. We compare different architectures, with our best-performing architecture achieving over 90% classification accuracy in distinguishing between good- and poor-quality segmentation masks from an unseen dataset. Additionally, there was only a 1.45-day difference between the gestational age reported by doctors and that estimated from CRL measurements using well-segmented masks. In contrast, this difference increased to as much as 7.73 days when CRL was calculated from poorly segmented masks. As a result, AI-based approaches can potentially aid fetal ultrasound segmentation quality assessment and might detect poor segmentation in real-time screening in the future. Comment: 13 pages, 3 figures, 3 tables
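    To make the classification formulation concrete, here is a minimal sketch of a binary mask-quality classifier in PyTorch. It is an illustrative assumption, not the FUSQA architecture from the paper; `MaskQualityClassifier` and its layer sizes are hypothetical.

```python
# Hedged sketch: a small CNN that labels a predicted segmentation mask as
# good or poor quality (binary classification), without a reference mask.
import torch
import torch.nn as nn

class MaskQualityClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global average pooling
        )
        self.classifier = nn.Linear(32, 2)  # logits: [poor, good]

    def forward(self, mask: torch.Tensor) -> torch.Tensor:
        feats = self.features(mask).flatten(1)
        return self.classifier(feats)

if __name__ == "__main__":
    masks = torch.rand(4, 1, 128, 128)  # a batch of predicted masks
    logits = MaskQualityClassifier()(masks)
    print(logits.shape)  # torch.Size([4, 2])
```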

    Machine learning and disease prediction in obstetrics

    Machine learning technologies and the translation of artificial intelligence tools to enhance the patient experience are changing obstetric and maternity care. An increasing number of predictive tools have been developed with data sourced from electronic health records, diagnostic imaging and digital devices. In this review, we explore the latest machine learning tools, the algorithms used to establish prediction models, and the challenges in assessing fetal well-being and in predicting and diagnosing obstetric diseases such as gestational diabetes, pre-eclampsia, preterm birth and fetal growth restriction. We discuss the rapid growth of machine learning approaches and intelligent tools for automated diagnostic imaging of fetal anomalies and for assessing fetoplacental and cervical function using ultrasound and magnetic resonance imaging. In prenatal diagnosis, we discuss intelligent tools for magnetic resonance imaging sequencing of the fetus, placenta and cervix to reduce the risk of preterm birth. Finally, the use of machine learning to improve safety standards in intrapartum care and the early detection of complications is discussed. The demand for technologies that enhance diagnosis and treatment in obstetrics and maternity care should improve frameworks for patient safety and enhance clinical practice.
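    As a toy illustration of the kind of prediction model such reviews cover, the sketch below trains a logistic-regression risk classifier on synthetic stand-ins for EHR-derived features. The feature names, labels and model choice are assumptions for illustration only, not drawn from the review.

```python
# Illustrative sketch only: a tabular risk classifier for an obstetric outcome
# (e.g. pre-eclampsia) trained on synthetic data shaped like EHR features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical features: maternal age, BMI, mean arterial pressure, prior-history flag
X = rng.normal(size=(1000, 4))
# Synthetic labels loosely tied to the features, just so the example runs end to end
y = (X @ np.array([0.4, 0.6, 0.8, 1.0]) + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```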

    Automatic Segmentation of Human Placenta Images with U-Net

    The placenta is closely related to the health of the fetus. Abnormal placental function affects the normal development of the fetus and, in severe cases, can even endanger its life. Therefore, accurate and quantitative evaluation of the placenta has important clinical significance. Semantic segmentation is a common approach to delineating the human placenta. However, manual segmentation relies heavily on the professional knowledge and clinical experience of the staff and is very time-consuming. Therefore, based on U-Net, we propose an automatic segmentation method for the human placenta that reduces manual intervention and greatly speeds up segmentation, making large-scale segmentation possible. The human placenta dataset we used was labeled by experts and obtained from prenatal examinations of 11 pregnant women, comprising about 1,110 images. It is a comprehensive and clinically significant dataset, and training the network with such data improves the robustness of the model. In tests on this dataset, the automatic segmentation results are essentially consistent with manual segmentation.
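    For illustration, the sketch below defines a small two-level U-Net for single-class placenta segmentation of 2D ultrasound frames. It is an assumed, scaled-down architecture in PyTorch, not the network used in the paper; `TinyUNet` and its channel counts are hypothetical.

```python
# Minimal sketch (assumed architecture): a two-level U-Net producing a
# per-pixel placenta probability map for a grayscale ultrasound frame.
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)   # 32 upsampled + 32 skip channels
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)   # 16 upsampled + 16 skip channels
        self.head = nn.Conv2d(16, 1, 1)  # one output channel: placenta probability

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))

if __name__ == "__main__":
    frame = torch.randn(1, 1, 128, 128)   # one grayscale ultrasound frame
    print(TinyUNet()(frame).shape)        # torch.Size([1, 1, 128, 128])
```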

    A review of image processing methods for fetal head and brain analysis in ultrasound images

    Background and objective: Examination of the head shape and brain during the fetal period is paramount to evaluate head growth, predict neurodevelopment, and diagnose fetal abnormalities. Prenatal ultrasound is the imaging modality most used for this evaluation. However, manual interpretation of these images is challenging, and image processing methods to aid this task have therefore been proposed in the literature. This article presents a review of these state-of-the-art methods. Methods: This work analyzes and categorizes the different image processing methods used to evaluate the fetal head and brain in ultrasound imaging. To that end, a total of 109 articles published since 2010 were analyzed. Different applications are covered in this review, namely analysis of the head shape and inner structures of the brain, identification of standard clinical planes, fetal development analysis, and methods for image processing enhancement. Results: For each application, the reviewed techniques are categorized according to their theoretical approach, and the image processing methods most suitable for accurately analyzing the head and brain are identified. Furthermore, future research needs are discussed. Finally, topics whose research is lacking in the literature are outlined, along with new fields of application. Conclusions: A multitude of image processing methods has been proposed for fetal head and brain analysis. In summary, techniques from different categories showed their potential to improve clinical practice. Nevertheless, further research must be conducted to improve the current methods, especially for 3D image analysis and acquisition and for abnormality detection. (c) 2022 Elsevier B.V. All rights reserved. FCT – Fundação para a Ciência e a Tecnologia (UIDB/00319/2020). This work was funded by projects NORTE-01-0145-FEDER-000059, NORTE-01-0145-FEDER-024300 and NORTE-01-0145-FEDER-000045, supported by the Northern Portugal Regional Operational Programme (Norte2020) under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund (FEDER). It was also funded by national funds through the FCT – Fundação para a Ciência e Tecnologia within the R&D Units Project Scope UIDB/00319/2020, and by FCT and FCT/MCTES in the scope of the projects UIDB/05549/2020 and UIDP/05549/2020. The authors also acknowledge support from FCT and the European Social Fund, through Programa Operacional Capital Humano (POCH), in the scope of the PhD grants SFRH/BD/136670/2018 and SFRH/BD/136721/2018.
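    One classical technique commonly surveyed in this area is ellipse fitting for fetal head biometry. The sketch below shows a hypothetical OpenCV pipeline that fits an ellipse to a skull-like contour and estimates head circumference; the Otsu-threshold step is a placeholder for a proper skull detector, and the function name and parameters are assumptions for illustration.

```python
# Illustrative sketch of one classical technique: fitting an ellipse to the
# fetal skull contour and estimating head circumference (HC) from its axes.
import cv2
import math
import numpy as np

def head_circumference_mm(image: np.ndarray, mm_per_pixel: float) -> float:
    """Estimate HC from a grayscale US frame via ellipse fitting (toy pipeline)."""
    _, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    skull = max(contours, key=cv2.contourArea)    # assume the largest contour is the skull
    (_, _), (d1, d2), _ = cv2.fitEllipse(skull)   # fitted ellipse axes in pixels
    a, b = d1 / 2.0, d2 / 2.0                     # semi-axes
    h = ((a - b) ** 2) / ((a + b) ** 2)
    # Ramanujan's approximation of the ellipse perimeter
    perimeter_px = math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
    return perimeter_px * mm_per_pixel

if __name__ == "__main__":
    # Synthetic frame with a bright elliptical "skull" for demonstration
    frame = np.zeros((256, 256), dtype=np.uint8)
    cv2.ellipse(frame, (128, 128), (80, 60), 0, 0, 360, 255, thickness=4)
    print(f"HC ~ {head_circumference_mm(frame, mm_per_pixel=0.5):.1f} mm")
```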

    Automatic 3D Multi-modal Ultrasound Segmentation of Human Placenta using Fusion Strategies and Deep Learning

    Purpose: Ultrasound is the most commonly used medical imaging modality for diagnosis and screening in clinical practice. Due to its safety profile, noninvasive nature and portability, ultrasound is the primary imaging modality for fetal assessment in pregnancy. Current ultrasound processing methods are either manual or semi-automatic and are therefore laborious, time-consuming and prone to errors; automation would go a long way towards addressing these challenges. Automated identification of placental changes at earlier gestation could facilitate potential therapies for conditions such as fetal growth restriction and pre-eclampsia that are currently detected only at late gestational age, potentially preventing perinatal morbidity and mortality. Methods: We propose an automatic three-dimensional multi-modal (B-mode and power Doppler) ultrasound segmentation of the human placenta using deep learning combined with different fusion strategies. We collected data containing B-mode and power Doppler ultrasound scans for 400 studies. Results: We evaluated different fusion strategies and state-of-the-art image segmentation networks for placenta segmentation based on standard overlap- and boundary-based metrics. We found that multi-modal information in the form of B-mode and power Doppler scans outperforms any single modality. Furthermore, we found that B-mode and power Doppler input scans fused at the data level provide the best results, with a mean Dice Similarity Coefficient (DSC) of 0.849. Conclusion: We conclude that the multi-modal approach of combining B-mode and power Doppler scans is effective for segmenting the placenta from 3D ultrasound scans in a fully automated manner and is robust to quality variation of the datasets.
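    To illustrate what data-level (early) fusion means here, the sketch below concatenates B-mode and power Doppler volumes along the channel axis before a stand-in 3D network, and scores the output with the Dice Similarity Coefficient. The network and tensor shapes are assumptions; only the fusion pattern and the DSC definition follow the description above.

```python
# Hedged sketch of data-level (early) fusion: B-mode and power Doppler volumes
# are concatenated as input channels of a 3D network, and the prediction is
# scored with the Dice Similarity Coefficient. The network is a stand-in.
import torch
import torch.nn as nn

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """DSC between a binarised prediction and the ground-truth mask."""
    pred, target = pred.float().flatten(), target.float().flatten()
    intersection = (pred * target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Stand-in 3D segmenter taking the fused two-channel input
segmenter = nn.Sequential(
    nn.Conv3d(2, 8, 3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 1, 1), nn.Sigmoid(),
)

b_mode = torch.rand(1, 1, 32, 64, 64)         # B-mode volume
power_doppler = torch.rand(1, 1, 32, 64, 64)  # co-registered power Doppler volume
fused = torch.cat([b_mode, power_doppler], dim=1)  # data-level fusion: 2 input channels

pred_mask = (segmenter(fused) > 0.5).long()
gt_mask = torch.randint(0, 2, (1, 1, 32, 64, 64))
print("DSC:", dice_coefficient(pred_mask, gt_mask).item())
```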