
    A review of image processing methods for fetal head and brain analysis in ultrasound images

    Background and objective: Examination of the head shape and brain during the fetal period is paramount to evaluate head growth, predict neurodevelopment, and diagnose fetal abnormalities. Prenatal ultrasound is the most widely used imaging modality to perform this evaluation. However, manual interpretation of these images is challenging, and thus image processing methods to aid this task have been proposed in the literature. This article presents a review of these state-of-the-art methods. Methods: This work analyzes and categorizes the different image processing methods used to evaluate the fetal head and brain in ultrasound imaging. To that end, a total of 109 articles published since 2010 were analyzed. Different applications are covered in this review, namely analysis of the head shape and inner structures of the brain, identification of standard clinical planes, fetal development analysis, and methods for image processing enhancement. Results: For each application, the reviewed techniques are categorized according to their theoretical approach, and the most suitable image processing methods to accurately analyze the head and brain are identified. Furthermore, future research needs are discussed. Finally, topics whose research is lacking in the literature are outlined, along with new fields of application. Conclusions: A multitude of image processing methods has been proposed for fetal head and brain analysis. In summary, techniques from different categories have shown their potential to improve clinical practice. Nevertheless, further research must be conducted to strengthen the current methods, especially for 3D imaging analysis and acquisition and for abnormality detection. (c) 2022 Elsevier B.V.
All rights reserved. FCT - Fundação para a Ciência e a Tecnologia (UIDB/00319/2020). This work was funded by projects NORTE-01-0145-FEDER-000059, NORTE-01-0145-FEDER-024300 and NORTE-01-0145-FEDER-000045, supported by the Northern Portugal Regional Operational Programme (Norte2020), under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund (FEDER). It was also funded by national funds, through the FCT - Fundação para a Ciência e Tecnologia, within the R&D Units Project Scope UIDB/00319/2020, and by FCT and FCT/MCTES in the scope of the projects UIDB/05549/2020 and UIDP/05549/2020. The authors also acknowledge support from FCT and the European Social Fund, through Programa Operacional Capital Humano (POCH), in the scope of the PhD grants SFRH/BD/136670/2018 and SFRH/BD/136721/2018.

    Minimally Interactive Segmentation with Application to Human Placenta in Fetal MR Images

    Placenta segmentation from fetal Magnetic Resonance (MR) images is important for fetal surgical planning. However, accurate segmentation results are difficult to achieve with automatic methods, due to sparse acquisition, inter-slice motion, and the widely varying position and shape of the placenta among pregnant women. Interactive methods have been widely used to obtain more accurate and robust results. A good interactive segmentation method should achieve high accuracy, minimize user interactions with low variability among users, and be computationally fast. Exploiting recent advances in machine learning, I explore a family of new interactive methods for placenta segmentation from fetal MR images. I investigate the combination of user interactions with learning from a single image or from a large set of images. For learning from a single image, I propose novel Online Random Forests to efficiently leverage user interactions for the segmentation of 2D and 3D fetal MR images. I also investigate co-segmentation of multiple volumes of the same patient with 4D Graph Cuts. For learning from a large set of images, I first propose a deep learning-based framework that combines user interactions with Convolutional Neural Networks (CNNs) based on geodesic distance transforms to achieve accurate segmentation and good interactivity. I then propose image-specific fine-tuning to make CNNs adaptive to different individual images and able to segment previously unseen objects. Experimental results show that the proposed algorithms outperform traditional interactive segmentation methods in terms of accuracy and interactivity. Therefore, they might be suitable for segmentation of the placenta in planning systems for fetal and maternal surgery, and for rapid characterization of the placenta from MR images. I also demonstrate that they can be applied to the segmentation of other organs from 2D and 3D images.
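The geodesic distance transforms used above to encode user scribbles for the CNN can be illustrated with a minimal sketch. This is not the thesis's implementation: it assumes a 2D image, a binary scribble mask, and a hypothetical weighting parameter `lam` trading spatial against intensity distance, computed with Dijkstra's algorithm on a 4-connected grid:

```python
import heapq
import numpy as np

def geodesic_distance(image, scribble_mask, lam=1.0):
    """Geodesic distance from user scribbles over a 2D image.

    The step cost between 4-connected neighbours combines a unit
    spatial distance with the intensity difference, weighted by `lam`
    (an illustrative parameter). With lam=0 this reduces to a plain
    Manhattan distance transform.
    """
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    # Seed the priority queue with all scribbled pixels at distance 0.
    for y, x in zip(*np.nonzero(scribble_mask)):
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, y, x))
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                step = 1.0 + lam * abs(float(image[ny, nx]) - float(image[y, x]))
                if d + step < dist[ny, nx]:
                    dist[ny, nx] = d + step
                    heapq.heappush(heap, (d + step, ny, nx))
    return dist
```

The resulting distance map is typically stacked with the image as an extra input channel, so the network sees both the intensities and the user's guidance.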

    Fully-automated deep learning pipeline for 3D fetal brain ultrasound

    Three-dimensional ultrasound (3D US) imaging has shown significant potential for in-utero assessment of the development of the fetal brain. However, in spite of the potential benefits of this modality over its two-dimensional (2D) counterpart, its widespread adoption remains largely limited by the difficulty associated with its analysis. While more established 3D neuroimaging modalities, such as Magnetic Resonance Imaging (MRI), have circumvented similar challenges thanks to reliable, automated neuroimage analysis pipelines, there is currently no comparable pipeline solution for 3D neurosonography. With the goal of facilitating medical research and encouraging the adoption of 3D US for clinical assessment, the main objective of my doctoral thesis is to design, develop, and validate a set of fundamental automated modules that comprise a fast, robust, fully automated, general-purpose pipeline for the neuroimage analysis of fetal 3D US scans. For the first module, I propose the fetal Brain Extraction Network (fBEN), a fully-automated, end-to-end 3D Convolutional Neural Network (CNN) with an encoder-decoder architecture. It predicts an accurate binary brain mask for the automated extraction of the fetal brain from standard clinical 3D US scans. For the second module, I propose the fetal Brain Alignment Network (fBAN), a fully-automated, end-to-end regression network with a cascade architecture that accurately predicts the alignment parameters required to rigidly align standard clinical 3D US scans to a canonical reference space. Finally, for the third module, I propose the fetal Brain Fingerprinting Network (fBFN), a fully-automated, end-to-end network based on a Variational AutoEncoder (VAE) architecture, which encodes the entire structural information of the 3D brain into a relatively small set of parameters in a continuously distributed latent space.
It is a general-purpose solution aimed at facilitating the assessment of 3D US scans by recharacterising the fetal brain into a representation that is easier to analyse. After exhaustive analysis, each module of this pipeline has proven to achieve state-of-the-art performance that is consistent across a wide gestational range, as well as robust to image quality, while requiring minimal pre-processing. Additionally, this pipeline has been designed to be modular and easy to modify and expand upon, with the aim of making it as simple as possible for other researchers to develop new tools and adapt it to their needs. This combination of performance, flexibility, and ease of use may have the potential to help 3D US become the preferred imaging modality for researching and assessing fetal development.
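The VAE at the heart of a module like fBFN compresses a scan into a small latent vector by sampling through the reparameterization trick. The sketch below is a generic illustration of that sampling step and of the KL regulariser for a diagonal Gaussian posterior, not the fBFN architecture itself; a real model would produce `mu` and `logvar` with a convolutional encoder:

```python
import numpy as np

def reparameterize(mu, logvar, rng=None):
    """VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).

    Sampling this way keeps the randomness in `eps`, so gradients can
    flow through `mu` and `logvar` during training.
    """
    rng = np.random.default_rng(rng)
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * logvar) * eps

def kl_divergence(mu, logvar):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.

    This is the closed-form regulariser added to the reconstruction
    loss in the standard VAE objective; it is zero iff mu=0, logvar=0.
    """
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=-1)
```

The latent vector `z` is the compact "fingerprint": downstream analysis operates on these few continuous parameters instead of the full 3D volume.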

    Symbiotic deep learning for medical image analysis with applications in real-time diagnosis for fetal ultrasound screening

    The last hundred years have seen a monumental rise in the power and capability of machines to perform intelligent tasks in place of human operators. This rise is not expected to slow down any time soon, and what this means for society and humanity as a whole remains to be seen. The overwhelming notion is that, with the right goals in mind, the growing influence of machines on our everyday tasks will enable humanity to give more attention to the truly groundbreaking challenges that we all face together. This will usher in a new age of human-machine collaboration, in which humans and machines may work side by side to achieve greater heights for all of humanity. Intelligent systems are useful in isolation, but their true benefits come to the fore in complex systems where the interaction between humans and machines can be made seamless, and it is this goal of symbiosis between human and machine, which may democratise complex knowledge, that motivates this thesis. In the recent past, data-driven methods have come to the fore and now represent the state of the art in many different fields. Alongside the shift from rule-based towards data-driven methods, we have also seen a shift in how humans interact with these technologies. Human-computer interaction is changing in response to data-driven methods, and new techniques must be developed to enable the same symbiosis between human and machine for data-driven methods as for previous formula-driven technology. We address five key challenges which need to be overcome for data-driven human-in-the-loop computing to reach maturity.
These are (1) the ‘Categorisation Challenge’, where we examine existing work and form a taxonomy of the different methods being utilised for data-driven human-in-the-loop computing; (2) the ‘Confidence Challenge’, where data-driven methods must communicate interpretable beliefs in how confident their predictions are; (3) the ‘Complexity Challenge’, where the aim of reasoned communication becomes increasingly important as the complexity of both the tasks and the methods that solve them increases; (4) the ‘Classification Challenge’, in which we look at how complex methods can be separated in order to provide greater reasoning in complex classification tasks; and finally (5) the ‘Curation Challenge’, where we challenge the assumptions around bottleneck creation for the development of supervised learning methods.

    Quantification of fetal brain development from ultrasound images using interpretable deep learning

    Ultrasound images of the fetal brain are routinely acquired during pregnancy to assess the health and development of the fetus. It is standard clinical practice to obtain simple in-plane measurements in 2D images, which cannot capture the complex structural development of the fetal brain during gestation. Therefore, in this thesis, I propose deep learning-based methods that can improve the understanding of brain development from ultrasound. Firstly, I studied the use of deep learning for subcortical structure segmentation. Deep learning models typically need a reasonably large number of samples to effectively learn the task. However, as subcortical structure segmentation is not a task typically performed in the clinic, it is challenging to obtain large sample numbers with pixel-wise annotations. For this reason, I explored subcortical segmentation in a low-data regime, demonstrating that segmentation performance close to intra-observer variability can be obtained with only a handful of manual annotations. The developed segmentation models were then applied to a large number of volumes from a diverse, healthy population, generating ultrasound-specific growth curves of subcortical development. Predicting the gestational age of a fetus based on brain morphology can also be used as a way to quantify developmental patterns. While conventional deep learning methods can be used for this task, they typically cannot explain their reasoning process or provide insight into the image parts that contributed to the final prediction. However, for clinical applications, it is vital to understand model behaviour to identify failure modes and gain patients’ trust. For this reason, I developed an age prediction model for fetal ultrasound that incorporates guided attention in the architecture to make interpretable and local brain age predictions. The attention is regularised with a segmentation loss, forcing the network to focus on specific parts of the image.
I demonstrate that guiding the network to focus on age-discriminative regions (the cortical plate and cerebellum) results in significantly improved prediction performance. Finally, I propose an alternative approach to interpretable brain age prediction that uses an inherently interpretable network, as opposed to a post-hoc explanation. The network learns a set of representative examples from the training set (prototypes) and predicts the age of a new sample based on the distances to these prototypes. The image-level distances are constructed from patch-level distances, which are structurally matched using optimal transport. The prototypes and distance computations can both be visualised, providing an understanding of the reasoning process of the model.
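The prototype-based prediction idea can be sketched generically: given an embedding of a new scan, distances to learned prototypes are converted into weights, and the predicted age is a weighted average of the prototypes' ages. The names and the softmax weighting below are illustrative assumptions, not the thesis's exact formulation (which builds image-level distances from optimally transported patch distances):

```python
import numpy as np

def prototype_age_prediction(embedding, prototypes, prototype_ages, tau=1.0):
    """Predict age as a similarity-weighted average over prototype ages.

    Euclidean distances in the learned embedding space are mapped to
    weights via a softmax over negative distances (temperature `tau`),
    so the nearest prototypes dominate the prediction. The returned
    weights can be inspected directly to explain each prediction.
    """
    d = np.linalg.norm(prototypes - embedding, axis=1)
    w = np.exp(-d / tau)
    w /= w.sum()
    return float(np.dot(w, prototype_ages)), w
```

Because every weight corresponds to a concrete, visualisable training example, the prediction comes with a built-in explanation: "this scan looks most like these prototypes of these ages".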

    Computer-Assisted Planning and Robotics in Epilepsy Surgery

    Epilepsy is a severe and devastating condition that affects ~1% of the population. Around 30% of these patients are drug-refractory. Epilepsy surgery may provide a cure in selected individuals with drug-resistant focal epilepsy if the epileptogenic zone can be identified and safely resected or ablated. Stereoelectroencephalography (SEEG) is a diagnostic procedure that is performed to aid in the delineation of the seizure onset zone when non-invasive investigations are not sufficiently informative or discordant. Utilizing a multi-modal imaging platform, a novel computer-assisted planning (CAP) algorithm was adapted, applied and clinically validated for optimizing safe SEEG trajectory planning. In an initial retrospective validation study, 13 patients with 116 electrodes were enrolled and safety parameters between automated CAP trajectories and expert manual plans were compared. The automated CAP trajectories returned statistically significant improvements in all of the compared clinical metrics including overall risk score (CAP 0.57 +/- 0.39 (mean +/- SD) and manual 1.00 +/- 0.60, p < 0.001). Assessment of the inter-rater variability revealed there was no difference in external expert surgeon ratings. Both manual and CAP electrodes were rated as feasible in 42.8% (42/98) of cases. CAP was able to provide feasible electrodes in 19.4% (19/98), whereas manual planning was able to generate a feasible electrode in 26.5% (26/98) when the alternative generation method was not feasible. Based on the encouraging results from the retrospective analysis a prospective validation study including an additional 125 electrodes in 13 patients was then undertaken to compare CAP to expert manual plans from two neurosurgeons. The manual plans were performed separately and blindly from the CAP. 
Computer-generated trajectories were found to carry lower risk scores (absolute difference of 0.04 mm (95% CI = -0.42-0.01), p = 0.04) and were subsequently implanted in all cases without complication. The pipeline has been fully integrated into the clinical service and has now replaced manual SEEG planning at our institution. Further efforts were then focused on the distillation of optimal entry and target points for common SEEG trajectories and applying machine learning methods to develop an active learning algorithm to adapt to individual surgeon preferences. Thirty-two patients were prospectively enrolled in the study. The first 12 patients underwent prospective CAP planning and implantation following the pipeline outlined in the previous study. These patients were used as a training set, and all of the 108 electrodes after successful implantation were normalized to atlas space to generate ‘spatial priors’, using a K-Nearest Neighbour (K-NN) classifier. A subsequent test set of 20 patients (210 electrodes) was then used to prospectively validate the spatial priors. From the test set, 78% (123/157) of the implanted trajectories passed through both the entry and target spatial priors defined from the training set. To improve the generalizability of the spatial priors to other neurosurgical centres undertaking SEEG, and to take into account the potential for changing institutional practices, an active learning algorithm was implemented. The K-NN classifier was shown to dynamically learn and refine the spatial priors. The progressive refinement of CAP SEEG planning outlined in this and previous studies has culminated in an algorithm that not only optimizes the surgical heuristics and risk scores related to SEEG planning but can also learn from previous experience. Overall, safe and feasible trajectory schemata were returned in 30% of the time required for manual SEEG planning.
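The spatial-prior idea can be illustrated with a minimal K-Nearest Neighbour classifier in atlas space. This is a generic sketch under assumed names, not the clinical implementation: previously implanted entry or target points, each carrying a label (for example, which common trajectory they belong to), vote on the label of a candidate point:

```python
import numpy as np

def knn_predict(train_pts, train_labels, query, k=3):
    """Majority-vote K-NN over points normalized to atlas space.

    A candidate entry/target point is classified by the labels of its
    k nearest previously implanted points; re-running with each newly
    implanted case appended to `train_pts` is the simplest form of the
    incremental refinement described above.
    """
    d = np.linalg.norm(train_pts - query, axis=1)   # Euclidean distances
    nearest = np.argsort(d)[:k]                     # indices of k closest points
    vals, counts = np.unique(train_labels[nearest], return_counts=True)
    return vals[np.argmax(counts)]                  # majority label
```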
Computer-assisted planning was then applied to optimize laser interstitial thermal therapy (LITT) trajectory planning, which is a minimally invasive alternative to open mesial temporal resections, focal lesion ablation and anterior two-thirds corpus callosotomy. We describe and validate the first CAP algorithm for mesial temporal LITT ablations for epilepsy treatment. Twenty-five patients who had previously undergone LITT ablations at a single institution, with a median follow-up of 2 years, were included. Trajectory parameters for the CAP algorithm were derived from expert consensus to maximize the distance from vasculature and the ablation of the amygdalohippocampal complex, and to minimize collateral damage to adjacent brain structures whilst avoiding transgression of the ventricles and sulci. Trajectory parameters were also optimized to reduce the drilling angle to the skull and the overall catheter length. Simulated cavities attributable to the CAP trajectories were calculated using a 5-15 mm ablation diameter. In comparison to manually planned and implemented LITT trajectories, CAP resulted in a significant increase in the percentage ablation of the amygdalohippocampal complex (manual 57.82 +/- 15.05% (mean +/- S.D.)) and unablated medial hippocampal head depth (manual 4.45 +/- 1.58 mm (mean +/- S.D.), CAP 1.19 +/- 1.37 mm (mean +/- S.D.), p = 0.0001). As LITT ablation of the mesial temporal structures is a novel procedure, there are no established standards for trajectory planning. A data-driven machine learning approach was, therefore, applied to identify hitherto unknown CAP trajectory parameter combinations. All possible combinations of planning parameters were calculated, culminating in 720 unique combinations per patient. Linear regression and random forest machine learning algorithms were trained on half of the data set (3800 trajectories) and tested on the remaining unseen trajectories (3800 trajectories).
The linear regression and random forest methods returned good predictive accuracies, with both returning Pearson correlations of ρ = 0.7 and root mean squared errors of 0.13 and 0.12, respectively. The machine learning algorithm revealed that the optimal entry points were centred over the junction of the inferior occipital, middle temporal and middle occipital gyri. The optimal target points were anterior and medial translations of the centre of the amygdala. A large multicenter external validation study of 95 patients was then undertaken, comparing the manually planned and implemented trajectories, CAP trajectories targeting the centre of the amygdala, CAP trajectories using the parameters derived from expert consensus, and CAP trajectories utilizing the machine learning-derived parameters. Three external blinded expert surgeons were then selected to undertake feasibility ratings and preference rankings of the trajectories. CAP-generated trajectories resulted in a significant improvement in many of the planning metrics, notably the risk score (manual 1.3 +/- 0.1 (mean +/- S.D.), CAP 1.1 +/- 0.2 (mean +/- S.D.), p<0.000) and overall ablation of the amygdala (manual 45.3 +/- 22.2% (mean +/- S.D.), CAP 64.2 +/- 20% (mean +/- S.D.), p<0.000). Blinded external feasibility ratings revealed that manual trajectories were less preferable than CAP-planned trajectories, with an estimated probability of being ranked 4th (lowest) of 0.62. Traditional open corpus callosotomy requires a midline craniotomy, interhemispheric dissection and disconnection of the rostrum, genu and body of the corpus callosum. In cases where drop attacks persist, a completion corpus callosotomy to disrupt the remaining fibres in the splenium is then performed. The emergence of LITT technology has raised the possibility of being able to undertake this procedure in a minimally invasive fashion, without the need for a craniotomy, using two or three individual trajectories.
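The regression step, predicting a quality score for a trajectory from a combination of planning parameters, can be sketched with ordinary least squares. The feature and score names below are hypothetical stand-ins for the actual planning parameters; a random forest would be swapped into the same train/predict interface:

```python
import numpy as np

def fit_linear_model(params, scores):
    """Ordinary least squares: learn weights (with intercept) mapping
    planning-parameter combinations to a trajectory quality score."""
    X = np.column_stack([np.ones(len(params)), params])  # prepend bias column
    w, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return w

def predict(w, params):
    """Score unseen parameter combinations with the learned weights."""
    X = np.column_stack([np.ones(len(params)), params])
    return X @ w
```

Training on half of the parameter combinations and scoring the held-out half, as in the study design, then amounts to calling `fit_linear_model` on the first split and `predict` on the second.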
Early case series have shown LITT anterior two-thirds corpus callosotomy to be safe and efficacious. Whole-brain probabilistic tractography connectomes were generated utilizing 3-Tesla multi-shell imaging data and constrained spherical deconvolution (CSD). Two independent blinded expert neurosurgeons with experience of performing the procedure using LITT then planned the trajectories in each patient following their current clinical practice. Automated trajectories returned a significant reduction in the risk score (manual 1.3 +/- 0.1 (mean +/- S.D.), CAP 1.1 +/- 0.1 (mean +/- S.D.), p<0.000). Finally, we investigate the different methods of surgical implantation for SEEG electrodes. As an initial study, a systematic review and meta-analysis of the literature to date was performed. This revealed that a wide variety of implantation methods, including traditional frame-based, frameless, robotic and custom 3D-printed jigs, were being used in clinical practice. Of concern, all comparative reports from institutions that had changed from one implantation method to another, such as following the introduction of robotic systems, did not undertake parallel-group comparisons. This suggests that patients may have been exposed to risks associated with learning curves and potential harms related to the new device until its efficacy was known. A pragmatic randomized control trial of a novel non-CE-marked robotic trajectory guidance system (iSYS1) was then devised. Before clinical implantations began, a series of pre-clinical investigations utilizing 3D-printed phantom heads from previously implanted patients was performed to provide pilot data and to assess the surgical learning curve. The surgeons had comparatively little clinical experience with the new robotic device, which replicates the introduction of such novel technologies to clinical practice.
The study confirmed that the learning curve with the iSYS1 device was minimal and that the accuracies and workflow were similar to those of the conventional manual method. The randomized control trial represents the first of its kind for stereotactic neurosurgical procedures. Thirty-two patients were enrolled, with 16 patients randomized to the iSYS1 intervention arm and 16 patients to the manual implantation arm. The intervention allocation was concealed from the patients. The surgical and research team could not be blinded. Trial management, independent data monitoring and trial steering committees were convened at four points during the trial (after every 8 patients implanted). Based on the high level of accuracy required for both methods, the main distinguishing factor would be the time to achieve alignment to the prespecified trajectory. The primary outcome for comparison, therefore, was the time for individual SEEG electrode implantation. Secondary outcomes included the implantation accuracy derived from the post-operative CT scan, and infection, intracranial haemorrhage and neurological deficit rates. Overall, 32 patients (328 electrodes) completed the trial (16 in each intervention arm) and the baseline demographics were broadly similar between the two groups. The time for individual electrode implantation was significantly less with the iSYS1 device (median of 3.36 minutes (95% CI 5.72 to 7.07)) than for the PAD group (median of 9.06 minutes (95% CI 8.16 to 10.06), p=0.0001). Target point accuracy was significantly greater with the PAD (median of 1.58 mm (95% CI 1.38 to 1.82)) compared to the iSYS1 (median of 1.16 mm (95% CI 1.01 to 1.33), p=0.004). The difference between the target point accuracies is not clinically significant for SEEG, but may have implications for procedures such as deep brain stimulation that require higher placement accuracy. All of the electrodes achieved their respective intended anatomical targets.
In 12 of 16 patients following robotic implantations, and in 10 of 16 following manual PAD implantations, a seizure onset zone was identified and resection recommended. The aforementioned systematic review and meta-analysis were updated to include additional studies published during the trial. In this context, the iSYS1 device entry and target point accuracies were similar to those reported in other published studies of robotic devices, including the ROSA, Neuromate and iSYS1. The PAD accuracies, however, outperformed the previously published results for other frameless stereotaxy methods. In conclusion, the presented studies report the integration and validation of a complex clinical decision support software into the clinical neurosurgical workflow for SEEG planning. The stereotactic planning platform was further refined by integrating machine learning techniques, and was also extended towards the optimisation of LITT trajectories for ablation of mesial temporal structures and corpus callosotomy. The platform was then used to seamlessly integrate with a novel trajectory planning software to effectively and safely guide the implantation of the SEEG electrodes. Through a single-blinded randomised control trial, the iSYS1 device was shown to reduce the time taken for individual electrode insertion. Taken together, this work presents and validates the first fully integrated stereotactic trajectory planning platform that can be used for both SEEG and LITT trajectory planning, followed by surgical implantation through the use of a novel trajectory guidance system.