11 research outputs found

    Fetal Brain Biometric Measurements on 3D Super-Resolution Reconstructed T2-Weighted MRI: An Intra- and Inter-observer Agreement Study.

    We compare two-dimensional (2D) fetal brain biometry on magnetic resonance (MR) images using orthogonal 2D T2-weighted sequences (T2WSs) vs. one 3D super-resolution (SR) reconstructed volume, and evaluate the level of confidence and concordance between an experienced pediatric radiologist (obs1) and a junior radiologist (obs2). Twenty-five normal fetal brain MRI scans (18-34 weeks of gestation) including orthogonal 3-mm-thick T2WSs were analyzed retrospectively. One 3D SR volume was reconstructed per subject from multiple series of T2WSs. The two observers performed 11 2D biometric measurements (specifying their level of confidence) on T2WS and SR volumes. Measurements were compared using the paired Wilcoxon rank sum test between observers for each dataset (T2WS and SR) and between T2WS and SR for each observer. Bland-Altman plots were used to assess the agreement between each pair of measurements. Measurements were made with low confidence in three subjects by obs1 and in 11 subjects by obs2 (mostly concerning the length of the corpus callosum on T2WS). Inter-rater intra-dataset comparisons showed no significant difference (p > 0.05), except for brain axial biparietal diameter (BIP) on T2WS and for brain and skull coronal BIP and coronal transverse cerebellar diameter (DTC) on SR. None of these remained significant after correction for multiple comparisons. Inter-dataset intra-rater comparisons showed statistical differences in brain axial and coronal BIP for both observers, in skull coronal BIP for obs1, and in axial and coronal DTC for obs2. After correction for multiple comparisons, only axial brain BIP remained significantly different, but the differences were small (2.95 ± 1.73 mm). SR thus allows fetal brain biometry comparable to conventional T2WS while improving the level of confidence in the measurements and requiring only a single reconstructed volume.
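The Bland-Altman agreement analysis described above reduces to a short computation: the mean bias between paired measurements and its 95% limits of agreement. A minimal sketch follows; the biparietal-diameter values are hypothetical, not from the study:

```python
import statistics

def bland_altman(a, b):
    """Bland-Altman agreement statistics for paired measurements.

    Returns the mean bias (a - b) and the 95% limits of agreement
    (bias +/- 1.96 * SD of the paired differences).
    """
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical biparietal-diameter measurements (mm) on T2WS vs. SR
t2ws = [62.1, 63.4, 58.9, 70.2, 65.5]
sr   = [60.0, 61.1, 56.3, 67.0, 62.8]
bias, (lo, hi) = bland_altman(t2ws, sr)
print(f"bias = {bias:.2f} mm, limits of agreement = [{lo:.2f}, {hi:.2f}]")
```

A systematic bias shows up as a mean difference far from zero; narrow limits of agreement indicate the two datasets can be used interchangeably.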

    RimNet: A deep 3D multimodal MRI architecture for paramagnetic rim lesion assessment in multiple sclerosis.

    In multiple sclerosis (MS), the presence of a paramagnetic rim at the edge of non-gadolinium-enhancing lesions indicates perilesional chronic inflammation. Patients featuring a higher paramagnetic rim lesion burden tend to have more aggressive disease. The objective of this study was to develop and evaluate a convolutional neural network (CNN) architecture (RimNet) for automated detection of paramagnetic rim lesions in MS employing multiple magnetic resonance (MR) imaging contrasts. Imaging data were acquired at 3 Tesla on three different scanners from two different centers, totaling 124 MS patients, and studied retrospectively. Paramagnetic rim lesion detection was independently assessed by two expert raters on T2*-phase images, yielding 462 rim-positive (rim+) and 4857 rim-negative (rim-) lesions. RimNet was designed using 3D patches centered on candidate lesions in 3D-EPI phase and 3D FLAIR as input to two network branches. The interconnection of branches at both the first network blocks and the last fully connected layers favors the extraction of low and high-level multimodal features, respectively. RimNet's performance was quantitatively evaluated against experts' evaluation from both lesion-wise and patient-wise perspectives. For the latter, patients were categorized based on a clinically relevant threshold of 4 rim+ lesions per patient. The individual prediction capabilities of the images were also explored and compared (DeLong test) by testing a CNN trained with one image as input (unimodal). The unimodal exploration showed the superior performance of 3D-EPI phase and 3D-EPI magnitude images in the rim+/- classification task (AUC = 0.913 and 0.901), compared to the 3D FLAIR (AUC = 0.855, Ps < 0.0001). The proposed multimodal RimNet prototype clearly outperformed the best unimodal approach (AUC = 0.943, P < 0.0001). The sensitivity and specificity achieved by RimNet (70.6% and 94.9%, respectively) are comparable to those of experts at the lesion level. 
In the patient-wise analysis, RimNet performed with an accuracy of 89.5% and a Dice coefficient (or F1 score) of 83.5%. The proposed prototype showed promising performance, supporting the use of RimNet to speed up and standardize paramagnetic rim lesion analysis in MS.
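The patient-wise evaluation above dichotomizes patients at the clinically relevant threshold of 4 rim+ lesions and scores the predicted labels against the reference. A minimal sketch, using hypothetical lesion counts:

```python
def patientwise_scores(pred_counts, true_counts, threshold=4):
    """Dichotomize patients by rim+ lesion count at `threshold` and
    score predicted labels against the reference (accuracy, F1/Dice)."""
    pred = [c >= threshold for c in pred_counts]
    true = [c >= threshold for c in true_counts]
    tp = sum(p and t for p, t in zip(pred, true))
    fp = sum(p and not t for p, t in zip(pred, true))
    fn = sum(t and not p for p, t in zip(pred, true))
    tn = sum(not p and not t for p, t in zip(pred, true))
    acc = (tp + tn) / len(pred)
    f1 = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    return acc, f1

# Hypothetical per-patient rim+ lesion counts: automated vs. expert
acc, f1 = patientwise_scores([5, 0, 4, 1, 6, 2], [4, 1, 3, 0, 7, 2])
```

With binary labels, the Dice coefficient and the F1 score are the same quantity, which is why the abstract names them interchangeably.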

    Fetal brain tissue annotation and segmentation challenge results.

    In-utero fetal MRI is emerging as an important tool in the diagnosis and analysis of the developing human brain. Automatic segmentation of the developing fetal brain is a vital step in the quantitative analysis of prenatal neurodevelopment in both the research and clinical context. However, manual segmentation of cerebral structures is time-consuming and prone to error and inter-observer variability. Therefore, we organized the Fetal Tissue Annotation (FeTA) Challenge in 2021 in order to encourage the development of automatic segmentation algorithms on an international level. The challenge utilized the FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into seven different tissues (external cerebrospinal fluid, gray matter, white matter, ventricles, cerebellum, brainstem, deep gray matter). Twenty international teams participated in this challenge, submitting a total of 21 algorithms for evaluation. In this paper, we provide a detailed analysis of the results from both a technical and clinical perspective. All participants relied on deep learning methods, mainly U-Nets, with some variability present in the network architecture, optimization, and image pre- and post-processing. The majority of teams used existing medical imaging deep learning frameworks. The main differences between the submissions were the fine-tuning done during training and the specific pre- and post-processing steps performed. The challenge results showed that almost all submissions performed similarly. Four of the top five teams used ensemble learning methods. However, one team's algorithm, built on an asymmetrical U-Net architecture, significantly outperformed the other submissions. This paper provides a first-of-its-kind benchmark for future automatic multi-tissue segmentation algorithms for the developing human brain in utero.
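Multi-tissue challenges like FeTA are typically scored with a per-label Dice similarity coefficient. A minimal sketch over toy flattened label maps (not challenge data):

```python
def dice_per_label(pred, ref, labels):
    """Dice similarity coefficient per tissue label for flattened
    segmentation maps (e.g. the seven FeTA tissue classes)."""
    scores = {}
    for lab in labels:
        p = [v == lab for v in pred]
        r = [v == lab for v in ref]
        inter = sum(a and b for a, b in zip(p, r))   # |P ∩ R|
        denom = sum(p) + sum(r)                      # |P| + |R|
        scores[lab] = 2 * inter / denom if denom else 1.0
    return scores

# Toy flattened label maps with three labels
pred = [0, 1, 1, 2, 2, 2, 0, 1]
ref  = [0, 1, 2, 2, 2, 2, 0, 1]
scores = dice_per_label(pred, ref, labels=[0, 1, 2])
```

Averaging the per-label scores gives a single ranking metric while still exposing which tissue classes an algorithm struggles with.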

    TBSS++: A novel computational method for Tract-Based Spatial Statistics.

    Diffusion-weighted magnetic resonance imaging (dMRI) is widely used to assess the brain white matter. One of the most common computations in dMRI involves cross-subject tract-specific analysis, whereby dMRI-derived biomarkers are compared between cohorts of subjects. The accuracy and reliability of these studies hinge on the ability to compare precisely the same white matter tracts across subjects. This is an intricate and error-prone computation. Existing computational methods such as Tract-Based Spatial Statistics (TBSS) suffer from a host of shortcomings and limitations that can seriously undermine the validity of the results. We present a new computational framework that overcomes the limitations of existing methods via (i) accurate segmentation of the tracts and (ii) precise registration of data from different subjects/scans. The registration is based on fiber orientation distributions. To further improve the alignment of cross-subject data, we create detailed atlases of white matter tracts. These atlases serve as an unbiased reference space where the data from all subjects are registered for comparison. Extensive evaluations show that, compared with TBSS, our proposed framework offers significantly higher reproducibility and robustness to data perturbations. Our method promises a drastic improvement in the accuracy and reproducibility of the cross-subject dMRI studies that are routinely used in neuroscience and medical research.
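Once all subjects are registered to the common atlas space, the cross-subject comparison itself is simple: per-voxel cohort statistics on the tract-specific metric. A minimal sketch of that final step (the FA profiles below are hypothetical, and the registration that aligns them is the hard part the paper addresses):

```python
import statistics

def cohort_mean_difference(cohort_a, cohort_b):
    """Voxel-wise difference of cohort means for a tract-specific
    metric (e.g. FA) sampled in a common atlas space. Each cohort is
    a list of per-subject value lists aligned to the same atlas voxels."""
    mean_a = [statistics.mean(v) for v in zip(*cohort_a)]
    mean_b = [statistics.mean(v) for v in zip(*cohort_b)]
    return [a - b for a, b in zip(mean_a, mean_b)]

# Hypothetical FA profiles of one tract for two 3-subject cohorts
patients = [[0.40, 0.45, 0.50], [0.42, 0.43, 0.52], [0.38, 0.47, 0.48]]
controls = [[0.44, 0.50, 0.55], [0.46, 0.49, 0.57], [0.42, 0.51, 0.53]]
diff = cohort_mean_difference(patients, controls)
```

Any misregistration mixes values from different tracts into the same voxel, which is exactly how alignment errors corrupt the downstream statistics.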

    Direct segmentation of brain white matter tracts in diffusion MRI.

    The brain white matter consists of a set of tracts that connect distinct regions of the brain. Segmentation of these tracts is often needed for clinical and research studies. Diffusion-weighted MRI offers unique contrast to delineate these tracts. However, existing segmentation methods rely on intermediate computations such as tractography or estimation of the fiber orientation density. These intermediate computations, in turn, entail complex processing that can introduce unnecessary errors. Moreover, they often require dense multi-shell measurements that are unavailable in many clinical and research applications. As a result, current methods suffer from low accuracy and poor generalizability. Here, we propose a new deep learning method that segments these tracts directly from the diffusion MRI data, thereby sidestepping the intermediate computation errors. Our experiments show that this method can achieve segmentation accuracy on par with state-of-the-art methods (mean Dice Similarity Coefficient of 0.826). Compared with the state of the art, our method offers far superior generalizability to undersampled data that are typical of clinical studies and to data obtained with different acquisition protocols. Moreover, we propose a new method for detecting inaccurate segmentations and show that it is more accurate than standard methods based on uncertainty quantification. The new methods can serve many critically important clinical and scientific applications that require accurate and reliable non-invasive segmentation of white matter tracts.
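The standard uncertainty-quantification baseline the paper compares against can be sketched as a mean predictive entropy over voxel probabilities; high entropy flags a potentially inaccurate segmentation. This is the conventional baseline, not the paper's proposed detector:

```python
import math

def mean_predictive_entropy(probs):
    """Mean binary entropy (bits) of per-voxel foreground probabilities;
    a standard proxy for segmentation uncertainty."""
    def h(p):
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return sum(h(p) for p in probs) / len(probs)

# A confident prediction (probabilities near 0 or 1) vs. an
# uncertain one (probabilities near 0.5), on hypothetical voxels
confident = mean_predictive_entropy([0.98, 0.02, 0.95, 0.01])
uncertain = mean_predictive_entropy([0.55, 0.48, 0.60, 0.52])
```

A threshold on this score gives a simple accept/review decision rule, which is the kind of detector the abstract reports improving on.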

    Ductile fracture modelling using local approaches, application to steel welded joints in nuclear components

    The simulation of crack propagation in ductile materials using the finite element method requires appropriate models for describing the nucleation, growth, and coalescence of voids in a robust way. Local models, such as Rousselier and Gurson-Tvergaard-Needleman (GTN), are now available in finite element software such as Cast3m [3]. A large number of models of this kind can be found in the literature, but they suffer from numerical drawbacks. First, they often show a marked mesh dependency of the solution. Second, volumetric locking of the elements is common in elastoplastic damage models under near-incompressible conditions. These two major issues must be solved in order to ensure the robustness of such approaches. Our goal is to propose a model able to handle both problems. The mesh dependency can be solved by using regularization techniques, such as implicit gradient enrichment of an internal variable [1]. The locking can be treated either by selective integration techniques or by a mixed formulation [2], which adds the volume variation as a new variable in addition to the displacement. The proposed models, based on the existing Rousselier and GTN models in Cast3m [3], address both issues using an implicit gradient enrichment of the damage variable, and include a mixed formulation in the local models to ensure the desired robustness. In this presentation, the new models and their implementation are first presented. In a second part, simulations of crack propagation with the proposed models, for axisymmetric and compact tension specimens in 2D using the Cast3m finite element software [3], illustrate the relevance of the approach.
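For reference, the GTN yield function at the core of such local models can be evaluated directly. A minimal sketch; the q-parameters below are the common Tvergaard defaults, not necessarily those used in this work:

```python
import math

def gtn_yield(sigma_eq, sigma_m, sigma0, f_star, q1=1.5, q2=1.0, q3=2.25):
    """Gurson-Tvergaard-Needleman yield function. Phi = 0 defines the
    yield surface of the porous material: sigma_eq is the von Mises
    stress, sigma_m the mean (hydrostatic) stress, sigma0 the matrix
    flow stress, f_star the effective void volume fraction."""
    return ((sigma_eq / sigma0) ** 2
            + 2.0 * q1 * f_star * math.cosh(1.5 * q2 * sigma_m / sigma0)
            - (1.0 + q3 * f_star ** 2))

# With no voids (f_star = 0), GTN reduces to the von Mises criterion:
phi_dense = gtn_yield(sigma_eq=300.0, sigma_m=100.0, sigma0=300.0, f_star=0.0)
# A porous material yields earlier under the same stress state:
phi_porous = gtn_yield(sigma_eq=300.0, sigma_m=100.0, sigma0=300.0, f_star=0.05)
```

The cosh term makes the yield surface pressure-sensitive, which is what couples void growth to hydrostatic stress and drives the damage evolution in these models.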

    Deep learning microstructure estimation of developing brains from diffusion MRI: A newborn and fetal study.

    Diffusion-weighted magnetic resonance imaging (dMRI) is widely used to assess the brain white matter. Fiber orientation distribution functions (FODs) are a common way of representing the orientation and density of white matter fibers. However, with standard FOD computation methods, accurate estimation requires a large number of measurements that usually cannot be acquired for newborns and fetuses. We propose to overcome this limitation by using a deep learning method to map as few as six diffusion-weighted measurements to the target FOD. To train the model, we use FODs computed from multi-shell high angular resolution measurements as the target. Extensive quantitative evaluations show that the new deep learning method, using significantly fewer measurements, achieves results comparable or superior to standard methods such as Constrained Spherical Deconvolution and to two state-of-the-art deep learning methods. For voxels with one and two fibers, respectively, our method shows an agreement rate in terms of the number of fibers of 77.5% and 22.2%, which is 3% and 5.4% higher than other deep learning methods, and an angular error of 10° and 20°, which is 6° and 5° lower than other deep learning methods. To determine baselines for assessing the performance of our method, we compute agreement metrics using densely sampled newborn data. Moreover, we demonstrate the generalizability of the new deep learning method across scanners, acquisition protocols, and anatomy on two clinical external datasets of newborns and fetuses. We validate fetal FODs, successfully estimated for the first time with deep learning, using post-mortem histological data. Our results show the advantage of deep learning in computing the fiber orientation density of the developing brain from in-vivo dMRI measurements that are often very limited due to constrained acquisition times. Our findings also highlight the intrinsic limitations of dMRI for probing the developing brain microstructure.
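The angular-error metric reported above compares estimated and reference fiber directions, which are antipodally symmetric (a fiber has no polarity). A minimal sketch:

```python
import math

def angular_error_deg(u, v):
    """Angle in degrees between two fiber directions, treating
    antipodal vectors as identical (hence the abs on the dot product)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    c = min(1.0, abs(dot) / (nu * nv))  # clamp against rounding error
    return math.degrees(math.acos(c))

# A direction and its negation are the same fiber: zero error
e_antipodal = angular_error_deg((1, 0, 0), (-1, 0, 0))
# Two directions 45 degrees apart
e_oblique = angular_error_deg((1, 0, 0), (1, 1, 0))
```

Averaging this error over the matched FOD peaks of each voxel yields the per-fiber angular errors quoted in the abstract.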

    Through-Plane Super-Resolution With Autoencoders in Diffusion Magnetic Resonance Imaging of the Developing Human Brain.

    Fetal brain diffusion magnetic resonance images (MRI) are often acquired with a lower through-plane than in-plane resolution. This anisotropy is often overcome by classical upsampling methods such as linear or cubic interpolation. In this work, we employ an unsupervised learning algorithm using an autoencoder neural network for single-image through-plane super-resolution by leveraging a large amount of data. Our framework, which can also be used for slice outlier replacement, outperformed conventional interpolations quantitatively and qualitatively on pre-term newborns of the developing Human Connectome Project. The evaluation was performed on both the original diffusion-weighted signal and the estimated diffusion tensor maps. A byproduct of our autoencoder was its ability to act as a denoiser. The network was able to generalize to fetal data with different levels of motion, and we qualitatively showed its consistency, hence supporting the relevance of pre-term datasets for improving the processing of fetal brain images.
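The classical baseline the autoencoder is compared against, linear interpolation along the slice axis, can be sketched on a single in-plane voxel's through-plane profile:

```python
def linear_upsample_through_plane(slices, factor=2):
    """Classical baseline: linear interpolation of voxel intensities
    along the through-plane (slice) axis. `slices` is the 1-D profile
    of one in-plane voxel across slices."""
    out = []
    for i in range(len(slices) - 1):
        a, b = slices[i], slices[i + 1]
        for k in range(factor):
            t = k / factor
            out.append(a * (1 - t) + b * t)  # blend adjacent slices
    out.append(slices[-1])
    return out

# Hypothetical intensities across three thick slices, doubled
profile = [10.0, 20.0, 40.0]
upsampled = linear_upsample_through_plane(profile, factor=2)
```

Such interpolation can only blend neighboring slices; the learned approach instead exploits anatomical structure seen across a large training set, which is where its reported advantage comes from.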

    An automatic multi-tissue human fetal brain segmentation benchmark using the Fetal Tissue Annotation Dataset.

    It is critical to quantitatively analyse the developing human fetal brain in order to fully understand neurodevelopment in both normal fetuses and those with congenital disorders. To facilitate this analysis, automatic multi-tissue fetal brain segmentation algorithms are needed, which in turn requires open datasets of segmented fetal brains. Here we introduce a publicly available dataset of 50 manually segmented pathological and non-pathological fetal magnetic resonance brain volume reconstructions, across a range of gestational ages (20 to 33 weeks), into 7 different tissue categories (external cerebrospinal fluid, grey matter, white matter, ventricles, cerebellum, deep grey matter, brainstem/spinal cord). In addition, we quantitatively evaluate the accuracy of several automatic multi-tissue segmentation algorithms for the developing human fetal brain. Four research groups participated, submitting a total of 10 algorithms, demonstrating the benefits of the dataset for the development of automatic algorithms.

    Multi-view convolutional neural networks for automated ocular structure and tumor segmentation in retinoblastoma.

    In retinoblastoma, accurate segmentation of ocular structure and tumor tissue is important when working towards personalized treatment. This retrospective study serves to evaluate the performance of multi-view convolutional neural networks (MV-CNNs) for automated eye and tumor segmentation on MRI in retinoblastoma patients. Forty retinoblastoma and 20 healthy eyes from 30 patients were included in a train/test (N = 29 retinoblastoma, 17 healthy eyes) and independent validation (N = 11 retinoblastoma, 3 healthy eyes) set. Imaging was done using 3.0 T Fast Imaging Employing Steady-state Acquisition (FIESTA), T2-weighted and contrast-enhanced T1-weighted sequences. Sclera, vitreous humour, lens, retinal detachment and tumor were manually delineated on FIESTA images to serve as a reference standard. Volumetric and spatial performance were assessed by calculating the intra-class correlation (ICC) and Dice similarity coefficient (DSC). Additionally, the effects of multi-scale, sequences and data augmentation were explored. Optimal performance was obtained by using a three-level pyramid MV-CNN with FIESTA, T2 and T1c sequences and data augmentation. Eye and tumor volumetric ICC were 0.997 and 0.996, respectively. Median [interquartile range] DSC for eye, sclera, vitreous, lens, retinal detachment and tumor were 0.965 [0.950-0.975], 0.847 [0.782-0.893], 0.975 [0.930-0.986], 0.909 [0.847-0.951], 0.828 [0.458-0.962] and 0.914 [0.852-0.958], respectively. MV-CNN can be used to obtain accurate ocular structure and tumor segmentations in retinoblastoma.
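The volumetric ICC reported above is commonly computed with a two-way random-effects, absolute-agreement, single-measure model, ICC(2,1). A minimal sketch; the eye volumes are hypothetical:

```python
def icc_2_1(data):
    """ICC(2,1): two-way random-effects, absolute-agreement,
    single-measure intraclass correlation.
    data[i][j] = measurement of subject i by rater/method j."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)   # raters
    mse = (ss_total - msr * (n - 1) - msc * (k - 1)) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical eye volumes (cm^3): manual vs. automated segmentation
volumes = [[6.1, 6.0], [7.2, 7.3], [5.8, 5.9], [6.9, 6.8], [7.5, 7.4]]
icc = icc_2_1(volumes)
```

Values near 1 indicate that the automated volumes can substitute for the manual ones, which is what the reported ICCs of 0.997 and 0.996 convey.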