
    Multi-Atlas Segmentation using Partially Annotated Data: Methods and Annotation Strategies

    Multi-atlas segmentation is a widely used tool in medical image analysis, providing robust and accurate results by learning from annotated atlas datasets. However, the availability of fully annotated atlas images for training is limited by the time required for the labelling task. Segmentation methods requiring only a proportion of each atlas image to be labelled could therefore reduce the workload on the expert raters tasked with annotating atlas images. To address this issue, we first re-examine the labelling problem common to many existing approaches and formulate its solution as a Markov Random Field (MRF) energy minimisation problem on a graph connecting the atlases and the target image. This provides a unifying framework for multi-atlas segmentation. We then show how modifications to the graph configuration of the proposed framework enable the use of partially annotated atlas images, and we investigate different partial annotation strategies. The proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets for hippocampal and cardiac segmentation. Experiments were performed aiming (1) to recreate existing segmentation techniques within the proposed framework and (2) to demonstrate the potential of employing sparsely annotated atlas data for multi-atlas segmentation.
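    The MRF formulation can be sketched in miniature: a 1-D labelling energy with a data term and a Potts smoothness term, minimised by iterated conditional modes (ICM). The signal, label means, and weight `beta` below are illustrative assumptions, not the paper's actual graph construction or optimiser:

    ```python
    import numpy as np

    def icm_segment(intensities, label_means, beta=1.0, n_iter=10):
        """Minimise E(l) = sum_i (x_i - mu_{l_i})^2 + beta * sum_{i~j} [l_i != l_j]
        on a 1-D chain of pixels using iterated conditional modes (ICM)."""
        x = np.asarray(intensities, dtype=float)
        mu = np.asarray(label_means, dtype=float)
        # Initialise each pixel with the label whose mean is closest (unary term only).
        lab = np.argmin((x[:, None] - mu[None, :]) ** 2, axis=1)
        for _ in range(n_iter):
            for i in range(len(x)):
                unary = (x[i] - mu) ** 2
                pair = np.zeros_like(mu)
                if i > 0:  # Potts penalty against the left neighbour's label
                    pair += beta * (np.arange(len(mu)) != lab[i - 1])
                if i < len(x) - 1:  # and against the right neighbour's label
                    pair += beta * (np.arange(len(mu)) != lab[i + 1])
                lab[i] = np.argmin(unary + pair)
        return lab

    # A noisy two-region signal: the smoothness prior removes the isolated outlier.
    signal = [0.1, 0.0, 0.9, 0.1, 1.0, 1.1, 0.9, 1.0]
    print(icm_segment(signal, label_means=[0.0, 1.0], beta=0.5))  # → [0 0 0 0 1 1 1 1]
    ```

    With `beta=0` the result degenerates to per-pixel nearest-mean labelling; the pairwise term is what couples atlas and target nodes in the full graph formulation.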

    Pseudo-Random Streams for Distributed and Parallel Stochastic Simulations on GP-GPU

    Random number generation is a key element of stochastic simulations. It has been widely studied for sequential applications, enabling us to reliably use pseudo-random numbers in that case. Unfortunately, we cannot be so enthusiastic when dealing with parallel stochastic simulations. Many applications still neglect random stream parallelization, leading to potentially biased results. In particular, parallel execution platforms such as Graphics Processing Units (GPUs) add their own constraints to those of Pseudo-Random Number Generators (PRNGs) used in parallel. This results in a situation where potential biases can be combined with performance drops when parallelization of random streams has not been carried out rigorously. Here, we propose criteria guiding the design of good GPU-enabled PRNGs. We complement our discussion with a study of the techniques aiming to parallelize random streams correctly, in the context of GPU-enabled stochastic simulations.
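    The stream-parallelization principle can be illustrated with NumPy's seed-spawning facility together with Philox, a counter-based generator from the family commonly recommended for GPUs. This is a CPU sketch of independent per-worker streams, not the authors' GPU implementation:

    ```python
    import numpy as np

    # Spawn statistically independent child seeds from one root seed, so each
    # parallel worker (thread, process, or GPU block) draws from its own stream
    # instead of sharing or naively offsetting a single sequence.
    root = np.random.SeedSequence(20240615)
    children = root.spawn(4)  # one SeedSequence per worker

    # Philox is a counter-based generator: cheap to skip ahead and with small
    # per-stream state, which is why this family suits GPU execution well.
    streams = [np.random.Generator(np.random.Philox(child)) for child in children]

    draws = [rng.random(3) for rng in streams]
    # Different child streams produce different, uncorrelated sequences.
    print(all(not np.allclose(draws[0], d) for d in draws[1:]))  # → True
    ```

    Re-seeding a generator from the same child `SeedSequence` reproduces its stream exactly, which makes parallel simulations repeatable regardless of worker scheduling.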

    Learning-based quality control for cardiac MR images

    The effectiveness of a cardiovascular magnetic resonance (CMR) scan depends on the ability of the operator to correctly tune the acquisition parameters to the subject being scanned and on the potential occurrence of imaging artifacts, such as cardiac and respiratory motion. In clinical practice, a quality control step is performed by visual assessment of the acquired images; however, this procedure is strongly operator-dependent, cumbersome, and sometimes incompatible with the time constraints of clinical settings and large-scale studies. We propose a fast, fully automated, learning-based quality control pipeline for CMR images, specifically for short-axis image stacks. Our pipeline performs three important quality checks: 1) heart coverage estimation; 2) inter-slice motion detection; 3) image contrast estimation in the cardiac region. The pipeline uses a hybrid decision forest method—integrating both regression and structured classification models—to extract landmarks and probabilistic segmentation maps from both long- and short-axis images as a basis for the quality checks. The technique was tested on up to 3000 cases from the UK Biobank and on 100 cases from the UK Digital Heart Project, and validated against manual annotations and visual inspections performed by expert interpreters. The results show the capability of the proposed pipeline to correctly detect incomplete or corrupted scans (e.g., on UK Biobank, sensitivity and specificity of, respectively, 88% and 99% for heart coverage estimation and 85% and 95% for motion detection), allowing their exclusion from the analyzed dataset or the triggering of a new acquisition.
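    As a hedged illustration of one of the three checks, inter-slice motion can be flagged by tracking an in-plane heart position across the stack. The centroid-jump criterion and the 3 mm tolerance below are illustrative simplifications, not the paper's decision-forest method:

    ```python
    import numpy as np

    def flag_interslice_motion(centroids, tol_mm=3.0):
        """Flag a short-axis stack as motion-corrupted when the in-plane heart
        centroid jumps by more than tol_mm between adjacent slices.
        `centroids` is an (n_slices, 2) array of (x, y) positions in mm,
        e.g. derived from per-slice segmentation maps or landmarks."""
        c = np.asarray(centroids, dtype=float)
        jumps = np.linalg.norm(np.diff(c, axis=0), axis=1)  # adjacent-slice shifts
        return bool(np.any(jumps > tol_mm)), jumps

    # A well-aligned stack versus one whose third slice is displaced,
    # e.g. by an inconsistent breath-hold between slice acquisitions.
    good = [(50.0, 50.0), (50.5, 50.2), (51.0, 50.4), (51.5, 50.6)]
    bad = [(50.0, 50.0), (50.5, 50.2), (56.0, 54.0), (51.5, 50.6)]
    print(flag_interslice_motion(good)[0], flag_interslice_motion(bad)[0])  # → False True
    ```

    In practice the per-slice positions would come from the pipeline's landmark and segmentation outputs rather than being supplied by hand.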

    Development of microstructural and morphological cortical profiles in the neonatal brain

    Interruptions to neurodevelopment during the perinatal period may have long-lasting consequences. However, to be able to investigate deviations in the foundation of proper connectivity and functional circuits, we need a measure of how this architecture evolves in the typically developing brain. To this end, in a cohort of 241 term-born infants, we used magnetic resonance imaging to estimate cortical profiles based on morphometry and microstructure over the perinatal period (37-44 weeks postmenstrual age, PMA). Using the covariance of these profiles as a measure of inter-areal network similarity (morphometric similarity networks; MSN), we clustered these networks into distinct modules. The resulting modules were consistent and symmetric, corresponded to known functional distinctions, including sensory-motor, limbic, and association regions, and were spatially mapped onto known cytoarchitectonic tissue classes. Posterior regions became more morphometrically similar with increasing age, while peri-cingulate and medial temporal regions became more dissimilar. Network strength was associated with age: within-network similarity increased with age, suggesting emerging network distinction. These changes in cortical network architecture over an 8-week period are consistent with, and likely underpin, the highly dynamic processes occurring during this critical period. The resulting cortical profiles might provide a normative reference to investigate atypical early brain development.
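    The construction of a morphometric similarity network can be sketched as follows: z-score each metric across regions, correlate regional profiles, and cluster the resulting matrix into modules. The synthetic two-module data and the average-linkage choice are illustrative assumptions, not the study's data or exact method:

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    def morphometric_similarity(features):
        """features: (n_regions, n_metrics) matrix of per-region cortical measures
        (e.g. thickness, curvature, diffusion metrics). Returns the
        region-by-region Pearson correlation of z-scored profiles."""
        z = (features - features.mean(0)) / features.std(0)
        return np.corrcoef(z)

    rng = np.random.default_rng(0)
    # Two synthetic "modules" of 5 regions with opposite profiles over 6 metrics.
    a = rng.normal(0, 0.1, (5, 6)) + np.array([1, 0, 1, 0, 1, 0])
    b = rng.normal(0, 0.1, (5, 6)) + np.array([0, 1, 0, 1, 0, 1])
    msn = morphometric_similarity(np.vstack([a, b]))

    # Cluster the similarity network into modules: average linkage on 1 - r.
    dist = 1.0 - msn
    dist = np.clip((dist + dist.T) / 2.0, 0.0, None)  # enforce symmetry, no negatives
    np.fill_diagonal(dist, 0.0)
    modules = fcluster(linkage(squareform(dist), method="average"),
                       t=2, criterion="maxclust")
    print(modules)
    ```

    With real data, the recovered modules can then be compared against functional or cytoarchitectonic parcellations, as the study does.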

    The Developing Human Connectome Project: a minimal processing pipeline for neonatal cortical surface reconstruction

    The Developing Human Connectome Project (dHCP) seeks to create the first 4-dimensional connectome of early life. Understanding this connectome in detail may provide insights into normal as well as abnormal patterns of brain development. Following established best practices adopted by the WU-MINN Human Connectome Project (HCP), and pioneered by FreeSurfer, the project utilises cortical surface-based processing pipelines. In this paper, we propose a fully automated processing pipeline for structural Magnetic Resonance Imaging (MRI) of the developing neonatal brain. The proposed pipeline consists of a refined framework for cortical and sub-cortical volume segmentation, cortical surface extraction, and cortical surface inflation, specifically designed to address the considerable differences between adult and neonatal brains as imaged using MRI. Using the proposed pipeline, our results demonstrate that images collected from 465 subjects ranging from 28 to 45 weeks post-menstrual age (PMA) can be processed fully automatically, generating cortical surface models that are topologically correct and correspond well with manual evaluations of tissue boundaries in 85% of cases. The results improve on state-of-the-art neonatal tissue segmentation models, and significant errors were found in only 2% of cases, which corresponded to subjects with high motion. Downstream, these surfaces will enhance comparisons of functional and diffusion MRI datasets, supporting the modelling of emerging patterns of brain connectivity.

    The Developing Human Connectome Project Neonatal Data Release

    The Developing Human Connectome Project has created a large open science resource which provides researchers with data for investigating typical and atypical brain development across the perinatal period. It has collected 1228 multimodal magnetic resonance imaging (MRI) brain datasets from 1173 fetal and/or neonatal participants, together with collateral demographic, clinical, family, neurocognitive and genomic data. All subjects were studied in utero and/or soon after birth on a single MRI scanner using specially developed scanning sequences, which included novel motion-tolerant imaging methods. The project is now releasing a large set of neonatal data; fetal data will be described and released separately. This release includes scans from 783 infants, of whom 583 were healthy infants born at term; the remainder were preterm infants and infants at high risk of atypical neurocognitive development. Many infants were imaged more than once to provide longitudinal data, and the total number of datasets being released is 887. We describe the dHCP image acquisition and processing protocols, summarize the available imaging and collateral data, and provide information on how the data can be accessed.

    Robust aggregation for adaptive privacy preserving federated learning in healthcare

    Federated learning (FL) has enabled training models collaboratively from multiple data-owning parties without sharing their data. Given the privacy regulations governing patients' healthcare data, learning-based systems in healthcare can greatly benefit from privacy-preserving FL approaches. However, typical model aggregation methods in FL are sensitive to local model updates, which may lead to failure in learning a robust and accurate global model. In this work, we implement and evaluate different robust aggregation methods in FL applied to healthcare data. Furthermore, we show that such methods can detect and discard faulty or malicious local clients during training. We run two sets of experiments using two real-world healthcare datasets for training medical diagnosis classification tasks. Each dataset is used to simulate the performance of three different robust FL aggregation strategies when facing different poisoning attacks. The results show that privacy-preserving methods can be successfully applied alongside Byzantine-robust aggregation techniques. In particular, we observed that using differential privacy (DP) did not significantly impact the final learning convergence of the different aggregation strategies.
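    The benefit of robust aggregation can be sketched with the coordinate-wise median, one simple Byzantine-robust alternative to plain federated averaging. The client updates and the poisoned vector below are illustrative, not drawn from the paper's experiments:

    ```python
    import numpy as np

    def fedavg(updates):
        """Plain federated averaging: coordinate-wise mean of client updates."""
        return np.mean(updates, axis=0)

    def median_aggregate(updates):
        """Byzantine-robust alternative: coordinate-wise median, which tolerates
        a minority of arbitrarily corrupted client updates."""
        return np.median(updates, axis=0)

    # Four honest clients near the true update, plus one malicious client
    # sending a large poisoned vector (a simple model-poisoning attack).
    honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]),
              np.array([0.9, 1.1]), np.array([1.0, 1.05])]
    poisoned = np.array([100.0, -100.0])
    updates = honest + [poisoned]

    print(fedavg(updates))            # dragged far from [1, 1] by the attacker
    print(median_aggregate(updates))  # → [1. 1.] — stays at the honest consensus
    ```

    The same comparison extends to trimmed means and update-screening schemes, which additionally allow the faulty client to be identified and discarded.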

    Improved capillary electrophoresis determination of carbohydrate-deficient transferrin including on-line immunosubtraction.

    The instrumental analysis of carbohydrate-deficient transferrin (CDT), a recognized marker of chronic alcohol abuse, is most commonly carried out by high-performance liquid chromatography (HPLC) or capillary zone electrophoresis (CZE). Between these two techniques, CZE shows higher efficiency and productivity, but is often reported to be inferior to HPLC in terms of selectivity, because of a less specific ultraviolet detection wavelength than HPLC. On these grounds, the present work was aimed at the development of an improved CZE method for CDT determination, including an on-line immunosubtraction step specifically aimed at enhancing the analytical specificity of the CZE determination. The analytical conditions were as follows: uncoated fused silica capillary, 30 µm × 60 cm (L = 50 cm to detector); running buffer, 100 mmol/L borate and 6 mmol/L DAB (1,4-diaminobutane), pH 8.3; voltage, 30 kV; temperature, 25 °C; detection, 200 nm. Under the described CZE conditions, a baseline separation between all the CDT-related peaks was achieved, with good analytical performance in terms of both precision and accuracy. In order to achieve unequivocal recognition of the CDT peaks, an in-capillary immunosubtraction step was included by loading a plug of anti-human transferrin antibody solution after the sample plug. This analytical approach was applied successfully to recognize CDT peaks in the presence of potential interferences.