Learning under Distributed Weak Supervision
The availability of training data for supervision is a frequently encountered bottleneck in medical image analysis. While such annotations are typically established by a clinical expert rater, the growing volume of acquired imaging data renders traditional pixel-wise segmentation increasingly infeasible. In this paper, we examine the use of a crowdsourcing platform to distribute super-pixel weak annotation tasks and collect such annotations from a crowd of non-expert raters. The crowd annotations are subsequently used to train a fully convolutional neural network for fetal brain segmentation in T2-weighted MR images. Using this approach we report encouraging results compared to highly targeted, fully supervised methods, potentially addressing a frequent problem impeding image analysis research.
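As a rough intuition for how weak per-superpixel crowd labels might be turned into pixel-wise training masks, here is a minimal sketch; the majority-vote fusion rule, the 0.5 threshold, and all names are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def fuse_crowd_labels(superpixel_map, crowd_votes, threshold=0.5):
    """Fuse per-superpixel votes from multiple non-expert raters
    into a pixel-wise training mask by majority vote.

    superpixel_map : (H, W) int array assigning each pixel a superpixel id
    crowd_votes    : (n_raters, n_superpixels) binary array, 1 = "brain"
    """
    # Fraction of raters labelling each superpixel as foreground.
    consensus = crowd_votes.mean(axis=0)
    # Superpixels passing the vote threshold become foreground.
    fg_superpixels = consensus >= threshold
    # Broadcast the superpixel-level decision back to pixel level.
    return fg_superpixels[superpixel_map].astype(np.uint8)

# Example: 3 raters voting on 4 superpixels of a toy 2x4 image.
spx = np.array([[0, 0, 1, 1],
                [2, 2, 3, 3]])
votes = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 1],
                  [1, 1, 0, 0]])
mask = fuse_crowd_labels(spx, votes)  # foreground on superpixels 0 and 1
```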
ARIA: On the Interaction Between Architectures, Initialization and Aggregation Methods for Federated Visual Classification
Federated Learning (FL) is a collaborative training paradigm that allows privacy-preserving learning of cross-institutional models by eliminating the exchange of sensitive data and instead relying on the exchange of model parameters between the clients and a server. Despite individual studies on how client models are aggregated and, more recently, on the benefits of ImageNet pre-training, little is understood about the effect of the architecture chosen for the federation, or about how these elements interconnect. To this end, we conduct the first joint ARchitecture-Initialization-Aggregation study and benchmark ARIAs across a range of medical image classification tasks. We find that, contrary to current practice, ARIA elements have to be chosen together to achieve the best possible performance. Our results also shed light on good choices for each element depending on the task, the effect of normalization layers, and the utility of SSL pre-training, pointing to potential directions for designing FL-specific architectures and training pipelines.
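For readers unfamiliar with the aggregation element of such a study, the sketch below shows FedAvg, the standard baseline aggregation rule in FL; the abstract does not state which aggregation methods were benchmarked, so this is purely illustrative.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Standard FedAvg aggregation: average each parameter tensor
    across clients, weighted by local dataset size.

    client_weights : list (one entry per client) of lists of np.ndarray
    client_sizes   : list of int, local training-set sizes
    """
    total = float(sum(client_sizes))
    coeffs = [n / total for n in client_sizes]
    # Weighted sum of each layer's tensors across all clients.
    return [
        sum(c * layers[i] for c, layers in zip(coeffs, client_weights))
        for i in range(len(client_weights[0]))
    ]

# Example: two clients, one-layer "models", client 0 has twice the data.
w = fed_avg([[np.ones(3)], [np.zeros(3)]], [200, 100])
print(w[0])  # -> [0.667 0.667 0.667]
```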
Pseudo-Random Streams for Distributed and Parallel Stochastic Simulations on GP-GPU
Random number generation is a key element of stochastic simulations. It has been widely studied for sequential applications, enabling us to use pseudo-random numbers reliably in that setting. Unfortunately, we cannot be so confident when dealing with parallel stochastic simulations. Many applications still neglect random stream parallelization, leading to potentially biased results. In particular, parallel execution platforms such as Graphics Processing Units (GPUs) add their own constraints to those of the Pseudo-Random Number Generators (PRNGs) used in parallel. This results in a situation where potential biases can be combined with performance drops when the parallelization of random streams has not been carried out rigorously. Here, we propose criteria guiding the design of good GPU-enabled PRNGs. We complement our discussion with a study of techniques for parallelizing random streams correctly, in the context of GPU-enabled stochastic simulations.
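One technique in this family is to give each parallel worker its own statistically independent substream. The sketch below shows this on the host using NumPy; Philox is one counter-based generator of the kind that maps well onto GPU threads, though the paper's own criteria and generator choices may differ.

```python
import numpy as np

# Derive independent substreams from one root seed: SeedSequence.spawn
# guarantees non-overlapping, well-distributed child seeds, so each
# parallel worker can draw safely from its own stream.
root = np.random.SeedSequence(20240101)
child_seeds = root.spawn(8)  # 8 independent substreams
streams = [np.random.Generator(np.random.Philox(s)) for s in child_seeds]

# Each "worker" draws from its own stream with no overlap.
draws = [g.random(4) for g in streams]
print(draws[0], draws[1])
```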
Learning-based quality control for cardiac MR images
The effectiveness of a cardiovascular magnetic resonance (CMR) scan depends on the operator's ability to correctly tune the acquisition parameters to the subject being scanned, and on the potential occurrence of imaging artifacts such as those caused by cardiac and respiratory motion. In clinical practice, a quality control step is performed by visual assessment of the acquired images; however, this procedure is strongly operator-dependent, cumbersome, and sometimes incompatible with the time constraints of clinical settings and large-scale studies. We propose a fast, fully automated, learning-based quality control pipeline for CMR images, specifically for short-axis image stacks. Our pipeline performs three important quality checks: 1) heart coverage estimation; 2) inter-slice motion detection; 3) image contrast estimation in the cardiac region. The pipeline uses a hybrid decision forest method, integrating both regression and structured classification models, to extract landmarks and probabilistic segmentation maps from both long- and short-axis images as a basis for the quality checks. The technique was tested on up to 3000 cases from the UK Biobank and on 100 cases from the UK Digital Heart Project, and validated against manual annotations and visual inspections performed by expert interpreters. The results show the capability of the proposed pipeline to correctly detect incomplete or corrupted scans (e.g., on UK Biobank, sensitivity and specificity of 88% and 99%, respectively, for heart coverage estimation and 85% and 95% for motion detection), allowing their exclusion from the analysed dataset or the triggering of a new acquisition.
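As a toy illustration of the first check, heart coverage can be phrased as a geometric test: does the short-axis stack span from apex to base? The landmark inputs, names, and threshold geometry below are assumptions for illustration; the paper's actual check is built on decision-forest landmark detection.

```python
import numpy as np

def heart_coverage_ok(apex_z, base_z, slice_zs, slice_thickness):
    """Toy heart-coverage check: verify that the short-axis (SAX)
    slice stack spans from below the apex to above the base.

    apex_z, base_z : through-plane positions (mm) of apex/base landmarks,
                     e.g. detected on long-axis views
    slice_zs       : through-plane positions (mm) of the SAX slices
    """
    lo = min(slice_zs) - slice_thickness / 2.0
    hi = max(slice_zs) + slice_thickness / 2.0
    return lo <= min(apex_z, base_z) and hi >= max(apex_z, base_z)

# Example: a 10-slice stack with 8 mm spacing covering apex at 5 mm
# and base at 72 mm -> coverage is adequate.
print(heart_coverage_ok(5.0, 72.0, [i * 8.0 for i in range(10)], 8.0))
```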
Development of microstructural and morphological cortical profiles in the neonatal brain
Interruptions to neurodevelopment during the perinatal period may have long-lasting consequences. However, to investigate deviations in the foundations of proper connectivity and functional circuits, we need a measure of how this architecture evolves in the typically developing brain. To this end, in a cohort of 241 term-born infants, we used magnetic resonance imaging to estimate cortical profiles based on morphometry and microstructure over the perinatal period (37-44 weeks postmenstrual age, PMA). Using the covariance of these profiles as a measure of inter-areal network similarity (morphometric similarity networks; MSN), we clustered these networks into distinct modules. The resulting modules were consistent and symmetric, corresponded to known functional distinctions, including sensory-motor, limbic, and association regions, and mapped spatially onto known cytoarchitectonic tissue classes. Posterior regions became more morphometrically similar with increasing age, while peri-cingulate and medial temporal regions became more dissimilar. Network strength was associated with age: within-network similarity increased with age, suggesting emerging network distinction. These changes in cortical network architecture over an 8-week period are consistent with, and likely underpin, the highly dynamic processes occurring during this critical period. The resulting cortical profiles might provide a normative reference for investigating atypical early brain development.
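The core MSN construction, similarity between per-region feature profiles, can be sketched in a few lines; the use of Pearson correlation and z-scored features here is an illustrative assumption rather than the study's exact preprocessing.

```python
import numpy as np

def morphometric_similarity_network(features):
    """Build a morphometric similarity network (MSN): each region is
    described by a profile of morphometric/microstructural features,
    and edges are correlations between regional profiles.

    features : (n_regions, n_features) array, one profile per region
    """
    # z-score each feature across regions so units are comparable.
    z = (features - features.mean(axis=0)) / features.std(axis=0)
    msn = np.corrcoef(z)          # (n_regions, n_regions) similarity
    np.fill_diagonal(msn, 0.0)    # drop self-similarity
    return msn

# Example: 4 regions described by 5 features each.
rng = np.random.default_rng(0)
print(morphometric_similarity_network(rng.normal(size=(4, 5))))
```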
The Developing Human Connectome Project: a minimal processing pipeline for neonatal cortical surface reconstruction
The Developing Human Connectome Project (dHCP) seeks to create the first 4-dimensional connectome of early life. Understanding this connectome in detail may provide insights into normal as well as abnormal patterns of brain development. Following established best practices adopted by the WU-MINN Human Connectome Project (HCP), and pioneered by FreeSurfer, the project utilises cortical surface-based processing pipelines. In this paper, we propose a fully automated processing pipeline for structural Magnetic Resonance Imaging (MRI) of the developing neonatal brain. The proposed pipeline consists of a refined framework for cortical and sub-cortical volume segmentation, cortical surface extraction, and cortical surface inflation, specifically designed to address the considerable differences between adult and neonatal brains as imaged using MRI. Using the proposed pipeline, we demonstrate that images collected from 465 subjects ranging from 28 to 45 weeks post-menstrual age (PMA) can be processed fully automatically, generating cortical surface models that are topologically correct and correspond well with manual evaluations of tissue boundaries in 85% of cases. Results improve on state-of-the-art neonatal tissue segmentation models, and significant errors were found in only 2% of cases, corresponding to subjects with high motion. Downstream, these surfaces will enhance comparisons of functional and diffusion MRI datasets, supporting the modelling of emerging patterns of brain connectivity.
The Developing Human Connectome Project Neonatal Data Release
The Developing Human Connectome Project has created a large open science resource which provides researchers with data for investigating typical and atypical brain development across the perinatal period. It has collected 1228 multimodal magnetic resonance imaging (MRI) brain datasets from 1173 fetal and/or neonatal participants, together with collateral demographic, clinical, family, neurocognitive and genomic data. All subjects were studied in utero and/or soon after birth on a single MRI scanner using specially developed scanning sequences which included novel motion-tolerant imaging methods. The project is now releasing a large set of neonatal data; fetal data will be described and released separately. This release includes scans from 783 infants: 583 healthy infants born at term, as well as preterm infants and infants at high risk of atypical neurocognitive development. Many infants were imaged more than once to provide longitudinal data, and the total number of datasets being released is 887. We describe the dHCP image acquisition and processing protocols, summarize the available imaging and collateral data, and provide information on how the data can be accessed.
Comparison of brain networks with unknown correspondences
Graph theory has drawn a lot of attention in the field of neuroscience during the last decade, mainly due to the abundance of tools it provides for exploring the interactions of elements in a complex network like the brain. The local and global organization of a brain network can shed light on mechanisms of complex cognitive functions, while disruptions within the network can be linked to neurodevelopmental disorders. In this context, the construction of a representative brain network for each individual is critical for further analysis. Additionally, graph comparison is an essential step for inference and classification analyses on brain graphs. In this work we explore a method based on graph edit distance for evaluating graph similarity when correspondences between network elements are unknown due to different underlying subdivisions of the brain. We test this method on 30 unrelated subjects as well as 40 twin pairs and show that it can accurately reflect the higher similarity between two related networks compared to unrelated ones, while also identifying node correspondences.
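As a hedged illustration (not the authors' exact algorithm), off-the-shelf graph edit distance both scores graph similarity and recovers a node correspondence, which is the property exploited when two brain networks use different parcellations; NetworkX exposes this directly.

```python
import networkx as nx

# Two toy "brain networks" with no known node correspondence.
g1 = nx.Graph([(0, 1), (1, 2), (2, 0)])
g2 = nx.Graph([("a", "b"), ("b", "c")])

# Minimal total cost of node/edge edits turning g1 into g2.
print(nx.graph_edit_distance(g1, g2))

# Recover one optimal node mapping alongside its cost.
node_map, edge_map, cost = next(nx.optimize_edit_paths(g1, g2))
print(node_map, cost)
```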
Parallel Pseudo-Random Number Generation for High-Performance Computing: Application to GP-GPU Processors
2CP: decentralized protocols to transparently evaluate contributivity in blockchain federated learning environments
Federated Learning harnesses data from multiple sources to build a single model. While the initial model might belong solely to the actor bringing it to the network for training, determining the ownership of the trained model resulting from Federated Learning remains an open question. In this paper we explore how blockchains (in particular Ethereum) can be used to determine the evolving ownership of a model trained with Federated Learning. First, we use the step-by-step evaluation metric to assess the relative contributivities of participants in a Federated Learning process. Next, we introduce 2CP, a framework comprising two novel protocols for blockchained Federated Learning, both of which reward contributors with shares in the final model based on their relative contributivity. The Crowdsource Protocol allows an actor to bring a model forward for training and to use their own data to evaluate the contributions made to it. Potential trainers are guaranteed a fair share of the resulting model, even in a trustless setting. The Consortium Protocol gives trainers the same guarantee even when no party owns the initial model and no evaluator is available. We conduct experiments with the MNIST dataset showing that both protocols yield sound contributivity scores, rewarding larger datasets with greater shares in the model. Our experiments also show the need to pair 2CP with a robust model aggregation mechanism to discard low-quality inputs arising from model poisoning attacks.
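To make "step-by-step evaluation" concrete, here is a toy reconstruction of the idea: per round, credit each client with the improvement its update alone would have produced over the previous global model, then normalise credits into shares. All function names and the clipping rule are assumptions, not the exact 2CP metric.

```python
import numpy as np

def step_by_step_contributivity(global_losses, client_losses):
    """Toy step-by-step evaluation of relative contributivity.

    global_losses : (rounds + 1,) validation loss of the global model
                    before each round (index 0 = initial model)
    client_losses : (rounds, n_clients) validation loss after applying
                    each client's update in isolation
    """
    # Marginal improvement of each client's update, clipped at zero so
    # harmful updates earn nothing rather than negative shares.
    gains = np.maximum(global_losses[:-1, None] - client_losses, 0.0)
    totals = gains.sum(axis=0)
    return totals / totals.sum()  # normalised shares in the final model

# Example: 2 rounds, 3 clients; client 0 improves the model the most
# and therefore earns the largest share.
g = np.array([1.0, 0.7, 0.5])
c = np.array([[0.6, 0.8, 0.9],
              [0.4, 0.6, 0.65]])
print(step_by_step_contributivity(g, c))  # -> [0.609 0.261 0.130]
```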