
    SFHarmony: Source Free Domain Adaptation for Distributed Neuroimaging Analysis

    To represent the biological variability of clinical neuroimaging populations, it is vital to be able to combine data across scanners and studies. However, different MRI scanners produce images with different characteristics, resulting in a domain shift known as the 'harmonisation problem'. Additionally, neuroimaging data are inherently personal in nature, leading to data privacy concerns when sharing them. To overcome these barriers, we propose an unsupervised Source-Free Domain Adaptation (SFDA) method, SFHarmony. By modelling the imaging features as a Gaussian Mixture Model and minimising an adapted Bhattacharyya distance between the source and target features, we create a model that performs well on the target data whilst having a shared feature representation across the data domains, without needing access to the source data or target labels for adaptation. We demonstrate the performance of our method on simulated and real domain shifts, showing that the approach is applicable to classification, segmentation and regression tasks with no changes to the algorithm. Our method outperforms existing SFDA approaches across a range of realistic data scenarios, demonstrating its potential utility for MRI harmonisation and general SFDA problems. Our code is available at https://github.com/nkdinsdale/SFHarmony.
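The abstract's adapted Bhattacharyya distance is not specified here, but as a rough illustration of the underlying quantity the method minimises, the standard closed form between two univariate Gaussian components can be written as follows (a sketch only; the function name and the 1-D restriction are illustrative, not from the paper):

```python
import math

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    # Closed-form Bhattacharyya distance between two 1-D Gaussians:
    # D_B = (mu1 - mu2)^2 / (4 (var1 + var2))
    #       + 0.5 * ln((var1 + var2) / (2 sqrt(var1 * var2)))
    term_mean = 0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
    term_spread = 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2)))
    return term_mean + term_spread
```

Identical distributions give a distance of zero, so driving a quantity like this down per mixture component pulls the target feature statistics towards the source statistics without ever touching the source data itself.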

    Self-Supervised Ultrasound to MRI Fetal Brain Image Synthesis

    Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the developing brain but is not suitable for second-trimester anomaly screening, for which ultrasound (US) is employed. Although expert sonographers are adept at reading US images, MR images, which closely resemble anatomical images, are much easier for non-experts to interpret. Thus, in this paper, we propose to generate MR-like images directly from clinical US images. Such a capability is also potentially useful in medical image analysis, for instance for automatic US-MRI registration and fusion. The proposed model is end-to-end trainable and self-supervised, requiring no external annotations. Specifically, based on the assumption that the US and MRI data share a similar anatomical latent space, we first utilise a network to extract the shared latent features, which are then used for MRI synthesis. Since paired data are unavailable for our study (and rare in practice), pixel-level constraints are infeasible to apply. We instead propose to enforce the distributions to be statistically indistinguishable, by adversarial learning in both the image domain and the feature space. To regularise the anatomical structures between US and MRI during synthesis, we further propose an adversarial structural constraint. A new cross-modal attention technique is proposed to exploit non-local spatial information, by encouraging multi-modal knowledge fusion and propagation. We extend the approach to the case where 3D auxiliary information (e.g., 3D neighbours and a 3D location index) from volumetric data is also available, and show that this improves image synthesis. The proposed approach is evaluated quantitatively and qualitatively against real fetal MR images and other synthesis approaches, demonstrating the feasibility of synthesising realistic MR images. Published in IEEE Transactions on Medical Imaging.
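The paper's cross-modal attention operates on learned feature maps of the two modalities; purely to illustrate the attention primitive it builds on, here is a minimal scaled dot-product sketch in plain Python, where US-derived queries attend over MRI-derived keys and values (function and variable names are illustrative, not from the paper):

```python
import math

def cross_modal_attention(queries, keys, values):
    # Minimal scaled dot-product attention: each query vector (e.g. a US
    # feature) is matched against all key vectors (e.g. MRI features), and
    # the output is the softmax-weighted sum of the corresponding values.
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)                      # subtract max for stability
        exp = [math.exp(s - m) for s in scores]
        z = sum(exp)
        weights = [e / z for e in exp]
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

With a single key, the output is exactly that key's value; with several equally scored keys, it is their average, which is the non-local pooling behaviour the abstract refers to.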

    CheckList [assistive memory system]

    There are countless times when cellphones, wallets, or keys have been forgotten at home, in the office, or on public transit. Our solution to this problem is the CheckList. With this handy device, anyone will be able to tag an item and add it onto their CheckList. Before walking out of any place, the CheckList will detect whether or not you have everything you need. Simply press the handy 'Check' button and the CheckList will inform you if anything is missing. Forgetting will be a thing of the past. Just remember the CheckList.

    Normative spatiotemporal fetal brain maturation with satisfactory development at 2 years

    Maturation of the human fetal brain should follow precisely scheduled structural growth and folding of the cerebral cortex for optimal postnatal function [1]. We present a normative digital atlas of fetal brain maturation based on a prospective international cohort of healthy pregnant women [2], selected using World Health Organization recommendations for growth standards [3]. Their fetuses were accurately dated in the first trimester, with satisfactory growth and neurodevelopment from early pregnancy to 2 years of age [4,5]. The atlas was produced using 1,059 optimal-quality, three-dimensional ultrasound brain volumes from 899 of the fetuses and an automated analysis pipeline [6-8]. The atlas corresponds structurally to published magnetic resonance images [9], but with finer anatomical detail in deep grey matter. Between-site variability represented less than 8.0% of the total variance of all brain measures, supporting the pooling of data from the eight study sites to produce patterns of normative maturation. We have thereby generated an average representation of each cerebral hemisphere between 14 and 31 weeks' gestation, with quantification of intracranial volume variability and growth patterns. Emergent asymmetries were detectable from as early as 14 weeks, with peak asymmetries in regions associated with language development and functional lateralization between 20 and 26 weeks' gestation. These patterns were validated in 1,487 three-dimensional brain volumes from 1,295 different fetuses in the same cohort. We provide a unique spatiotemporal benchmark of fetal brain maturation from a large cohort with normative postnatal growth and neurodevelopment.
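The "less than 8.0% of the total variance" figure is the kind of number produced by a one-way variance decomposition of a brain measure over study sites. A minimal sketch of such a calculation, assuming a simple between/total sum-of-squares decomposition (the function name and this exact formulation are my own, not taken from the paper):

```python
def between_site_variance_share(site_measures):
    # site_measures: dict mapping site name -> list of values of one brain
    # measure. Returns SS_between / SS_total, the fraction of total variance
    # attributable to differences between site means.
    all_vals = [v for vals in site_measures.values() for v in vals]
    grand = sum(all_vals) / len(all_vals)
    ss_total = sum((v - grand) ** 2 for v in all_vals)
    ss_between = sum(len(vals) * ((sum(vals) / len(vals)) - grand) ** 2
                     for vals in site_measures.values())
    return ss_between / ss_total
```

A value near zero means the sites are statistically interchangeable for that measure, which is the justification given for pooling the eight sites.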

    Machine learning to assess the fetal brain from ultrasound images

    Obstetric care decisions fundamentally rely upon accurate estimation of gestational age (GA). Ultrasound (US)-based measurements provide reliable estimates of GA if performed early in pregnancy. However, in low-income settings, the lack of appropriately trained sonographers and the tendency for women to present for care late in pregnancy are barriers to the use of US for dating purposes. In this thesis, we propose to exploit sonographic image patterns associated with dynamic fetal brain development to predict GA. We designed an algorithm which automatically estimates GA from a US scan collected at a single visit, thereby enabling clinically useful estimates of GA to be made even in the third trimester of pregnancy: a period complicated by biological variation and unreliable size-based estimates. The presented model was conceived on the basis that fetal brain development follows a precise spatiotemporal pattern, with folds emerging and disappearing on the surface of the brain (cerebral cortex) at fixed time points during pregnancy. This timing is so precise that post-mortem neuroanatomical and MRI evidence suggests that the 'developmental maturation' of the fetal brain may be a better predictor of GA than traditional size-based estimates. We capitalize on these age-related patterns to develop, for the first time, a unified model which combines sonographic image features and clinical measurements to predict GA and brain maturation. The framework benefits from a manifold surface representation of the fetal head which delineates the inner skull boundary and serves as a common coordinate system based on cranial position. This allows for fast and efficient sampling of anatomically corresponding brain regions to achieve like-for-like structural comparison of different developmental stages.
    Bespoke features capture neurosonographic patterns in 3D images and, using a regression forest classifier, we characterize structural brain development both spatially and temporally to capture the natural variation existing in a healthy population (n=448) over an age range of active brain maturation (18 to 34 weeks). Our GA prediction results on a high-risk clinical dataset (n=187) strongly correlate with true GA (r=0.98, accurate to within ±6.10 days), confirming the link between maturational progression and neurosonographic activity observable across gestation. Our model also outperforms current clinical methods, particularly in the third trimester. Through feature selection, the model successfully identified regional biomarkers of neurodevelopmental progression over gestation. Guided by these regions, we present a novel approach for defining and testing hypotheses associated with neuropathological deviations.
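The r=0.98 agreement between predicted and true GA is a Pearson correlation coefficient; for reference, it can be computed from paired predictions and ground-truth ages as follows (plain-Python sketch, names my own):

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation between paired samples, e.g. predicted vs true GA:
    # r = cov(x, y) / (std(x) * std(y)), computed from centred sums.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

An r close to 1 indicates predicted GA tracks true GA almost linearly, which is what supports the claim that maturational appearance is a usable dating signal.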

    Data for paper 'Learning to segment key clinical anatomical structures in fetal neurosonography informed by a region-based descriptor'

    Paper and statistical data for the journal article: Ruobing Huang, Ana Namburete, Alison Noble, "Learning to segment key clinical anatomical structures in fetal neurosonography informed by a region-based descriptor," J. Med. Imag. 5(1), 014007 (2018), doi: 10.1117/1.JMI.5.1.014007. Future research can therefore compare against these results.

    FedHarmony: Unlearning Scanner Bias with Distributed Data

    The ability to combine data across scanners and studies is vital for neuroimaging, to increase both statistical power and the representation of biological variability. However, combining datasets across sites leads to two challenges: first, an increase in undesirable non-biological variance due to scanner and acquisition differences - the harmonisation problem - and second, data privacy concerns due to the inherently personal nature of medical imaging data, meaning that sharing them across sites may risk violating privacy laws. To overcome these restrictions, we propose FedHarmony: a harmonisation framework operating in the federated learning paradigm. We show that to remove the scanner-specific effects, we only need to share the mean and standard deviation of the learned features, helping to protect individual subjects' privacy. We demonstrate our approach across a range of realistic data scenarios, using real multi-site data from the ABIDE dataset, thus showing the potential utility of our method for MRI harmonisation across studies. Our code is available at https://github.com/nkdinsdale/FedHarmony. Accepted to MICCAI 2022.
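A toy sketch of the privacy model described, in which each site shares only the mean and standard deviation of its learned features and a peer uses those moments to align its own features (the 1-D setting and all names are illustrative; the real method operates on deep network features):

```python
def site_summary(features):
    # The only statistics a site shares: the mean and standard deviation of
    # its learned features. No individual subject's feature leaves the site.
    n = len(features)
    mean = sum(features) / n
    var = sum((f - mean) ** 2 for f in features) / n
    return mean, var ** 0.5

def align_to_reference(features, ref_mean, ref_std):
    # Shift and scale local features so their moments match a reference
    # site's shared (mean, std), removing the scanner-specific offset.
    mean, std = site_summary(features)
    return [(f - mean) / std * ref_std + ref_mean for f in features]
```

After alignment, the local features have the reference site's first two moments, which is the sense in which the scanner effect is "unlearned" here without pooling raw data.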

    STAMP: Simultaneous Training and Model Pruning for low data regimes in medical image segmentation

    Acquisition of high-quality manual annotations is vital for the development of segmentation algorithms. However, creating them requires a substantial amount of expert time and knowledge. Large numbers of labels are required to train convolutional neural networks because of the vast number of parameters that must be learned during optimisation. Here, we develop the STAMP algorithm to allow the simultaneous training and pruning of a UNet architecture for medical image segmentation, with targeted channel-wise dropout to make the network robust to the pruning. We demonstrate the technique across segmentation tasks and imaging modalities. We then show that, through online pruning, we can train networks with much higher performance than equivalent standard UNet models while reducing their size by more than 85% in terms of parameters. This has the potential to allow networks to be trained directly on datasets where very few labels are available.
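STAMP's exact pruning criterion is not detailed in the abstract; as a stand-in illustration of channel pruning, a common magnitude-based rule removes the fraction of channels with the smallest L1 weight norm (function name and criterion are assumptions, not the paper's method):

```python
def prune_channels(channel_weights, fraction):
    # channel_weights: list of per-channel weight lists.
    # Drop the `fraction` of channels with the smallest L1 norm, the usual
    # proxy for "least important"; return the surviving channels in order.
    norms = sorted((sum(abs(w) for w in ws), i)
                   for i, ws in enumerate(channel_weights))
    n_drop = int(len(channel_weights) * fraction)
    dropped = {i for _, i in norms[:n_drop]}
    return [ws for i, ws in enumerate(channel_weights) if i not in dropped]
```

Applied repeatedly between training steps, a rule like this yields the "online pruning" schedule the abstract describes, shrinking the UNet while it trains rather than after.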
