30 research outputs found

    SFHarmony: Source Free Domain Adaptation for Distributed Neuroimaging Analysis

    Get PDF
    To represent the biological variability of clinical neuroimaging populations, it is vital to be able to combine data across scanners and studies. However, different MRI scanners produce images with different characteristics, resulting in a domain shift known as the 'harmonisation problem'. Additionally, neuroimaging data is inherently personal in nature, leading to data privacy concerns when sharing the data. To overcome these barriers, we propose an Unsupervised Source-Free Domain Adaptation (SFDA) method, SFHarmony. Through modelling the imaging features as a Gaussian Mixture Model and minimising an adapted Bhattacharyya distance between the source and target features, we can create a model that performs well for the target data whilst having a shared feature representation across the data domains, without needing access to the source data for adaptation or target labels. We demonstrate the performance of our method on simulated and real domain shifts, showing that the approach is applicable to classification, segmentation and regression tasks, requiring no changes to the algorithm. Our method outperforms existing SFDA approaches across a range of realistic data scenarios, demonstrating the potential utility of our approach for MRI harmonisation and general SFDA problems. Our code is available at https://github.com/nkdinsdale/SFHarmony.
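    The following is a minimal, illustrative PyTorch sketch of this kind of distribution-matching objective, assuming the features are summarised by univariate Gaussian mixture components with weights: the closed-form Bhattacharyya distance between Gaussians is combined into a weighted per-component loss. The weighting scheme and the exact form of SFHarmony's adapted distance are assumptions here, not the published method.

```python
import torch

def bhattacharyya_gaussian(mu1, var1, mu2, var2, eps=1e-6):
    # Closed-form Bhattacharyya distance between two univariate Gaussians.
    var_sum = var1 + var2 + eps
    mean_term = 0.25 * (mu1 - mu2) ** 2 / var_sum
    var_term = 0.5 * torch.log(var_sum / (2.0 * torch.sqrt(var1 * var2) + eps))
    return mean_term + var_term

def gmm_alignment_loss(src_mu, src_var, src_w, tgt_mu, tgt_var, tgt_w):
    # Hypothetical proxy: component-wise distances averaged with the mixture weights.
    d = bhattacharyya_gaussian(src_mu, src_var, tgt_mu, tgt_var)
    return (0.5 * (src_w + tgt_w) * d).sum()
```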

    Prototype learning for explainable brain age prediction

    Get PDF
    The lack of explainability of deep learning models limits the adoption of such models in clinical practice. Prototype-based models can provide inherently explainable predictions, but these have predominantly been designed for classification tasks, despite many important tasks in medical imaging being continuous regression problems. Therefore, in this work, we present ExPeRT: an explainable prototype-based model specifically designed for regression tasks. Our proposed model makes a sample prediction from the distances to a set of learned prototypes in latent space, using a weighted mean of prototype labels. The distances in latent space are regularized to be relative to label differences, and each of the prototypes can be visualized as a sample from the training set. The image-level distances are further constructed from patch-level distances, in which the patches of both images are structurally matched using optimal transport. This provides an example-based explanation with patch-level detail at inference time. We demonstrate our proposed model for brain age prediction on two imaging datasets: adult MR and fetal ultrasound. Our approach achieved state-of-the-art prediction performance while providing insight into the model’s reasoning process.
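    As a sketch of the core prediction rule, the snippet below computes a distance-weighted mean of prototype labels from latent embeddings. The softmax-over-negative-distance weighting and the temperature are assumptions, and ExPeRT's patch-level optimal-transport matching is omitted.

```python
import torch

def prototype_regression(latent, prototypes, prototype_labels, temperature=1.0):
    # latent: (B, D) sample embeddings; prototypes: (K, D); prototype_labels: (K,)
    dists = torch.cdist(latent, prototypes)               # (B, K) pairwise distances
    weights = torch.softmax(-dists / temperature, dim=1)  # closer prototypes weigh more
    return weights @ prototype_labels                     # (B,) weighted-mean prediction
```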

    Self-Supervised Ultrasound to MRI Fetal Brain Image Synthesis

    Full text link
    Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the developing brain but is not suitable for second-trimester anomaly screening, for which ultrasound (US) is employed. Although expert sonographers are adept at reading US images, MR images, which closely resemble anatomical images, are much easier for non-experts to interpret. Thus, in this paper, we propose to generate MR-like images directly from clinical US images. In medical image analysis such a capability is potentially useful as well, for instance for automatic US-MRI registration and fusion. The proposed model is end-to-end trainable and self-supervised without any external annotations. Specifically, based on an assumption that the US and MRI data share a similar anatomical latent space, we first utilise a network to extract the shared latent features, which are then used for MRI synthesis. Since paired data is unavailable for our study (and rare in practice), pixel-level constraints are infeasible to apply. We instead propose to enforce the distributions to be statistically indistinguishable, by adversarial learning in both the image domain and feature space. To regularise the anatomical structures between US and MRI during synthesis, we further propose an adversarial structural constraint. A new cross-modal attention technique is proposed to utilise non-local spatial information, by encouraging multi-modal knowledge fusion and propagation. We extend the approach to consider the case where 3D auxiliary information (e.g., 3D neighbours and a 3D location index) from volumetric data is also available, and show that this improves image synthesis. The proposed approach is evaluated quantitatively and qualitatively with comparison to real fetal MR images and other approaches to synthesis, demonstrating the feasibility of synthesising realistic MR images. (IEEE Transactions on Medical Imaging.)
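    Because no paired US-MRI data are available, alignment is enforced at the distribution level. The sketch below shows a generic adversarial loss of that kind: a discriminator learns to separate real MR images from synthesised ones, and the synthesis network is trained to fool it. The discriminator architecture is a placeholder, not the paper's, and the structural and feature-space adversarial terms are omitted.

```python
import torch
import torch.nn as nn

disc = nn.Sequential(nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
                     nn.Conv2d(32, 1, 4, 2, 1))           # placeholder patch discriminator
bce = nn.BCEWithLogitsLoss()

def adversarial_losses(real_mr, fake_mr):
    # Discriminator: label real MR as 1, synthesised MR as 0.
    d_real, d_fake = disc(real_mr), disc(fake_mr.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    # Generator: make synthesised MR indistinguishable from real MR.
    g_out = disc(fake_mr)
    g_loss = bce(g_out, torch.ones_like(g_out))
    return d_loss, g_loss
```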

    CheckList [assistive memory system]

    Get PDF
    There are countless times when cellphones, wallets, or keys have been forgotten at home, in the office or on public transit. Our solution to this problem is the CheckList. With this handy device, anyone will be able to tag an item and add it onto their CheckList. Before walking out of any place, the CheckList will detect whether or not you have everything you need. Simply press the handy 'Check' button and the CheckList will inform the user if anything is missing. Forgetting will be a thing of the past. Just remember the CheckList.

    An automated method for tendon image segmentation on ultrasound using grey-level co-occurrence matrix features and hidden Gaussian Markov random fields

    Get PDF
    Background: Despite knowledge of qualitative changes that occur on ultrasound in tendinopathy, there is currently no objective and reliable means to quantify the severity or prognosis of tendinopathy on ultrasound. Objective: The primary objective of this study is to produce a quantitative and automated means of inferring potential structural changes in tendinopathy by developing and implementing an algorithm which performs a texture-based segmentation of tendon ultrasound (US) images. Method: A model-based segmentation approach is used which combines Gaussian mixture models, Markov random field theory and grey-level co-occurrence matrix (GLCM) features. The algorithm is trained and tested on 49 longitudinal B-mode ultrasound images of Achilles tendons, which are labelled as tendinopathic (24) or healthy (25). Hyperparameters are tuned, using a training set of 25 images, to optimise a decision-tree-based classification of the images from texture class proportions. We segment and classify the remaining test images using the decision tree. Results: Our approach successfully detects a difference in the texture profiles of tendinopathic and healthy tendons, with 22/24 of the test images accurately classified based on a simple texture proportion cut-off threshold. Results for the tendinopathic images are also collated to gain insight into the topology of structural changes that occur with tendinopathy. It is evident that distinct textures, which are predominantly present in tendinopathic tendons, appear most commonly near the transverse boundary of the tendon, though there was a large variability among diseased tendons. Conclusion: The GLCM-based segmentation of tendons under ultrasound resulted in distinct segmentations between healthy and tendinopathic tendons and provides a potential tool to objectively quantify damage in tendinopathy.
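    For context, GLCM texture features of the kind used here can be computed with scikit-image as sketched below; the offsets, quantisation and feature set are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older releases

def glcm_features(patch, levels=32):
    # patch: 2D grey-scale array with values in [0, 1]; quantise to 'levels' grey levels.
    q = np.clip((patch * (levels - 1)).astype(np.uint8), 0, levels - 1)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.array([graycoprops(glcm, p).mean() for p in props])
```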

    Normative spatiotemporal fetal brain maturation with satisfactory development at 2 years

    Get PDF
    Maturation of the human fetal brain should follow precisely scheduled structural growth and folding of the cerebral cortex for optimal postnatal function1. We present a normative digital atlas of fetal brain maturation based on a prospective international cohort of healthy pregnant women2, selected using World Health Organization recommendations for growth standards3. Their fetuses were accurately dated in the first trimester, with satisfactory growth and neurodevelopment from early pregnancy to 2 years of age4,5. The atlas was produced using 1,059 optimal-quality, three-dimensional ultrasound brain volumes from 899 of the fetuses and an automated analysis pipeline6–8. The atlas corresponds structurally to published magnetic resonance images9, but with finer anatomical details in deep grey matter. The between-study-site variability represented less than 8.0% of the total variance of all brain measures, supporting the pooling of data from the eight study sites to produce patterns of normative maturation. We have thereby generated an average representation of each cerebral hemisphere between 14 and 31 weeks’ gestation with quantification of intracranial volume variability and growth patterns. Emergent asymmetries were detectable from as early as 14 weeks, with peak asymmetries in regions associated with language development and functional lateralization between 20 and 26 weeks’ gestation. These patterns were validated in 1,487 three-dimensional brain volumes from 1,295 different fetuses in the same cohort. We provide a unique spatiotemporal benchmark of fetal brain maturation from a large cohort with normative postnatal growth and neurodevelopment.
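    The between-site figure quoted above is a variance-partitioning statement; a simple sums-of-squares version of such a calculation, for a single brain measure, is sketched below. This is a generic illustration, not the study's actual variance-component model.

```python
import numpy as np

def between_site_variance_fraction(values, sites):
    # Fraction of the total sum of squares explained by study site (eta-squared style).
    values, sites = np.asarray(values, float), np.asarray(sites)
    grand_mean = values.mean()
    ss_between = sum(np.sum(sites == s) * (values[sites == s].mean() - grand_mean) ** 2
                     for s in np.unique(sites))
    ss_total = ((values - grand_mean) ** 2).sum()
    return ss_between / ss_total
```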

    Machine learning to assess the fetal brain from ultrasound images

    No full text
    Obstetric care decisions fundamentally rely upon accurate estimation of gestational age (GA). Ultrasound (US)-based measurements provide reliable estimates of GA if performed early in pregnancy. However, in low-income settings, the lack of appropriately trained sonographers and the tendency for women to present for care late in pregnancy are barriers to the use of US for dating purposes. In this thesis, we propose to exploit sonographic image patterns associated with dynamic fetal brain development to predict GA. We designed an algorithm which automatically estimates GA from a US scan collected at a single visit, thereby enabling clinically useful estimates of GA to be made even in the third trimester of pregnancy: a period complicated by biological variation and unreliable size-based estimates. The presented model was conceived on the basis that fetal brain development follows a precise spatiotemporal pattern, with folds emerging and disappearing on the surface of the brain (cerebral cortex) at fixed time points during pregnancy. This timing is so precise that post-mortem neuroanatomical and MRI evidence suggest that the 'developmental maturation' of the fetal brain may be a better predictor of GA than traditional size-based estimates. We capitalize on these age-related patterns to develop, for the first time, a unified model which combines sonographic image features and clinical measurements to predict GA and brain maturation. The framework benefits from a manifold surface representation of the fetal head which delineates the inner skull boundary and serves as a common coordinate system based on cranial position. This allows for fast and efficient sampling of anatomically corresponding brain regions to achieve like-for-like structural comparison of different developmental stages. Bespoke features capture neurosonographic patterns in 3D images, and using a regression forest, we characterize structural brain development both spatially and temporally to capture the natural variation existing in a healthy population (n=448) over an age range of active brain maturation (18 to 34 weeks). Our GA prediction results on a high-risk clinical dataset (n=187) strongly correlate with true GA (r=0.98, accurate to within ±6.10 days), confirming the link between maturational progression and neurosonographic activity observable across gestation. Our model also outperforms current clinical methods, particularly in the third trimester. Through feature selection, the model successfully identified regional biomarkers of neurodevelopmental progression over gestation. Guided by these regions, we present a novel approach for defining and testing hypotheses associated with neuropathological deviations.
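    A toy sketch of the final prediction stage, using scikit-learn's random forest regressor on stand-in features: the thesis's bespoke neurosonographic features and skull-aligned sampling are not reproduced, and the synthetic data below is purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(448, 64))                  # stand-in for per-region features
y_train = rng.uniform(18, 34, size=448)               # gestational age in weeks

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
ga_pred = forest.predict(rng.normal(size=(5, 64)))    # GA estimates for new scans
region_importance = forest.feature_importances_       # proxy for regional biomarkers
```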

    Data for paper 'Learning to segment key clinical anatomical structures in fetal neurosonography informed by a region-based descriptor'

    No full text
    Paper and supporting statistical data for the journal article: Ruobing Huang, Ana Namburete, Alison Noble, "Learning to segment key clinical anatomical structures in fetal neurosonography informed by a region-based descriptor," J. Med. Imag. 5(1), 014007 (2018), doi: 10.1117/1.JMI.5.1.014007. These data allow future research to make direct comparisons with the reported results.

    FedHarmony: Unlearning Scanner Bias with Distributed Data

    Full text link
    The ability to combine data across scanners and studies is vital for neuroimaging, to increase both statistical power and the representation of biological variability. However, combining datasets across sites leads to two challenges: first, an increase in undesirable non-biological variance due to scanner and acquisition differences - the harmonisation problem - and second, data privacy concerns due to the inherently personal nature of medical imaging data, meaning that sharing them across sites may risk violation of privacy laws. To overcome these restrictions, we propose FedHarmony: a harmonisation framework operating in the federated learning paradigm. We show that, to remove the scanner-specific effects, we only need to share the mean and standard deviation of the learned features, helping to protect individual subjects' privacy. We demonstrate our approach across a range of realistic data scenarios, using real multi-site data from the ABIDE dataset, thus showing the potential utility of our method for MRI harmonisation across studies. Our code is available at https://github.com/nkdinsdale/FedHarmony. (Accepted to MICCAI 2022.)
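    A minimal sketch of the statistic-sharing idea, assuming each feature dimension is summarised as a Gaussian: a site shares only the mean and standard deviation of its learned features, and a KL-style penalty pulls local statistics towards the shared reference. The exact objective used in FedHarmony may differ.

```python
import torch

def site_feature_stats(features):
    # Only per-dimension mean and std leave the site, not the features or images.
    return features.mean(dim=0), features.std(dim=0)

def gaussian_alignment_loss(local_feats, ref_mean, ref_std, eps=1e-6):
    # KL(N(mu, std^2) || N(ref_mean, ref_std^2)), summed over feature dimensions.
    mu, std = local_feats.mean(dim=0), local_feats.std(dim=0) + eps
    ref_std = ref_std + eps
    kl = (torch.log(ref_std / std)
          + (std ** 2 + (mu - ref_mean) ** 2) / (2 * ref_std ** 2) - 0.5)
    return kl.sum()
```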