    A k-Space Model of Movement Artefacts: Application to Segmentation Augmentation and Artefact Removal

    Patient movement during the acquisition of magnetic resonance images (MRI) can cause unwanted image artefacts. These artefacts may affect the quality of clinical diagnosis and cause errors in automated image analysis. In this work, we present a method for generating realistic motion artefacts from artefact-free magnitude MRI data to be used in deep learning frameworks, increasing training appearance variability and ultimately making machine learning algorithms such as convolutional neural networks (CNNs) more robust to the presence of motion artefacts. By modelling patient movement as a sequence of randomly generated, ‘demeaned’, rigid 3D affine transforms, we resample artefact-free volumes and combine them in k-space to generate motion-artefact data. We show that by augmenting the training of semantic segmentation CNNs with artefacts, we can train models that generalise better and perform more reliably in the presence of artefact data, at negligible cost to their performance on clean data. We also show that models trained with artefact augmentation perform more robustly on segmentation tasks involving real-world test-retest image pairs. We further demonstrate that our augmentation model can be used to learn to retrospectively remove certain types of motion artefacts from real MRI scans. Finally, we show that measures of uncertainty obtained from motion-augmented CNN models reflect the presence of artefacts and can thus provide relevant information to ensure the safe usage of deep-learning-extracted biomarkers in a clinical pipeline.
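    For concreteness, a minimal NumPy sketch of this kind of k-space artefact model is given below. It assumes a 3D magnitude volume as input; the split of k-space into contiguous blocks along one phase-encode axis, the parameter ranges, and all function names are illustrative assumptions, not the authors' exact implementation.

```python
# A minimal sketch of a k-space motion-artefact model, assuming a 3D magnitude
# volume `vol` (numpy array). The k-space segmentation scheme and parameter
# ranges are illustrative assumptions.
import numpy as np
from scipy.ndimage import affine_transform

def random_rigid_params(n, max_rot_deg=2.0, max_trans_vox=2.0, rng=None):
    """Draw n rigid motion states and 'demean' them so the average pose is
    (approximately) the identity, keeping the output aligned with the input."""
    rng = np.random.default_rng() if rng is None else rng
    rot = rng.uniform(-max_rot_deg, max_rot_deg, (n, 3))
    trans = rng.uniform(-max_trans_vox, max_trans_vox, (n, 3))
    return rot - rot.mean(0), trans - trans.mean(0)

def rigid_resample(vol, rot_deg, trans):
    """Resample a volume under a rigid transform about the volume centre."""
    rx, ry, rz = np.deg2rad(rot_deg)
    Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
    R = Rx @ Ry @ Rz
    centre = (np.array(vol.shape) - 1) / 2
    offset = centre - R @ centre - trans
    return affine_transform(vol, R, offset=offset, order=1)

def simulate_motion(vol, n_states=4, rng=None):
    """Fill contiguous k-space segments from differently-moved copies of `vol`,
    mimicking movement between acquisition of different phase-encode blocks."""
    rot, trans = random_rigid_params(n_states, rng=rng)
    k_out = np.zeros(vol.shape, dtype=complex)
    bounds = np.linspace(0, vol.shape[0], n_states + 1).astype(int)
    for i in range(n_states):
        k_i = np.fft.fftshift(np.fft.fftn(rigid_resample(vol, rot[i], trans[i])))
        k_out[bounds[i]:bounds[i + 1]] = k_i[bounds[i]:bounds[i + 1]]
    return np.abs(np.fft.ifftn(np.fft.ifftshift(k_out)))
```

    One appeal of demeaning the motion states is that the corrupted volume stays roughly aligned with the clean one, so existing segmentation labels remain usable for augmentation.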

    Test-time unsupervised domain adaptation

    Convolutional neural networks trained on publicly available medical imaging datasets (source domain) rarely generalise to different scanners or acquisition protocols (target domain). This motivates the active field of domain adaptation. While some approaches to the problem require labelled data from the target domain, others adopt an unsupervised approach to domain adaptation (UDA). Evaluating UDA methods typically consists of measuring a model's ability to generalise to unseen data in the target domain. In this work, we argue that this is not as useful as adapting to the test set directly, and we therefore propose an evaluation framework in which we perform test-time UDA on each subject separately. We show that models adapted to a specific target subject outperform a domain adaptation method that has seen more data from the target domain but not that specific subject. This result supports the thesis that unsupervised domain adaptation should be performed at test time, even when only a single target-domain subject is available.
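    A per-subject test-time adaptation loop could be sketched as below. The entropy-minimisation objective is a common unsupervised choice and is our assumption here, not necessarily the adaptation loss used in the paper.

```python
# A minimal sketch of per-subject test-time adaptation in PyTorch. The
# entropy-minimisation objective is an assumption; the paper's UDA loss
# may differ.
import copy
import torch

def adapt_to_subject(model, subject_batches, steps=50, lr=1e-4):
    """Fine-tune a copy of `model` on ONE unlabelled target subject, then
    return the adapted copy (the source model is left untouched)."""
    adapted = copy.deepcopy(model)
    adapted.train()
    opt = torch.optim.Adam(adapted.parameters(), lr=lr)
    for _ in range(steps):
        for x in subject_batches:  # unlabelled patches/slices of one subject
            logits = adapted(x)
            p = torch.softmax(logits, dim=1)
            # Shannon entropy of the prediction, averaged over voxels:
            # low entropy = confident, consistent predictions on this subject.
            entropy = -(p * torch.log(p.clamp_min(1e-8))).sum(dim=1).mean()
            opt.zero_grad()
            entropy.backward()
            opt.step()
    return adapted.eval()
```

    Because a fresh copy is adapted per subject, no information leaks between test subjects, matching the evaluation framework described above.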

    Hierarchical Brain Parcellation with Uncertainty

    Many atlases used for brain parcellation are hierarchically organised, progressively dividing the brain into smaller sub-regions. However, state-of-the-art parcellation methods tend to ignore this structure and treat labels as if they were ‘flat’. We introduce a hierarchically-aware brain parcellation method that works by predicting the decisions at each branch in the label tree. We further show how this method can be used to model uncertainty separately for every branch in the label tree. Our method exceeds the performance of flat uncertainty methods, whilst also providing decomposed uncertainty estimates that enable us to obtain self-consistent parcellations and uncertainty maps at any level of the label hierarchy. We demonstrate a simple way in which these decision-specific uncertainty maps may be used to provide uncertainty-thresholded tissue maps at any level of the label tree.
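    The following toy sketch shows how per-branch decisions can compose into self-consistent label probabilities at any level of a label tree. The three-node hierarchy and the data structures are illustrative assumptions, not the paper's atlas or implementation.

```python
# A minimal sketch of composing per-branch softmax decisions into label
# probabilities at any level of a label tree. The toy hierarchy is ours.
import numpy as np

# Toy hierarchy: root -> {tissue -> {GM, WM}, CSF}
TREE = {"root": ["tissue", "CSF"], "tissue": ["GM", "WM"]}

def path_to(label, node="root"):
    """Return the root-to-label path through TREE, or [] if absent."""
    if node == label:
        return [node]
    for child in TREE.get(node, []):
        sub = path_to(label, child)
        if sub:
            return [node] + sub
    return []

def probability(branch_probs, label):
    """Multiply the conditional branch probabilities along the path,
    so p(leaf) = product of decisions taken at each internal node."""
    path = path_to(label)
    p = 1.0
    for parent, child in zip(path, path[1:]):
        p *= branch_probs[parent][child]
    return p

# One softmax per internal node (stand-ins for network outputs at a voxel):
branch_probs = {"root": {"tissue": 0.7, "CSF": 0.3},
                "tissue": {"GM": 0.6, "WM": 0.4}}
assert abs(probability(branch_probs, "GM") - 0.42) < 1e-9  # 0.7 * 0.6
# Self-consistency across levels: p(tissue) = p(GM) + p(WM).
assert abs(probability(branch_probs, "tissue")
           - (probability(branch_probs, "GM") + probability(branch_probs, "WM"))) < 1e-9
```

    Since each internal node carries its own distribution, an uncertainty measure can likewise be read off at any branch, which is what enables uncertainty maps at every level of the hierarchy.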

    The Role of MRI Physics in Brain Segmentation CNNs: Achieving Acquisition Invariance and Instructive Uncertainties

    Being able to adequately process and combine data arising from different sites is crucial in neuroimaging but difficult, owing to site-, sequence- and acquisition-parameter-dependent biases. It is therefore important to design algorithms that are not only robust to images of differing contrasts but are also able to generalise well to unseen ones, with a quantifiable measure of uncertainty. In this paper we demonstrate the efficacy of a physics-informed, uncertainty-aware segmentation network that employs augmentation-time MR simulations and homogeneous batch feature stratification to achieve acquisition invariance. We show that the proposed approach also accurately extrapolates to out-of-distribution sequence samples, providing well-calibrated volumetric bounds on these. We demonstrate a significant improvement in terms of coefficients of variation, backed by uncertainty-based volumetric validation.
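    To illustrate augmentation-time MR simulation, the sketch below generates new contrasts from quantitative parameter maps using the standard spoiled gradient-echo (SPGR) signal equation. The choice of sequence and the parameter ranges are our assumptions; the paper's simulator may cover other sequences and parameters.

```python
# A minimal sketch of augmentation-time MR simulation from quantitative maps
# (PD, T1, T2*), using the standard SPGR signal equation. Parameter ranges
# are illustrative assumptions.
import numpy as np

def spgr_signal(pd, t1, t2s, tr, te, flip_deg):
    """SPGR magnitude contrast: S = PD sin(a) (1 - E1) / (1 - cos(a) E1) e^(-TE/T2*),
    with E1 = e^(-TR/T1); times in ms, flip angle in degrees."""
    a = np.deg2rad(flip_deg)
    e1 = np.exp(-tr / t1)
    return pd * np.sin(a) * (1 - e1) / (1 - np.cos(a) * e1) * np.exp(-te / t2s)

def random_contrast(pd, t1, t2s, rng=None):
    """Draw acquisition parameters at augmentation time, yielding a fresh
    contrast for the same anatomy on every training iteration."""
    rng = np.random.default_rng() if rng is None else rng
    tr = rng.uniform(15.0, 100.0)   # ms
    te = rng.uniform(4.0, 10.0)     # ms
    flip = rng.uniform(5.0, 40.0)   # degrees
    return spgr_signal(pd, t1, t2s, tr, te, flip)
```

    Because the anatomy (and hence the labels) is fixed while the simulated acquisition varies, the network is pushed towards contrast-invariant features, which is the essence of the acquisition invariance described above.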

    Hetero-Modal Variational Encoder-Decoder for Joint Modality Completion and Segmentation

    We propose a new deep learning method for tumour segmentation when dealing with missing imaging modalities. Instead of producing one network for each possible subset of observed modalities, or using arithmetic operations to combine feature maps, our hetero-modal variational 3D encoder-decoder independently embeds all observed modalities into a shared latent representation. Missing data and the tumour segmentation can then be generated from this embedding. In our scenario, the input is a random subset of modalities. We demonstrate that the optimisation problem can be seen as a mixture sampling. In addition, we introduce a new network architecture building upon both the 3D U-Net and the Multi-Modal Variational Auto-Encoder (MVAE). Finally, we evaluate our method on BraTS2018 using subsets of the imaging modalities as input. Our model outperforms the current state-of-the-art method for dealing with missing modalities and achieves similar performance to the subset-specific equivalent networks. (Accepted at MICCAI 2019.)
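    Since the method builds on the MVAE, one plausible way to fuse an arbitrary subset of modality-specific Gaussian posteriors into the shared latent is the MVAE's product-of-experts rule, sketched below. Whether this matches the paper's exact fusion is an assumption on our part; the variable names are illustrative.

```python
# A minimal sketch of fusing a subset of modality-specific diagonal Gaussian
# posteriors with the MVAE product-of-experts rule. Whether this matches the
# paper's exact formulation is an assumption.
import numpy as np

def product_of_gaussians(mus, logvars):
    """The normalised product of N diagonal Gaussians is again diagonal
    Gaussian: precisions add, and the mean is precision-weighted."""
    precisions = [np.exp(-lv) for lv in logvars]          # 1 / var_i
    prec = np.sum(precisions, axis=0)
    mu = np.sum([p * m for p, m in zip(precisions, mus)], axis=0) / prec
    return mu, -np.log(prec)                              # fused mean, logvar

# Any observed subset of modalities can be embedded: here two of four.
mu_t1, lv_t1 = np.zeros(8), np.zeros(8)       # stand-ins for encoder outputs
mu_flair, lv_flair = np.ones(8), np.zeros(8)
mu, logvar = product_of_gaussians([mu_t1, mu_flair], [lv_t1, lv_flair])
z = mu + np.exp(0.5 * logvar) * np.random.randn(8)  # reparameterised sample
```

    The appeal of such a fusion rule is that it is defined for any subset of observed modalities, so a single decoder can generate missing images and segmentations regardless of which inputs are available.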

    Let's Agree to Disagree: Learning Highly Debatable Multirater Labelling

    Classification and differentiation of small pathological objects may vary greatly among human raters due to differences in training, expertise, and consistency over time. In a radiological setting, objects commonly have high within-class appearance variability whilst sharing certain characteristics across different classes, making their distinction even more difficult. As an example, markers of cerebral small vessel disease, such as enlarged perivascular spaces (EPVS) and lacunes, can be very varied in appearance while exhibiting high inter-class similarity, making this task highly challenging for human raters. In this work, we investigate joint models of individual rater behaviour and multi-rater consensus in a deep learning setting, and apply them to a brain lesion object-detection task. Results show that jointly modelling both individual and consensus estimates leads to significant improvements in performance when compared to directly predicting consensus labels, while also allowing the characterisation of human-rater consistency.
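    One way to realise such a joint model is a shared backbone with one output head per rater plus a consensus head, trained jointly, as in the PyTorch sketch below. The architecture sizes and names are placeholders, not the paper's configuration.

```python
# A minimal sketch of jointly modelling individual raters and their consensus:
# shared features, one head per rater, one consensus head. Sizes are placeholders.
import torch
import torch.nn as nn

class MultiRaterModel(nn.Module):
    def __init__(self, backbone, feat_dim, n_classes, n_raters):
        super().__init__()
        self.backbone = backbone
        self.rater_heads = nn.ModuleList(
            [nn.Linear(feat_dim, n_classes) for _ in range(n_raters)])
        self.consensus_head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        f = self.backbone(x)
        return [h(f) for h in self.rater_heads], self.consensus_head(f)

def joint_loss(rater_logits, consensus_logits, rater_labels, consensus_label):
    """Sum of per-rater losses plus a consensus loss, optimised jointly so the
    shared features must explain both individual behaviour and agreement."""
    ce = nn.functional.cross_entropy
    per_rater = sum(ce(lg, y) for lg, y in zip(rater_logits, rater_labels))
    return per_rater + ce(consensus_logits, consensus_label)
```

    A side benefit of the per-rater heads is that comparing a rater's head against the consensus head gives a direct handle on that rater's consistency, echoing the characterisation mentioned above.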

    Geo-social gradients in predicted COVID-19 prevalence in Great Britain: results from 1 960 242 users of the COVID-19 Symptoms Study app

    Understanding the geographical distribution of COVID-19 through the general population is key to the provision of adequate healthcare services. Using self-reported data from 1 960 242 unique users of the COVID-19 Symptom Study app in Great Britain (GB), we estimated that, at the time the GB government sanctioned lockdown, COVID-19 was already distributed across GB, with evidence of ‘urban hotspots’. We found a geo-social gradient associated with predicted disease prevalence, suggesting that urban areas and areas of higher deprivation are most affected. Our results demonstrate the use of self-reported symptom data to focus attention on geographical areas with identified risk factors.

    Detecting COVID-19 infection hotspots in England using large-scale self-reported data from a mobile application: a prospective, observational study

    BACKGROUND: As many countries seek to slow the spread of COVID-19 without reimposing national restrictions, it has become important to track the disease at a local level to identify areas in need of targeted intervention. METHODS: In this prospective, observational study, we modelled the epidemic using longitudinal, self-reported data from users of the COVID Symptom Study app in England between March 24 and Sept 29, 2020. Beginning on April 28, in England, the Department of Health and Social Care allocated RT-PCR tests for COVID-19 to app users who logged themselves as healthy at least once in 9 days and then reported any symptom. We calculated incidence of COVID-19 using the invited swab (RT-PCR) tests reported in the app, and we estimated prevalence using a symptom-based method (using logistic regression) and a method based on both symptoms and swab test results. We used incidence rates to estimate the effective reproduction number, R(t), modelling the system as a Poisson process and using Markov chain Monte Carlo. We used three datasets to validate our models: the Office for National Statistics (ONS) Community Infection Survey, the Real-time Assessment of Community Transmission (REACT-1) study, and UK Government testing data. We used geographically granular estimates to highlight regions with rapidly increasing case numbers, or hotspots. FINDINGS: From March 24 to Sept 29, 2020, a total of 2 873 726 users living in England signed up to use the app, of whom 2 842 732 (98·9%) provided valid age information and daily assessments. These users provided a total of 120 192 306 daily reports of their symptoms, and recorded the results of 169 682 invited swab tests. On a national level, our estimates of incidence and prevalence showed a sensitivity to changes similar to that reported in the ONS and REACT-1 studies. On Sept 28, 2020, we estimated an incidence of 15 841 (95% CI 14 023-17 885) daily cases, a prevalence of 0·53% (0·45-0·60), and R(t) of 1·17 (1·15-1·19) in England. On a geographically granular level, on Sept 28, 2020, we detected 15 (75%) of the 20 regions with the highest incidence according to government test data. INTERPRETATION: Our method could help to detect rapid case increases in regions where government testing provision is lower. Self-reported data from mobile applications can provide an agile resource to inform policy makers during a quickly moving pandemic, serving as a complement to more traditional instruments for disease surveillance. FUNDING: Zoe Global, UK Government Department of Health and Social Care, Wellcome Trust, UK Engineering and Physical Sciences Research Council, UK National Institute for Health Research, UK Medical Research Council and British Heart Foundation, Alzheimer's Society, Chronic Disease Research Foundation.
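    To make the R(t) step concrete, the sketch below fits a Poisson renewal model, I_t ~ Poisson(R · Σ_s w_s I_{t−s}), with a random-walk Metropolis sampler. The generation-interval weights w and the single-R-per-window simplification are our assumptions; the paper's model may be richer.

```python
# A minimal sketch of estimating R from daily incidence under a Poisson
# renewal model, via random-walk Metropolis. The weights `w` (a discretised
# generation-interval distribution summing to 1) are an assumed input, and
# incidence is assumed positive within each likelihood window.
import numpy as np

def poisson_loglik(R, incidence, w):
    """log p(I | R) under I_t ~ Poisson(R * sum_s w_s I_{t-s})."""
    ll = 0.0
    for t in range(len(w), len(incidence)):
        lam = R * np.dot(w, incidence[t - len(w):t][::-1])
        ll += incidence[t] * np.log(lam) - lam  # Poisson log-pmf up to a constant
    return ll

def metropolis_R(incidence, w, n_samples=5000, step=0.05, rng=None):
    """Draw posterior samples of a single window-wide R (flat prior on R > 0)."""
    rng = np.random.default_rng() if rng is None else rng
    R, samples = 1.0, []
    ll = poisson_loglik(R, incidence, w)
    for _ in range(n_samples):
        prop = R + rng.normal(0, step)
        if prop > 0:
            ll_prop = poisson_loglik(prop, incidence, w)
            if np.log(rng.uniform()) < ll_prop - ll:  # accept/reject step
                R, ll = prop, ll_prop
        samples.append(R)
    return np.array(samples)
```

    Credible intervals such as the 1·17 (1·15-1·19) reported above would then be read off as quantiles of the posterior samples, computed per region or per time window.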

    Attributes and predictors of Long-COVID: analysis of COVID cases and their symptoms collected by the Covid Symptoms Study App

    Reports of “Long-COVID” are rising, but little is known about its prevalence or risk factors, or whether it is possible to predict a protracted course early in the disease. We analysed data from 4182 incident cases of COVID-19 in individuals who logged their symptoms prospectively in the COVID Symptom Study app. 558 (13.3%) had symptoms lasting ≥28 days, 189 (4.5%) for ≥8 weeks, and 95 (2.3%) for ≥12 weeks. Long-COVID was characterised by symptoms of fatigue, headache, dyspnoea and anosmia, and was more likely with increasing age, BMI and female sex. Experiencing more than five symptoms during the first week of illness was associated with Long-COVID (OR = 3.53 [2.76; 4.50]). A simple model to distinguish between short and Long-COVID at 7 days, which achieved a ROC-AUC of 76%, was replicated in an independent sample of 2472 antibody-positive individuals. This model could be used to identify individuals for clinical trials to reduce long-term symptoms and to target education and rehabilitation services.
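    A minimal sketch of the kind of simple 7-day model described above, logistic regression on early symptom count plus age, BMI and sex, scored by ROC-AUC, is given below. The exact feature set is our assumption, and the data here are synthetic placeholders so the example runs end-to-end.

```python
# A minimal sketch of a 7-day Long-COVID classifier: logistic regression on
# early symptom count, age, BMI and sex. Features are assumed from the
# abstract; the data below are SYNTHETIC placeholders, not real app data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
X = np.column_stack([
    rng.integers(0, 10, n),   # symptoms reported in the first week of illness
    rng.uniform(18, 90, n),   # age
    rng.uniform(16, 45, n),   # BMI
    rng.integers(0, 2, n),    # sex (0/1)
])
# Synthetic outcome ONLY so the sketch runs; weights are arbitrary.
logit = 0.5 * X[:, 0] + 0.03 * X[:, 1] - 6
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("ROC-AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```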