19 research outputs found

    Area-level deprivation and adiposity in children: is the relationship linear?

    OBJECTIVE: It has been suggested that childhood obesity is positively associated with deprivation, such that prevalence is higher in more deprived groups. However, comparatively few studies actually use an area-level measure of deprivation, limiting the scope to assess trends in the association with obesity for this indicator. Furthermore, most assume a linear relationship. Therefore, the aim of this study was to investigate associations between area-level deprivation and three measures of adiposity in children: body mass index (BMI), waist circumference (WC) and waist-to-height ratio (WHtR). DESIGN: This is a cross-sectional study in which data were collected on three occasions a year apart (2005-2007). SUBJECTS: Data were available for 13,333 children, typically aged 11-12 years, from 37 schools and 542 lower super-output areas (LSOAs). MEASURES: Stature, mass and WC. Obesity was defined as a BMI or WC exceeding the 95th centile according to British reference data; for WHtR, a value exceeding 0.5 defined obesity. The Income Deprivation Affecting Children Index (IDACI) was used to determine area-level deprivation. RESULTS: Considerable differences in the prevalence of obesity exist between the three measures. However, for all measures of adiposity the highest probability of being classified as obese occurs in the middle of the IDACI range. This relationship is more marked in girls: the probability of being obese for girls living in areas at the two extremes of deprivation is around half that at the peak, which occurs in the middle. CONCLUSION: These data confirm the high prevalence of obesity in children and suggest that the relationship between obesity and residential area-level deprivation is not linear. This contradicts 'deprivation theory' and questions the current understanding and interpretation of the relationship between obesity and deprivation in children. These results could help inform decisions at the local level.
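    The inverted-U relationship the abstract describes can be modelled by adding a quadratic deprivation term to a logistic regression. A minimal sketch of that idea, assuming IDACI scores scaled to [0, 1] and using purely illustrative coefficients (b0, b1, b2 are not the study's fitted values):

    ```python
    import numpy as np

    def obesity_probability(idaci, b0=-2.0, b1=8.0, b2=-8.0):
        """Logistic model with a quadratic term in area deprivation.

        A negative quadratic coefficient (b2 < 0) produces the non-linear,
        inverted-U relationship the study reports: the probability of being
        classified as obese peaks in the middle of the deprivation range and
        falls off towards both extremes.
        """
        logit = b0 + b1 * idaci + b2 * idaci ** 2
        return 1.0 / (1.0 + np.exp(-logit))

    d = np.linspace(0.0, 1.0, 101)       # IDACI from least to most deprived
    p = obesity_probability(d)
    peak = d[np.argmax(p)]               # with these coefficients, peak is at 0.5
    ```

    With these hypothetical coefficients the probability at either extreme is well below the mid-range peak, mirroring the pattern reported for girls; a conventional linear-in-deprivation model (b2 = 0) could not represent this.
    
    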

    Predator-Induced Demographic Shifts in Coral Reef Fish Assemblages

    In recent years, it has become apparent that human impacts have altered community structure in coastal and marine ecosystems worldwide. Of these, fishing is one of the most pervasive, and a growing body of work suggests that fishing can have strong effects on the ecology of target species, especially top predators. However, the effects of removing top predators on lower trophic groups of prey fishes are less clear, particularly in highly diverse and trophically complex coral reef ecosystems. We examined patterns of abundance, size structure, and age-based demography through surveys and collection-based studies of five fish species from a variety of trophic levels at Kiritimati and Palmyra, two nearby atolls in the Northern Line Islands. These islands have similar biogeography and oceanography, and yet Kiritimati has ~10,000 people with extensive local fishing while Palmyra is a US National Wildlife Refuge with no permanent human population, no fishing, and an intact predator fauna. Surveys indicated that top predators were relatively larger and more abundant at unfished Palmyra, while prey functional groups were relatively smaller but showed no clear trends in abundance as would be expected from classic trophic cascades. Through detailed analyses of focal species, we found that size and longevity of a top predator were lower at fished Kiritimati than at unfished Palmyra. Demographic patterns also shifted dramatically for 4 of 5 fish species in lower trophic groups, opposite in direction to the top predator, including decreases in average size and longevity at Palmyra relative to Kiritimati. Overall, these results suggest that fishing may alter community structure in complex and non-intuitive ways, and that indirect demographic effects should be considered more broadly in ecosystem-based management.

    Optimising convolutional neural networks for large-scale neuroimaging studies

    Ageing has a pronounced effect on the human brain, leading to cognitive decline and an increased risk of neurodegenerative diseases. Thus, the ageing population presents a significant challenge for healthcare. The use of MRI and the availability of computational methods for analysing the MRI data is increasingly contributing to the understanding of healthy and diseased structural brain maturation and ageing. Increasingly, large cross-sectional and longitudinal neuroimaging studies are becoming available, presenting opportunities for the application of deep learning to neuroimage analysis. There are, however, many domain-specific problems in applying deep learning to neuroimaging, which currently limit its wider applicability. This thesis explores three distinct problems. First, a model is developed to explore brain ageing. Both normal ageing and neurodegenerative disease cause morphological changes to the brain, and deep learning models are well suited to capturing these patterns. A 3D CNN architecture is developed to predict chronological age, using T1-weighted MRI from the UK Biobank dataset. The proposed method shows competitive performance on age prediction, but, most importantly, the CNN prediction errors correlated significantly with many clinical measurements from the UK Biobank in the female and male groups. In addition, having used images from only one imaging modality in this experiment, the relationships between ∆BrainAge and the image-derived phenotypes (IDPs) from all of the other imaging modalities in the UK Biobank are explored, showing correlations consistent with known patterns of ageing. The effect of the pre-processing is also explored. Specifically, it is shown that the use of non-linearly registered images to train the CNNs can lead to the network being driven by artefacts of the registration process, and therefore to miss subtle indicators of ageing, which would limit the clinical relevance of the model.
Increasingly large MRI neuroimaging datasets are becoming available but many of these are highly “multi-site, multi-scanner”, which leads to an increase in variance due to nonbiological factors when these are combined. This increase in variance is due to factors such as differences in acquisition protocols and hardware, and can mask the signals of interest, and this is known as the harmonisation problem. A deep learning based scheme, developed from domain adaptation techniques, is used to create harmonised outputs. An iterative update approach is used to create scanner-invariant features, whilst simultaneously maintaining performance on the main task of interest, thus reducing the influence of the acquisition scanner on the network predictions. The framework is demonstrated for regression, classification, and segmentation tasks with two different network architectures. It is shown that not only can the framework harmonise multi-site datasets, but it can also adapt to many data scenarios, including biased datasets and limited training labels. Finally, it is shown that the framework can be extended for the removal of other known confounds in addition to scanner or data source. The overall framework is therefore flexible and should be applicable to a wide range of neuroimaging studies. Finally, the parameterisation of neural networks is considered: the vast number of parameters means that networks need large numbers of training examples and labels. Especially for medical image segmentation, large labelled datasets are rarely available. A method to train and prune UNet architectures simultaneously is developed, alongside an adaptive targeted dropout scheme that makes the network robust to the pruning – that is, the removal of filters from the model. It is shown that the pruned models outperform the standard UNet models, especially when working in very low data regimes, across medical imaging tasks. 
The framework is then applied to multi-site MRI data: the standard UNet models trained on the data from one site suffer significant performance degradation when applied to the data from the other sites, but this is reduced when using the pruned models, due to a reduction in model overfitting. The generalisability is systematically explored, and it is shown that the pruned models have increased robustness, compared to the standard UNet models.
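The ∆BrainAge quantity used above is the difference between CNN-predicted and chronological age. Predicted ages typically regress towards the training-set mean, so brain-age studies commonly apply a linear bias correction before taking the residual. A minimal numpy sketch of that common correction (the abstract does not state the exact correction scheme used in the thesis, so treat this as an illustration of the general practice):

```python
import numpy as np

def brain_age_delta(predicted, chronological):
    """Bias-corrected brain-age delta (predicted minus chronological age).

    Fit a line predicted = slope * chronological + intercept, invert it to
    remove the regression-to-the-mean bias, then return the residual. A
    positive delta suggests a brain that looks "older" than the subject's
    chronological age.
    """
    slope, intercept = np.polyfit(chronological, predicted, 1)
    corrected = (predicted - intercept) / slope
    return corrected - chronological

age = np.linspace(50.0, 80.0, 100)      # chronological ages
pred = 0.8 * age + 13.0                 # synthetic biased CNN predictions
delta = brain_age_delta(pred, age)      # bias removed: deltas near zero
```

On purely linear synthetic bias, as here, the correction recovers a delta of zero; in practice, the residual deltas are what get correlated against clinical measurements and IDPs.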

    Deep learning-based unlearning of dataset bias for MRI harmonisation and confound removal.

    Increasingly large MRI neuroimaging datasets are becoming available, including many highly multi-site, multi-scanner datasets. Combining the data from the different scanners is vital for increased statistical power; however, this leads to an increase in variance due to non-biological factors such as differences in acquisition protocols and hardware, which can mask signals of interest. We propose a deep learning based training scheme, inspired by domain adaptation techniques, which uses an iterative update approach to create scanner-invariant features while simultaneously maintaining performance on the main task of interest, thus reducing the influence of scanner on network predictions. We demonstrate the framework for regression, classification and segmentation tasks with two different network architectures. We show that not only can the framework harmonise multi-site datasets, but it can also adapt to many data scenarios, including biased datasets and limited training labels. Finally, we show that the framework can be extended for the removal of other known confounds in addition to scanner. The overall framework is therefore flexible and should be applicable to a wide range of neuroimaging studies.
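    Unlearning schemes of this kind typically penalise the feature extractor with a "confusion" loss: the cross-entropy between the scanner classifier's output and a uniform distribution over scanners, which is minimised exactly when the features carry no scanner information. A small numpy sketch of that loss (the full iterative training loop, which alternates this update with the main-task update, is omitted):

    ```python
    import numpy as np

    def softmax(logits):
        """Row-wise softmax with the usual max-subtraction for stability."""
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def confusion_loss(scanner_logits):
        """Cross-entropy between scanner predictions and a uniform target.

        Minimising this with respect to the feature extractor (not the
        scanner classifier) pushes the features towards scanner invariance:
        the loss attains its minimum, log(n_scanners), exactly when the
        classifier is maximally confused.
        """
        p = softmax(scanner_logits)
        n_scanners = p.shape[1]
        return -np.mean(np.sum(np.log(p + 1e-12), axis=1)) / n_scanners
    ```

    For three scanners, all-zero logits (a maximally confused classifier) give a loss of log 3, while confidently scanner-predictive logits give a strictly larger value, so gradient descent on this term drives the features towards invariance.
    
    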

    STAMP: Simultaneous Training and Model Pruning for low data regimes in medical image segmentation

    Acquisition of high-quality manual annotations is vital for the development of segmentation algorithms. However, creating them requires a substantial amount of expert time and knowledge. Large numbers of labels are required to train convolutional neural networks due to the vast number of parameters that must be learned in the optimisation process. Here, we develop the STAMP algorithm to allow the simultaneous training and pruning of a UNet architecture for medical image segmentation, with targeted channel-wise dropout to make the network robust to the pruning. We demonstrate the technique across segmentation tasks and imaging modalities. It is then shown that, through online pruning, we are able to train networks to have much higher performance than the equivalent standard UNet models while reducing their size by more than 85% in terms of parameters. This has the potential to allow networks to be directly trained on datasets where very low numbers of labels are available.
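    The two core operations, targeted dropout and online pruning, can be sketched in a few lines. The sketch below assumes filters are ranked by their L2 weight norm (one plausible importance measure; the paper's exact criterion and hyperparameters may differ), drops out only the lowest-ranked fraction during training so the network learns not to depend on them, and then removes the lowest-ranked filters outright:

    ```python
    import numpy as np

    def targeted_dropout_mask(filter_weights, targeting_fraction=0.5,
                              drop_prob=0.5, rng=None):
        """Dropout applied only to the lowest-norm fraction of filters.

        High-importance filters are never dropped; the filters most likely to
        be pruned later are stochastically zeroed, making the network robust
        to their eventual removal.
        """
        rng = np.random.default_rng() if rng is None else rng
        norms = np.linalg.norm(
            filter_weights.reshape(filter_weights.shape[0], -1), axis=1)
        n_targeted = int(targeting_fraction * len(norms))
        targeted = np.argsort(norms)[:n_targeted]        # lowest-norm filters
        mask = np.ones(len(norms))
        drop = targeted[rng.random(n_targeted) < drop_prob]
        mask[drop] = 0.0
        return mask

    def prune_smallest(filter_weights, n_prune):
        """One online pruning step: remove the n_prune lowest-norm filters."""
        norms = np.linalg.norm(
            filter_weights.reshape(filter_weights.shape[0], -1), axis=1)
        keep = np.sort(np.argsort(norms)[n_prune:])
        return filter_weights[keep]
    ```

    In the full algorithm these steps are interleaved with normal training updates, so the network gradually shrinks while it learns rather than being pruned once after convergence.
    
    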

    Omni-supervised domain adversarial training for white matter hyperintensity segmentation in the UK Biobank

    White matter hyperintensities (WMHs, or lesions) appear as hyperintense, localized regions on T2-weighted and FLAIR brain MR images. The heterogeneity in lesion characteristics due to subject-level (e.g., local intensity/contrast) and population-level (e.g., demographic, scanner-related) variations makes their segmentation highly challenging. Here, we propose a framework for adapting a state-of-the-art WMH segmentation method with high accuracy from a small, labeled source dataset (MICCAI WMH segmentation challenge 2017 training data) to a larger dataset such as the UK Biobank, without the need for additional manual training labels, using domain adversarial training with omni-supervised learning. Given the well-known association of WMHs with age, the proposed method uses a multi-tasking model for learning lesion segmentation, domain adaptation and age prediction simultaneously. On a subset of the UK Biobank dataset, the proposed method achieves a lesion-level recall, lesion-level F1-measure and Dice overlap value of 0.95, 0.65 and 0.84 respectively, compared to values of 0.75, 0.49 and 0.80 obtained from the pretrained state-of-the-art baseline method. The code for the method is available at https://github.com/v-sundaresan/omnisup_agepred_semidann
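    Domain adversarial training of this kind is commonly implemented with a gradient reversal layer: the layer is the identity in the forward pass, but negates (and scales) gradients in the backward pass, so the feature extractor is trained to confuse the domain classifier while the classifier itself is trained normally. A minimal sketch of that mechanism (an illustration of the standard DANN-style construction, not code from this paper):

    ```python
    import numpy as np

    class GradientReversal:
        """Gradient reversal layer used in domain adversarial training.

        Forward: identity, so domain-classifier training is unaffected.
        Backward: gradients are multiplied by -lambda, so the upstream
        feature extractor descends *up* the domain-classification loss,
        i.e. it learns domain-invariant features.
        """

        def __init__(self, lam=1.0):
            self.lam = lam

        def forward(self, x):
            return x

        def backward(self, grad_output):
            return -self.lam * grad_output
    ```

    The scalar lambda trades off domain invariance against main-task performance and is often ramped up over training.
    
    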


    TEDS-Net: enforcing diffeomorphisms in spatial transformers to guarantee topology preservation in segmentations

    Accurate topology is key when performing meaningful anatomical segmentations; however, it is often overlooked in traditional deep learning methods. In this work we propose TEDS-Net: a novel segmentation method that guarantees accurate topology. Our method is built upon a continuous diffeomorphic framework, which enforces topology preservation. However, in practice, diffeomorphic fields are represented using a finite number of parameters and sampled using methods such as linear interpolation, violating the theoretical guarantees. We therefore introduce additional modifications to more strictly enforce topology preservation. Our network learns how to warp a binary prior, with the desired topological characteristics, to complete the segmentation task. We tested our method on myocardium segmentation from an open-source 2D heart dataset. TEDS-Net preserved topology in 100% of the cases, compared to 90% from the U-Net, without sacrificing Hausdorff distance or Dice performance. Code will be made available at: www.github.com/mwyburd/TEDS-Net
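    Diffeomorphic frameworks of this kind usually obtain the deformation by integrating a stationary velocity field with scaling and squaring: start from a tiny displacement and repeatedly compose the map with itself. The resulting map is invertible and order-preserving, which is what preserves topology. A 1-D sketch of the integration step (TEDS-Net itself works in 2D with further amplitude-limiting modifications; this is only the underlying idea):

    ```python
    import numpy as np

    def integrate_velocity_1d(x, v, n_steps=6):
        """Scaling-and-squaring integration of a stationary 1-D velocity field.

        Start from the small deformation x + v / 2**n_steps, then square
        (self-compose) n_steps times. For a smooth, small velocity field the
        result is monotone, i.e. an order-preserving (topology-preserving)
        map of the line.
        """
        phi = x + v / (2 ** n_steps)       # small initial deformation
        for _ in range(n_steps):
            phi = np.interp(phi, x, phi)   # phi <- phi o phi, linear interp
        return phi

    x = np.linspace(0.0, 1.0, 101)
    v = 0.05 * np.sin(2 * np.pi * x)       # smooth velocity, zero at ends
    phi = integrate_velocity_1d(x, v)      # monotone deformation of [0, 1]
    ```

    The linear interpolation in the composition is exactly the finite sampling the abstract warns about: with a large or rough velocity field it can break monotonicity, which motivates the paper's additional constraints on the fields.
    
    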

    Challenges for machine learning in clinical translation of big data imaging studies

    Combining deep learning image analysis methods and large-scale imaging datasets offers many opportunities for neuroimaging and epidemiology. However, despite these opportunities and the success of deep learning when applied to a range of neuroimaging tasks and domains, significant barriers continue to limit the impact of large-scale datasets and analysis tools. Here, we examine the main challenges and the approaches that have been explored to overcome them. We focus on issues relating to data availability, interpretability, evaluation, and logistical challenges, and discuss the problems that still need to be tackled to enable the success of "big data" deep learning approaches beyond research.