
    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, by definition MTC applications are structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications. In particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
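    The task-graph model described above can be sketched in a few lines. The following is a minimal, illustrative executor (not from the report): tasks are callables, the explicit input/output dependencies form the graph edges, and each task runs once all of its prerequisites have produced results.

```python
from collections import defaultdict, deque

def run_task_graph(tasks, deps):
    """Run a many-task-style graph of discrete tasks.

    tasks: maps task name -> callable.
    deps:  maps task name -> list of prerequisite task names whose
           outputs are passed to it, i.e. the explicit graph edges.
    """
    indegree = {t: len(deps.get(t, [])) for t in tasks}
    children = defaultdict(list)
    for t, prereqs in deps.items():
        for p in prereqs:
            children[p].append(t)
    # Tasks with no unmet dependencies are ready to dispatch.
    ready = deque(t for t, d in indegree.items() if d == 0)
    results = {}
    while ready:
        t = ready.popleft()
        results[t] = tasks[t](*(results[p] for p in deps.get(t, [])))
        for c in children[t]:  # a finished task may unblock its dependents
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    return results
```

    A real MTC runtime would dispatch the ready queue across many nodes and stream data between tasks rather than passing it through memory; the sketch only shows the dependency bookkeeping that makes short-task dispatch overhead matter.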

    Deep Quality Estimation: Creating Surrogate Models for Human Quality Ratings

    Human ratings are abstract representations of segmentation quality. To approximate human quality ratings on scarce expert data, we train surrogate quality estimation models. We evaluate on a complex multi-class segmentation problem, specifically glioma segmentation following the BraTS annotation protocol. The training data features quality ratings from 15 expert neuroradiologists on a scale ranging from 1 to 6 stars for various computer-generated and manual 3D annotations. Even though the networks operate on 2D images and with scarce training data, we can approximate segmentation quality within a margin of error comparable to human intra-rater reliability. Segmentation quality prediction has broad applications. While an understanding of segmentation quality is imperative for the successful clinical translation of automatic segmentation algorithms, it can also play an essential role in training new segmentation models. Due to its split-second inference times, it can be directly applied within a loss function or as a fully-automatic dataset curation mechanism in a federated learning setting.
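    As a rough illustration of the surrogate idea, the toy sketch below fits a linear model mapping features of a segmentation to a 1-6 star rating. This is purely an assumption for brevity: the paper itself trains convolutional networks on 2D image/segmentation pairs, and the feature-based setup and function names here are hypothetical.

```python
import numpy as np

def fit_surrogate(features, ratings):
    """Least-squares fit of a linear surrogate from per-case features
    (e.g. hypothetical volume or confidence summaries) to star ratings."""
    X = np.hstack([features, np.ones((len(features), 1))])  # add bias column
    w, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    return w

def predict_stars(w, features):
    """Predict ratings and clip them onto the 1-6 star scale."""
    X = np.hstack([features, np.ones((len(features), 1))])
    return np.clip(X @ w, 1.0, 6.0)
```

    Clipping keeps predictions on the rating scale; a learned surrogate like this can then score new segmentations without asking an expert, which is the property the abstract exploits for loss functions and dataset curation.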

    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    This two-volume set, LNCS 12962 and 12963, constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These were held jointly at the 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021, in September 2021. The 91 revised papers presented in these volumes were selected from 151 submissions. Due to the COVID-19 pandemic, the conference was held virtually. This is an open access book.

    Uncertainty quantification in medical image synthesis

    Machine learning approaches to medical image synthesis have shown outstanding performance, but often do not convey uncertainty information. In this chapter, we survey uncertainty quantification methods in medical image synthesis and advocate the use of uncertainty to improve clinicians’ trust in machine learning solutions. First, we describe basic concepts in uncertainty quantification and discuss their potential benefits in downstream applications. We then review computational strategies that facilitate inference and identify the main technical and clinical challenges. We provide a first comprehensive review informing how to quantify, communicate, and use uncertainty in medical image synthesis applications.
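    One common family of methods covered by such surveys can be sketched with an ensemble: run several stochastic predictors on the same input and report the spread of their outputs as a per-voxel uncertainty map. The `models` list below stands in for trained synthesis networks and is purely illustrative, not the chapter's method.

```python
import numpy as np

def synthesize_with_uncertainty(models, x):
    """Ensemble-style uncertainty sketch: stack M forward passes,
    return the per-voxel mean as the synthesized image and the
    per-voxel standard deviation as an uncertainty map."""
    preds = np.stack([m(x) for m in models])  # shape (M, *image_shape)
    return preds.mean(axis=0), preds.std(axis=0)
```

    The same pattern covers Monte Carlo dropout (one network, M stochastic passes) and deep ensembles (M independently trained networks); only where the randomness comes from differs.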

    Modelling individual variations in brain structure and function using multimodal MRI

    Every brain is different. Understanding this variability is crucial for investigating the neural substrate underlying individuals’ unique behaviour and for developing personalised diagnosis and treatments. This thesis presents novel computational approaches to studying individual variability in brain structure and function using magnetic resonance imaging (MRI) data. It comprises three main chapters, each addressing a specific challenge in the field.

    In Chapter 3, the thesis proposes a novel Image Quality Transfer (IQT) technique, HQ-augmentation, to accurately localise a Deep Brain Stimulation (DBS) target in low-quality, clinical-like data. Leveraging high-quality diffusion MRI datasets from the Human Connectome Project (HCP), the HQ-augmentation approach is robust to corruptions in data quality while preserving the individual anatomical variability of the DBS target. It outperforms existing alternatives and generalises to unseen low-quality diffusion MRI datasets with different acquisition protocols, such as the UK Biobank (UKB) dataset.

    In Chapter 4, the thesis presents a framework for enhancing the prediction accuracy of individual task-fMRI activation profiles using the variability of resting-state fMRI. Assuming that resting-state functional modes underlie task-evoked activity, this chapter demonstrates that the shape and intensity of individualised task activations can be modelled separately. It introduces the concept of "residualisation" and shows that training on residuals leads to better individualised predictions. The framework’s prediction accuracy, validated on HCP and UKB data, is on par with task-fMRI test-retest reliability, suggesting its potential for supplementing traditional task localisers.

    In Chapter 5, the thesis presents a novel framework for individualised retinotopic mapping using resting-state fMRI, from the primary visual cortex to visual cortex area 4. The proposed approach reproduces task-elicited retinotopy and captures individual differences in retinotopic organisation. The framework delineates the borders of early visual areas more accurately than group-average parcellation and is effective with both high-field 7T and more common 3T resting-state fMRI data, providing a valuable alternative to resource-intensive retinotopy task-fMRI experiments.

    Overall, this thesis demonstrates the potential of advanced MRI analysis techniques to study individual variability in brain structure and function, paving the way for improved clinical applications tailored to individual patients and a better understanding of the neural mechanisms underlying unique human behaviour.
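    The "residualisation" idea mentioned for Chapter 4 can be illustrated with a small sketch (a paraphrase of the concept, not the thesis code): subtract the group-average activation map, model only each subject's deviation from it, and add the average back at prediction time.

```python
import numpy as np

def residualise(maps):
    """Split subject activation maps (subjects x voxels) into a shared
    group-average map and per-subject residuals; a model is then trained
    to predict the residuals, i.e. the individual variability."""
    group_mean = maps.mean(axis=0)
    return maps - group_mean, group_mean

def deresidualise(residuals, group_mean):
    """Recombine predicted residuals with the group average."""
    return residuals + group_mean
```

    The point of the split is that the easy-to-predict group component no longer dominates the training signal, so the model's capacity goes into the individual differences.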

    Advancing efficiency and robustness of neural networks for imaging

    Enabling machines to see and analyze the world is a longstanding research objective. Advances in computer vision have the potential to influence many aspects of our lives, as they can enable machines to tackle a variety of tasks. Great progress in computer vision has been made, catalyzed by recent progress in machine learning and especially the breakthroughs achieved by deep artificial neural networks.

    The goal of this work is to alleviate limitations of deep neural networks that hinder their large-scale adoption in real-world applications. To this end, it investigates methodologies for constructing and training deep neural networks with low computational requirements. Moreover, it explores strategies for achieving robust performance on unseen data. Of particular interest is the application of segmenting volumetric medical scans, because of the technical challenges it imposes as well as its clinical importance. The developed methodologies are generic and of relevance to a broader computer vision and machine learning audience.

    More specifically, this work introduces an efficient 3D convolutional neural network architecture that achieves high performance for segmentation of volumetric medical images, an application previously hindered by the high computational requirements of 3D networks. It then investigates the sensitivity of network performance to hyper-parameter configuration, which we interpret as overfitting the model configuration to the data available during development. It is shown that ensembling a set of models with diverse configurations mitigates this and improves generalization. The thesis then explores how to utilize unlabelled data for learning representations that generalize better. It investigates domain adaptation and introduces an architecture for adversarial networks tailored for the adaptation of segmentation networks. Finally, a novel semi-supervised learning method is proposed that introduces a graph in the latent space of a neural network to capture relations between labelled and unlabelled samples. It then regularizes the embedding to form a compact cluster per class, which improves generalization.
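    The configuration-ensembling observation can be illustrated with a minimal sketch (not the thesis code): average the class probabilities produced by models trained under diverse hyper-parameter configurations, then take the argmax, so that no single configuration's idiosyncrasies dominate the decision.

```python
import numpy as np

def ensemble_predict(models, x):
    """Average predicted class probabilities across models trained with
    different hyper-parameter configurations, then pick the top class."""
    probs = np.stack([m(x) for m in models])  # (n_models, n_classes)
    return int(probs.mean(axis=0).argmax())
```

    Averaging probabilities (rather than hard votes) keeps calibrated disagreement between configurations in the final decision.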