22,490 research outputs found

    Towards continual learning in medical imaging

    This work investigates continual learning of two segmentation tasks in brain MRI with neural networks. To explore how well current methods counter catastrophic forgetting of the first task when a new one is learned, we investigate elastic weight consolidation, a recently proposed method based on Fisher information that was originally evaluated on reinforcement learning of Atari games. We use it to learn segmentation of normal brain structures and, subsequently, segmentation of white matter lesions. Our findings show that this method reduces catastrophic forgetting, though substantial room for improvement remains in these challenging settings for continual learning.
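    To make the mechanism concrete, here is a minimal PyTorch-style sketch of the elastic weight consolidation penalty, assuming a diagonal Fisher estimate computed from task-A gradients; the regularization strength `lam`, the loss function, and the data loader are placeholders rather than the paper's actual configuration.

    ```python
    import torch

    def fisher_diagonal(model, loss_fn, loader):
        """Estimate the diagonal Fisher information from squared gradients
        of the task-A loss; it weights how 'important' each parameter is."""
        fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        model.eval()
        for x, y in loader:
            model.zero_grad()
            loss_fn(model(x), y).backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    fisher[n] += p.grad.detach() ** 2
        return {n: f / max(len(loader), 1) for n, f in fisher.items()}

    def ewc_penalty(model, fisher, anchor, lam=1.0):
        """Quadratic penalty (lam/2) * sum_i F_i * (theta_i - theta*_i)^2 that
        anchors task-A-important parameters while task B is being learned."""
        penalty = sum((fisher[n] * (p - anchor[n]) ** 2).sum()
                      for n, p in model.named_parameters())
        return 0.5 * lam * penalty

    # During task-B training, the total loss would then be:
    # loss = task_b_loss + ewc_penalty(model, fisher, anchor)
    ```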

    Development and Application of Semi-automated ITK Tools for the Segmentation of Brain MR Images

    Image segmentation is the process of identifying regions of interest in digital images. It plays an important role in medical image processing, enabling a variety of clinical applications, and facilitates the detection of abnormalities such as cancerous lesions in the brain. Although numerous efforts in recent years have advanced the technique, no single approach solves the segmentation problem for the large variety of image modalities in use today; consequently, brain MRI segmentation remains a challenging task. The purpose of this thesis is to demonstrate brain MRI segmentation for the delineation of tumors, ventricles, and other anatomical structures, using Insight Segmentation and Registration Toolkit (ITK) routines as the foundation. ITK is an open-source software system created to support the Visible Human Project, which provides complete, anatomically detailed, three-dimensional representations of the normal male and female human bodies. Currently under active development, ITK employs leading-edge segmentation and registration algorithms in two, three, and more dimensions. A goal of this thesis is to implement those algorithms to facilitate brain segmentation for a brain cancer research scientist.
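    As a hedged illustration of the kind of ITK-based pipeline the thesis describes, the sketch below uses SimpleITK, the simplified Python interface to ITK, to grow a ventricle segmentation from a seed point; the file names, seed coordinates, and intensity thresholds are invented placeholders, not values from the thesis.

    ```python
    import SimpleITK as sitk

    # Load a T1-weighted brain MR volume (the path is a placeholder).
    image = sitk.ReadImage("brain_t1.nii.gz", sitk.sitkFloat32)

    # Edge-preserving smoothing before region growing, a common ITK pipeline step.
    smoothed = sitk.CurvatureFlow(image1=image, timeStep=0.125,
                                  numberOfIterations=5)

    # Region growing from a seed assumed to lie inside a ventricle: voxels
    # connected to the seed with intensity in [lower, upper] are labeled 1.
    seed = (128, 120, 90)  # illustrative voxel index, not a real landmark
    ventricle_mask = sitk.ConnectedThreshold(
        smoothed, seedList=[seed], lower=0.0, upper=120.0, replaceValue=1
    )

    sitk.WriteImage(ventricle_mask, "ventricle_mask.nii.gz")
    ```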

    The ENIGMA Stroke Recovery Working Group: Big data neuroimaging to study brain–behavior relationships after stroke

    The goal of the Enhancing Neuroimaging Genetics through Meta‐Analysis (ENIGMA) Stroke Recovery working group is to understand brain–behavior relationships using well‐powered meta‐ and mega‐analytic approaches. ENIGMA Stroke Recovery holds data from over 2,100 stroke patients collected across 39 research studies and 10 countries, comprising the largest multisite retrospective stroke data collaboration to date. This article outlines the efforts taken by the working group to develop neuroinformatics protocols and methods to manage multisite stroke brain magnetic resonance imaging, behavioral, and demographic data. Specifically, the processes for scalable data intake and preprocessing, multisite data harmonization, and large‐scale stroke lesion analysis are described, and challenges unique to this type of big-data collaboration in stroke research are discussed. Finally, future directions and limitations, as well as recommendations for improving data harmonization through prospective data collection and data management, are provided.
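    The abstract does not spell out a harmonization algorithm; purely as an illustration of what multisite harmonization has to accomplish, the sketch below removes per-site location and scale effects from derived imaging features using NumPy. Methods actually used in ENIGMA-style collaborations, such as ComBat, additionally pool information across features and protect covariates of interest.

    ```python
    import numpy as np

    def center_scale_by_site(features, sites):
        """Crude multisite harmonization: remove each site's mean and
        variance per feature, then restore the pooled mean and variance.
        A simplified stand-in for ComBat-style methods, not ENIGMA's pipeline."""
        features = np.asarray(features, dtype=float)  # (subjects, features)
        sites = np.asarray(sites)                     # site label per subject
        out = np.empty_like(features)
        pooled_mean = features.mean(axis=0)
        pooled_std = features.std(axis=0)
        for s in np.unique(sites):
            idx = sites == s
            site_mean = features[idx].mean(axis=0)
            site_std = features[idx].std(axis=0)
            site_std[site_std == 0] = 1.0  # guard against constant features
            out[idx] = ((features[idx] - site_mean) / site_std
                        * pooled_std + pooled_mean)
        return out
    ```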

    Soft Null Hypotheses: A Case Study of Image Enhancement Detection in Brain Lesions

    This work is motivated by a study of a population of multiple sclerosis (MS) patients using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) to identify active brain lesions. At each visit, a contrast agent is administered intravenously and a series of images is acquired to reveal the location and activity of MS lesions within the brain. Our goal is to identify and quantify lesion-enhancement locations at the subject level and lesion-enhancement patterns at the population level. With this example, we address the difficult problem of transforming a qualitative scientific null hypothesis, such as "this voxel does not enhance", into a well-defined, numerically testable null hypothesis based on existing data. We call the procedure "soft null hypothesis" testing, as opposed to standard "hard null hypothesis" testing. The problem is fundamentally different from: 1) testing when a quantitative null hypothesis is given; 2) clustering using a mixture distribution; or 3) identifying a reasonable threshold under a parametric null assumption. We analyze a total of 20 subjects scanned at 63 visits (~30 GB), the largest such population of clinical brain images.
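    The abstract does not give the construction itself; one generic way to turn "this voxel does not enhance" into a numerically testable hypothesis, sketched below with NumPy, is to estimate the null distribution empirically from reference voxels assumed to be non-enhancing. The function name and inputs are illustrative, not the paper's procedure.

    ```python
    import numpy as np

    def soft_null_pvalues(enhancement, reference):
        """Empirical right-tailed p-values for 'this voxel does not enhance':
        the null is estimated from reference voxels assumed non-enhancing
        rather than from a parametric assumption."""
        reference = np.sort(np.asarray(reference).ravel())
        enhancement = np.asarray(enhancement).ravel()
        n = reference.size
        # searchsorted gives #(reference < observed) for each voxel, so
        # n - ranks counts null values at least as extreme; +1 keeps p > 0.
        ranks = np.searchsorted(reference, enhancement, side="left")
        return (n - ranks + 1) / (n + 1)
    ```

    Small p-values then flag voxels whose enhancement is extreme relative to the data-driven null, which is the role the "soft null hypothesis" plays at the subject level.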