
    Plumes in kinetic transport: how the simple random walk can be too simple

    We consider a discrete-time particle model for kinetic transport on the two-dimensional integer lattice. The particle can move due to advection in the x-direction and due to dispersion. This happens while the particle is free; it can also be adsorbed, in which case it does not move. When the dispersion of the particle is modeled by a simple random walk, strange phenomena occur. In the second half of the paper, we resolve these problems and give expressions for the shape of the plume consisting of many particles.
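    As a toy illustration of the model just described, the following sketch simulates one particle that is either free (advected in the x-direction and dispersed by a simple random walk) or adsorbed (immobile). All rates and parameter names are assumptions for illustration, not the paper's calibration.

    ```python
    import random

    def simulate_particle(steps, p_adsorb=0.1, p_desorb=0.3, p_advect=0.5):
        """One particle on the 2-D integer lattice; rates are illustrative."""
        x, y = 0, 0
        adsorbed = False
        for _ in range(steps):
            if adsorbed:
                # An adsorbed particle does not move; it may desorb.
                if random.random() < p_desorb:
                    adsorbed = False
                continue
            if random.random() < p_adsorb:
                adsorbed = True
                continue
            # Advection: biased drift in the x-direction.
            if random.random() < p_advect:
                x += 1
            # Dispersion: one step of a simple random walk.
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
        return x, y

    # A plume is the empirical distribution of many independent particles.
    plume = [simulate_particle(1000) for _ in range(10000)]
    ```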

    Transfer learning improves supervised image segmentation across imaging protocols

    The variation between images obtained with different scanners or different imaging protocols presents a major challenge in automatic segmentation of biomedical images. This variation especially hampers the application of otherwise successful supervised-learning techniques, which, in order to perform well, often require a large amount of labeled training data that is exactly representative of the target data. We therefore propose to use transfer learning for image segmentation. Transfer-learning techniques can cope with differences in distributions between training and target data, and may therefore improve performance over supervised learning for segmentation across scanners and scan protocols. We present four transfer classifiers that can train a classification scheme with only a small amount of representative training data, in addition to a larger amount of other training data with slightly different characteristics. The performance of the four transfer classifiers was compared to that of standard supervised classification on two magnetic resonance imaging brain-segmentation tasks with multi-site data: white-matter, gray-matter, and cerebrospinal-fluid segmentation; and white-matter-lesion/MS-lesion segmentation. The experiments showed that when only a small amount of representative training data is available, transfer learning can greatly outperform common supervised-learning approaches, reducing classification errors by up to 60%.
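    The four transfer classifiers themselves are specific to the paper, but the underlying idea of combining a small representative set with a larger, differently distributed set can be sketched with a single weighted classifier. The weight value and the synthetic data below are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical voxel features: a small representative set from the
    # target protocol and a large set from a shifted protocol.
    X_rep = rng.normal(0.0, 1.0, size=(200, 5))
    y_rep = (X_rep[:, 0] > 0).astype(int)
    X_other = rng.normal(0.5, 1.2, size=(5000, 5))
    y_other = (X_other[:, 0] > 0.5).astype(int)

    X = np.vstack([X_rep, X_other])
    y = np.concatenate([y_rep, y_other])
    # Up-weight the representative samples so they dominate the decision
    # boundary without discarding the larger auxiliary set.
    w = np.concatenate([np.full(len(y_rep), 10.0), np.ones(len(y_other))])

    clf = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)
    ```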

    Transfer learning by feature-space transformation: A method for Hippocampus segmentation across scanners

    Many successful approaches in MR brain segmentation use supervised voxel classification, which requires manually labeled training images that are representative of the test images to segment. However, the performance of such methods often deteriorates if training and test images are acquired with different scanners or scanning parameters, since this leads to differences in feature representations between training and test data. In this paper we propose a feature-space transformation (FST) to overcome such differences in feature representations. The proposed FST is derived from unlabeled images of a subject that was scanned with both the source and the target scan protocol. After an affine registration, these images give a mapping between source and target voxels in the feature space. This mapping is then used to map all training samples to the feature representation of the test samples. We evaluated the benefit of the proposed FST on hippocampus segmentation. Experiments were performed on two datasets: one with relatively small differences between training and test images and one with large differences. In both cases, the FST significantly improved performance compared to using only image normalization. Additionally, we showed that our FST can be used to improve the performance of a state-of-the-art patch-based atlas-fusion technique in the case of large differences between scanners.
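    A minimal sketch of the FST idea, assuming (unlike the paper, which derives the mapping directly from the paired voxels) that a linear regression is fitted on the registered voxel pairs; all array names, shapes, and values are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)

    # Paired voxel features of one subject scanned with both protocols,
    # matched after affine registration (synthetic stand-ins here).
    F_source = rng.normal(size=(10000, 4))
    F_target = 1.3 * F_source + 0.2 + rng.normal(scale=0.05, size=F_source.shape)

    # Learn the source-to-target feature-space mapping.
    fst = LinearRegression().fit(F_source, F_target)

    # Map labeled source-protocol training voxels into the target
    # feature space before training the segmentation classifier.
    X_train = rng.normal(size=(500, 4))
    X_train_mapped = fst.predict(X_train)
    ```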

    MRBrainS Challenge: Online Evaluation Framework for Brain Image Segmentation in 3T MRI Scans

    Many methods have been proposed for tissue segmentation in brain MRI scans, and this multitude complicates the choice of one method over the others. We have therefore established the MRBrainS online evaluation framework for evaluating (semi-)automatic algorithms that segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) in 3T brain MRI scans of elderly subjects (65-80 y). Participants apply their algorithms to the provided data, after which their results are evaluated and ranked. Full manual segmentations of GM, WM, and CSF are available for all scans and serve as the reference standard. Five datasets are provided for training and fifteen for testing. The evaluated methods are ranked by their overall performance in segmenting GM, WM, and CSF, assessed with three metrics (the Dice coefficient, the 95th-percentile Hausdorff distance (H95), and the absolute volume difference (AVD)); the results are published on the MRBrainS13 website. We present the results of eleven segmentation algorithms that participated in the MRBrainS13 challenge workshop at MICCAI, where the framework was launched, and of three commonly used freeware packages: FreeSurfer, FSL, and SPM. The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best-performing method for the segmentation goal at hand.

    Acknowledgments: This study was financially supported by IMDI Grant 104002002 (Brainbox) from ZonMw, the Netherlands Organisation for Health Research and Development, with in-kind sponsoring by Philips, the University Medical Center Utrecht, and Eindhoven University of Technology. The authors would like to acknowledge the following members of the Utrecht Vascular Cognitive Impairment Study Group who were not included as coauthors of this paper but were involved in the recruitment of study participants and MRI acquisition at the UMC Utrecht (in alphabetical order by department): E. van den Berg, M. Brundel, S. Heringa, and L. J. Kappelle of the Department of Neurology; P. R. Luijten and W. P. Th. M. Mali of the Department of Radiology; and A. Algra and G. E. H. M. Rutten of the Julius Center for Health Sciences and Primary Care. The research of Geert Jan Biessels and the VCI group was financially supported by VIDI Grant 91711384 from ZonMw and by Grant 2010T073 of the Netherlands Heart Foundation. The research of Jeroen de Bresser is financially supported by a research talent fellowship of the University Medical Center Utrecht (Netherlands). The research of Annegreet van Opbroek and Marleen de Bruijne is financially supported by a research grant from NWO (the Netherlands Organisation for Scientific Research). The authors would like to acknowledge MeVis Medical Solutions AG (Bremen, Germany) for providing MeVisLab. Duygu Sarikaya and Liang Zhao acknowledge their advisor, Professor Jason Corso, for his guidance. Duygu Sarikaya is supported by NIH 1 R21CA160825-01, and Liang Zhao is partially supported by the China Scholarship Council (CSC).
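    For reference, two of the three challenge metrics are straightforward to compute on binary masks; a minimal sketch is below. H95, the 95th-percentile Hausdorff distance, additionally requires extracting surface distances and is omitted here.

    ```python
    import numpy as np

    def dice(seg, ref):
        """Dice overlap between two boolean masks."""
        seg, ref = seg.astype(bool), ref.astype(bool)
        return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

    def avd(seg, ref):
        """Absolute volume difference, as a percentage of the reference volume."""
        return 100.0 * abs(int(seg.sum()) - int(ref.sum())) / ref.sum()
    ```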

    Weighting training images by maximizing distribution similarity for supervised segmentation across scanners

    Many automatic segmentation methods are based on supervised machine learning. Such methods have proven to perform well, on the condition that they are trained on a sufficiently large, manually labeled training set that is representative of the images to segment. However, due to differences between scanners, scanning parameters, and patients, such a training set may be difficult to obtain. We present a transfer-learning approach to segmentation by multi-feature voxelwise classification. The presented method can be trained on a heterogeneous set of training images that may be obtained with different scanners than the target image. In our approach, each training image is given a weight based on the distribution of its voxels in the feature space. These image weights are chosen so as to minimize the difference between the weighted probability density function (PDF) of the voxels of the training images and the PDF of the voxels of the target image. The voxels and weights of the training images are then used to train a weighted classifier. We tested our method on three segmentation tasks: brain-tissue segmentation, skull stripping, and white-matter-lesion segmentation. For all three applications, the proposed weighted classifier significantly outperformed an unweighted classifier trained on all training images, reducing classification errors by up to 42%. For brain-tissue segmentation and skull stripping, our method even significantly outperformed the traditional approach of training on representative training images from the same study as the target image.
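    One simple way to realize the described weighting, assuming 1-D intensity histograms stand in for the paper's multi-feature voxel PDFs: estimate each training image's PDF and the target image's PDF on a shared grid, then solve a non-negative least-squares problem for the image weights.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def image_weights(train_images, target_image, bins=64):
        """Non-negative image weights whose weighted training PDF best
        matches the target PDF (1-D histogram sketch, for illustration)."""
        all_images = list(train_images) + [target_image]
        lo = min(img.min() for img in all_images)
        hi = max(img.max() for img in all_images)
        edges = np.linspace(lo, hi, bins + 1)
        # One PDF per training image, stacked as columns.
        pdfs = np.stack([np.histogram(img, bins=edges, density=True)[0]
                         for img in train_images], axis=1)
        target_pdf = np.histogram(target_image, bins=edges, density=True)[0]
        w, _ = nnls(pdfs, target_pdf)
        return w / w.sum()  # normalize so the weights sum to one
    ```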

    Feature-space transformation improves supervised segmentation across scanners

    Image-segmentation techniques based on supervised classification generally perform well on the condition that training and test samples have the same feature distribution. However, if training and test images are acquired with different scanners or scanning parameters, their feature distributions can differ substantially, which can hurt the performance of such techniques. We propose a feature-space-transformation method to overcome these differences in feature distributions. Our method learns a mapping of the feature values of training voxels to the values observed in images from the test scanner. This transformation is learned from unlabeled images of subjects scanned on both the training scanner and the test scanner. We evaluated our method on hippocampus segmentation on 27 images from the Harmonized Hippocampal Protocol (HarP), a heterogeneous dataset consisting of 1.5T and 3T MR images. The results showed that our feature-space transformation improved the Dice overlap of segmentations obtained with an SVM classifier from 0.36 to 0.85 when only 10 atlases were used, and from 0.79 to 0.85 when around 100 atlases were used.
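    Building on the FST sketch shown earlier, an end-to-end sketch of the evaluation described here: train an SVM on training voxels mapped into the test scanner's feature space, predict on test voxels, and score with Dice. All data below are synthetic placeholders, not the HarP images.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)

    # Transformed training features and labels (placeholders).
    X_mapped = rng.normal(size=(1000, 4))
    y_train = (X_mapped[:, 0] > 0).astype(int)

    clf = SVC(kernel="rbf").fit(X_mapped, y_train)

    # Predict on test-scanner voxels and compare with the reference
    # labels using the Dice overlap (see the dice() sketch above).
    X_test = rng.normal(size=(400, 4))
    y_ref = (X_test[:, 0] > 0).astype(int)
    y_pred = clf.predict(X_test)
    d = (2.0 * np.logical_and(y_pred == 1, y_ref == 1).sum()
         / ((y_pred == 1).sum() + (y_ref == 1).sum()))
    ```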