    TED: a tolerant edit distance for segmentation evaluation

    In this paper, we present a novel error measure to compare a computer-generated segmentation of images or volumes against ground truth. This measure, which we call Tolerant Edit Distance (TED), is motivated by two observations that we usually encounter in biomedical image processing: (1) Some errors, like small boundary shifts, are tolerable in practice. Which errors are tolerable is application-dependent and should be explicitly expressible in the measure. (2) Non-tolerable errors have to be corrected manually. The effort needed to do so should be reflected by the error measure. Our measure is the minimal weighted sum of split and merge operations to apply to one segmentation such that it resembles another segmentation within specified tolerance bounds. This is in contrast to other commonly used measures like the Rand index or variation of information, which integrate small, but tolerable, differences. Additionally, the TED provides intuitive numbers and allows the localization and classification of errors in images or volumes. We demonstrate the applicability of the TED on 3D segmentations of neurons in electron microscopy images, where topological correctness is arguably more important than exact boundary locations. Furthermore, we show that the TED is not limited to evaluation tasks. We use it as the loss function in a max-margin learning framework to find parameters of an automatic neuron segmentation algorithm. We show that training to minimize the TED, i.e., to minimize crucial errors, leads to higher segmentation accuracy compared to other learning methods.
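    The sketch below illustrates the split/merge idea behind the TED in a much simplified form: it builds an overlap table between two label volumes, discards overlaps below a voxel threshold as "tolerable" (a crude stand-in for the tolerance bounds of the paper, which the authors formulate as an optimization over boundary-shifted relabelings), and counts the remaining splits and merges. The function name, the min_overlap parameter, and the unit costs are illustrative assumptions, not the authors' implementation.

import numpy as np

def toy_tolerant_edit_distance(seg_a, seg_b, min_overlap=5,
                               split_cost=1.0, merge_cost=1.0):
    # Crude TED-style score between two label volumes (hypothetical helper,
    # not the ILP-based formulation of the paper).
    a = seg_a.ravel()
    b = seg_b.ravel()
    # Contingency table: unique (label_a, label_b) pairs and their overlap sizes.
    pairs, counts = np.unique(np.stack([a, b]), axis=1, return_counts=True)
    # Tolerance approximation: ignore tiny overlaps (e.g. small boundary shifts).
    pairs = pairs[:, counts >= min_overlap]
    # A label of seg_a overlapping k > 1 labels of seg_b needs k - 1 splits.
    _, a_deg = np.unique(pairs[0], return_counts=True)
    splits = int(np.sum(a_deg - 1))
    # A label of seg_b covered by k > 1 labels of seg_a needs k - 1 merges.
    _, b_deg = np.unique(pairs[1], return_counts=True)
    merges = int(np.sum(b_deg - 1))
    return split_cost * splits + merge_cost * merges, splits, merges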

    A Comparison Between Alignment and Integral Based Kernels for Vessel Trajectories

    In this paper we present a comparison between two important types of similarity measures for moving object trajectories for machine learning from vessel movement data. These similarities are compared in the tasks of clustering, classification and outlier detection. The first similarity type are alignment measures, such as dynamic time warping and edit distance. The second type are based on the integral over time between two trajectories. Following earlier work, we define these measures in the context of kernel methods, which provide state-of-the-art, robust algorithms for the tasks studied. Furthermore, we include the influence of applying piecewise linear segmentation as pre-processing to the vessel trajectories when computing alignment measures, since this has been shown to have a positive effect on computation time and performance. In our experiments the alignment-based measures show the best performance. Regular versions of edit distance give the best performance in clustering and classification, whereas the softmax variant of dynamic time warping works best in outlier detection. Moreover, piecewise linear segmentation has a positive effect on alignments, which seems to be due to the fact that salient points in a trajectory, especially important in clustering and outlier detection, are highlighted by the segmentation and have a large influence on the alignments.
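    As a minimal illustration of the alignment-based measures discussed here, the sketch below computes classic dynamic time warping between two 2D trajectories and then exponentiates the alignment cost to obtain a similarity, which is one common way to plug such a measure into kernel methods. The function names and the bandwidth parameter gamma are illustrative assumptions; the paper's own kernel constructions (including the softmax DTW variant) are not reproduced here, and a naive exp(-gamma * DTW) similarity is not guaranteed to be positive definite.

import numpy as np

def dtw_distance(traj_a, traj_b):
    # Classic dynamic time warping between two trajectories given as
    # arrays of shape (n, 2) and (m, 2) with (x, y) positions.
    n, m = len(traj_a), len(traj_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(traj_a[i - 1] - traj_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # step in traj_a only
                                 cost[i, j - 1],      # step in traj_b only
                                 cost[i - 1, j - 1])  # step in both
    return cost[n, m]

def dtw_similarity(traj_a, traj_b, gamma=0.1):
    # Turn the alignment cost into a similarity score (illustrative choice).
    return np.exp(-gamma * dtw_distance(traj_a, traj_b))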