    A Methodology for Evaluating Image Segmentation Algorithms

    The purpose of this paper is to describe a framework for evaluating image segmentation algorithms. Image segmentation consists of object recognition and delineation. In evaluating segmentation methods, three factors need to be considered for both recognition and delineation: precision (reproducibility), accuracy (agreement with truth), and efficiency (time taken). To assess precision, we need to choose a figure of merit (FOM), repeat segmentation while accounting for all sources of variation, and determine the variation in the FOM via statistical analysis. It is usually impossible to establish a true segmentation. Hence, to assess accuracy, we need to choose a surrogate of true segmentation and proceed as for precision. To assess efficiency, both the computational and the user time required for algorithm and operator training and for algorithm execution should be measured and analyzed. Precision, accuracy, and efficiency are interdependent: it is difficult to improve one factor without affecting the others. Segmentation methods must therefore be compared on all three factors, with the weight given to each factor depending on the application.
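
    As a hedged illustration of the precision and accuracy assessments described above, the sketch below uses the Dice overlap as one possible FOM, measures precision as pairwise agreement between repeated segmentations, and measures accuracy against a surrogate of truth. The function names and the choice of Dice are illustrative assumptions, not the paper's prescribed metric.

        import numpy as np

        def dice_fom(seg, ref):
            # Dice overlap: one possible figure of merit (FOM) for binary masks.
            seg, ref = seg.astype(bool), ref.astype(bool)
            denom = seg.sum() + ref.sum()
            return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

        def assess_precision(repeated_segs):
            # Precision (reproducibility): pairwise FOM between repeated
            # segmentations of the same scene under varying conditions.
            n = len(repeated_segs)
            foms = [dice_fom(repeated_segs[i], repeated_segs[j])
                    for i in range(n) for j in range(i + 1, n)]
            return np.mean(foms), np.std(foms, ddof=1)

        def assess_accuracy(segs, surrogate_truth):
            # Accuracy: agreement of each segmentation with a surrogate of truth.
            foms = [dice_fom(s, surrogate_truth) for s in segs]
            return np.mean(foms), np.std(foms, ddof=1)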

    Comparative evaluation of active contour model extensions for automated cardiac MR image segmentation by regional error assessment

    Objective: In the field of cardiac MR image segmentation, active contour models, or snakes, have been used extensively, owing to their promising results and to the numerous extensions proposed to improve their performance. This paper explores a methodology for evaluating cardiac MR image segmentation algorithms that assesses the distance between computer-generated and observer hand-outlined boundaries. This metric was applied to various external-force extensions of the traditional snake, since no systematic comparison had been performed. Materials and methods: Cardiac MR images from six patients were analyzed. Imaging was performed on a 1.5 T MR scanner with ECG-gated balanced steady-state free precession (b-SSFP) sequences. Segmentation performance was established for the traditional snake, the gradient vector flow snake, and the standard and guided pressure-force-based snakes. Pre-treatment with non-linear anisotropic filtering was also compared against non-filtered images. Results: Agreement between manual and algorithmic segmentation was satisfactory for ejection fraction under every segmentation scheme; however, end-systolic and end-diastolic volumes were systematically underestimated. Conclusion: The developed regional error metric provided a more rigorous evaluation of the segmentation schemes than the classical derived parameters based on left-ventricle volume estimation usually used in functional cardiac MR studies; such derived parameters can, furthermore, mask local segmentation errors.
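
    The abstract does not specify the exact form of the regional metric. As a rough sketch under that caveat, the code below computes nearest-point distances from a computer-generated contour to a hand-outlined one and bins them by angular sector to localize error; all names and the sector scheme are illustrative assumptions.

        import numpy as np

        def boundary_distances(auto_pts, manual_pts):
            # Distance from each automatic contour point (N x 2 array of x, y)
            # to the nearest point on the observer's hand-outlined boundary.
            d = np.linalg.norm(auto_pts[:, None, :] - manual_pts[None, :, :], axis=-1)
            return d.min(axis=1)

        def regional_errors(auto_pts, manual_pts, n_sectors=8):
            # Hypothetical regional breakdown: bin boundary errors by angular
            # sector around the manual contour's centroid, so that local errors
            # are not masked by global volume-derived parameters.
            centroid = manual_pts.mean(axis=0)
            dx, dy = (auto_pts - centroid).T
            angles = np.arctan2(dy, dx)
            sectors = (((angles + np.pi) / (2 * np.pi)) * n_sectors).astype(int) % n_sectors
            d = boundary_distances(auto_pts, manual_pts)
            return [d[sectors == k].mean() if (sectors == k).any() else np.nan
                    for k in range(n_sectors)]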

    Methodology for extensive evaluation of semiautomatic and interactive segmentation algorithms using simulated interaction models

    The performance of semiautomatic and interactive segmentation (SIS) algorithms is usually evaluated by employing a small number of human operators to segment the images. The human operators typically provide the approximate location of objects of interest and their boundaries in an interactive phase, followed by an automatic phase in which the segmentation is performed under the constraints of the operator-provided guidance. The segmentation results produced from this small set of interactions do not represent the true capability and potential of the algorithm being evaluated. For example, owing to inter-operator variability, human operators may make choices that yield either overestimated or underestimated results, and their choices may not reflect how the algorithm is used in the field, since interaction may be influenced by operator fatigue and lapses in judgement. Other drawbacks to using human operators to assess SIS algorithms include human error, the scarcity of available expert users, and the expense.

    A methodology for evaluating segmentation performance is proposed here that uses simulated interaction models to programmatically generate large numbers of interactions, ensuring the presence of interactions throughout the object region. These interactions are used to segment the objects of interest, and the resulting segmentations are then analysed using statistical methods. The large number of interactions generated by the simulated interaction models captures the variability in the set of user interactions by treating each pixel inside the object region as an equally probable location for an interaction. Because the full set of possible interactions is computationally intractable, interactions are uniformly sampled at regular intervals to produce a subset that still represents the diverse pattern of the entire set. Categorizing interactions into groups, based on their position inside the object region and on the texture properties of the surrounding image region, enables fine-grained analysis of algorithm performance along these two criteria, and the application of statistical hypothesis testing makes the analysis more accurate and reliable than conventional evaluation of semiautomatic segmentation algorithms.

    The proposed methodology has been demonstrated in two case studies, implementing seven different algorithms with three different interaction modes, for a total of nine segmentation applications. Applying the methodology revealed fine-grained details about the performance of the segmentation algorithms that existing methods could not uncover, owing to the absence of a large, unbiased set of interactions. Its practical application across several algorithms and diverse interaction modes demonstrates its feasibility and generality, and its development into an application for automatic evaluation of SIS algorithm performance looks very promising for users of image segmentation.
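
    A minimal sketch of the uniform interaction sampling described above, assuming a seed-point interaction mode; segment and fom are placeholder callables standing in for the SIS algorithm under test and the chosen figure of merit.

        import numpy as np

        def sample_interactions(object_mask, step=10):
            # Uniformly sample interaction points on a regular grid restricted
            # to the object region: a tractable subset of the set of all
            # per-pixel interactions that preserves its spatial diversity.
            ys, xs = np.mgrid[0:object_mask.shape[0]:step, 0:object_mask.shape[1]:step]
            pts = np.stack([ys.ravel(), xs.ravel()], axis=1)
            return pts[object_mask[pts[:, 0], pts[:, 1]]]

        def evaluate_sis(segment, image, object_mask, fom, step=10):
            # Run the SIS algorithm once per simulated interaction and return
            # the distribution of scores for statistical hypothesis testing.
            seeds = sample_interactions(object_mask, step)
            return np.array([fom(segment(image, tuple(s)), object_mask) for s in seeds])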

    How to collect high quality segmentations: use human or computer drawn object boundaries?

    High quality segmentations must be captured consistently for applications such as biomedical image analysis. While human-drawn segmentations are often collected because they provide a consistent level of quality, computer-drawn segmentations can be collected efficiently and inexpensively. In this paper, we examine how to leverage available human and computer resources to consistently create high quality segmentations, and we propose a quality control methodology. We demonstrate how to apply this approach using crowdsourced and domain expert votes for the "best" segmentation from a collection of human- and computer-drawn segmentations for 70 objects from a public dataset and 274 objects from biomedical images. We publicly share the library of biomedical images, which includes 1,879 manual annotations of the boundaries of the 274 objects. For the 344 objects, we found that no single segmentation source was preferred and that human annotations are not always preferred over computer annotations. These results motivated us to examine the traditional approach to evaluating segmentation algorithms, which compares the segmentations produced by the algorithms to manual annotations on benchmark datasets. We found that algorithm benchmarking results change when the comparison is made to consensus-voted segmentations instead. Our results led us to suggest a new segmentation approach that uses machine learning to predict the optimal segmentation source, together with a modified segmentation evaluation approach. (National Science Foundation, IIS-0910908)
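
    As a hedged illustration of consensus voting over mixed human and computer segmentation sources, the sketch below picks the most-voted segmentation per object and, alternatively, builds a pixel-wise majority mask. Both schemes are assumptions, since the abstract does not specify the voting rule.

        from collections import Counter
        import numpy as np

        def best_by_votes(votes):
            # votes: list of source labels chosen as "best" by crowd or expert
            # voters for one object; returns the most-voted source.
            return Counter(votes).most_common(1)[0][0]

        def majority_mask(segmentations):
            # Pixel-wise majority vote over binary masks: an alternative way
            # to form a consensus-voted segmentation for benchmarking.
            stack = np.stack([s.astype(bool) for s in segmentations])
            return stack.sum(axis=0) > len(segmentations) / 2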

    Image segmentation evaluation using an integrated framework

    In this paper we present a general framework we have developed for running and evaluating automatic image and video segmentation algorithms. The framework was designed for effortless integration of existing and forthcoming image segmentation algorithms, letting researchers focus on the development and evaluation of segmentation methods while relying on the framework for encoding/decoding and visualization. We then use this framework to automatically evaluate four distinct segmentation algorithms, and we present and discuss the results and statistical findings of the experiment.
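
    One plausible shape for such a framework, sketched under the assumption of a simple registry of image-to-label-map callables; the names are hypothetical, and the encoding/decoding and visualization layers are omitted.

        # Hypothetical registry: each segmentation algorithm is a callable
        # mapping an image to a label map, so new methods integrate simply
        # by registering themselves.
        ALGORITHMS = {}

        def register(name):
            def deco(fn):
                ALGORITHMS[name] = fn
                return fn
            return deco

        def evaluate_all(image, reference, metric):
            # Run every registered algorithm and score it with a common metric.
            return {name: metric(fn(image), reference)
                    for name, fn in ALGORITHMS.items()}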

    A framework for evaluating stereo-based pedestrian detection techniques

    Automated pedestrian detection, counting, and tracking have recently received significant attention in the computer vision community. A variety of techniques have accordingly been investigated, using both traditional 2-D computer vision techniques and, more recently, 3-D stereo information. To date, however, quantitative assessment of the performance of stereo-based pedestrian detection has been problematic, mainly owing to the lack of standard stereo-based test data and of an agreed methodology for carrying out the evaluation; this has forced researchers into making subjective comparisons between competing approaches. In this paper, we propose a framework for the quantitative evaluation of a short-baseline stereo-based pedestrian detection system. We provide freely available synthetic and real-world test data and recommend a set of evaluation metrics, allowing researchers to benchmark systems not only against other stereo-based approaches but also against more traditional 2-D approaches. To illustrate its usefulness, we demonstrate the application of this framework to the evaluation of our own recently proposed technique for pedestrian detection and tracking.
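
    The abstract recommends a set of evaluation metrics without listing them. A common choice for detection tasks, sketched here as an assumption rather than the paper's actual metric set, is precision and recall computed from greedy one-to-one matching of detected and ground-truth bounding boxes by intersection-over-union.

        def iou(a, b):
            # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
            ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
            ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
            union = ((a[2] - a[0]) * (a[3] - a[1]) +
                     (b[2] - b[0]) * (b[3] - b[1]) - inter)
            return inter / union if union else 0.0

        def precision_recall(detections, ground_truth, thresh=0.5):
            # Greedily match each detection to its best unmatched ground-truth
            # box; matches above the IoU threshold count as true positives.
            unmatched, tp = list(ground_truth), 0
            for det in detections:
                best = max(unmatched, key=lambda gt: iou(det, gt), default=None)
                if best is not None and iou(det, best) >= thresh:
                    tp += 1
                    unmatched.remove(best)
            precision = tp / len(detections) if detections else 0.0
            recall = tp / len(ground_truth) if ground_truth else 0.0
            return precision, recall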

    Color image segmentation using a spatial k-means clustering algorithm

    This paper details the implementation of a new adaptive technique for color-texture segmentation that is a generalization of the standard K-Means algorithm. The standard K-Means algorithm produces accurate segmentation results only when applied to images defined by regions homogeneous in texture and color, since no local constraints are applied to impose spatial continuity. In addition, the initialization of the K-Means algorithm is problematic, and the initial cluster centers are usually picked at random. In this paper we detail a novel technique for selecting the dominant colors of the input image using information from its color histograms. The main contribution of this work is a generalization of the K-Means algorithm that includes the primary features describing color smoothness and texture complexity in the pixel-assignment process. The resulting color segmentation scheme has been applied to a large number of natural images, and the experimental data indicate the robustness of the newly developed segmentation algorithm.
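
    A rough sketch of the histogram-based initialization described above, followed by a plain K-Means loop. The spatial-continuity and texture terms that constitute the paper's actual contribution would enter the distance computation, which is left color-only here; all names are illustrative.

        import numpy as np

        def dominant_color_centers(image, k, bins=16):
            # Initialize cluster centers from the k most populated bins of a
            # coarse RGB histogram instead of random initialization.
            pixels = image.reshape(-1, 3).astype(float)
            idx = np.clip((pixels / 256 * bins).astype(int), 0, bins - 1)
            flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
            counts = np.bincount(flat, minlength=bins ** 3)
            top = np.argsort(counts)[::-1][:k]
            centers = np.stack([np.unravel_index(t, (bins,) * 3) for t in top]).astype(float)
            return (centers + 0.5) * (256.0 / bins)  # bin centers in RGB space

        def kmeans_labels(pixels, centers, iters=10):
            # Standard assignment/update loop; the paper's spatial and texture
            # constraints would modify the distance term below.
            for _ in range(iters):
                d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=-1)
                labels = d.argmin(axis=1)
                for j in range(len(centers)):
                    if (labels == j).any():
                        centers[j] = pixels[labels == j].mean(axis=0)
            return labels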