    Distance measures for image segmentation evaluation

    In this paper we present a study of evaluation measures that enable the quantification of the quality of an image segmentation result. Despite significant advances in image segmentation techniques, their evaluation has thus far been largely subjective. Typically, the effectiveness of a new algorithm is demonstrated only by the presentation of a few segmented images and is otherwise left to subjective evaluation by the reader. Quantitative evaluation criteria are useful for several applications: the comparison of segmentation results, the automatic selection of the best-fitting parameters of a segmentation method for a given image, or the definition of new segmentation methods by optimization. We first present the state of the art of distance-based evaluation measures, and then we compare several evaluation criteria.
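
    As an illustration of the kind of distance measure such a study covers, the sketch below computes the variation of information between two label maps, a widely used region-based distance. It is provided for illustration only and is not necessarily one of the criteria compared in the paper.

```python
# Variation of information VI(A, B) = H(A|B) + H(B|A) between two segmentations,
# estimated from the joint histogram of their labels. Illustrative sketch only.
import numpy as np

def variation_of_information(seg_a, seg_b):
    a, b = np.ravel(seg_a), np.ravel(seg_b)
    joint = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(joint, (a, b), 1.0)           # joint label histogram
    p_ab = joint / a.size                   # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginals
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0                           # skip empty cells (0 * log 0 = 0)
    p_a_safe = np.where(p_a > 0, p_a, 1.0)
    p_b_safe = np.where(p_b > 0, p_b, 1.0)
    h_a_given_b = -np.sum(p_ab[nz] * np.log((p_ab / p_b_safe)[nz]))
    h_b_given_a = -np.sum(p_ab[nz] * np.log((p_ab / p_a_safe)[nz]))
    return h_a_given_b + h_b_given_a

seg1 = np.array([[0, 0, 1], [0, 1, 1]])
seg2 = np.array([[0, 0, 0], [1, 1, 1]])
print(variation_of_information(seg1, seg2))  # 0 only if the partitions coincide
```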

    Performance evaluation of image segmentation

    In spite of significant advances in image segmentation techniques, evaluation of these methods thus far has been largely subjective. Typically, the effectiveness of a new algorithm is demonstrated only by the presentation of a few segmented images that are evaluated by some method, or it is otherwise left to subjective evaluation by the reader. We propose a new approach for the evaluation of segmentation that takes into account not only the accuracy of the boundary localization of the created segments but also the under-segmentation and over-segmentation effects, regardless of the number of regions in each partition. In addition, it takes into account the way humans perceive visual information. This new metric can be applied both to automatically provide a ranking among different segmentation algorithms and to find an optimal set of input parameters of a given algorithm.
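
    To make the over- and under-segmentation notions concrete, the sketch below counts how many test regions substantially overlap each reference region and vice versa. It is a hypothetical illustration with an assumed overlap threshold, not the metric proposed in the paper.

```python
# Illustrative over-/under-segmentation counts between a reference and a test
# segmentation, based on the joint label histogram. Not the paper's metric.
import numpy as np

def fragmentation_counts(reference, test, min_overlap=0.1):
    ref, tst = np.ravel(reference), np.ravel(test)
    joint = np.zeros((ref.max() + 1, tst.max() + 1))
    np.add.at(joint, (ref, tst), 1.0)
    ref_sizes = joint.sum(axis=1, keepdims=True)
    tst_sizes = joint.sum(axis=0, keepdims=True)
    # Fraction of each reference region covered by each test region, and vice versa.
    frac_of_ref = joint / np.maximum(ref_sizes, 1)
    frac_of_tst = joint / np.maximum(tst_sizes, 1)
    present_ref = ref_sizes[:, 0] > 0
    present_tst = tst_sizes[0, :] > 0
    # Average number of substantially overlapping regions per region:
    # values well above 1 indicate splitting (over-) or merging (under-segmentation).
    over = (frac_of_ref >= min_overlap)[present_ref].sum(axis=1).mean()
    under = (frac_of_tst >= min_overlap)[:, present_tst].sum(axis=0).mean()
    return over, under
```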

    Influence of image segmentation on one-dimensional fluid dynamics predictions in the mouse pulmonary arteries

    Computational fluid dynamics (CFD) models are emerging as tools for assisting in the diagnostic assessment of cardiovascular disease. Recent advances in image segmentation have made subject-specific modelling of the cardiovascular system a feasible task, which is particularly important for pulmonary hypertension (PH), whose diagnosis requires a combination of invasive and non-invasive procedures. Uncertainty in image segmentation can easily propagate to CFD model predictions, making uncertainty quantification crucial for subject-specific models. This study quantifies the variability of one-dimensional (1D) CFD predictions by propagating the uncertainty of network geometry and connectivity to blood pressure and flow predictions. We analyse multiple segmentations of an image of an excised mouse lung, obtained using different pre-segmentation parameters. A custom algorithm extracts vessel length, vessel radii, and network connectivity for each segmented pulmonary network. We quantify uncertainty in geometric features by constructing probability densities for vessel radius and length, then sample from these distributions and propagate the resulting uncertainty in haemodynamic predictions using a 1D CFD model. Results show that variation in network connectivity is a larger contributor to haemodynamic uncertainty than vessel radius and length.
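
    The sampling-and-propagation step can be pictured with the minimal sketch below, where a single-vessel Poiseuille pressure drop stands in for the study's full 1D CFD network model; all measurements and parameter values are illustrative assumptions.

```python
# Monte Carlo propagation of geometric uncertainty: fit densities to vessel
# radius and length, sample, and push each sample through a haemodynamic model.
# A single-vessel Poiseuille law stands in for the full 1D CFD model here.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical radius/length measurements (cm) from repeated segmentations.
radii = np.array([0.047, 0.052, 0.049, 0.051, 0.048])
lengths = np.array([0.95, 1.02, 0.98, 1.01, 0.97])

mu = 0.032   # blood viscosity in Poise (assumed constant)
flow = 0.5   # volumetric flow in ml/s (assumed constant)

def pressure_drop(r, length):
    """Poiseuille pressure drop dP = 8 * mu * L * Q / (pi * r^4)."""
    return 8.0 * mu * length * flow / (np.pi * r**4)

# Sample radius and length from normal densities fitted to the measurements.
n = 10_000
r = rng.normal(radii.mean(), radii.std(ddof=1), n)
L = rng.normal(lengths.mean(), lengths.std(ddof=1), n)

dp = pressure_drop(r, L)
print(f"pressure drop (dyn/cm^2): mean={dp.mean():.0f}, "
      f"95% interval=({np.percentile(dp, 2.5):.0f}, {np.percentile(dp, 97.5):.0f})")
```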

    Point-wise mutual information-based video segmentation with high temporal consistency

    In this paper, we tackle the problem of temporally consistent boundary detection and hierarchical segmentation in videos. While finding the best high-level reasoning over region assignments in videos is the focus of much recent research, temporal consistency in boundary detection has so far rarely been tackled. We argue that temporally consistent boundaries are a key component of temporally consistent region assignment. The proposed method is based on the point-wise mutual information (PMI) of spatio-temporal voxels. Temporal consistency is established by an evaluation of PMI-based point affinities in the spectral domain over space and time. Thus, the proposed method is independent of any optical flow computation or previously learned motion models. The proposed low-level video segmentation method outperforms the learning-based state of the art in terms of standard region metrics.
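
    The sketch below illustrates what a PMI-based affinity between neighbouring voxel intensities can look like when estimated from a coarse joint histogram; the paper works with richer spatio-temporal features and kernel density estimates, so this is an assumption-laden simplification.

```python
# Point-wise mutual information affinity over quantised intensity pairs:
# pmi(a, b) = log( P(a, b)^rho / (P(a) * P(b)) ). Illustrative sketch only.
import numpy as np

def pmi_table(volume, bins=32, rho=1.25, eps=1e-12):
    # Quantise intensities (assumed to lie in [0, 1]) and collect pairs of
    # voxels adjacent along the last axis.
    q = np.clip((volume * bins).astype(int), 0, bins - 1)
    a, b = q[..., :-1].ravel(), q[..., 1:].ravel()
    joint = np.zeros((bins, bins))
    np.add.at(joint, (a, b), 1.0)
    joint = (joint + joint.T) / (2.0 * joint.sum())   # symmetrise and normalise
    p = joint.sum(axis=1)                             # marginal over intensities
    return np.log((joint**rho + eps) / (np.outer(p, p) + eps))

# Toy (time, y, x) video; high table entries mark intensity pairs that co-occur
# more often than chance. A spectral decomposition of the resulting space-time
# affinity matrix would then yield temporally consistent boundaries.
video = np.random.default_rng(1).random((4, 32, 32))
table = pmi_table(video)
```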

    Discrete and Continuous Optimization for Motion Estimation

    The study of motion estimation reaches back decades and has become one of the central topics of research in computer vision. Even so, there are situations where current approaches fail, such as extreme lighting variations, significant occlusions, or very large motions. In this thesis, we propose several approaches to address these issues. First, we propose a novel continuous optimization framework for estimating optical flow based on a decomposition of the image domain into triangular facets. We show how this allows occlusions to be handled easily and naturally within our optimization framework without any post-processing. We also show that a triangular decomposition enables us to use a direct Cholesky decomposition to solve the resulting linear systems by reducing their memory requirements. Second, we introduce a simple method for incorporating additional temporal information into optical flow using inertial estimates of the flow, which leads to a significant reduction in error. We evaluate our methods on several datasets and achieve state-of-the-art results on MPI-Sintel. Finally, we introduce a discrete optimization framework for optical flow computation. Discrete approaches have generally been avoided in optical flow because their relatively large label space makes them computationally expensive. In our approach, we use recent advances in image segmentation to build a tree-structured graphical model that conforms to the image content. We show how the optimal solution to these discrete optical flow problems can be computed efficiently by making use of optimization methods from the object recognition literature, even for large images with hundreds of thousands of labels.
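
    Why tree structure makes large discrete label spaces tractable is easiest to see in a small example: on a tree, exact minimum-cost labelling reduces to one min-sum dynamic-programming pass from the leaves to the root. The sketch below uses toy costs and a toy chain, not the thesis's actual flow model.

```python
# Exact min-cost labelling on a tree-structured graphical model via a single
# leaves-to-root min-sum pass. Toy costs; illustrative sketch only.
import numpy as np

def tree_min_cost(unary, edges, pairwise):
    """unary: node -> cost vector over labels; edges: (child, parent) pairs
    ordered leaves first; pairwise: (child, parent) -> cost matrix."""
    cost = {node: u.copy() for node, u in unary.items()}
    for child, parent in edges:
        # Fold the child's subtree into a message over the parent's labels.
        message = (cost[child][:, None] + pairwise[(child, parent)]).min(axis=0)
        cost[parent] = cost[parent] + message
    root = edges[-1][1]
    return cost[root].min()

# Toy chain 0 - 1 - 2 with 3 labels per node and a smoothness cost on edges.
rng = np.random.default_rng(0)
unary = {node: rng.random(3) for node in range(3)}
edges = [(0, 1), (1, 2)]
pairwise = {e: 0.1 * np.abs(np.subtract.outer(np.arange(3), np.arange(3))) for e in edges}
print(tree_min_cost(unary, edges, pairwise))  # optimal total cost
```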

    Land-Use/Land-Cover Characterization Using an Object-Based Classifier for the Buffalo River Sub-Basin in North-Central Arkansas

    Sensors for remote sensing have improved enormously over the past few years and now deliver high-resolution multispectral data on an operational basis. Most land-use/land-cover (LULC) classifications of high spatial resolution imagery, however, still rely on basic image processing concepts (i.e., image classification using single-pixel-based classifiers) developed in the 1970s. This study developed a methodology using an object-based classifier to characterize the LULC of the Buffalo River sub-basin and surrounding areas with a 0.81-hectare (2-acre) minimum mapping unit (MMU). Base imagery for the 11-county classification was orthorectified color-infrared aerial photographs taken from 2000 to 2002 with a one-meter spatial resolution. The object-based classification was conducted using Feature Analyst®, Imagine®, and ArcGIS® software. Feature Analyst® employs hierarchical machine learning techniques to extract feature class information from the imagery using both spectral and inherent spatial relationships of objects. The methodology developed for the 7-class classification involved both automated and manual interpretation of objects. The overall accuracy of this LULC classification method, which identified more than 146,000 features, was 87.8% for the Buffalo River sub-basin and surrounding areas.
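
    For readers unfamiliar with object-based classification, the sketch below shows the general workflow (segment the image into objects, compute per-object spectral and spatial features, classify the objects) using open-source stand-ins such as scikit-image and scikit-learn; it is not the Feature Analyst® pipeline used in the study, and the image and labels are placeholders.

```python
# Generic object-based classification workflow with open-source stand-ins:
# segment into objects, build per-object features, classify. Illustrative only.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def object_features(image, segments):
    """Per-object mean/std of each band plus object size (a simple spatial cue)."""
    feats = []
    for label in np.unique(segments):
        pixels = image[segments == label]          # (n_pixels, n_bands)
        feats.append(np.concatenate([pixels.mean(axis=0),
                                     pixels.std(axis=0),
                                     [pixels.shape[0]]]))
    return np.array(feats)

rng = np.random.default_rng(0)
image = rng.random((128, 128, 4))                  # toy stand-in for CIR imagery
segments = slic(image, n_segments=200, channel_axis=-1)

X = object_features(image, segments)
y = rng.integers(0, 7, len(X))                     # placeholder labels, 7 LULC classes
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
predicted = clf.predict(X)                         # per-object LULC predictions
```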