Can the Use of Nonlinear Color Metrics Systematically Improve Segmentation?
Image segmentation is a procedure in which an image is split into its constituent parts according to some criterion. The literature offers several well-known approaches to segmentation, such as clustering, thresholding, graph theory, and region growing. These approaches can additionally be combined with color distance metrics, which play an important role in computing color similarity. Aiming to investigate general approaches able to enhance the performance of segmentation methods, this work presents an empirical study of the effect of a nonlinear color metric on segmentation procedures. For this purpose, three algorithms were chosen: Mumford-Shah, Color Structure Code, and Felzenszwalb-Huttenlocher segmentation. The color similarity metric employed by these algorithms (the L2-norm) was replaced by the Polynomial Mahalanobis Distance, an extension of the statistical Mahalanobis Distance used to measure the distance between coordinates and distribution centers. An evaluation based on automated comparison of segmentation results against ground truths from the Berkeley Dataset was performed. All three segmentation approaches were compared to their traditional implementations, against each other, and to a large set of other segmentation methods. The statistical analysis indicated a systematic improvement of segmentation results for all three approaches when the nonlinear metric was employed.
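To make the metric substitution concrete, the classical Mahalanobis distance for color similarity can be sketched in a few lines of Python. This is a minimal illustration with assumed function names, not the paper's implementation; the polynomial variant used there additionally lifts colors into a polynomial feature space before applying the same idea.

```python
import numpy as np

def mahalanobis_color_distance(pixel, samples):
    """Classical Mahalanobis distance between an RGB pixel and a
    reference color distribution estimated from sample pixels."""
    samples = np.asarray(samples, dtype=float)
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    # Regularize slightly in case the sample covariance is singular.
    cov += 1e-6 * np.eye(cov.shape[0])
    diff = np.asarray(pixel, dtype=float) - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def l2_color_distance(a, b):
    """L2-norm color distance, as used by the unmodified algorithms."""
    return float(np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float)))
```

Unlike the L2-norm, the Mahalanobis distance shrinks along directions in which the reference colors vary strongly, so a pixel that deviates along the distribution's main axis counts as "closer" than one deviating by the same Euclidean amount in a low-variance direction.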
Conditional Entropies as Over-Segmentation and Under-Segmentation Metrics for Multi-Part Image Segmentation
In this paper, we define two conditional entropy measures for performance evaluation of general image segmentation. Given a segmentation label map and a ground truth label map, our measures describe their compatibility in two ways. The first is the conditional entropy of the segmentation given the ground truth, which indicates the over-segmentation rate. The second is the conditional entropy of the ground truth given the segmentation, which indicates the under-segmentation rate. The two conditional entropies capture the trade-off between smaller and larger granularities, analogous to the false positive and false negative rates in an ROC curve, or to precision and recall in a PR curve. Our measures are easy to implement, involve no threshold or other parameters, have an intuitive interpretation, and enjoy many good theoretical properties, e.g., good bounds, monotonicity, and continuity. Experiments show that our measures work well on the Berkeley Image Segmentation Benchmark using three segmentation algorithms: Efficient Graph-Based segmentation, Mean Shift, and Normalized Cut. We also give an asymmetric similarity measure based on the two entropies and compare it with Variation of Information; the comparison reveals that our method has advantages in many situations. We also check the coarse-to-fine compatibility of segmentation results under changing parameters and ground truths from different annotators.
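Both conditional entropies can be computed directly from the joint label histogram of the two maps. The following sketch (illustrative names, entropies in bits; not the authors' implementation) shows one way to do it:

```python
import numpy as np

def conditional_entropies(seg, gt):
    """Return (H(S|G), H(G|S)): over- and under-segmentation measures
    computed from the joint histogram of two label maps."""
    seg = np.asarray(seg).ravel()
    gt = np.asarray(gt).ravel()
    _, s_idx = np.unique(seg, return_inverse=True)
    _, g_idx = np.unique(gt, return_inverse=True)
    # Joint distribution over (segmentation label, ground-truth label).
    joint = np.zeros((s_idx.max() + 1, g_idx.max() + 1))
    np.add.at(joint, (s_idx, g_idx), 1)
    p = joint / joint.sum()
    ps = np.broadcast_to(p.sum(axis=1, keepdims=True), p.shape)  # marginal of S
    pg = np.broadcast_to(p.sum(axis=0, keepdims=True), p.shape)  # marginal of G
    m = p > 0  # restrict sums to nonzero joint cells
    h_s_given_g = -np.sum(p[m] * np.log2(p[m] / pg[m]))
    h_g_given_s = -np.sum(p[m] * np.log2(p[m] / ps[m]))
    return h_s_given_g, h_g_given_s
```

For example, splitting a single ground-truth region into two equal halves yields H(S|G) = 1 bit of over-segmentation and H(G|S) = 0, while a segmentation identical to the ground truth yields zero for both.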
Multiclass Data Segmentation using Diffuse Interface Methods on Graphs
We present two graph-based algorithms for multiclass segmentation of
high-dimensional data. The algorithms use a diffuse interface model based on
the Ginzburg-Landau functional, related to total variation compressed sensing
and image processing. A multiclass extension is introduced using the Gibbs
simplex, with the functional's double-well potential modified to handle the
multiclass case. The first algorithm minimizes the functional using a convex
splitting numerical scheme. The second algorithm uses a graph adaptation
of the classical numerical Merriman-Bence-Osher (MBO) scheme, which alternates
between diffusion and thresholding. We demonstrate the performance of both
algorithms experimentally on synthetic data, grayscale and color images, and
several benchmark data sets such as MNIST, COIL and WebKB. We also make use of
fast numerical solvers for finding the eigenvectors and eigenvalues of the
graph Laplacian, and take advantage of the sparsity of the matrix. Experiments
indicate that the results are competitive with or better than the current
state-of-the-art multiclass segmentation algorithms.
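The alternation between diffusion and thresholding that defines the MBO scheme can be sketched on a small graph as follows. This is a hedged illustration with assumed parameter names, using explicit Euler steps with the symmetric normalized Laplacian; the authors' implementation instead relies on sparse eigensolvers for the graph Laplacian.

```python
import numpy as np

def graph_mbo(W, labels_init, n_classes, dt=0.5, n_diff=3, n_iter=20):
    """Toy graph MBO iteration: a few heat-equation steps with the
    normalized graph Laplacian, then thresholding each node to the
    nearest vertex of the Gibbs simplex (one-hot class indicator)."""
    n = W.shape[0]
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt  # normalized Laplacian
    U = np.eye(n_classes)[np.asarray(labels_init)]  # one-hot rows
    for _ in range(n_iter):
        # Diffusion: explicit Euler steps of dU/dt = -L U.
        for _ in range(n_diff):
            U = U - dt * (L @ U)
        # Thresholding: project each row to the nearest simplex vertex.
        U = np.eye(n_classes)[np.argmax(U, axis=1)]
    return np.argmax(U, axis=1)
```

On a two-cluster similarity graph, the iteration pulls a mislabeled node back to the class of its densely connected neighbors, since diffusion averages labels within clusters far faster than across the weak inter-cluster edge.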
Superpixels: An Evaluation of the State-of-the-Art
Superpixels group perceptually similar pixels to create visually meaningful
entities while heavily reducing the number of primitives for subsequent
processing steps. Owing to these properties, superpixel algorithms have received
much attention since their naming in 2003. By today, publicly available
superpixel algorithms have turned into standard tools in low-level vision. As
such, and due to their quick adoption in a wide range of applications,
appropriate benchmarks are crucial for algorithm selection and comparison.
Until now, the rapidly growing number of algorithms as well as varying
experimental setups have hindered the development of a unifying benchmark. We
present a comprehensive evaluation of 28 state-of-the-art superpixel algorithms
utilizing a benchmark focusing on fair comparison and designed to provide new
insights relevant for applications. To this end, we explicitly discuss
parameter optimization and the importance of strictly enforcing connectivity.
Furthermore, by extending well-known metrics, we are able to summarize
algorithm performance independent of the number of generated superpixels,
thereby overcoming a major limitation of available benchmarks. In addition, we
discuss runtime, robustness against noise, blur, and affine transformations,
implementation details as well as aspects of visual quality. Finally, we
present an overall ranking of superpixel algorithms which redefines the
state-of-the-art and enables researchers to easily select appropriate
algorithms and the corresponding implementations which themselves are made
publicly available as part of our benchmark at
davidstutz.de/projects/superpixel-benchmark/
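One widely used metric of the kind such benchmarks extend is boundary recall: the fraction of ground-truth boundary pixels that lie within a small tolerance of a superpixel boundary. The sketch below is a simplified illustration with assumed names and a square tolerance window, not the benchmark's exact implementation.

```python
import numpy as np

def boundary_mask(labels):
    """Mark pixels whose right or bottom neighbor has a different label."""
    labels = np.asarray(labels)
    b = np.zeros(labels.shape, dtype=bool)
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return b

def boundary_recall(sp, gt, r=1):
    """Fraction of ground-truth boundary pixels within a (2r+1)x(2r+1)
    window of some superpixel boundary pixel."""
    sp_b = boundary_mask(sp)
    gt_b = boundary_mask(gt)
    h, w = sp_b.shape
    # Dilate the superpixel boundary by r via shifted copies.
    pad = np.pad(sp_b, r)
    dil = np.zeros_like(sp_b)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            dil |= pad[dy:dy + h, dx:dx + w]
    total = np.count_nonzero(gt_b)
    return np.count_nonzero(gt_b & dil) / total if total else 1.0
```

A superpixel boundary shifted by one pixel from the ground truth still scores perfect recall at tolerance r=1 but zero at r=0, which is why the tolerance radius matters when comparing algorithms.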