
    Image Segmentation using Sparse Subset Selection

    In this paper, we present a new image segmentation method based on the concept of sparse subset selection. Starting with an over-segmentation, we adopt local spectral histogram features to encode the visual information of the small segments into high-dimensional vectors, called superpixel features. The superpixel features are then fed into a novel convex model that efficiently leverages them to group the superpixels into a proper number of coherent regions. Our model automatically determines the optimal number of coherent regions and the assignment of superpixels that shapes the final segments. To solve our model, we propose a numerical algorithm based on the alternating direction method of multipliers (ADMM), whose iterations consist of two highly parallelizable sub-problems. We show that each sub-problem enjoys a closed-form solution, which makes the ADMM iterations computationally very efficient. Extensive experiments on benchmark image segmentation datasets demonstrate that our proposed method, in combination with an over-segmentation, can provide high-quality and competitive results compared to existing state-of-the-art methods.
    Comment: IEEE Winter Conference on Applications of Computer Vision (WACV), 201
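
    The pipeline described in this abstract (over-segmentation, per-superpixel features, then grouping) can be illustrated with a minimal sketch. The sketch below is not the authors' implementation: SLIC stands in for the over-segmentation, simple per-channel colour histograms stand in for local spectral histogram features, and agglomerative clustering with a fixed number of regions stands in for the paper's convex subset-selection model and its ADMM solver; the file name and parameter values are placeholders.

import numpy as np
from skimage import io, segmentation
from sklearn.cluster import AgglomerativeClustering

def superpixel_features(image, n_segments=400, bins=8):
    """Over-segment the image and build one histogram feature per superpixel."""
    labels = segmentation.slic(image, n_segments=n_segments, compactness=10)
    ids = np.unique(labels)
    feats = []
    for sp in ids:
        pix = image[labels == sp]          # all pixels of one superpixel, shape (N, 3)
        # Per-channel intensity histograms, concatenated (simplified feature).
        hist = [np.histogram(pix[:, c], bins=bins, range=(0, 255), density=True)[0]
                for c in range(pix.shape[1])]
        feats.append(np.concatenate(hist))
    return labels, ids, np.asarray(feats)

def segment(image, n_regions=20):
    """Group superpixels into coherent regions (stand-in for the convex/ADMM model)."""
    labels, ids, feats = superpixel_features(image)
    groups = AgglomerativeClustering(n_clusters=n_regions).fit_predict(feats)
    region_of = np.zeros(labels.max() + 1, dtype=int)
    region_of[ids] = groups
    return region_of[labels]               # map each pixel to its region label

# seg = segment(io.imread("example.jpg"))  # hypothetical input image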

    Compassionately Conservative Normalized Cuts for Image Segmentation

    Image segmentation is a process used in computer vision to partition an image into regions with similar characteristics. One category of image segmentation algorithms is graph-based, where pixels in an image are represented by vertices in a graph and the similarity between pixels is represented by weighted edges. A segmentation of the image can be found by cutting edges between dissimilar groups of pixels in the graph, leaving different clusters or partitions of the data. A popular graph-based method for segmenting images is the Normalized Cuts (NCuts) algorithm, which quantifies the cost of a graph partitioning in a way that biases balanced clusters or segments towards having lower values than unbalanced partitionings. This bias is so strong, however, that the NCuts algorithm avoids any singleton partitions, even when vertices are weakly connected to the rest of the graph. For this reason, we propose the Compassionately Conservative Normalized Cut (CCNCut) objective function, which strikes a better compromise between the desire to avoid too many singleton partitions and the notion that all partitions should be balanced. We demonstrate how CCNCut minimization can be relaxed into the problem of computing Piecewise Flat Embeddings (PFE) and provide an overview of, as well as two efficiency improvements to, the Splitting Orthogonality Constraint (SOC) algorithm previously used to approximate PFE. We then present a new algorithm for computing PFE based on iteratively minimizing a sequence of reweighted Rayleigh quotients (IRRQ) and run a series of experiments to compare CCNCut-based image segmentation via SOC and IRRQ to NCut-based image segmentation on the BSDS500 dataset. Our results indicate that CCNCut-based image segmentation yields more accurate results with respect to ground truth than NCut-based segmentation, and that IRRQ is less sensitive to initialization than SOC.
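
    As a point of reference for the balance bias discussed above, the standard two-way NCut spectral relaxation can be sketched in a few lines: threshold the second generalized eigenvector of (D - W) v = lambda D v. The sketch below is that baseline relaxation, not the CCNCut/PFE solvers (SOC or IRRQ) described in the abstract; the toy affinity matrix is made up for illustration.

import numpy as np
from scipy.linalg import eigh

def ncut_bipartition(W):
    """Two-way normalized cut of a dense, symmetric, non-negative affinity matrix W."""
    d = W.sum(axis=1)
    D = np.diag(d)
    L = D - W                               # unnormalized graph Laplacian
    _, vecs = eigh(L, D)                    # generalized eigenproblem (D - W) v = lambda D v
    fiedler = vecs[:, 1]                    # eigenvector of the second-smallest eigenvalue
    return fiedler >= np.median(fiedler)    # split the vertices at the median value

# Toy graph: two tightly coupled pairs of vertices joined by weak edges.
W = np.array([[0.0, 1.0, 0.1, 0.0],
              [1.0, 0.0, 0.0, 0.1],
              [0.1, 0.0, 0.0, 1.0],
              [0.0, 0.1, 1.0, 0.0]])
print(ncut_bipartition(W))                  # expected split: {0, 1} versus {2, 3}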

    Robust Path-based Image Segmentation Using Superpixel Denoising

    Clustering is the important task of partitioning data into groups with similar characteristics. One category is spectral clustering, where data points are represented as vertices of a graph connected by weighted edges that signify similarity based on distance. The longest leg path distance (LLPD) has shown promise when used in spectral clustering, but it is sensitive to noisy data and therefore requires a data denoising procedure to achieve good performance. Previous denoising techniques have involved identifying and removing noisy data points; however, this is not a desirable pre-clustering step for data sets with a specific structure, such as images. The process of partitioning an image into regions of similar features, known as image segmentation, can be represented as a clustering problem by defining the vector of intensity and spatial information at each pixel as a data point. We therefore propose the method of pre-cluster denoising to formulate a robust LLPD clustering framework. By creating a fine clustering of approximately equal-sized groups and averaging each, a reduced number of data points can be defined that represent the relevant information of the original data set, with noise locally averaged out. We can then construct a smaller graph representation of the data based on the LLPD between the reduced data points and identify the spectral embedding coordinates for each reduced point. An out-of-sample extension procedure is then used to compute spectral embedding coordinates at each of the original data points, after which a simple (k-means) clustering is performed to compute the final cluster labels. In the context of image segmentation, computing superpixels provides a natural structure for performing this type of pre-clustering. We show how the above LLPD framework can be carried out in the context of image segmentation, and we show that a simple, computationally efficient spatial interpolation procedure can be used instead to extend the embedding in a way that yields better segmentation performance with respect to ground truth on a publicly available data set. Similar experiments are also performed using the standard Euclidean distance in place of the LLPD to show the proficiency of the LLPD for image segmentation.
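
    A minimal sketch of this pipeline is given below; it is not the authors' implementation. Superpixel mean colours serve as the reduced, locally averaged data set (spatial coordinates are omitted for brevity), the longest leg path distance is computed exactly with a Floyd-Warshall-style minimax recurrence (adequate for a few hundred superpixels), a plain Laplacian eigen-embedding plus k-means produces the region labels, and the out-of-sample/spatial-interpolation step is replaced by simply propagating each superpixel's label to its pixels; the file name, sigma, superpixel count, and region count are placeholder values.

import numpy as np
from skimage import io, segmentation
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import KMeans

def llpd_matrix(points):
    """All-pairs longest leg path distance via a minimax (Floyd-Warshall-style) recurrence."""
    llpd = squareform(pdist(points))            # Euclidean "leg" lengths of the complete graph
    for k in range(len(points)):                 # relax paths through vertex k
        llpd = np.minimum(llpd, np.maximum(llpd[:, k:k + 1], llpd[k:k + 1, :]))
    return llpd

def llpd_segment(image, n_superpixels=300, n_regions=8, sigma=0.1):
    labels = segmentation.slic(image, n_segments=n_superpixels)
    ids = np.unique(labels)
    # Reduced data set: mean colour of each superpixel (noise locally averaged out).
    feats = np.array([image[labels == i].mean(axis=0) for i in ids]) / 255.0
    W = np.exp(-(llpd_matrix(feats) / sigma) ** 2)   # LLPD-based affinity matrix
    L = np.diag(W.sum(axis=1)) - W                   # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)                      # spectral embedding coordinates
    emb = vecs[:, 1:n_regions + 1]
    groups = KMeans(n_clusters=n_regions, n_init=10).fit_predict(emb)
    region_of = np.zeros(labels.max() + 1, dtype=int)
    region_of[ids] = groups
    return region_of[labels]                         # per-pixel segment labels

# seg = llpd_segment(io.imread("example.jpg"))       # hypothetical input image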