Multiclass Data Segmentation using Diffuse Interface Methods on Graphs
We present two graph-based algorithms for multiclass segmentation of
high-dimensional data. The algorithms use a diffuse interface model based on
the Ginzburg-Landau functional, which is related to total variation approaches in
compressed sensing and image processing. A multiclass extension is introduced using the Gibbs
simplex, with the functional's double-well potential modified to handle the
multiclass case. The first algorithm minimizes the functional using a convex
splitting numerical scheme. The second algorithm uses a graph adaptation
of the classical Merriman-Bence-Osher (MBO) numerical scheme, which alternates
between diffusion and thresholding. We demonstrate the performance of both
algorithms experimentally on synthetic data, grayscale and color images, and
several benchmark data sets such as MNIST, COIL and WebKB. We also make use of
fast numerical solvers for finding the eigenvectors and eigenvalues of the
graph Laplacian, and take advantage of the sparsity of the matrix. Experiments
indicate that the results are competitive with or better than the current
state-of-the-art multiclass segmentation algorithms.
Comment: 14 pages
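The diffusion/thresholding alternation of the MBO scheme on a graph can be sketched in a few lines. This is not the authors' implementation, only a minimal binary-case illustration using dense NumPy linear algebra, with an assumed implicit diffusion step (I + dt·L)u = u_prev and a toy two-cluster graph:

```python
import numpy as np

def graph_mbo(L, u0, dt=1.0, iters=10):
    """Illustrative MBO-style iteration on a graph (binary case).

    L  : (n, n) graph Laplacian
    u0 : (n,) initial labels in {0, 1}
    Alternates one implicit diffusion step, solving (I + dt*L) u = u_prev,
    with pointwise thresholding at 1/2.
    """
    n = L.shape[0]
    A = np.eye(n) + dt * L
    u = u0.astype(float)
    for _ in range(iters):
        u = np.linalg.solve(A, u)      # diffusion on the graph
        u = (u >= 0.5).astype(float)   # thresholding
    return u

# Toy graph: two triangles (two natural clusters), one noisy seed labeling.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
L = np.diag(W.sum(axis=1)) - W
u = graph_mbo(L, np.array([1, 0, 1, 0, 0, 0]))
```

The paper's multiclass version replaces the scalar labels with rows of the Gibbs simplex and, for efficiency, performs the diffusion step in a truncated eigenbasis of the sparse graph Laplacian rather than with a dense solve.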
Deep Interactive Region Segmentation and Captioning
With recent innovations in dense image captioning, it is now possible to
describe every object in a scene with a caption, with objects localized by
bounding boxes. However, interpreting such output is not trivial because many
of the bounding boxes overlap. Furthermore, current captioning frameworks give
the user no way to express personal preferences and exclude areas that are not
of interest. In this paper, we propose a novel hybrid deep
learning architecture for interactive region segmentation and captioning where
the user is able to specify an arbitrary region of the image that should be
processed. To this end, a dedicated Fully Convolutional Network (FCN) named
Lyncean FCN (LFCN) is trained using our special training data to isolate the
User Intention Region (UIR) as the output of an efficient segmentation. In
parallel, a dense image captioning model is utilized to provide a wide variety
of captions for that region. Then, the UIR will be explained with the caption
of the best-matching bounding box. To the best of our knowledge, this is the first
work that provides such a comprehensive output. Our experiments show the
superiority of the proposed approach over state-of-the-art interactive
segmentation methods on several well-known datasets. In addition, replacing
the bounding boxes with the results of the interactive segmentation makes the
dense image captioning output easier to interpret and improves object detection
accuracy in terms of Intersection over Union (IoU).
Comment: 17 pages, 9 figures
Graph Spectral Image Processing
The recent advent of graph signal processing (GSP) has spurred intensive study
of signals that live naturally on irregular data kernels described by graphs
(e.g., social networks, wireless sensor networks). Though a digital image
contains pixels that reside on a regularly sampled 2D grid, if one can design
an appropriate underlying graph connecting pixels with weights that reflect the
image structure, then one can interpret the image (or image patch) as a signal
on a graph, and apply GSP tools for processing and analysis of the signal in
graph spectral domain. In this article, we overview recent graph spectral
techniques in GSP specifically for image and video processing. The topics covered
include image compression, image restoration, image filtering, and image
segmentation.
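The core idea of this abstract, treating an image patch as a signal on a graph and working in the graph spectral domain, can be illustrated with a small sketch. This is only an assumed setup (a 4-connected pixel grid with unit weights, a dense eigendecomposition, and an ideal low-pass spectral filter), not a method from the article itself:

```python
import numpy as np

def grid_laplacian(h, w):
    """Combinatorial Laplacian of a 4-connected h x w pixel grid."""
    n = h * w
    W = np.zeros((n, n))
    for r in range(h):
        for c in range(w):
            i = r * w + c
            if c + 1 < w:                       # right neighbour
                W[i, i + 1] = W[i + 1, i] = 1.0
            if r + 1 < h:                       # bottom neighbour
                W[i, i + w] = W[i + w, i] = 1.0
    return np.diag(W.sum(axis=1)) - W

# Treat a 4x4 patch as a signal on the grid graph.
patch = np.arange(16, dtype=float).reshape(4, 4)
L = grid_laplacian(4, 4)
evals, U = np.linalg.eigh(L)             # graph frequencies and eigenvectors
coeffs = U.T @ patch.ravel()             # graph Fourier transform of the patch
smoothed = U @ (coeffs * (evals < 2.0))  # ideal low-pass filter in the spectral domain
```

In practice the edge weights would be chosen to reflect the image structure (e.g., from intensity differences), which is what makes graph spectral filters adapt to edges where a fixed 2D transform cannot.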
Deep Extreme Cut: From Extreme Points to Object Segmentation
This paper explores the use of extreme points in an object (left-most,
right-most, top, bottom pixels) as input to obtain precise object segmentation
for images and videos. We do so by adding an extra channel to the image in the
input of a convolutional neural network (CNN), which contains a Gaussian
centered in each of the extreme points. The CNN learns to transform this
information into a segmentation of an object that matches those extreme points.
We demonstrate the usefulness of this approach for guided segmentation
(grabcut-style), interactive segmentation, video object segmentation, and dense
segmentation annotation. We show that we obtain the most precise results to
date, also with less user input, in an extensive and varied selection of
benchmarks and datasets. All our models and code are publicly available at
http://www.vision.ee.ethz.ch/~cvlsegmentation/dextr/.
Comment: CVPR 2018 camera ready.
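The extra input channel described in the abstract, a Gaussian centred at each extreme point, is straightforward to construct. The following is a hedged sketch, not the released code; the function name, the (row, col) point convention, and the sigma value are assumptions for illustration:

```python
import numpy as np

def extreme_point_heatmap(shape, points, sigma=10.0):
    """Extra input channel for a CNN: a 2D Gaussian centred at each extreme point.

    shape  : (H, W) of the image
    points : list of (row, col) extreme points (left-, right-, top-, bottom-most)
    sigma  : standard deviation of each Gaussian, in pixels (assumed value)
    """
    H, W = shape
    rr, cc = np.mgrid[0:H, 0:W]
    heat = np.zeros((H, W))
    for r, c in points:
        g = np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / (2.0 * sigma ** 2))
        heat = np.maximum(heat, g)  # combine Gaussians by taking the max
    return heat

# Four extreme points of a hypothetical object in a 64x64 image.
heat = extreme_point_heatmap((64, 64), [(32, 5), (32, 60), (3, 30), (60, 30)])
```

This channel is concatenated with the RGB image to form the 4-channel CNN input, so the network can condition its segmentation on the user-provided points.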
Segmentation by transduction
This paper addresses the problem of segmenting an image into regions consistent with user-supplied seeds (e.g., a sparse set of broad brush strokes). We view this task as a statistical transductive inference, in which some pixels are already associated with given zones and the remaining ones need to be classified. Our method relies on the Laplacian graph regularizer, a powerful manifold learning tool that is based on the estimation of variants of the Laplace-Beltrami operator and is tightly related to diffusion processes. Segmentation is modeled as the task of finding matting coefficients for unclassified pixels given known matting coefficients for seed pixels. The proposed algorithm essentially relies on a high margin assumption in the space of pixel characteristics. It is simple, fast, and accurate, as demonstrated by qualitative results on natural images and a quantitative comparison with state-of-the-art methods on the Microsoft GrabCut segmentation database.
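The transductive setup described above, propagating seed labels to unlabeled pixels through a graph regularizer, can be illustrated with the classical harmonic solution of the graph Laplacian. This is a simplified stand-in for the paper's matting-coefficient formulation, on an assumed toy affinity matrix:

```python
import numpy as np

def transduce(W, labels):
    """Transductive binary labeling via the graph Laplacian (harmonic solution).

    W      : (n, n) symmetric affinity matrix between pixels
    labels : (n,) array with 0/1 for seed pixels and -1 for unlabeled pixels
    Solves L_uu f_u = -L_ul f_l for the unlabeled block, then thresholds at 1/2.
    """
    L = np.diag(W.sum(axis=1)) - W
    unl = labels < 0
    lab = ~unl
    f_l = labels[lab].astype(float)
    f_u = np.linalg.solve(L[np.ix_(unl, unl)], -L[np.ix_(unl, lab)] @ f_l)
    out = labels.astype(float)
    out[unl] = f_u
    return (out >= 0.5).astype(int)

# Toy example: a 5-node chain with seeds at both ends.
n = 5
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
seg = transduce(W, np.array([1, -1, -1, -1, 0]))
```

On the chain, the harmonic solution interpolates linearly between the two seeds (1, 0.75, 0.5, 0.25, 0), so thresholding splits the chain in the middle. The paper's version builds the graph from pixel characteristics and recovers continuous matting coefficients rather than hard labels.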