Research in interactive scene analysis
Cooperative (man-machine) scene analysis techniques were developed whereby humans can provide a computer with guidance when completely automated processing is infeasible. An interactive approach promises significant near-term payoffs in analyzing various types of high-volume satellite imagery, as well as vehicle-based imagery used in robot planetary exploration. This report summarizes the work accomplished over the duration of the project and describes in detail three major accomplishments: (1) the interactive design of texture classifiers; (2) a new approach for integrating the segmentation and interpretation phases of scene analysis; and (3) the application of interactive scene analysis techniques to cartography.
ClassCut for Unsupervised Class Segmentation
We propose a novel method for unsupervised class segmentation on a set of images. It alternates between segmenting object instances and learning a class model. The method is based on a segmentation energy defined over all images at the same time, which can be optimized efficiently by techniques used before in interactive segmentation. Over iterations, our method progressively learns a class model by integrating observations over all images. In addition to appearance, this model captures the location and shape of the class with respect to an automatically determined coordinate frame common across images. This frame allows us to build stronger shape and location models, similar to those used in object class detection. Our method is inspired by interactive segmentation methods [1], but it is fully automatic and learns models characteristic of the object class rather than specific to one particular object/image. We experimentally demonstrate on the Caltech4, Caltech101, and Weizmann horses datasets that our method (a) transfers class knowledge across images, which improves results compared to segmenting every image independently; (b) outperforms GrabCut [1] for the task of unsupervised segmentation; and (c) offers competitive performance compared to the state-of-the-art in unsupervised segmentation, in particular outperforming the topic model [2].
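To make the alternation concrete, here is a runnable toy sketch in Python (not the authors' code): the class model is reduced to a pair of foreground/background intensity histograms shared across all images, and the per-image graph-cut energy minimization is replaced by a per-pixel likelihood test.

```python
import numpy as np

# Toy sketch of the ClassCut alternation: segment all images under the current
# class model, then refit the model jointly from all segmentations. Images are
# assumed to be 2D float arrays with values in [0, 1].

def fit_histograms(images, masks, bins=32):
    """Refit the (toy) class model: fg/bg intensity histograms over ALL images."""
    fg = np.concatenate([im[m] for im, m in zip(images, masks)])
    bg = np.concatenate([im[~m] for im, m in zip(images, masks)])
    h_fg, _ = np.histogram(fg, bins=bins, range=(0, 1))
    h_bg, _ = np.histogram(bg, bins=bins, range=(0, 1))
    # Laplace smoothing keeps both densities well defined.
    return (h_fg + 1) / (h_fg.sum() + bins), (h_bg + 1) / (h_bg.sum() + bins)

def segment(image, h_fg, h_bg, bins=32):
    """Per-pixel likelihood test; a stand-in for graph-cut energy minimization."""
    idx = np.clip((image * bins).astype(int), 0, bins - 1)
    return h_fg[idx] > h_bg[idx]

def classcut_toy(images, n_iters=5):
    # Initialize every mask with a central box, mimicking a generic location prior.
    masks = []
    for im in images:
        m = np.zeros(im.shape, bool)
        h, w = im.shape
        m[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = True
        masks.append(m)
    for _ in range(n_iters):
        h_fg, h_bg = fit_histograms(images, masks)           # learn class model
        masks = [segment(im, h_fg, h_bg) for im in images]   # re-segment all images
    return masks
```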
SnakeCut: An Integrated Approach Based on Active Contour and GrabCut for Automatic Foreground Object Segmentation
Interactive techniques for extracting the foreground object from an image have long been of interest in computer vision research. This paper addresses the problem of efficient, semi-interactive extraction of a foreground object from an image. Snake (also known as Active Contour) and GrabCut are two popular techniques extensively used for this task. An active contour is a deformable contour that segments the object using boundary discontinuities by minimizing the energy function associated with the contour. GrabCut provides a convenient way to encode color features as segmentation cues, obtaining foreground segmentation from local pixel similarities using modified iterated graph cuts. This paper first presents a comparative study of these two segmentation techniques and illustrates conditions under which either or both of them fail. We then propose a novel formulation for integrating these two complementary techniques to obtain automatic foreground object segmentation. We call our proposed integrated approach "SnakeCut", which is based on a probabilistic framework. To validate our approach, we show results on both simulated and natural images.
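For illustration, here is a hedged Python sketch of the SnakeCut idea built from off-the-shelf components: OpenCV's GrabCut and scikit-image's active contour are run independently and their foreground estimates fused. A simple intersection stands in for the paper's probabilistic integration, and the rectangle/circle initializations are generic assumptions.

```python
import numpy as np
import cv2
from skimage.filters import gaussian
from skimage.segmentation import active_contour
from skimage.draw import polygon2mask

def snakecut_sketch(img_bgr):
    h, w = img_bgr.shape[:2]

    # --- GrabCut, initialized with a rectangle covering most of the image ---
    mask = np.zeros((h, w), np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    rect = (w // 10, h // 10, 8 * w // 10, 8 * h // 10)  # (x, y, width, height)
    cv2.grabCut(img_bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    grabcut_fg = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))

    # --- Active contour (snake), initialized as a centred circle ---
    gray = gaussian(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY), 3)
    t = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([h / 2 + 0.4 * h * np.sin(t),    # (row, col) points
                            w / 2 + 0.4 * w * np.cos(t)])
    snake = active_contour(gray, init, alpha=0.015, beta=10, gamma=0.001)
    snake_fg = polygon2mask((h, w), snake)

    # Fuse the two cues: keep pixels both agree on. The paper uses a
    # probabilistic fusion; plain intersection is a stand-in for it here.
    return grabcut_fg & snake_fg
```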
Quality Control in Crowdsourced Object Segmentation
This paper explores processing techniques to deal with noisy data in crowdsourced object segmentation tasks. We use the data collected with "Click'n'Cut", an online interactive segmentation tool, and we perform several experiments towards improving the segmentation results. First, we introduce different superpixel-based techniques to filter users' traces, and assess their impact on the segmentation result. Second, we present different criteria to detect and discard the traces from potential bad users, resulting in a remarkable increase in performance. Finally, we show a novel superpixel-based segmentation algorithm which does not require any prior filtering and is based on weighting each user's contribution according to his/her level of expertise.
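As a rough illustration of expertise-weighted fusion (assumed mechanics, not the paper's exact algorithm), the following Python sketch estimates each user's expertise from agreement with a few gold-standard superpixel labels and then combines all labels by weighted voting.

```python
import numpy as np

def fuse_annotations(votes, gold):
    """votes: (n_users, n_superpixels) array in {0, 1};
    gold: dict mapping superpixel index -> known ground-truth label."""
    gold_idx = np.array(list(gold.keys()))
    gold_lab = np.array(list(gold.values()))
    # Expertise = each user's accuracy on the gold-standard superpixels.
    expertise = (votes[:, gold_idx] == gold_lab).mean(axis=1)
    # Expertise-weighted vote per superpixel; 0.5 decides foreground.
    weighted = expertise @ votes / expertise.sum()
    return weighted > 0.5

# Example: 3 users, 5 superpixels, gold labels known for superpixels 0 and 4.
votes = np.array([[1, 1, 0, 0, 1],
                  [1, 0, 0, 1, 1],
                  [0, 1, 1, 1, 0]])
print(fuse_annotations(votes, gold={0: 1, 4: 1}))
```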
A comparative evaluation of interactive segmentation algorithms
In this paper we present a comparative evaluation of four popular interactive segmentation algorithms. The evaluation was carried out as a series of user experiments, in which participants were tasked with extracting 100 objects from a common dataset: 25 with each algorithm, within a time limit of two minutes per object. To facilitate the experiments, a "scribble-driven" segmentation tool was developed to enable interactive image segmentation by simply marking areas of foreground and background with the mouse. As the participants refined and improved their respective segmentations, the corresponding updated segmentation mask was stored along with the elapsed time. We then collected and evaluated each recorded mask against a manually segmented ground truth, allowing us to gauge segmentation accuracy over time. Two benchmarks were used for the evaluation: the well-known Jaccard index for measuring object accuracy, and a new fuzzy metric, proposed in this paper, designed for measuring boundary accuracy. Analysis of the experimental results demonstrates the effectiveness of the suggested measures and provides valuable insights into the performance and characteristics of the evaluated algorithms.
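For reference, the Jaccard index used for object accuracy is straightforward to compute between a binary segmentation mask and the ground truth; the fuzzy boundary metric is the paper's own construction and is not reproduced here.

```python
import numpy as np

def jaccard(mask, gt):
    """Jaccard index |A ∩ B| / |A ∪ B| between two binary masks."""
    mask, gt = mask.astype(bool), gt.astype(bool)
    union = np.logical_or(mask, gt).sum()
    return np.logical_and(mask, gt).sum() / union if union else 1.0
```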
Deep Interactive Region Segmentation and Captioning
With recent innovations in dense image captioning, it is now possible to describe every object in a scene with a caption, with objects delimited by bounding boxes. However, interpreting such output is not trivial due to the many overlapping bounding boxes. Furthermore, current captioning frameworks do not let the user express personal preferences to exclude areas that are not of interest. In this paper, we propose a novel hybrid deep learning architecture for interactive region segmentation and captioning, where the user can specify an arbitrary region of the image to be processed. To this end, a dedicated Fully Convolutional Network (FCN), named Lyncean FCN (LFCN), is trained on our special training data to isolate the User Intention Region (UIR) as the output of an efficient segmentation. In parallel, a dense image captioning model is utilized to provide a wide variety of captions for that region. The UIR is then explained with the caption of the best-matching bounding box. To the best of our knowledge, this is the first work to provide such a comprehensive output. Our experiments show the superiority of the proposed approach over state-of-the-art interactive segmentation methods on several well-known datasets. In addition, replacing the bounding boxes with the result of the interactive segmentation leads to a better understanding of the dense image captioning output, as well as improved object detection accuracy in terms of Intersection over Union (IoU).
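A hedged sketch of the final matching step as described: the segmented UIR is compared against each captioned bounding box by IoU, and the caption of the best match is returned. The helper names and box format below are assumptions for illustration, not the paper's API.

```python
import numpy as np

def iou(box_a, box_b):
    """IoU between two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def caption_for_region(uir_mask, captioned_boxes):
    """captioned_boxes: list of ((x1, y1, x2, y2), caption) pairs
    from a dense captioning model; uir_mask: binary segmentation of the UIR."""
    ys, xs = np.nonzero(uir_mask)                    # tight box around the UIR
    uir_box = (xs.min(), ys.min(), xs.max() + 1, ys.max() + 1)
    return max(captioned_boxes, key=lambda bc: iou(uir_box, bc[0]))[1]
```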
