Object Segmentation in Images using EEG Signals
This paper explores the potential of brain-computer interfaces in segmenting
objects from images. Our approach is centered around designing an effective
method for displaying the image parts to the users such that they generate
measurable brain reactions. When an image region, specifically a block of
pixels, is displayed we estimate the probability of the block containing the
object of interest using a score based on EEG activity. After several such
blocks are displayed, the resulting probability map is binarized and combined
with the GrabCut algorithm to segment the image into object and background
regions. This study shows that BCI and simple EEG analysis are useful in
locating object boundaries in images.
Comment: This is a preprint version, prior to submission for peer review, of the paper accepted to the 22nd ACM International Conference on Multimedia (November 3-7, 2014, Orlando, Florida, USA) for the High Risk High Reward session. 10 pages.
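The abstract above describes binarizing a per-block EEG probability map before handing it to GrabCut. A minimal sketch of that binarization step, assuming a simple fixed threshold (the paper's actual EEG scoring is not reproduced here) and using label values matching OpenCV's `cv2.GC_*` constants, so the resulting mask could be passed to `cv2.grabCut` with mask initialization:

```python
import numpy as np

# Label values matching OpenCV's GrabCut constants
# (cv2.GC_BGD, cv2.GC_FGD, cv2.GC_PR_BGD, cv2.GC_PR_FGD).
GC_BGD, GC_FGD, GC_PR_BGD, GC_PR_FGD = 0, 1, 2, 3

def probability_map_to_grabcut_mask(prob_map, threshold=0.5):
    """Binarize a per-block probability map into a GrabCut init mask.

    Blocks scoring above the threshold become probable foreground;
    everything else becomes probable background. The threshold value
    is illustrative, not taken from the paper.
    """
    mask = np.full(prob_map.shape, GC_PR_BGD, dtype=np.uint8)
    mask[prob_map >= threshold] = GC_PR_FGD
    return mask

# Toy 2x2 map of per-block EEG-derived scores.
prob = np.array([[0.9, 0.2],
                 [0.4, 0.7]])
mask = probability_map_to_grabcut_mask(prob)
# mask now labels the high-scoring blocks as probable foreground (3)
# and the rest as probable background (2).
```

The function name and threshold are assumptions for illustration; the paper's contribution is how the probabilities are estimated from EEG activity, not this thresholding step.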
Quality Control in Crowdsourced Object Segmentation
This paper explores processing techniques to deal with noisy data in crowdsourced object segmentation tasks. We use the data collected with "Click'n'Cut", an online interactive segmentation tool, and we perform several experiments towards improving the segmentation results. First, we introduce different superpixel-based techniques to filter users' traces, and assess their impact on the segmentation result. Second, we present different criteria to detect and discard the traces from potential bad users, resulting in a remarkable increase in performance. Finally, we show a novel superpixel-based segmentation algorithm which does not require any prior filtering and is based on weighting each user's contribution according to their level of expertise.
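The final idea in the abstract above, weighting each user's contribution by expertise rather than filtering users out, can be sketched as a weighted per-pixel vote. This is a minimal illustration under assumed inputs (binary per-user masks and given expertise weights; the paper operates on superpixels and click traces, and estimates expertise from the data):

```python
import numpy as np

def weighted_foreground_vote(user_masks, user_weights, threshold=0.5):
    """Combine per-user binary masks into one segmentation by
    weighting each user's foreground vote by an expertise score.

    A pixel is foreground when the weighted fraction of users
    marking it exceeds the threshold.
    """
    masks = np.asarray(user_masks, dtype=float)  # shape (n_users, H, W)
    w = np.asarray(user_weights, dtype=float)
    w = w / w.sum()                              # normalize weights to sum to 1
    support = np.tensordot(w, masks, axes=1)     # weighted mean per pixel
    return support >= threshold

# Three users label a 1x3 strip; the high-expertise user (weight 0.6)
# disagrees with two novices (0.2 each) on the middle pixel.
masks = [[[1, 1, 0]],
         [[1, 0, 0]],
         [[1, 0, 1]]]
result = weighted_foreground_vote(masks, [0.6, 0.2, 0.2])
# The expert's vote carries the middle pixel; the lone novice vote
# on the last pixel does not reach the threshold.
```

The function name, threshold, and pixel-level (rather than superpixel-level) formulation are assumptions for illustration.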
Object segmentation in depth maps with one user click and a synthetically trained fully convolutional network
With more and more household objects built on planned obsolescence and
consumed by a fast-growing population, hazardous waste recycling has become a
critical challenge. Given the large variability of household waste, current
recycling platforms mostly rely on human operators to analyze the scene,
typically composed of many object instances piled up in bulk. Helping them by
robotizing the unitary extraction is a key challenge to speed up this tedious
process. Whereas supervised deep learning has proven very efficient for such
object-level scene understanding, e.g., generic object detection and
segmentation in everyday scenes, it however requires large sets of per-pixel
labeled images, that are hardly available for numerous application contexts,
including industrial robotics. We thus propose a step towards a practical
interactive application for generating an object-oriented robotic grasp,
requiring as inputs only one depth map of the scene and one user click on the
next object to extract. More precisely, we address in this paper the intermediate problem of object segmentation in top views of piles of bulk objects, given a pixel location, namely a seed, provided interactively by a human operator. We
propose a twofold framework for generating edge-driven instance segments.
First, we repurpose a state-of-the-art fully convolutional object contour
detector for seed-based instance segmentation by introducing the notion of
edge-mask duality with a novel patch-free and contour-oriented loss function.
Second, we train one model using only synthetic scenes, instead of manually
labeled training data. Our experimental results show that considering edge-mask
duality for training an encoder-decoder network, as we suggest, outperforms a
state-of-the-art patch-based network in the present application context.
Comment: This is a pre-print of an article published in Human Friendly Robotics, 10th International Workshop, Springer Proceedings in Advanced Robotics, vol 7 (Siciliano Bruno, Khatib Oussama, eds.), in press. The final authenticated version is available online at: https://doi.org/10.1007/978-3-319-89327-3_16
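The abstract above generates "edge-driven instance segments" from a user-clicked seed and a predicted contour map. The paper's method is a trained fully convolutional contour detector with an edge-mask duality loss; the downstream idea of growing an instance mask from a seed until contour pixels are hit can be sketched with a plain 4-connected flood fill (all names here are illustrative, and a binary edge map is assumed in place of the network's soft contour predictions):

```python
from collections import deque

def segment_from_seed(edge_map, seed):
    """Grow an instance mask from a user-clicked seed pixel via BFS,
    stopping at contour pixels (edge_map value 1). 4-connected.

    edge_map: 2D list of 0/1 values, 1 marking object contours.
    seed: (row, col) of the user click inside the object.
    """
    h, w = len(edge_map), len(edge_map[0])
    mask = [[0] * w for _ in range(h)]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < h and 0 <= c < w):
            continue  # outside the image
        if mask[r][c] or edge_map[r][c]:
            continue  # already filled, or blocked by a contour
        mask[r][c] = 1
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask

# A closed contour fences off the top-left 2x2 region; click inside it.
edges = [[0, 0, 1, 0],
         [0, 0, 1, 0],
         [1, 1, 1, 0],
         [0, 0, 0, 0]]
mask = segment_from_seed(edges, (0, 0))
# Only the 2x2 region enclosed by the contour is filled.
```

This toy flood fill stands in for the paper's learned edge-to-mask step; it illustrates why contour quality directly bounds segment quality, which is what the edge-mask duality loss targets.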