2,860 research outputs found
Backtracking Spatial Pyramid Pooling (SPP)-based Image Classifier for Weakly Supervised Top-down Salient Object Detection
Top-down saliency models produce a probability map that peaks at target
locations specified by a task/goal such as object detection. They are usually
trained in a fully supervised setting involving pixel-level annotations of
objects. We propose a weakly supervised top-down saliency framework using only
binary labels that indicate the presence/absence of an object in an image.
First, the probabilistic contribution of each image region to the confidence of
a CNN-based image classifier is computed through a backtracking strategy to
produce top-down saliency. From a set of saliency maps of an image produced by
fast bottom-up saliency approaches, we select the best saliency map suitable
for the top-down task. The selected bottom-up saliency map is combined with the
top-down saliency map. Features having high combined saliency are used to train
a linear SVM classifier to estimate feature saliency. This is integrated with
combined saliency and further refined through multi-scale superpixel averaging of the
saliency map. We evaluate the proposed weakly supervised top-down saliency framework
and achieve performance comparable to fully supervised approaches. Experiments are
carried out on seven challenging datasets, and quantitative results are compared
against 40 closely related approaches across four different applications.
Comment: 14 pages, 7 figures
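The refinement step described above lends itself to a short illustration. Below is a minimal sketch of multi-scale superpixel averaging, assuming scikit-image's SLIC for superpixel generation; the scales and compactness value are illustrative placeholders, not the paper's settings.

```python
# Minimal sketch: refine a saliency map by averaging it within superpixels
# at several scales. Assumes scikit-image; parameters are illustrative.
import numpy as np
from skimage.segmentation import slic

def superpixel_average(saliency, image, scales=(100, 200, 400)):
    """Average `saliency` inside SLIC superpixels at each scale, then mean-fuse."""
    refined = np.zeros_like(saliency, dtype=np.float64)
    for n_segments in scales:
        labels = slic(image, n_segments=n_segments, compactness=10)
        smoothed = np.zeros_like(refined)
        for lab in np.unique(labels):
            mask = labels == lab
            smoothed[mask] = saliency[mask].mean()  # one constant value per superpixel
        refined += smoothed
    return refined / len(scales)
```

Averaging within superpixels snaps saliency values to region boundaries, which is why this kind of smoothing is a common, cheap refinement for map-level outputs.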
Salient Objects in Clutter: Bringing Salient Object Detection to the Foreground
We provide a comprehensive evaluation of salient object detection (SOD)
models. Our analysis identifies a serious design bias of existing SOD datasets
which assumes that each image contains at least one clearly outstanding salient
object in low clutter. The design bias has led to a saturated high performance
for state-of-the-art SOD models when evaluated on existing datasets. The
models, however, still perform far from being satisfactory when applied to
real-world daily scenes. Based on our analyses, we first identify 7 crucial
aspects that a comprehensive and balanced dataset should fulfill. Then, we
propose a new high quality dataset and update the previous saliency benchmark.
Specifically, our SOC (Salient Objects in Clutter) dataset includes images
with salient and non-salient objects from daily object categories. Beyond
object category annotations, each salient image is accompanied by attributes
that reflect common challenges in real-world scenes. Finally, we report an
attribute-based performance assessment on our dataset.
Comment: ECCV 2018
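As an illustration of what an attribute-based assessment can look like, the sketch below groups per-image scores by attribute tag and averages them; the MAE metric and the triple-based data layout are assumptions for this example, not the SOC benchmark's actual protocol or API.

```python
# Hedged sketch: average a per-image metric over images sharing an attribute.
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and its ground truth."""
    return float(np.abs(pred.astype(np.float64) - gt.astype(np.float64)).mean())

def attribute_scores(samples):
    """samples: iterable of (pred, gt, attrs) where attrs is a set of tag strings."""
    per_attr = {}
    for pred, gt, attrs in samples:
        score = mae(pred, gt)
        for a in attrs:  # an image may carry several attributes
            per_attr.setdefault(a, []).append(score)
    return {a: float(np.mean(v)) for a, v in per_attr.items()}
```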
Instance-Level Salient Object Segmentation
Image saliency detection has recently witnessed rapid progress due to deep
convolutional neural networks. However, none of the existing methods is able to
identify object instances in the detected salient regions. In this paper, we
present a salient instance segmentation method that produces a saliency mask
with distinct object instance labels for an input image. Our method consists of
three steps: estimating a saliency map, detecting salient object contours, and
identifying salient object instances. For the first two steps, we propose a
multiscale saliency refinement network, which generates high-quality salient
region masks and salient object contours. Once integrated with multiscale
combinatorial grouping and a MAP-based subset optimization framework, our
method can generate very promising salient object instance segmentation
results. To promote further research and evaluation of salient instance
segmentation, we also construct a new database of 1000 images and their
pixelwise salient instance annotations. Experimental results demonstrate that
our proposed method is capable of achieving state-of-the-art performance on all
public benchmarks for salient region detection as well as on our new dataset
for salient instance segmentation.
Comment: To appear in CVPR 2017
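As a rough sketch of how a saliency map and a contour map can be turned into instance labels, the toy version below binarizes the saliency map, cuts it along strong contours, and labels the connected components; the paper's actual pipeline relies on multiscale combinatorial grouping and MAP-based subset optimization, which this simplification does not implement.

```python
# Toy stand-in for instance identification: split a binarized saliency map
# along strong contours and label the resulting connected components.
import numpy as np
from scipy import ndimage

def label_instances(saliency, contours, sal_thresh=0.5, con_thresh=0.5):
    fg = saliency >= sal_thresh  # binarize the saliency map
    fg &= contours < con_thresh  # drop contour pixels to separate touching instances
    labels, n_instances = ndimage.label(fg)
    return labels, n_instances
```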
Deep Contrast Learning for Salient Object Detection
Salient object detection has recently witnessed substantial progress due to
powerful features extracted using deep convolutional neural networks (CNNs).
However, existing CNN-based methods operate at the patch level instead of the
pixel level. Resulting saliency maps are typically blurry, especially near the
boundary of salient objects. Furthermore, image patches are treated as
independent samples even when they are overlapping, giving rise to significant
redundancy in computation and storage. In this CVPR 2016 paper, we propose an
end-to-end deep contrast network to overcome the aforementioned limitations.
Our deep network consists of two complementary components, a pixel-level fully
convolutional stream and a segment-wise spatial pooling stream. The first
stream directly produces a saliency map with pixel-level accuracy from an input
image. The second stream extracts segment-wise features very efficiently, and
better models saliency discontinuities along object boundaries. Finally, a
fully connected CRF model can be optionally incorporated to improve spatial
coherence and contour localization in the fused result from these two streams.
Experimental results demonstrate that our deep model significantly improves the
state of the art.
Comment: To appear in CVPR 2016
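A hedged PyTorch sketch of the two-stream design follows; the layer sizes, the per-segment average pooling, and the sigmoid fusion are placeholders chosen for brevity, not the paper's architecture, and the optional CRF refinement is omitted.

```python
# Minimal two-stream sketch (batch size 1): a pixel-level fully convolutional
# stream plus a segment-wise pooling stream, fused into one saliency map.
import torch
import torch.nn as nn

class TwoStreamSaliency(nn.Module):
    def __init__(self):
        super().__init__()
        # Pixel-level stream: image -> per-pixel saliency score.
        self.fcn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )
        # Features and a small MLP for the segment-wise stream.
        self.feat = nn.Conv2d(3, 32, 3, padding=1)
        self.seg_mlp = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, image, segments):
        # image: (1, 3, H, W); segments: (H, W) integer superpixel labels.
        pixel_map = self.fcn(image)         # (1, 1, H, W)
        feats = self.feat(image)[0]         # (32, H, W)
        seg_map = torch.zeros_like(pixel_map)
        for s in segments.unique():
            mask = segments == s
            f = feats[:, mask].mean(dim=1)  # average-pool features inside the segment
            seg_map[0, 0, mask] = self.seg_mlp(f)
        # Fuse the complementary streams.
        return torch.sigmoid(pixel_map + seg_map)
```

The segment-wise stream scores whole regions at once, which lets it respect saliency discontinuities along precomputed segment boundaries while the pixel stream supplies dense detail.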