Saliency-guided Adaptive Seeding for Supervoxel Segmentation
We propose a new saliency-guided method for generating supervoxels in 3D
space. Rather than using an evenly distributed spatial seeding procedure, our
method uses visual saliency to guide the process of supervoxel generation. This
results in densely distributed, small, and precise supervoxels in salient
regions which often contain objects, and larger supervoxels in less salient
regions that often correspond to background. Our approach largely improves the
quality of the resulting supervoxel segmentation in terms of boundary recall
and under-segmentation error on publicly available benchmarks.
Comment: 6 pages, accepted to IROS201
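The core idea above — denser seeding where saliency is high — can be sketched as sampling seed voxels with probability proportional to a saliency volume. This is an illustrative reading of the abstract, not the paper's actual seeding procedure; the function name and the toy volume are hypothetical.

```python
import numpy as np

def adaptive_seeds(saliency, n_seeds, rng=None):
    """Sample supervoxel seed coordinates with density proportional to a
    per-voxel saliency map: salient regions receive more seeds, and thus
    smaller supervoxels (illustrative sketch, not the paper's method)."""
    rng = np.random.default_rng(rng)
    p = saliency.ravel().astype(float)
    p /= p.sum()  # normalize saliency into a sampling distribution
    flat = rng.choice(saliency.size, size=n_seeds, replace=False, p=p)
    # Convert flat indices back to (z, y, x) coordinates.
    return np.stack(np.unravel_index(flat, saliency.shape), axis=1)

# Toy 3D volume: one salient corner attracts most of the seeds.
sal = np.ones((8, 8, 8))
sal[:4, :4, :4] = 10.0
seeds = adaptive_seeds(sal, n_seeds=50, rng=0)
```

The salient corner covers only 1/8 of the volume but, with ten times the saliency weight, draws well over half of the seeds on average.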
Integrated Deep and Shallow Networks for Salient Object Detection
Deep convolutional neural network (CNN) based salient object detection
methods have achieved state-of-the-art performance and outperform
unsupervised methods by a wide margin. In this paper, we propose to integrate
deep and unsupervised saliency for salient object detection under a unified
framework. Specifically, our method takes results of unsupervised saliency
(Robust Background Detection, RBD) and normalized color images as inputs, and
directly learns an end-to-end mapping between inputs and the corresponding
saliency maps. The color images are fed into a Fully Convolutional Neural
Networks (FCNN) adapted from semantic segmentation to exploit high-level
semantic cues for salient object detection. Then the results from deep FCNN and
RBD are concatenated to feed into a shallow network to map the concatenated
feature maps to saliency maps. Finally, to obtain a spatially consistent
saliency map with sharp object boundaries, we fuse superpixel level saliency
map at multi-scale. Extensive experimental results on 8 benchmark datasets
demonstrate that the proposed method outperforms the state-of-the-art
approaches by a clear margin.
Comment: Accepted by IEEE International Conference on Image Processing (ICIP) 201
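The fusion step described above — concatenating the deep FCNN output with the unsupervised RBD map and passing the result through a shallow network — can be sketched with a 1x1 convolution, which on two single-channel maps reduces to a weighted sum plus bias. The weights here are illustrative placeholders, not trained parameters, and the whole function is a simplified stand-in for the paper's shallow network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def shallow_fusion(fcnn_map, rbd_map, w=(1.5, 0.8), b=-0.5):
    """Stand-in for the shallow fusion network: stack the deep (FCNN) and
    unsupervised (RBD) saliency maps channel-wise, then map them to a
    single saliency map with a 1x1 convolution (weights are illustrative,
    not learned)."""
    stacked = np.stack([fcnn_map, rbd_map])           # 2 x H x W
    logits = w[0] * stacked[0] + w[1] * stacked[1] + b  # 1x1 conv = weighted sum
    return sigmoid(logits)                            # saliency in (0, 1)

# Toy 2x2 maps: deep and unsupervised saliency agree on the diagonal.
fcnn = np.array([[0.9, 0.1], [0.2, 0.8]])
rbd  = np.array([[0.8, 0.0], [0.1, 0.9]])
fused = shallow_fusion(fcnn, rbd)
```

Where both cues agree that a pixel is salient, the fused score is pushed above 0.5; where both are low, it stays below.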
Improved salient object detection via boundary components affinity
Existing models that treat the image boundary as the image background are still unable to produce optimal detection. This paper introduces a combination of features at the boundary, called the boundary components affinity, that yields an optimal measure of the image background. It consists of contrast, spatial location, force interaction, and boundary ratio, which together contribute to a novel boundary connectivity measure. The integrated features produce a cleaner background with fewer unwanted foreground patches relative to the ground truth. The extracted boundary features are integrated as the boundary components affinity and used to measure the image background through its boundary connectivity, yielding the final salient object detection. Using verified datasets, the performance of the proposed model was measured and compared with four state-of-the-art models; the model was also tested on close-contrast images. Detection performance was compared and analysed in terms of precision, recall, true positive rate, false positive rate, F-measure, and Mean Absolute Error (MAE). The model reduced the MAE by up to 9.4%.
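Of the four components named above, the boundary-ratio term is the most mechanical, and can be sketched as the count of a region's pixels lying on the image border, normalized by the square root of the region's area (the classic boundary-connectivity form). This sketch covers only that one component; the paper's full affinity also combines contrast, spatial location, and force interaction, and the function name and toy segmentation are hypothetical.

```python
import numpy as np

def boundary_ratio(labels, region):
    """Boundary-ratio component of a boundary-connectivity measure:
    pixels of `region` on the image border, divided by the square root
    of the region's area. High values suggest background; interior
    regions score zero (illustrative sketch of one component only)."""
    mask = labels == region
    border = np.zeros_like(mask)
    border[0, :] = border[-1, :] = True   # top and bottom rows
    border[:, 0] = border[:, -1] = True   # left and right columns
    on_border = np.count_nonzero(mask & border)
    area = np.count_nonzero(mask)
    return on_border / np.sqrt(area)

# Toy segmentation: region 0 is the border ring, region 1 is interior.
lab = np.ones((6, 6), dtype=int)
lab[0, :] = lab[-1, :] = lab[:, 0] = lab[:, -1] = 0
```

Here the border ring (region 0) gets a high boundary ratio, consistent with background, while the interior region (region 1) scores zero, consistent with a foreground object.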