Multi-focus image fusion using maximum symmetric surround saliency detection
In digital photography, two or more objects of a scene cannot be focused at the same time. If we focus on one object, we may lose information about the other objects, and vice versa. Multi-focus image fusion is the process of generating an all-in-focus image from several out-of-focus images. In this paper, we propose a new multi-focus image fusion method based on two-scale image decomposition and saliency detection using the maximum symmetric surround. This method is particularly beneficial because its saliency map highlights the salient information present in the source images with well-defined boundaries. We develop a weight-map construction method based on this saliency information; the weight map identifies the focused and defocused regions of the image very well. We therefore implement a fusion algorithm, based on the weight map, that integrates only focused-region information into the fused image. Unlike multi-scale image fusion methods, this method requires only a two-scale image decomposition, so it is computationally efficient. The proposed method is tested on several multi-focus image datasets and compared with traditional and recently proposed fusion methods using various fusion metrics. The results show that our method gives stable and promising performance compared to existing methods.
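The two-scale fusion pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' exact method: the function names are hypothetical, and a plain local-contrast energy stands in for the maximum-symmetric-surround saliency detector.

```python
import numpy as np

def box_filter(img, size):
    """Simple box (mean) filter with edge padding, used for the two-scale split."""
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    H, W = img.shape
    out = np.zeros((H, W), dtype=np.float64)
    for dy in range(size):
        for dx in range(size):
            out += p[dy:dy + H, dx:dx + W]
    return out / (size * size)

def fuse_two_scale(images, base_size=31, sal_size=9):
    """Fuse multi-focus images via a two-scale decomposition and a weight map.

    Saliency here is a local-contrast proxy (detail-layer energy), standing in
    for the maximum-symmetric-surround detector of the paper.
    """
    imgs = [np.asarray(im, dtype=np.float64) for im in images]
    # Two-scale decomposition: base = smoothed image, detail = residual.
    bases = [box_filter(im, base_size) for im in imgs]
    details = [im - b for im, b in zip(imgs, bases)]
    # Saliency proxy: focused regions carry more high-frequency (detail) energy.
    sal = [box_filter(d * d, sal_size) for d in details]
    # Binary weight map: per pixel, keep the most salient (best-focused) input.
    choice = np.argmax(np.stack(sal), axis=0)
    weights = [(choice == k).astype(np.float64) for k in range(len(imgs))]
    fused_base = sum(w * b for w, b in zip(weights, bases))
    fused_detail = sum(w * d for w, d in zip(weights, details))
    return fused_base + fused_detail
```

Because base + detail reconstructs each input exactly, fusing identical images returns the original image, which makes the decomposition easy to sanity-check.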
Hierarchical image simplification and segmentation based on Mumford-Shah-salient level line selection
Hierarchies, such as the tree of shapes, are popular representations for
image simplification and segmentation thanks to their multiscale structures.
Selecting meaningful level lines (boundaries of shapes) simplifies the
image while keeping salient structures intact. Many image simplification and
segmentation methods are driven by the optimization of an energy functional,
for instance the celebrated Mumford-Shah functional. In this paper, we propose
an efficient approach to hierarchical image simplification and segmentation
based on the minimization of the piecewise-constant Mumford-Shah functional.
This method follows the current trend of producing hierarchical results
rather than a single partition. Contrary to classical
approaches which compute optimal hierarchical segmentations from an input
hierarchy of segmentations, we rely on the tree of shapes, a unique and
well-defined representation equivalent to the image. Simply put, we compute for
each level line of the image an attribute function that characterizes its
persistence under the energy minimization. Then we stack the level lines from
meaningless ones to salient ones through a saliency map based on extinction
values defined on the tree-based shape space. Qualitative illustrations and
quantitative evaluation on Weizmann segmentation evaluation database
demonstrate the state-of-the-art performance of our method.
Comment: Pattern Recognition Letters, Elsevier, 201
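The energy driving the selection above can be made concrete. The sketch below computes the piecewise-constant Mumford-Shah energy of a labelled partition (data fidelity plus a length penalty); it is an illustrative helper, not the tree-of-shapes minimization itself, and the function name is an assumption.

```python
import numpy as np

def mumford_shah_energy(image, labels, lam=1.0):
    """Piecewise-constant Mumford-Shah energy of a labelled partition:
    squared deviation of each region from its mean, plus lam times the
    boundary length, counted as label changes between 4-neighbours."""
    image = np.asarray(image, dtype=np.float64)
    fidelity = 0.0
    for k in np.unique(labels):
        region = image[labels == k]
        fidelity += np.sum((region - region.mean()) ** 2)
    # Boundary length: horizontal plus vertical label transitions.
    length = (np.sum(labels[:, 1:] != labels[:, :-1])
              + np.sum(labels[1:, :] != labels[:-1, :]))
    return fidelity + lam * length
```

A partition whose regions are exactly constant pays only the length term, which is the trade-off the level-line attribute function measures: how much removing a level line changes this energy.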
Saliency-guided integration of multiple scans
We present a novel method…
Inner and Inter Label Propagation: Salient Object Detection in the Wild
In this paper, we propose a novel label propagation based method for saliency
detection. A key observation is that saliency in an image can be estimated by
propagating the labels extracted from the most certain background and object
regions. For most natural images, some boundary superpixels serve as the
background labels, and the saliency of the other superpixels is determined by
ranking their similarities to the boundary labels based on an inner propagation
scheme. For images of complex scenes, we further deploy a 3-cue-center-biased
objectness measure to pick out and propagate foreground labels. A
co-transduction algorithm is devised to fuse both boundary and objectness
labels based on an inter propagation scheme. The compactness criterion decides
whether the incorporation of objectness labels is necessary, thus greatly
enhancing computational efficiency. Results on five benchmark datasets with
pixel-wise accurate annotations show that the proposed method achieves superior
performance compared with the most recent state-of-the-art methods in terms of
different evaluation metrics.
Comment: The full version of the TIP 2015 publicatio
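The propagation idea above can be sketched with a classic graph-diffusion scheme. Note the assumptions: this uses manifold ranking over a superpixel affinity matrix as a generic stand-in, not the paper's specific inner/inter co-transduction algorithm, and the function name is hypothetical.

```python
import numpy as np

def propagate_labels(affinity, seed, alpha=0.99):
    """Diffuse seed labels (e.g. boundary-background or objectness seeds)
    over a superpixel graph by manifold ranking:
        f* = (I - alpha * S)^(-1) y,
    where S is the symmetrically normalised affinity matrix."""
    W = np.asarray(affinity, dtype=np.float64)
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, np.asarray(seed, dtype=np.float64))
```

Seeding with boundary superpixels and taking the complement of the diffused score gives a background-based saliency estimate; seeding with objectness picks gives the foreground counterpart that an inter-propagation step can fuse.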
Deep Saliency with Encoded Low level Distance Map and High Level Features
Recent advances in saliency detection have utilized deep learning to obtain
high level features to detect salient regions in a scene. These advances have
demonstrated superior results over previous works that utilize hand-crafted low
level features for saliency detection. In this paper, we demonstrate that
hand-crafted features can provide complementary information to enhance
performance of saliency detection that utilizes only high level features. Our
method utilizes both high level and low level features for saliency detection
under a unified deep learning framework. The high level features are extracted
using the VGG-net, and the low level features are compared with other parts of
an image to form a low level distance map. The low level distance map is then
encoded using a convolutional neural network (CNN) with multiple 1×1
convolutional and ReLU layers. We concatenate the encoded low level distance
map and the high level features, and connect them to a fully connected neural
network classifier to evaluate the saliency of a query region. Our experiments
show that our method can further improve the performance of state-of-the-art
deep learning-based saliency detection methods.
Comment: Accepted by IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016. Project page: https://github.com/gylee1103/SaliencyEL
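The encode-and-concatenate step described in this abstract is easy to sketch, since a 1×1 convolution is just a per-pixel linear map across channels. This is a minimal NumPy illustration with made-up shapes and weights, not the paper's trained network.

```python
import numpy as np

def conv1x1_relu(x, w, b):
    """1x1 convolution = per-pixel linear map over channels, then ReLU.
    x: (H, W, C_in), w: (C_in, C_out), b: (C_out,)."""
    return np.maximum(x @ w + b, 0.0)

def encode_and_concat(distance_map, high_level, ws, bs):
    """Encode the low-level distance map with stacked 1x1 conv + ReLU layers,
    then concatenate with the high-level (e.g. VGG-net) feature map along the
    channel axis, ready for a fully connected saliency classifier."""
    z = distance_map
    for w, b in zip(ws, bs):
        z = conv1x1_relu(z, w, b)
    return np.concatenate([z, high_level], axis=-1)
```

Because the 1×1 kernels never mix spatial positions, the encoder preserves the distance map's spatial layout while compressing its channels before fusion with the high-level features.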