Saliency detection
There are many methods for detecting salient image regions, among them:
frequency-based saliency detection, global and local contrast based saliency
detection, and context-aware saliency detection. Frequency-based saliency
detection uses spatial frequencies. Global contrast based saliency detection
uses histograms or regions, while local contrast based saliency detection uses
filters. Context-aware saliency detection is the only one of these that also
extracts the image context, and it gives good results provided there is at
least one salient object in the picture that differs from its background.
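As a concrete illustration of the frequency-based family mentioned above, the sketch below implements the spectral-residual idea (saliency recovered from the difference between an image's log-amplitude spectrum and a locally smoothed version of it). This is a minimal NumPy sketch of one representative frequency-based method, not code from the surveyed work; the synthetic test image and smoothing width are illustrative assumptions.

```python
import numpy as np

def spectral_residual_saliency(gray, smooth=3):
    """gray: 2-D float array; returns a saliency map of the same shape."""
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)      # log-amplitude spectrum
    phase = np.angle(f)                     # phase spectrum is kept as-is
    # Local average of the log spectrum via a simple box filter.
    pad = smooth // 2
    padded = np.pad(log_amp, pad, mode='edge')
    avg = np.zeros_like(log_amp)
    for i in range(smooth):
        for j in range(smooth):
            avg += padded[i:i + log_amp.shape[0], j:j + log_amp.shape[1]]
    avg /= smooth ** 2
    residual = log_amp - avg                # the "spectral residual"
    # Back to image space: residual amplitude + original phase.
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal /= sal.max()                        # normalise to [0, 1]
    return sal

# Usage: a flat background with one bright square; the saliency map
# responds around the square rather than the uniform background.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
sal = spectral_residual_saliency(img)
```

The key design point is that only the amplitude spectrum is modified; the phase, which carries the spatial layout, is preserved so the residual maps back to image locations.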
Recurrent Attentional Networks for Saliency Detection
Convolutional-deconvolution networks can be adopted to perform end-to-end
saliency detection, but they do not handle objects at multiple scales well.
To overcome such a limitation, in this work, we propose a recurrent attentional
convolutional-deconvolution network (RACDNN). Using spatial transformer and
recurrent network units, RACDNN is able to iteratively attend to selected image
sub-regions to perform saliency refinement progressively. Besides tackling the
scale problem, RACDNN can also learn context-aware features from past
iterations to enhance saliency refinement in future iterations. Experiments on
several challenging saliency detection datasets validate the effectiveness of
RACDNN, and show that RACDNN outperforms state-of-the-art saliency detection
methods.
Comment: CVPR 201
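The attention step this abstract describes rests on spatial-transformer-style sampling: from attention parameters (a scale and a centre), the network bilinearly samples a sub-region at fixed resolution so it can refine saliency there. The sketch below is a minimal NumPy illustration of that sampling mechanism only, not the authors' RACDNN code; the parameter names (`s`, `tx`, `ty` in [-1, 1] normalised coordinates) and the gradient test image are assumptions for illustration.

```python
import numpy as np

def attend(image, s, tx, ty, out_size=8):
    """Bilinearly sample an out_size x out_size window of `image`
    centred at (tx, ty) with zoom factor 1/s, as a spatial
    transformer with an attention-constrained affine would."""
    h, w = image.shape
    # Target grid in [-1, 1], mapped through the attention parameters:
    # x_src = s * x_tgt + tx (isotropic scale + translation only).
    lin = np.linspace(-1.0, 1.0, out_size)
    ys, xs = np.meshgrid(s * lin + ty, s * lin + tx, indexing='ij')
    # Convert normalised source coordinates to pixel coordinates.
    px = (xs + 1.0) * (w - 1) / 2.0
    py = (ys + 1.0) * (h - 1) / 2.0
    x0 = np.clip(np.floor(px).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(py).astype(int), 0, h - 2)
    wx = px - x0
    wy = py - y0
    # Bilinear interpolation of the four neighbouring pixels.
    return ((1 - wy) * (1 - wx) * image[y0, x0]
            + (1 - wy) * wx * image[y0, x0 + 1]
            + wy * (1 - wx) * image[y0 + 1, x0]
            + wy * wx * image[y0 + 1, x0 + 1])

# Usage: zooming with s=0.5 into the centre of a vertical gradient
# keeps only the middle half of the value range (0.25 to 0.75).
img = np.outer(np.linspace(0, 1, 16), np.ones(16))
patch = attend(img, s=0.5, tx=0.0, ty=0.0)
```

Because the sampler is differentiable in `s`, `tx`, and `ty`, a recurrent network can learn where to attend next from gradients alone, which is what lets such a model visit image sub-regions iteratively.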
Beyond saliency: understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation
Despite the tremendous achievements of deep convolutional neural networks
(CNNs) in many computer vision tasks, understanding how they actually work
remains a significant challenge. In this paper, we propose a novel two-step
understanding method, namely Salient Relevance (SR) map, which aims to shed
light on how deep CNNs recognize images and learn features from areas, referred
to as attention areas, therein. Our proposed method starts out with a
layer-wise relevance propagation (LRP) step which estimates a pixel-wise
relevance map over the input image. Following, we construct a context-aware
saliency map, SR map, from the LRP-generated map which predicts areas close to
the foci of attention instead of the isolated pixels that LRP reveals. In the
human visual system, information about regions matters more for recognition
than information about individual pixels; consequently, our proposed approach
closely simulates human recognition. Experimental results using the ILSVRC2012 validation dataset in
conjunction with two well-established deep CNN models, AlexNet and VGG-16,
clearly demonstrate that our proposed approach concisely identifies not only
key pixels but also attention areas that contribute to the underlying neural
network's comprehension of the given images. As such, our proposed SR map
constitutes a convenient visual interface which unveils the visual attention of
the network and reveals which type of objects the model has learned to
recognize after training. The source code is available at
https://github.com/Hey1Li/Salient-Relevance-Propagation.
Comment: 35 pages, 15 figures
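The LRP step that the abstract's method starts from can be sketched compactly: a forward pass records the activations, then the output score is redistributed layer by layer back to the input, conserving total relevance. Below is a minimal NumPy sketch of the epsilon rule on a tiny two-layer ReLU network; the weights and input are illustrative assumptions, whereas the paper applies this layer-wise scheme to AlexNet and VGG-16 to obtain a pixel-wise relevance map.

```python
import numpy as np

def lrp_epsilon(x, weights, eps=1e-6):
    """Forward pass (ReLU on hidden layers, linear output), then
    redistribute the output score to the input via the LRP epsilon rule."""
    activations = [x]
    for i, W in enumerate(weights):
        x = W @ x
        if i < len(weights) - 1:
            x = np.maximum(0.0, x)      # ReLU on hidden layers only
        activations.append(x)
    relevance = activations[-1].copy()  # start from the output score(s)
    for W, a in zip(reversed(weights), reversed(activations[:-1])):
        z = W @ a                        # pre-activations z_j = sum_i w_ji a_i
        z = z + eps * np.sign(z)         # epsilon stabiliser
        s = relevance / z                # ratio R_j / z_j, element-wise
        relevance = a * (W.T @ s)        # R_i = a_i * sum_j w_ji R_j / z_j
    return relevance

# Usage: relevance conservation -- the input relevances sum (up to the
# epsilon stabiliser) to the network's output score.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 6)), rng.standard_normal((1, 4))]
x = rng.random(6)
R = lrp_epsilon(x, weights)
```

The SR map described above would then be built on top of such a pixel-wise relevance map, grouping the isolated relevant pixels into coherent attention regions.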