Salient object detection in optical remote sensing images (RSI-SOD) is
challenging because salient objects vary dramatically in scale and shape and
their locations are hard to predict. Existing SOD methods achieve strong
performance on natural scene images, but they transfer poorly to RSI-SOD
because of these characteristics of remote sensing images. In this paper, we
propose a novel Attention Guided Network
(AGNet) for SOD in optical RSIs, comprising a position enhancement stage and a
detail refinement stage. Specifically, the position enhancement stage consists
of a semantic attention module and a contextual attention module to coarsely
but reliably localize salient objects. The detail refinement
stage uses the proposed self-refinement module to progressively refine the
predicted results under the guidance of attention and reverse attention. In
addition, a hybrid loss supervises the training of the network, improving the
model from three perspectives: pixel, region, and statistics. Extensive
experiments on two popular benchmarks
demonstrate that AGNet achieves competitive performance compared to other
state-of-the-art methods. The code will be available at
https://github.com/NuaaYH/AGNet.

Comment: accepted by ICANN 2022
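To make two of the abstract's ingredients concrete, the sketch below illustrates (a) reverse-attention-guided residual refinement and (b) a three-term hybrid loss covering the pixel, region, and statistics perspectives. The function names, the specific loss terms (binary cross-entropy, soft IoU, and a soft F-measure term), and the equal weighting are illustrative assumptions drawn from common SOD practice, not AGNet's actual implementation; see the repository above for the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reverse_attention_refine(features, coarse_logits):
    """Hypothetical residual refinement guided by reverse attention.

    The reverse attention map (1 - sigmoid of the coarse prediction)
    re-weights features so that low-confidence regions, typically object
    boundaries and missed parts, receive an extra residual update.
    """
    attention = sigmoid(coarse_logits)    # foreground confidence in (0, 1)
    reverse = 1.0 - attention             # highlights uncertain regions
    return features + features * reverse  # residual update where confidence is low

def bce_loss(pred, gt, eps=1e-7):
    """Pixel-level term: binary cross-entropy averaged over pixels."""
    p = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(gt * np.log(p) + (1.0 - gt) * np.log(1.0 - p)))

def iou_loss(pred, gt, eps=1e-7):
    """Region-level term: 1 - soft intersection-over-union."""
    inter = np.sum(pred * gt)
    union = np.sum(pred) + np.sum(gt) - inter
    return float(1.0 - (inter + eps) / (union + eps))

def fmeasure_loss(pred, gt, beta2=0.3, eps=1e-7):
    """Statistics-level term: 1 - soft F-measure, with beta^2 = 0.3
    as conventionally used in SOD evaluation."""
    tp = np.sum(pred * gt)
    precision = tp / (np.sum(pred) + eps)
    recall = tp / (np.sum(gt) + eps)
    f = (1.0 + beta2) * precision * recall / (beta2 * precision + recall + eps)
    return float(1.0 - f)

def hybrid_loss(pred, gt):
    # Equal weighting of the three terms is an assumption.
    return bce_loss(pred, gt) + iou_loss(pred, gt) + fmeasure_loss(pred, gt)
```

A perfect prediction drives all three terms toward zero, while the IoU and F-measure terms, unlike per-pixel BCE, penalize region-level and map-level disagreement even when most pixels are easy background.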