TriPINet: Tripartite Progressive Integration Network for Image Manipulation Localization
Image manipulation localization aims to distinguish forged regions from the
rest of a test image. Although many strong prior methods have been proposed
for this task, two issues still need further study: 1) how to fuse diverse
types of features carrying forgery clues, and 2) how to progressively
integrate multistage features for better localization performance. In this
paper, we propose a tripartite progressive integration
network (TriPINet) for end-to-end image manipulation localization. First, we
extract both visually perceptible information, e.g., RGB input images, and
visually imperceptible features, e.g., frequency and noise traces, for
forensic feature learning. Second, we develop a guided cross-modality dual-attention (gCMDA)
module to fuse different types of forged clues. Third, we design a set of
progressive integration squeeze-and-excitation (PI-SE) modules to improve
localization performance by appropriately incorporating multiscale features in
the decoder. Extensive experiments are conducted to compare our method with
state-of-the-art image forensics approaches. The proposed TriPINet obtains
competitive results on several benchmark datasets.
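The abstract does not spell out the internals of the gCMDA module, but the general pattern of cross-modality dual attention can be sketched: each stream is re-weighted by channel and spatial gates derived from the other stream before the two are merged. The function names, gate design, and the final sum below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_gate(feat):
    # Per-channel gate from global average pooling; feat is (C, H, W).
    return _sigmoid(feat.mean(axis=(1, 2)))[:, None, None]

def spatial_gate(feat):
    # Per-pixel gate from the channel-wise mean map.
    return _sigmoid(feat.mean(axis=0))[None, :, :]

def dual_attention_fuse(rgb_feat, forensic_feat):
    # Cross-modality dual attention (illustrative sketch): each stream is
    # gated by channel and spatial attention computed from the *other*
    # stream, then the two gated streams are summed.
    rgb_gated = rgb_feat * channel_gate(forensic_feat) * spatial_gate(forensic_feat)
    forensic_gated = forensic_feat * channel_gate(rgb_feat) * spatial_gate(rgb_feat)
    return rgb_gated + forensic_gated
```

In a real network the gates would be learned (e.g., small convolutions instead of plain means), but the cross-wiring of the two modalities is the core idea the module name suggests.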
CIR-Net: Cross-modality Interaction and Refinement for RGB-D Salient Object Detection
Focusing on the issue of how to effectively capture and utilize
cross-modality information in RGB-D salient object detection (SOD) task, we
present a convolutional neural network (CNN) model, named CIR-Net, based on the
novel cross-modality interaction and refinement. For the cross-modality
interaction, 1) a progressive attention guided integration unit is proposed to
sufficiently integrate RGB-D feature representations in the encoder stage, and
2) a convergence aggregation structure is proposed, which feeds the RGB and
depth decoding features into the corresponding RGB-D decoding streams via an
importance gated fusion unit in the decoder stage. For the cross-modality
refinement, we insert a refinement middleware structure between the encoder and
the decoder, in which the RGB, depth, and RGB-D encoder features are further
refined by successively using a self-modality attention refinement unit and a
cross-modality weighting refinement unit. Finally, we predict the saliency
map in the decoder stage from the gradually refined features. Extensive
experiments on six popular RGB-D SOD benchmarks demonstrate that our network
outperforms the state-of-the-art saliency detectors both qualitatively and
quantitatively.
Comment: Accepted by IEEE Transactions on Image Processing 2022, 16 pages, 11
figures
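The importance gated fusion unit is not detailed in the abstract; a minimal NumPy sketch of one plausible gated-fusion design is shown below, where a per-channel sigmoid gate is predicted from pooled descriptors of both streams and used to convexly mix the two modalities. The weight matrix `w` and the gate design are assumptions for illustration, not CIR-Net's actual unit.

```python
import numpy as np

def importance_gated_fusion(rgb_feat, depth_feat, w):
    # Illustrative gated fusion of two decoding streams, each (C, H, W).
    # A per-channel importance gate is predicted from the concatenated
    # global-average-pooled descriptors of both streams (w has shape
    # (C, 2C)), then used to convexly mix the two modalities.
    desc = np.concatenate([rgb_feat.mean(axis=(1, 2)),
                           depth_feat.mean(axis=(1, 2))])      # (2C,)
    gate = 1.0 / (1.0 + np.exp(-(w @ desc)))[:, None, None]    # (C, 1, 1)
    return gate * rgb_feat + (1.0 - gate) * depth_feat
```

Because the gate lies in (0, 1), each output channel is a weighted average of the RGB and depth features, which lets the network emphasize whichever modality is more reliable per channel.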