Scale-wise Convolution for Image Restoration
While scale-invariant modeling has substantially boosted the performance of
visual recognition tasks, it remains largely under-explored in deep-network-based
image restoration. Naively applying scale-invariant techniques
(e.g., multi-scale testing or random-scale data augmentation) to image restoration
tasks usually degrades performance. In this paper, we show that
properly building scale-invariance into neural networks can bring significant
gains in image restoration performance. Inspired by spatial
convolution, which provides shift-invariance, we propose "scale-wise convolution",
which convolves across multiple scales to achieve scale-invariance. In our scale-wise
convolutional network (SCN), we first map the input image to the feature space
and then build a feature pyramid representation via progressive bilinear
down-scaling. The feature pyramid is then passed to a residual network with
scale-wise convolutions. The proposed scale-wise convolution learns to
dynamically activate and aggregate features from different input scales in each
residual building block, in order to exploit contextual information on multiple
scales. In experiments, we compare the restoration accuracy and parameter
efficiency of our model against a range of multi-scale neural network
variants. The proposed network with scale-wise convolution achieves superior
performance on multiple image restoration tasks, including image
super-resolution, image denoising, and image compression artifact removal. Code
and models are available at: https://github.com/ychfan/scn_sr
Comment: AAAI 202
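The pyramid construction and cross-scale aggregation described in the abstract can be sketched as follows. This is a hedged NumPy illustration only: the fixed neighbour weights stand in for SCN's learned scale-wise kernels, and the helper names (`build_pyramid`, `scale_wise_step`) are invented for this sketch, not taken from the authors' code.

```python
import numpy as np

def bilinear_resize(x, h, w):
    """Resize a (H, W) array to (h, w) with bilinear interpolation."""
    H, W = x.shape
    ys = np.linspace(0, H - 1, h)
    xs = np.linspace(0, W - 1, w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = x[np.ix_(y0, x0)] * (1 - wx) + x[np.ix_(y0, x1)] * wx
    bot = x[np.ix_(y1, x0)] * (1 - wx) + x[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def build_pyramid(feat, levels=3, ratio=0.5):
    """Feature pyramid via progressive bilinear down-scaling."""
    pyr = [feat]
    for _ in range(levels - 1):
        h, w = pyr[-1].shape
        pyr.append(bilinear_resize(pyr[-1],
                                   max(1, int(h * ratio)),
                                   max(1, int(w * ratio))))
    return pyr

def scale_wise_step(pyr, weights=(0.25, 0.5, 0.25)):
    """One scale-wise 'convolution' step: each pyramid level aggregates
    its neighbouring scales, resized back to its own resolution.
    Fixed weights replace SCN's learned, dynamic aggregation."""
    out = []
    for i, f in enumerate(pyr):
        acc = np.zeros_like(f)
        for off, w in zip((-1, 0, 1), weights):
            j = i + off
            if 0 <= j < len(pyr):
                acc += w * bilinear_resize(pyr[j], *f.shape)
        out.append(acc)
    return out
```

In the real network this step sits inside each residual building block, so cross-scale context is aggregated repeatedly along the depth of the model.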
Hybrid LSTM and Encoder-Decoder Architecture for Detection of Image Forgeries
With advanced image journaling tools, one can easily alter the semantic
meaning of an image through manipulation techniques such as
copy-clone, object splicing, and removal, misleading viewers. By
contrast, identifying these manipulations is a very challenging
task, as manipulated regions are not visually apparent. This paper proposes a
high-confidence manipulation-localization architecture that uses
resampling features, Long Short-Term Memory (LSTM) cells, and an encoder-decoder
network to segment manipulated regions from non-manipulated ones.
Resampling features capture artifacts such as JPEG quality loss,
upsampling, downsampling, rotation, and shearing. The proposed network exploits
larger receptive fields (spatial maps) and frequency-domain correlations to
analyze the discriminative characteristics of manipulated versus
non-manipulated regions by combining the encoder and LSTM networks. Finally,
the decoder network learns the mapping from low-resolution feature maps to
pixel-wise predictions for image-tamper localization. With the predicted mask
provided by the final (softmax) layer of the proposed architecture, end-to-end
training learns the network parameters through back-propagation
against ground-truth masks. Furthermore, a large image splicing dataset is
introduced to guide the training process. The proposed method is capable of
localizing image manipulations at the pixel level with high precision, as
demonstrated through rigorous experimentation on three diverse datasets
- …
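A rough sense of the patch → resampling-feature → LSTM → score pipeline can be given in NumPy. Everything here is a simplified stand-in, not the paper's method: `resampling_feature` uses a first-order prediction residual's magnitude spectrum rather than a real resampling detector, the LSTM is a single untrained cell with random weights, and a per-patch sigmoid score replaces the encoder-decoder's pixel-wise softmax mask.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_cell(x, h, c, W):
    """One LSTM step; W maps the concatenated [x, h] to the four gate
    pre-activations (input, forget, output, candidate)."""
    z = W @ np.concatenate([x, h])
    i, f, o, g = np.split(z, 4)
    sig = lambda a: 1 / (1 + np.exp(-a))
    c = sig(f) * c + sig(i) * np.tanh(g)
    return sig(o) * np.tanh(c), c

def resampling_feature(patch):
    """Toy stand-in for resampling features: magnitude spectrum of the
    horizontal linear-prediction error (real detectors use richer
    predictors and radon-domain statistics)."""
    err = patch[:, 1:] - patch[:, :-1]   # first-order prediction residual
    return np.abs(np.fft.rfft(err, axis=1)).mean(axis=0)

def forgery_scores(image, patch=8, hidden=16):
    """Scan non-overlapping patches in raster order, feed their features
    through an (untrained) LSTM, and emit a per-patch score in (0, 1)."""
    H, W_img = image.shape
    feats = [resampling_feature(image[r:r + patch, c:c + patch])
             for r in range(0, H - patch + 1, patch)
             for c in range(0, W_img - patch + 1, patch)]
    d = len(feats[0])
    W = rng.normal(0, 0.1, (4 * hidden, d + hidden))   # random gate weights
    w_out = rng.normal(0, 0.1, hidden)                  # random readout
    h = np.zeros(hidden); c = np.zeros(hidden)
    scores = []
    for x in feats:
        h, c = lstm_cell(x, h, c, W)
        scores.append(1 / (1 + np.exp(-w_out @ h)))     # sigmoid score
    return np.array(scores)
```

In the actual architecture the LSTM output is fused with the encoder's spatial maps and upsampled by the decoder to a full-resolution mask; this sketch only illustrates the sequential processing of patch-level frequency cues.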