263 research outputs found
Hybrid LSTM and Encoder-Decoder Architecture for Detection of Image Forgeries
With advanced image journaling tools, one can easily alter the semantic
meaning of an image by exploiting certain manipulation techniques such as
copy-clone, object splicing, and removal, which mislead viewers. At the same
time, identifying these manipulations is very challenging, as manipulated
regions are not visually apparent. This paper proposes a high-confidence
manipulation localization architecture that uses resampling features, Long
Short-Term Memory (LSTM) cells, and an encoder-decoder network to segment
manipulated regions from non-manipulated ones.
Resampling features are used to capture artifacts like JPEG quality loss,
upsampling, downsampling, rotation, and shearing. The proposed network exploits
larger receptive fields (spatial maps) and frequency-domain correlation,
incorporating the encoder and LSTM networks to analyze the discriminative
characteristics between manipulated and non-manipulated regions. Finally, the
decoder network learns the mapping from low-resolution feature maps to
pixel-wise predictions for image tamper localization. With the predicted mask
provided by the final (softmax) layer of the proposed architecture, end-to-end
training is performed to learn the network parameters through back-propagation
using ground-truth masks. Furthermore, a large image splicing dataset is
introduced to guide the training process. The proposed method is capable of
localizing image manipulations at pixel level with high precision, which is
demonstrated through rigorous experimentation on three diverse datasets.
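The final softmax layer and ground-truth-mask supervision described in this
abstract can be illustrated with a minimal pure-Python sketch. The function
names and the two-class logit layout are assumptions for illustration, not the
paper's implementation:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of class logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def pixelwise_loss(logit_map, gt_mask):
    """Mean cross-entropy between per-pixel softmax predictions and a
    binary ground-truth mask (1 = manipulated, 0 = pristine)."""
    total, n = 0.0, 0
    for row_logits, row_gt in zip(logit_map, gt_mask):
        for logits, label in zip(row_logits, row_gt):
            probs = softmax(logits)  # [p_pristine, p_manipulated]
            total += -math.log(probs[label])
            n += 1
    return total / n

def predict_mask(logit_map):
    """Argmax of the softmax output gives the binary tamper mask
    (softmax preserves the argmax of the raw logits)."""
    return [[max(range(len(l)), key=lambda k: softmax(l)[k]) for l in row]
            for row in logit_map]
```

In end-to-end training, this per-pixel loss would be back-propagated through
the decoder, LSTM, and encoder to learn all parameters jointly.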
Camera-based Image Forgery Localization using Convolutional Neural Networks
Camera fingerprints are precious tools for a number of image forensics tasks.
A well-known example is the photo response non-uniformity (PRNU) noise pattern,
a powerful device fingerprint. Here, to address the image forgery localization
problem, we rely on noiseprint, a recently proposed CNN-based camera model
fingerprint. The CNN is trained to minimize the distance between same-model
patches and maximize it otherwise. As a result, the noiseprint
accounts for model-related artifacts just like the PRNU accounts for
device-related non-uniformities. However, unlike the PRNU, it is only mildly
affected by residuals of high-level scene content. The experiments show that
the proposed noiseprint-based forgery localization method improves over the
PRNU-based reference method.
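The training criterion described above (pull same-model patches together, push
different-model patches at least some margin apart) resembles a contrastive
objective. A minimal pure-Python sketch, assuming a siamese setup with a
hypothetical `margin` parameter, not the noiseprint authors' exact loss:

```python
import math

def squared_distance(a, b):
    """Squared Euclidean distance between two flattened noise residuals."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def siamese_loss(res_a, res_b, same_model, margin=4.0):
    """Contrastive-style objective: same-model pairs are penalized by
    their squared distance; different-model pairs are penalized only
    when they fall inside the margin."""
    d2 = squared_distance(res_a, res_b)
    if same_model:
        return d2
    return max(0.0, margin - math.sqrt(d2)) ** 2
```

Minimizing this over many patch pairs drives residuals from the same camera
model toward a shared pattern, which is what makes the learned noiseprint
usable as a model fingerprint for forgery localization.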
To Beta or Not To Beta: Information Bottleneck for Digital Image Forensics
We consider an information theoretic approach to address the problem of
identifying fake digital images. We propose an innovative method to formulate
the issue of localizing manipulated regions in an image as a deep
representation learning problem using the Information Bottleneck (IB), which
has recently gained popularity as a framework for interpreting deep neural
networks. Tampered images pose a serious problem, since digital media are a
ubiquitous part of our lives. Such manipulations are facilitated by the easy
availability of image editing software and aggravated by recent advances in deep generative
models such as GANs. We propose InfoPrint, a computationally efficient solution
to the IB formulation using approximate variational inference and compare it to
a numerical solution that is computationally expensive. Testing on a number of
standard datasets, we demonstrate that InfoPrint outperforms the
state-of-the-art and the numerical solution. It can also detect alterations
made by inpainting GANs.
Comment: 10 page
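A common variational relaxation of the IB objective adds a β-weighted KL
"rate" term to the task loss. The sketch below is a generic variational-IB
loss in pure Python under a diagonal-Gaussian posterior assumption, not
InfoPrint's exact formulation; the function names are illustrative:

```python
import math

def kl_diag_gaussian(mu, logvar):
    """KL divergence from N(mu, diag(exp(logvar))) to a standard normal
    prior, summed over latent dimensions. Upper-bounds the rate I(X;Z)."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, logvar))

def vib_loss(task_ce, mu, logvar, beta=1e-3):
    """Variational IB objective: task cross-entropy (distortion) plus
    beta times the KL rate term. beta trades off compression of the
    representation against task accuracy -- hence 'to beta or not to beta'."""
    return task_ce + beta * kl_diag_gaussian(mu, logvar)
```

Approximate variational inference makes this tractable to optimize with
stochastic gradients, which is the computational advantage over a numerical
solution of the IB equations.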
Digital image processing of the Ghent altarpiece: supporting the painting's study and conservation treatment
In this article, we show progress in certain image processing
techniques that can support the physical restoration of the painting, its
art-historical analysis, or both. We show how analysis of the crack patterns
could indicate possible areas of overpaint, which, after further validation,
may be of great value for the physical restoration campaign. Next, we explore
how digital image inpainting can serve as a simulation for the restoration of
paint losses. Finally, we explore how statistical analysis of relatively simple
and frequently recurring objects (such as the pearls in this masterpiece) may
characterize the consistency of the painter's style and thereby aid both
art-historical interpretation and the physical restoration campaign.
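Inpainting as a restoration simulation can be illustrated by the simplest
diffusion scheme: repeatedly replacing each missing pixel with the average of
its neighbours. This toy sketch is illustrative only and is not the inpainting
method used in the article:

```python
def inpaint(image, mask, iterations=200):
    """Fill masked pixels (mask value 1) by repeatedly averaging their
    4-connected neighbours -- a simple diffusion-based inpainting on a
    2-D grid of grey values."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    for _ in range(iterations):
        nxt = [row[:] for row in img]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    neigh = [img[ny][nx]
                             for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                             if 0 <= ny < h and 0 <= nx < w]
                    nxt[y][x] = sum(neigh) / len(neigh)
        img = nxt
    return img
```

Diffusion propagates surrounding intensities into the loss region, which is
why such simulations can preview how a filled paint loss might read visually.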
r-BTN: Cross-domain Face Composite and Synthesis from Limited Facial Patches
We start by asking an interesting yet challenging question, "If an eyewitness
can only recall the eye features of the suspect, such that the forensic artist
can only produce a sketch of the eyes (e.g., the top-left sketch shown in Fig.
1), can advanced computer vision techniques help generate the whole face
image?" A more generalized question is that if a large proportion (e.g., more
than 50%) of the face/sketch is missing, can a realistic whole face
sketch/image still be estimated? Existing face completion and generation
methods either do not conduct domain transfer learning or cannot handle large
missing areas. For example, the inpainting approach tends to blur the generated
region when the missing area is large (i.e., more than 50%). In this paper, we
exploit the potential of deep learning networks to fill large missing regions
(e.g., as much as 95% missing) and generate realistic faces with high fidelity
across domains. We propose recursive generation by
bidirectional transformation networks (r-BTN) that recursively generates a
whole face/sketch from a small sketch/face patch. The large missing area and
the cross domain challenge make it difficult to generate satisfactory results
using a unidirectional cross-domain learning structure. On the other hand, a
forward and backward bidirectional learning between the face and sketch domains
would enable recursive estimation of the missing region in an incremental
manner (Fig. 1) and yield appealing results. r-BTN also adopts an adversarial
constraint to encourage the generation of realistic faces/sketches. Extensive
experiments have been conducted to demonstrate the superior performance of
r-BTN compared to existing potential solutions.
Comment: Accepted by AAAI 201
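The recursive, incremental estimation described above can be sketched as a
loop that alternates the two domain mappings while growing the estimated
region each pass. The 1-D signal representation, growth factor, and stand-in
generators below are illustrative assumptions, not the trained r-BTN networks:

```python
def recursive_generate(patch, full_size, f2s, s2f, grow=1.5):
    """Skeleton of an r-BTN-style recursion: alternate the forward
    (face -> sketch) and backward (sketch -> face) mappings, enlarging
    the estimated region a little each pass until the full extent is
    covered. `f2s` and `s2f` are hypothetical generator callables."""
    est = list(patch)
    while len(est) < full_size:
        target = min(full_size, int(len(est) * grow) + 1)
        est = s2f(f2s(est, target), target)
    return est

def pad_mean(signal, target):
    """Toy 'generator' that extends a signal by repeating its mean,
    standing in for a learned cross-domain translation network."""
    mean = sum(signal) / len(signal)
    return signal + [mean] * (target - len(signal))
```

Growing the region incrementally lets each pass condition on the previous,
larger estimate, which is why the bidirectional scheme can stay stable where a
single unidirectional pass over a 95%-missing input would not.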