13 research outputs found
Application of Steganography for Anonymity through the Internet
In this paper, a novel steganographic scheme based on chaotic iterations is
proposed. This research work is set within the information hiding security
framework, and applications to anonymity and privacy on the Internet are
considered as well. To guarantee such anonymity, it must be possible to set up
a secret communication channel inside a web page that is both secure and
robust. To achieve this goal, we propose an information hiding scheme that is
stego-secure, which is the highest level of security in a well-defined and
well-studied category of attacks called "watermark-only attacks". This category
of attacks is the best context in which to study steganography-based anonymity
on the Internet. The steganalysis of our steganographic process is also studied
in order to show its security in a realistic test framework.
Comment: 14 pages
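The abstract does not detail the chaotic-iterations construction, but the general idea of chaos-driven embedding can be sketched as follows. This is a minimal illustration, not the paper's scheme: the logistic map, the seed, and LSB substitution are all assumptions made for the example.

```python
# Hypothetical sketch: chaos-driven LSB steganography. Embedding positions
# are derived from a logistic-map orbit shared (via the seed) between the
# two parties. This is loosely inspired by, not identical to, the
# chaotic-iterations scheme the abstract describes.

def logistic_positions(seed, n_cover, n_bits):
    """Derive n_bits distinct embedding positions from a chaotic orbit."""
    x, positions, seen = seed, [], set()
    while len(positions) < n_bits:
        x = 3.99 * x * (1.0 - x)          # logistic map, chaotic regime
        p = int(x * n_cover) % n_cover
        if p not in seen:                  # keep positions distinct
            seen.add(p)
            positions.append(p)
    return positions

def embed(cover, bits, seed=0.4):
    """Overwrite the LSB of chaos-selected cover samples with the bits."""
    stego = list(cover)
    for pos, b in zip(logistic_positions(seed, len(cover), len(bits)), bits):
        stego[pos] = (stego[pos] & ~1) | b
    return stego

def extract(stego, n_bits, seed=0.4):
    """Re-derive the same positions from the shared seed and read LSBs."""
    return [stego[p] & 1 for p in logistic_positions(seed, len(stego), n_bits)]
```

Without knowledge of the seed, an observer cannot locate the modified samples, which is the intuition behind hiding a channel inside a web page's cover data.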
An In-Depth Study on Open-Set Camera Model Identification
Camera model identification refers to the problem of linking a picture to the
camera model used to shoot it. As this might be an enabling factor in different
forensic applications to single out possible suspects (e.g., detecting the
author of child abuse or terrorist propaganda material), many accurate camera
model attribution methods have been developed in the literature. One of their
main drawbacks, however, is the typical closed-set assumption of the problem.
This means that an investigated photograph is always assigned to one camera
model within a set of known ones present during investigation, i.e., training
time, and the fact that the picture can come from a completely unrelated camera
model during actual testing is usually ignored. Under realistic conditions, it
is not possible to assume that every picture under analysis belongs to one of
the available camera models. To deal with this issue, in this paper, we present
the first in-depth study on the possibility of solving the camera model
identification problem in open-set scenarios. Given a photograph, we aim at
detecting whether it comes from one of the known camera models of interest or
from an unknown one. We compare different feature extraction algorithms and
classifiers specially targeting open-set recognition. We also evaluate possible
open-set training protocols that can be applied along with any open-set
classifier, observing that a simple one among those alternatives obtains the
best results.
Thorough testing on independent datasets shows that it is possible to leverage
a recently proposed convolutional neural network as feature extractor paired
with a properly trained open-set classifier to solve the open-set camera model
attribution problem even on small-scale image patches, improving over the
available state-of-the-art solutions.
Comment: Published through IEEE Access journal
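The open-set decision described above can be illustrated with a toy classifier. This is a sketch under assumptions: the paper compares far richer feature extractors and open-set classifiers, while the example below uses a simple nearest-class-mean rule with a rejection threshold, and all names and parameters are illustrative.

```python
import numpy as np

# Hypothetical sketch of open-set recognition on top of a fixed feature
# extractor: assign a sample to the nearest known camera-model mean, but
# reject it as "unknown" when even the closest class is too far away.

class OpenSetNCM:
    def __init__(self, threshold):
        self.threshold = threshold   # tuned on known-model training data
        self.means = {}

    def fit(self, features, labels):
        for lab in set(labels):
            self.means[lab] = np.mean(
                [f for f, l in zip(features, labels) if l == lab], axis=0)

    def predict(self, feature):
        # distance to each known class mean
        dists = {lab: np.linalg.norm(feature - m)
                 for lab, m in self.means.items()}
        lab = min(dists, key=dists.get)
        # reject as an unknown camera model if the closest class is too far
        return lab if dists[lab] <= self.threshold else "unknown"
```

The threshold is what turns a closed-set classifier into an open-set one: a closed-set model would always return one of the known labels, which is exactly the drawback the abstract points out.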
A Reduce Identical Composite Event Transmission Algorithm for Wireless Sensor Networks
In this paper, a Reduce Identical Composite Event Transmission (RICET) algorithm is proposed to solve the problem of detecting composite events in wireless sensor networks. The RICET algorithm extends the traditional data aggregation algorithm to detect composite events; it eliminates redundant transmissions and saves power, thereby extending the lifetime of the entire wireless sensor network. According to the experimental results, the proposed algorithm not only reduces power consumption by approximately 64.78% and 62.67%, but also enhances the sensor node's lifetime by up to 8.97 times compared with some traditional algorithms.
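The core idea, suppressing transmissions of identical composite events, can be sketched as follows. This is a minimal illustration of the principle, not the RICET algorithm itself; the event definition, thresholds, and function names are assumptions.

```python
# Hypothetical sketch of the redundancy-elimination idea: a node maps raw
# readings to a composite event and forwards it only when it differs from
# the last event reported, so identical reports never cost a transmission.

def detect_composite(readings, temp_thresh=60.0, smoke_thresh=0.5):
    """Map raw sensor readings to a composite (fire-like) event tuple."""
    return (readings["temp"] > temp_thresh, readings["smoke"] > smoke_thresh)

def filter_transmissions(reading_stream):
    """Transmit a composite event only when it changes."""
    sent, last = [], None
    for readings in reading_stream:
        event = detect_composite(readings)
        if event != last:          # identical composite event: suppress
            sent.append(event)
            last = event
    return sent
```

Since radio transmission dominates a sensor node's energy budget, every suppressed duplicate directly translates into longer node lifetime, which is the effect the reported figures quantify.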
TTP-free Asymmetric Fingerprinting based on Client Side Embedding
In this paper, we propose a solution for implementing an asymmetric fingerprinting protocol within a client-side embedding distribution framework. The scheme is based on two novel client-side embedding techniques that are able to reliably transmit a binary fingerprint. The first relies on standard spread-spectrum-like client-side embedding, while the second is based on an innovative client-side informed embedding technique. The proposed techniques enable the secure distribution of personalized decryption keys containing the Buyer's fingerprint by means of existing asymmetric protocols, without using a trusted third party. Simulation results show that the fingerprint can be reliably recovered by using either non-blind decoding with standard embedding or blind decoding with informed embedding, and in both cases it is robust with respect to common attacks. To the best of our knowledge, the proposed scheme is the first solution addressing asymmetric fingerprinting within a client-side framework, representing a valid solution to both the customer's rights and scalability issues in multimedia content distribution.
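The first technique builds on standard spread-spectrum embedding, whose mechanics can be sketched as follows. This is a generic textbook sketch under assumptions, not the paper's client-side protocol: each fingerprint bit modulates a pseudorandom carrier, and non-blind decoding correlates the extracted watermark against the carriers; the strength and seed are illustrative.

```python
import numpy as np

# Hypothetical sketch of spread-spectrum fingerprint embedding with
# non-blind correlation decoding. The client-side key-distribution
# machinery of the paper is not modelled here.

def make_carriers(n_bits, n_samples, seed=7):
    """One pseudorandom +/-1 carrier per fingerprint bit."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(n_bits, n_samples))

def embed_fingerprint(host, bits, carriers, strength=2.0):
    """Map bits {0,1} to antipodal symbols and add modulated carriers."""
    symbols = 2 * np.asarray(bits) - 1
    return host + strength * symbols @ carriers

def decode_non_blind(stego, host, carriers):
    """Subtract the host (non-blind) and correlate with each carrier."""
    corr = carriers @ (stego - host)
    return (corr > 0).astype(int).tolist()
```

Because the pseudorandom carriers are nearly orthogonal, each correlation is dominated by its own bit's contribution, which is why the fingerprint can be recovered reliably even under moderate distortion.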
Image Forgery Localization via Fine-Grained Analysis of CFA Artifacts
In this paper, a forensic tool able to discriminate between original and forged regions in an image captured by a digital camera is presented. We make the assumption that the image is acquired using a Color Filter Array, and that tampering removes the artifacts due to the demosaicking algorithm. The proposed method is based on a new feature measuring the presence of demosaicking artifacts at a local level, and on a new statistical model that allows the tampering probability of each 2 × 2 image block to be derived without requiring a priori knowledge of the position of the forged region. Experimental results on different cameras equipped with different demosaicking algorithms demonstrate both the validity of the theoretical model and the effectiveness of our scheme.
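The trace being measured can be illustrated with a toy feature. This is an assumption-laden sketch, not the paper's feature or statistical model: after demosaicking, interpolated pixels are well predicted by their neighbours while acquired (sensor) pixels are not, so the ratio of prediction-error energy between the two lattices of a Bayer-like pattern is far from 1 in pristine regions and close to 1 where the artifacts have been erased.

```python
import numpy as np

# Hypothetical CFA-artifact feature on a green channel: compare bilinear
# prediction-error energy on the (assumed) interpolated lattice against
# the (assumed) acquired lattice. A ratio near 1 suggests demosaicking
# artifacts are absent, i.e. possible tampering.

def prediction_error(green):
    """Bilinear prediction error of each interior pixel."""
    g = green.astype(float)
    pred = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]) / 4.0
    return g[1:-1, 1:-1] - pred

def cfa_feature(green):
    """Error-energy ratio between the two lattices of a Bayer-like CFA."""
    err = prediction_error(green) ** 2
    rows, cols = np.indices(err.shape)
    acquired = err[(rows + cols) % 2 == 0]       # assumed sensor positions
    interpolated = err[(rows + cols) % 2 == 1]   # assumed interpolated positions
    return interpolated.mean() / (acquired.mean() + 1e-12)
```

The paper goes further by turning such local measurements into a per-block tampering probability via a statistical model; the sketch only shows why the measurement is discriminative.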
Image Forgery Localization via Block-Grained Analysis of JPEG Artifacts
In this paper, we propose a forensic algorithm to discriminate between original and forged regions in JPEG images, under the hypothesis that the tampered image presents a double JPEG compression, either aligned (A-DJPG) or non-aligned (NA-DJPG). Unlike previous approaches, the proposed algorithm does not need a suspect region to be manually selected in order to test for the presence or absence of double compression artifacts. Based on an improved and unified statistical model characterizing the artifacts that appear in the presence of either A-DJPG or NA-DJPG, the proposed algorithm automatically computes a likelihood map indicating, for each discrete cosine transform block, the probability of its being doubly compressed. The validity of the proposed approach has been assessed by evaluating the performance of a detector based on thresholding the likelihood map, considering different forensic scenarios. The effectiveness of the proposed method is also confirmed by tests carried out on realistic tampered images. An interesting property of the proposed Bayesian approach is that it can be easily extended to work with traces left by other kinds of processing.
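The statistical trace the model characterizes can be shown with a toy simulation. This is a sketch under assumptions, not the paper's likelihood map: quantizing DCT coefficients twice with incompatible steps leaves periodic empty and over-populated bins in the coefficient histogram, and the toy detector below merely checks for empty bins, whereas the paper fits a full statistical model per block.

```python
import numpy as np

# Hypothetical illustration of aligned double-quantization artifacts:
# after a second quantization with step q2, the histogram of coefficients
# (in units of q2) of a doubly compressed signal skips bins that a singly
# compressed signal would populate.

def quantize(coeffs, q):
    """Simulate DCT quantization with step q (dequantized values)."""
    return np.round(coeffs / q) * q

def looks_doubly_compressed(final_coeffs, q2):
    """Toy detector: empty interior bins betray double quantization."""
    vals = np.round(final_coeffs / q2).astype(int)
    counts = np.bincount(vals - vals.min())
    return bool(np.any(counts == 0))
```

The comb-like gaps appear only when the first quantization step q1 is not a divisor of q2, which is why double-compression detectors are sensitive to the pair of quality factors involved.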
A Framework for Decision Fusion in Image Forensics Based on Dempster-Shafer Theory of Evidence
In this work, we present a decision fusion strategy for image forensics. We define a framework that exploits the information provided by available forensic tools to yield a global judgment about the authenticity of an image. Sources of information are modeled and fused using the Dempster-Shafer Theory of Evidence, since this theory allows us to handle uncertain answers from tools and a lack of knowledge about prior probabilities better than the classical Bayesian approach. The proposed framework permits us to exploit any available information about tool reliability and about the compatibility between the traces the forensic tools look for. The framework is easily extendable: new tools can be added incrementally with little effort. Comparison with logical disjunction- and SVM-based fusion approaches shows an improvement in classification accuracy, particularly when strong generalization capabilities are needed.
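The fusion core, Dempster's rule of combination, can be sketched for two tools assigning mass to {authentic}, {tampered}, and the uncertain set {authentic, tampered}. The rule itself is standard Dempster-Shafer theory; the mass values in the example are illustrative, and the paper's handling of tool reliability and trace compatibility is not modelled here.

```python
# Dempster's rule of combination over the frame {"A" (authentic),
# "T" (tampered)}. Each mass function maps frozensets of hypotheses to
# belief mass; mass landing on the empty intersection is conflict, and
# the remaining mass is renormalized.

def combine(m1, m2):
    """Fuse two mass functions with Dempster's rule."""
    fused, conflict = {}, 0.0
    for s1, w1 in m1.items():
        for s2, w2 in m2.items():
            inter = s1 & s2
            if inter:
                fused[inter] = fused.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2   # mass on the empty set
    # renormalize by the non-conflicting mass
    return {s: w / (1.0 - conflict) for s, w in fused.items()}
```

Unlike a Bayesian posterior, mass can sit on the whole frame {"A", "T"}, which is how an uncertain or unreliable tool expresses "I don't know" without committing to a prior.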