Wide spread spectrum watermarking with side information and interference cancellation
Nowadays, a popular method for additive watermarking is wide spread spectrum. It consists of adding a spread signal to the host document. This signal is obtained as the sum of a set of carrier vectors, which are modulated by the bits to be embedded. To extract these embedded bits, weighted correlations between the watermarked document and the carriers are computed. Unfortunately, even without any attack, the extracted set of bits can be corrupted due to interference with the host signal (host interference) and also due to interference with the other carriers (inter-symbol interference, ISI, caused by the non-orthogonality of the carriers). Some recent watermarking algorithms deal with host interference using side-informed methods, but the inter-symbol interference problem remains open. In this paper, we deal with interference cancellation methods, and we propose to treat ISI as side information and integrate it into the host signal. This leads to a great improvement in extraction performance in terms of signal-to-noise ratio and/or watermark robustness.
Comment: 12 pages, 8 figures
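The embedding and correlation-based extraction described above can be sketched as follows; the Gaussian host, random carriers, and strength alpha are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 1024, 8                           # host length, number of bits (assumed)
host = rng.normal(0.0, 1.0, N)           # host document (illustrative)
carriers = rng.normal(0.0, 1.0, (K, N))  # non-orthogonal random carriers
bits = rng.integers(0, 2, K)
symbols = 2 * bits - 1                   # map {0,1} -> {-1,+1}
alpha = 0.5                              # embedding strength (assumed)

# Embedding: add the sum of the bit-modulated carriers to the host.
stego = host + alpha * (symbols @ carriers)

# Extraction: correlate the watermarked document with each carrier.
# Host interference and ISI both perturb these correlations.
corr = stego @ carriers.T / N
decoded = (corr > 0).astype(int)
```

With a long host and few bits, the interference terms are small and the bits decode correctly; shorter hosts or more carriers make host interference and ISI dominate, which is the regime the paper's cancellation methods target.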
Reversible Embedding to Covers Full of Boundaries
In reversible data embedding, to avoid overflow and underflow problems, boundary pixels are recorded as side information before data embedding, and this side information may be losslessly compressed. Existing algorithms often assume that a natural image has few boundary pixels, so that the size of the side information is small and a relatively high pure payload can be achieved. However, a natural image may actually contain many boundary pixels, meaning that the size of the side information can be very large. Therefore, when the existing algorithms are applied directly, the pure embedding capacity may be insufficient. To address this problem, in this paper we present a new and efficient framework for reversible data embedding in images that have many boundary pixels. The core idea is to losslessly preprocess the boundary pixels so as to significantly reduce the side information. Experimental results have shown the superiority and applicability of our work.
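The role of boundary pixels as side information can be illustrated with a small sketch; the plus/minus-1 pre-shift and the zlib-compressed location map below are assumptions for illustration, not the paper's preprocessing:

```python
import numpy as np
import zlib

img = np.array([[  0, 255, 100],
                [255,   3,   0],
                [200, 255,  50]], dtype=np.uint8)

# Boundary pixels (0 or 255) would underflow/overflow under a +/-1 shift.
boundary = (img == 0) | (img == 255)

# Side information: a losslessly compressed location map of boundary pixels.
side_info = zlib.compress(np.packbits(boundary).tobytes())

# Pre-shift boundary pixels into the safe range; the map lets us undo this.
safe = img.astype(np.int16)
safe[img == 0] += 1
safe[img == 255] -= 1

# Exact recovery using only the side information:
bmap = np.unpackbits(np.frombuffer(zlib.decompress(side_info), np.uint8),
                     count=img.size).reshape(img.shape).astype(bool)
restored = safe.copy()
restored[bmap & (safe == 1)] = 0
restored[bmap & (safe == 254)] = 255
```

For an image full of boundaries, the raw location map is large even after compression, which is exactly the cost the paper's framework aims to reduce.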
Reversible watermarking scheme with image-independent embedding capacity
Permanent distortion is one of the main drawbacks of all irreversible watermarking schemes. Attempts to recover the original signal after it passes the authentication process began only a few years ago. Some common problems, such as salt-and-pepper artefacts owing to intensity wraparound and low embedding capacity, can now be resolved. However, some significant problems remain unsolved. First, the embedding capacity is signal-dependent, i.e., capacity varies significantly depending on the nature of the host signal. The direct impact of this is compromised security for signals with low capacity; some signals may even be non-embeddable. Secondly, while seriously tackled in irreversible watermarking schemes, the well-known problem of block-wise dependence, which opens a security gap for the vector quantisation attack and the transplantation attack, has not been addressed by researchers of reversible schemes. This work proposes a reversible watermarking scheme with near-constant, signal-independent embedding capacity and immunity to the vector quantisation and transplantation attacks.
Prediction-error of Prediction Error (PPE)-based Reversible Data Hiding
This paper presents a novel reversible data hiding (RDH) algorithm for gray-scale images, in which the prediction-error of prediction error (PPE) of a pixel is used to carry the secret data. In the proposed method, the pixels to be embedded are first predicted from their neighboring pixels to obtain the corresponding prediction errors (PEs). Then, by exploiting the PEs of the neighboring pixels, predictions of the PEs of the pixels can be determined. A sorting technique based on the local complexity of a pixel is used to collect the PPEs into an ordered PPE sequence, so that smaller PPEs are processed first for data embedding. By reversibly shifting the PPE histogram (PPEH) with optimized parameters, the pixels corresponding to the altered PPEH bins can finally be modified to carry the secret data. Experimental results indicate that the proposed method benefits from the prediction procedure of the PEs, the sorting technique, and the parameter selection, and therefore outperforms some state-of-the-art works in terms of payload-distortion performance when applied to different images.
Comment: There is no technical difference from previous versions, only some minor word corrections. A 2-page summary of this paper was accepted by the ACM IH&MMSec'16 "Ongoing work" session. My homepage: hzwu.github.i
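The two-level prediction can be sketched as follows; the mean-of-left-and-top pixel predictor and the left-neighbor PE predictor are simple stand-ins for the paper's (unspecified here) predictors:

```python
import numpy as np

img = np.array([[50, 52, 51, 53],
                [49, 50, 52, 54],
                [48, 51, 53, 55],
                [50, 52, 54, 56]], dtype=np.int64)

# First level: predict each interior pixel as the rounded mean of its left
# and top neighbors (an assumed simple predictor), giving the PE.
pe = np.zeros_like(img)
for i in range(1, img.shape[0]):
    for j in range(1, img.shape[1]):
        pred = (img[i, j - 1] + img[i - 1, j]) // 2
        pe[i, j] = img[i, j] - pred

# Second level: predict each PE from the PE of its left neighbor (assumed),
# giving the prediction-error of the prediction error (PPE).
ppe = np.zeros_like(pe)
for i in range(1, img.shape[0]):
    for j in range(2, img.shape[1]):
        ppe[i, j] = pe[i, j] - pe[i, j - 1]
```

Because PEs of nearby pixels are correlated, the PPE values concentrate even more sharply around zero than the PEs, which is what makes the shifted PPE histogram efficient for embedding.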
Robust high-capacity audio watermarking based on FFT amplitude modification
This paper proposes a novel robust audio watermarking algorithm that embeds data and extracts it in a bit-exact manner by changing the magnitudes of the FFT spectrum. The key point is selecting a frequency band for embedding based on a comparison between the original and the MP3 compressed/decompressed signal, together with a suitable scaling factor. The experimental results show that the method has a very high capacity (about 5 kbps) without significant perceptual distortion (ODG about -0.25) and provides robustness against common audio signal processing operations such as added noise, filtering and MPEG compression (MP3). Furthermore, the proposed method has a larger capacity (ratio of embedded bits to host bits) than recent image data hiding methods.
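The general idea of embedding bits by modifying FFT magnitudes in a chosen band can be sketched as follows; the parity-quantization rule, band, and scaling factor delta are QIM-style stand-ins chosen by us, since the abstract does not fully specify the paper's modification rule:

```python
import numpy as np

rng = np.random.default_rng(1)
frame = rng.normal(0.0, 0.1, 2048)       # one audio frame (illustrative)

spectrum = np.fft.rfft(frame)
band = slice(200, 208)                   # assumed embedding band
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
delta = 0.5                              # assumed scaling factor

# Embed: quantize each magnitude in the band so that its parity (in units
# of delta) encodes the bit, keeping the phase unchanged.
mag = np.abs(spectrum[band])
phase = np.angle(spectrum[band])
q = np.round(mag / delta)
q += (q.astype(int) % 2 != bits)         # force parity to match the bit
spectrum[band] = q * delta * np.exp(1j * phase)
stego = np.fft.irfft(spectrum, n=len(frame))

# Extract: read the bits back from the parity of the quantized magnitudes.
mag2 = np.abs(np.fft.rfft(stego)[band])
decoded = np.round(mag2 / delta).astype(int) % 2
```

Larger delta increases robustness to perturbations of the magnitudes (noise, compression) at the cost of more audible distortion, mirroring the capacity/transparency trade-off the abstract reports.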
Bounded-Distortion Metric Learning
Metric learning aims to embed one metric space into another to benefit tasks
like classification and clustering. Although a greatly distorted metric space
has a high degree of freedom to fit training data, it is prone to overfitting
and numerical inaccuracy. This paper presents {\it bounded-distortion metric
learning} (BDML), a new metric learning framework which amounts to finding an
optimal Mahalanobis metric space with a bounded-distortion constraint. An
efficient solver based on the multiplicative weights update method is proposed.
Moreover, we generalize BDML to pseudo-metric learning and devise the
semidefinite relaxation and a randomized algorithm to approximately solve it.
We further provide theoretical analysis to show that distortion is a key
ingredient for stability and generalization ability of our BDML algorithm.
Extensive experiments on several benchmark datasets yield promising results.
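One reading of the bounded-distortion constraint can be sketched as follows; this is our interpretation from the abstract, and the paper's exact formulation may differ:

```python
import numpy as np

# A Mahalanobis metric is parameterized by a PSD matrix M:
#     d_M(x, y) = sqrt((x - y)^T M (x - y)).
# Relative to the Euclidean metric, the distortion is governed by
# sqrt(lambda_max(M) / lambda_min(M)), so a bounded-distortion constraint
# can be read as a bound on the condition number of M (an assumption).

M = np.array([[4.0, 1.0],
              [1.0, 2.0]])

eigvals = np.linalg.eigvalsh(M)          # ascending order
distortion_bound = float(np.sqrt(eigvals[-1] / eigvals[0]))

def d_M(x, y, M):
    """Mahalanobis distance between x and y under metric matrix M."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(diff @ M @ diff))
```

Capping this ratio limits how much the learned space can stretch some directions relative to others, which is the regularizing effect the abstract credits for stability and generalization.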
Perfectly Secure Steganography: Capacity, Error Exponents, and Code Constructions
An analysis of steganographic systems subject to the following perfect
undetectability condition is presented in this paper. Following embedding of
the message into the covertext, the resulting stegotext is required to have
exactly the same probability distribution as the covertext. Then no statistical
test can reliably detect the presence of the hidden message. We refer to such
steganographic schemes as perfectly secure. A few such schemes have been
proposed in recent literature, but they have vanishing rate. We prove that
communication performance can potentially be vastly improved; specifically, our
basic setup assumes independently and identically distributed (i.i.d.)
covertext, and we construct perfectly secure steganographic codes from public
watermarking codes using binning methods and randomized permutations of the
code. The permutation is a secret key shared between encoder and decoder. We
derive (positive) capacity and random-coding exponents for perfectly-secure
steganographic systems. The error exponents provide estimates of the code
length required to achieve a target low error probability. We address the
potential loss in communication performance due to the perfect-security
requirement. This loss is the same as the loss obtained under a weaker order-1
steganographic requirement that would just require matching of first-order
marginals of the covertext and stegotext distributions. Furthermore, no loss
occurs if the covertext distribution is uniform and the distortion metric is
cyclically symmetric; steganographic capacity is then achieved by randomized
linear codes. Our framework may also be useful for developing computationally
secure steganographic systems that have near-optimal communication performance.
Comment: To appear in IEEE Trans. on Information Theory, June 2008; ignore Version 2 as the file was corrupted.
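The uniform-covertext case admits a particularly simple toy illustration; this additive, one-time-pad-style sketch ignores the distortion constraint and the binning/permutation machinery of the paper's actual construction:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 256, 16                       # alphabet size, block length (assumed)
key = rng.integers(0, m, n)          # secret key shared by encoder/decoder

# If the covertext is i.i.d. uniform over Z_m, emitting (message + key) mod m
# produces a stegotext that is itself exactly i.i.d. uniform over Z_m, so its
# distribution matches the covertext's and no statistical test can detect
# the embedding -- the perfect-security condition of the paper.
message = rng.integers(0, m, n)
stego = (message + key) % m
recovered = (stego - key) % m
```

The exact distribution match here is what the paper generalizes: for non-uniform covertexts, the binning and randomized permutations are needed to achieve the same indistinguishability at positive rate.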