Video object watermarking robust to manipulations
This paper presents a watermarking scheme that embeds a signature in video objects as defined by the MPEG-4 video standard. The constraints associated with this standard differ markedly from those of classical video watermarking schemes: the mark must remain detectable after video object manipulations such as rotation or scaling. Principal component analysis and warping techniques applied to the object's shape are used to re-synchronize the mark after geometric manipulations. The mark is embedded by adding an oriented random sequence, and detected using a correlation criterion. The results show that the presented scheme can detect the mark after bit-rate modification, object-shape sub-sampling, and geometric manipulations (scaling and rotation).
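The embed-by-addition and detect-by-correlation steps described above can be sketched as a toy example (this is not the paper's exact method, which also synchronizes on the object's shape; all names, parameters, and thresholds are illustrative assumptions):

```python
import numpy as np

def embed(cover, key, strength=2.0):
    """Add a key-derived pseudo-random +/-1 sequence to the cover signal."""
    rng = np.random.default_rng(key)
    w = rng.choice([-1.0, 1.0], size=cover.shape)
    return cover + strength * w, w

def detect(signal, key, threshold=0.5):
    """Regenerate the sequence from the key and correlate; compare to a threshold."""
    rng = np.random.default_rng(key)
    w = rng.choice([-1.0, 1.0], size=signal.shape)
    # Correlation is near `strength` if the mark is present, near 0 otherwise.
    corr = float(np.mean(signal * w))
    return corr > threshold, corr

cover = np.random.default_rng(0).normal(0.0, 10.0, size=128 * 128)
stego, _ = embed(cover, key=42)
present, _ = detect(stego, key=42)   # mark embedded with the same key
absent, _ = detect(cover, key=42)    # unmarked signal
```

Because the detector only needs the key, not the cover, this is a blind correlation detector; the paper's geometric re-synchronization step would run before this correlation.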
Understanding the Role of Willingness to Cannibalize in New Service Development
The objective of the present study is to develop a model explaining new service development behavior using the concept of willingness to cannibalize existing sales, current capabilities, and prior investments. The paper is structured as follows. First, we review the literature relevant to our work. Second, we explain our conceptual model. Next, we report on the research method used and present empirical evidence from 217 service firms. We close with a discussion and recommendations for future research.
Side-Informed Steganography for JPEG Images by Modeling Decompressed Images
Side-informed steganography has always been among the most secure approaches
in the field. However, a majority of existing methods for JPEG images use the
side information, here the rounding error, in a heuristic way. For the first
time, we show that the usefulness of the rounding error comes from its
covariance with the embedding changes. Unfortunately, this covariance between
continuous and discrete variables is not analytically available. An estimate of
the covariance is proposed, which allows steganography to be modeled as a change in
the variance of DCT coefficients. Since steganalysis today is best performed in
the spatial domain, we derive a likelihood ratio test to preserve a model of a
decompressed JPEG image. The proposed method then bounds the power of this test
by minimizing the Kullback-Leibler divergence between the cover and stego
distributions. We experimentally demonstrate in two popular datasets that it
achieves state-of-the-art performance against deep learning detectors.
Moreover, by considering a different pixel variance estimator for images
compressed with Quality Factor 100, even greater improvements are obtained.
Comment: 13 pages, 7 figures, 1 table, submitted to IEEE Transactions on Information Forensics & Security
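The variance-change model above can be made concrete with the standard closed form for the KL divergence between two zero-mean Gaussians; this shows only the modeling idea, with made-up per-coefficient variances, not the paper's full cost design:

```python
import numpy as np

def kl_gauss_var(var_cover, var_stego):
    """KL( N(0, var_cover) || N(0, var_stego) ) for zero-mean Gaussians."""
    r = var_cover / var_stego
    return 0.5 * (r - 1.0 - np.log(r))

# Hypothetical per-coefficient variances; embedding adds a small variance delta.
var_cover = np.array([1.0, 4.0, 9.0])
var_stego = var_cover + 0.1

# Under an independence assumption, per-coefficient divergences sum; bounding
# this total is what limits the power of an optimal detector.
total_kl = float(np.sum(kl_gauss_var(var_cover, var_stego)))
```

The divergence is zero exactly when the variances match and grows with the variance change, which is why minimizing it over the embedding-change probabilities bounds the detector's power.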
Using multiple re-embeddings for quantitative steganalysis and image reliability estimation
Quantitative steganalysis aims at estimating the amount of payload embedded inside a document. In this paper, JPEG images are considered; using a re-embedding-based methodology, it is possible to estimate the number of original embedding changes performed on the image by a stego source and to slightly improve the estimate over classical quantitative steganalysis methods. The major advance of this methodology is that it also provides a confidence interval on the estimated payload. This confidence interval makes it possible to evaluate the difficulty of an image, in terms of steganalysis, by estimating the reliability of the output. The regression technique is the OP-ELM, and the reliability is estimated using a linear approximation. The methodology is applied with a publicly available stego algorithm, regression model, and database of images. It is generic and can be used for any quantitative steganalysis problem of this class.
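The confidence-interval idea can be illustrated with ordinary least squares standing in for the OP-ELM used by the paper: fit payload against a steganalysis feature on labelled data, then turn the residual spread into an approximate interval for a new image. Everything here (data, feature, names) is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
payload = rng.uniform(0.0, 1.0, size=200)              # known embedded payloads
feature = 2.0 * payload + rng.normal(0.0, 0.05, 200)   # hypothetical scalar feature

# Least-squares fit: payload ~ a * feature + b.
A = np.column_stack([feature, np.ones_like(feature)])
(a, b), *_ = np.linalg.lstsq(A, payload, rcond=None)

# Residual spread gives a crude reliability estimate for predictions.
residuals = payload - (a * feature + b)
sigma = residuals.std(ddof=2)

x_new = 1.0                          # feature measured on a suspect image
estimate = a * x_new + b
interval = (estimate - 1.96 * sigma, estimate + 1.96 * sigma)
```

A wide interval flags the image as hard to steganalyze reliably, which is the point of the paper's reliability estimation; a proper treatment would also account for uncertainty in the fitted coefficients.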
Errorless Robust JPEG Steganography using Outputs of JPEG Coders
Robust steganography is a technique of hiding secret messages in images so
that the message can be recovered after additional image processing. One of the
most popular processing operations is JPEG recompression. Unfortunately, most
of today's steganographic methods addressing this issue only provide a
probabilistic guarantee of recovering the secret and are consequently not
errorless. That is unacceptable since even a single unexpected change can make
the whole message unreadable if it is encrypted. We propose to create a robust
set of DCT coefficients by inspecting their behavior during recompression,
which requires access to the targeted JPEG compressor. This is done by dividing
the DCT coefficients into 64 non-overlapping lattices because one embedding
change can potentially affect many other coefficients from the same DCT block
during recompression. The robustness is then combined with standard
steganographic costs creating a lattice embedding scheme robust against JPEG
recompression. Through experiments, we show that the size of the robust set and
the scheme's security depend on the ordering of lattices during embedding. We
verify the validity of the proposed method with three typical JPEG compressors
and benchmark its security for various embedding payloads, three different ways
of ordering the lattices, and a range of Quality Factors. Finally, this method
is errorless by construction, meaning the embedded message will always be
readable.
Comment: 10 pages, 11 figures, 1 table, submitted to IEEE Transactions on Dependable and Secure Computing
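The 64-lattice decomposition mentioned above can be sketched directly: coefficients at the same position within their 8x8 block form one lattice, so each lattice can be processed independently. This shows only the grouping step, with random data standing in for quantized DCT coefficients:

```python
import numpy as np

def split_into_lattices(dct_plane):
    """Map each (i, j) in [0,8)x[0,8) to the lattice of coefficients at that
    position across all 8x8 blocks of the plane."""
    h, w = dct_plane.shape
    assert h % 8 == 0 and w % 8 == 0
    return {(i, j): dct_plane[i::8, j::8] for i in range(8) for j in range(8)}

dct_plane = np.random.default_rng(0).integers(-10, 11, size=(64, 64))
lattices = split_into_lattices(dct_plane)
```

Because one embedding change can perturb other coefficients of the same block during recompression, processing the 64 lattices in sequence lets each embedding pass see the changes already committed in earlier lattices.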
Compatibility and Timing Attacks for JPEG Steganalysis
This paper introduces a novel compatibility attack to detect a steganographic
message embedded in the DCT domain of a JPEG image at high-quality factors
(close to 100). Because JPEG compression is not a surjective function, i.e.,
not every DCT block can be obtained from a pixel block, embedding a message in
the DCT domain can create incompatible blocks. We propose a method to find such
a block, which directly proves that a block has been modified during the
embedding. This theoretical method offers several advantages: it is completely
independent of Cover Source Mismatch, has good detection power, and is
perfectly reliable, since false alarms are impossible once an incompatible
block is found. We show that finding an incompatible block is
equivalent to proving the infeasibility of an Integer Linear Programming
problem. However, solving such a problem requires considerable computational
power and has not yet been achieved for 8x8 blocks. Instead, a timing-attack
approach is presented that performs steganalysis with potentially no false
alarms, given large computing power.
Comment: Workshop on Information Hiding and Multimedia Security, ACM, June 2023, Chicago, United States
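The notion of an incompatible block can be illustrated on a toy 2-point "JPEG": brute force over all pixel pairs replaces the Integer Linear Programming feasibility test that a real 8x8 block would require. This is a deliberately simplified assumption, not the paper's attack:

```python
import itertools
import math

def quantize_pair(x0, x1, q=1.0):
    """2-point DCT of a pixel pair followed by rounding (a toy JPEG step)."""
    c0 = (x0 + x1) / math.sqrt(2.0)
    c1 = (x0 - x1) / math.sqrt(2.0)
    return round(c0 / q), round(c1 / q)

def is_compatible(k0, k1, q=1.0, pixel_range=range(0, 256)):
    """Brute force: does any valid pixel pair quantize to (k0, k1)?"""
    return any(quantize_pair(x0, x1, q) == (k0, k1)
               for x0, x1 in itertools.product(pixel_range, repeat=2))

genuine = quantize_pair(10, 20)       # produced by a real pixel pair
compatible = is_compatible(*genuine)  # compatible by construction

# Scan for a coefficient pair no pixel pair in [0, 64) can produce, either
# because of the transform's geometry or the pixel-range constraint. Observing
# such a pair in an image proves the block was modified after compression.
incompatible = next(
    ((k0, k1) for k0 in range(40) for k1 in range(40)
     if not is_compatible(k0, k1, pixel_range=range(0, 64))),
    None,
)
```

For real 8x8 blocks the search space is 256^64, which is why the paper formulates the question as ILP feasibility and, when that is too expensive, falls back on a timing attack.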
Adaptive quantization for the steganalysis of textured images
We seek to improve the performance of a steganalysis scheme (i.e., the detection of hidden messages) on textured images. The steganographic scheme under study modifies certain pixels of the image by a ±1 perturbation, and the steganalysis scheme uses features built from the empirical conditional probability of differences of 4 neighboring pixels. In its original version, the steganalysis is not very effective on textured images, and this work explores several quantization techniques: first a larger quantization step, then adaptive scalar or vector quantization. The cells of the adaptive quantization are generated using K-means or a "balanced" K-means, so that each cell quantizes approximately the same number of samples. We obtain a maximum classification gain of 3% with a uniform quantization step of 3. Using the balanced K-means algorithm on [-18,18], the gain over the baseline version is 4.7%.
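In one dimension, the "balanced" cell idea above can be approximated by placing cell boundaries at empirical quantiles, so each cell receives roughly the same number of samples. This quantile scheme is a stand-in for the paper's balanced K-means, and the pixel-difference data is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
diffs = rng.laplace(0.0, 4.0, size=10_000)   # neighboring-pixel differences
diffs = np.clip(diffs, -18, 18)              # the paper works on [-18, 18]

# Boundaries at the 1/9, 2/9, ..., 8/9 quantiles give 9 roughly equal cells.
n_cells = 9
edges = np.quantile(diffs, np.linspace(0.0, 1.0, n_cells + 1)[1:-1])
labels = np.searchsorted(edges, diffs)       # cell index per sample

counts = np.bincount(labels, minlength=n_cells)
```

Uniform quantization would instead concentrate most samples in the cells near zero (pixel differences are heavy-tailed but peaked), which is the imbalance the adaptive cells are meant to fix.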
A Natural Steganography Embedding Scheme Dedicated to Color Sensors in the JPEG Domain
Using Natural Steganography (NS), a cover raw image acquired at sensitivity ISO 1 is transformed into a stego image whose statistical distribution is similar to that of a cover image acquired at sensitivity ISO 2 > ISO 1. This paper proposes such an embedding scheme for color sensors in the JPEG domain, thus extending the prior art proposed for the pixel domain and for the JPEG domain with monochrome sensors. We first show that color sensors generate strong intra-block and inter-block dependencies between DCT coefficients and that these dependencies are due to the demosaicking step in the development process. Capturing these dependencies using an empirical covariance matrix, we propose a pseudo-embedding algorithm on greyscale JPEG images which uses up to four sub-lattices and 64 lattices to embed information while preserving the estimated correlations among DCT coefficients. We then compute an approximation of the average embedding rate w.r.t. the JPEG quality factor and evaluate the empirical security of the proposed scheme for linear and non-linear demosaicking schemes. Our experiments show that we can achieve a high capacity (around 2 bits per nzAC) with high empirical security (P_E ≈ 30% using DCTR at QF 95).
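The covariance-capture step described above can be sketched as: cut a plane into 8x8 blocks, apply an orthonormal 2-D DCT to each, and estimate a 64x64 empirical covariance over the vectorized blocks. The data here is synthetic, with neighbor correlation injected to mimic demosaicking; this is the estimation step only, not the embedding algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
plane = rng.normal(0.0, 1.0, size=(256, 256))
plane = (plane + np.roll(plane, 1, axis=1)) / 2.0   # inject neighbor correlation

# Orthonormal 8-point DCT-II matrix.
k = np.arange(8)
D = np.sqrt(2.0 / 8.0) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / 16.0)
D[0] /= np.sqrt(2.0)

# Cut into 8x8 blocks and apply the 2-D DCT to each: D @ block @ D.T.
blocks = plane.reshape(32, 8, 32, 8).transpose(0, 2, 1, 3).reshape(-1, 8, 8)
dct_blocks = np.einsum('ij,bjk,lk->bil', D, blocks, D)

# Empirical 64x64 covariance over vectorized blocks: off-diagonal entries
# reflect intra-block dependencies between DCT coefficients.
cov = np.cov(dct_blocks.reshape(-1, 64), rowvar=False)
```

An embedding that preserves this covariance (rather than treating coefficients as independent) is what keeps the stego statistics close to those of a genuinely higher-ISO cover.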