Deep Learning for Reversible Steganography: Principles and Insights
Deep-learning-centric reversible steganography has emerged as a
promising research paradigm. A direct way of applying deep learning to
reversible steganography is to construct a pair of encoder and decoder, whose
parameters are trained jointly, thereby learning the steganographic system as a
whole. This end-to-end framework, however, falls short of the reversibility
requirement because it is difficult for this kind of monolithic system, as a
black box, to create or duplicate intricate reversible mechanisms. In response
to this issue, a recent approach is to carve up the steganographic system and
work on modules independently. In particular, neural networks are deployed in
an analytics module to learn the data distribution, while an established
mechanism is called upon to handle the remaining tasks. In this paper, we
investigate the modular framework and deploy deep neural networks in a
reversible steganographic scheme referred to as prediction-error modulation, in
which an analytics module serves the purpose of pixel intensity prediction. The
primary focus of this study is on deep-learning-based context-aware
pixel intensity prediction. We address the unsolved issues reported in related
literature, including the impact of pixel initialisation on prediction accuracy
and the influence of uncertainty propagation in dual-layer embedding.
Furthermore, we establish a connection between context-aware pixel intensity
prediction and low-level computer vision and analyse the performance of several
advanced neural networks.
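As a concrete illustration, the modulation step can be sketched in a few lines. In the paper the predictor is a context-aware deep neural network; the mean-of-neighbours `predict` below is only a hypothetical stand-in for that analytics module, and the embedding rule shown is classic prediction-error expansion (one payload bit per pixel).

```python
def predict(left, up):
    # Stand-in context predictor; the paper's analytics module would be a
    # deep neural network producing this intensity estimate instead.
    return (left + up) // 2

def embed_bit(pixel, pred, bit):
    # Double the prediction error and hide one payload bit in its LSB.
    error = pixel - pred
    return pred + 2 * error + bit

def extract_bit(marked, pred):
    # Recover the payload bit, then invert the expansion to restore the
    # original pixel exactly (the reversibility requirement).
    expanded = marked - pred
    bit = expanded & 1
    return bit, pred + (expanded - bit) // 2
```

Overflow handling and the dual-layer scan order are omitted; what matters is that the decoder can re-derive the same prediction from unmodified context pixels, which is exactly why context pixels must be left untouched within a layer.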
Watermarking Graph Neural Networks by Random Graphs
Many learning tasks involve graph data, which contains rich relational
information among elements; this has led to an increasing number of graph
neural network (GNN) models being deployed in industrial products to improve
the quality of service. However, it also raises challenges for model
authentication. It is necessary to protect the ownership of GNN models, which
motivates us to present a watermarking method for GNN models in this paper. In
the proposed
method, an Erdos-Renyi (ER) random graph with random node feature vectors and
labels is randomly generated as a trigger to train the GNN to be protected
together with the normal samples. During model training, the secret watermark
is embedded into the label predictions of the ER graph nodes. During model
verification, by activating a marked GNN with the trigger ER graph, the
watermark can be reconstructed from the output to verify ownership. Since the
ER graph is randomly generated, feeding it to a non-marked GNN yields random
label predictions for the graph nodes, resulting in a low false alarm rate for
the proposed method. Experimental results have also shown that the performance
of a marked GNN on its original task is not impaired. Moreover, the method is
robust against model compression and fine-tuning, demonstrating its
superiority and applicability.
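The trigger generation and verification steps above can be sketched as follows; `make_er_trigger` and the 0.9 match threshold are illustrative assumptions rather than the paper's exact settings, and the GNN training itself is omitted.

```python
import random

def make_er_trigger(n, p, feat_dim, num_classes, seed=0):
    # Build an Erdos-Renyi (ER) graph with random node features and
    # random labels; the labels act as the secret watermark.
    rng = random.Random(seed)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < p]
    feats = [[rng.random() for _ in range(feat_dim)] for _ in range(n)]
    labels = [rng.randrange(num_classes) for _ in range(n)]
    return edges, feats, labels

def verify(predicted_labels, watermark, threshold=0.9):
    # A marked GNN reproduces the secret labels on the trigger nodes;
    # a non-marked GNN predicts them essentially at random.
    matches = sum(p == w for p, w in zip(predicted_labels, watermark))
    return matches / len(watermark) >= threshold
```

Because the trigger is random, an unrelated GNN matches the secret labels only by chance, which is what keeps the false alarm rate low.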
Ensemble Reversible Data Hiding
The conventional reversible data hiding (RDH) algorithms often consider the
host as a whole to embed a secret payload. In order to achieve satisfactory
rate-distortion performance, the secret bits are embedded into the noise-like
component of the host, such as prediction errors. From the rate-distortion
optimization point of view, this may not be optimal, since all data embedding
units use identical parameters. This motivates us to present a segmented data
embedding
strategy for efficient RDH in this paper, in which the raw host could be
partitioned into multiple subhosts such that each one can freely optimize and
use its own data embedding parameters. Moreover, this enables us to apply
different RDH algorithms to different subhosts, which we define as an
ensemble. Notice that the ensemble defined here differs from the notion of an
ensemble in machine learning.
Accordingly, the conventional operation corresponds to a special case of the
proposed work. Since it is a general strategy, we combine some state-of-the-art
algorithms to construct a new system using the proposed embedding strategy to
evaluate the rate-distortion performance. Experimental results have shown that
the ensemble RDH system outperforms the original versions in most cases,
demonstrating its superiority and applicability.
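The segmented strategy can be sketched as below. The embedder interface (a subhost and the remaining payload in, a marked subhost and a consumed-bit count out) is a hypothetical abstraction; any concrete RDH algorithm with its own parameters can sit behind it, and a single embedder with one partition recovers the conventional whole-host scheme.

```python
def partition(host, k):
    # Split the host sequence into k subhosts (equal-size chunks here;
    # the strategy allows arbitrary partitions).
    size = (len(host) + k - 1) // k
    return [host[i:i + size] for i in range(0, len(host), size)]

def ensemble_embed(host, payload, embedders):
    # Pair each subhost with its own RDH algorithm/parameters and embed
    # independently, concatenating the marked subhosts afterwards.
    marked, used = [], 0
    for sub, embed in zip(partition(host, len(embedders)), embedders):
        out, consumed = embed(sub, payload[used:])
        marked.extend(out)
        used += consumed
    return marked, used
```

Each subhost is free to choose its own parameters (or a different RDH algorithm entirely), which is where the rate-distortion gain over whole-host embedding comes from.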
Reversible Image Watermarking Using Modified Quadratic Difference Expansion and Hybrid Optimization Technique
With increasing copyright violation cases, watermarking of digital images is a popular solution for securing online media content. Since some sensitive applications require image recovery after watermark extraction, reversible watermarking is widely preferred. This article introduces a Modified Quadratic Difference Expansion (MQDE) and fractal-encryption-based reversible watermarking scheme for securing the copyrights of images. First, fractal encryption is applied to watermarks using Tromino's L-shaped theorem to improve security. In addition, Cuckoo Search-Grey Wolf Optimization (CSGWO) is applied to the cover image to optimize block allocation for inserting the encrypted watermark, greatly increasing its invisibility. While the developed MQDE technique helps to improve coverage and visual quality, the novel data-driven distortion control unit ensures optimal performance. The suggested approach provides a high level of protection, retrieving both the secret image and the original cover image without losing essential information, while improving transparency and capacity without much trade-off. Simulation results show that this approach is superior to existing methods in terms of embedding capacity. With an average PSNR of 67 dB, the method also shows good imperceptibility in comparison to other schemes.
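For orientation, the classic (unmodified) integer difference expansion that quadratic variants such as MQDE build on can be sketched as follows; the specific MQDE mapping and the CSGWO block allocation are not reproduced here.

```python
def de_embed(x, y, bit):
    # Tian-style difference expansion on a pixel pair: keep the integer
    # average, double the difference, and hide one bit in its LSB.
    avg = (x + y) // 2
    h = 2 * (x - y) + bit
    return avg + (h + 1) // 2, avg - h // 2

def de_extract(x2, y2):
    # Invert the expansion: the integer average is unchanged by the
    # embedding, so the bit and the original pair are recovered exactly.
    avg = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1
    h = (h2 - bit) // 2
    return avg + (h + 1) // 2, avg - h // 2, bit
```

The scheme is reversible because the integer average of the pair is invariant under the expansion; practical systems must additionally skip pairs whose expanded difference would overflow the pixel range.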
Design and Analysis of Reversible Data Hiding Using Hybrid Cryptographic and Steganographic approaches for Multiple Images
Data hiding is the process of embedding some helpful information in images. Many sensitive applications, such as sending authentication data, benefit from data hiding. Reversible data hiding (RDH), also known as invertible or lossless data hiding in the field of signal processing, has been studied extensively. During the RDH process, a piece of data is embedded into an image to generate a watermarked image; the embedded data can later be recovered while the original image is restored exactly. Lossless data hiding is being investigated as a strong and popular way to protect copyright in many sensitive applications, such as law enforcement, medical diagnostics, and remote sensing. Watermarking algorithms fall into two types: visible and invisible. A visible watermark must be bold and clearly apparent; to be utilized for invisible watermarking, the watermark must be robust and visually transparent. RDH creates a marked signal by encoding a piece of data into the host signal; once the embedded data has been recovered, the original signal may be accurately retrieved. For photos shot in poor illumination, visual quality is more important than a high PSNR value. The DH method increases the contrast of the host image while maintaining a high PSNR value. Histogram equalization may also be performed concurrently by repeating the embedding process in order to relocate the top two bins of the input image's histogram for data embedding. It is critical to assess the images after data hiding to see how much the contrast has increased. Common image quality assessments include peak signal-to-noise ratio (PSNR), relative structural similarity (RSS), relative mean brightness error (RMBE), relative entropy error (REE), relative contrast error (RCE), and global contrast factor (GCF). The main objective of this paper is to investigate the various quantitative metrics for evaluating contrast enhancement. 
The results show that visual quality may be preserved by embedding a sufficient number of message bits in the input photographs.
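Among the metrics listed, PSNR is the most widely reported; a minimal sketch over flat pixel lists (8-bit peak assumed):

```python
import math

def psnr(original, marked, peak=255):
    # Peak signal-to-noise ratio in dB between the host and the marked
    # image; identical images give infinite PSNR.
    mse = sum((a - b) ** 2 for a, b in zip(original, marked)) / len(original)
    if mse == 0:
        return float('inf')
    return 10 * math.log10(peak ** 2 / mse)
```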
Recent Advances in Signal Processing
Signal processing is a critical issue in the majority of new technological inventions and challenges across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand; these five categories address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.
Information Analysis for Steganography and Steganalysis in 3D Polygonal Meshes
Information hiding, which embeds a watermark/message over a cover signal, has recently found extensive applications in, for example, copyright protection, content authentication and covert communication. It has been widely considered as an appealing technology to complement conventional cryptographic processes in the field of multimedia security by embedding information into the signal being protected. Generally, information hiding can be classified into two categories: steganography and watermarking. While steganography attempts to embed as much information as possible into a cover signal, watermarking tries to emphasize the robustness of the embedded information at the expense of embedding capacity.
In contrast to information hiding, steganalysis aims at detecting whether a given medium has hidden message in it, and, if possible, recover that hidden message. It can be used to measure the security performance of information hiding techniques, meaning a steganalysis resistant steganographic/watermarking method should be imperceptible not only to Human Vision Systems (HVS), but also to intelligent analysis.
As yet, 3D information hiding and steganalysis have received relatively little attention compared to image information hiding, despite the proliferation of 3D computer graphics models, which are fairly promising information carriers. This thesis focuses on this relatively neglected research area and has the following primary objectives: 1) to investigate the trade-off between embedding capacity and distortion by considering the correlation between spatial and normal/curvature noise in triangle meshes; 2) to design satisfactory 3D steganographic algorithms, taking into account this trade-off; 3) to design robust 3D watermarking algorithms; 4) to propose a steganalysis framework for detecting the existence of hidden information in 3D models and introduce a universal 3D steganalytic method under this framework.
The thesis is organized as follows. Chapter 1 describes in detail the background relating to information hiding and steganalysis, as well as the research problems this thesis will be studying. Chapter 2 conducts a survey on the previous information hiding techniques for digital images, 3D models and other medium and also on image steganalysis algorithms.
Motivated by the observation that the knowledge of the spatial accuracy of the mesh vertices does not easily translate into information related to the accuracy of other visually important mesh attributes such as normals, Chapters 3 and 4 investigate the impact of modifying vertex coordinates of 3D triangle models on the mesh normals. Chapter 3 presents the results of an empirical investigation, whereas Chapter 4 presents the results of a theoretical study. Based on these results, a high-capacity 3D steganographic algorithm capable of controlling embedding distortion is also presented in Chapter 4.
In addition to normal information, several mesh interrogation, processing and rendering algorithms make direct or indirect use of curvature information. Motivated by this, Chapter 5 studies the relation between Discrete Gaussian Curvature (DGC) degradation and vertex coordinate modifications.
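The angle-deficit form of discrete Gaussian curvature that such a study works with can be sketched as follows; `fan` is assumed to list, for each triangle incident at the interior vertex `v`, the pair of its other two vertices.

```python
import math

def angle_at(v, a, b):
    # Interior angle of triangle (v, a, b) at vertex v.
    u = [a[i] - v[i] for i in range(3)]
    w = [b[i] - v[i] for i in range(3)]
    dot = sum(ui * wi for ui, wi in zip(u, w))
    norm = (math.sqrt(sum(ui * ui for ui in u))
            * math.sqrt(sum(wi * wi for wi in w)))
    return math.acos(dot / norm)

def discrete_gaussian_curvature(v, fan):
    # Angle deficit: 2*pi minus the sum of incident triangle angles;
    # zero for a locally flat vertex, positive for convex corners.
    return 2 * math.pi - sum(angle_at(v, a, b) for a, b in fan)
```

Perturbing vertex coordinates changes these incident angles, which is the mechanism by which embedding distortion degrades DGC.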
Chapter 6 proposes a robust watermarking algorithm for 3D polygonal models, based on modifying the histogram of the distances from the model vertices to a point in 3D space. That point is determined by applying Principal Component Analysis (PCA) to the cover model. The use of PCA makes the watermarking method robust against common 3D operations, such as rotation, translation and vertex reordering. In addition, Chapter 6 develops a 3D specific steganalytic algorithm to detect the existence of the hidden messages embedded by one well-known watermarking method. By contrast, the focus of Chapter 7 will be on developing a 3D watermarking algorithm that is resistant to mesh editing or deformation attacks that change the global shape of the mesh.
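The invariance argument can be illustrated with a small sketch. The centroid used below is a hypothetical stand-in for the chapter's PCA-derived reference point; either choice moves rigidly with the mesh, so the vertex-to-point distances, and hence their histogram, survive rotation, translation and vertex reordering.

```python
import math

def reference_point(vertices):
    # Stand-in for the PCA-derived point: the mesh centroid.
    n = len(vertices)
    return tuple(sum(v[i] for v in vertices) / n for i in range(3))

def distance_histogram(vertices, bins=8):
    # Histogram of vertex-to-reference-point distances, the quantity
    # the watermark modulates.
    c = reference_point(vertices)
    dists = [math.dist(v, c) for v in vertices]
    lo, hi = min(dists), max(dists)
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range
    hist = [0] * bins
    for d in dists:
        hist[min(int((d - lo) / width), bins - 1)] += 1
    return hist
```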
By adopting a framework which has been successfully developed for image steganalysis, Chapter 8 designs a 3D steganalysis method to detect the existence of messages hidden in 3D models by existing steganographic and watermarking algorithms. The efficiency of this steganalytic algorithm has been evaluated on five state-of-the-art 3D watermarking/steganographic methods. Moreover, being a universal steganalytic algorithm, it can be used as a benchmark for measuring the anti-steganalysis performance of other existing and, most importantly, future watermarking/steganographic algorithms.
Chapter 9 concludes this thesis and also suggests some potential directions for future work.