Comparative Performance Analysis Of Deep Learning-Based Image Steganography Using U-Net, V-Net, And U-Net++ Encoders
Digital image steganography is the practice of hiding information in a cover image so that it cannot be detected or recovered by unintended parties. Methods fall into three main categories: spatial methods, which alter pixel values to embed information; transform methods, which embed information by modifying the image's frequency coefficients; and neural-network-based methods, the category to which the proposed approach belongs. This study investigates how deep convolutional neural networks (CNNs) can be applied to digital image steganography. With growing concerns about data infringement during transmission and storage, image steganography techniques have gained attention for hiding secret information within cover images. Traditional methods suffer from limitations such as low embedding capacity and poor reconstruction quality, and deep learning-based approaches have been proposed to address these challenges. Among them, the CNN-based U-Net encoder has been studied extensively; however, its performance relative to other CNN-based encoders such as V-Net and U-Net++ remains unexplored in the context of image steganography.
In this paper, we implement V-Net and U-Net++ encoders for image steganography and conduct a comprehensive performance assessment alongside the U-Net architecture. Each architecture is used to conceal a secret image within a cover image, and a unified, robust decoder is designed to extract the hidden information. Through experimental evaluations, we compare the embedding capacity, stego quality, and reconstruction quality of the three architectures. U-Net outperforms V-Net and U-Net++ in embedding capacity and in the quality of both the stego and the reconstructed secret images. This research provides valuable insights into the effectiveness of different deep learning-based encoders for image steganography, aiding in the selection of appropriate architectures for securing digital images against unauthorized access.
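The stego-quality and reconstruction-quality comparison above relies on standard fidelity metrics. As an illustrative sketch (not the paper's code), the usual metric, PSNR between a cover image and its stego counterpart, could be computed in NumPy as follows; the toy images and noise level are assumptions for demonstration:

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a cover image and a "stego" image perturbed by small noise,
# standing in for the embedding distortion an encoder introduces.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)
stego = cover + rng.normal(0.0, 2.0, size=cover.shape)
print(round(psnr(cover, stego), 1))
```

Higher PSNR between cover and stego means a less perceptible embedding; the same metric applied to the secret image and its reconstruction measures decoding quality.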
 
Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective
The booming interest in adversarial attacks stems from a misalignment between
human vision and a deep neural network (DNN), i.e. a human imperceptible
perturbation fools the DNN. Moreover, a single perturbation, often called
universal adversarial perturbation (UAP), can be generated to fool the DNN for
most images. A similar misalignment phenomenon has recently also been observed
in the deep steganography task, where a decoder network can retrieve a secret
image back from a slightly perturbed cover image. We attempt explaining the
success of both in a unified manner from the Fourier perspective. We perform
task-specific and joint analysis and reveal that (a) frequency is a key factor
that influences their performance based on the proposed entropy metric for
quantifying the frequency distribution; (b) their success can be attributed to
a DNN being highly sensitive to high-frequency content. We also perform feature
layer analysis for providing deep insight on model generalization and
robustness. Additionally, we propose two new variants of universal
perturbations: (1) Universal Secret Adversarial Perturbation (USAP) that
simultaneously achieves attack and hiding; (2) high-pass UAP (HP-UAP) that is
less visible to the human eye.
Comment: Accepted to AAAI 202
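The abstract does not spell out the proposed entropy metric, so the following is only a hedged illustration of one natural choice: Shannon entropy of the normalized 2-D Fourier magnitude spectrum, which is low when spectral energy concentrates in few frequencies and high when it spreads out:

```python
import numpy as np

def spectral_entropy(image: np.ndarray) -> float:
    """Shannon entropy (bits) of the normalized 2-D Fourier magnitude spectrum."""
    spectrum = np.abs(np.fft.fft2(image.astype(np.float64)))
    p = spectrum / spectrum.sum()   # treat magnitudes as a probability distribution
    p = p[p > 0]                    # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
flat = np.full((32, 32), 128.0)         # constant image: energy in the DC bin only
noisy = rng.uniform(0, 255, (32, 32))   # white noise: energy spread across bins
print(spectral_entropy(flat) < spectral_entropy(noisy))  # prints True
```

Under this definition, an image dominated by high-frequency noise scores much higher than a smooth one, which matches the abstract's framing of frequency distribution as a key factor.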
InvVis: Large-Scale Data Embedding for Invertible Visualization
We present InvVis, a new approach for invertible visualization, i.e.,
reconstructing or further modifying a visualization from an image. InvVis
allows the embedding of a significant amount of data, such as chart data, chart
information, source code, etc., into visualization images. The encoded image is
perceptually indistinguishable from the original one. We propose a new method
to efficiently express chart data in the form of images, enabling
large-capacity data embedding. We also outline a model based on the invertible
neural network to achieve high-quality data concealing and revealing. We
explore and implement a variety of application scenarios of InvVis.
Additionally, we conduct a series of evaluation experiments to assess our
method from multiple perspectives, including data embedding quality, data
restoration accuracy, and data encoding capacity. The results of our
experiments demonstrate the great potential of InvVis in invertible
visualization.
Comment: IEEE VIS 202
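As a minimal sketch of the general idea of expressing chart data in image form (not InvVis's actual encoding scheme), one could quantize values to 16 bits and split them across two 8-bit channels; the round trip is then exact up to quantization error. All function names here are illustrative:

```python
import numpy as np

def data_to_image(values: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Quantize floats in [lo, hi] to 16 bits, split across two 8-bit channels."""
    q = np.round((values - lo) / (hi - lo) * 65535).astype(np.uint16)
    return np.stack([(q >> 8).astype(np.uint8), (q & 0xFF).astype(np.uint8)], axis=-1)

def image_to_data(img: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Invert data_to_image: recombine the two channels and rescale."""
    q = (img[..., 0].astype(np.uint16) << 8) | img[..., 1].astype(np.uint16)
    return q.astype(np.float64) / 65535 * (hi - lo) + lo

chart = np.array([[0.0, 12.5], [99.9, 100.0]])       # toy chart data in [0, 100]
img = data_to_image(chart, 0.0, 100.0)
restored = image_to_data(img, 0.0, 100.0)
print(np.max(np.abs(restored - chart)) < 1e-2)       # quantization error is tiny
```

The worst-case error of this sketch is half a quantization step, i.e. (hi - lo) / 131070, which is what makes the visualization "invertible" in the restoration-accuracy sense the abstract evaluates.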
THInImg: Cross-modal Steganography for Presenting Talking Heads in Images
Cross-modal Steganography is the practice of concealing secret signals in
publicly available cover signals (distinct from the modality of the secret
signals) unobtrusively. While previous approaches primarily concentrated on
concealing a relatively small amount of information, we propose THInImg, which
manages to hide lengthy audio data (and subsequently decode talking head video)
inside an identity image by leveraging the properties of the human face, which
can be effectively utilized for covert communication, transmission, and
copyright protection. THInImg consists of two parts: an encoder and a decoder.
Inside the encoder-decoder pipeline, we introduce a novel architecture that
substantially increases the capacity of hiding audio in images. Moreover, our framework can be
extended to iteratively hide multiple audio clips into an identity image,
offering multiple levels of control over permissions. We conduct extensive
experiments to prove the effectiveness of our method, demonstrating that
THInImg can present up to 80 seconds of high quality talking-head video
(including audio) in an identity image at 160x160 resolution.
Comment: Accepted at WACV 202
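THInImg hides audio with a learned encoder-decoder, not fixed-bit embedding; purely for intuition about what "hiding a bitstream in a 160x160 image" means, a naive least-significant-bit baseline is sketched below. Everything here (names, payload size) is an illustrative assumption, not the paper's method:

```python
import numpy as np

def embed_bits(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Overwrite the least-significant bit of each channel value with a payload bit."""
    flat = cover.flatten()               # flatten() already returns a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_bits(stego: np.ndarray, n: int) -> np.ndarray:
    """Read back the first n least-significant bits."""
    return stego.flatten()[:n] & 1

rng = np.random.default_rng(2)
cover = rng.integers(0, 256, size=(160, 160, 3), dtype=np.uint8)
payload = rng.integers(0, 2, size=10_000, dtype=np.uint8)   # e.g. audio bits
stego = embed_bits(cover, payload)
print(np.array_equal(extract_bits(stego, payload.size), payload))  # prints True
print(np.max(np.abs(stego.astype(int) - cover.astype(int))))       # at most 1
```

Such a baseline caps out at 160 * 160 * 3 = 76,800 bits per image and is fragile to any distortion, which is precisely the capacity and robustness gap that learned architectures like THInImg's aim to close.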
Learning Iterative Neural Optimizers for Image Steganography
Image steganography is the process of concealing secret information in images
through imperceptible changes. Recent work has formulated this task as a
classic constrained optimization problem. In this paper, we argue that image
steganography is inherently performed on the (elusive) manifold of natural
images, and propose an iterative neural network trained to perform the
optimization steps. In contrast to classical optimization methods like L-BFGS
or projected gradient descent, we train the neural network to also stay close
to the manifold of natural images throughout the optimization. We show that our
learned neural optimization is faster and more reliable than classical
optimization approaches. In comparison to previous state-of-the-art
encoder-decoder-based steganography methods, it reduces the recovery error rate
by multiple orders of magnitude and achieves zero error up to 3 bits per pixel
(bpp) without the need for error-correcting codes.
Comment: International Conference on Learning Representations (ICLR) 202
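As a hedged sketch of the classical baseline this paper contrasts against (not the learned optimizer itself), projected gradient descent can solve a toy constrained steganography problem: drive a fixed linear "decoder" toward the target message signs while keeping the perturbation inside an L-infinity ball. The decoder, loss, and dimensions are all assumptions for illustration:

```python
import numpy as np

# Toy "decoder": recovers message bits as sign(W @ delta) for a perturbation delta.
rng = np.random.default_rng(3)
d, m = 64, 8                            # perturbation dimension, message bits
W = rng.normal(size=(m, d))
msg = rng.choice([-1.0, 1.0], size=m)   # target message in {-1, +1}

def loss_grad(delta):
    """Hinge-style loss pushing each decoder output past margin 1 toward msg."""
    out = W @ delta
    margin = 1.0 - msg * out
    active = (margin > 0).astype(float)
    loss = np.sum(active * margin)
    grad = -(active * msg) @ W
    return loss, grad

eps, lr = 0.05, 0.01                    # L-infinity budget and step size
delta = np.zeros(d)
for _ in range(500):                    # projected gradient descent
    _, g = loss_grad(delta)
    delta = np.clip(delta - lr * g, -eps, eps)   # project onto the L-inf ball

decoded = np.sign(W @ delta)
print(np.mean(decoded == msg))          # fraction of message bits recovered
```

The projection step is what keeps the perturbation imperceptible; the paper's contribution is replacing this fixed iteration with a trained network that also steers the iterates toward the natural-image manifold.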
CopyRNeRF: Protecting the CopyRight of Neural Radiance Fields
Neural Radiance Fields (NeRF) have the potential to be a major representation
of media. Since training a NeRF has never been an easy task, the protection of
its model copyright should be a priority. In this paper, by analyzing the pros
and cons of possible copyright protection solutions, we propose to protect the
copyright of NeRF models by replacing the original color representation in NeRF
with a watermarked color representation. Then, a distortion-resistant rendering
scheme is designed to guarantee robust message extraction in 2D renderings of
NeRF. Our proposed method can directly protect the copyright of NeRF models
while maintaining high rendering quality and bit accuracy when compared with
alternative solutions.
Comment: 11 pages, 6 figures, accepted by ICCV 2023, non-camera-ready version
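The bit accuracy reported above is simply the fraction of watermark bits recovered from a rendering. As a small illustrative sketch (not CopyRNeRF's pipeline), with a simulated noisy extraction:

```python
import numpy as np

def bit_accuracy(embedded: np.ndarray, extracted: np.ndarray) -> float:
    """Fraction of watermark bits recovered correctly from a rendering."""
    return float(np.mean(embedded == extracted))

rng = np.random.default_rng(4)
message = rng.integers(0, 2, size=48)     # hypothetical 48-bit copyright message
flips = rng.random(48) < 0.05             # simulate ~5% extraction noise
extracted = np.where(flips, 1 - message, message)
print(bit_accuracy(message, extracted))
```

A distortion-resistant rendering scheme, in these terms, is one that keeps this fraction near 1.0 even when the 2D renderings are degraded.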