6 research outputs found
Image disentanglement autoencoder for steganography without embedding
Conventional steganography approaches embed a secret message into a carrier for concealed communication but are prone to attack by recent advanced steganalysis tools. In this paper, we propose Image DisEntanglement Autoencoder for Steganography (IDEAS) as a novel steganography without embedding (SWE) technique. Instead of directly embedding the secret message into a carrier image, our approach hides it by transforming it into a synthesised image, and is thus fundamentally immune to typical steganalysis attacks. By disentangling an image into two representations for structure and texture, we exploit the stability of the structure representation to improve secret message extraction, while increasing synthesis diversity by randomising the texture representation to enhance steganography security. In addition, we design an adaptive mapping mechanism to further enhance the diversity of synthesised images while ensuring different required extraction levels. Experimental results convincingly demonstrate that IDEAS achieves superior performance in terms of enhanced security, reliable secret message extraction and flexible adaptation for different extraction levels, compared to state-of-the-art SWE methods.
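
Reading the abstract, the structure code is what carries the secret message while the texture code is randomised to diversify the synthesised container. A minimal PyTorch sketch along those lines is given below; the layer sizes, the bits-to-structure mapping and the thresholded extraction are illustrative assumptions, not the IDEAS architecture.

```python
# Minimal sketch of structure/texture-based synthesis for SWE (illustrative only).
import torch
import torch.nn as nn

class DisentangleAE(nn.Module):
    def __init__(self, struct_dim=64, texture_dim=64):
        super().__init__()
        # Encoder for the "structure" representation (assumed to carry the message).
        self.struct_enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, struct_dim))
        # Decoder synthesising an image from concatenated structure and texture codes.
        self.dec = nn.Sequential(
            nn.Linear(struct_dim + texture_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())

    def synthesise(self, message_bits, texture_code):
        # Hypothetical mapping: the secret bits form the (stable) structure code,
        # while a random texture code provides synthesis diversity.
        struct_code = message_bits * 2.0 - 1.0              # {0,1} -> {-1,+1}
        return self.dec(torch.cat([struct_code, texture_code], dim=1))

    def extract(self, image):
        # After training, re-encoding the synthesised image and thresholding the
        # structure code is intended to recover the message bits.
        return (self.struct_enc(image) > 0).float()

model = DisentangleAE()
bits = torch.randint(0, 2, (1, 64)).float()                 # secret message
stego = model.synthesise(bits, torch.randn(1, 64))          # synthesised container image
print(stego.shape, model.extract(stego).shape)              # (1, 3, 32, 32) and (1, 64)
```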
Robust steganography without embedding based on secure container synthesis and iterative message recovery
Synthesis-based steganography without embedding (SWE) methods transform secret messages into container images synthesised by generative networks, which eliminates distortions of container images and can thus fundamentally resist typical steganalysis tools. However, existing methods suffer from weak message recovery robustness, limited synthesis fidelity and the risk of message leakage. To address these problems, we propose a novel robust steganography without embedding method in this paper. In particular, we design a secure weight modulation-based generator that introduces secure factors to hide secret messages in synthesised container images. In this manner, the synthesised results are modulated by secure factors, so the secret messages are inaccessible when fake factors are used, reducing the risk of message leakage. Furthermore, we design a difference predictor, via the reconstruction of tampered container images together with an adversarial training strategy, to iteratively update the estimation of hidden messages. This ensures robust recovery of hidden messages, while degradation of synthesis fidelity is reduced because the generator is not included in the adversarial training. Extensive experimental results convincingly demonstrate that our proposed method is effective in avoiding message leakage and superior to other existing methods in terms of recovery robustness and synthesis fidelity.
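
The key idea here is that the generator's weights are modulated by secure factors, so synthesis (and hence message recovery) only behaves as intended when the correct factor is supplied. The following is a minimal sketch of such weight modulation, not the paper's generator; the key-to-factor derivation and layer shapes are assumptions.

```python
# Minimal sketch of secure-factor weight modulation (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulatedLinear(nn.Module):
    """Linear layer whose weight columns are scaled by a per-channel secure factor."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.02)

    def forward(self, x, secure_factor):
        # Modulate the weight matrix with the secure factor before applying it.
        w = self.weight * secure_factor.unsqueeze(0)        # (out_dim, in_dim)
        return F.linear(x, w)

def factor_from_key(key: int, dim: int) -> torch.Tensor:
    # Hypothetical derivation of a secure factor from a shared key.
    g = torch.Generator().manual_seed(key)
    return torch.rand(dim, generator=g) + 0.5               # strictly positive scales

layer = ModulatedLinear(64, 128)
message = torch.randn(1, 64)                                # message-conditioned latent
out_true = layer(message, factor_from_key(1234, 64))        # correct secure factor
out_fake = layer(message, factor_from_key(9999, 64))        # fake factor gives a different synthesis
print(torch.allclose(out_true, out_fake))                   # False
```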
Micro-expression video clip synthesis method based on spatial-temporal statistical model and motion intensity evaluation function
Micro-expression (ME) recognition is an effective method to detect lies and other subtle human emotions. Machine learning-based and deep learning-based models have achieved remarkable results recently. However, these models are vulnerable to overfitting due to the scarcity of ME video clips, which are much harder to collect and annotate than normal expression video clips, thus limiting further improvement in recognition performance. To address this issue, we propose a micro-expression video clip synthesis method based on a spatial-temporal statistical model and a motion intensity evaluation function in this paper. In our proposed scheme, we establish a micro-expression spatial and temporal statistical model (MSTSM) by analyzing the dynamic characteristics of micro-expressions and deploy this model to provide the rules for micro-expression video synthesis. In addition, we design a motion intensity evaluation function (MIEF) to ensure that the intensity of facial expressions in the synthesized video clips is consistent with that in real MEs. Finally, facial video clips with MEs of new subjects can be generated by deploying the MIEF together with the widely-used 3D facial morphable model and the rules provided by the MSTSM. The experimental results demonstrate that the accuracy of micro-expression recognition can be effectively improved by adding the synthesized video clips generated by our proposed method.
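
As a rough illustration of what a motion intensity evaluation function can look like, the sketch below scores a clip by its mean optical-flow magnitude and accepts a synthesized clip only if that score is close to real MEs. This is an assumed proxy, not the MIEF defined in the paper; the tolerance value is also an assumption.

```python
# Illustrative motion-intensity check for synthesized micro-expression clips.
import cv2
import numpy as np

def motion_intensity(frames):
    """frames: list of >=2 greyscale uint8 frames (H, W). Returns mean flow magnitude."""
    mags = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=2).mean())
    return float(np.mean(mags))

def intensity_consistent(synth_frames, real_frames, tol=0.2):
    # Accept a synthesized clip only if its motion intensity is close to real MEs.
    return abs(motion_intensity(synth_frames) - motion_intensity(real_frames)) <= tol
```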
Joint compressive autoencoders for full-image-to-image hiding
Image hiding has received significant attention due to the need for enhanced multimedia services, such as multimedia security and meta-information embedding for multimedia augmentation. Recently, deep learning-based methods have been introduced that are capable of significantly increasing the hidden capacity and supporting full-size image hiding. However, these methods suffer from the necessity to balance the errors of the modified cover image and the recovered hidden image. In this paper, we propose a novel joint compressive autoencoder (J-CAE) framework to design an image hiding algorithm that achieves full-size image hidden capacity with small reconstruction errors of the hidden image. More importantly, it addresses the trade-off problem of previous deep learning-based methods by mapping the image representations in the latent spaces of the joint CAE models. Thus, both the visual quality of the container image and the recovery quality of the hidden image can be improved simultaneously. Extensive experimental results demonstrate that our proposed framework outperforms several state-of-the-art deep learning-based image hiding methods in terms of imperceptibility and recovery quality of the hidden images while maintaining full-size image hidden capacity.
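
To make the latent-space mapping idea concrete, here is a minimal joint-autoencoder hiding sketch in PyTorch: the cover and secret images are encoded separately, their latents are fused back into the cover latent space, and separate decoders produce the container and the recovered secret. Channel counts and the fusion layer are illustrative assumptions rather than the paper's J-CAE design.

```python
# Minimal sketch of joint-autoencoder full-image hiding (illustrative only).
import torch
import torch.nn as nn

def conv_encoder():
    return nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
                         nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU())

def conv_decoder():
    return nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                         nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid())

class JointHider(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_cover, self.enc_secret = conv_encoder(), conv_encoder()
        self.map = nn.Conv2d(128, 64, 1)     # maps the joint latent back to the cover latent space
        self.dec_container = conv_decoder()  # produces the container image
        self.dec_reveal = nn.Sequential(conv_encoder(), conv_decoder())  # recovers the secret

    def hide(self, cover, secret):
        z = self.map(torch.cat([self.enc_cover(cover), self.enc_secret(secret)], dim=1))
        return self.dec_container(z)

    def reveal(self, container):
        return self.dec_reveal(container)

model = JointHider()
cover, secret = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
container = model.hide(cover, secret)        # same size as the cover image
recovered = model.reveal(container)          # full-size recovered secret (after training)
print(container.shape, recovered.shape)      # both torch.Size([1, 3, 64, 64])
```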
Hiding multiple images into a single image via joint compressive autoencoders
Interest in image hiding has been continually growing. Recently, deep learning-based image hiding approaches have improved the hidden capacity significantly. However, the major challenge of the existing methods is that it is difficult to balance the errors of the modified cover image against those of the recovered secret image. To solve this problem, in this paper, we develop an image hiding algorithm based on a joint compressive autoencoder framework. Further, we propose a novel strategy to enlarge the hidden capacity, i.e., hiding multiple images in one container image. Specifically, our approach provides an extremely high image hidden capacity coupled with small reconstruction errors of the secret images. More importantly, we tackle the trade-off problem of earlier approaches by mapping the image representations in the latent spaces of the joint compressive autoencoder models, leading to both high visual quality of the container image and low reconstruction error of the secret images. In an extensive set of experiments, we confirm that our proposed approach outperforms several state-of-the-art image hiding methods, yielding high imperceptibility and steganalysis resistance of the container images together with high recovery quality of the secret images, while improving the image hidden capacity significantly (four times higher than full-image hiding capacity).
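
The multi-image extension can be pictured as the same latent fusion as in the previous sketch, but with several secret latents concatenated before mapping back to the cover latent space; the short sketch below shows only that change (channel counts are assumptions).

```python
# Illustrative multi-secret latent fusion for hiding several images in one container.
import torch
import torch.nn as nn

n_secrets = 4
enc_out_channels = 64
# 1x1 convolution fusing the cover latent with n_secrets secret latents.
fuse = nn.Conv2d(enc_out_channels * (1 + n_secrets), enc_out_channels, 1)

cover_z = torch.rand(1, enc_out_channels, 16, 16)
secret_zs = [torch.rand(1, enc_out_channels, 16, 16) for _ in range(n_secrets)]
joint_z = fuse(torch.cat([cover_z] + secret_zs, dim=1))   # back to the cover latent shape
print(joint_z.shape)                                      # torch.Size([1, 64, 16, 16])
```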
Camouflage generative adversarial network: Coverless full-image-to-image hiding
Image hiding, one of the most important data hiding techniques, is widely used to enhance cybersecurity when transmitting multimedia data. In recent years, deep learning-based image hiding algorithms have been designed to improve the embedding capacity whilst maintaining sufficient imperceptibility to malicious eavesdroppers. These methods can hide a full-size secret image into a cover image, thus allowing full-image-to-image hiding. However, these methods suffer from a trade-off challenge of balancing the possibility of detection from the container image against the recovery quality of the secret image. In this paper, we propose Camouflage Generative Adversarial Network (Cam-GAN), a novel two-stage coverless full-image-to-image hiding method, to tackle this problem. Our method offers a hiding solution through image synthesis that avoids using a modified cover image as the image hiding container, thus enhancing both image hiding imperceptibility and the recovery quality of secret images. Our experimental results demonstrate that Cam-GAN outperforms state-of-the-art full-image-to-image hiding algorithms in both aspects.
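
Because the container is synthesised rather than obtained by modifying a cover image, training couples a generator, an extractor and a discriminator. The sketch below shows one such adversarial training step as a single-stage simplification; it is not Cam-GAN's two-stage pipeline, and all network shapes are assumptions.

```python
# Illustrative adversarial training step for coverless image-to-image hiding.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Conv2d(3, 32, 3, 1, 1), nn.ReLU(),
                    nn.Conv2d(32, 3, 3, 1, 1), nn.Sigmoid())   # secret -> container
ext = nn.Sequential(nn.Conv2d(3, 32, 3, 1, 1), nn.ReLU(),
                    nn.Conv2d(32, 3, 3, 1, 1), nn.Sigmoid())   # container -> secret
disc = nn.Sequential(nn.Conv2d(3, 16, 4, 2, 1), nn.ReLU(),
                     nn.Flatten(), nn.LazyLinear(1))            # natural-image critic
opt = torch.optim.Adam(list(gen.parameters()) + list(ext.parameters()), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

secret = torch.rand(4, 3, 64, 64)
opt.zero_grad()
container = gen(secret)                              # synthesised container (no cover image)
recovered = ext(container)                           # recovered secret
adv_loss = bce(disc(container), torch.ones(4, 1))    # fool the discriminator (its own update is omitted)
rec_loss = nn.functional.mse_loss(recovered, secret) # recovery quality of the secret
(adv_loss + rec_loss).backward()
opt.step()
```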