3 research outputs found

    Micro-expression video clip synthesis method based on spatial-temporal statistical model and motion intensity evaluation function

    No full text
    Micro-expression (ME) recognition is an effective method for detecting lies and other subtle human emotions. Machine learning-based and deep learning-based models have recently achieved remarkable results. However, these models are vulnerable to overfitting because of the scarcity of ME video clips, which are much harder to collect and annotate than normal-expression video clips, limiting further improvement in recognition performance. To address this issue, we propose a micro-expression video clip synthesis method based on a spatial-temporal statistical model and a motion intensity evaluation function. In our proposed scheme, we establish a micro-expression spatial-temporal statistical model (MSTSM) by analyzing the dynamic characteristics of micro-expressions and deploy this model to provide the rules for micro-expression video synthesis. In addition, we design a motion intensity evaluation function (MIEF) to ensure that the intensity of facial expression in the synthesized video clips is consistent with that in real MEs. Finally, facial video clips with MEs of new subjects can be generated by deploying the MIEF together with the widely used 3D facial morphable model and the rules provided by the MSTSM. The experimental results demonstrate that the accuracy of micro-expression recognition can be effectively improved by adding the synthesized video clips generated by our proposed method.
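The abstract does not give the MIEF's actual formulation, but the role it plays (gating synthesized clips by motion intensity) can be sketched. In this hypothetical toy, intensity is approximated as the mean absolute inter-frame displacement of facial landmark coordinates, and a synthesized clip is accepted only if its intensity falls within a tolerance band around the intensities of real ME clips; all function names and the tolerance scheme are illustrative assumptions, not the paper's method.

```python
def motion_intensity(frames):
    """Mean absolute per-coordinate displacement between consecutive frames.

    frames: list of flat landmark-coordinate lists, one list per frame.
    (Stand-in for the paper's motion intensity measure.)
    """
    if len(frames) < 2:
        return 0.0
    total = 0.0
    for prev, curr in zip(frames, frames[1:]):
        total += sum(abs(c - p) for p, c in zip(prev, curr)) / len(curr)
    return total / (len(frames) - 1)


def within_real_me_range(synth_frames, real_clips, tol=0.5):
    """Accept a synthesized clip only if its motion intensity lies within
    a tolerance band around the intensities observed in real ME clips."""
    real = [motion_intensity(clip) for clip in real_clips]
    lo, hi = min(real) * (1 - tol), max(real) * (1 + tol)
    return lo <= motion_intensity(synth_frames) <= hi
```

A synthesis loop would call `within_real_me_range` on each candidate clip produced by the 3D morphable model and discard clips whose motion is implausibly weak or strong for a micro-expression.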

    Joint compressive autoencoders for full-image-to-image hiding

    No full text
    Image hiding has received significant attention due to the need for enhanced multimedia services, such as multimedia security and meta-information embedding for multimedia augmentation. Recently, deep learning-based methods have been introduced that can significantly increase the hidden capacity and support full-size image hiding. However, these methods must balance the error of the modified cover image against that of the recovered hidden image. In this paper, we propose a novel joint compressive autoencoder (J-CAE) framework to design an image hiding algorithm that achieves full-size image hidden capacity with small reconstruction errors of the hidden image. More importantly, it addresses the trade-off problem of previous deep learning-based methods by mapping the image representations between the latent spaces of the joint CAE models. Thus, both the visual quality of the container image and the recovery quality of the hidden image can be improved simultaneously. Extensive experimental results demonstrate that our proposed framework outperforms several state-of-the-art deep learning-based image hiding methods in terms of imperceptibility and recovery quality of the hidden images while maintaining full-size image hidden capacity.
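The data flow described above (encode the secret, map its representation into the cover's latent space, decode a container) can be illustrated with a deliberately tiny stand-in. The real J-CAE uses learned deep encoders and decoders and does not need the cover at extraction time; here the encoder/decoder are fixed linear maps and the extractor is given the cover, purely so the pipeline runs end to end. Every function below is a hypothetical stand-in, not the paper's architecture.

```python
def encode(x, scale=0.5):
    # Stand-in compressive encoder (a learned network in J-CAE).
    return [v * scale for v in x]


def decode(z, scale=0.5):
    # Stand-in decoder, exact inverse of encode here.
    return [v / scale for v in z]


def latent_map(z_secret, z_cover, alpha=0.1):
    # Embed the secret's latent code into the cover's latent code;
    # alpha trades container distortion against recoverability.
    return [c + alpha * s for c, s in zip(z_cover, z_secret)]


def latent_unmap(z_container, z_cover, alpha=0.1):
    return [(t - c) / alpha for t, c in zip(z_container, z_cover)]


def hide(cover, secret):
    z_container = latent_map(encode(secret), encode(cover))
    return decode(z_container)


def reveal(container, cover):
    z_secret = latent_unmap(encode(container), encode(cover))
    return decode(z_secret)
```

With a small `alpha` the container stays close to the cover while the secret remains recoverable, which is the trade-off the latent-space mapping is meant to relax.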

    Camouflage generative adversarial network: Coverless full-image-to-image hiding

    No full text
    Image hiding, one of the most important data hiding techniques, is widely used to enhance cybersecurity when transmitting multimedia data. In recent years, deep learning-based image hiding algorithms have been designed to improve the embedding capacity while maintaining sufficient imperceptibility to malicious eavesdroppers. These methods can hide a full-size secret image in a cover image, thus allowing full-image-to-image hiding. However, they face a trade-off between the detectability of the container image and the recovery quality of the secret image. In this paper, we propose the Camouflage Generative Adversarial Network (Cam-GAN), a novel two-stage coverless full-image-to-image hiding method, to tackle this problem. Our method hides through image synthesis, avoiding the use of a modified cover image as the hiding container and thus enhancing both the imperceptibility of the hiding and the recovery quality of the secret images. Our experimental results demonstrate that Cam-GAN outperforms state-of-the-art full-image-to-image hiding algorithms on both aspects.
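The coverless idea can be contrasted with cover-modification schemes in a minimal sketch: instead of perturbing an existing cover, a generator synthesizes the container directly from the secret, and an extractor inverts it. The invertible affine pair below is a hypothetical stand-in for Cam-GAN's learned generator and extractor; the constants and function names are illustrative assumptions only.

```python
GAIN, BIAS = 2.0, 0.25  # stand-in "generator" parameters


def generate_container(secret):
    # "Synthesize" a container from the secret alone: no cover image
    # is modified, so there is no cover/container residual to detect.
    return [GAIN * v + BIAS for v in secret]


def extract_secret(container):
    # Stand-in extractor: exact inverse of the toy generator.
    return [(v - BIAS) / GAIN for v in container]
```

The point of the sketch is structural: because no cover is perturbed, steganalysis cannot compare container against cover, which is how coverless synthesis sidesteps the detectability/recovery trade-off described above.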