
    Discriminative and robust zero-watermarking scheme based on completed local binary pattern for authentication and copyright identification of medical images

    Authentication and copyright identification are two critical security issues for medical images. Although zero-watermarking schemes can provide durable, reliable and distortion-free protection for medical images, existing zero-watermarking schemes for medical images still face two problems. On the one hand, they rarely consider distinguishability between medical images, which is critical because different medical images are sometimes similar to each other. On the other hand, their robustness against geometric attacks, such as cropping, rotation and flipping, is insufficient. In this study, a novel discriminative and robust zero-watermarking (DRZW) scheme is proposed to address these two problems. In DRZW, content-based features of medical images are first extracted with the completed local binary pattern (CLBP) operator to ensure distinguishability and robustness, especially against geometric attacks. Then, master shares and ownership shares are generated from the content-based features and the watermark according to (2,2) visual cryptography. Finally, the ownership shares are stored for authentication and copyright identification. For queried medical images, their content-based features are extracted and master shares are generated. Their watermarks for authentication and copyright identification are recovered by stacking the generated master shares and the stored ownership shares. 200 medical images of 5 types are collected as the testing data, and the experimental results demonstrate that DRZW ensures both the accuracy and reliability of authentication and copyright identification. With the false positive rate fixed at 1.00%, the average false negative rate of DRZW is only 1.75% under 20 common attacks with different parameters.
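    The share generation and stacking steps can be illustrated with a minimal sketch (not the paper's CLBP pipeline): it assumes an XOR-style (2,2) construction over a hypothetical binary feature map and binary watermark, which is a common way to instantiate master and ownership shares in zero-watermarking.

```python
import numpy as np

def make_shares(feature_bits, watermark_bits):
    """Master share = binary feature map; ownership share = XOR of the two.
    Only the ownership share needs to be stored (assumed XOR-style scheme)."""
    master = feature_bits.astype(np.uint8)
    ownership = np.bitwise_xor(master, watermark_bits.astype(np.uint8))
    return master, ownership

def recover_watermark(query_feature_bits, ownership):
    """'Stack' the regenerated master share with the stored ownership share."""
    return np.bitwise_xor(query_feature_bits.astype(np.uint8), ownership)

# Toy usage with a 32x32 binary feature map and watermark.
rng = np.random.default_rng(0)
features = rng.integers(0, 2, (32, 32))
watermark = rng.integers(0, 2, (32, 32))
_, own = make_shares(features, watermark)
assert np.array_equal(recover_watermark(features, own), watermark)
```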

    A novel robust reversible watermarking scheme for protecting authenticity and integrity of medical images

    It is of great importance in telemedicine to protect the authenticity and integrity of medical images. These issues are mainly addressed by two technologies: region-of-interest (ROI) lossless watermarking and reversible watermarking. However, the former introduces diagnostic bias by distorting the region of non-interest (RONI) and creates security risks by spatially segmenting the image for watermark embedding, while the latter fails to provide a reliable recovery function for tampered areas when protecting image integrity. To address these issues, a novel robust reversible watermarking scheme is proposed in this paper. In our scheme, a reversible watermarking method is designed based on recursive dither modulation (RDM) to avoid diagnostic bias. In addition, RDM is combined with the Slantlet transform and singular value decomposition to provide a reliable solution for protecting image authenticity. Moreover, ROI and RONI are separated for watermark generation to design an effective recovery function under limited embedding capacity. Finally, watermarks are embedded into the whole medical image to avoid the risks caused by spatial segmentation. Experimental results demonstrate that our proposed lossless scheme not only has remarkable imperceptibility and sufficient robustness, but also provides reliable authentication, tamper detection, localization and recovery functions, outperforming existing schemes for protecting medical images.
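    As a rough illustration of the embedding principle, the sketch below shows plain dither modulation on a single transform coefficient; the paper's recursive variant (RDM) and its Slantlet/SVD embedding domain are not reproduced here, and the step size is an arbitrary assumption.

```python
import numpy as np

def dm_embed(coeff: float, bit: int, step: float = 4.0) -> float:
    """Quantise a coefficient onto the lattice associated with the bit (0 or 1)."""
    dither = 0.0 if bit == 0 else step / 2.0
    return np.round((coeff - dither) / step) * step + dither

def dm_extract(coeff: float, step: float = 4.0) -> int:
    """Return the bit whose lattice lies closer to the received coefficient."""
    d0 = abs(coeff - dm_embed(coeff, 0, step))
    d1 = abs(coeff - dm_embed(coeff, 1, step))
    return 0 if d0 <= d1 else 1

# Round trip on a toy coefficient.
assert dm_extract(dm_embed(10.3, 1)) == 1
```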

    Secured federated learning model verification: A client-side backdoor triggered watermarking scheme

    Federated learning (FL) has become an emerging distributed framework for building deep learning models with collaborative efforts from multiple participants. Consequently, copyright protection of the FL deep model is urgently required because many participants have access to the jointly trained model. Recently, encrypted FL frameworks have been developed to address the data leakage issue when the central node is not fully trustworthy. This encryption makes it impossible for existing DL model watermarking schemes to embed a watermark at the central node. In this paper, we propose a novel client-side federated learning watermarking method to tackle the model verification issue under the encrypted FL framework. Specifically, we design a backdoor-based watermarking scheme that allows model owners to embed their pre-designed noise patterns into the FL deep model. Thus, our method provides reliable copyright protection while ensuring data privacy, because the central node has no access to the encrypted gradient information. The experimental results demonstrate the effectiveness of our method in terms of both FL model performance and watermarking robustness.
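    A minimal sketch of the backdoor-style verification idea, assuming an ordinary PyTorch image classifier: the owner stamps a private noise pattern onto a small trigger set with a chosen target label during local training, and later claims ownership if the joint model still answers that label. The helper names and the 90% threshold are illustrative assumptions, not the paper's exact protocol.

```python
import torch

def add_trigger(images: torch.Tensor, trigger: torch.Tensor) -> torch.Tensor:
    """Stamp a small noise pattern onto the bottom-right corner of each image."""
    stamped = images.clone()
    h, w = trigger.shape[-2:]
    stamped[..., -h:, -w:] = trigger
    return stamped

def verify_watermark(model, trigger_set, target_label, threshold=0.9):
    """Claim ownership if the model labels the trigger set as target_label often enough."""
    model.eval()
    with torch.no_grad():
        preds = model(trigger_set).argmax(dim=1)
    return (preds == target_label).float().mean().item() >= threshold
```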

    DIBR zero-watermarking based on invariant feature and geometric rectification

    Despite the success of watermarking techniques for protecting DIBR 3D videos, existing methods can hardly ensure robustness against geometric attacks, lossless video quality, and distinguishability between different videos simultaneously. In this paper, we propose a novel zero-watermarking scheme to address this challenge. Specifically, we design CT-SVD features to ensure both distinguishability and robustness against signal-processing and DIBR conversion attacks. In addition, a logistic-logistic chaotic system is used to encrypt the features for enhanced security. Moreover, a rectification mechanism based on saliency map detection and SIFT matching is designed to resist geometric attacks. Finally, we establish an attention-based fusion mechanism to exploit the complementary robustness of rectified and unrectified features. Experimental results demonstrate that our proposed method outperforms existing schemes in terms of losslessness, distinguishability and robustness against geometric attacks.
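    For a rough idea of how SVD-based features can yield distinguishable yet robust binary signatures, the sketch below thresholds the largest singular value of each pixel block against the global median; the contourlet-transform (CT) stage, the chaotic encryption and the attention fusion of the actual scheme are omitted.

```python
import numpy as np

def svd_block_features(frame: np.ndarray, block: int = 32) -> np.ndarray:
    """Binary feature map: the largest singular value of each block, thresholded
    by the median over all blocks (robust to mild signal-processing attacks)."""
    h, w = (frame.shape[0] // block) * block, (frame.shape[1] // block) * block
    sv = np.array([
        np.linalg.svd(frame[i:i + block, j:j + block], compute_uv=False)[0]
        for i in range(0, h, block) for j in range(0, w, block)])
    return (sv >= np.median(sv)).astype(np.uint8).reshape(h // block, w // block)
```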

    Robust steganography without embedding based on secure container synthesis and iterative message recovery

    Synthesis-based steganography without embedding (SWE) methods transform secret messages into container images synthesised by generative networks, which eliminates distortions of container images and thus fundamentally resists typical steganalysis tools. However, existing methods suffer from weak message-recovery robustness, limited synthesis fidelity, and the risk of message leakage. To address these problems, we propose a novel robust steganography-without-embedding method in this paper. In particular, we design a secure weight-modulation-based generator that introduces secure factors to hide secret messages in synthesised container images. In this manner, the synthesised results are modulated by the secure factors, so the secret messages become inaccessible when fake factors are used, reducing the risk of message leakage. Furthermore, we design a difference predictor, trained via the reconstruction of tampered container images together with an adversarial training strategy, to iteratively update the estimate of the hidden message. This ensures robust recovery of hidden messages, while degradation of synthesis fidelity is limited because the generator is not included in the adversarial training. Extensive experimental results convincingly demonstrate that our proposed method is effective in avoiding message leakage and superior to existing methods in terms of recovery robustness and synthesis fidelity.
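    A minimal sketch of the weight-modulation idea, assuming a StyleGAN-style modulated convolution: a key-derived vector scales the generator weights channel-wise, so synthesising with a wrong key produces different images and the hidden message cannot be recovered. The layer and the key-to-factor mapping are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SecureModulatedConv(nn.Module):
    """Conv layer whose weights are scaled channel-wise by a key-derived factor."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)

    def forward(self, x, secure_factor):
        # secure_factor: (in_ch,) vector derived from the secret key.
        w = self.weight * secure_factor.view(1, -1, 1, 1)
        return F.conv2d(x, w, padding=self.weight.shape[-1] // 2)

def key_to_factor(key: int, in_ch: int) -> torch.Tensor:
    """Deterministic, key-dependent positive scales (hypothetical mapping)."""
    g = torch.Generator().manual_seed(key)
    return torch.rand(in_ch, generator=g) + 0.5
```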

    Semantics-guided generative diffusion model with a 3DMM model condition for face swapping

    Face swapping is a technique that replaces a face in target media with another face of a different identity taken from a source face image. Despite the impressive synthesis quality achieved by recent generative models, research on the effective use of prior knowledge and semantic guidance for photo-realistic face swapping remains limited. In this paper, we propose a novel conditional Denoising Diffusion Probabilistic Model (DDPM) enforced by two-level face prior guidance. Specifically, it includes (i) an image-level condition generated by a 3D Morphable Model (3DMM), and (ii) high-semantic-level guidance driven by information extracted from several pre-trained attribute classifiers, for high-quality face image synthesis. Although the face swapped by the 3DMM does not achieve photo-realistic quality on its own, it provides a strong image-level prior, in parallel with high-level face semantics, to guide the DDPM towards high-fidelity image generation. The experimental results demonstrate that our method outperforms state-of-the-art face swapping methods on benchmark datasets in terms of synthesis quality and the capability to preserve the target face attributes while swapping in the source face identity.
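    A toy sketch of two-level conditioning in a denoiser, assuming the coarse 3DMM-rendered swap is concatenated channel-wise with the noisy image and a semantic attribute embedding is injected additively; the actual DDPM architecture, timestep handling and guidance schedule are not reproduced.

```python
import torch
import torch.nn as nn

class ConditionedDenoiser(nn.Module):
    """Toy denoiser: channel-concatenates the 3DMM-rendered coarse swap as an
    image-level condition and adds a projected semantic attribute embedding."""
    def __init__(self, img_ch=3, sem_dim=128, hidden=64):
        super().__init__()
        self.stem = nn.Conv2d(img_ch * 2, hidden, 3, padding=1)
        self.sem_proj = nn.Linear(sem_dim, hidden)
        self.out = nn.Conv2d(hidden, img_ch, 3, padding=1)

    def forward(self, x_t, coarse_swap, sem_emb):
        h = self.stem(torch.cat([x_t, coarse_swap], dim=1))
        h = h + self.sem_proj(sem_emb)[:, :, None, None]
        return self.out(torch.relu(h))
```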

    Image disentanglement autoencoder for steganography without embedding

    Conventional steganography approaches embed a secret message into a carrier for concealed communication but are prone to attack by recent advanced steganalysis tools. In this paper, we propose the Image DisEntanglement Autoencoder for Steganography (IDEAS) as a novel steganography-without-embedding (SWE) technique. Instead of directly embedding the secret message into a carrier image, our approach hides it by transforming it into a synthesised image, and is thus fundamentally immune to typical steganalysis attacks. By disentangling an image into two representations, one for structure and one for texture, we exploit the stability of the structure representation to improve secret message extraction, while increasing synthesis diversity by randomising the texture representation to enhance steganography security. In addition, we design an adaptive mapping mechanism to further enhance the diversity of synthesised images while ensuring different required extraction levels. Experimental results convincingly demonstrate that IDEAS achieves superior performance in terms of enhanced security, reliable secret message extraction and flexible adaptation to different extraction levels, compared to state-of-the-art SWE methods.
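    The division of labour between the two latents can be illustrated with a toy decoder, assuming the message is mapped deterministically into a structure latent while the texture latent is sampled freshly for diversity; the mapping and network are illustrative stand-ins, not IDEAS itself.

```python
import torch
import torch.nn as nn

class ToyDisentangledDecoder(nn.Module):
    """Toy generator: the structure latent carries the message; the texture
    latent is random, so repeated sends yield different container images."""
    def __init__(self, struct_dim=64, tex_dim=64, out_pixels=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(struct_dim + tex_dim, 256), nn.ReLU(),
            nn.Linear(256, out_pixels), nn.Tanh())

    def forward(self, structure_z, texture_z):
        img = self.net(torch.cat([structure_z, texture_z], dim=1))
        return img.view(-1, 3, 32, 32)

def bits_to_structure(bits: torch.Tensor) -> torch.Tensor:
    """Hypothetical mapping: each message bit fixes the sign of one latent dim."""
    return bits.float() * 2.0 - 1.0
```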

    A novel class activation map for visual explanations in multi-object scenes

    Class activation maps (CAMs) have emerged as a popular technique for improving the interpretability of deep learning-based models. While existing CAM methods are able to extract salient semantic regions to provide high-confidence pseudo-labels for downstream tasks such as semantic segmentation, they are less effective when dealing with multi-object scenes. In this paper, we design a multi-channel weight assignment scheme that learns from both positive and negative regions to yield an improved CAM model for images comprising multiple objects. We demonstrate the effectiveness of our proposed method on two new datasets, a cat-and-dog dataset and a PASCAL VOC 2012-based multi-object dataset, and show that it compares favourably with other state-of-the-art CAM methods, outperforming them in terms of both mIoU and the inter-object activation ratio (IAR), a new evaluation measure proposed to assess CAM performance in multi-object scenes.
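    For reference, the sketch below computes a plain GAP-based class activation map from the last convolutional features and the classifier weights; the paper's contribution is a learned multi-channel reweighting of exactly these per-channel weights, which is not reproduced here.

```python
import torch
import torch.nn.functional as F

def class_activation_map(features, fc_weight, class_idx, out_size):
    """Baseline CAM: weight the last conv feature maps by the FC weights of the
    target class, ReLU, upsample to the image size and normalise to [0, 1]."""
    # features: (1, C, h, w) from the last conv layer; fc_weight: (num_classes, C)
    weights = fc_weight[class_idx].view(1, -1, 1, 1)
    cam = F.relu((features * weights).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=out_size, mode="bilinear", align_corners=False)
    cam = cam - cam.min()
    return cam / (cam.max() + 1e-8)
```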

    Discriminative and geometrically robust zero-watermarking scheme for protecting DIBR 3D videos

    Copyright protection of depth image-based rendering (DIBR) 3D videos is crucial due to their popularity. Despite the success of recent watermarking schemes, it remains challenging to ensure robustness against strong geometric attacks when both lossless quality and distinguishability of the protected videos are required. In this paper, we propose a novel zero-watermarking scheme to improve performance under strong geometric attacks while satisfying the other two requirements. In our scheme, CT-SVD-based features are extracted to ensure both distinguishability and robustness against signal-processing and DIBR conversion attacks, while a SIFT-based rectification mechanism is designed to resist geometric attacks. Further, an attention-based fusion strategy is proposed to complement the robustness of rectified and unrectified CT-SVD features. Experimental results demonstrate that our scheme outperforms existing zero-watermarking schemes in terms of distinguishability and robustness against strong geometric attacks such as rotation, cyclic translation and shearing.
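    The rectification step can be sketched with standard OpenCV calls, assuming a grayscale reference frame is available at verification time: SIFT keypoints are matched between the attacked frame and the reference, a homography is estimated with RANSAC, and the attacked frame is warped back before feature extraction. This is a generic SIFT pipeline, not the paper's exact mechanism, and it assumes enough good matches survive the ratio test.

```python
import cv2
import numpy as np

def rectify(attacked_gray: np.ndarray, reference_gray: np.ndarray) -> np.ndarray:
    """Undo a geometric attack by matching SIFT keypoints against a reference."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(attacked_gray, None)
    kp2, des2 = sift.detectAndCompute(reference_gray, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference_gray.shape
    return cv2.warpPerspective(attacked_gray, H, (w, h))
```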

    Micro-expression video clip synthesis method based on spatial-temporal statistical model and motion intensity evaluation function

    Micro-expression (ME) recognition is an effective method for detecting lies and other subtle human emotions. Machine learning-based and deep learning-based models have achieved remarkable results recently. However, these models are vulnerable to overfitting due to the scarcity of ME video clips, which are much harder to collect and annotate than ordinary expression video clips, limiting further improvements in recognition performance. To address this issue, we propose a micro-expression video clip synthesis method based on a spatial-temporal statistical model and a motion intensity evaluation function. In our proposed scheme, we establish a micro-expression spatial and temporal statistical model (MSTSM) by analyzing the dynamic characteristics of micro-expressions and deploy this model to provide the rules for micro-expression video synthesis. In addition, we design a motion intensity evaluation function (MIEF) to ensure that the intensity of facial expression in the synthesized video clips is consistent with that in real MEs. Finally, facial video clips with MEs of new subjects can be generated by deploying the MIEF together with the widely used 3D facial morphable model and the rules provided by the MSTSM. The experimental results demonstrate that the accuracy of micro-expression recognition can be effectively improved by adding the synthesized video clips generated by our proposed method.
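    A simple stand-in for a motion intensity measure, assuming dense Farneback optical flow between consecutive grayscale frames; the paper's MIEF is defined differently, so this only illustrates how synthesised clips could be screened for plausible motion magnitude.

```python
import cv2
import numpy as np

def motion_intensity(prev_gray: np.ndarray, next_gray: np.ndarray) -> float:
    """Mean optical-flow magnitude between consecutive frames, used here as a
    proxy for expression motion intensity when filtering synthesised clips."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return float(np.linalg.norm(flow, axis=2).mean())
```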