337 research outputs found

    Scene-based imperceptible-visible watermarking for HDR video content

    This paper presents the High Dynamic Range Imperceptible-Visible Watermarking scheme for HDR video content (HDR-IVW-V), which uses scene detection and a visually imperceptible watermarking methodology for robust copyright protection of HDR videos. HDR-IVW-V employs scene detection to reduce both computational complexity and undesired visual attention to watermarked regions. Visual imperceptibility is achieved by finding the region of a frame with the highest hiding capacity, on which the Human Visual System (HVS) cannot recognize the embedded watermark. The embedded watermark remains visually imperceptible as long as normal color-calibration parameters are maintained. HDR-IVW-V is evaluated on PQ-encoded HDR video content, successfully attaining visual imperceptibility, robustness to tone-mapping operations, and image quality preservation.
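    The two ingredients the abstract describes, detecting scene cuts so region selection runs once per scene, and picking the frame region with the highest hiding capacity, can be sketched roughly as follows. This is a hypothetical illustration using block variance as a stand-in for the paper's HVS-based capacity metric, and a simple luma-difference threshold for scene detection; none of the function names or parameters come from the paper.

    ```python
    import numpy as np

    def select_embedding_block(luma, block=8):
        """Pick the block with the highest local variance, a rough proxy for
        HVS hiding capacity: busy, high-contrast regions mask a watermark
        better than flat ones. Illustrative only, not the paper's metric."""
        h, w = luma.shape
        best, best_var = (0, 0), -1.0
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                v = luma[y:y + block, x:x + block].var()
                if v > best_var:
                    best_var, best = v, (y, x)
        return best

    def scene_cuts(frames, thresh=30.0):
        """Flag frame indices where the mean absolute luma difference jumps
        past a threshold; region selection then runs once per scene rather
        than once per frame, cutting computational cost."""
        cuts = [0]
        for i in range(1, len(frames)):
            if np.abs(frames[i] - frames[i - 1]).mean() > thresh:
                cuts.append(i)
        return cuts
    ```

    In a real pipeline the capacity model would operate on PQ-encoded luminance rather than raw luma, but the scene-gated structure is the same.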

    Additional information delivery to image content via improved unseen–visible watermarking

    In practical watermarking scenarios, watermarks are used to carry auxiliary information; to this end, an analogous digital approach called unseen–visible watermarking has been introduced. In this algorithm, the embedding stage combines visible and invisible watermarking to embed an owner logotype or barcode as the watermark; in the exhibition stage, the built-in functions of display devices are used to reveal the watermark to the naked eye, eliminating the need for a separate watermark-exhibition algorithm. In this paper, a watermark-complement strategy for unseen–visible watermarking is proposed to improve the embedding stage, reducing the histogram distortion and visual degradation of the watermarked image. The presented algorithm makes the following contributions: first, it can be applied to any class of image with large smooth regions of low or high intensity; second, a watermark-complement strategy is introduced to reduce the visual degradation and histogram distortion of the watermarked image; and third, an embedding-error measurement is proposed. Evaluation results show that the proposed strategy performs well in comparison with other algorithms, providing high visual quality of the exhibited watermark while preserving robustness, in terms of readability and imperceptibility, against geometric and processing attacks.
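    The complement strategy can be pictured as a choice between embedding the watermark bit pattern or its complement, whichever modifies fewer pixels and hence distorts the histogram less. The sketch below is a hypothetical toy version on a grayscale image stored as nested lists; the paper's actual embedding rule and distortion measure differ.

    ```python
    def embed_unseen_visible(img, wm, delta=2):
        """Embed a binary watermark as a small luminance offset in a smooth
        region. The pattern or its complement is chosen so that the minority
        bit value is the one embedded, so fewer pixels change and histogram
        distortion shrinks. Toy sketch, not the published algorithm."""
        ones = sum(map(sum, wm))
        total = len(wm) * len(wm[0])
        use_complement = ones > total - ones   # complement if 1s are the majority
        out = [row[:] for row in img]
        for y, row in enumerate(wm):
            for x, bit in enumerate(row):
                if bit ^ use_complement:       # embed the sparser pattern
                    out[y][x] = min(255, out[y][x] + delta)
        return out, use_complement
    ```

    The extractor would need the returned `use_complement` flag (or a convention for signaling it) to flip the revealed pattern back when the display exposes it.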

    Preventing Unauthorized AI Over-Analysis by Medical Image Adversarial Watermarking

    The advancement of deep learning has facilitated the integration of Artificial Intelligence (AI) into clinical practice, particularly in computer-aided diagnosis. Given the pivotal role of medical images in various diagnostic procedures, it becomes imperative to ensure the responsible and secure use of AI techniques. However, the unauthorized use of AI for image analysis raises significant concerns regarding patient privacy and potential infringement on the proprietary rights of data custodians. Consequently, the development of pragmatic and cost-effective strategies that safeguard patient privacy and uphold medical image copyrights emerges as a critical necessity. In direct response to this pressing demand, we present a pioneering solution named Medical Image Adversarial watermarking (MIAD-MARK). Our approach introduces watermarks that strategically mislead unauthorized AI diagnostic models, inducing erroneous predictions without compromising the integrity of the visual content. Importantly, our method integrates an authorization protocol tailored for legitimate users, enabling removal of the MIAD-MARK through encryption-generated keys. Through extensive experiments, we validate the efficacy of MIAD-MARK across three prominent medical image datasets. The empirical outcomes demonstrate the substantial impact of our approach, notably reducing the accuracy of standard AI diagnostic models to a mere 8.57% under white-box conditions and 45.83% in the more challenging black-box scenario. Additionally, our solution effectively mitigates unauthorized exploitation of medical images even in the presence of sophisticated watermark-removal networks. Notably, those AI diagnosis networks exhibit a meager average accuracy of 38.59% when applied to images protected by MIAD-MARK, underscoring the robustness of our safeguarding mechanism.
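    The core mechanism, a perturbation that misleads a classifier while leaving the visual content intact, can be sketched as a single gradient-sign step confined to a watermark mask. The toy linear classifier below is a stand-in for an AI diagnostic model; MIAD-MARK's actual watermark generator, keyed removal protocol, and loss are not reproduced here.

    ```python
    import numpy as np

    def adversarial_watermark(x, w_model, mask, eps=0.05):
        """One FGSM-style step restricted to watermark pixels: nudge the
        image against the model's decision, only where the mask allows, so
        an unauthorized classifier is misled while the rest of the image
        is untouched. Toy linear model; hypothetical names throughout."""
        logit = float(x @ w_model)   # toy linear classifier score
        grad = w_model               # d(logit)/dx for a linear model
        step = eps * np.sign(grad) * mask
        # push the score toward the wrong side of the decision boundary
        return x - step if logit > 0 else x + step
    ```

    A real deployment would pair this with the keyed removal the abstract mentions, so that authorized users recover the unperturbed image before analysis.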

    Watermarks


    WM-NET: Robust Deep 3D Watermarking with Limited Data

    The goal of 3D mesh watermarking is to embed a message in a 3D mesh such that it withstands various attacks imperceptibly and can be reconstructed accurately from the watermarked mesh. Traditional methods are less robust against attacks, while recent DNN-based methods either introduce excessive distortion or fail to embed the watermark without the help of texture information. However, embedding the watermark in textures is insecure, because replacing the texture image completely removes the watermark. In this paper, we propose WM-NET, a robust deep 3D mesh watermarking network that leverages attention-based convolutions to embed binary messages in vertex distributions without texture assistance. Furthermore, WM-NET exploits the property that simplified meshes inherit similar relations from the original ones, where a relation is the offset vector directed from one vertex to its neighbor. By doing so, our method can be trained on simplified meshes (limited data) but remains effective on large meshes (size-adaptable) and on unseen categories of meshes (geometry-adaptable). Extensive experiments demonstrate that our method introduces 50% less distortion and achieves 10% higher bit accuracy than previous work. WM-NET is robust against various mesh attacks, e.g., Gaussian noise, rotation, translation, scaling, and cropping.
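    The "relation" the abstract leans on is simply the offset vector from a vertex to one of its neighbors; because mesh simplification preserves the statistics of these offsets, a network trained on small simplified meshes can transfer to large or unseen ones. A minimal sketch of computing such relations, with a hypothetical vertex/edge layout not taken from the paper:

    ```python
    def vertex_offsets(vertices, edges):
        """Compute neighbor-offset relations for a mesh: for each directed
        edge (a, b), the vector from vertex a to vertex b. WM-NET-style
        training operates on such relations rather than raw coordinates,
        which is what makes it size- and geometry-adaptable. Sketch only."""
        offsets = {}
        for a, b in edges:
            ax, ay, az = vertices[a]
            bx, by, bz = vertices[b]
            offsets[(a, b)] = (bx - ax, by - ay, bz - az)
        return offsets
    ```

    Embedding then amounts to learned, imperceptible modulations of these offsets, so the watermark lives in the vertex distribution and survives texture replacement.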

    Traceable and Authenticable Image Tagging for Fake News Detection

    To prevent fake news images from misleading the public, it is desirable not only to verify the authenticity of news images but also to trace the source of fake news, so as to provide a complete forensic chain for reliable fake news detection. To simultaneously achieve the goals of authenticity verification and source tracing, we propose a traceable and authenticable image tagging approach based on a Decoupled Invertible Neural Network (DINN). The designed DINN can simultaneously embed dual tags, i.e., an authenticable tag and a traceable tag, into each news image before publishing, and then separately extract them for authenticity verification and source tracing. Moreover, to improve the accuracy of dual-tag extraction, we design a parallel Feature Aware Projection Model (FAPM) to help the DINN preserve essential tag information. In addition, we define a Distance Metric-Guided Module (DMGM) that learns asymmetric one-class representations, enabling the dual tags to achieve different robustness levels under malicious manipulations. Extensive experiments, on diverse datasets and unseen manipulations, demonstrate that the proposed tagging approach achieves excellent performance in both authenticity verification and source tracing for reliable fake news detection, and outperforms prior works.
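    The invertibility that lets one network both embed and extract tags can be illustrated with an additive coupling step: the forward pass adds a scaled tag signal, and the inverse recovers it exactly. This toy shows a single tag and assumes the cover is known at extraction; the actual DINN is learned, blind, and handles two tags with different robustness, so treat every name here as hypothetical.

    ```python
    def couple_embed(cover, tag, scale=0.01):
        """Forward pass of an additive coupling: stego = cover + scale*tag.
        Invertible by construction, which is the property a DINN-style
        tagger exploits to embed and extract with one network. Toy only."""
        return [c + scale * t for c, t in zip(cover, tag)]

    def couple_extract(stego, cover, scale=0.01):
        """Exact inverse of couple_embed when the cover is known; rounding
        snaps the recovered values back to integer tag bits."""
        return [round((s - c) / scale) for s, c in zip(stego, cover)]
    ```

    The design point is that invertible couplings lose no tag information in principle; robustness to manipulation then comes from training, which is where the FAPM and DMGM modules enter.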