    Prediction-error of Prediction Error (PPE)-based Reversible Data Hiding

    This paper presents a novel reversible data hiding (RDH) algorithm for gray-scale images in which the prediction-error of prediction error (PPE) of a pixel is used to carry the secret data. In the proposed method, the pixels to be embedded are first predicted from their neighboring pixels to obtain the corresponding prediction errors (PEs). Then, by exploiting the PEs of the neighboring pixels, a prediction of each pixel's PE is determined, and the difference between a PE and its prediction gives the PPE. A sorting technique based on the local complexity of a pixel collects the PPEs into an ordered sequence so that smaller PPEs are processed first for data embedding. By reversibly shifting the PPE histogram (PPEH) with optimized parameters, the pixels corresponding to the altered PPEH bins are finally modified to carry the secret data. Experimental results indicate that the proposed method benefits from the prediction of the PEs, the sorting technique, and the parameter selection, and therefore outperforms some state-of-the-art works in terms of payload-distortion performance on different images.
    Comment: There is no technical difference from previous versions, only some minor word corrections. A 2-page summary of this paper was accepted by the ACM IH&MMSec'16 "Ongoing work session". Homepage: hzwu.github.io
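
    The following minimal sketch (not the paper's exact procedure) illustrates the two ingredients the abstract describes: computing a PPE map with a simple left/top mean predictor, and the classic two-bin expansion/shifting rule applied to a PPE value. The predictor choice and the omission of the sorting step and parameter optimization are assumptions made here for illustration.

```python
import numpy as np

def ppe_map(img):
    """Compute prediction errors (PE) with a simple left/top mean predictor,
    then the prediction-error of prediction error (PPE) by predicting each PE
    from the PE of its left neighbour. Both predictors are illustrative
    choices, not necessarily the paper's."""
    x = img.astype(np.int64)
    # PE of pixel (i, j), i, j >= 1: pixel minus mean of left and top neighbours.
    pe = x[1:, 1:] - (x[1:, :-1] + x[:-1, 1:]) // 2
    # PPE: PE minus the PE of the left neighbour.
    ppe = pe[:, 1:] - pe[:, :-1]
    return ppe

def embed_bit(ppe_value, bit):
    """Two-bin expansion/shifting rule on a PPE value: bins 0 and -1 carry one
    bit each, all other bins are shifted outward by one so the map is invertible."""
    if ppe_value == 0:
        return ppe_value + bit
    if ppe_value == -1:
        return ppe_value - bit
    return ppe_value + 1 if ppe_value > 0 else ppe_value - 1

def extract_bit(marked_value):
    """Inverse of embed_bit: returns (bit or None, restored PPE value)."""
    if marked_value in (0, 1):
        return marked_value, 0
    if marked_value in (-1, -2):
        return -1 - marked_value, -1
    # Shifted bin: no bit was embedded, just undo the shift.
    restored = marked_value - 1 if marked_value > 1 else marked_value + 1
    return None, restored
```

    In the actual reversible scheme, a change to a PPE value has to be realized by modifying the pixel itself so that the predictor, the PE, and the PPE can all be recomputed and inverted at the decoder; the sketch only shows the value-level mapping.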

    Reversible Embedding to Covers Full of Boundaries

    In reversible data embedding, to avoid the overflow and underflow problem, boundary pixels are recorded as side information before data embedding, and this side information may be losslessly compressed. Existing algorithms often assume that a natural image has few boundary pixels, so that the side information is small and a relatively high pure payload can be achieved. However, a natural image may actually contain many boundary pixels, meaning that the side information can be very large. Therefore, when the existing algorithms are applied directly, the pure embedding capacity may be insufficient. To address this problem, this paper presents a new and efficient framework for reversible data embedding in images that contain many boundary pixels. The core idea is to losslessly preprocess the boundary pixels so that the side information is significantly reduced. Experimental results show the superiority and applicability of our work.
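
    For context, the sketch below shows the conventional location-map handling of boundary pixels that the abstract refers to, which is where the side-information cost comes from. The shift-by-one rule and the map layout are assumptions for illustration; the paper's own lossless preprocessing is not reproduced here.

```python
import numpy as np

def preprocess_boundaries(img):
    """Conventional overflow/underflow handling: pull boundary values (0 and
    255) inward by one and record a location map so the change can be undone.
    The map is the side information whose size grows with the number of
    boundary pixels."""
    img = img.astype(np.uint8)          # astype returns a copy
    was_low = img == 0
    was_high = img == 255
    img[was_low] = 1
    img[was_high] = 254
    # After shifting, every pixel valued 1 or 254 is ambiguous: it may be an
    # original boundary pixel or an untouched one. One flag bit per such pixel
    # forms the location map that must be conveyed as overhead.
    ambiguous = (img == 1) | (img == 254)
    location_map = (was_low | was_high)[ambiguous]   # True = was shifted
    return img, location_map

def restore_boundaries(preprocessed, location_map):
    """Undo the shift, assuming the image has already been returned to its
    preprocessed state by the RDH extraction step."""
    out = preprocessed.copy().reshape(-1)
    ambiguous = np.flatnonzero((out == 1) | (out == 254))
    shifted = ambiguous[location_map.astype(bool)]
    out[shifted] = np.where(out[shifted] == 1, 0, 255)
    return out.reshape(preprocessed.shape)
```

    When an image is saturated in large regions, almost every pixel at 1 or 254 needs a flag bit, which is exactly the case where the pure payload of the conventional approach collapses.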

    Watermarking Graph Neural Networks by Random Graphs

    Many learning tasks require us to deal with graph data which contains rich relational information among elements, leading increasing graph neural network (GNN) models to be deployed in industrial products for improving the quality of service. However, they also raise challenges to model authentication. It is necessary to protect the ownership of the GNN models, which motivates us to present a watermarking method to GNN models in this paper. In the proposed method, an Erdos-Renyi (ER) random graph with random node feature vectors and labels is randomly generated as a trigger to train the GNN to be protected together with the normal samples. During model training, the secret watermark is embedded into the label predictions of the ER graph nodes. During model verification, by activating a marked GNN with the trigger ER graph, the watermark can be reconstructed from the output to verify the ownership. Since the ER graph was randomly generated, by feeding it to a non-marked GNN, the label predictions of the graph nodes are random, resulting in a low false alarm rate (of the proposed work). Experimental results have also shown that, the performance of a marked GNN on its original task will not be impaired. Moreover, it is robust against model compression and fine-tuning, which has shown the superiority and applicability.Comment: https://hzwu.github.io
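
    A minimal sketch of the trigger generation and verification logic described above, assuming networkx/NumPy for the ER graph and a hypothetical `predict(graph, features)` inference callable standing in for whatever GNN framework is used; the graph size, edge probability, and agreement threshold are illustrative, not the paper's settings.

```python
import numpy as np
import networkx as nx

def make_trigger(num_nodes=32, edge_prob=0.2, feat_dim=16,
                 num_classes=4, seed=2024):
    """Generate the watermark trigger: an Erdos-Renyi random graph with random
    node features and random target labels (sizes and seed are illustrative)."""
    rng = np.random.default_rng(seed)
    graph = nx.erdos_renyi_graph(num_nodes, edge_prob, seed=seed)
    features = rng.standard_normal((num_nodes, feat_dim)).astype(np.float32)
    labels = rng.integers(0, num_classes, size=num_nodes)
    return graph, features, labels

def verify_watermark(predict, graph, features, labels, threshold=0.9):
    """Ownership check: feed the trigger graph to a model and compare the
    predicted node labels with the secret labels. A marked model should agree
    almost everywhere; an unrelated model should agree only at chance level,
    which keeps the false alarm rate low. `predict(graph, features)` is a
    hypothetical stand-in returning one label per node."""
    predicted = np.asarray(predict(graph, features))
    agreement = float(np.mean(predicted == labels))
    return agreement >= threshold, agreement
```

    During training, the trigger nodes with their secret labels would simply be added to the normal training set so that the marked model memorizes them.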

    Ensemble Reversible Data Hiding

    Conventional reversible data hiding (RDH) algorithms often consider the host as a whole when embedding a secret payload. To achieve satisfactory rate-distortion performance, the secret bits are embedded into a noise-like component of the host, such as the prediction errors. From a rate-distortion optimization point of view, this may not be optimal, since all data embedding units use identical parameters. This motivates us to present a segmented data embedding strategy for efficient RDH, in which the raw host is partitioned into multiple subhosts so that each one can freely optimize and use its own data embedding parameters. Moreover, it allows different RDH algorithms to be applied to different subhosts, which we call an ensemble. Note that the ensemble defined here is different from the one in machine learning. Accordingly, the conventional operation corresponds to a special case of the proposed work. Since it is a general strategy, we combine some state-of-the-art algorithms into a new system using the proposed embedding strategy to evaluate its rate-distortion performance. Experimental results show that the ensemble RDH system outperforms the original versions in most cases, demonstrating its superiority and applicability.
    Comment: Fig. 1 was updated due to a minor error.
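
    A schematic sketch of the segmented (ensemble) strategy, assuming a hypothetical `embed(block, bits) -> (marked_block, distortion)` interface for each candidate RDH algorithm; the grid partition and the per-block selection rule are illustrative choices, not the paper's optimization.

```python
import numpy as np

def split_into_subhosts(img, rows=2, cols=2):
    """Partition the host into a grid of subhosts so that each one can be
    embedded with its own algorithm and parameters (the 'ensemble' strategy).
    The 2x2 grid is just an illustrative choice."""
    strips = np.array_split(img, rows, axis=0)
    return [block for strip in strips for block in np.array_split(strip, cols, axis=1)]

def embed_ensemble(subhosts, payload_bits, candidates):
    """For each subhost, try every candidate embedder (hypothetical interface
    `embed(block, bits) -> (marked_block, distortion)`) on its share of the
    payload and keep the one with the lowest distortion. The chosen index per
    subhost is small overhead that must also be conveyed to the extractor."""
    shares = np.array_split(np.asarray(payload_bits), len(subhosts))
    marked, choices = [], []
    for block, bits in zip(subhosts, shares):
        results = [embed(block, bits) for embed in candidates]
        best = int(np.argmin([distortion for _, distortion in results]))
        marked.append(results[best][0])
        choices.append(best)
    return marked, choices
```

    Running every candidate on every subhost and keeping the per-block minimum is only one way to realize the per-subhost freedom the abstract describes; the payload split and the selection criterion could themselves be optimized.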