
    HQR-Scheme: A High Quality and resilient virtual primary key generation approach for watermarking relational data

    Most watermarking techniques designed to protect relational data use the Primary Key (PK) of relations to perform watermark synchronization. Although they offer high confidence in watermark detection, these approaches become useless if the PK is erased or updated. A typical example is when an attacker wishes to use a stolen relation unlinked from the rest of the database. In that case, the original PK values lose relevance, since they are no longer employed to check referential integrity. The PK can then be erased or replaced, compromising watermark detection without the slightest modification to the rest of the data. To avoid the problems caused by PK dependency, some schemes have been proposed that generate Virtual Primary Keys (VPKs) to be used instead. Nevertheless, the quality of a watermark synchronized using VPKs is compromised by duplicate values in the set of VPKs and by the fragility of VPK schemes against the elimination of attributes. In this paper, we introduce metrics that allow precise measurement of the quality of the VPKs generated by any scheme without performing the watermark embedding, so that time is not wasted on embeddings doomed to low-quality detection. We also analyze the main aspects of designing an ideal VPK scheme, aiming to generate high-quality VPK sets while adding robustness to the process. Finally, a new scheme is presented, along with the experiments carried out to validate it and compare its results with the other schemes proposed in the literature.
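    As a rough illustration of the general virtual-primary-key idea described above (not the HQR-Scheme itself), the sketch below derives a VPK per tuple from a keyed hash over selected attribute values and measures one simple quality indicator, the fraction of duplicated VPKs. The key, the chosen attributes, and the duplicate-rate metric are assumptions made here for demonstration only.

```python
# Minimal sketch of virtual primary key (VPK) generation via a keyed hash
# over selected attribute values. Illustrates the general VPK idea only;
# the key, attribute choice, and quality metric are hypothetical.
import hashlib
import hmac
from collections import Counter

SECRET_KEY = b"watermarking-secret"  # hypothetical secret key


def virtual_primary_key(row: dict, attributes: list[str]) -> str:
    """Derive a VPK for one tuple from the chosen attributes."""
    # Concatenate the selected attribute values in a fixed order.
    payload = "|".join(str(row[a]) for a in attributes).encode("utf-8")
    # Keyed hash so an attacker without the key cannot reproduce the VPK.
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()


def duplicate_rate(vpks: list[str]) -> float:
    """Fraction of tuples whose VPK collides with another tuple's VPK
    (a simple quality indicator computed without any watermark embedding)."""
    counts = Counter(vpks)
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / len(vpks) if vpks else 0.0


# Toy relation (hypothetical data): identical attribute values yield duplicate VPKs.
rows = [
    {"name": "Alice", "age": 30, "city": "Lyon"},
    {"name": "Bob", "age": 25, "city": "Turin"},
    {"name": "Alice", "age": 30, "city": "Lyon"},
]
vpks = [virtual_primary_key(r, ["name", "age", "city"]) for r in rows]
print(duplicate_rate(vpks))  # 0.666..., since two tuples share all attribute values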

    A Double Fragmentation Approach for Improving Virtual Primary Key-Based Watermark Synchronization

    Relational data watermarking techniques based on virtual primary key schemes seek to prevent watermark detection from being compromised by the deletion or replacement of the relation's primary key. Nevertheless, these techniques face limitations caused by the high redundancy of the generated set of virtual primary keys, which often compromises the quality of the embedded watermark. As a solution to this problem, this paper proposes a double fragmentation of the watermark that exploits the existing redundancy in the set of virtual primary keys. In this way, we guarantee correct identification of the watermark even when any attribute of the relation is deleted. The experiments carried out to validate our proposal show an increase of between 81.04% and 99.05% in detected marks with respect to previous solutions found in the literature. Furthermore, we found that our approach takes advantage of the redundancy present in the set of virtual primary keys. Concerning computational complexity, we performed a set of scalability tests that show the linear behavior of our approach's runtime with respect to the number of tuples involved, making it feasible to use regardless of the amount of data to be protected.
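    To make the fragmentation idea concrete, the sketch below splits a watermark bit string into fragments and maps each tuple's VPK deterministically to one fragment, so that duplicated VPKs carry the same fragment and redundancy reinforces detection rather than corrupting it. This is an assumed, simplified fragment-assignment strategy for illustration, not the paper's double-fragmentation algorithm.

```python
# Hedged sketch of watermark fragmentation synchronized via virtual primary
# keys (VPKs). The assignment rule and fragment layout are assumptions.
import hashlib


def assign_fragment(vpk: str, num_fragments: int) -> int:
    """Map a VPK deterministically to one watermark fragment index."""
    digest = hashlib.sha256(vpk.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_fragments


def fragment_watermark(watermark_bits: str, num_fragments: int) -> list[str]:
    """Split the watermark bit string into equal fragments
    (assumes the length is divisible by num_fragments, for brevity)."""
    size = len(watermark_bits) // num_fragments
    return [watermark_bits[i * size:(i + 1) * size] for i in range(num_fragments)]


# Tuples whose VPKs collide receive the same fragment, so redundancy in the
# VPK set repeats (and thus reinforces) the same piece of the watermark.
watermark = "1011001110001101"
fragments = fragment_watermark(watermark, 4)
vpks = ["a1f3", "a1f3", "9c2b", "44d0"]  # toy VPKs; duplicates kept on purpose
for vpk in vpks:
    idx = assign_fragment(vpk, len(fragments))
    print(vpk, "->", fragments[idx])
```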

    Semantic-driven watermarking of relational textual databases

    In relational database watermarking, the semantic consistency between the original database and the distorted one is a challenging issue that is disregarded by most watermarking proposals, owing to the well-known assumption that a small amount of error in the watermarked database is tolerable. We propose a semantic-driven watermarking approach for relational textual databases that marks multi-word textual attributes, exploiting the synonym substitution technique for text watermarking together with notions from semantic similarity analysis, and addressing the semantic perturbations caused by the watermark embedding. We show the effectiveness of our approach through an experimental evaluation, highlighting how it meets the capacity, robustness and imperceptibility watermarking requirements. We also demonstrate the resilience of our approach against the random synonym substitution attack.
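    The sketch below shows the bare mechanics of synonym-substitution watermarking for a textual attribute: a secret key decides which member of a synonym pair encodes a bit value. The key, the toy synonym table, and the single-word embedding rule are placeholders; the paper's semantic similarity analysis and perturbation control are not reproduced here.

```python
# Minimal synonym-substitution embedding sketch for one textual attribute.
# The key and synonym table are hypothetical; a real system would draw
# synonyms from a lexical resource and check semantic similarity in context.
import hashlib

SECRET_KEY = "wm-key"  # hypothetical key

SYNONYMS = {"big": "large", "quick": "fast", "buy": "purchase"}


def embed_bit(text: str, bit: int) -> str:
    """Embed one bit by substituting (or keeping) the first markable word."""
    words = text.split()
    for i, w in enumerate(words):
        lw = w.lower()
        if lw in SYNONYMS:
            # Keyed decision: which variant of the pair represents the bit value.
            h = int(hashlib.sha256((SECRET_KEY + lw).encode()).hexdigest(), 16)
            use_synonym = (h % 2) ^ bit
            words[i] = SYNONYMS[lw] if use_synonym else w
            break
    return " ".join(words)


print(embed_bit("a quick delivery service", 1))
```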

    An Area Efficient Composed CORDIC Architecture

    This article presents a composed architecture for the CORDIC algorithm. CORDIC is a widely used technique for calculating basic trigonometric functions using only additions and shifts. The composed architecture combines an initial coarse stage that approximates the sine and cosine functions with a second stage that finely tunes those values while CORDIC operates in rotation mode. Both stages help shorten the number of algorithmic steps required to fully execute the CORDIC algorithm. For comparison purposes, the Xilinx CORDIC LogiCORE IP and previously reported research are used. Reducing hardware resource usage is the key objective of the proposed architecture.
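    As a software illustration of the coarse-plus-fine idea (not the article's hardware architecture), the sketch below jumps close to the target angle with a small lookup table and then runs classic rotation-mode CORDIC iterations on the residual angle. The table size, iteration count, and angle range are assumptions chosen for the example.

```python
# Hedged sketch of CORDIC in rotation mode preceded by a coarse lookup stage.
# Table size, iteration count, and supported angle range are assumptions.
import math

N_ITER = 12                                        # fine (rotation-mode) iterations
ANGLES = [math.atan(2.0 ** -i) for i in range(N_ITER)]
K = 1.0
for i in range(N_ITER):
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))    # compensates the CORDIC gain

# Coarse stage: 16-entry table of pre-rotated unit vectors (hypothetical size).
COARSE = [(math.cos(a), math.sin(a), a)
          for a in (k * math.pi / 32 for k in range(16))]


def cordic_sin_cos(theta: float) -> tuple[float, float]:
    """Approximate (cos(theta), sin(theta)) for 0 <= theta <= pi/2."""
    # Coarse stage: pick the nearest pre-rotated vector at or below theta.
    x, y, rotated = max((c for c in COARSE if c[2] <= theta), key=lambda c: c[2])
    z = theta - rotated                            # residual angle for the fine stage
    # Fine stage: rotation-mode CORDIC using only shifts and adds.
    for i in range(N_ITER):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return x * K, y * K                            # undo the gain of the fine stage


print(cordic_sin_cos(math.radians(30)))            # ~ (0.866, 0.5)
```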