
    A Privacy-Preserving Framework for Large-Scale Content-Based Information Retrieval Using K-Secure Sum Protocol

    We propose a privacy-protection framework for large-scale content-based information retrieval. It offers two layers of protection. First, robust hash values are used as queries to prevent revealing the original content or features. Second, the client can choose to omit certain bits of a hash value to further increase the ambiguity for the server. Due to the reduced information, it is computationally difficult for the server to infer the client's interest. The server has to return the hash values of all possible candidates to the client, and the client searches within this candidate list to find the best match. Since only hash values are exchanged between the client and the server, the privacy of both parties is protected. We introduce the concept of tunable privacy, where the level of privacy protection can be adjusted by policy. It is realized through hash-based piecewise inverted indexing: a feature vector is divided into pieces, each piece is indexed with a sub-hash value, and each sub-hash value is associated with an inverted index list. The framework has been extensively tested on a large-scale image database. We have evaluated both retrieval performance and privacy-preserving performance for a particular content identification application, using two constructions of robust hash algorithms: one based on random projections, the other on the discrete wavelet transform. Both algorithms exhibit retrieval performance comparable to the state of the art. The results show that the privacy enhancement slightly improves retrieval performance. We consider the majority voting attack for estimating the query category and identity. The results show that this attack is a threat when near-duplicates are present, but its success rate decreases with the number of omitted bits and the number of distinct items.
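
    A minimal sketch of the two mechanisms described above: piecewise inverted indexing of sub-hash values and client-side bit exclusion. The toy random-projection hash, the 4-piece layout, and all names and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from collections import defaultdict

PIECES, BITS_PER_PIECE = 4, 16   # assumed layout: 64-bit hash split into 4 sub-hashes

def robust_hash(feature, projections):
    """Toy random-projection hash: the sign of each projection gives one bit."""
    return (feature @ projections > 0).astype(np.uint8)

def build_index(db_features, projections):
    """Piecewise inverted index: list every item under each of its sub-hash values."""
    index = [defaultdict(list) for _ in range(PIECES)]
    for item_id, feat in enumerate(db_features):
        bits = robust_hash(feat, projections)
        for p in range(PIECES):
            piece = bits[p * BITS_PER_PIECE:(p + 1) * BITS_PER_PIECE]
            index[p][piece.tobytes()].append(item_id)
    return index

def query(index, query_bits, dropped_pieces=()):
    """The client reveals only the sub-hashes it chooses; the server returns the
    union of candidate lists, and the client re-ranks them locally."""
    candidates = set()
    for p in range(PIECES):
        if p in dropped_pieces:      # omitted bits increase ambiguity for the server
            continue
        piece = query_bits[p * BITS_PER_PIECE:(p + 1) * BITS_PER_PIECE]
        candidates.update(index[p].get(piece.tobytes(), []))
    return candidates

# Example with assumed 128-dimensional features and a near-duplicate query.
rng = np.random.default_rng(0)
proj = rng.normal(size=(128, PIECES * BITS_PER_PIECE))
db = rng.normal(size=(1000, 128))
idx = build_index(db, proj)
q = robust_hash(db[42] + 0.01 * rng.normal(size=128), proj)
print(query(idx, q, dropped_pieces={3}))   # server-side candidate list
```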

    Layer-based Privacy and Security Architecture for Cloud Data Sharing

    Managing data while maintaining its utility and preserving its security is a matter of concern for the cloud owner. To minimize the overhead at the cloud service provider of applying security to each document before sending it to the client, we propose a layered architecture. This approach maintains the security of sensitive documents and the privacy of their sensitive data. To balance data security and utility, the proposed approach categorizes data according to its sensitivity. Preserving the different categories requires different algorithmic schemes. We set up a distributed cloud environment where data is categorized into four levels of sensitivity: public, confidential, secret, and top secret, and a different approach is used to preserve security at each level. At the most sensitive layers, i.e., secret and top-secret data, we provide a mechanism to detect the faulty node responsible for data leakage. Finally, an experimental analysis is carried out to evaluate the performance of the layered approach. The results show that the time taken to process 200 documents of size 20 MB is 437, 2239, 3142, and 3900 ms for public, confidential, secret, and top-secret data, respectively, when the documents are distributed among distinct users, which demonstrates the practicality of the proposed approach.
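
    An illustrative sketch only: the abstract does not name the per-level algorithms, so the placeholder cipher, keys, and node identifier below are assumptions used solely to show how documents could be routed to level-specific protection.

```python
import hashlib
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    CONFIDENTIAL = 1
    SECRET = 2
    TOP_SECRET = 3

def _toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Placeholder keystream cipher; a real deployment would use vetted
    # primitives (e.g. AES), which the abstract does not specify.
    stream = hashlib.sha256(key).digest()
    stream = (stream * (len(data) // len(stream) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

def protect(document: bytes, level: Sensitivity, node_id: str = "node-7") -> bytes:
    """Route a document to level-specific protection before distribution."""
    if level is Sensitivity.PUBLIC:
        return document                               # stored as-is, no overhead
    if level is Sensitivity.CONFIDENTIAL:
        return _toy_encrypt(document, b"k-conf")      # single protection layer
    # SECRET / TOP_SECRET: attach a node fingerprint so a leaking (faulty)
    # node can later be traced, as the abstract describes.
    tagged = node_id.encode() + b"|" + document
    if level is Sensitivity.SECRET:
        return _toy_encrypt(tagged, b"k-secret")
    return _toy_encrypt(_toy_encrypt(tagged, b"k-top-1"), b"k-top-2")

protected = protect(b"example document", Sensitivity.SECRET)
```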

    Privacy-Preserving Outsourced Media Search

    This work proposes a privacy-protection framework for an important application called outsourced media search. This scenario involves a data owner, a client, and an untrusted server, where the owner outsources a search service to the server. Due to the lack of trust, the privacy of the client and the owner should be protected. The framework relies on multimedia hashing and symmetric encryption. It requires the involved parties to participate in a privacy-enhancing protocol. Additional processing steps are carried out by the owner and the client: (i) before outsourcing low-level media features to the server, the owner one-way hashes them and partially encrypts each hash value; (ii) the client completes the similarity search by re-ranking the most similar candidates received from the server. One-way hashing and encryption add ambiguity to the data and make it difficult for the server to infer content from database items and queries, so the privacy of both the owner and the client is enforced. The proposed framework realizes trade-offs among the strength of privacy enforcement, the quality of search, and complexity, because the information loss can be tuned during hashing and encryption. Extensive experiments demonstrate the effectiveness and the flexibility of the framework.
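
    A minimal sketch of the owner/server/client data flow under stated assumptions: the one-way hash, the "partial encryption" (masking only the high bits of each hash), and the candidate matching are simplified stand-ins, not the paper's construction.

```python
import hashlib

KEEP_BITS = 32          # assumption: low 32 bits stay searchable, high 32 are encrypted
MASK = (1 << KEEP_BITS) - 1

def one_way_hash(feature: bytes, bits: int = 64) -> int:
    # Owner side: irreversible digest standing in for a robust multimedia hash.
    return int.from_bytes(hashlib.sha256(feature).digest()[:bits // 8], "big")

def partial_encrypt(h: int, key_stream: int) -> int:
    # Owner side: XOR-encrypt only the high bits; applying it twice decrypts.
    return (h & MASK) | ((h ^ key_stream) & ~MASK)

def server_candidates(outsourced_db, query_low_bits):
    # Untrusted server: coarse matching on the unencrypted low bits only,
    # so it never sees full hashes or original features.
    return [(item_id, h) for item_id, h in outsourced_db if (h & MASK) == query_low_bits]

def client_rerank(candidates, query_hash, key_stream):
    # Client side: undo the partial encryption (key assumed shared with the owner
    # via the privacy-enhancing protocol) and re-rank by full Hamming distance.
    decrypted = [(i, partial_encrypt(h, key_stream)) for i, h in candidates]
    return sorted(decrypted, key=lambda ih: bin(ih[1] ^ query_hash).count("1"))

# Example flow: the owner outsources masked hashes; the client queries with the
# searchable low bits of its own hash and finishes the ranking locally.
key = 0x5A5A5A5A_00000000
db = [(i, partial_encrypt(one_way_hash(bytes([i]) * 16), key)) for i in range(256)]
q = one_way_hash(bytes([42]) * 16)
print(client_rerank(server_candidates(db, q & MASK), q, key)[:3])
```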

    A buyer-seller watermarking protocol for digital secondary market

    In the digital rights management value chain, digital watermarking technology plays a very important role in the security of digital products, especially in usage tracking and copyright infringement authentication. However, watermarking procedures can only effectively support copyright protection if they are applied as part of an appropriate watermarking protocol. In this regard, a number of watermarking protocols have been proposed in the literature and shown to facilitate the use of digital watermarking technology for copyright protection. One example of such protocols is the anonymous buyer-seller watermarking protocol. Although several protocols proposed in the literature provide suitable solutions, they are mainly designed for the first-hand market and are unsuitable for second-hand transactions. As the complexity of online transactions increases, so does the size of the digital second-hand market. In this paper, we present a new buyer-seller watermarking protocol that addresses the customer's rights problem in the digital secondary market. The proposed protocol consists of five sub-protocols that cover the registration process, the watermarking processes for first-, second-, and third-hand transactions, and the identification and arbitration processes. The paper provides an analysis that compares the proposed protocol with the existing state of the art and shows that it not only meets all of the buyer's and seller's requirements in the traditional sense but also accommodates the same requirements in the secondary market.
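
    A structural sketch only: the abstract names the five sub-protocols but not their message contents, so the transaction record, the watermark handling, and the arbitration lookup below are assumptions meant to show how first-, second-, and third-hand sales could be chained for traceability.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Transaction:
    content_id: str
    seller: str
    buyer: str
    watermark: str      # buyer-specific mark embedded in the delivered copy
    hop: int            # 1 = first-hand sale, 2 = second-hand, 3 = third-hand

def resell(prev: Transaction, new_buyer: str, new_watermark: str) -> Transaction:
    # Second-/third-hand transfer: the former buyer becomes the seller and a
    # fresh buyer-specific watermark is embedded so every copy stays traceable.
    return Transaction(prev.content_id, prev.buyer, new_buyer,
                       new_watermark, prev.hop + 1)

def arbitrate(recovered_watermark: str, ledger: List[Transaction]) -> Optional[str]:
    # Identification & arbitration: map a watermark recovered from a pirated
    # copy back to the responsible buyer.
    for t in ledger:
        if t.watermark == recovered_watermark:
            return t.buyer
    return None

first = Transaction("song-001", "rights-holder", "alice", "wm-alice", hop=1)
second = resell(first, "bob", "wm-bob")
print(arbitrate("wm-bob", [first, second]))   # -> bob
```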

    Squint Hard Enough: Evaluating Perceptual Hashing with Machine Learning

    Many online communications systems use perceptual hash matching systems to detect illicit files in user content. These systems employ specialized perceptual hash functions such as Microsoft's PhotoDNA or Facebook's PDQ to produce a compact digest of an image file that can be approximately compared to a database of known illicit-content digests. Recently, several proposals have suggested that hash-based matching systems be incorporated into client-side and end-to-end encrypted (E2EE) systems: in these designs, files that register as illicit content will be reported to the provider, while the remaining content will be sent confidentially. By using perceptual hashing to determine confidentiality guarantees, this new setting significantly changes the function of existing perceptual hashing -- thus motivating the need to evaluate these functions from an adversarial perspective, using their perceptual capabilities against them. For example, an attacker may attempt to trigger a match on innocuous, but politically-charged, content in an attempt to stifle speech. In this work we develop threat models for perceptual hashing algorithms in an adversarial setting, and present attacks against the two most widely deployed algorithms: PhotoDNA and PDQ. Our results show that it is possible to efficiently generate targeted second-preimage attacks in which an attacker creates a variant of some source image that matches some target digest. As a complement to this main result, we also further investigate the production of images that facilitate detection avoidance attacks, continuing a recent investigation of Jain et al. Our work shows that existing perceptual hash functions are likely insufficiently robust to survive attacks on this new setting.
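
    A toy stand-in, not PhotoDNA or PDQ (whose internals are not reproduced here): a simple difference-hash and a greedy pixel-perturbation loop, only to make concrete what a targeted second-preimage attack means, i.e. perturbing a source image until its digest matches a chosen target digest.

```python
import numpy as np

def dhash(img: np.ndarray) -> np.ndarray:
    """Toy perceptual hash: an 8x9 grayscale block gives 64 bits from horizontal gradients."""
    return (img[:, 1:] > img[:, :-1]).astype(np.uint8).ravel()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

def second_preimage(source: np.ndarray, target_digest: np.ndarray,
                    steps: int = 20000, seed: int = 0):
    """Greedy hill climb: keep small pixel perturbations that move the digest
    of `source` closer to `target_digest` (a targeted second preimage)."""
    rng = np.random.default_rng(seed)
    img = source.astype(np.int16).copy()
    best = hamming(dhash(img), target_digest)
    for _ in range(steps):
        y, x = rng.integers(0, img.shape[0]), rng.integers(0, img.shape[1])
        old = img[y, x]
        img[y, x] = np.clip(old + rng.choice((-8, 8)), 0, 255)
        d = hamming(dhash(img), target_digest)
        if d <= best:
            best = d                 # keep the (visually minor) change
        else:
            img[y, x] = old          # revert
        if best == 0:
            break                    # digest now collides with the target
    return img.astype(np.uint8), best

src = np.random.default_rng(1).integers(0, 256, (8, 9))
tgt = dhash(np.random.default_rng(2).integers(0, 256, (8, 9)))
_, remaining = second_preimage(src, tgt)
print("remaining digest distance:", remaining)
```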

    Steganography: A Data Hiding Technique

    Steganography is an information-hiding technique in which communication takes place by concealing a secret message within a carrier message. The technique can be used to hide a message in an image, a video file, an audio file, or a file system. There is a large variety of steganography techniques for hiding secret information in images. The final output image, called a stego-image, contains the secret message. Imperceptibility, payload, and robustness are the three most important parameters for audio steganography. For a more secure approach, the secret message can first be encrypted with a secret key, then embedded and sent to the receiver, who decrypts it to recover the original message. In this paper, we compare steganography with cryptography, an encryption technique, and explain how steganography provides better security in terms of hiding the secret message. We illustrate the various techniques used in steganography, study their implementation, and demonstrate the implementation of one of them. A comparative analysis of various steganographic tools is performed using sample test images and test data. Quality metrics such as PSNR and SSIM are calculated for the output images and used to rate the tools. The paper also discusses steganalysis, the process of detecting the use of steganography.
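
    A short sketch of the classic least-significant-bit (LSB) image technique surveyed above, together with the PSNR metric mentioned for rating stego-image quality; the cover image and message are toy stand-ins, not the paper's test data or any of the compared tools.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, message: bytes) -> np.ndarray:
    """Hide `message` in the least significant bit of each pixel."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = cover.ravel().copy()
    if bits.size > flat.size:
        raise ValueError("message larger than cover capacity")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSB only
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    bits = stego.ravel()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

def psnr(cover: np.ndarray, stego: np.ndarray) -> float:
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in cover image
stego = embed_lsb(cover, b"secret")
assert extract_lsb(stego, 6) == b"secret"
print(f"PSNR: {psnr(cover, stego):.1f} dB")   # high PSNR indicates imperceptible changes
```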