A Secret Image Sharing Based on Logistic-Chebyshev Chaotic Map and Chinese Remainder Theorem
Visual secret sharing (VSS) is a modern cryptographic technique introduced to address information security issues. It involves breaking a secret image into secured components known as shares; the secret image is recovered with utmost secrecy only when all of these shares are aligned and stacked together. A (3, 3)-secret image sharing scheme (SIS) is provided in this paper by fusing the Chinese Remainder Theorem (CRT) and the Logistic-Chebyshev map (LC). Sharing a confidential image with CRT has various benefits, including lossless recovery, no need for further encryption, and minimal recovery computation overhead. Firstly, we build a chaotic sequence using an LC map. The pixel values of the secret image are permuted with this sequence in order to resist differential attacks. To encrypt the scrambled image, we apply our CRT technique to create three shares. Finally, the security analysis of our (3, 3)-SIS scheme is demonstrated and confirmed by simulation results.
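The CRT sharing and lossless-recovery steps described above can be illustrated with a minimal sketch. The moduli below are illustrative assumptions, not the paper's parameters: any three pairwise-coprime moduli whose product exceeds 255 allow a pixel to be split into three residue shares and recovered exactly.

```python
from math import prod

# Hypothetical pairwise-coprime moduli; product 7*11*13 = 1001 > 255,
# so every 8-bit pixel value is uniquely recoverable.
MODULI = [7, 11, 13]

def share_pixel(p):
    """Split a pixel value (0..255) into three CRT residue shares."""
    return [p % m for m in MODULI]

def recover_pixel(shares):
    """Reconstruct the pixel from all three shares via the CRT."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(shares, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # Mi^{-1} mod m (Python 3.8+)
    return x % M

assert recover_pixel(share_pixel(200)) == 200
```

Recovery requires all three residues, matching the (3, 3) access structure; the chaotic permutation step of the paper is omitted here.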
Information-Theoretic Secure Outsourced Computation in Distributed Systems
Secure multi-party computation (secure MPC) has been established as the de facto paradigm for protecting privacy in distributed computation. One of the earliest secure MPC primitives is Shamir's secret sharing (SSS) scheme. SSS has many advantages over other popular secure MPC primitives like garbled circuits (GC) -- it provides an information-theoretic security guarantee, requires no complex long-integer operations, and often leads to more efficient protocols. Nonetheless, SSS receives less attention in the signal processing community because SSS requires a larger number of honest participants, making it prone to collusion attacks. In this dissertation, I propose an agent-based computing framework using SSS to protect privacy in distributed signal processing. This dissertation makes three main contributions. First, the proposed computing framework is shown to be significantly more efficient than GC. Second, a novel game-theoretical framework is proposed to analyze different types of collusion attacks. Third, using the proposed game-theoretical framework, specific mechanism designs are developed to deter collusion attacks in a fully distributed manner. Specifically, for a collusion attack with known detectors, I analyze it as games between secret owners and show that the attack can be effectively deterred by an explicit retaliation mechanism. For a general attack without detectors, I expand the scope of the game to include the computing agents and provide deterrence through deceptive collusion requests. The correctness and privacy of the protocols are proved under a covert adversarial model. Our experimental results demonstrate the efficiency of SSS-based protocols and the validity of our mechanism design.
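As background, the SSS primitive the dissertation builds on can be sketched in a few lines. The field prime below is a toy assumption; a real protocol picks the field to fit the signal range. A degree-(k-1) polynomial with the secret as constant term is evaluated at n points, and any k shares recover the secret by Lagrange interpolation at zero.

```python
import random

PRIME = 2**61 - 1  # toy field prime; any prime larger than the secret works

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(42, k=3, n=5)
assert reconstruct(shares[:3]) == 42
```

Fewer than k shares reveal nothing about the secret, which is the information-theoretic guarantee the abstract refers to.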
Emerging Trustworthiness Issues in Distributed Learning Systems
A distributed learning system allocates learning processes onto several workstations to enable faster learning algorithms. Federated Learning (FL) is an increasingly popular type of distributed learning which allows mutually untrusted clients to collaboratively train a common machine learning model without sharing their private/proprietary training data with each other. In this dissertation, we aim to address emerging trustworthiness issues in distributed learning systems, particularly in the field of FL.
First, we tackle the issue of robustness in FL and demonstrate its susceptibility by presenting a comprehensive analysis of the various poisoning attacks and defensive aggregation rules proposed in the literature, connecting them under a common framework. To address this issue, we propose Federated Rank Learning (FRL), which reduces the space of client updates from a continuous space of float numbers in standard FL to a discrete space of integer values, limiting the adversary's options for poisoning attacks.
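The core idea of restricting client updates to a discrete space can be illustrated by replacing a float update vector with its integer rank vector. This is a hedged sketch of the discretization step only, not FRL's full aggregation protocol:

```python
import numpy as np

def to_ranking(update):
    """Map a float update vector to its integer rank vector:
    entry i is the rank of parameter i in ascending order."""
    order = np.argsort(update)           # permutation that sorts ascending
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(update))
    return ranks

u = np.array([0.3, -1.2, 0.7])
# -1.2 is smallest (rank 0), 0.3 next (rank 1), 0.7 largest (rank 2)
assert to_ranking(u).tolist() == [1, 0, 2]
```

Whatever magnitudes a poisoning client injects, its submitted update is still just a permutation of the same integer set, which bounds its influence on aggregation.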
Next, we address the privacy concerns in FL, including access privacy and data privacy. An adversarial server in FL gains information about the data distribution of a target client by monitoring either (I) the local updates that the target submits throughout FL training or (II) the access pattern of the target, which can be privacy-sensitive in many real-world scenarios. To preserve access privacy, we design Heterogeneous Private Information Retrieval (HPIR), which allows clients to fetch their specific model parameters from untrusted servers without leaking any information. We believe that HPIR will enable new application scenarios for private distributed learning systems, as well as improve the usability of some of the known applications of PIR. To preserve data privacy, we show that local rankings leak less information about private training data. We conduct a comprehensive investigation of the privacy of rankings in FRL to measure data leakage compared to weight parameter updates in standard FL, in the presence of the state-of-the-art white-box membership inference attack.
Finally, we address the issue of fairness in FL, where a single model cannot represent all clients equally due to heterogeneity in their data distributions. To alleviate this issue, we propose Equal and Equitable Federated Learning (E2FL). E2FL produces fair federated learning models that preserve both equity and equality among the participating clients: learning is based on parameter rankings, and multiple global models are learned so that each group of clients can benefit from its own personalized model.
Privacy-preserving information hiding and its applications
The phenomenal advances in cloud computing technology have raised concerns about data privacy. Aided by the modern cryptographic techniques such as homomorphic encryption, it has become possible to carry out computations in the encrypted domain and process data without compromising information privacy. In this thesis, we study various classes of privacy-preserving information hiding schemes and their real-world applications for cyber security, cloud computing, Internet of things, etc.
Data breach is recognised as one of the most serious cyber security threats, in which private data is copied, transmitted, viewed, stolen or used by unauthorised parties. Although encryption can obfuscate private information against unauthorised viewing, it may not stop data from illegitimate exportation. Privacy-preserving information hiding can serve as a potential solution to this issue: a permission code is embedded into the encrypted data and can be detected when transmissions occur.
Digital watermarking is a technique that has been used for a wide range of intriguing applications such as data authentication and ownership identification. However, some of the algorithms are proprietary intellectual property, and thus their availability to the general public is rather limited. A possible solution is to outsource the task of watermarking to an authorised cloud service provider that has the legitimate right to execute the algorithms as well as high computational capacity. Privacy-preserving information hiding is well suited to this scenario since it operates in the encrypted domain and hence prevents private data from being collected by the cloud.
Internet of things is a promising technology for the healthcare industry. A common framework consists of wearable devices for monitoring the health status of an individual, a local gateway device for aggregating the data, and a cloud server for storing and analysing the data. However, there are risks that an adversary may attempt to eavesdrop on the wireless communication, attack the gateway device or even gain access to the cloud server. Hence, it is desirable to produce and encrypt the data simultaneously and to incorporate secret sharing schemes to realise access control. Privacy-preserving secret sharing is a novel research direction for fulfilling this function.
In summary, this thesis presents novel schemes and algorithms, including:
• two privacy-preserving reversible information hiding schemes based upon symmetric cryptography using arithmetic of quadratic residues and lexicographic permutations, respectively.
• two privacy-preserving reversible information hiding schemes based upon asymmetric cryptography using multiplicative and additive privacy homomorphisms, respectively.
• four predictive models for assisting the removal of distortions inflicted by information hiding based respectively upon projection theorem, image gradient, total variation denoising, and Bayesian inference.
• three privacy-preserving secret sharing algorithms with different levels of generality.
Secret Sharing Approach for Securing Cloud-Based Image Processing
Efficient Protocols for Multi-Party Computation
Secure Multi-Party Computation (MPC) allows a group of parties to compute a joint function on their inputs without revealing any information beyond the result of the computation. We demonstrate secure function evaluation protocols for branching programs, where the communication complexity is linear in the size of the inputs and polynomial in the security parameter. Our result is based on the circular security of Paillier's encryption scheme. Our work follows the breakthrough results by Boyle et al. [9; 11]. They presented a Homomorphic Secret Sharing scheme which allows the non-interactive computation of branching programs over shares of the secret inputs. Their protocol is based on the Decisional Diffie-Hellman Assumption. Additionally, we offer a verification technique to directly check the correctness of the actual computation, rather than the absence of a potential error as in [9]. This results in fewer repetitions of the overall computation for a given error bound. We also use Paillier's encryption as the underlying scheme for publicly evaluatable perceptual hashing. Perceptual hashing allows the computation of a robust fingerprint of media files, such that the fingerprint can be used to detect the same object even if it has been modified in perceptually non-significant ways (e.g., compression). The robustness of such functions relies on the use of secret keys both during the computation and the detection phase. We present examples of publicly evaluatable perceptual hash functions which allow a user to compute the perceptual hash of an image using a public key, while only the detection algorithm uses the secret key. Our technique can be used to encourage users to submit intimate images to blacklist databases to stop those images from ever being posted online -- indeed, using a publicly evaluatable perceptual hash function, the user can privately submit the fingerprint without ever revealing the image.
We present formal definitions for the security of perceptual hash functions, a general theoretical result that uses Fully Homomorphic Encryption, and a specific construction using Paillier's encryption. For the latter, we show via extensive implementation tests that the cryptographic overhead can be made minimal, resulting in a very efficient construction.
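The additive homomorphism of Paillier's scheme that these constructions rely on can be sketched with toy parameters. The small primes below are for illustration only; real deployments use primes of at least 1024 bits.

```python
import random
from math import gcd

# Toy primes for illustration only; real Paillier uses >= 1024-bit primes.
P, Q = 1_000_003, 1_000_033
N = P * Q
N2 = N * N
LAM = (P - 1) * (Q - 1) // gcd(P - 1, Q - 1)  # lcm(p-1, q-1)
MU = pow(LAM, -1, N)                          # LAM^{-1} mod N (Python 3.8+)

def encrypt(m):
    """Paillier encryption with generator g = N + 1."""
    r = random.randrange(1, N)
    while gcd(r, N) != 1:
        r = random.randrange(1, N)
    return pow(1 + N, m, N2) * pow(r, N, N2) % N2

def decrypt(c):
    """Standard decryption: L(c^LAM mod N^2) * MU mod N, L(x) = (x-1)//N."""
    return (pow(c, LAM, N2) - 1) // N * MU % N

a, b = encrypt(17), encrypt(25)
# Multiplying ciphertexts adds the underlying plaintexts.
assert decrypt(a * b % N2) == 42
```

Exponentiating a ciphertext by a public constant likewise multiplies the plaintext, which is what enables evaluating linear fingerprint computations on encrypted images.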