9 research outputs found

    Elementary Remarks on Some Quadratic Based Identity Based Encryption Schemes

    In the design of an identity-based encryption (IBE) scheme, the primary security assumptions center around quadratic residues, bilinear mappings, and lattices. Among these approaches, one of the most intriguing is introduced by Clifford Cocks and is based on quadratic residues. However, this scheme has a significant drawback: a large ciphertext-to-plaintext ratio. A different approach is taken by Zhao et al., who design an IBE still based on quadratic residues, but with an encryption process reminiscent of the Goldwasser-Micali cryptosystem. In the following pages, we will introduce an elementary method to accelerate Cocks' encryption process and adapt a space-efficient encryption technique for both Cocks' and Zhao et al.'s cryptosystems.
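Since the abstract describes encryption reminiscent of Goldwasser-Micali, a minimal toy sketch of the Goldwasser-Micali cryptosystem itself may help: a bit is hidden in the quadratic residuosity of the ciphertext. The parameters below are hypothetical and far too small for real security; this illustrates the principle, not any of the schemes in the paper.

```python
import math
import random

def legendre(a, p):
    # Euler's criterion: a^((p-1)/2) mod p is 1 for residues, p-1 otherwise
    s = pow(a, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

# Toy parameters (hypothetical; insecure sizes chosen only for illustration)
p, q = 499, 547
N = p * q

# x must be a non-residue mod both p and q (so its Jacobi symbol mod N is +1)
x = next(v for v in range(2, N) if legendre(v, p) == -1 and legendre(v, q) == -1)

def encrypt(bit):
    # c = r^2 * x^bit mod N: a quadratic residue mod N iff bit == 0
    while True:
        r = random.randrange(2, N)
        if math.gcd(r, N) == 1:
            break
    return (r * r * pow(x, bit, N)) % N

def decrypt(c):
    # Knowing the factorization, test residuosity mod p
    return 0 if legendre(c % p, p) == 1 else 1
```

Note the large ciphertext-to-plaintext ratio the abstract mentions: each single plaintext bit costs an entire element mod N.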

    Anonymous IBE From Quadratic Residuosity With Fast Encryption

    We develop two variants of Cocks' identity-based encryption. One variant has faster encryption, where the most time-consuming part requires only several modular multiplications. The other variant makes the first anonymous under suitable complexity assumptions, although its decryption is about half as efficient as the first variant's. Both variants have roughly twice the ciphertext expansion of the original Cocks identity-based encryption. To alleviate the second variant's large ciphertext expansion, we use it to construct a public-key encryption with keyword search scheme with a fast encryption algorithm.

    Alpenhorn: Bootstrapping Secure Communication without Leaking Metadata

    Alpenhorn is the first system for initiating an encrypted connection between two users that provides strong privacy and forward secrecy guarantees for metadata (i.e., information about which users connected to each other) and that does not require out-of-band communication other than knowing the other user's Alpenhorn username (email address). This resolves a significant shortcoming in all prior work on private messaging, which assumes an out-of-band key distribution mechanism. Alpenhorn's design builds on three ideas. First, Alpenhorn provides each user with an address book of friends that the user can call to establish a connection. Second, when a user adds a friend for the first time, Alpenhorn ensures the adversary does not learn the friend's identity, by using identity-based encryption in a novel way to privately determine the friend's public key. Finally, when calling a friend, Alpenhorn ensures forward secrecy of metadata by storing pairwise shared secrets in friends' address books, and evolving them over time, using a new keywheel construction. Alpenhorn relies on a number of servers, but operates in an anytrust model, requiring just one of the servers to be honest. We implemented a prototype of Alpenhorn, and integrated it into the Vuvuzela private messaging system (which did not previously provide privacy or forward secrecy of metadata when initiating conversations). Experimental results show that Alpenhorn can scale to many users, supporting 10 million users on three Alpenhorn servers with an average call latency of 150 seconds and a client bandwidth overhead of 3.7 KB/sec.
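The keywheel idea of evolving a pairwise shared secret over time can be sketched as a simple hash ratchet. This is an illustrative reconstruction under stated assumptions, not Alpenhorn's actual construction; the function names, labels, and initial secret are hypothetical.

```python
import hashlib

def advance(secret: bytes) -> bytes:
    # One keywheel step: ratchet the pairwise secret forward with a hash.
    # Deleting the old value afterwards is what yields forward secrecy:
    # a later compromise cannot be wound back to earlier epochs.
    return hashlib.sha256(b"keywheel-step:" + secret).digest()

def round_key(secret: bytes, purpose: bytes = b"dialing") -> bytes:
    # Derive a per-round key without consuming the wheel state
    return hashlib.sha256(purpose + b":" + secret).digest()

# Both friends start from the same shared secret (hypothetical value here)
alice = bob = b"\x00" * 32
k_alice = round_key(alice)
k_bob = round_key(bob)   # equal: both sides derive the same round key
alice = advance(alice)   # next round; the previous state is discarded
bob = advance(bob)
```

Because both sides advance in lockstep, they always agree on the current round key while past keys become unrecoverable once the old state is deleted.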

    Secure Protocols for Privacy-preserving Data Outsourcing, Integration, and Auditing

    As the amount of data available from a wide range of domains has increased tremendously in recent years, the demand for data sharing and integration has also risen. The cloud computing paradigm provides great flexibility to data owners with respect to computation and storage capabilities, which makes it a suitable platform for them to share their data. Outsourcing person-specific data to the cloud, however, imposes serious concerns about the confidentiality of the outsourced data, the privacy of the individuals referenced in the data, as well as the confidentiality of the queries processed over the data. Data integration is another form of data sharing, where data owners jointly perform the integration process, and the resulting dataset is shared between them. Integrating related data from different sources enables individuals, businesses, organizations and government agencies to perform better data analysis, make better informed decisions, and provide better services. Designing distributed, secure, and privacy-preserving protocols for integrating person-specific data, however, poses several challenges, including how to prevent each party from inferring sensitive information about individuals during the execution of the protocol, how to guarantee an effective level of privacy on the released data while maintaining utility for data mining, and how to support public auditing such that anyone at any time can verify that the integration was executed correctly and no participants deviated from the protocol. In this thesis, we address the aforementioned concerns by presenting secure protocols for privacy-preserving data outsourcing, integration and auditing. First, we propose a secure cloud-based data outsourcing and query processing framework that simultaneously preserves the confidentiality of the data and the query requests, while providing differential privacy guarantees on the query results. 
Second, we propose a publicly verifiable protocol for integrating person-specific data from multiple data owners, while providing differential privacy guarantees and maintaining an effective level of utility on the released data for the purpose of data mining. Next, we propose a privacy-preserving multi-party protocol for high-dimensional data mashup with guaranteed LKC-privacy on the output data. Finally, we apply the theory to the real-world problem of solvency in Bitcoin. More specifically, we propose a privacy-preserving and publicly verifiable cryptographic proof-of-solvency scheme for Bitcoin exchanges such that no information is revealed about the exchange's customer holdings, the value of the exchange's total holdings is kept secret, and multiple exchanges performing the same proof of solvency can contemporaneously prove they are not colluding.
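Differential privacy guarantees on query results, as promised above, are typically obtained by calibrated noise addition. A minimal sketch of the standard Laplace mechanism follows; this illustrates the general technique, not the thesis's actual protocol, and the function names are hypothetical.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from Laplace(0, scale)
    t = random.random()
    while t == 0.0:  # avoid log(0) at the boundary
        t = random.random()
    u = t - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def dp_release(true_value: float, epsilon: float, sensitivity: float = 1.0) -> float:
    # Laplace mechanism: noise with scale sensitivity/epsilon makes the
    # released value epsilon-differentially private for a query whose
    # output changes by at most `sensitivity` when one record changes
    return true_value + laplace_noise(sensitivity / epsilon)
```

Smaller epsilon means stronger privacy but noisier answers; for a counting query the sensitivity is 1, since adding or removing one individual changes the count by at most one.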

    Embedded document security using sticky policies and identity based encryption

    Data sharing now spans many environments, both trusted and insecure. At the same time, data security boundaries have shrunk from internal network perimeters down to a single identity and a piece of information. Since the EU GDPR came into force, sharing personally identifiable information requires data governance in favour of the data subject. Existing enterprise-grade IRM solutions fail to follow open standards and lack data sharing frameworks that could efficiently integrate with existing identity management and authentication infrastructures. IRM services built for the cloud often offer very limited access control, allowing an individual to store a document online and grant read or read-write permission to another individual identified by an email address. Unfortunately, such limited information sharing controls are often the only safeguards in large enterprises, healthcare institutions and other organizations that should provide the highest possible personal data protection standards. IRM suffers from an architectural vulnerability: an IRM application installed on a semi-trusted client can in practice only enforce all-or-nothing access. Because no single authority is contacted to verify each committed change, an adversary possessing the data-encrypting and key-encrypting keys could modify and re-encrypt the content even though only read access was granted. Finally, of the two IRM products evaluated, one has an algorithm security lifecycle (ASL) too short to protect the shared data, while the other severely constrains secure distribution of the key-encrypting key and exposes the symmetric data-encrypting key over the network. The sticky policy with identity-based encryption (SPIBE) solution presented here was designed for secure cloud data sharing. 
SPIBE aims to deliver a simple, standardized construct that integrates easily with popular OOXML-like document formats and provides straightforward access rights enforcement over protected content. It leverages a sticky policy construct using the XACML access policy language to express access conditions across different cloud data sharing boundaries. XACML is a cloud-ready standard designed for global, multi-jurisdictional use. Unlike raw ABAC implementations, XACML offers a standardised schema and authorisation protocols, which simplifies interoperability. IBE is a cryptographic scheme that protects the shared document by using the identified policy as an asymmetric key-encrypting key for a symmetric data-encrypting key. Unlike ciphertext-policy attribute-based encryption (CP-ABE), the SPIBE policy contains not only access preferences but also a global document identifier and a unique version identifier, which makes each policy uniquely identifiable in relation to the protected document. In an IBE scheme the public key-encrypting key is known and can be shared between the parties, while the data-encrypting key is never sent over the network. Finally, SPIBE as a framework should be able to protect data against new threats: when the ASL of a cryptographic primitive in use is too short, the algorithm can be replaced with an updated primitive. IBE, as a cryptographic protocol, can be implemented with different cryptographic primitives. The identity-based encryption over isogenous pairing groups (IBE-IPG) is a post-quantum-ready construct that builds on the original Boneh-Franklin IBE (IBE-BF) approach. Existing IBE implementations could be updated to IBE-IPG without major system amendments. 
Finally, by applying a blockchain-like document versioning construct, the scheme can verify the authenticity of changes and approve only legitimate document updates, providing a single authority for non-repudiation and authenticity assurance where other IRM solutions fail.

    A role and attribute based encryption approach to privacy and security in cloud based health services

    Cloud computing is a rapidly emerging computing paradigm which replaces static and expensive data centers, network and software infrastructure with dynamically scalable “cloud based” services offered by third-party providers on an on-demand basis. However, with the potential for seemingly limitless scalability and reduced infrastructure costs come new issues regarding security and privacy, as processing and storage tasks are delegated to potentially untrustworthy cloud providers. For the eHealth industry this loss of control makes adopting the cloud problematic when compliance with privacy laws (such as HIPAA, PIPEDA and PHIPA) is required and third-party access to patient records must be limited. This thesis presents an RBAC-enabled solution to the cloud privacy and security issues resulting from this loss of control to a potentially untrustworthy third-party cloud provider, which remains both scalable and distributed. This is accomplished through four major components presented, implemented and evaluated within this thesis: the DOSGi-based Health Cloud eXchange (HCX) architecture for managing and exchanging EHRs between authorized users; the Role Based Access Control as a Service (RBACaaS) model and web service providing RBAC policy enforcement and services to cloud applications; the Role Based Single Sign On (RBSSO) protocol; and the Distributed Multi-Authority Ciphertext-Policy Shared Attribute-Based Encryption (DMACPSABE) scheme for limiting access to sensitive records depending on attributes (or roles) assigned to users. We show that when these components are combined, the resulting system is scalable (scaling at least linearly with users, requests, records and attributes), secure, and provides a level of protection from the cloud provider which preserves the privacy of users' records from any third party. Additionally, potential use cases are presented for each component as well as the overall system.
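At its core, the RBAC policy enforcement that a service like RBACaaS exposes reduces to checking a user's roles against role-permission assignments. A minimal sketch follows; the roles and permission strings are hypothetical and not taken from the thesis.

```python
# Hypothetical role -> permission assignments for an eHealth setting
ROLE_PERMISSIONS = {
    "physician": {"ehr:read", "ehr:write"},
    "nurse": {"ehr:read"},
    "auditor": {"audit:read"},
}

def authorize(user_roles: set, permission: str) -> bool:
    # Grant access if any of the user's roles carries the requested permission
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)
```

Centralizing this check in a web service (rather than in each application) is what lets the policy be administered once and enforced consistently across cloud applications.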

    Secure and Efficient Comparisons between Untrusted Parties

    A vast number of online services are based on users contributing their personal information. Examples are manifold, including social networks, electronic commerce, sharing websites, lodging platforms, and genealogy. In all cases user privacy depends on collective trust in all involved intermediaries, such as service providers, operators, administrators or even help desk staff. A single adversarial party in the whole chain of trust voids user privacy. Moreover, the number of intermediaries is ever growing. Thus, user privacy must be preserved at all times and stages, independent of the intrinsic goals of any involved party. Furthermore, alongside these new services, traditional offline analytic systems are being replaced by online services run in large data centers. Centralized processing of electronic medical records, genomic data or other health-related information is anticipated due to advances in medical research, better analytic results based on large amounts of medical information, and lowered costs. In these scenarios privacy is of utmost concern due to the large amount of personal information contained within the centralized data. We focus on the challenge of privacy-preserving processing of genomic data, specifically comparing genomic sequences. The problem that arises is how to efficiently compare private sequences of two parties while preserving the confidentiality of the compared data. It follows that the privacy of the data owner must be preserved, which means that as little information as possible must be leaked to any party participating in the comparison. Leakage can happen at several points during a comparison. The secured inputs for the comparing party might leak some information about the original input, or the output might leak information about the inputs. In the latter case, results of several comparisons can be combined to infer information about the confidential input of the party under observation. 
Genomic sequences serve as a use-case, but the proposed solutions are more general and can be applied to the generic field of privacy-preserving comparison of sequences. The solution should be efficient, such that performing a comparison yields runtimes linear in the length of the input sequences, and thus produces acceptable costs for a typical use-case. To tackle the problem of efficient, privacy-preserving sequence comparisons, we propose a framework consisting of three main parts. a) The basic protocol presents an efficient sequence comparison algorithm, which transforms a sequence into a set representation, allowing distance measures over input sequences to be approximated by distance measures over sets. The sets are then represented by an efficient data structure, the Bloom filter, which allows evaluation of certain set operations without storing the actual elements of the possibly large set. This representation yields low distortion for comparing similar sequences. Operations upon the set representation are carried out using efficient, partially homomorphic cryptosystems to keep the inputs confidential. The output can be adjusted to either return the actual approximated distance or the result of an in-range check of the approximated distance. b) Building upon this efficient basic protocol, we introduce the first mechanism to reduce the success of inference attacks by detecting and rejecting similar queries in a privacy-preserving way. This is achieved by generating generalized commitments for inputs. The generalization is done by treating inputs as messages received from a noisy channel, to which error correction from coding theory is applied. Similar inputs are then defined as inputs whose generalizations have a Hamming distance below a certain predefined threshold. We present a protocol to perform a zero-knowledge proof assessing whether the generalized input is indeed a generalization of the actual input. 
Furthermore, we generalize a very efficient inference attack on privacy-preserving sequence comparison protocols and use it to evaluate our inference-control mechanism. c) The third part of the framework lightens the computational load of the client taking part in the comparison protocol by presenting a compression mechanism for partially homomorphic cryptographic schemes. It reduces the transmission and storage overhead induced by semantically secure homomorphic encryption schemes, as well as encryption latency. The compression is achieved by constructing an asymmetric stream cipher such that the generated ciphertext can be converted into a ciphertext of an associated homomorphic encryption scheme without revealing any information about the plaintext. This is the first compression scheme available for partially homomorphic encryption schemes. Compression schemes for fully homomorphic encryption ciphertexts are several orders of magnitude slower at converting the transmission ciphertext into the homomorphically encrypted ciphertext; our compression scheme, by contrast, achieves optimal conversion performance. It further allows keystreams to be generated offline and thus supports offloading to trusted devices, improving transmission, storage and power efficiency. We give security proofs for all relevant parts of the proposed protocols and algorithms to evaluate their security. A performance evaluation of the core components demonstrates the practicability of our proposed solutions, including a theoretical analysis and practical experiments showing the accuracy and efficiency of the approximations and probabilistic algorithms. Several variations and configurations for detecting similar inputs are studied in an in-depth discussion of the inference-control mechanism. A human mitochondrial genome database is used for the practical evaluation to compare genomic sequences and detect similar inputs as described by the use-case. 
In summary, we show that it is indeed possible to construct an efficient and privacy-preserving (genomic) sequence comparison while controlling the amount of information that leaves the comparison. To the best of our knowledge, we also contribute the first efficient privacy-preserving inference detection and control mechanism, as well as the first ciphertext compression system for partially homomorphic cryptographic systems.
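The set-based comparison in part (a) can be illustrated with a toy Bloom-filter similarity estimate over k-mers. This is a simplified plaintext sketch: the filter size, hash scheme, and Jaccard-style measure below are illustrative choices, and the actual protocol evaluates such set operations under partially homomorphic encryption rather than in the clear.

```python
import hashlib

M = 256  # hypothetical filter size in bits; real deployments use far larger filters

def kmers(seq: str, k: int = 3) -> set:
    # Decompose a sequence into its set of overlapping k-mers
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def bloom(items, num_hashes: int = 2) -> int:
    # Represent a set as an M-bit Bloom filter, stored here as a Python int
    bits = 0
    for item in items:
        for i in range(num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            bits |= 1 << (int.from_bytes(h[:4], "big") % M)
    return bits

def approx_jaccard(a: int, b: int) -> float:
    # Estimate set similarity from the bit overlap of the two filters
    inter = bin(a & b).count("1")
    union = bin(a | b).count("1")
    return inter / union if union else 1.0
```

Similar sequences share most of their k-mers, so their filters overlap heavily; the filter never stores the k-mers themselves, which is what keeps the representation compact for large sets.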

    Law and Policy for the Quantum Age

    Law and Policy for the Quantum Age is for readers interested in the political and business strategies underlying quantum sensing, computing, and communication. This work explains how these quantum technologies work, surveys the future national defense and legal landscapes for nations seeking strategic advantage, and charts paths to profit for companies.