313 research outputs found

    Function Private Predicate Encryption for Low Min-Entropy Predicates

    In this work, we propose new predicate encryption schemes for zero inner-product encryption (ZIPE) and non-zero inner-product encryption (NIPE) predicates from prime-order bilinear pairings, which are both attribute and function private in the public-key setting. Our ZIPE scheme is adaptively attribute private under the standard Matrix DDH assumption for unbounded collusions. It is additionally computationally function private under a min-entropy variant of the Matrix DDH assumption for predicates sampled from distributions with super-logarithmic min-entropy. Existing (statistically) function private ZIPE schemes due to Boneh et al. [Crypto'13, Asiacrypt'13] necessarily require predicate distributions with significantly larger min-entropy in the public-key setting. Our NIPE scheme is adaptively attribute private under the standard Matrix DDH assumption, albeit for bounded collusions. It is also computationally function private under a min-entropy variant of the Matrix DDH assumption for predicates sampled from distributions with super-logarithmic min-entropy. To the best of our knowledge, existing NIPE schemes from bilinear pairings were neither attribute private nor function private. Our constructions are inspired by the linear FE constructions of Agrawal et al. [Crypto'16] and the simulation-secure ZIPE of Wee [TCC'17]. In our ZIPE scheme, we show a novel way of embedding two different hard problem instances in a single secret key: one for unbounded collusion resistance and the other for function privacy. With respect to NIPE, we introduce new techniques for simultaneously achieving attribute and function privacy. We also show natural generalizations of our ZIPE and NIPE constructions to a wider class of subspace membership, subspace non-membership and hidden-vector encryption predicates.
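As a toy illustration (not part of the paper's pairing-based construction), the ZIPE/NIPE predicate semantics alone can be sketched as follows; the modulus and vectors are made-up examples:

```python
# Toy sketch of the ZIPE/NIPE predicate logic only, with no cryptography:
# a ZIPE key for vector y decrypts a ciphertext for attribute x iff
# <x, y> = 0 (mod p); a NIPE key decrypts iff <x, y> != 0 (mod p).

P = 101  # small prime for illustration; real schemes use a pairing-group order

def inner_product(x, y, p=P):
    return sum(a * b for a, b in zip(x, y)) % p

def zipe_predicate(x, y):
    """ZIPE: decryption succeeds iff <x, y> = 0 mod p."""
    return inner_product(x, y) == 0

def nipe_predicate(x, y):
    """NIPE: decryption succeeds iff <x, y> != 0 mod p."""
    return inner_product(x, y) != 0

x = [1, 2, 3]
y = [3, 0, 100]  # 100 = -1 mod 101, so <x, y> = 3 - 3 = 0 mod p
assert zipe_predicate(x, y)
assert not nipe_predicate(x, y)
```

Subspace membership and hidden-vector predicates mentioned at the end of the abstract generalize this same inner-product test to several vectors at once.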

    Quantum entropic security and approximate quantum encryption

    We present full generalisations of entropic security and entropic indistinguishability to the quantum world where no assumption but a limit on the knowledge of the adversary is made. This limit is quantified using the quantum conditional min-entropy as introduced by Renato Renner. A proof of the equivalence between the two security definitions is presented. We also provide proofs of security for two different cyphers in this model and a proof for a lower bound on the key length required by any such cypher. These cyphers generalise existing schemes for approximate quantum encryption to the entropic security model. Comment: Corrected mistakes in the proofs of Theorems 3 and 6; results unchanged. To appear in IEEE Transactions on Information Theory.
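For orientation, the classical quantity that Renner's quantum conditional min-entropy generalises is the min-entropy of a distribution, a short sketch of which (purely classical, not the quantum definition used in the paper) is:

```python
import math

def min_entropy(dist):
    """Classical min-entropy H_min(X) = -log2(max_x Pr[X = x]).
    The quantum conditional min-entropy reduces to this for a
    classical distribution and a trivial (uncorrelated) adversary."""
    return -math.log2(max(dist))

# A uniform distribution over 8 outcomes has the maximal min-entropy log2(8):
assert min_entropy([1/8] * 8) == 3.0
# A skewed distribution is only as unpredictable as its likeliest outcome:
assert min_entropy([1/2, 1/4, 1/8, 1/8]) == 1.0
```

Entropic security then asks for indistinguishability only against adversaries whose prior on the message keeps this quantity high.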

    Privacy-preserving intrusion detection over network data

    Effective protection against cyber-attacks requires constant monitoring and analysis of system data such as log files and network packets in an IT infrastructure, which may contain sensitive information. To this end, security operation centers (SOC) are established to detect, analyze, and respond to cyber-security incidents. Security officers at SOC are not necessarily trusted with handling the content of the sensitive and private information, especially in the case when SOC services are outsourced, as maintaining in-house expertise and capability in cyber-security is expensive. Therefore, an end-to-end security solution is needed for the system data. SOC often utilizes detection models either for known types of attacks or for anomalies, and applies them to the collected data to detect cyber-security incidents. The models are usually constructed from historical data that contains records pertaining to attacks and normal functioning of the IT infrastructure under monitoring; e.g., using machine learning techniques. SOC is also motivated to keep its models confidential for three reasons: i) to capitalize on the models that are its proprietary expertise, ii) to protect its detection strategies against adversarial machine learning, in which intelligent and adaptive adversaries carefully manipulate their attack strategy to avoid detection, and iii) the model might have been trained on sensitive information, whereby revealing the model can violate certain laws and regulations. Therefore, detection models are also private. In this dissertation, we propose a scenario in which privacy of both system data and detection models is protected and information leakage is either prevented altogether or quantifiably decreased. Our main approach is to provide an end-to-end encryption for system data and detection models utilizing lattice-based cryptography that allows homomorphic operations over the encrypted data.
    Assuming that the detection models are previously obtained from training data by SOC, we apply the models to system data homomorphically, whereby the model is encrypted. We take advantage of three different machine learning algorithms to extract intrusion models by training on historical data. Using different data sets (two recent data sets, and one outdated but widely used in the intrusion detection literature), the performance of each algorithm is evaluated via the following metrics: i) the time it takes to extract the rules, ii) the time it takes to apply the rules on data homomorphically, iii) the accuracy of the rules in detecting intrusions, and iv) the number of rules. Our experiments demonstrate that the proposed privacy-preserving intrusion detection system (IDS) is feasible in terms of execution times and reliable in terms of accuracy.
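The dissertation uses lattice-based homomorphic encryption; as a much simpler stand-in, the workflow of scoring encrypted feature data against a linear detection rule can be sketched with a toy additively homomorphic masking scheme (one-time additive masks mod a prime). All names and numbers below are illustrative assumptions, and unlike the dissertation's setting, the rule weights here are left in the clear:

```python
import random

P = 2**31 - 1  # prime modulus for the toy scheme

def keygen(n):
    # one uniform additive mask per feature; the masks are the secret key
    return [random.randrange(P) for _ in range(n)]

def encrypt(features, key):
    # additively homomorphic: ct_i = m_i + r_i mod P
    return [(m + r) % P for m, r in zip(features, key)]

def homomorphic_score(ct, weights):
    # server side: weighted sum over ciphertexts, never seeing the features
    return sum(w * c for w, c in zip(weights, ct)) % P

def decrypt_score(score_ct, weights, key):
    # client removes the aggregated mask sum(w_i * r_i)
    mask = sum(w * r for w, r in zip(weights, key)) % P
    return (score_ct - mask) % P

features = [5, 0, 12, 3]   # e.g. per-host event counts (made-up)
weights = [2, 7, 1, 4]     # a linear detection rule (made-up)
key = keygen(len(features))
ct = encrypt(features, key)
assert decrypt_score(homomorphic_score(ct, weights), weights, key) == 34
```

The lattice-based schemes in the dissertation additionally keep the model encrypted and support the multiplications needed for richer rule types.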

    Public-Key Function-Private Hidden Vector Encryption (and More)

    We construct public-key function-private predicate encryption for the ``small superset functionality,'' recently introduced by Beullens and Wee (PKC 2019). This functionality captures several important classes of predicates:
    - Point functions. For point function predicates, our construction is equivalent to public-key function-private anonymous identity-based encryption.
    - Conjunctions. If the predicate computes a conjunction, our construction is a public-key function-private hidden vector encryption scheme. This addresses an open problem posed by Boneh, Raghunathan, and Segev (ASIACRYPT 2013).
    - d-CNFs and read-once conjunctions of d-disjunctions for constant-size d.
    Our construction extends the group-based obfuscation schemes of Bishop et al. (CRYPTO 2018), Beullens and Wee (PKC 2019), and Bartusek et al. (EUROCRYPT 2019) to the setting of public-key function-private predicate encryption. We achieve an average-case notion of function privacy, which guarantees that a decryption key sk_f reveals nothing about f as long as f is drawn from a distribution with sufficient entropy. We formalize this security notion as a generalization of the (enhanced) real-or-random function privacy definition of Boneh, Raghunathan, and Segev (CRYPTO 2013). Our construction relies on bilinear groups, and we prove security in the generic bilinear group model.
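To make the predicate classes concrete (this sketches only the plaintext semantics, not the function-private scheme), a hidden-vector predicate matches an attribute position-by-position, with wildcards; a pattern with no wildcards is a point function:

```python
WILD = '*'

def hve_predicate(key_pattern, attribute):
    """Hidden-vector predicate: the key's pattern matches the attribute
    iff every non-wildcard position agrees. Per-position equality makes
    this a conjunction; no wildcards at all gives a point function."""
    return all(k == WILD or k == a for k, a in zip(key_pattern, attribute))

assert hve_predicate(['a', '*', 'c'], ['a', 'b', 'c'])       # conjunction holds
assert not hve_predicate(['a', '*', 'c'], ['a', 'b', 'd'])   # position 3 differs
assert hve_predicate(['x', 'y'], ['x', 'y'])                 # point function
```

Function privacy then demands that the key itself hide the non-wildcard values, which is what the small-superset machinery provides.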

    Hiding Symbols and Functions: New Metrics and Constructions for Information-Theoretic Security

    We present information-theoretic definitions and results for analyzing symmetric-key encryption schemes beyond the perfect secrecy regime, i.e. when perfect secrecy is not attained. We adopt two lines of analysis, one based on lossless source coding, and another akin to rate-distortion theory. We start by presenting a new information-theoretic metric for security, called symbol secrecy, and derive associated fundamental bounds. We then introduce list-source codes (LSCs), which are a general framework for mapping a key length (entropy) to a list size that an eavesdropper has to resolve in order to recover a secret message. We provide explicit constructions of LSCs, and demonstrate that, when the source is uniformly distributed, the highest level of symbol secrecy for a fixed key length can be achieved through a construction based on maximum distance separable (MDS) codes. Using an analysis related to rate-distortion theory, we then show how symbol secrecy can be used to determine the probability that an eavesdropper correctly reconstructs functions of the original plaintext. We illustrate how these bounds can be applied to characterize security properties of symmetric-key encryption schemes, and, in particular, extend security claims based on symbol secrecy to a functional setting. Comment: Submitted to IEEE Transactions on Information Theory.
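The list-source-code idea — a k-bit key leaves the eavesdropper a list of 2^k candidate messages to resolve — can be illustrated with a deliberately naive toy (this is not the paper's MDS-based construction; all parameters are made up):

```python
import itertools

def lsc_encrypt(message, key):
    """Toy list-source code over bits: XOR a k-bit key into the first
    k message bits. Without the key, the remaining uncertainty is
    exactly the 2^k choices of those masked bits."""
    k = len(key)
    head = ''.join(str(int(m) ^ int(s)) for m, s in zip(message[:k], key))
    return head + message[k:]

def eavesdropper_list(ciphertext, k):
    # every guess of the unknown key bits yields one candidate plaintext
    return {lsc_encrypt(ciphertext, ''.join(g))
            for g in itertools.product('01', repeat=k)}

ct = lsc_encrypt('110101', '10')
candidates = eavesdropper_list(ct, 2)
assert len(candidates) == 4      # list size 2^k for k = 2
assert '110101' in candidates    # the true message is on the list
```

The paper's MDS-based LSCs achieve the same key-length-to-list-size tradeoff while spreading secrecy uniformly over all symbols rather than only the first k.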

    Maintaining secrecy when information leakage is unavoidable

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 109-115). Sharing and maintaining long, random keys is one of the central problems in cryptography. This thesis is about ensuring the security of a cryptographic key when partial information about it has been, or must be, leaked to an adversary. We consider two basic approaches: 1. Extracting a new, shorter, secret key from one that has been partially compromised. Specifically, we study the use of noisy data, such as biometrics and personal information, as cryptographic keys. Such data can vary drastically from one measurement to the next. We would like to store enough information to handle these variations, without having to rely on any secure storage, in particular, without storing the key itself in the clear. We solve the problem by casting it in terms of key extraction. We give a precise definition of what "security" should mean in this setting, and design practical, general solutions with rigorous analyses. Prior to this work, no solutions were known with satisfactory provable security guarantees. 2. Ensuring that whatever is revealed is not actually useful. This is most relevant when the key itself is sensitive, for example, when it is based on a person's iris scan or Social Security Number.
    This second approach requires the user to have some control over exactly what information is revealed, but this is often the case: for example, if the user must reveal enough information to allow another user to correct errors in a corrupted key. How can the user ensure that whatever information the adversary learns is not useful to her? We answer by developing a theoretical framework for separating leaked information from useful information. Our definition strengthens the notion of entropic security, considered before in a few different contexts. We apply the framework to get new results, creating (a) encryption schemes with very short keys, and (b) hash functions that leak no information about their input, yet, paradoxically, allow testing if a candidate vector is close to the input. One of the technical contributions of this research is to provide new, cryptographic uses of mathematical tools from complexity theory known as randomness extractors. by Adam Davison Smith. Ph.D.
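The first approach — deriving a stable key from noisy readings without storing the reading itself — is commonly realized by a code-offset secure sketch. A minimal toy with a 5-bit repetition code (an illustrative simplification, not the thesis' actual parameters) looks like this:

```python
def encode(bit, n=5):
    return [bit] * n  # repetition code: n = 5 corrects up to 2 bit flips

def decode(word):
    return int(sum(word) > len(word) // 2)  # majority vote

def sketch(w, codeword):
    # public helper data: offset between the reading and a codeword
    return [a ^ b for a, b in zip(w, codeword)]

def recover(w_noisy, s):
    # shift the noisy reading back by the sketch, error-correct,
    # re-encode, and shift forward to get the enrollment reading
    shifted = [a ^ b for a, b in zip(w_noisy, s)]
    c = encode(decode(shifted))
    return [a ^ b for a, b in zip(c, s)]

w = [1, 0, 1, 1, 0]              # enrollment reading (made-up)
c = encode(1)                    # codeword (random in a real scheme)
s = sketch(w, c)                 # stored publicly; w is never stored
w_noisy = [1, 1, 1, 1, 0]        # one bit flipped at authentication
assert recover(w_noisy, s) == w  # the exact original reading is recovered
```

A randomness extractor is then applied to the recovered reading to produce a uniform key, which is where the thesis' extractor machinery enters.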

    Indistinguishability Obfuscation from Trilinear Maps and Block-Wise Local PRGs

    We consider the question of finding the lowest degree L for which L-linear maps suffice to obtain IO. The current state of the art (Lin, EUROCRYPT '16, CRYPTO '17; Lin and Vaikuntanathan, FOCS '16; Ananth and Sahai, EUROCRYPT '17) is that L-linear maps (under suitable security assumptions) suffice for IO, assuming the existence of pseudo-random generators (PRGs) with output locality L. However, these works cannot answer the question of whether L < 5 suffices, as no polynomial-stretch PRG with locality lower than 5 exists. In this work, we present a new approach that relies on the existence of PRGs with block-wise locality L, i.e., every output bit depends on at most L (disjoint) input blocks, each consisting of up to log λ input bits. We show that the existence of PRGs with block-wise locality is plausible for any L ≥ 3, and also provide:
    * A construction of a general-purpose indistinguishability obfuscator from L-linear maps and a subexponentially-secure PRG with block-wise locality L and polynomial stretch.
    * A construction of general-purpose functional encryption from L-linear maps and any slightly super-polynomially secure PRG with block-wise locality L and polynomial stretch.
    All our constructions are based on the SXDH assumption on L-linear maps and the subexponential Learning With Errors (LWE) assumption, and follow by instantiating our new generic bootstrapping theorems with Lin's recently proposed FE scheme (CRYPTO '17). Inherited from Lin's work, our security proof requires algebraic multilinear maps (Boneh and Silverberg, Contemporary Mathematics), whereas security when using noisy multilinear maps is based on a family of more complex assumptions that hold in the generic model.
    Our candidate PRGs with block-wise locality are based on Goldreich's local functions, and we show that the security of instantiations with block-wise locality L ≥ 3 is backed by similar validation as constructions with (conventional) locality 5. We complement this with hardness amplification techniques that further weaken the pseudorandomness requirements.
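A Goldreich-style local function computes each output bit by a fixed small predicate over a few seed positions chosen by a hypergraph. The toy below uses conventional (not block-wise) locality and an illustrative XOR-AND predicate; the seed and hypergraph are made-up examples, and real candidates need careful expander-like graphs:

```python
def goldreich_prg(seed_bits, hypergraph):
    """Toy Goldreich-style local function: each output bit applies a
    fixed d-local predicate to the d seed positions listed in one
    hyperedge. Predicate here: XOR of the first two chosen bits plus
    the AND of the remaining ones (an XOR-AND style choice)."""
    out = []
    for edge in hypergraph:
        bits = [seed_bits[i] for i in edge]
        and_part = 1
        for b in bits[2:]:
            and_part &= b
        out.append(bits[0] ^ bits[1] ^ and_part)
    return out

seed = [1, 0, 1, 1, 0, 1]
# each output bit reads exactly 5 seed positions (locality 5)
graph = [(0, 1, 2, 3, 4), (1, 2, 3, 4, 5), (0, 2, 3, 4, 5), (0, 1, 3, 4, 5)]
assert goldreich_prg(seed, graph) == [1, 1, 0, 1]
```

Block-wise locality replaces single seed positions with disjoint blocks of up to log λ bits, which is what lets the paper push the required multilinearity down to L ≥ 3.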

    Honey Encryption Beyond Message Recovery Security

    Juels and Ristenpart introduced honey encryption (HE) and showed how to achieve message recovery security even in the face of attacks that can exhaustively try all likely keys. This is important in contexts like password-based encryption where keys are very low entropy, and HE schemes based on the JR construction were subsequently proposed for use in password management systems and even long-term protection of genetic data. But message recovery security is, in this setting as in previous ones, a relatively weak property, and in particular does not prohibit an attacker from learning partial information about plaintexts or from usefully mauling ciphertexts. We show that one can build HE schemes that can hide partial information about plaintexts and that prevent mauling even in the face of exhaustive brute-force attacks. To do so, we introduce target-distribution semantic security and target-distribution non-malleability security notions, and prove that a slight variant of the JR HE construction can meet them. The proofs require new balls-and-bins type analyses significantly different from those used in prior work. Finally, we provide a formal proof of the folklore result that an unbounded adversary which obtains a limited number of encryptions of known plaintexts can always succeed at message recovery.
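The core JR idea is a distribution-transforming encoder (DTE): every key decrypts to some plausible message, so brute force yields no recognizable failure. A toy sketch in that spirit (the message list, key strings, and SHA-256-based pad are all illustrative assumptions, not the JR construction itself):

```python
import hashlib

# Toy message space; a real DTE encodes the actual message distribution
# (e.g. passwords or genetic data), possibly with non-uniform weights.
MESSAGES = ['alpha', 'bravo', 'charlie', 'delta']

def prf(key):
    # key-derived pad; stands in for a proper password-based KDF
    return int.from_bytes(hashlib.sha256(key.encode()).digest(), 'big')

def encrypt(msg, key):
    # DTE-encode the message as its index, then shift by the pad
    return (MESSAGES.index(msg) + prf(key)) % len(MESSAGES)

def decrypt(ct, key):
    return MESSAGES[(ct - prf(key)) % len(MESSAGES)]

ct = encrypt('charlie', 'correct horse')
assert decrypt(ct, 'correct horse') == 'charlie'
# A wrong key still yields a plausible message, never an error:
assert decrypt(ct, 'wrong guess') in MESSAGES
```

The paper's target-distribution notions strengthen this guarantee from "the attacker cannot pick out the whole message" to "the attacker learns no partial information and cannot usefully maul ciphertexts."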

    On the Power of Amortization in Secret Sharing: d-Uniform Secret Sharing and CDS with Constant Information Rate

    Consider the following secret-sharing problem. Your goal is to distribute a long file s between n servers such that (d-1)-subsets cannot recover the file, (d+1)-subsets can recover the file, and d-subsets should be able to recover s if and only if they appear in some predefined list L. How small can the information ratio (i.e., the number of bits stored on a server per each bit of the secret) be? We initiate the study of such d-uniform access structures, and view them as a useful scaled-down version of general access structures. Our main result shows that, for constant d, any d-uniform access structure admits a secret sharing scheme with a *constant* asymptotic information ratio of c_d that does not grow with the number of servers n. This result is based on a new construction of d-party Conditional Disclosure of Secrets (Gertner et al., JCSS '00) for arbitrary predicates over n-size domains in which each party communicates at most four bits per secret bit. In both settings, previous results achieved non-constant information ratio which grows asymptotically with n even for the simpler (and widely studied) special case of d=2. Moreover, our results provide a unique example for a natural class of access structures F that can be realized with information rate smaller than its bit-representation length log |F| (i.e., Ω(d log n) for d-uniform access structures), showing that amortization can beat the representation size barrier. Our main result applies to exponentially long secrets, and so it should be mainly viewed as a barrier against amortizable lower-bound techniques. We also show that in some natural simple cases (e.g., low-degree predicates), amortization kicks in even for quasi-polynomially long secrets. Finally, we prove some limited lower-bounds, point out some limitations of existing lower-bound techniques, and describe some applications to the setting of private simultaneous messages.
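For readers unfamiliar with CDS, a standard two-party example (the textbook equality-predicate protocol, not the paper's d-party construction) conveys the model: parties with shared randomness each send one message, and the referee, who sees both inputs, recovers the secret iff the predicate holds:

```python
import random

P = 2**61 - 1  # prime modulus

def cds_equality(x, y, secret, rng):
    """Two-party CDS for the equality predicate. Alice holds x, Bob
    holds y, and they share randomness (r, u). The referee computes
    the difference of their messages: secret + r*(x - y) mod P, which
    equals the secret iff x == y and is uniformly masked otherwise
    (r is uniform and nonzero)."""
    r, u = rng.randrange(1, P), rng.randrange(P)
    msg_alice = (secret + r * x + u) % P
    msg_bob = (r * y + u) % P
    return (msg_alice - msg_bob) % P  # referee's reconstruction

rng = random.Random(1)
assert cds_equality(7, 7, secret=42, rng=rng) == 42
assert cds_equality(7, 8, secret=42, rng=rng) != 42  # masked by r*(x - y)
```

The paper's contribution is making the per-secret-bit communication of such protocols constant (at most four bits per party) for arbitrary predicates, which then drives the constant information ratio for d-uniform secret sharing.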

    An Alternative View of the Graph-Induced Multilinear Maps

    In this paper, we view multilinear maps through the lens of ``homomorphic obfuscation''. Specifically, we show how to homomorphically obfuscate the kernel-test and affine subspace-test functionalities of high-dimensional matrices. Namely, the evaluator is able to perform additions and multiplications over the obfuscated matrices, and test subspace memberships on the result. The homomorphic operations are constrained by the prescribed data structure, e.g. a tree or a graph, where the matrices are stored. The security properties of all the constructions are based on the hardness of the Learning With Errors (LWE) problem. The technical heart is to ``control'' the ``chain reactions'' over a sequence of LWE instances. Viewing the homomorphic obfuscation scheme from a different angle, it coincides with the graph-induced multilinear maps proposed by Gentry, Gorbunov and Halevi (GGH15). Our proof technique recognizes several ``safe'' modes of GGH15 that were not known before, including a simple special case: if the graph is acyclic and the matrices are sampled independently from binary or error distributions, then the encodings of the matrices are pseudorandom.
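The kernel-test functionality in the clear (what the obfuscated program is meant to compute, shown here without any LWE machinery; the matrix and field size are made-up examples) is simply: given a secret matrix M, test whether M v = 0, i.e. whether v lies in a hidden subspace:

```python
P = 101  # small prime field for illustration

def matvec(M, v, p=P):
    return [sum(a * b for a, b in zip(row, v)) % p for row in M]

def kernel_test(M, v):
    """True iff v is in the kernel of M mod p, i.e. M @ v = 0."""
    return all(c == 0 for c in matvec(M, v))

M = [[1, 0, 100],   # 100 = -1 mod 101, so the kernel of M
     [0, 1, 100]]   # is the line spanned by (1, 1, 1)
assert kernel_test(M, [1, 1, 1])
assert kernel_test(M, [2, 2, 2])        # scalar multiples stay in the kernel
assert not kernel_test(M, [1, 2, 1])
```

The paper's point is that GGH15 encodings let an evaluator run this test on matrices it can add and multiply but never see, with security reduced to LWE.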