
    On the efficiency of revocation in RSA-based anonymous systems

    The problem of revocation in anonymous authentication systems is subtle and has motivated a lot of work. One of the preferable solutions consists in maintaining either a whitelist L_W of non-revoked users or a blacklist L_B of revoked users, and then requiring users to additionally prove, when authenticating themselves, that they are in L_W (membership proof) or that they are not in L_B (non-membership proof). Of course, these additional proofs must not break the anonymity properties of the system, so they must be zero-knowledge proofs, revealing nothing about the identity of the users. In this paper, we focus on the RSA-based setting, and we consider the case of non-membership proofs with respect to blacklists L = L_B. The existing solutions for this setting rely on universal dynamic accumulators; the underlying zero-knowledge proofs are a bit complicated, and thus their efficiency, although independent of the size of the blacklist L, seems improvable. Peng and Bao already tried to propose simpler and more efficient zero-knowledge proofs for this setting, but we prove in this paper that their protocol is not secure. We fix the problem by designing a new protocol, and we formally prove its security properties. We then compare the efficiency of the new zero-knowledge non-membership protocol with that of the accumulator-based one, when they are integrated into anonymous authentication systems based on RSA (notably, the IBM product Idemix for anonymous credentials). We discuss for which values of the size k of the blacklist L one protocol is preferable to the other, and we propose different ways to combine and implement the two protocols.
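    For background, the sketch below shows how non-membership witnesses work in an RSA universal dynamic accumulator (in the style of the constructions this line of work builds on). The parameters, names and blacklist are illustrative assumptions, and the zero-knowledge layer that is the paper's actual subject is omitted: here the witness is revealed in the clear.

```python
# Toy RSA universal accumulator (illustrative parameters only; in practice N is
# a large RSA modulus whose factorization nobody knows, and values are primes).
N = 3259 * 3499
g = 4  # a quadratic residue mod N

def egcd(a, b):
    """Extended gcd: returns (d, x, y) with a*x + b*y = d."""
    if b == 0:
        return a, 1, 0
    d, x, y = egcd(b, a % b)
    return d, y, x - (a // b) * y

blacklist = [3, 5, 11]          # revoked (prime) values
u = 1
for x in blacklist:
    u *= x
acc = pow(g, u, N)              # accumulator value for the blacklist

def nonmembership_witness(x_hat):
    """For x_hat coprime to u (i.e., not revoked), Bezout gives a*u + b*x_hat = 1."""
    d, a, b = egcd(u, x_hat)
    assert d == 1, "x_hat is revoked"
    return a, b

def verify(x_hat, a, b):
    """Check acc^a * g^(b*x_hat) == g (mod N); holds iff a*u + b*x_hat = 1."""
    def mpow(base, e):
        return pow(base, e, N) if e >= 0 else pow(pow(base, -1, N), -e, N)
    return (mpow(acc, a) * mpow(g, b * x_hat)) % N == g % N

a, b = nonmembership_witness(7)
print(verify(7, a, b))          # True: 7 is not in the blacklist
```

    The protocols discussed in the paper prove knowledge of such a pair (a, b) in zero knowledge, so that the verifier is convinced without learning the user's value.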

    Signcryption schemes with threshold unsigncryption, and applications

    The goal of a signcryption scheme is to achieve the same functionalities as encryption and signature together, but in a more efficient way than encrypting and signing separately. To increase security and reliability in some applications, the unsigncryption phase can be distributed among a group of users, through a (t, n)-threshold process. In this work we consider this task of threshold unsigncryption, which has received very little attention in the cryptographic literature up to now (maybe surprisingly, given its potential applications). First we describe in detail the security requirements that a scheme for such a task should satisfy: existential unforgeability and indistinguishability, under insider chosen message/ciphertext attacks, in a multi-user setting. Then we show that generic constructions of signcryption schemes (obtained by combining encryption and signature schemes) do not offer this level of security in the scenario of threshold unsigncryption. For this reason, we propose two new protocols for threshold unsigncryption, which we prove to be secure, one in the random oracle model and one in the standard model. The two proposed schemes enjoy an additional property that can be very useful. Namely, the unsigncryption protocol can be divided into two phases: a first one where the authenticity of the ciphertext is verified, maybe by a single party; and a second one where the ciphertext is decrypted by a subset of t receivers, without using the identity of the sender. As a consequence, the schemes can be used in applications requiring some level of anonymity, such as electronic auctions.
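    The (t, n)-threshold ingredient behind distributed unsigncryption is secret sharing. The following is a minimal Shamir sketch over a prime field, purely illustrative: the paper's protocols apply the threshold idea to decryption, together with ciphertext-validity checking.

```python
import random

# Minimal Shamir (t, n) sharing over a prime field.
P = 2**61 - 1  # a Mersenne prime, used as the field size

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * xj % P
                den = den * (xj - xi) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[1:4]) == 123456789
```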

    Attribute-based encryption implies identity-based encryption

    In this study, the author formally proves that designing attribute-based encryption schemes cannot be easier than designing identity-based encryption schemes. In more detail, it is shown how an attribute-based encryption scheme which admits, at least, AND policies can be combined with a collision-resistant hash function to obtain an identity-based encryption scheme. Even if this result may seem natural, not surprising at all, it has not been explicitly written anywhere, as far as the author knows. Furthermore, it may be an unknown result for some people: Odelu et al. in 2016 and 2017 proposed both an attribute-based encryption scheme in the discrete logarithm setting, without bilinear pairings, and an attribute-based encryption scheme in the RSA setting, both admitting AND policies. If these schemes were secure, then by using the implication proved in this study, one would obtain secure identity-based encryption schemes in both the RSA and the discrete logarithm settings, without bilinear pairings, which would be a breakthrough in the area. Unfortunately, the author presents here complete attacks on the two schemes proposed by Odelu et al.
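    The implication can be pictured concretely. The sketch below (illustrative naming only, not the paper's notation) hashes an identity with a collision-resistant hash, turns the digest bits into attributes, and forms the AND-policy over them: an ABE key for the bit-attributes of an identity then plays the role of an IBE key for that identity, and collision resistance rules out two identities sharing the same attribute set.

```python
import hashlib

K = 256  # digest length in bits

def identity_to_attributes(identity: str):
    """IBE key for `identity` = ABE key for the attribute set {bit_i = b_i}."""
    digest = hashlib.sha256(identity.encode()).digest()
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(K)]
    return {f"bit{i}={b}" for i, b in enumerate(bits)}

def identity_to_policy(identity: str):
    """IBE encryption to `identity` = ABE encryption under this AND-policy."""
    return " AND ".join(sorted(identity_to_attributes(identity)))

alice_policy_attrs = identity_to_attributes("alice@example.com")
bob_key = identity_to_attributes("bob@example.com")
assert not alice_policy_attrs <= bob_key     # Bob's key fails Alice's AND-policy
assert alice_policy_attrs <= identity_to_attributes("alice@example.com")
```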

    Ideal homogeneous access structures constructed from graphs

    Starting from a new relation between graphs and secret sharing schemes introduced by Xiao, Liu and Zhang, we show a method to construct more general ideal homogeneous access structures. The method has some advantages: it efficiently gives an ideal homogeneous access structure for the desired rank, and some conditions can be imposed (such as forbidden or necessary subsets of players), although the exact composition of the resulting access structure cannot be fully controlled. The number of homogeneous access structures that can be constructed in this way is quite limited; for example, we show that (t, l)-threshold access structures can be constructed from a graph only when t = 1, t = l - 1 or t = l.

    On the computational security of a distributed key distribution scheme

    In a distributed key distribution scheme, a set of servers helps a set of users in a group to securely obtain a common key. Security means that an adversary who corrupts some servers and some users has no information about the key of a non-corrupted group. In this work, we formalize the security analysis of one such scheme [11], which was not considered in the original proposal. We prove the scheme is secure in the random oracle model, assuming that the Decisional Diffie-Hellman (DDH) problem is hard to solve. We also detail a possible modification of that scheme and of the one in [24] which allows us to prove the security of the schemes without assuming that a specific hash function behaves as a random oracle. As usual, this improvement in the security of the schemes comes at the cost of an efficiency loss.
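    As intuition for the DDH-plus-random-oracle setting, here is a toy distributed key derivation in the spirit of Naor-Pinkas-Reingold distributed PRFs: the group key is a hash of H1(group_id)^s, where s is Shamir-shared among the servers, so any t servers can produce the key without ever reconstructing s. This is an assumption-laden sketch for illustration, not the exact scheme of [11] or [24].

```python
import hashlib, random

p, q = 2579, 1289          # toy group: order-q subgroup of Z_p^*, p = 2q + 1
t, n = 2, 3

def h1(group_id: str) -> int:
    """Hash a group identifier into the order-q subgroup (by squaring mod p)."""
    h = int.from_bytes(hashlib.sha256(group_id.encode()).digest(), "big")
    return pow(h % p, 2, p)

# Shamir-share the master key s over Z_q with a degree-(t-1) polynomial
s = random.randrange(1, q)
a1 = random.randrange(q)
shares = {i: (s + a1 * i) % q for i in range(1, n + 1)}

def partial(i, group_id):
    """Server i's contribution: H1(group_id)^{s_i} mod p."""
    return pow(h1(group_id), shares[i], p)

def combine(parts, group_id):
    """Lagrange interpolation in the exponent, then hash to get the group key."""
    key = 1
    ids = list(parts)
    for i in ids:
        lam = 1
        for j in ids:
            if j != i:
                lam = lam * j % q * pow((j - i) % q, -1, q) % q
        key = key * pow(parts[i], lam, p) % p
    return hashlib.sha256(str(key).encode()).hexdigest()

gid = "group-42"
k = combine({1: partial(1, gid), 3: partial(3, gid)}, gid)
assert k == hashlib.sha256(str(pow(h1(gid), s, p)).encode()).hexdigest()
```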

    New results and applications for multi-secret sharing schemes

    In a multi-secret sharing scheme (MSSS), different secrets are distributed among the players in some set, each one according to a (possibly different) access structure. The trivial solution to this problem is to run independent instances of a standard secret sharing scheme, one for each secret. In this solution, the length of the secret share to be stored by each player grows linearly with the number of secrets (when keeping all other parameters fixed). Multi-secret sharing schemes have been studied by the cryptographic community mostly from a theoretical perspective: different models and definitions have been proposed, for both unconditional (information-theoretic) and computational security. In the case of unconditional security, there are two different definitions. It has been proved that, for some particular cases of access structures that include the threshold case, a MSSS with the strongest level of unconditional security must have shares with length linear in the number of secrets. Therefore, the optimal solution in this case is equivalent to the trivial one. In this work we prove that, even for a more relaxed notion of unconditional security, and for some kinds of access structures (in particular, threshold ones), we have the same efficiency problem: the length of each secret share must grow linearly with the number of secrets. Since we want more efficient solutions, we move to the scenario of MSSSs with computational security. We propose a new MSSS, where each secret share has constant length (just one element), and we formally prove its computational security in the random oracle model. To the best of our knowledge, this is the first formal analysis of the computational security of a MSSS. We show the utility of the new MSSS by using it as a key ingredient in the design of two schemes for two new functionalities: multi-policy signatures and multi-policy decryption. We prove the security of these two new multi-policy cryptosystems in a formal security model. The two new primitives provide similar functionalities as attribute-based cryptosystems, with some advantages and some drawbacks that we discuss at the end of this work.
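    To see how a random oracle can buy constant-length shares, consider the following toy construction (an illustrative assumption, not necessarily the scheme proposed in the paper): a single random master value is shared once, and every secret is published masked by a hash of that value, so each player's storage does not grow with the number of secrets. For brevity the sketch uses n-out-of-n additive sharing.

```python
import hashlib, secrets

P = 2**61 - 1
n = 4                                    # players; n-of-n additive sharing for brevity

def H(j, r):
    """Random-oracle-style mask for secret index j."""
    return int.from_bytes(hashlib.sha256(f"{j}|{r}".encode()).digest(), "big")

# deal: one constant-size share per player, independent of how many secrets exist
shares = [secrets.randbelow(P) for _ in range(n - 1)]
r = secrets.randbelow(P)
shares.append((r - sum(shares)) % P)     # shares sum to the master value r

# publish: one public ciphertext per secret
secret_values = [1111, 2222, 3333]
public = [s ^ (H(j, r) % 2**64) for j, s in enumerate(secret_values)]

# reconstruct: pool the shares, recompute the masks
r2 = sum(shares) % P
assert [c ^ (H(j, r2) % 2**64) for j, c in enumerate(public)] == secret_values
```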

    Shorter lattice-based zero-knowledge proofs for the correctness of a shuffle

    In an electronic voting procedure, mix networks are used to ensure anonymity of the cast votes. Each node of the network re-encrypts the input list of ciphertexts and randomly permutes it, in a process named shuffle, and must prove (in zero-knowledge) that the process was applied honestly. To maintain the security of such a process in a post-quantum scenario, new proofs are based on different mathematical assumptions, such as lattice-based problems. Nonetheless, the best lattice-based protocols to ensure verifiable shuffling have communication complexity linear in N, the number of shuffled ciphertexts. In this paper we propose the first sub-linear (in N) post-quantum zero-knowledge argument for the correctness of a shuffle, for which we have mainly used two ideas: arithmetic circuit satisfiability results from Baum et al. (CRYPTO 2018) and Beneš networks to model a permutation of N elements. The achieved communication complexity of our protocol with respect to N is O(√N · log²(N)), but we also highlight its dependency on other important parameters of the underlying lattice ingredients.
    The work is partially supported by the Spanish Ministerio de Ciencia e Innovación (MICINN), under Project PID2019-109379RB-I00, and by the European Union PROMETHEUS project (Horizon 2020 Research and Innovation Program, grant 780701). The authors thank Tjerand Silde for pointing out an incorrect set of parameters (Section 4.1) proposed in a previous version of the manuscript.
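    The role of the Beneš network is to represent an arbitrary permutation of N = 2^k elements with roughly N·log2(N) one-bit-controlled swap gates, which is what makes a succinct statement about a shuffle possible. The toy sketch below (illustrative code, unrelated to the lattice machinery) applies such a network and brute-force checks that, for N = 4, every permutation is realizable by some setting of its 6 control bits.

```python
from itertools import product, permutations

def apply_benes(xs, bits):
    """Apply a Benes network to wires xs, consuming one control bit per switch.
    Returns (output wires, unused bits)."""
    n = len(xs)
    if n == 2:
        return ([xs[1], xs[0]] if bits[0] else list(xs)), bits[1:]
    ys = list(xs)
    for i in range(n // 2):                  # input layer of n/2 switches
        if bits[i]:
            ys[2*i], ys[2*i+1] = ys[2*i+1], ys[2*i]
    bits = bits[n // 2:]
    top, bits = apply_benes(ys[0::2], bits)  # two half-size subnetworks
    bot, bits = apply_benes(ys[1::2], bits)
    ys[0::2], ys[1::2] = top, bot
    for i in range(n // 2):                  # output layer of n/2 switches
        if bits[i]:
            ys[2*i], ys[2*i+1] = ys[2*i+1], ys[2*i]
    return ys, bits[n // 2:]

n_switches = 6                               # for N = 4: N*log2(N) - N/2 = 6
reached = {tuple(apply_benes([0, 1, 2, 3], list(c))[0])
           for c in product([0, 1], repeat=n_switches)}
assert reached == set(permutations(range(4)))   # all 24 permutations realizable
```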

    Revisiting distance-based record linkage for privacy-preserving release of statistical datasets

    Statistical Disclosure Control (SDC, for short) studies the problem of privacy-preserving data publishing in cases where the data is expected to be used for statistical analysis. An original dataset T containing sensitive information is transformed into a sanitized version T' which is released to the public. Both utility and privacy aspects are very important in this setting. For utility, T' must allow data miners or statisticians to obtain similar results to those which would have been obtained from the original dataset T. For privacy, T' must significantly reduce the ability of an adversary to infer sensitive information on the data subjects in T. One of the main a posteriori measures that the SDC community has considered up to now when analyzing the privacy offered by a given protection method is the Distance-Based Record Linkage (DBRL) risk measure. In this work, we argue that the classical DBRL risk measure is insufficient. For this reason, we introduce the novel Global Distance-Based Record Linkage (GDBRL) risk measure. We claim that this new measure must be evaluated alongside the classical DBRL measure in order to better assess the risk of publishing T' instead of T. After that, we describe how this new measure can be computed by the data owner and discuss the scalability of those computations. We conclude with extensive experimentation, where we compare the risk assessments offered by our novel measure and by the classical one, using well-known SDC protection methods. Those experiments validate our hypothesis that the GDBRL risk measure issues, in many cases, higher risk assessments than the classical DBRL measure. In other words, relying solely on the classical DBRL measure for risk assessment might be misleading, as the true risk may in fact be higher. Hence, we strongly recommend that the SDC community consider the new GDBRL risk measure as an additional measure when analyzing the privacy offered by SDC protection algorithms.
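    As a reference point, the classical DBRL measure can be computed as follows: link every original record to its nearest released record and report the fraction of correct matches. The data and noise model below are illustrative assumptions, standing in for a real SDC protection method.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(100, 5))                        # original records
T_prime = T + rng.normal(scale=0.3, size=T.shape)    # noise-added release

# pairwise Euclidean distances between original and sanitized records
d = np.linalg.norm(T[:, None, :] - T_prime[None, :, :], axis=2)
links = d.argmin(axis=1)            # nearest sanitized record for each original
dbrl_risk = (links == np.arange(len(T))).mean()
print(f"DBRL risk (fraction correctly linked): {dbrl_risk:.2f}")
```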

    Public-key homomorphic encryption based on Quadratic Residuosity

    Public-key encryption schemes with homomorphic properties have many uses in real applications. Among the existing schemes with additive homomorphic properties, there is a family (from the Goldwasser-Micali scheme to the Paillier scheme) whose security is based on computationally hard problems related to the problem of factoring a large number N. The schemes in this family have different properties, both regarding efficiency and regarding the concrete number-theoretic problem on which their security is based. In this article we propose a new scheme to add to this family. The computational assumption on which the security of our scheme is based is the Quadratic Residuosity assumption modulo N. In terms of efficiency, on the one hand our scheme improves on all previous schemes whose security is based on the d-th Residuosity assumption modulo N, for d ≥ 2; on the other hand, our scheme is in general less efficient (in decryption time) than some schemes such as Paillier's, whose security is based on another assumption (N-th Residuosity modulo N²). However, if the messages to be encrypted are short, the efficiency of our scheme is essentially the same as that of Paillier's scheme.
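    The founding member of this family, the Goldwasser-Micali scheme, illustrates the quadratic-residuosity setting well. Below is a toy sketch (small illustrative primes; real moduli are thousands of bits): a bit b is encrypted as y^b · r² mod N, where y is a pseudosquare, and multiplying ciphertexts adds the plaintext bits modulo 2.

```python
import random
from math import gcd

p, q = 499, 547
N = p * q

def legendre(a, pr):
    return pow(a, (pr - 1) // 2, pr)    # 1 if residue mod pr, pr-1 if not

# a pseudosquare: Jacobi symbol +1 mod N, but a non-residue mod both primes
y = next(a for a in range(2, N)
         if legendre(a, p) == p - 1 and legendre(a, q) == q - 1)

def enc(bit):
    r = random.randrange(2, N)
    while gcd(r, N) != 1:
        r = random.randrange(2, N)
    return (pow(y, bit, N) * pow(r, 2, N)) % N

def dec(c):
    # with the factorization, test quadratic residuosity mod p
    return 0 if legendre(c % p, p) == 1 else 1

assert dec(enc(0)) == 0 and dec(enc(1)) == 1
c = enc(1) * enc(1) % N
assert dec(c) == 0                      # 1 XOR 1 = 0: additive modulo 2
```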

    How to avoid repetitions in lattice-based deniable zero-knowledge proofs

    Interactive zero-knowledge systems are a very important cryptographic primitive, used in many applications, especially when deniability (also known as non-transferability) is desired. In the lattice-based setting, the currently most efficient interactive zero-knowledge systems employ the technique of rejection sampling, which implies that the interaction does not always finish correctly on the first execution; the whole interaction must be re-run until no abort happens. While repetitions due to aborts are acceptable in theory, in some practical applications it is desirable to avoid re-runs for usability reasons. In this work we present a generic technique that starts from an interactive zero-knowledge system (which might require multiple re-runs to complete the protocol) and obtains a 3-move zero-knowledge system (without re-runs). The transformation combines the well-known Fiat-Shamir technique with a couple of initially exchanged messages. The resulting 3-move system enjoys honest-verifier zero-knowledge and can easily be turned into a fully deniable proof using standard techniques. We show some practical scenarios where our transformation can be beneficial, and we also discuss the results of an implementation of our transformation.
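    The Fiat-Shamir technique at the heart of the transformation replaces the verifier's random challenge with a hash of the prover's first message. The sketch below illustrates it on a Schnorr discrete-logarithm proof, a toy stand-in for intuition only: the paper works in the lattice setting, where rejection sampling is what causes the aborts being removed.

```python
import hashlib, random

p, q = 2579, 1289                   # toy group: order-q subgroup of Z_p^*
g = 4                               # element of order q
x = random.randrange(1, q)          # prover's witness
h = pow(g, x, p)                    # public statement: h = g^x

def fs_challenge(*elems):
    """Fiat-Shamir: derive the challenge by hashing the transcript so far."""
    data = "|".join(map(str, elems)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# prove non-interactively: commitment, hash-derived challenge, response
r = random.randrange(q)
a = pow(g, r, p)
c = fs_challenge(g, h, a)
z = (r + c * x) % q

# verify: g^z == a * h^c (mod p)
assert pow(g, z, p) == a * pow(h, c, p) % p
```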