Still Wrong Use of Pairings in Cryptography
Several pairing-based cryptographic protocols have recently been proposed with a wide variety of novel applications, including ones in emerging technologies such as cloud computing, the internet of things (IoT), e-health systems, and wearable technologies. There has, however, been a wide range of incorrect uses of these primitives. The paper of Galbraith, Paterson, and Smart (2006) pointed out most of the issues related to the incorrect use of pairing-based cryptography. However, we noticed that some recently proposed applications still do not use these primitives correctly. This leads to unrealizable, insecure, or overly inefficient designs of pairing-based protocols. We observed that one reason is a lack of awareness of the recent advances in solving discrete logarithm problems in some groups. The main purpose of this article is to give understandable, informative, and up-to-date criteria for the correct use of pairing-based cryptography. We thereby deliberately avoid most of the technical details and instead place special emphasis on the importance of the correct use of bilinear maps in realizing secure cryptographic protocols. We list a collection of recent papers with wrong security assumptions or realizability/efficiency issues. Finally, we give a compact and up-to-date recipe for the correct use of pairings.
Cryptographic Assumptions: A Position Paper
The mission of theoretical cryptography is to define and construct provably secure cryptographic protocols and schemes. Without proofs of security, cryptographic constructs offer no guarantees whatsoever and no basis for evaluation and comparison. As most security proofs necessarily come in the form of a reduction between the security claim and an intractability assumption, such proofs are ultimately only as good as the assumptions they are based on. Thus, the complexity implications of every assumption we utilize should be of significant substance, and serve as the yardstick for the value of our proposals.
Lately, the field of cryptography has seen a sharp increase in the number of new assumptions that are often complex to define and difficult to interpret. At times, these assumptions are hard to untangle from the constructions which utilize them.
We believe that the lack of standards of what is accepted as a reasonable cryptographic assumption can be harmful to the credibility of our field. Therefore, there is a great need for measures according to which we classify and compare assumptions, as to which are safe and which are not.
In this paper, we propose such a classification and review recently suggested assumptions in this light. This follows the footsteps of Naor (Crypto 2003).
Our governing principle is relying on hardness assumptions that are independent of the cryptographic constructions.
Replacing Probability Distributions in Security Games via Hellinger Distance
Security of cryptographic primitives is usually proved by assuming "ideal" probability distributions. We need to replace them with approximated "real" distributions in real-world systems without losing the security level. We demonstrate that the Hellinger distance is useful for this problem, whereas the statistical distance is mainly used in the cryptographic literature. First, we show that for preserving λ-bit security of a given security game, closeness of 2^{-λ/2} to the ideal distribution is sufficient for the Hellinger distance, whereas 2^{-λ} is generally required for the statistical distance. The result can be applied to both search and decision primitives through the bit security framework of Micciancio and Walter (Eurocrypt 2018). We also show that the Hellinger distance gives a tighter evaluation of closeness than the max-log distance when the distance is small. Finally, we show that the leftover hash lemma can be strengthened to the Hellinger distance. Namely, a universal family of hash functions gives a strong randomness extractor with optimal entropy loss for the Hellinger distance. Based on these results, a λ-bit entropy loss in randomness extractors is sufficient for preserving λ-bit security. The current understanding based on the statistical distance is that a 2λ-bit entropy loss is necessary.
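The asymmetry between the two metrics can be checked numerically. The sketch below (plain Python using the standard textbook definitions of the two distances, not code from the paper) verifies the well-known relation H² ≤ Δ ≤ √2·H on a slightly biased distribution:

```python
import math

def statistical_distance(p, q):
    """Total variation distance: (1/2) * sum |p_i - q_i|."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def hellinger_distance(p, q):
    """Hellinger distance: sqrt( (1/2) * sum (sqrt(p_i) - sqrt(q_i))^2 )."""
    return math.sqrt(0.5 * sum((math.sqrt(pi) - math.sqrt(qi)) ** 2
                               for pi, qi in zip(p, q)))

# An "ideal" uniform distribution and a slightly biased "real" one
# (hypothetical numbers, chosen only for illustration).
ideal = [0.25, 0.25, 0.25, 0.25]
real  = [0.2501, 0.2499, 0.2502, 0.2498]

sd = statistical_distance(ideal, real)
hd = hellinger_distance(ideal, real)

# Standard relation: H^2 <= SD <= sqrt(2) * H.  For small distances this is
# why requiring Hellinger closeness 2^(-lam/2) is comparable to requiring
# statistical closeness 2^(-lam), matching the asymmetry in the abstract.
assert hd ** 2 <= sd <= math.sqrt(2) * hd
print(f"SD = {sd:.2e}, H = {hd:.2e}")
```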
A Simple Obfuscation Scheme for Pattern-Matching with Wildcards
We give a simple and efficient method for obfuscating pattern matching with wildcards. In other words, we construct a way to check an input against a secret pattern, which is described in terms of prescribed values interspersed with unconstrained "wildcard" slots. As long as the support of the pattern is sufficiently sparse and the pattern itself is chosen from an appropriate distribution, we prove that a polynomial-time adversary cannot find a matching input, except with negligible probability. We rely upon the generic group heuristic (in a regular group, with no multilinearity). Previous work provided less efficient constructions based on multilinear maps or LWE.
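The correctness logic behind such a matcher can be illustrated with a toy additive sketch. The following is emphatically not the paper's construction — it works over Z_p in the clear rather than in the exponent of a generic group, so it provides no hiding on its own — but it shows how inputs that match the pattern select values summing to zero:

```python
import secrets

P = 2 ** 61 - 1  # a prime modulus; stands in for the group order

def obfuscate(pattern):
    """Toy additive encoding of a pattern over {'0','1','*'}.
    Matching inputs select one value per position summing to 0 mod P;
    in a group-based scheme these values would live in the exponent."""
    n = len(pattern)
    # Random shares s_i that sum to 0 mod P.
    s = [secrets.randbelow(P) for _ in range(n - 1)]
    s.append((-sum(s)) % P)
    table = []
    for i, c in enumerate(pattern):
        if c == '*':                      # wildcard: both bits get the share
            table.append((s[i], s[i]))
        elif c == '0':                    # fixed 0: bit 1 gets garbage
            table.append((s[i], secrets.randbelow(P)))
        else:                             # fixed 1: bit 0 gets garbage
            table.append((secrets.randbelow(P), s[i]))
    return table

def check(table, x):
    """Input x (a bit string) matches iff the selected values sum to 0."""
    return sum(row[int(b)] for row, b in zip(table, x)) % P == 0

tbl = obfuscate("1*0*1")
assert check(tbl, "10001") and check(tbl, "11011")
assert not check(tbl, "00001")  # wrong first bit: a random value spoils the sum
```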
Circuit-ABE from LWE: Unbounded Attributes and Semi-adaptive Security
We construct an LWE-based key-policy attribute-based encryption (ABE) scheme that supports attributes of unbounded polynomial length. Namely, the size of the public parameters is a fixed polynomial in the security parameter and a depth bound, and with these fixed-length parameters, one can encrypt attributes of arbitrary length. Similarly, any polynomial-size circuit that adheres to the depth bound can be used as the policy circuit regardless of its input length (recall that a depth-d circuit can have as many as 2^d inputs). This is in contrast to previous LWE-based schemes, where the length of the public parameters has to grow linearly with the maximal attribute length.
We prove that our scheme is semi-adaptively secure, namely, the adversary can choose the challenge attribute after seeing the public parameters (but before any decryption keys). Previous LWE-based constructions were only able to achieve selective security. (We stress that the “complexity leveraging” technique is not applicable for unbounded attributes).
We believe that our techniques are of interest at least as much as our end result. Fundamentally, selective security and bounded attributes are both shortcomings that arise out of the current LWE proof techniques that program the challenge attributes into the public parameters. The LWE toolbox we develop in this work allows us to delay this programming. In a nutshell, the new tools include a way to generate an a-priori unbounded sequence of LWE matrices, and to have fine-grained control over which trapdoor is embedded in each and every one of them, all with a succinct representation.
National Science Foundation (U.S.) (Award CNS-1350619); National Science Foundation (U.S.) (Grant CNS-1413964); United States-Israel Binational Science Foundation (Grant 712307)
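The "succinct representation" aspect — a short seed standing for an a-priori unbounded sequence of public matrices — can be sketched as follows. This toy (hypothetical parameters, SHAKE-128 as the expander, no trapdoors, and modulo bias left unhandled) is not the paper's mechanism, only an illustration of how fixed-size parameters can yield matrices for any index:

```python
import hashlib
import numpy as np

Q = 3329          # toy modulus; N, M are toy dimensions
N, M = 4, 8

def derive_matrix(seed: bytes, index: int):
    """Expand (seed, index) into an N x M matrix mod Q via SHAKE-128.
    A short seed thus succinctly represents an unbounded matrix sequence.
    (Toy: reduction mod Q has slight bias; rejection sampling omitted.)"""
    xof = hashlib.shake_128(seed + index.to_bytes(8, "big"))
    raw = xof.digest(2 * N * M)
    vals = [int.from_bytes(raw[2 * k:2 * k + 2], "big") % Q
            for k in range(N * M)]
    return np.array(vals, dtype=np.int64).reshape(N, M)

seed = b"public-parameters-seed"   # hypothetical fixed-size public parameter
A0 = derive_matrix(seed, 0)
A1 = derive_matrix(seed, 1)
assert A0.shape == (4, 8) and not np.array_equal(A0, A1)
assert np.array_equal(A0, derive_matrix(seed, 0))  # deterministic per index
```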
The Design Space of Lightweight Cryptography
For constrained devices, standard cryptographic algorithms can be too big, too slow, or too energy-consuming. The area of lightweight cryptography studies new algorithms to overcome these problems. In this paper, we focus on symmetric-key encryption, authentication, and hashing. Instead of providing a full overview of this area of research, we highlight three interesting topics. Firstly, we explore the generic security of lightweight constructions. In particular, we discuss considerations for key, block, and tag sizes, and explore the topic of instantiating a pseudorandom permutation (PRP) with a non-ideal block cipher construction. This is inspired by the increasing prevalence of lightweight designs that are not secure against related-key attacks, such as PRINCE, PRIDE, or Chaskey. Secondly, we explore the efficiency of cryptographic primitives. In particular, we investigate the impact on efficiency when the input size of a primitive doubles. Lastly, we provide some considerations for cryptographic design. We observe that applications do not always use cryptographic algorithms as they were intended, which negatively impacts the security and/or efficiency of the resulting implementations.
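The block-size consideration can be made concrete with the standard birthday-bound approximation (a textbook estimate, not taken from the paper): in a mode with birthday-bound security, a 64-bit block cipher is already likely to leak a collision after about 2^32 blocks (32 GiB), while a 128-bit block keeps the probability negligible.

```python
import math

def collision_probability(num_blocks: int, block_bits: int) -> float:
    """Birthday bound: probability of a block collision among q random
    b-bit blocks, approximately 1 - exp(-q(q-1) / 2^(b+1))."""
    q, b = num_blocks, block_bits
    # expm1 keeps precision when the exponent is tiny.
    return -math.expm1(-q * (q - 1) / 2.0 ** (b + 1))

p64  = collision_probability(2 ** 32, 64)    # ~0.39: near-certain trouble
p128 = collision_probability(2 ** 32, 128)   # negligible
print(f"64-bit block:  {p64:.3f}")
print(f"128-bit block: {p128:.3e}")
```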
LeakyOhm: Secret Bits Extraction using Impedance Analysis
The threats of physical side-channel attacks and their countermeasures have
been widely researched. Most physical side-channel attacks rely on the
unavoidable influence of computation or storage on current consumption or
voltage drop on a chip. Such data-dependent influence can be exploited by, for
instance, power or electromagnetic analysis. In this work, we introduce a novel
non-invasive physical side-channel attack, which exploits the data-dependent
changes in the impedance of the chip. Our attack relies on the fact that the
temporarily stored contents in registers alter the physical characteristics of
the circuit, which results in changes in the die's impedance. To sense such
impedance variations, we deploy a well-known RF/microwave method called
scattering parameter analysis, in which we inject sine wave signals with high
frequencies into the system's power distribution network (PDN) and measure the
echo of the signals. We demonstrate that according to the content bits and
physical location of a register, the reflected signal is modulated differently
at various frequency points enabling the simultaneous and independent probing
of individual registers. Such side-channel leakage challenges the t-probing
security model assumption used in masking, which is a prominent side-channel
countermeasure. To validate our claims, we mount non-profiled and profiled
impedance analysis attacks on hardware implementations of unprotected and
high-order masked AES. We show that in the case of the profiled attack, only a
single trace is required to recover the secret key. Finally, we discuss how a
specific class of hiding countermeasures might be effective against impedance
leakage.
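The profiled attack flow can be illustrated with a simulated toy model (a hypothetical leakage model in which each frequency point reflects mainly one register bit plus Gaussian noise — not the paper's measurement setup): build per-value templates from many profiling traces, then classify a single attack trace by its nearest template.

```python
import numpy as np

rng = np.random.default_rng(1)
NOISE = 0.05  # hypothetical measurement noise level

def measure(byte: int) -> np.ndarray:
    """Simulated impedance trace: 8 frequency points, point i modulated
    mainly by register bit i (a stand-in for bit-dependent reflections)."""
    bits = np.array([(byte >> i) & 1 for i in range(8)], dtype=float)
    return bits + rng.normal(0.0, NOISE, size=8)

# Profiling phase: average many traces per candidate value -> templates.
templates = np.array([
    np.mean([measure(v) for _ in range(50)], axis=0) for v in range(256)
])

def attack(trace: np.ndarray) -> int:
    """Classify a single attack trace by the nearest template."""
    return int(np.argmin(np.sum((templates - trace) ** 2, axis=1)))

secret = 0xA7
recovered = attack(measure(secret))  # single-trace recovery in this toy model
assert recovered == secret
```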