A New Approach to the Discrete Logarithm Problem with Auxiliary Inputs
The discrete logarithm problem with auxiliary inputs (DLPwAI) is to
solve for α, given elements g, g^α, g^{α^d}
of a cyclic group of prime order p.
The best-known algorithm, proposed by Cheon in 2006,
solves it in the case of d | p ± 1
with a running time of O(√(p/d) + √d)
group exponentiations (or O(√(p/d) + d), depending on the sign).
There have been several attempts to generalize this algorithm
to the case of d ∤ p ± 1,
but it has been shown by Kim, Cheon and Lee that
they cannot achieve better complexity than the usual square-root algorithms.
We propose a new algorithm to solve the DLPwAI.
The complexity of the algorithm is determined by
a chosen polynomial f ∈ F_p[x] of degree d.
We show that the proposed algorithm has a running time of
Õ(√(p/M_f) + √d) group exponentiations,
where M_f is the number of absolutely irreducible factors of f(x) − f(y).
We note that M_f is always at most d.
To obtain a better complexity of the algorithm,
we investigate an upper bound on M_f and
try to find polynomials that achieve the upper bound.
We can find such polynomials in the case of d | p − 1.
In this case, the algorithm has a running time of
Õ(√(p/d) + √d) group operations,
which matches the lower bound in the generic group model.
On the contrary, we show that no polynomial exists that achieves the
upper bound in the case of d | p + 1.
As an independent interest, we present an analysis of a non-uniform
birthday problem.
Precisely, we show that a collision occurs with high probability after
O(1/√(∑ᵢ qᵢ²)) samplings of balls,
where the probability qᵢ of assigning a ball to the i-th bin is arbitrary.
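The non-uniform birthday bound can be checked empirically. The sketch below is an illustration of the stated bound, not code from the paper: it throws balls into bins with an arbitrarily skewed distribution and compares the observed collision time against 1/√(∑ᵢ qᵢ²).

```python
# Non-uniform balls-and-bins: throw balls into bins with arbitrary bin
# probabilities q_1, ..., q_n and record how many throws it takes until some
# bin receives a second ball. The first collision appears after roughly
# 1/sqrt(sum of q_i^2) throws, generalizing the uniform birthday bound.
import itertools
import random

def throws_until_collision(cum_weights, rng):
    """Number of throws until some bin receives a second ball."""
    bins = range(len(cum_weights))
    seen = set()
    count = 0
    while True:
        b = rng.choices(bins, cum_weights=cum_weights)[0]
        count += 1
        if b in seen:
            return count
        seen.add(b)

rng = random.Random(7)
n = 10_000
probs = [2 * (i + 1) / (n * (n + 1)) for i in range(n)]   # linearly skewed bins
cum = list(itertools.accumulate(probs))
predicted = sum(p * p for p in probs) ** -0.5             # 1/sqrt(sum q_i^2), ~87 here
avg = sum(throws_until_collision(cum, rng) for _ in range(300)) / 300
# avg stays within a small constant factor of `predicted`
```

Passing cumulative weights to `choices` keeps each draw at O(log n) via bisection, so the simulation stays fast even with many bins.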
A Study on the Discrete Logarithm Problem with Auxiliary Inputs
Doctoral dissertation, Department of Mathematical Sciences, Graduate School of Seoul National University, February 2014. Advisor: Jung Hee Cheon.

Modern cryptography has been developed based on mathematically hard problems.
For example, it is considered hard to solve the discrete logarithm problem (DLP).
The DLP asks to find α for given g and g^α,
where g is a generator of a cyclic group G.
It is well known that the lower bound on the complexity of solving the DLP
in the generic group model is Ω(√p) (EUROCRYPT 97, Shoup),
where p is the prime order of the group G.
However, if the problem is given with auxiliary information,
then it can be solved faster than √p.
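The √p complexity of generic algorithms can be made concrete with baby-step giant-step, which trades memory for time to solve the DLP in O(√p) group operations. A minimal sketch on a hypothetical toy instance (not from the thesis):

```python
# Baby-step giant-step (BSGS): a generic algorithm that solves g^x = h in a
# cyclic group of prime order p with about sqrt(p) group operations, matching
# the generic-model lower bound. Toy parameters only.
from math import isqrt

def bsgs(g, h, p, q):
    """Return x in [0, p) with g^x = h (mod q); g has order p in Z_q^*."""
    m = isqrt(p) + 1
    # Baby steps: store g^j for j = 0 .. m-1
    table = {}
    e = 1
    for j in range(m):
        table[e] = j
        e = (e * g) % q
    # Giant steps: peel off g^(-m) until we land in the table
    g_inv_m = pow(g, (p - m) % p, q)   # g^(-m), valid since g has order p
    y = h % q
    for i in range(m):
        if y in table:
            return (i * m + table[y]) % p
        y = (y * g_inv_m) % q
    return None

# Hypothetical toy instance: p = 101 divides q - 1 = 606, so Z_607^* has a
# subgroup of order 101 generated by 2^((q-1)/p) mod q.
p, q = 101, 607
g = pow(2, (q - 1) // p, q)            # = 64, an element of order 101
x_secret = 57
h = pow(g, x_secret, q)
assert bsgs(g, h, p, q) == x_secret
```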
In the first part of the thesis, we deal with the problem
called the discrete logarithm problem with auxiliary inputs (DLPwAI).
The DLPwAI asks to find α for given
g, g^α, and g^{α^d}.
The state-of-the-art algorithm to solve this problem is Cheon's algorithm,
which solves the problem in the case of d | p ± 1.
In the thesis, we propose a new method to solve the DLPwAI, which
reduces it to finding a polynomial with a small value set.
As a result, we solved the DLPwAI when g^{α^e} were given,
where e is an element of a multiplicative subgroup of Z_{p−1}^×.
In the latter part of the thesis,
we try to solve the DLP via the pairing inversion problem.
If one has an efficient algorithm to solve pairing inversion,
then it can be used to solve the DLP.
We focus on how to reduce the complexity of the pairing inversion problem
by reducing the size of the final exponentiation in the pairing computation.
As a result, we obtain a lower bound on the size of the final exponentiation.

Abstract
1 Introduction
2 Discrete Logarithm Problem
  2.1 Algorithms for the DLP
    2.1.1 Generic algorithms
    2.1.2 Non-generic algorithms
3 Discrete Logarithm Problem with Auxiliary Inputs
  3.1 Introduction
  3.2 The DLPwAI and Cheon's algorithm
    3.2.1 p − 1 cases
    3.2.2 Generalized algorithms
  3.3 Fast multipoint evaluation in the black-box manner
  3.4 Balls-and-Bins Problem
    3.4.1 Balls-and-Bins Problem with Uniform Probability
    3.4.2 Balls-and-Bins Problem with Non-Uniform Probability
  3.5 Polynomials with small value sets
    3.5.1 An approach using the polynomial of small value set: uniform case
    3.5.2 An approach using polynomials with almost small value set: non-uniform case
    3.5.3 Generalization of the Dickson Polynomial and its value set
4 Generalized DLP with Auxiliary Inputs
  4.1 Multiplicative Subgroups of Z_n^×
    4.1.1 Representation of a Multiplicative Subgroup of Z_n^×
  4.2 A Group Action on Z_p^×
  4.3 Polynomial Construction
  4.4 Main Theorem
5 The Pairing Inversion Problem
  5.1 Introduction
  5.2 Preliminaries
    5.2.1 Pairings
    5.2.2 Pairing-Friendly Elliptic Curves
    5.2.3 Exponentiation Method
  5.3 Reducing the final exponentiation
    5.3.1 Polynomial representation of the base-p coefficients
    5.3.2 Reducing the size of base-p coefficients
    5.3.3 Examples
6 Conclusion
Abstract (in Korean)
Acknowledgement (in Korean)
Still Wrong Use of Pairings in Cryptography
Several pairing-based cryptographic protocols have recently been proposed with a
wide variety of novel applications, including ones in emerging
technologies such as cloud computing, the internet of things (IoT), e-health systems
and wearable technologies. There has, however, been a wide range of incorrect
uses of these primitives. The paper of Galbraith, Paterson, and Smart (2006)
pointed out most of the issues related to the incorrect use of pairing-based
cryptography. However, we have noticed that some recently proposed applications
still do not use these primitives correctly. This leads to unrealizable,
insecure or overly inefficient designs of pairing-based protocols. We observe
that one reason is a lack of awareness of recent advances in solving the
discrete logarithm problem in certain groups. The main purpose of this article is
to give understandable, informative, and up-to-date criteria for
the correct use of pairing-based cryptography. We thereby deliberately avoid
most of the technical details and instead place special emphasis on the
importance of the correct use of bilinear maps in realizing secure
cryptographic protocols. We list a collection of recent papers having
wrong security assumptions or realizability/efficiency issues. Finally, we give
a compact and up-to-date recipe for the correct use of pairings.
Comment: 25 pages
PPP-Completeness with Connections to Cryptography
The Polynomial Pigeonhole Principle (PPP) is an important subclass of TFNP with
profound connections to the complexity of the fundamental cryptographic
primitives: collision-resistant hash functions and one-way permutations. In
contrast to most of the other subclasses of TFNP, no complete problem is known
for PPP. Our work identifies the first PPP-complete problem without any circuit
or Turing Machine given explicitly in the input, and thus we answer a
longstanding open question from [Papadimitriou1994]. Specifically, we show that
constrained-SIS (cSIS), a generalized version of the well-known Short Integer
Solution problem (SIS) from lattice-based cryptography, is PPP-complete.
In order to give intuition behind our reduction for constrained-SIS, we
identify another PPP-complete problem with a circuit in the input but closely
related to lattice problems. We call this problem BLICHFELDT and it is the
computational problem associated with Blichfeldt's fundamental theorem in the
theory of lattices.
Building on the inherent connection of PPP with collision-resistant hash
functions, we use our completeness result to construct the first natural hash
function family that captures the hardness of all collision-resistant hash
functions in a worst-case sense, i.e. it is natural and universal in the
worst-case. The close resemblance of our hash function family to SIS leads
us to the first candidate collision-resistant hash function that is both
natural and universal in an average-case sense.
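The connection between collision resistance and SIS that the paper builds on can be sketched in a toy form. The following is an illustrative sketch with insecurely small, made-up parameters, and it uses plain SIS rather than the paper's constrained-SIS:

```python
# Toy SIS-style hash (illustrative only): h_A(x) = A*x mod q maps {0,1}^m
# into Z_q^n with m > n*log2(q), so it compresses; any collision x != x'
# yields the short nonzero vector z = x - x' with A*z = 0 (mod q),
# i.e. a Short Integer Solution for A.
import random

def sis_hash(A, x, q):
    """Compute A*x mod q for a matrix A (list of rows) and a vector x."""
    return tuple(sum(a * b for a, b in zip(row, x)) % q for row in A)

rng = random.Random(0)
n, m, q = 4, 32, 17                      # m = 32 > n*log2(q) ~ 16.3
A = [[rng.randrange(q) for _ in range(m)] for _ in range(n)]

x1 = [rng.randrange(2) for _ in range(m)]
x2 = [rng.randrange(2) for _ in range(m)]
h1, h2 = sis_hash(A, x1, q), sis_hash(A, x2, q)

# Linearity: hashing the difference gives the difference of the hashes, so a
# collision (h1 == h2) forces A*(x1 - x2) = 0 mod q -- an SIS solution.
z = [a - b for a, b in zip(x1, x2)]
assert sis_hash(A, z, q) == tuple((u - v) % q for u, v in zip(h1, h2))
```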
Finally, our results enrich our understanding of the connections between PPP,
lattice problems and other concrete cryptographic assumptions, such as the
discrete logarithm problem over general groups.
Quantum resource estimates for computing elliptic curve discrete logarithms
We give precise quantum resource estimates for Shor's algorithm to compute
discrete logarithms on elliptic curves over prime fields. The estimates are
derived from a simulation of a Toffoli gate network for controlled elliptic
curve point addition, implemented within the framework of the quantum computing
software tool suite LIQUi|⟩. We determine circuit implementations for
reversible modular arithmetic, including modular addition, multiplication and
inversion, as well as reversible elliptic curve point addition. We conclude
that elliptic curve discrete logarithms on an elliptic curve defined over an
n-bit prime field can be computed on a quantum computer with at most 9n + 2⌈log₂(n)⌉ + 10 qubits using a quantum circuit of at most 448n³ log₂(n) + 4090n³ Toffoli gates. We are able to classically simulate the
Toffoli networks corresponding to the controlled elliptic curve point addition
as the core piece of Shor's algorithm for the NIST standard curves P-192,
P-224, P-256, P-384 and P-521. Our approach allows gate-level comparisons to
recent resource estimates for Shor's factoring algorithm. The results also
support estimates given earlier by Proos and Zalka and indicate that, for
current parameters at comparable classical security levels, the number of
qubits required to tackle elliptic curves is less than for attacking RSA,
suggesting that indeed ECC is an easier target than RSA.
Comment: 24 pages, 2 tables, 11 figures. v2: typos fixed and reference added.
ASIACRYPT 2017
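For concreteness, the headline counts for an n-bit prime field, roughly 9n + 2⌈log₂ n⌉ + 10 qubits and 448n³ log₂ n + 4090n³ Toffoli gates as we read them from the abstract, can be tabulated for the NIST curves. Treat the formulas as upper-bound estimates, not exact gate counts:

```python
# Evaluating the abstract's resource-count formulas for an n-bit prime field:
# qubits  ~ 9n + 2*ceil(log2 n) + 10
# Toffoli ~ 448*n^3*log2(n) + 4090*n^3
from math import ceil, log2

def ecc_dlog_resources(n):
    qubits = 9 * n + 2 * ceil(log2(n)) + 10
    toffoli = 448 * n**3 * log2(n) + 4090 * n**3
    return qubits, toffoli

for n in (192, 224, 256, 384, 521):        # NIST curves P-192 .. P-521
    qubits, toffoli = ecc_dlog_resources(n)
    print(f"P-{n}: {qubits} qubits, {toffoli:.2e} Toffoli gates")
```

For n = 256 this yields 2330 qubits and about 1.3 × 10¹¹ Toffoli gates, the order of magnitude quoted for P-256.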
Shaping the learning landscape in neural networks around wide flat minima
Learning in Deep Neural Networks (DNN) takes place by minimizing a non-convex
high-dimensional loss function, typically by a stochastic gradient descent
(SGD) strategy. The learning process is observed to find good
minimizers without getting stuck in local critical points, and such
minimizers are often good at avoiding overfitting. How these two
features can be kept under control in nonlinear devices composed of millions of
tunable connections is a profound and far-reaching open question. In this paper
we study basic non-convex one- and two-layer neural network models which learn
random patterns, and derive a number of basic geometrical and algorithmic
features which suggest some answers. We first show that the error loss function
presents few extremely wide flat minima (WFM) which coexist with narrower
minima and critical points. We then show that the minimizers of the
cross-entropy loss function overlap with the WFM of the error loss. We also
show examples of learning devices for which WFM do not exist. From the
algorithmic perspective we derive entropy driven greedy and message passing
algorithms which focus their search on wide flat regions of minimizers. In the
case of SGD and cross-entropy loss, we show that a slow reduction of the norm
of the weights along the learning process also leads to WFM. We corroborate the
results by a numerical study of the correlations between the volumes of the
minimizers, their Hessian and their generalization performance on real data.Comment: 37 pages (16 main text), 10 figures (7 main text
A Machine-Checked Formalization of the Generic Model and the Random Oracle Model
Most approaches to the formal analysis of cryptographic protocols make the perfect cryptography assumption, i.e. the hypothesis that there is no way to obtain knowledge about the plaintext pertaining to a ciphertext without knowing the key. Ideally, one would prefer to rely on a weaker hypothesis on the computational cost of gaining information about the plaintext pertaining to a ciphertext without knowing the key. Such a view is permitted by the Generic Model and the Random Oracle Model, which provide non-standard computational models in which one may reason about the computational cost of breaking a cryptographic scheme. Using the proof assistant Coq, we provide a machine-checked account of the Generic Model and the Random Oracle Model.
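The Random Oracle Model is usually realized operationally by lazy sampling: nothing is fixed in advance, and the oracle draws a fresh uniform value the first time each input is queried. A minimal generic sketch, not part of the Coq development:

```python
# Lazy-sampling random oracle: the first query on each input draws a fresh
# uniform value from the range; later queries on the same input are answered
# consistently from a table -- the standard operational reading of the
# Random Oracle Model.
import random

class RandomOracle:
    def __init__(self, range_size, seed=None):
        self.range_size = range_size
        self.table = {}                     # input -> previously sampled value
        self.rng = random.Random(seed)

    def query(self, x):
        if x not in self.table:             # first time: sample lazily
            self.table[x] = self.rng.randrange(self.range_size)
        return self.table[x]

ro = RandomOracle(2**16, seed=42)
a = ro.query(b"hello")
assert ro.query(b"hello") == a              # repeated queries are consistent
assert 0 <= a < 2**16                       # values lie in the stated range
```

This lazy view is what lets security proofs "program" the oracle: the reduction may choose the table entry at the moment of the first query, as long as consistency is preserved.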
Removable Weak Keys for Discrete Logarithm Based Cryptography
We describe a novel type of weak cryptographic private key that can exist in
any discrete logarithm based public-key cryptosystem set in a group of prime
order p where p − 1 has small divisors. Unlike the weak private keys based on
numerical size (such as smaller private keys, or private keys lying in
an interval) that will always exist in any DLP cryptosystem, our type
of weak private keys occurs purely due to the parameter choice of p, and hence
can be removed with an appropriate choice of p. Using the theory of implicit
group representations, we present algorithms that can determine whether a key
is weak, and if so, recover the private key from the corresponding public key.
We analyze several elliptic curves proposed in the literature and in various
standards, giving counts of the number of keys that can be broken with
relatively small amounts of computation. Our results show that many of these
curves, including some from standards, have a considerable number of such weak
private keys. We also use our methods to show that none of the 14 outstanding
Certicom Challenge problem instances are weak in our sense, up to a certain
weakness bound.
On Constant-Round Concurrent Zero-Knowledge from a Knowledge Assumption
In this work, we consider the long-standing open question of constructing
constant-round concurrent zero-knowledge protocols in the plain model.
Resolving this question is known to require non-black-box techniques.
We consider non-black-box techniques for zero-knowledge based on knowledge
assumptions, a line of thinking initiated by the work of Hada and Tanaka
(CRYPTO 1998). Prior to our work, it was not known whether knowledge
assumptions could be used for achieving security in the concurrent setting, due
to a number of significant limitations that we discuss here. Nevertheless, we
obtain the following results:
1. We obtain the first constant-round concurrent zero-knowledge argument for
NP in the plain model based on a new variant of the knowledge of exponent
assumption. Furthermore, our construction avoids the inefficiency inherent in
previous non-black-box techniques such as those of Barak (FOCS 2001); we
obtain our result through an efficient protocol compiler.
2. Unlike Hada and Tanaka, we do not require a knowledge assumption to argue
the soundness of our protocol. Instead, we use a discrete-log-like assumption,
which we call the Diffie-Hellman Logarithm Assumption, to prove the soundness of
our protocol.
3. We give evidence that our new variant of knowledge of exponent assumption
is in fact plausible. In particular, we show that our assumption holds in the
generic group model.
4. Knowledge assumptions are especially delicate assumptions whose
plausibility may be hard to gauge. We give a novel framework to express
knowledge assumptions in a more flexible way, which may allow for formulation
of plausible assumptions and exploration of their impact and application in
cryptography.Comment: 30 pages, 3 figure