1,405 research outputs found

    A Universal Forgery of Hess's Second ID-based Signature against the Known-message Attack

    In this paper we propose a universal forgery attack on Hess's second ID-based signature scheme under the known-message attack

    Multisignatures secure under the discrete logarithm assumption and a generalized forking lemma

    Multisignatures allow $n$ signers to produce a short joint signature on a single message. Multisignatures were achieved in the plain model with a non-interactive protocol in groups with bilinear maps, by Boneh et al. [4], and by a three-round protocol under the Discrete Logarithm (DL) assumption, by Bellare and Neven [3], with multisignature verification cost of, respectively, $O(n)$ pairings or exponentiations. In addition, multisignatures with $O(1)$ verification were shown in the so-called Key Verification (KV) model, where each public key is accompanied by a short proof of well-formedness, again either with a non-interactive protocol using bilinear maps, by Ristenpart and Yilek [15], or with a three-round protocol under the Diffie-Hellman assumption, by Bagherzandi and Jarecki [1]. We improve on these results in two ways: first, we show a two-round $O(n)$-verification multisignature secure under the DL assumption

    A New Approach to the Discrete Logarithm Problem with Auxiliary Inputs

    The discrete logarithm problem with auxiliary inputs (DLPwAI) is to solve for $\alpha$ given elements $g, g^\alpha, \ldots, g^{\alpha^d}$ of a cyclic group $G = \langle g \rangle$ of prime order $p$. The best-known algorithm, proposed by Cheon in 2006, solves for $\alpha$ in the case of $d \mid (p \pm 1)$ with running time of $O\left(\sqrt{p/d} + d^i\right)$ group exponentiations ($i = 1$ or $1/2$ depending on the sign). There have been several attempts to generalize this algorithm to the case of $\Phi_k(p)$ for $k \ge 3$, but it has been shown, by Kim, Cheon and Lee, that they cannot have better complexity than the usual square-root algorithms. We propose a new algorithm to solve the DLPwAI. The complexity of the algorithm is determined by a chosen polynomial $f \in \mathbb{F}_p[x]$ of degree $d$. We show that the proposed algorithm has a running time of $\widetilde{O}\left(\sqrt{p/\tau_f} + d\right)$ group exponentiations, where $\tau_f$ is the number of absolutely irreducible factors of $f(x) - f(y)$. We note that it is always smaller than $\widetilde{O}(p^{1/2})$. To obtain a better complexity, we investigate an upper bound of $\tau_f$ and try to find polynomials that achieve the upper bound. We can find such polynomials in the case of $d \mid (p \pm 1)$; in this case, the algorithm has a running time of $\widetilde{O}\left(\sqrt{p/d} + d\right)$ group operations, which matches the lower bound in the generic group model. On the contrary, we show that no polynomial exists that achieves the upper bound in the case of $d \mid \Phi_3(p) = p^2 + p + 1$. As an independent interest, we present an analysis of a non-uniform birthday problem. Precisely, we show that a collision occurs with high probability after $O\big(\frac{1}{\sqrt{\sum_k w_k^2}}\big)$ samplings of balls, where the probability $w_k$ of assigning a ball to bin $k$ is arbitrary
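    The non-uniform birthday bound quoted above is easy to check empirically. Below is a minimal Monte Carlo sketch; the weight vector and trial count are illustrative choices, not taken from the paper:

```python
import math
import random

def samples_until_collision(weights, rng):
    """Draw balls into bins with probabilities `weights` until two land in the same bin."""
    seen = set()
    bins = range(len(weights))
    count = 0
    while True:
        k = rng.choices(bins, weights=weights)[0]
        count += 1
        if k in seen:
            return count
        seen.add(k)

rng = random.Random(1)
# An arbitrary (non-uniform) weight vector, normalized to sum to 1.
w = [1, 2, 3, 4, 10]
total = sum(w)
w = [x / total for x in w]

trials = 2000
avg = sum(samples_until_collision(w, rng) for _ in range(trials)) / trials
# The paper's bound: a collision after O(1 / sqrt(sum_k w_k^2)) samples.
bound = 1 / math.sqrt(sum(x * x for x in w))
print(f"average samples to collision: {avg:.2f}, 1/sqrt(sum w_k^2): {bound:.2f}")
```

    For a uniform distribution over $K$ bins, $\sum_k w_k^2 = 1/K$, so this bound recovers the classical $O(\sqrt{K})$ birthday estimate.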

    Approximate Algorithms on Lattices with Small Determinant

    In this paper, we propose approximate lattice algorithms for solving the shortest vector problem (SVP) and the closest vector problem (CVP) on an $n$-dimensional Euclidean integer lattice $L$. Our algorithms run in time polynomial in the dimension and determinant of the lattice and improve on the LLL algorithm when the determinant of the lattice is less than $2^{n^2/4}$. More precisely, our approximate SVP algorithm outputs a lattice vector of size $\le 2^{\sqrt{\log\det L}}$, and our approximate CVP algorithm outputs a lattice vector whose distance to a target vector is at most $2^{\sqrt{\log\det L}}$ times the distance from the target vector to the lattice. One interesting feature of our algorithms is that their output sizes are independent of the dimension and become smaller as the determinant of $L$ becomes smaller. For example, if $\det L = 2^{n\sqrt{n}}$, a short vector output by our approximate SVP algorithm is of size $2^{n^{3/4}}$, which is asymptotically smaller than the size $2^{n/4+\sqrt{n}}$ of the short vectors output by the LLL algorithm. A similar improvement holds for our approximate CVP algorithm
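    The size comparison in the example above is a few lines of arithmetic. This sketch simply evaluates the two exponents (base-2 logarithms of the output-size bounds) for some sample dimensions, with $\det L = 2^{n\sqrt{n}}$:

```python
import math

def new_bound_log(n):
    # log2 of the output-size bound 2^sqrt(log2 det L), with det L = 2^(n*sqrt(n)):
    # sqrt(n * sqrt(n)) = n^(3/4)
    log_det = n * math.sqrt(n)
    return math.sqrt(log_det)

def lll_bound_log(n):
    # log2 of LLL's output-size bound 2^(n/4 + sqrt(n))
    return n / 4 + math.sqrt(n)

for n in (256, 1024, 4096):
    print(n, new_bound_log(n), lll_bound_log(n))
```

    For $n = 256$, the exponents are $256^{3/4} = 64$ versus $256/4 + 16 = 80$, and the gap widens as $n$ grows.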

    Fully Homomorphic Encryption over the Integers Revisited

    Two main computational problems serve as security foundations of current fully homomorphic encryption schemes: Regev's Learning With Errors problem (LWE) and Howgrave-Graham's Approximate Greatest Common Divisor problem (AGCD). Our first contribution is a reduction from LWE to AGCD. As a second contribution, we describe a new AGCD-based fully homomorphic encryption scheme, which outperforms all prior AGCD-based proposals: its security does not rely on the presumed hardness of the so-called Sparse Subset Sum problem, and the bit-length of a ciphertext is only $\widetilde{O}(\lambda)$, where $\lambda$ is the security parameter
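    For readers unfamiliar with AGCD: an instance consists of integers that are all close to multiples of a hidden common divisor $p$. A toy sketch of instance generation follows; the parameter names `rho` (noise length) and `gamma` (sample length) follow the usual convention, and the toy sizes here are far below anything secure:

```python
import random

def agcd_samples(p, m, rho, gamma, rng):
    """Generate m AGCD samples x_i = p*q_i + r_i with |r_i| < 2^rho and x_i < 2^gamma.

    Recovering the hidden common (near-)divisor p from the x_i alone is the AGCD problem.
    """
    samples = []
    for _ in range(m):
        q = rng.randrange(2 ** (gamma - p.bit_length()))
        r = rng.randrange(-(2 ** rho) + 1, 2 ** rho)
        samples.append(p * q + r)
    return samples

rng = random.Random(0)
p = 1000003  # toy secret integer; real parameters are far larger
xs = agcd_samples(p, m=5, rho=4, gamma=40, rng=rng)
# Each sample is within 2^rho of a multiple of p.
assert all(min(x % p, p - (x % p)) < 2 ** 4 for x in xs)
```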

    Revisiting the Hybrid attack on sparse and ternary secret LWE

    In practical use of Learning With Errors (LWE) based cryptosystems, it is quite common to choose the secret to be extremely small: one popular choice is a ternary ($\{0, \pm 1\}$) coefficient vector, and some schemes further use ternary vectors with only a small number of nonzero coefficients, so-called sparse ternary vectors. Such small secrets also benefit attack algorithms against LWE, and current LWE-based cryptosystems, including homomorphic encryption (HE) schemes, set parameters based on the complexity of those improved attacks. In this work, we revisit Howgrave-Graham's well-known hybrid attack, originally designed to solve the NTRU problem, in the sparse and ternary secret LWE setting, and refine the previous analysis of the hybrid attack in line with the LWE setting. Moreover, based on our analysis, we estimate the attack complexity of the hybrid attack for several LWE parameters. As a result, we argue that currently used HE parameters should be raised to maintain the same security level against the hybrid attack; for example, the parameter set $(n, \log q, \sigma) = (65536, 1240, 3.2)$ with secret-key Hamming weight $h = 64$, previously estimated to satisfy $\ge 128$-bit security under the previously considered attacks, is newly estimated to provide only $113$-bit security under the hybrid attack
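    As a concrete illustration of the secrets discussed above, here is a toy sampler for a sparse ternary secret of dimension $n$ and Hamming weight $h$ (an illustrative sketch, not code from the paper):

```python
import random

def sparse_ternary_secret(n, h, rng):
    """Sample a length-n vector with exactly h nonzero entries, each +/-1."""
    s = [0] * n
    for i in rng.sample(range(n), h):  # choose h distinct nonzero positions
        s[i] = rng.choice((-1, 1))
    return s

rng = random.Random(42)
s = sparse_ternary_secret(n=1024, h=64, rng=rng)
assert sum(1 for x in s if x != 0) == 64
assert set(s) <= {-1, 0, 1}
```

    The small support (here $64$ of $1024$ positions) is exactly what the hybrid attack's meet-in-the-middle stage exploits.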

    Efficacy of early immunomodulator therapy on the outcomes of Crohn’s disease

    BACKGROUND: The natural course of Crohn’s disease (CD), with continuing relapses and remissions, leads to irreversible intestinal damage. Early adoption of immunomodulator therapy has been proposed to address this; however, it is still uncertain whether early immunomodulator therapy affects the natural course of the disease in real practice. We evaluated the efficacy of such therapy on the prognosis of newly diagnosed patients with CD. METHODS: This retrospective study included 168 patients who were newly diagnosed with CD and who started treatment at Severance Hospital, Seoul, Korea between January 2006 and March 2013. The short- and long-term outcomes were compared between patients treated with early immunomodulator therapy and those treated with conventional therapy. RESULTS: A Kaplan-Meier analysis identified that administration of immunomodulators within 6 months after diagnosis of CD was superior to conventional therapy in terms of clinical remission and corticosteroid-free remission rates (P=0.043 and P=0.035). However, the relapse rate did not differ significantly between the two groups (P=0.827). Patients with an elevated baseline CRP level were more likely to relapse (P<0.005). Drug-related adverse events were more frequent in the early immunomodulator therapy group than in the conventional therapy group (P=0.029). CONCLUSIONS: Early immunomodulator therapy was more effective than conventional therapy in inducing remission, but not in preventing relapse. A high baseline CRP level was a significant indicator of relapse

    Probability that the k-gcd of products of positive integers is B-friable

    In 1849, Dirichlet~\cite{D49} proved that the probability that two positive integers are relatively prime is $1/\zeta(2)$. Later, this was generalized to the case that positive integers have no nontrivial $k$-th power common divisor. In this paper, we further generalize this result: the probability that the gcd of $m$ products of $n$ positive integers is $B$-friable is $\prod_{p>B}\left[1-\left\{1-\left(1-\frac{1}{p}\right)^{n}\right\}^{m}\right]$ for $m \ge 2$. We show that it is lower bounded by $\frac{1}{\zeta(s)}$ for some $s>1$ if $B > n^{\frac{m}{m-1}}$, which completes the heuristic proof in the cryptanalysis of cryptographic multilinear maps by Cheon et al.~\cite{CHLRS15}. We extend this result to the case of the $k$-gcd: the probability is $\prod_{p>B}\left[1-\left\{1-\left(1-\frac{1}{p}\right)^{n}\left(1+\frac{{}_{n}H_{1}}{p}+\cdots+\frac{{}_{n}H_{k-1}}{p^{k-1}}\right)\right\}^{m}\right]$, where ${}_{n}H_{i} = \binom{n+i-1}{i}$
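    Dirichlet's baseline $1/\zeta(2) = 6/\pi^2 \approx 0.6079$ can be reproduced with a quick Monte Carlo experiment; the trial count and sampling bound below are arbitrary choices:

```python
import math
import random

def coprime_fraction(trials, bound, rng):
    """Estimate the probability that two uniform integers in [1, bound] are coprime."""
    hits = sum(
        1 for _ in range(trials)
        if math.gcd(rng.randrange(1, bound + 1), rng.randrange(1, bound + 1)) == 1
    )
    return hits / trials

rng = random.Random(7)
est = coprime_fraction(200_000, 10**6, rng)
print(f"estimate: {est:.4f}, 6/pi^2 = {6 / math.pi**2:.4f}")
```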

    An Approach to Reduce Storage for Homomorphic Computations

    We introduce a hybrid homomorphic encryption, combining public key encryption (PKE) and somewhat homomorphic encryption (SHE), to reduce storage in most applications of somewhat or fully homomorphic encryption (FHE). In this model, one encrypts messages with a PKE and computes on encrypted data using an SHE or FHE after homomorphic decryption. To obtain efficient homomorphic decryption, our hybrid scheme is constructed by combining IND-CPA PKE schemes without complicated message padding with SHE schemes having a large integer message space. Furthermore, we remark that if the underlying PKE is multiplicative on a domain closed under addition and multiplication, the scheme has an important advantage: one can evaluate a polynomial of arbitrary degree without recryption. We propose such a scheme by concatenating the ElGamal and Goldwasser-Micali schemes over a ring $\mathbb{Z}_N$ for a composite integer $N$, with message space $\mathbb{Z}_N^\times$. For practical applications, however, homomorphic decryption of the base PKE is too expensive. We accelerate the homomorphic evaluation of the decryption by introducing a method to reduce the degree of the exponentiation circuit at the cost of additional public keys. Using the same technique, we give a partial but efficient solution to the open problem of~\cite{KLYC13}. As an independent interest, we obtain another generic conversion method from private-key SHE to public-key SHE. Unlike Rothblum~\cite{RothTCC11}, the message space of the SHE can be chosen freely
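    The multiplicative homomorphism such a construction relies on can be illustrated with textbook ElGamal: ciphertexts multiply componentwise, so the product of two ciphertexts decrypts to the product of the plaintexts. This is a toy sketch with an illustrative modulus, not the paper's ElGamal/Goldwasser-Micali concatenation over $\mathbb{Z}_N$:

```python
import random

p = 2 ** 127 - 1  # a Mersenne prime, used here as a toy modulus
g = 3

rng = random.Random(0)
x = rng.randrange(2, p - 1)  # secret key
y = pow(g, x, p)             # public key

def encrypt(m, rng):
    k = rng.randrange(2, p - 1)
    return (pow(g, k, p), m * pow(y, k, p) % p)

def decrypt(c):
    a, b = c
    # b * a^(-x) = m * y^k * g^(-k*x) = m  (inverse via Fermat's little theorem)
    return b * pow(a, p - 1 - x, p) % p

c1, c2 = encrypt(100, rng), encrypt(37, rng)
prod = (c1[0] * c2[0] % p, c1[1] * c2[1] % p)  # componentwise product
assert decrypt(prod) == 3700
```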

    Fixed Argument Pairing Inversion on Elliptic Curves

    Let $E$ be an elliptic curve over a finite field $\mathbb{F}_q$, where $q$ is a prime power, $r$ a prime dividing $\#E(\mathbb{F}_q)$, and $k$ the smallest positive integer satisfying $r \mid \Phi_k(q)$, called the embedding degree. Then a bilinear map $t: E(\mathbb{F}_q)[r] \times E(\mathbb{F}_{q^k})/rE(\mathbb{F}_{q^k}) \rightarrow \mathbb{F}_{q^k}^*$ is defined, called the Tate pairing; the Ate pairing and other variants are obtained by reducing the domain of each argument and raising the result to some power. In this paper we consider the {\em Fixed Argument Pairing Inversion (FAPI)} problem for the Tate pairing and its variants. In 2012, considering FAPI for the Ate$_i$ pairing, Kanayama and Okamoto formulated the {\em Exponentiation Inversion (EI)} problem. However, their definition gives a somewhat vague description of the hardness of EI. We point out that the EI problem as described can be easily solved, and hence clarify the description so that the problem does capture the actual hardness in connection with the prescribed domain for given pairings. Next we show that inverting the Ate pairing (including other variants of the Tate pairing) defined on the smaller domain is neither easier nor harder than inverting the Tate pairing defined on the larger domain. This is very interesting because it is commonly believed that the structure of the Ate pairing is so simple and good (that is, the Miller length is short, and the solution domain is small with an algebraic structure induced from the Frobenius map) that it may leak some information, so attackers might find approaches to solving FAPI for the Ate pairing that do not apply to the Tate pairing
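    For a prime $r$ not dividing $q$, the embedding degree defined above is equivalently the multiplicative order of $q$ modulo $r$, i.e. the smallest $k$ with $r \mid q^k - 1$. A short sketch (the toy values are illustrative, not from a pairing-friendly curve):

```python
def embedding_degree(q, r):
    """Smallest k with r | q^k - 1, i.e. the multiplicative order of q modulo r.

    Assumes gcd(q, r) = 1, otherwise the loop does not terminate.
    """
    k, t = 1, q % r
    while t != 1:
        t = t * q % r
        k += 1
    return k

# Toy example: q = 11, r = 7. Then 11 = 4 (mod 7), and 4^1 = 4, 4^2 = 2, 4^3 = 1,
# so the embedding degree is 3.
assert embedding_degree(11, 7) == 3
```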