14 research outputs found

    Mixed-radix Naccache-Stern encryption

    In this work we explore a combinatorial optimization problem stemming from the Naccache-Stern cryptosystem. We show that solving this problem yields bandwidth improvements, and suggest a polynomial-time approximation algorithm for finding an optimal solution. Our work suggests that using optimal radix encoding results in an asymptotic 50% increase in bandwidth.
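    The mixed-radix encoding underlying this optimization can be illustrated with a short sketch. The bases below are hypothetical placeholders; the paper's contribution is choosing the radices to optimize Naccache-Stern bandwidth, which this toy does not attempt.

```python
# Toy mixed-radix encode/decode (hypothetical bases; the paper optimizes
# the choice of radices for Naccache-Stern bandwidth, which is not done here).

def to_mixed_radix(n, bases):
    """Decompose n into digits d_i with 0 <= d_i < bases[i]."""
    digits = []
    for b in bases:
        digits.append(n % b)
        n //= b
    assert n == 0, "n too large for the given bases"
    return digits

def from_mixed_radix(digits, bases):
    """Recompose the integer from its mixed-radix digits."""
    n = 0
    for d, b in zip(reversed(digits), reversed(bases)):
        n = n * b + d
    return n

bases = [2, 3, 5, 7]            # capacity: 2*3*5*7 = 210 distinct messages
m = 123
digits = to_mixed_radix(m, bases)
assert from_mixed_radix(digits, bases) == m
```

    In a fixed-radix encoding every digit wastes the gap between its base and the nearest power of two; letting each position use its own base is what opens the bandwidth gain the abstract refers to.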

    Self Masking for Hardening Inversions

    The question whether one-way functions (i.e., functions that are easy to compute but hard to invert) exist is arguably one of the central problems in complexity theory, from both theoretical and practical aspects. While proving that such functions exist could be hard, there have been quite a few attempts to provide functions which are one way "in practice", namely, they are easy to compute, but there are no known polynomial-time algorithms that compute their (generalized) inverse (or computing their inverse is as hard as notoriously difficult tasks, like factoring very large integers). In this paper we study a different approach. We provide a simple heuristic, called self masking, which converts a given polynomial-time computable function $f$ into a self-masked version $[f]$, which satisfies the following: for a random input $x$, $[f]^{-1}([f](x)) = f^{-1}(f(x))$ w.h.p., but a part of $f(x)$ which is essential for computing $f^{-1}(f(x))$ is masked in $[f](x)$. Intuitively, this masking makes it hard to convert an efficient algorithm which computes $f^{-1}$ into an efficient algorithm which computes $[f]^{-1}$, since the masked parts are available to $f$ but not to $[f]$. We apply this technique to variants of the subset sum problem which were studied in the context of one-way functions, and obtain functions which, to the best of our knowledge, cannot be inverted in polynomial time by published techniques.
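    The flavor of the idea can be sketched in a few lines. This is emphatically not the paper's construction: the weights and the choice of which bits to withhold are invented here purely to show what "masking a part of $f(x)$ essential for inversion" looks like on a subset-sum function.

```python
# Toy sketch of the self-masking idea; NOT the paper's actual construction,
# only an illustration on a subset-sum function with hypothetical weights.

WEIGHTS = [3, 7, 19, 41, 87, 180]   # hypothetical public weights

def f(x):
    """Subset sum over 0/1 selection bits x."""
    return sum(w for w, b in zip(WEIGHTS, x) if b)

def f_masked(x):
    """Publish the sum with its low-order bits withheld: an inversion
    routine that expects the exact sum can no longer be applied directly."""
    return f(x) & ~0b111

x = (1, 0, 1, 1, 0, 0)
print(f(x), f_masked(x))   # 63 56
```

    The point of the heuristic is that the masked bits were inputs to any known subset-sum inverter, so hiding them blocks the inverter without (in the paper's construction, w.h.p.) destroying uniqueness of the preimage.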

    Exploring Naccache-Stern Knapsack Encryption

    The Naccache–Stern public-key cryptosystem (NS) relies on the conjectured hardness of the modular multiplicative knapsack problem: given $p$, $\{v_i\}$, and $\prod v_i^{m_i} \bmod p$, find the $\{m_i\}$. Given this scheme's algebraic structure, it is interesting to systematically explore its variants and generalizations. In particular, it might be useful to enhance NS with features such as semantic security, re-randomizability, or an extension to higher residues. This paper addresses these questions and proposes several such variants.
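    The multiplicative knapsack the abstract describes can be made concrete with a deliberately tiny sketch of the textbook NS scheme. The parameters below (p = 211, secret s = 11, primes 2, 3, 5, 7) are toy values chosen here for illustration and are far too small to be secure.

```python
# Toy Naccache-Stern knapsack encryption (tiny, insecure parameters chosen
# here for illustration; p, s and the prime set are placeholders, not from
# any real parameter set).

p = 211                       # public prime modulus
s = 11                        # secret exponent, gcd(s, p-1) = 1
small_primes = [2, 3, 5, 7]   # product 210 < p, so decryption is unambiguous
s_inv = pow(s, -1, p - 1)     # s^{-1} mod (p-1)
v = [pow(q, s_inv, p) for q in small_primes]   # public key: v_i = q_i^{1/s}

def encrypt(bits):
    """c = prod v_i^{m_i} mod p for message bits m_i."""
    c = 1
    for vi, m in zip(v, bits):
        c = c * pow(vi, m, p) % p
    return c

def decrypt(c):
    """c^s mod p equals prod q_i^{m_i} as an integer < p; read off the bits
    by divisibility by each small prime."""
    prod = pow(c, s, p)
    return [1 if prod % q == 0 else 0 for q in small_primes]

bits = [1, 0, 1, 1]
assert decrypt(encrypt(bits)) == bits
```

    Recovering the $\{m_i\}$ from $\prod v_i^{m_i} \bmod p$ without $s$ is exactly the modular multiplicative knapsack problem whose hardness the scheme conjectures.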

    An Analysis of Modern Cryptosystems

    Since the ancient Egyptian empire, man has searched for ways to protect information from getting into the wrong hands. Julius Caesar used a simple substitution cipher to protect secrets. During World War II, the Allies and the Axis had codes that they used to protect information. Now that we have computers at our disposal, the methods used to protect data in the past are ineffective. More recently, computer scientists and mathematicians have been working diligently to develop cryptosystems which will provide absolute security in a computing environment. The three major cryptosystems in use today are DES, RSA, and the Knapsack Cryptosystem. These cryptosystems have been reviewed and the positive and negative aspects of each are discussed. A newcomer to the field of cryptology is the Random Spline Cryptosystem, which is discussed in detail.
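    The substitution cipher attributed to Caesar in this survey is simple enough to state in full: each letter is shifted a fixed number of places through the alphabet (three, in the shift traditionally associated with Caesar).

```python
# The classical Caesar substitution cipher: shift each letter by a fixed
# amount, wrapping around the alphabet; non-letters pass through unchanged.

def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

msg = "ATTACK AT DAWN"
ct = caesar(msg, 3)            # "DWWDFN DW GDZQ"
assert caesar(ct, -3) == msg   # shifting back by 3 recovers the plaintext
```

    With only 25 possible keys the cipher falls to exhaustive trial, which is precisely why such pre-computer methods are called ineffective above.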

    Improved Classical and Quantum Algorithms for the Shortest Vector Problem via Bounded Distance Decoding

    The most important computational problem on lattices is the Shortest Vector Problem (SVP). In this paper, we present new algorithms that improve the state of the art for provable classical and quantum algorithms for SVP. We present the following results.
    $\bullet$ A new algorithm for SVP that provides a smooth tradeoff between time complexity and memory requirement. For any positive integer $4 \leq q \leq \sqrt{n}$, our algorithm takes $q^{13n+o(n)}$ time and requires $poly(n) \cdot q^{16n/q^2}$ memory. This tradeoff, which ranges from enumeration ($q = \sqrt{n}$) to sieving ($q$ constant), is a consequence of a new time-memory tradeoff for Discrete Gaussian sampling above the smoothing parameter.
    $\bullet$ A quantum algorithm for SVP that runs in time $2^{0.953n+o(n)}$ and requires $2^{0.5n+o(n)}$ classical memory and $poly(n)$ qubits. In the Quantum Random Access Memory (QRAM) model this algorithm takes only $2^{0.873n+o(n)}$ time and requires a QRAM of size $2^{0.1604n+o(n)}$, $poly(n)$ qubits and $2^{0.5n}$ classical space. This improves over the previously fastest classical (which is also the fastest quantum) algorithm due to [ADRS15] that has time and space complexity $2^{n+o(n)}$.
    $\bullet$ A classical algorithm for SVP that runs in $2^{1.741n+o(n)}$ time and $2^{0.5n+o(n)}$ space. This improves over an algorithm of [CCL18] that has the same space complexity.
    The time complexity of our classical and quantum algorithms is obtained using a known upper bound on a quantity related to the lattice kissing number, which is $2^{0.402n}$. We conjecture that for most lattices this quantity is $2^{o(n)}$. Assuming that this is the case, our classical algorithm runs in time $2^{1.292n+o(n)}$, our quantum algorithm runs in time $2^{0.750n+o(n)}$, and our quantum algorithm in the QRAM model runs in time $2^{0.667n+o(n)}$.
    Comment: Faster Quantum Algorithm for SVP in QRAM, 43 pages, 4 figures.
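    The headline running times can be put side by side numerically. The sketch below only tabulates the leading exponents stated in the abstract; nothing in it comes from the paper beyond those constants, and the $o(n)$ terms are ignored, so the figures are rough orders of magnitude only.

```python
# Rough comparison of the leading exponential terms quoted in the abstract,
# evaluated at rank n = 100 (o(n) terms ignored). Note the new classical
# algorithm trades a worse exponent for exponentially less space than ADRS15.

n = 100
exponents = {
    "[ADRS15] classical (2^n space)":  1.0,
    "new classical (2^{0.5n} space)":  1.741,
    "new quantum":                     0.953,
    "new quantum, QRAM model":         0.873,
}
for name, c in sorted(exponents.items(), key=lambda kv: kv[1]):
    print(f"{name:32s} time ~ 2^{c * n:.1f}")
```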

    Faster Sieving Algorithm for Approximate SVP with Constant Approximation Factors

    There is a large gap between theory and practice in the complexities of sieving algorithms for solving the shortest vector problem in an arbitrary Euclidean lattice. In this paper, we work towards reducing this gap, providing theoretical refinements of the time and space complexity bounds in the context of the approximate shortest vector problem. This is achieved by relaxing the requirements on the AKS algorithm, rather than on the ListSieve, resulting in exponentially smaller bounds starting from $\mu \approx 2$, for constant values of $\mu$. We also explain why these improvements carry over to give the fastest quantum algorithms for the approximate shortest vector problem.

    Why we couldn't prove SETH hardness of the Closest Vector Problem for even norms, and of the Subset Sum Problem!

    Recent work [BGS17, ABGS19] has shown SETH hardness of some constant-factor approximate CVP in the $\ell_p$ norm for any $p$ that is not an even integer. This result was shown by giving a Karp reduction from $k$-SAT on $n$ variables to approximate CVP on a lattice of rank $n$. In this work, we show a barrier towards proving a similar result for CVP in the $\ell_p$ norm where $p$ is an even integer. We show that for any $c, c' > 0$, if for every $k > 0$ there exists an efficient reduction that maps a $k$-SAT instance on $n$ variables to a $(1+\exp(-n^c))$-CVP instance for a lattice of rank at most $n^{c'}$ in the Euclidean norm, then $\mathsf{coNP} \subset \mathsf{NP/Poly}$. We prove a similar result for $(1+\exp(-n^c))$-CVP for all even norms under a mild additional promise that the ratio of the distance of the target from the lattice and the shortest non-zero vector in the lattice is bounded by $\exp(n^{O(1)})$. Furthermore, we show that for any $c, c' > 0$ and any even integer $p$, if for every $k > 0$ there exists an efficient reduction that maps a $k$-SAT instance on $n$ variables to a $(1+\exp(-n^c))$-$SVP_p$ instance for a lattice of rank at most $n^{c'}$, then $\mathsf{coNP} \subset \mathsf{NP/Poly}$. The result for SVP does not require any additional promise. While prior results have indicated that lattice problems in the $\ell_2$ (Euclidean) norm are easier than lattice problems in other norms, this is the first result that shows a separation between these problems. We achieve this by using a result of Dell and van Melkebeek [JACM, 2014] on the impossibility of a reduction that compresses an arbitrary $k$-SAT instance into a string of length $\mathcal{O}(n^{k-\epsilon})$ for any $\epsilon > 0$. In addition to CVP, we also show that the same result holds for the Subset-Sum problem using similar techniques.
    Comment: 32 pages, 3 figures.