20 research outputs found

    The decoding failure probability of MDPC codes

    Moderate Density Parity Check (MDPC) codes are defined here as codes which have a parity-check matrix whose row weight is $O(\sqrt{n})$, where $n$ is the length of the code. They can be decoded like LDPC codes, but they correct far fewer errors: the number of errors they can decode in this case is of order $\Theta(\sqrt{n})$. Despite this fact, they have proved very useful in cryptography for devising key exchange mechanisms. They have also been proposed in McEliece-type cryptosystems. However, in this case the parameters proposed in \cite{MTSB13} were broken in \cite{GJS16}. This attack exploits the fact that the decoding failure probability is non-negligible. We show here that this attack can be thwarted by choosing the parameters in a more conservative way. We first show that such codes can decode, with a simple bit-flipping decoder, any pattern of $O\left(\frac{\sqrt{n}\log\log n}{\log n}\right)$ errors. This avoids the previous attack at the cost of significantly increasing the key size of the scheme. We then show that, under a very reasonable assumption, the decoding failure probability decays almost exponentially with the code length with just two iterations of bit-flipping. With an additional assumption, it has even been proved that it decays exponentially with an unbounded number of iterations, and we show that in this case the increase of the key size required for resisting the attack of \cite{GJS16} is only moderate.
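
    As a rough illustration of the bit-flipping decoding mentioned above, the following Python sketch implements a plain majority-threshold bit-flipping decoder for a binary parity-check matrix. It is a minimal sketch under assumed conventions (dense 0/1 numpy arrays, a simple majority flipping rule, a fixed iteration budget), not the decoder or the parameters analysed in the paper.

    ```python
    import numpy as np

    def bit_flip_decode(H, y, max_iters=2):
        """Toy bit-flipping decoder: H is an (r, n) binary parity-check
        matrix, y a length-n received word; both are 0/1 numpy arrays."""
        y = y.copy()
        for _ in range(max_iters):
            syndrome = H.dot(y) % 2              # 1 = unsatisfied parity check
            if not syndrome.any():
                return y, True                   # all checks satisfied
            # for each bit, count the unsatisfied checks it participates in
            unsat = H.T.dot(syndrome)
            col_weight = H.sum(axis=0)           # total checks per bit
            flip = unsat > col_weight / 2        # simple majority threshold
            y = (y + flip.astype(int)) % 2
        return y, not (H.dot(y) % 2).any()
    ```

    In the MDPC setting above, the rows of H would have weight of order $\sqrt{n}$; practical MDPC decoders typically use more refined flipping thresholds than this simple majority vote.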

    Tradeoffs for nearest neighbors on the sphere

    We consider tradeoffs between the query and update complexities for the (approximate) nearest neighbor problem on the sphere, extending the recent spherical filters to sparse regimes and generalizing the scheme and analysis to account for different tradeoffs. In a nutshell, for the sparse regime the tradeoff between the query complexity $n^{\rho_q}$ and update complexity $n^{\rho_u}$ for data sets of size $n$ is given by the following equation in terms of the approximation factor $c$ and the exponents $\rho_q$ and $\rho_u$: $c^2\sqrt{\rho_q} + (c^2-1)\sqrt{\rho_u} = \sqrt{2c^2-1}$. For small $c = 1+\epsilon$, minimizing the time for updates leads to a linear space complexity at the cost of a query time complexity $n^{1-4\epsilon^2}$. Balancing the query and update costs leads to optimal complexities $n^{1/(2c^2-1)}$, matching bounds from [Andoni-Razenshteyn, 2015] and [Dubiner, IEEE-TIT'10] and matching the asymptotic complexities of [Andoni-Razenshteyn, STOC'15] and [Andoni-Indyk-Laarhoven-Razenshteyn-Schmidt, NIPS'15]. A subpolynomial query time complexity $n^{o(1)}$ can be achieved at the cost of a space complexity of the order $n^{1/(4\epsilon^2)}$, matching the bound $n^{\Omega(1/\epsilon^2)}$ of [Andoni-Indyk-Patrascu, FOCS'06] and [Panigrahy-Talwar-Wieder, FOCS'10] and improving upon results of [Indyk-Motwani, STOC'98] and [Kushilevitz-Ostrovsky-Rabani, STOC'98]. For large $c$, minimizing the update complexity results in a query complexity of $n^{2/c^2 + O(1/c^4)}$, improving upon the related exponent for large $c$ of [Kapralov, PODS'15] by a factor 2, and matching the bound $n^{\Omega(1/c^2)}$ of [Panigrahy-Talwar-Wieder, FOCS'08]. Balancing the costs leads to optimal complexities $n^{1/(2c^2-1)}$, while a minimum query time complexity can be achieved with update complexity $n^{2/c^2 + O(1/c^4)}$, improving upon the previous best exponents of Kapralov by a factor 2. Comment: 16 pages, 1 table, 2 figures. Mostly subsumed by arXiv:1608.03580 [cs.DS] (along with arXiv:1605.02701 [cs.DS]).
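
    To make the tradeoff equation above concrete, the short Python sketch below (illustrative only; the function name and the chosen value of $c$ are assumptions, not from the paper) solves $c^2\sqrt{\rho_q} + (c^2-1)\sqrt{\rho_u} = \sqrt{2c^2-1}$ for $\rho_u$ given $\rho_q$, and checks that the balanced point $\rho_q = \rho_u = 1/(2c^2-1)$ satisfies it.

    ```python
    import math

    def rho_u(c, rho_q):
        """Solve c^2*sqrt(rho_q) + (c^2 - 1)*sqrt(rho_u) = sqrt(2c^2 - 1) for rho_u."""
        rhs = math.sqrt(2 * c**2 - 1) - c**2 * math.sqrt(rho_q)
        if rhs < 0:
            raise ValueError("no valid rho_u: rho_q is too large for this c")
        return (rhs / (c**2 - 1)) ** 2

    c = 2.0
    balanced = 1.0 / (2 * c**2 - 1)   # balanced exponent 1/(2c^2 - 1)
    print(rho_u(c, balanced))         # ~0.1429, equal to the balanced exponent
    print(rho_u(c, 0.0))              # update exponent when the query exponent is driven to 0
    ```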

    Statistical Decoding

    The security of code-based cryptography relies primarily on the hardness of generic decoding with linear codes. The best generic decoding algorithms are all improvements of an old algorithm due to Prange: they are known under the name of information set decoding (ISD) techniques. A while ago, a generic decoding algorithm which does not belong to this family was proposed: statistical decoding. It is a randomized algorithm that requires the computation of a large set of parity-check equations of moderate weight. We solve here several open problems related to this decoding algorithm. In particular, we give the asymptotic complexity of this algorithm, give a rather efficient way of computing the parity-check equations needed for it inspired by ISD techniques, and give a lower bound on its complexity showing that, when it comes to decoding on the Gilbert-Varshamov bound, it can never be better than Prange's algorithm.
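
    For intuition, here is a toy Python sketch of the basic statistical-decoding vote: given a large set of moderate-weight parity checks $h$ of the code and a received word $y = c + e$, the inner products $\langle h, y\rangle = \langle h, e\rangle$ are biased differently at a position $i$ depending on whether the error bit $e_i$ is 0 or 1. The array names and the single fixed threshold are illustrative assumptions; the paper's contributions (asymptotic complexity, producing the parity-check equations, the lower bound) are not reproduced here.

    ```python
    import numpy as np

    def statistical_vote(checks, y, threshold):
        """Toy statistical-decoding vote (illustrative, not the paper's algorithm).

        checks : (N, n) 0/1 array, each row a moderate-weight parity check of the code
        y      : length-n received word y = c + e, so <h, y> = <h, e> for every check h
        Returns an estimate of the error vector e from per-position empirical biases.
        """
        n = y.shape[0]
        votes = checks.dot(y) % 2                # <h, y> for every check h
        e_hat = np.zeros(n, dtype=int)
        for i in range(n):
            involved = checks[:, i] == 1         # checks whose support contains position i
            if involved.any():
                bias = votes[involved].mean()    # empirical Pr[<h, y> = 1 | h_i = 1]
                e_hat[i] = int(bias > threshold) # decide e_i from the observed bias
        return e_hat
    ```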