70 research outputs found

    Improved asymptotic bounds for codes using distinguished divisors of global function fields

    For a prime power $q$, let $\alpha_q$ be the standard function in the asymptotic theory of codes, that is, $\alpha_q(\delta)$ is the largest asymptotic information rate that can be achieved for a given asymptotic relative minimum distance $\delta$ of $q$-ary codes. In recent years the Tsfasman-Vlăduţ-Zink lower bound on $\alpha_q(\delta)$ was improved by Elkies, Xing, and Niederreiter and Özbudak. In this paper we show further improvements on these bounds by using distinguished divisors of global function fields. We also show improved lower bounds on the corresponding function $\alpha_q^{\rm lin}$ for linear codes.
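
    As background (standard definitions, not taken from this abstract): writing $A_q(n,d)$ for the largest size of a $q$-ary code of length $n$ with minimum distance $d$, the function $\alpha_q$ and the Tsfasman-Vlăduţ-Zink bound that the paper improves on are usually stated as
    \[
        \alpha_q(\delta) \;=\; \limsup_{n \to \infty} \frac{1}{n} \log_q A_q(n, \lceil \delta n \rceil),
        \qquad
        \alpha_q(\delta) \;\geq\; 1 - \delta - \frac{1}{\sqrt{q}-1} \quad (q \text{ a square}).
    \]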

    Transitive and self-dual codes attaining the Tsfasman-Vladut-Zink bound

    A major problem in coding theory is the question of whether the class of cyclic codes is asymptotically good. In this correspondence, as a generalization of cyclic codes, the notion of transitive codes is introduced (see Definition 1.4 in Section I), and it is shown that the class of transitive codes is asymptotically good. Even more, transitive codes attain the Tsfasman-Vladut-Zink bound over $\mathbb{F}_q$, for all squares $q = \ell^2$. It is also shown that self-orthogonal and self-dual codes attain the Tsfasman-Vladut-Zink bound, thus improving previous results about self-dual codes attaining the Gilbert-Varshamov bound. The main tool is a new asymptotically optimal tower $E_0 \subseteq E_1 \subseteq E_2 \subseteq \cdots$ of function fields over $\mathbb{F}_q$ (with $q = \ell^2$), where all extensions $E_n/E_0$ are Galois.
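
    For orientation (standard facts about such towers, not claims from this abstract): a tower over $\mathbb{F}_q$ is called asymptotically optimal when the ratio of the number of rational places $N(E_n)$ to the genus $g(E_n)$ attains the Drinfeld-Vladut limit, which over $\mathbb{F}_{\ell^2}$ is what produces code families meeting the Tsfasman-Vladut-Zink bound:
    \[
        \lim_{n \to \infty} \frac{N(E_n)}{g(E_n)} \;=\; \sqrt{q} - 1 \;=\; \ell - 1,
        \qquad\text{giving asymptotic parameters}\qquad
        R \;\geq\; 1 - \delta - \frac{1}{\ell - 1}.
    \]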

    Why Philosophers Should Care About Computational Complexity

    One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory, the field that studies the resources (such as time, space, and randomness) needed to solve computational problems, leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume's problem of induction, Goodman's grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing aspects of complexity theory itself that could benefit from philosophical analysis. Comment: 58 pages, to appear in "Computability: Gödel, Turing, Church, and beyond," MIT Press, 2012. Some minor clarifications and corrections; new references added.

    Error-Correction Coding and Decoding: Bounds, Codes, Decoders, Analysis and Applications

    Coding; Communications; Engineering; Networks; Information Theory; Algorithm

    Hardness of SIS and LWE with Small Parameters

    The Short Integer Solution (SIS) and Learning With Errors (LWE) problems are the foundations for countless applications in lattice-based cryptography, and are provably as hard as approximate lattice problems in the worst case. An important question from both a practical and theoretical perspective is how small their parameters can be made, while preserving their hardness. We prove two main results on SIS and LWE with small parameters. For SIS, we show that the problem retains its hardness for moduli $q \geq \beta \cdot n^{\delta}$ for any constant $\delta > 0$, where $\beta$ is the bound on the Euclidean norm of the solution. This improves upon prior results which required $q \geq \beta \cdot \sqrt{n \log n}$, and is essentially optimal since the problem is trivially easy for $q \leq \beta$. For LWE, we show that it remains hard even when the errors are small (e.g., uniformly random from $\{0,1\}$), provided that the number of samples is small enough (e.g., linear in the dimension $n$ of the LWE secret). Prior results required the errors to have magnitude at least $\sqrt{n}$ and to come from a Gaussian-like distribution.
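
    As a concrete illustration (a minimal sketch under illustrative assumptions, not the paper's construction; all parameter values and variable names below are made up for the example), the following Python snippet generates an LWE instance in the small-parameter regime the abstract describes: errors uniform over $\{0,1\}$ and a number of samples linear in the dimension. For SIS, the analogous task would be to find a short nonzero integer vector $z$ with $Az \equiv 0 \pmod q$ and $\|z\| \leq \beta$.

        # Minimal sketch (toy parameters, numpy only) of an LWE instance with the
        # "small parameters" the abstract discusses: errors uniform over {0, 1}
        # and only m = 2n samples.
        import numpy as np

        n, m, q = 64, 128, 3329      # dimension, sample count, modulus (illustrative)
        rng = np.random.default_rng(0)

        s = rng.integers(0, q, size=n)        # secret vector in Z_q^n
        A = rng.integers(0, q, size=(m, n))   # public matrix, uniform over Z_q^{m x n}
        e = rng.integers(0, 2, size=m)        # errors drawn uniformly from {0, 1}
        b = (A @ s + e) % q                   # LWE samples: b = A*s + e (mod q)

        # Search-LWE asks to recover s from (A, b); the abstract's second result
        # asserts hardness even for binary errors when m stays linear in n.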