
    Multiply Constant-Weight Codes and the Reliability of Loop Physically Unclonable Functions

    We introduce the class of multiply constant-weight codes to improve the reliability of certain physically unclonable function (PUF) responses. We extend classical coding methods to construct multiply constant-weight codes from known $q$-ary and constant-weight codes. Analogues of Johnson bounds are derived and are shown to be asymptotically tight up to a constant factor under certain conditions. We also examine the rates of the multiply constant-weight codes and, interestingly, demonstrate that these rates are the same as those of constant-weight codes with suitable parameters. Asymptotic analysis of our code constructions is provided.
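
    In the standard definition used in the literature on multiply constant-weight codes, each codeword of length $n = n_1 + \dots + n_m$ is split into $m$ consecutive blocks, and block $i$ is required to have Hamming weight exactly $w_i$; constant-weight codes are the special case $m = 1$. The following is a minimal sketch of a membership check under that assumed definition; the function names and parameters are illustrative and not taken from the paper.

        def block_weights(word, block_lengths):
            """Split a binary word into consecutive blocks and return each block's Hamming weight."""
            weights, start = [], 0
            for length in block_lengths:
                weights.append(sum(word[start:start + length]))
                start += length
            return weights

        def is_multiply_constant_weight(word, block_lengths, target_weights):
            """Check that the word has the prescribed weight in every block (illustrative definition)."""
            assert len(word) == sum(block_lengths)
            return block_weights(word, block_lengths) == list(target_weights)

        # Example: length 6 split into two blocks of length 3, each required to have weight 1.
        print(is_multiply_constant_weight([1, 0, 0, 0, 1, 0], [3, 3], [1, 1]))  # True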

    On the structure of non-full-rank perfect codes

    The Krotov combining construction of perfect 1-error-correcting binary codes from 2000 and a theorem of Heden, saying that every non-full-rank perfect 1-error-correcting binary code can be constructed by this combining construction, are generalized to the $q$-ary case. Simply put, every non-full-rank perfect code $C$ is the union of a well-defined family of $\mu$-components $K_\mu$, where $\mu$ belongs to an "outer" perfect code $C^*$, and these components are at distance three from each other. Components from distinct codes can thus freely be combined to obtain new perfect codes. The Phelps general product construction of perfect binary codes from 1984 is generalized to obtain $\mu$-components, and new lower bounds on the number of perfect 1-error-correcting $q$-ary codes are presented.
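
    As background, a $q$-ary code of length $n$ with minimum distance 3 is perfect 1-error-correcting exactly when the radius-1 Hamming spheres around its codewords tile the whole space, i.e. $|C| \cdot (1 + n(q-1)) = q^n$. The sketch below is a brute-force check of this well-known characterization, not of the combining construction itself; the [7,4] binary Hamming code is used as an illustrative example.

        from itertools import product

        def hamming_distance(a, b):
            return sum(x != y for x, y in zip(a, b))

        def is_perfect_single_error_correcting(code, n, q):
            """Minimum distance >= 3 plus the sphere-packing equality characterize perfect 1-error-correcting codes."""
            min_dist = min(hamming_distance(a, b) for a in code for b in code if a != b)
            return min_dist >= 3 and len(code) * (1 + n * (q - 1)) == q ** n

        # The [7,4] binary Hamming code: words whose syndrome w.r.t. the parity-check matrix
        # with columns 1..7 (written in binary) is zero.
        H = [[(col >> bit) & 1 for col in range(1, 8)] for bit in range(3)]
        code = [w for w in product([0, 1], repeat=7)
                if all(sum(h * x for h, x in zip(row, w)) % 2 == 0 for row in H)]
        print(is_perfect_single_error_correcting(code, n=7, q=2))  # True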

    It'll probably work out: improved list-decoding through random operations

    In this work, we introduce a framework to study the effect of random operations on the combinatorial list-decodability of a code. The operations we consider correspond to row and column operations on the matrix obtained from the code by stacking the codewords together as columns. This captures many natural transformations on codes, such as puncturing, folding, and taking subcodes; we show that many such operations can improve the list-decoding properties of a code. There are two main points to this. First, our goal is to advance our (combinatorial) understanding of list-decodability by understanding what structure (or lack thereof) is necessary to obtain it. Second, we use our more general results to obtain a few interesting corollaries for list decoding: (1) We show the existence of binary codes that are combinatorially list-decodable from a $1/2-\epsilon$ fraction of errors with optimal rate $\Omega(\epsilon^2)$ that can be encoded in linear time. (2) We show that any code with $\Omega(1)$ relative distance, when randomly folded, is combinatorially list-decodable from a $1-\epsilon$ fraction of errors with high probability. This formalizes the intuition for why the folding operation has been successful in obtaining codes with optimal list-decoding parameters; previously, all arguments used algebraic methods and worked only with specific codes. (3) We show that any code which is list-decodable with suboptimal list sizes has many subcodes which have near-optimal list sizes, while retaining the error-correcting capabilities of the original code. This generalizes recent results where subspace evasive sets have been used to reduce list sizes of codes that achieve list-decoding capacity.
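
    The folding operation referred to in (2) groups positions of each codeword into blocks and treats each block as a single symbol over a larger alphabet; one natural reading of "random folding" is to permute the coordinates at random before grouping. The sketch below follows that assumed reading; the function names, the fixed block size, and the permutation-then-group scheme are illustrative, not the paper's exact construction.

        import random

        def fold(codeword, block_size):
            """Group consecutive positions into blocks; each block becomes one symbol of the folded codeword."""
            return [tuple(codeword[i:i + block_size]) for i in range(0, len(codeword), block_size)]

        def random_fold(code, block_size, seed=0):
            """Apply one random coordinate permutation to every codeword, then fold (assumed reading)."""
            rng = random.Random(seed)
            n = len(next(iter(code)))
            perm = rng.sample(range(n), n)
            return [fold([cw[i] for i in perm], block_size) for cw in code]

        # Example: folding two 4-bit codewords into 2-symbol words over the alphabet {0,1}^2.
        print(random_fold([(0, 1, 1, 0), (1, 1, 0, 0)], block_size=2))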