
    Information Sets of Multiplicity Codes

    We provide a method for systematic encoding of the multiplicity codes introduced by Kopparty, Saraf and Yekhanin in 2011. The construction is built on an idea of Kopparty. We properly define information sets for these codes and give detailed proofs, based on generating functions, of the validity of Kopparty's construction. We also give a complexity estimate of the associated encoding algorithm. Comment: International Symposium on Information Theory, Jun 2015, Hong Kong, China. IEEE.
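
    A minimal sketch of the evaluation map behind multiplicity codes may help fix ideas. The snippet below (univariate for brevity; the codes of Kopparty, Saraf and Yekhanin are multivariate, and the paper's systematic encoder is more involved than this plain evaluation map) evaluates a polynomial together with its Hasse derivatives at every field point. The parameters q and s are toy values chosen for illustration.

        from math import comb

        q, s = 7, 2  # toy parameters: base field F_7, multiplicity order s = 2

        def hasse_derivative(coeffs, k):
            # k-th Hasse derivative of f(x) = sum_i coeffs[i] * x^i over F_q:
            # D^k f has coefficient binom(i, k) * coeffs[i] on x^(i - k)
            return [comb(i, k) * c % q for i, c in enumerate(coeffs) if i >= k]

        def evaluate(coeffs, a):
            return sum(c * pow(a, i, q) for i, c in enumerate(coeffs)) % q

        def encode(f_coeffs):
            # codeword symbol at each point a: (f(a), f^(1)(a), ..., f^(s-1)(a))
            return [tuple(evaluate(hasse_derivative(f_coeffs, k), a) for k in range(s))
                    for a in range(q)]

        print(encode([1, 2, 3]))  # f(x) = 1 + 2x + 3x^2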

    List and Probabilistic Unique Decoding of Folded Subspace Codes

    A new class of folded subspace codes for noncoherent network coding is presented. The codes can correct insertions and deletions beyond the unique decoding radius for any code rate R ∈ [0,1]. An efficient interpolation-based decoding algorithm for this code construction is given which can correct insertions and deletions up to the normalized radius s(1 − ((1/h + h)/(h − s + 1))R), where h is the folding parameter and s ≤ h is a decoding parameter. The algorithm serves as a list decoder or as a probabilistic unique decoder that outputs a unique solution with high probability. An upper bound on the average list size of (folded) subspace codes and on the decoding failure probability is derived. A major benefit of the decoding scheme is that it enables probabilistic unique decoding up to the list decoding radius. Comment: 6 pages, 1 figure, accepted for ISIT 2015.
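
    The radius expression is easy to explore numerically. The sketch below simply evaluates the normalized decoding radius quoted in the abstract for illustrative parameter choices (h, R and the range of s are hypothetical values, not taken from the paper), showing how the trade-off moves as the decoding parameter s varies.

        def normalized_radius(h, s, R):
            # s * (1 - ((1/h + h) / (h - s + 1)) * R), as quoted in the abstract
            return s * (1 - ((1 / h + h) / (h - s + 1)) * R)

        h, R = 4, 0.2  # hypothetical folding parameter and code rate
        for s in range(1, h + 1):
            print(f"s = {s}: normalized radius {normalized_radius(h, s, R):.4f}")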

    A Storage-Efficient and Robust Private Information Retrieval Scheme Allowing Few Servers

    Since the concept of locally decodable codes was introduced by Katz and Trevisan in 2000, it has been well known that information-theoretically secure private information retrieval (PIR) schemes can be built from locally decodable codes. In this paper, we construct a Byzantine-robust PIR scheme using the multiplicity codes introduced by Kopparty et al. Our main contributions are, on the one hand, to avoid full replication of the database on each server, which significantly reduces the global redundancy, and, on the other hand, to achieve a much lower locality in the PIR context than in the LDC context. This shows that there exist two different notions: LDC-locality and PIR-locality. This is made possible by exploiting geometric properties of multiplicity codes.
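
    For readers new to information-theoretically secure PIR, the classic two-server XOR scheme of Chor, Goldreich, Kushilevitz and Sudan (explicitly not the multiplicity-code scheme of this paper, which avoids full replication) illustrates the basic privacy idea in a few lines: each server sees only a uniformly random subset of indices, yet the client recovers its bit.

        import secrets

        def server_answer(db, query):
            # a server XORs together the database bits selected by the query set
            ans = 0
            for j in query:
                ans ^= db[j]
            return ans

        def retrieve(db, i):
            n = len(db)
            S1 = {j for j in range(n) if secrets.randbits(1)}  # uniformly random subset
            S2 = S1 ^ {i}  # symmetric difference toggles index i
            # each query alone is uniformly distributed, so neither server learns i
            return server_answer(db, S1) ^ server_answer(db, S2)  # = db[i]

        db = [1, 0, 1, 1, 0, 0, 1, 0]
        assert all(retrieve(db, i) == db[i] for i in range(len(db)))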

    Lifted Multiplicity Codes and the Disjoint Repair Group Property

    Lifted Reed-Solomon codes (Guo, Kopparty, Sudan 2013) were introduced in the context of locally correctable and testable codes. They are multivariate polynomials whose restriction to any line is a codeword of a Reed-Solomon code. We consider a generalization of their construction, which we call lifted multiplicity codes. These are multivariate polynomial codes whose restriction to any line is a codeword of a multiplicity code (Kopparty, Saraf, Yekhanin 2014). We show that lifted multiplicity codes have a better trade-off between redundancy and a notion of locality called the t-disjoint-repair-group property than previously known constructions. More precisely, we show that, for t ≤ √N, lifted multiplicity codes with length N and redundancy O(t^{0.585} √N) have the property that any symbol of a codeword can be reconstructed in t different ways, each using a disjoint subset of the other coordinates. This gives the best known trade-off for this problem for any super-constant t < √N. We also give an alternative analysis of lifted Reed-Solomon codes using dual codes, which may be of independent interest.
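
    A toy example of disjoint repair groups in the lifted Reed-Solomon setting (a sketch under simplifying assumptions, not the lifted multiplicity construction itself): encode a low-degree bivariate polynomial by its evaluation table on F_p x F_p; any symbol can then be recovered by interpolating along a line through its point, and lines through that point in distinct directions meet only there, so they form disjoint repair groups.

        p = 11  # toy prime field F_11

        def f(x, y):
            # a sample bivariate polynomial of low total degree; its evaluation
            # table on F_p x F_p plays the role of the codeword
            return (1 + 2 * x + 3 * y + 4 * x * y) % p

        def repair(P, direction):
            # repair group: the other p-1 points on the line {P + t * direction}.
            # Restricted to the line, f is a low-degree univariate polynomial g(t),
            # so f(P) = g(0) is found by Lagrange interpolation from t = 1..p-1.
            (px, py), (dx, dy) = P, direction
            val = 0
            for t in range(1, p):
                x, y = (px + t * dx) % p, (py + t * dy) % p
                num = den = 1
                for u in range(1, p):
                    if u != t:
                        num = num * (0 - u) % p
                        den = den * (t - u) % p
                val = (val + f(x, y) * num * pow(den, -1, p)) % p
            return val

        P = (2, 5)
        # distinct directions give repair groups that meet only at P itself
        assert repair(P, (1, 0)) == repair(P, (0, 1)) == f(*P)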

    Unbalanced Expanders from Multiplicity Codes

    In 2007, Guruswami, Umans and Vadhan (GUV) gave an explicit construction of a lossless condenser based on Parvaresh-Vardy codes. This lossless condenser is a basic building block in many constructions and, in particular, underlies the state-of-the-art extractor constructions. We give an alternative construction based on multiplicity codes. While the bottom-line result is similar to the GUV result, the analysis is very different. In GUV (and in Parvaresh-Vardy codes), the polynomial ring is reduced modulo an irreducible polynomial to obtain a finite field, and every polynomial is associated with related elements of that field. In our construction, a polynomial from the polynomial ring is associated with its iterated derivatives. Our analysis boils down to solving a differential equation over a finite field, and uses techniques introduced by Kopparty [Swastik Kopparty, 2015] for the list-decoding setting. We also observe that these (and more general) questions were studied in differential algebra, and we adopt the terminology and results developed there. We believe these techniques have the potential to yield better constructions and to resolve the current bottlenecks in the area.
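
    The following toy sketch illustrates the phrase "a polynomial is associated with its iterated derivatives" (an illustration only; the condenser's actual parameters, seed handling and analysis are in the paper): a message polynomial over F_p is mapped to the evaluations of itself and its first few formal derivatives at a point.

        p, m = 101, 3  # toy parameters: prime field F_101, m output coordinates

        def derivative(coeffs):
            # formal derivative over F_p (unproblematic here: p exceeds the degree)
            return [i * c % p for i, c in enumerate(coeffs)][1:]

        def condense(coeffs, a):
            # map f to (f(a), f'(a), ..., f^(m-1)(a)) at the point a
            out = []
            for _ in range(m):
                out.append(sum(c * pow(a, i, p) for i, c in enumerate(coeffs)) % p)
                coeffs = derivative(coeffs)
            return tuple(out)

        print(condense([3, 1, 4, 1, 5], a=7))  # f(x) = 3 + x + 4x^2 + x^3 + 5x^4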

    List Decoding of Rank-Metric Codes with Row-To-Column Ratio Bigger Than 1/2

    Despite numerous results on the list decoding of Hamming-metric codes, the development of list decoding for rank-metric codes has not been as rapid. In both metrics, the limits of list decoding are governed by the Gilbert-Varshamov bound. In the Hamming metric, the Gilbert-Varshamov bound is a trade-off among rate, decoding radius and alphabet size, while in the rank metric it is a trade-off among rate, decoding radius and column-to-row ratio (i.e., the ratio between the number of columns and the number of rows). Hence, alphabet size and column-to-row ratio play similar roles for list decodability in the respective metrics. In the Hamming metric, it is more challenging to list decode codes over smaller alphabets. In contrast, in the rank metric, it is more difficult to list decode codes with large column-to-row ratio; in particular, it is extremely difficult to list decode square-matrix rank-metric codes (i.e., those with column-to-row ratio equal to 1). The main purpose of this paper is to explicitly construct a class of rank-metric codes 𝒞 of rate R with column-to-row ratio up to 2/3, and to efficiently list decode these codes with decoding radius beyond (1−R)/2 (note that (1−R)/2 is at least half of the relative minimum distance δ). In the literature, the largest column-to-row ratio of rank-metric codes that can be efficiently list decoded beyond half the minimum distance is 1/2. Thus, it is greatly desired to design efficient list decoding algorithms for rank-metric codes with column-to-row ratio bigger than 1/2, or even close to 1. Our key idea is to compress an element of the field F_{q^n} into a smaller F_q-subspace via a linearized polynomial; the column-to-row ratio thus increases at the price of a reduced code rate. Our result shows that this compression technique is powerful, and it had not previously been employed in list decoding in either the Hamming or the rank metric. Apart from this algebraic technique, we follow some standard techniques to prune down the list. The algebraic idea enables us to pin down the message to a structured subspace of dimension linear in the number n of columns. This "periodic" structure allows us to pre-encode the message to prune down the list.
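
    The compression idea can be seen in miniature (a sketch with toy parameters, not the paper's construction): a q-linearized polynomial L(x) = Σ a_i x^(q^i) is F_q-linear, so its image is an F_q-subspace, and whenever L has a nontrivial kernel the whole field is compressed into a strictly smaller subspace. Below, L(x) = x^2 + x over GF(2^4) maps the 16 field elements onto an F_2-subspace of dimension 3.

        MOD = 0b10011  # x^4 + x + 1, irreducible over F_2, defining GF(16)

        def gf16_mul(a, b):
            # carry-less ("Russian peasant") multiplication in GF(2^4)
            r = 0
            while b:
                if b & 1:
                    r ^= a
                b >>= 1
                a <<= 1
                if a & 0x10:
                    a ^= MOD
            return r

        def L(x):
            # q-linearized polynomial x^(q^1) + x^(q^0) with q = 2, i.e. x^2 + x
            return gf16_mul(x, x) ^ x

        # addition in GF(16) is XOR; L(a + b) = L(a) + L(b) confirms F_2-linearity
        assert all(L(a ^ b) == L(a) ^ L(b) for a in range(16) for b in range(16))

        image = {L(x) for x in range(16)}
        print(len(image))  # 8: GF(16) is compressed into a subspace of dimension 3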

    List and Unique Error-Erasure Decoding of Interleaved Gabidulin Codes with Interpolation Techniques

    A new interpolation-based decoding principle for interleaved Gabidulin codes is presented. The approach consists of two steps: first, a multivariate linearized polynomial is constructed which interpolates the coefficients of the received word; second, the roots of this polynomial have to be found. Due to the specific structure of the interpolation polynomial, both steps (interpolation and root-finding) can be accomplished by solving a linear system of equations. This decoding principle can be applied as a list decoding algorithm (where the list size is not necessarily polynomially bounded) as well as an efficient probabilistic unique decoding algorithm. For the unique decoder, we show a connection to known unique decoding approaches and give an upper bound on the failure probability. Finally, we generalize our approach to incorporate not only errors, but also row and column erasures. Comment: accepted for Designs, Codes and Cryptography; presented in part at WCC 2013, Bergen, Norway.
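
    Since both decoding steps reduce to linear algebra, the computational core is a linear-system solver over a finite field. The sketch below is a generic Gauss-Jordan solver over a prime field F_p, offered as an illustrative stand-in: the systems arising for interleaved Gabidulin codes actually involve linearized polynomials over extension fields.

        def solve_mod_p(A, b, p):
            # Gauss-Jordan elimination over F_p: returns one solution of A x = b,
            # or None if the system is inconsistent (free variables are set to 0)
            n, m = len(A), len(A[0])
            M = [[A[i][j] % p for j in range(m)] + [b[i] % p] for i in range(n)]
            pivots, row = [], 0
            for col in range(m):
                sel = next((i for i in range(row, n) if M[i][col]), None)
                if sel is None:
                    continue
                M[row], M[sel] = M[sel], M[row]
                inv = pow(M[row][col], -1, p)  # modular inverse of the pivot
                M[row] = [v * inv % p for v in M[row]]
                for i in range(n):
                    if i != row and M[i][col]:
                        fac = M[i][col]
                        M[i] = [(v - fac * w) % p for v, w in zip(M[i], M[row])]
                pivots.append((row, col))
                row += 1
            if any(M[i][m] for i in range(row, n)):  # zero row, nonzero right side
                return None
            x = [0] * m
            for r, c in pivots:
                x[c] = M[r][m]
            return x

        # toy check over F_7: x + 2y = 3 and 3x + y = 5  ->  solution (0, 5)
        print(solve_mod_p([[1, 2], [3, 1]], [3, 5], 7))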