
    Storage codes -- coding rate and repair locality

    The {\em repair locality} of a distributed storage code is the maximum number of nodes that ever need to be contacted during the repair of a failed node. Having small repair locality is desirable, since it is proportional to the number of disk accesses during repair. However, recent publications show that small repair locality comes with a penalty in terms of code distance or storage overhead if exact repair is required. Here, we first review some of the main results on storage codes under various repair regimes and discuss the recent work on possible (information-theoretical) trade-offs between repair locality and other code parameters like storage overhead and code distance, under the exact repair regime. Then we present some new information-theoretical lower bounds on the storage overhead as a function of the repair locality, valid for all common coding and repair models. In particular, we show that if each of the $n$ nodes in a distributed storage system has storage capacity $\alpha$ and if, at any time, a failed node can be {\em functionally} repaired by contacting {\em some} set of $r$ nodes (which may depend on the actual state of the system) and downloading an amount $\beta$ of data from each, then in the extreme cases where $\alpha=\beta$ or $\alpha=r\beta$, the maximal coding rate is at most $r/(r+1)$ or $1/2$, respectively (that is, the excess storage overhead is at least $1/r$ or $1$, respectively). Comment: Accepted for publication in ICNC'13, San Diego, US
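
    As a quick numerical illustration of the bounds quoted in this abstract, the sketch below (plain Python; the helper names are illustrative and not from the paper) evaluates the rate upper bound and the implied excess storage overhead, 1/rate - 1, for the two extreme cases $\alpha=\beta$ and $\alpha=r\beta$ at a few localities r.

        # Sketch of the two extreme-case bounds stated in the abstract (helper names are
        # illustrative): with repair locality r, the maximal coding rate is at most
        # r/(r+1) when alpha = beta and at most 1/2 when alpha = r*beta.
        from fractions import Fraction

        def max_rate_bound(r: int, regime: str) -> Fraction:
            """Upper bound on the coding rate for repair locality r."""
            if regime == "alpha=beta":
                return Fraction(r, r + 1)
            if regime == "alpha=r*beta":
                return Fraction(1, 2)
            raise ValueError("regime must be 'alpha=beta' or 'alpha=r*beta'")

        def excess_overhead(rate_bound: Fraction) -> Fraction:
            """Excess storage overhead 1/rate - 1 implied by a rate upper bound."""
            return 1 / rate_bound - 1

        for r in (2, 4, 8):
            for regime in ("alpha=beta", "alpha=r*beta"):
                rb = max_rate_bound(r, regime)
                print(f"r={r}, {regime}: rate <= {rb}, excess overhead >= {excess_overhead(rb)}")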

    Locally Repairable Codes with Multiple Repair Alternatives

    Distributed storage systems need to store data redundantly in order to provide some fault-tolerance and guarantee system reliability. Different coding techniques have been proposed to provide the required redundancy more efficiently than traditional replication schemes. However, compared to replication, coding techniques are less efficient for repairing lost redundancy, as they require retrieval of larger amounts of data from larger subsets of storage nodes. To mitigate these problems, several recent works have presented locally repairable codes designed to minimize the repair traffic and the number of nodes involved per repair. Unfortunately, existing methods often lead to codes where there is only one subset of nodes able to repair a piece of lost data, limiting the local repairability to the availability of the nodes in this subset. In this paper, we present a new family of locally repairable codes that allows different trade-offs between the number of contacted nodes per repair and the number of different subsets of nodes that enable this repair. We show that slightly increasing the number of contacted nodes per repair makes it possible to have repair alternatives, which in turn increases the probability of being able to perform efficient repairs. Finally, we present pg-BLRC, an explicit construction of locally repairable codes with multiple repair alternatives, constructed from partial geometries, in particular from Generalized Quadrangles. We show how these codes can achieve practical lengths and high rates, while requiring a small number of nodes per repair and providing multiple repair alternatives. Comment: IEEE International Symposium on Information Theory (ISIT 2013)
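
    The construction in the paper is based on partial geometries, but the basic idea of multiple repair alternatives can be shown with a deliberately tiny XOR sketch (an assumed toy layout, not pg-BLRC): a block that belongs to two overlapping local parity groups can be rebuilt from whichever group happens to be fully available.

        # Toy illustration of repair alternatives (an assumed layout, not the pg-BLRC
        # construction): block d1 sits in two overlapping local parity groups, so it
        # can be rebuilt from either group.

        def xor(blocks):
            out = 0
            for b in blocks:
                out ^= b
            return out

        d1, d2, d3, d4 = 0b1010, 0b0110, 0b1100, 0b0011   # four data blocks
        p_a = xor([d1, d2, d3])                           # local parity of group A = {d1, d2, d3}
        p_b = xor([d1, d3, d4])                           # local parity of group B = {d1, d3, d4}

        repair_via_a = xor([p_a, d2, d3])                 # repair alternative 1, locality 3
        repair_via_b = xor([p_b, d3, d4])                 # repair alternative 2, locality 3
        print("d1 recovered via A:", repair_via_a == d1, "| via B:", repair_via_b == d1)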

    Association schemes from the action of PGL(2,q) fixing a nonsingular conic in PG(2,q)

    The group $PGL(2,q)$ has an embedding into $PGL(3,q)$ such that it acts as the group fixing a nonsingular conic in $PG(2,q)$. This action affords a coherent configuration $R(q)$ on the set $L(q)$ of non-tangent lines of the conic. We show that the relations can be described by using the cross-ratio. Our results imply that the restrictions $R_{+}(q)$ and $R_{-}(q)$ to the set $L_{+}(q)$ of secant lines and to the set $L_{-}(q)$ of exterior lines, respectively, are both association schemes; moreover, we show that the elliptic scheme $R_{-}(q)$ is pseudocyclic. We further show that the coherent configuration $R(q^2)$ with $q$ even allows certain fusions. These provide a 4-class fusion of the hyperbolic scheme $R_{+}(q^2)$, and 3-class fusions and 2-class fusions (strongly regular graphs) of both schemes $R_{+}(q^2)$ and $R_{-}(q^2)$. The fusion results for the hyperbolic case are known, but our approach here, as well as our results in the elliptic case, are new. Comment: 33 pages
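
    Since the relations are described through the cross-ratio, a minimal computational sketch may help; it assumes a prime field and four distinct affine points (points at infinity and proper prime-power fields are left out for brevity), and only illustrates the classical formula and its invariance under the PGL(2,q) action.

        # Sketch of the classical cross-ratio over a prime field F_q (affine points only).

        def cross_ratio(a: int, b: int, c: int, d: int, q: int) -> int:
            """(a, b; c, d) = (a-c)(b-d) / ((a-d)(b-c)) in F_q, for four distinct affine points."""
            num = (a - c) * (b - d) % q
            den = (a - d) * (b - c) % q
            return num * pow(den, -1, q) % q          # modular inverse (Python 3.8+)

        # The cross-ratio is invariant under PGL(2, q); e.g. shifting all four points:
        q, shift = 11, 5
        points = (2, 3, 7, 9)
        moved = tuple((x + shift) % q for x in points)
        print(cross_ratio(*points, q), cross_ratio(*moved, q))   # same value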

    Generating parity check equations for bounded-distance iterative erasure decoding

    A generic $(r,m)$-erasure correcting set is a collection of vectors in $\mathbb{F}_2^r$ which can be used to generate, for each binary linear code of codimension $r$, a collection of parity check equations that enables iterative decoding of all correctable erasure patterns of size at most $m$. That is to say, the only stopping sets of size at most $m$ for the generated parity check equations are the erasure patterns for which there is more than one manner to fill in the erasures to obtain a codeword. We give an explicit construction of generic $(r,m)$-erasure correcting sets of cardinality $\sum_{i=0}^{m-1} {r-1\choose i}$. Using a random-coding-like argument, we show that for fixed $m$, the minimum size of a generic $(r,m)$-erasure correcting set is linear in $r$. Keywords: iterative decoding, binary erasure channel, stopping set. Comment: Accepted for publication in Proc. Int. Symposium on Information Theory 2006, ISIT 06
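
    The iterative decoding referred to here is the standard peeling decoder for the binary erasure channel. The sketch below (an illustration on one coordinate ordering of the [7,4] Hamming code, not the generic-set construction of the paper) resolves one erasure per step whenever some parity check meets the erased set in exactly one position; whatever remains unresolved at the end is a stopping set.

        # Sketch of iterative (peeling) erasure decoding from a list of parity-check
        # equations, run on the [7,4] Hamming code in one coordinate ordering.

        def peel(checks, word, erased):
            """checks: list of position sets; word: bit list with None at erased positions."""
            word, erased = list(word), set(erased)
            progress = True
            while erased and progress:
                progress = False
                for chk in checks:
                    unknown = [i for i in chk if i in erased]
                    if len(unknown) == 1:                      # solvable check: XOR the known bits
                        i = unknown[0]
                        word[i] = sum(word[j] for j in chk if j != i) % 2
                        erased.remove(i)
                        progress = True
            return word, erased                                # non-empty leftover = stopping set

        checks = [{0, 1, 2, 4}, {0, 1, 3, 5}, {0, 2, 3, 6}]    # three parity checks of the code
        cw = [1, 0, 1, 1, 0, 0, 1]                             # a codeword of all three checks
        erased = {1, 4, 6}
        received = [None if i in erased else b for i, b in enumerate(cw)]
        decoded, stuck = peel(checks, received, erased)
        print("decoded:", decoded, "| unresolved (stopping set):", stuck or "none")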

    On parity check collections for iterative erasure decoding that correct all correctable erasure patterns of a given size

    Recently there has been interest in the construction of small parity check sets for iterative decoding of the Hamming code with the property that each uncorrectable (or stopping) set of size three is the support of a codeword and hence uncorrectable anyway. Here we reformulate and generalise the problem, and improve on this construction. First we show that a parity check collection that corrects all correctable erasure patterns of size $m$ for the $r$-th order Hamming code (i.e., the Hamming code with codimension $r$) provides for all codes of codimension $r$ a corresponding ``generic'' parity check collection with this property. This leads naturally to a necessary and sufficient condition on such generic parity check collections. We use this condition to construct a generic parity check collection for codes of codimension $r$ correcting all correctable erasure patterns of size at most $m$, for all $r$ and $m \le r$, thus generalising the known construction for $m=3$. Then we discuss optimality of our construction and show that it can be improved for $m \ge 3$ and $r$ large enough. Finally we discuss some directions for further research. Comment: 13 pages, no figures. Submitted to IEEE Transactions on Information Theory, July 28, 200
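
    For the smallest interesting case, the following sketch (again an illustration, not the paper's construction) counts how many correctable erasure patterns of size three defeat peeling decoding when only the three rows of a [7,4] Hamming parity-check matrix are used, versus when all seven nonzero dual codewords are supplied as redundant checks.

        # Sketch: count correctable size-3 erasure patterns that are stopping sets for a
        # [7,4] Hamming code, with 3 basic checks vs. all 7 nonzero dual codewords.
        from itertools import combinations

        def peels(checks, erased):
            """True if peeling decoding resolves every erased position."""
            erased = set(erased)
            progress = True
            while erased and progress:
                progress = False
                for chk in checks:
                    if len(chk & erased) == 1:   # exactly one erasure in this check: solve it
                        erased -= chk
                        progress = True
            return not erased

        cols = [7, 6, 5, 3, 4, 2, 1]             # column i of H, read as a 3-bit integer
        basic = [frozenset(i for i in range(7) if cols[i] >> r & 1) for r in (2, 1, 0)]
        dual = {frozenset(i for i in range(7) if bin(cols[i] & s).count("1") % 2)
                for s in range(1, 8)}            # all 7 nonzero dual codewords as index sets

        def correctable(e):
            # a size-3 pattern is uncorrectable iff its three H-columns sum to zero,
            # i.e. it is the support of a weight-3 codeword
            return cols[e[0]] ^ cols[e[1]] ^ cols[e[2]] != 0

        patterns = [e for e in combinations(range(7), 3) if correctable(e)]
        print("stuck with 3 basic checks:", sum(not peels(basic, e) for e in patterns))
        print("stuck with all 7 dual checks:", sum(not peels(dual, e) for e in patterns))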

    Proofs of two conjectures on ternary weakly regular bent functions

    We study ternary monomial functions of the form $f(x)=\mathrm{Tr}_n(ax^d)$, where $x\in \mathbb{F}_{3^n}$ and $\mathrm{Tr}_n: \mathbb{F}_{3^n}\to \mathbb{F}_3$ is the absolute trace function. Using a lemma of Hou \cite{hou}, Stickelberger's theorem on Gauss sums, and certain ternary weight inequalities, we show that certain ternary monomial functions arising from \cite{hk1} are weakly regular bent, settling a conjecture of Helleseth and Kholosha \cite{hk1}. We also prove that the Coulter-Matthews bent functions are weakly regular. Comment: 20 pages
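
    Bentness and weak regularity can be checked numerically in a tiny case. The sketch below works over GF(9) with the simplest quadratic example f(x) = Tr(x^2) (not the Helleseth-Kholosha or Coulter-Matthews monomials treated in the paper): f is bent iff every Walsh coefficient has absolute value 3^{n/2} = 3, and, given bentness, weakly regular iff the normalized coefficients are a fixed unimodular constant times powers of a cube root of unity, which is equivalent to their cubes all being equal.

        # Sketch for the smallest even case n = 2: GF(9) = F_3[t]/(t^2 + 1), elements
        # stored as pairs (u, v) meaning u + v*t. Test function: the quadratic bent
        # function f(x) = Tr(x^2), not the monomials from the paper.
        import cmath

        P = 3
        OMEGA = cmath.exp(2j * cmath.pi / P)                  # primitive cube root of unity
        ELEMS = [(u, v) for u in range(P) for v in range(P)]  # all 9 field elements

        def mul(x, y):
            (u, v), (a, b) = x, y                             # (u + v t)(a + b t), t^2 = -1
            return ((u * a - v * b) % P, (u * b + v * a) % P)

        def trace(x):
            # Tr(x) = x + x^3; the Frobenius of u + v t is u - v t, so Tr(u + v t) = 2u
            return (2 * x[0]) % P

        def f(x):                                             # f(x) = Tr(x^2)
            return trace(mul(x, x))

        def walsh(b):                                         # one common sign convention
            return sum(OMEGA ** ((f(x) - trace(mul(b, x))) % P) for x in ELEMS)

        spectrum = [walsh(b) for b in ELEMS]
        assert all(abs(abs(w) - 3) < 1e-9 for w in spectrum)  # bent: |W_f(b)| = 3^{n/2}
        cubes = {(round(((w / 3) ** 3).real, 6), round(((w / 3) ** 3).imag, 6)) for w in spectrum}
        print("bent; cubed normalized Walsh values:", cubes)  # one value => weakly regular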