18 research outputs found

    The Capacity of Online (Causal) $q$-ary Error-Erasure Channels

    Full text link
    In the $q$-ary online (or "causal") channel coding model, a sender wishes to communicate a message to a receiver by transmitting a codeword $\mathbf{x} = (x_1,\ldots,x_n) \in \{0,1,\ldots,q-1\}^n$ symbol by symbol via a channel limited to at most $pn$ errors and/or $p^{*}n$ erasures. The channel is "online" in the sense that at the $i$-th step of communication the channel decides whether to corrupt the $i$-th symbol or not based on its view so far, i.e., its decision depends only on the transmitted symbols $(x_1,\ldots,x_i)$. This is in contrast to the classical adversarial channel, in which the corruption is chosen by a channel that has full knowledge of the sent codeword $\mathbf{x}$. In this work we study the capacity of $q$-ary online channels for a combined corruption model, in which the channel may impose at most $pn$ {\em errors} and at most $p^{*}n$ {\em erasures} on the transmitted codeword. The online channel (in both the error and erasure case) has seen a number of recent studies which present both upper and lower bounds on its capacity. In this work, we give a full characterization of the capacity as a function of $q$, $p$, and $p^{*}$. Comment: This is a new version of the binary case, which can be found at arXiv:1412.637
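    The online model described in this abstract is simple enough to simulate directly. Below is a minimal, illustrative Python sketch of a causal adversary with separate error and erasure budgets; the function names and the greedy toy strategy are our own assumptions and are not taken from the paper, and no attempt is made to implement the capacity-achieving schemes it characterizes.

    ```python
    import random

    def online_adversary_channel(x, q, p, p_star, decide):
        """Transmit x over a causal adversary limited to at most p*n errors
        and p_star*n erasures. `decide` sees only the prefix x[:i+1] plus the
        remaining budgets and returns 'pass', 'error', or 'erase'.
        Erased positions are marked with None."""
        n = len(x)
        err_budget, ers_budget = int(p * n), int(p_star * n)
        y = []
        for i, xi in enumerate(x):
            action = decide(x[: i + 1], err_budget, ers_budget)
            if action == "error" and err_budget > 0:
                y.append((xi + random.randrange(1, q)) % q)  # replace with a different symbol
                err_budget -= 1
            elif action == "erase" and ers_budget > 0:
                y.append(None)
                ers_budget -= 1
            else:
                y.append(xi)
        return y

    # A toy causal strategy: corrupt greedily while any budget remains.
    def greedy(prefix, err_left, ers_left):
        return "error" if err_left > 0 else ("erase" if ers_left > 0 else "pass")

    x = [random.randrange(4) for _ in range(20)]  # a q = 4 "codeword"
    print(online_adversary_channel(x, q=4, p=0.2, p_star=0.1, decide=greedy))
    ```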

    Rack-aware minimum-storage regenerating codes with optimal access

    Full text link
    We derive a lower bound on the amount of information accessed to repair failed nodes within a single rack from any number of helper racks in the rack-aware storage model, which allows collective information processing among the nodes that share the same rack. Furthermore, we construct a family of rack-aware minimum-storage regenerating (MSR) codes with the property that the number of symbols accessed for repairing a single failed node attains the bound with equality for all admissible parameters. Previously, constructions of rack-aware optimal-access MSR codes were known only for limited parameters. We also present a family of Reed-Solomon (RS) codes that require accessing only a relatively small number of symbols to repair multiple failed nodes in a single rack. In particular, for certain code parameters, the RS construction attains the bound on the access complexity with equality and thus has optimal access.
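    For readers unfamiliar with the rack-aware model, the sketch below only illustrates the bookkeeping that distinguishes access from cross-rack repair traffic: helper racks may aggregate the symbols they read before sending anything to the host rack. The layout, parameter names, and per-node read pattern are illustrative assumptions, not the paper's construction or its access bound.

    ```python
    # Minimal sketch of the rack-aware storage layout (assumed parameters, not the
    # paper's construction): n nodes split into racks of size u; repairing one
    # failed node reads symbols inside its host rack without cross-rack traffic,
    # while each helper rack aggregates what it reads before one cross-rack transfer.
    def rack_aware_repair_accounting(n, u, failed, helper_racks, reads_per_node):
        assert n % u == 0 and 0 <= failed < n
        host = failed // u
        accessed = cross_rack_messages = 0
        accessed += (u - 1) * reads_per_node        # survivors in the host rack
        for r in helper_racks:
            if r == host:
                continue
            accessed += u * reads_per_node          # symbols read inside the helper rack
            cross_rack_messages += 1                # one aggregated message leaves the rack
        return {"symbols_accessed": accessed, "cross_rack_messages": cross_rack_messages}

    print(rack_aware_repair_accounting(n=12, u=3, failed=5, helper_racks=[0, 2, 3],
                                       reads_per_node=1))
    ```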

    Codes with efficient erasure correction

    Get PDF
    Distributed storage systems are becoming increasingly ubiquitous in the emerging era of the Internet of Things. Major internet technology companies employ large-scale distributed storage systems to accommodate the massive amounts of data generated and requested by global users. The need for reliable and efficient storage of immense amounts of data calls for new applications and development of classical error-correcting codes. This dissertation is devoted to a study of codes with efficient erasure correction for distributed storage systems. The efficiency of erasure correction is often assessed by two performance metrics, bandwidth and locality. In this dissertation we address several problems for each of these two metrics. We construct families of codes with optimal communication complexity for erasure correction ("repair bandwidth") for a heterogeneous storage model, and derive several results for the problem of optimal repair of Reed-Solomon codes. We also construct families of cyclic and convolutional codes with locality, extending the range of parameters for which such families were previously known.
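    As a concrete illustration of the locality metric mentioned above, the toy example below repairs a single erased symbol from a small local group using one XOR parity; the code and parameters are a textbook-style illustration, not one of the dissertation's constructions.

    ```python
    # Toy illustration of locality: data symbols are grouped into local groups of
    # size r, each protected by one XOR parity, so any single erasure is repaired
    # by reading only the r surviving symbols of its group.
    from functools import reduce

    def encode_with_locality(data, r):
        """Split `data` (list of ints) into groups of r and append an XOR parity."""
        groups = [data[i:i + r] for i in range(0, len(data), r)]
        return [g + [reduce(lambda a, b: a ^ b, g)] for g in groups]

    def repair(group, erased_index):
        """Recover the erased symbol by XOR-ing the surviving symbols of its group."""
        survivors = [s for j, s in enumerate(group) if j != erased_index]
        return reduce(lambda a, b: a ^ b, survivors)

    stripes = encode_with_locality([3, 7, 1, 4, 9, 2], r=3)
    lost = stripes[0][1]                      # pretend symbol 1 of group 0 is erased
    assert repair(stripes[0], 1) == lost
    print(stripes, "repaired:", repair(stripes[0], 1))
    ```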

    A characterization of the capacity of online (causal) binary channels

    Full text link
    In the binary online (or "causal") channel coding model, a sender wishes to communicate a message to a receiver by transmitting a codeword $\mathbf{x} = (x_1,\ldots,x_n) \in \{0,1\}^n$ bit by bit via a channel limited to at most $pn$ corruptions. The channel is "online" in the sense that at the $i$-th step of communication the channel decides whether to corrupt the $i$-th bit or not based on its view so far, i.e., its decision depends only on the transmitted bits $(x_1,\ldots,x_i)$. This is in contrast to the classical adversarial channel, in which the error is chosen by a channel that has full knowledge of the sent codeword $\mathbf{x}$. In this work we study the capacity of binary online channels for two corruption models: the {\em bit-flip} model, in which the channel may flip at most $pn$ of the bits of the transmitted codeword, and the {\em erasure} model, in which the channel may erase at most $pn$ bits of the transmitted codeword. Specifically, for both error models we give a full characterization of the capacity as a function of $p$. The online channel (in both the bit-flip and erasure case) has seen a number of recent studies which present both upper and lower bounds on its capacity. In this work, we present and analyze a coding scheme that improves on the previously suggested lower bounds and matches the previously suggested upper bounds, thus implying a tight characterization.

    Class-level Structural Relation Modelling and Smoothing for Visual Representation Learning

    Full text link
    Representation learning for images has been advanced by recent progress in more complex neural models, such as Vision Transformers, and new learning theories, such as structural causal models. However, these models mainly rely on the classification loss to implicitly regularize the class-level data distributions, and they may face difficulties when handling classes with diverse visual patterns. We argue that incorporating the structural information between data samples may improve this situation. To achieve this goal, this paper presents a framework termed Class-level Structural Relation Modeling and Smoothing for Visual Representation Learning (CSRMS), which includes Class-level Relation Modelling, Class-aware Graph Sampling, and Relational Graph-Guided Representation Learning modules to model a relational graph of the entire dataset and perform class-aware smoothing and regularization operations to alleviate the issues of intra-class visual diversity and inter-class similarity. Specifically, the Class-level Relation Modelling module uses a clustering algorithm to learn the data distributions in the feature space and identify three types of class-level sample relations for the training set; the Class-aware Graph Sampling module extends the typical training-batch construction process with three strategies to sample dataset-level sub-graphs; and the Relational Graph-Guided Representation Learning module employs a graph convolution network with knowledge-guided smoothing operations to ease the projection from different visual patterns to the same class. Experiments demonstrate the effectiveness of structured knowledge modelling for enhanced representation learning and show that CSRMS can be incorporated with any state-of-the-art visual representation learning model for performance gains. The source code and demos have been released at https://github.com/czt117/CSRMS
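    To make the general idea concrete, here is a minimal, self-contained sketch of class-aware graph smoothing: connect batch samples that share a cluster assignment and apply one normalized graph-convolution step to their features. The function name, the single-hop smoothing, and the random toy inputs are our illustrative assumptions; the actual CSRMS modules are those described in the paper and the repository linked above.

    ```python
    import numpy as np

    def class_aware_smoothing(features, cluster_ids, weight):
        """One normalized graph-convolution step over a batch graph that connects
        samples assigned to the same cluster (self-loops are implicit, since every
        sample shares its own cluster)."""
        adj = (cluster_ids[:, None] == cluster_ids[None, :]).astype(float)
        deg_inv = 1.0 / adj.sum(axis=1, keepdims=True)
        return np.maximum(deg_inv * adj @ features @ weight, 0.0)  # ReLU(D^-1 A X W)

    rng = np.random.default_rng(0)
    batch_feats = rng.normal(size=(8, 16))       # 8 samples, 16-dim features
    clusters = rng.integers(0, 3, size=8)        # toy cluster assignments
    W = rng.normal(size=(16, 16)) * 0.1
    print(class_aware_smoothing(batch_feats, clusters, W).shape)
    ```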

    A lower bound on the field size of convolutional codes with a maximum distance profile and an improved construction

    Full text link
    Convolutional codes with a maximum distance profile attain the largest possible column distances for the maximum number of time instants and thus have outstanding error-correcting capability, especially for streaming applications. Explicit constructions of such codes are scarce in the literature. In particular, known constructions of convolutional codes with rate $k/n$ and a maximum distance profile require a field of size at least exponential in $n$ for general code parameters. At the same time, the only known lower bound on the field size is the trivial bound that is linear in $n$. In this paper, we show that a finite field of size $\Omega_L(n^{L-1})$ is necessary for constructing convolutional codes with rate $k/n$ and a maximum distance profile of length $L$. As a direct consequence, this rules out the possibility of constructing convolutional codes with a maximum distance profile of length $L \geq 3$ over a finite field of size $O(n)$. Additionally, we also present an explicit construction of a convolutional code with rate $k/n$ and a maximum distance profile of length $L = 1$ over a finite field of size $O(n^{\min\{k,n-k\}})$, achieving a smaller field size than known constructions with the same profile length. Comment: arXiv admin note: text overlap with arXiv:2112.0411
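    For intuition about the column distances referenced above, the brute-force sketch below computes $d_j$ for a small binary rate-1/2 convolutional code and compares it with the Singleton-type bound $(n-k)(j+1)+1$ that a maximum distance profile attains. The generator polynomials and the function are our illustrative choices and are unrelated to the paper's construction; running it shows where this particular binary code stops meeting the bound, which is in line with the abstract's message that larger fields are needed for longer profiles.

    ```python
    # Brute-force column distances of a binary (n, k) = (2, 1) convolutional code
    # with generator polynomials g1, g2 given as coefficient lists
    # (lowest-degree coefficient first). Illustrative sketch only.
    from itertools import product

    def column_distances(g1, g2, max_j):
        m = max(len(g1), len(g2)) - 1                  # encoder memory
        g1 = g1 + [0] * (m + 1 - len(g1))
        g2 = g2 + [0] * (m + 1 - len(g2))
        dists = []
        for j in range(max_j + 1):
            best = None
            for u in product((0, 1), repeat=j + 1):    # input blocks u_0 .. u_j
                if u[0] == 0:                          # column distance requires u_0 != 0
                    continue
                w = 0
                for t in range(j + 1):
                    v1 = sum(u[t - i] * g1[i] for i in range(min(t, m) + 1)) % 2
                    v2 = sum(u[t - i] * g2[i] for i in range(min(t, m) + 1)) % 2
                    w += v1 + v2                       # Hamming weight of output block t
                best = w if best is None else min(best, w)
            dists.append(best)
        return dists

    # Example: the memory-2 code with g1 = 1 + D + D^2 and g2 = 1 + D^2.
    for j, d in enumerate(column_distances([1, 1, 1], [1, 0, 1], max_j=4)):
        print(f"d_{j} = {d}   vs. Singleton-type bound (n-k)(j+1)+1 = {j + 2}")
    ```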
