
    Locality in Index Coding for Large Min-Rank

    An index code is said to be locally decodable if each receiver can decode its demand using its side information and by querying only a subset of the transmitted codeword symbols instead of observing the entire codeword. Local decodability can be a beneficial feature in some communication scenarios, such as when the receivers can afford to listen to only a part of the transmissions because of limited availability of power. The locality of an index code is the ratio of the maximum number of codeword symbols queried by a receiver to the message length. In this paper we analyze the optimal locality of linear codes for the family of index coding problems whose min-rank is one less than the number of receivers in the network. We first derive the optimal trade-off between the index coding rate and locality with vector linear coding when the side information graph is a directed cycle. We then provide the optimal trade-off achieved by scalar linear coding for the larger family of problems, viz., all problems whose min-rank is one less than the number of receivers. While the arguments used for achievability are based on known coding techniques, the converse arguments rely on new results on the structure of locally decodable index codes.
    Comment: Keywords: index codes, locality, min-rank, directed cycle, side information graph
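
    To make the locality definition concrete, here is a minimal sketch, not taken from the paper, of the standard scalar linear XOR code for a directed n-cycle side information graph: n one-bit messages are encoded into n-1 transmitted symbols, each receiver decodes its demand, and the locality is read off as the maximum number of symbols any receiver queries divided by the message length. The choice n = 5, the fixed message vector, and the function names are illustrative assumptions.

    def encode(x):
        """Transmit n-1 symbols y_j = x_j XOR x_{j+1}, j = 0, ..., n-2."""
        return [x[j] ^ x[j + 1] for j in range(len(x) - 1)]

    def decode(i, y, side_info, n):
        """Receiver i recovers x_i from its side information x_{(i+1) % n} and a
        subset of the codeword symbols; returns (estimate, number of symbols queried)."""
        if i < n - 1:
            return y[i] ^ side_info, 1              # one query: y_i = x_i ^ x_{i+1}
        acc = 0
        for s in y:                                 # the last receiver queries every symbol;
            acc ^= s                                # the XOR of all of them is x_0 ^ x_{n-1}
        return acc ^ side_info, n - 1

    n = 5
    x = [1, 0, 1, 1, 0]                             # n one-bit messages
    y = encode(x)
    queries = []
    for i in range(n):
        est, q = decode(i, y, x[(i + 1) % n], n)
        assert est == x[i]                          # every receiver decodes its demand
        queries.append(q)

    print("transmitted symbols per message bit:", len(y))        # n-1 = 4
    print("locality = max queries / msg length:", max(queries))  # n-1 = 4 for this code

    With this particular code one receiver must query all n-1 symbols, so its locality is n-1 even though every other receiver queries a single symbol; trading the coding rate against this worst case is the trade-off the paper characterizes.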

    Reduced-Dimension Linear Transform Coding of Correlated Signals in Networks

    A model, called the linear transform network (LTN), is proposed to analyze the compression and estimation of correlated signals transmitted over directed acyclic graphs (DAGs). An LTN is a DAG network with multiple source and receiver nodes. Source nodes transmit subspace projections of random correlated signals by applying reduced-dimension linear transforms. The subspace projections are linearly processed by multiple relays and routed to intended receivers. Each receiver applies a linear estimator to approximate a subset of the sources with minimum mean squared error (MSE) distortion. The model is extended to include noisy networks with power constraints on transmitters. A key task is to compute all local compression matrices and linear estimators in the network to minimize end-to-end distortion. The non-convex problem is solved iteratively within an optimization framework using constrained quadratic programs (QPs). The proposed algorithm recovers as special cases the regular and distributed Karhunen-Loeve transforms (KLTs). Cut-set lower bounds on the distortion region of multi-source, multi-receiver networks are given for linear coding based on convex relaxations. Cut-set lower bounds are also given for any coding strategy based on information theory. The distortion region and compression-estimation tradeoffs are illustrated for different communication demands (e.g., multiple unicast) and graph structures.
    Comment: 33 pages, 7 figures, to appear in IEEE Transactions on Signal Processing
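
    The single-node special case mentioned above can be checked with a short sketch; it is an illustration, not the paper's iterative QP algorithm. A correlated Gaussian source is compressed with a reduced-dimension linear transform C (k x n) and reconstructed with the linear MMSE estimator; choosing C as the top-k eigenvectors of the source covariance (the KLT) leaves an end-to-end MSE equal to the sum of the discarded eigenvalues, and any other transform of the same dimension does no better. The dimensions, the synthetic covariance, and the random comparison transform are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 6, 2
    A = rng.standard_normal((n, n))
    Sigma = A @ A.T                                        # source covariance (positive definite)

    def lmmse_mse(C, Sigma):
        """End-to-end MSE of the linear MMSE estimate of x from y = C x."""
        G = Sigma @ C.T @ np.linalg.inv(C @ Sigma @ C.T)   # receiver's estimator matrix
        return np.trace(Sigma - G @ C @ Sigma)

    eigvals, eigvecs = np.linalg.eigh(Sigma)               # eigenvalues in ascending order
    C_klt = eigvecs[:, -k:].T                              # KLT: project onto top-k eigenvectors
    C_rand = rng.standard_normal((k, n))                   # arbitrary reduced-dimension transform

    print("KLT MSE                     :", lmmse_mse(C_klt, Sigma))
    print("sum of discarded eigenvalues:", eigvals[:-k].sum())
    print("random-transform MSE        :", lmmse_mse(C_rand, Sigma))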

    Dynamic algorithms for multicast with intra-session network coding

    The problem of multiple multicast sessions with intra-session network coding in time-varying networks is considered. The network-layer capacity region of input rates that can be stably supported is established. Dynamic algorithms for multicast routing, network coding, power allocation, session scheduling, and rate allocation across correlated sources, which achieve stability for rates within the capacity region, are presented. This work builds on the back-pressure approach introduced by Tassiulas et al., extending it to network coding and correlated sources. In the proposed algorithms, decisions on routing, network coding, and scheduling between different sessions are made locally at each node based on virtual queues for different sinks. For correlated sources, the sinks locally determine and control transmission rates across the sources. The proposed approach yields a completely distributed algorithm for wired networks. In the wireless case, power control among different transmitters is centralized, while routing, network coding, and scheduling between different sessions at a given node are distributed.
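
    The local decision rule underlying the back-pressure approach can be sketched as follows; the fragment is illustrative and omits the paper's extensions to intra-session network coding, power allocation, and correlated sources. A node keeps a virtual queue per sink and, on each outgoing link, serves the sink with the largest positive backlog differential relative to the neighbour at the other end of the link. The queue values, sink names, and link names below are made up for the example.

    from typing import Dict, Optional, Tuple

    def backpressure_choice(
        own_queues: Dict[str, float],                  # backlog per sink at this node
        neighbour_queues: Dict[str, Dict[str, float]]  # per outgoing link: backlog per sink at the neighbour
    ) -> Dict[str, Optional[Tuple[str, float]]]:
        """For each outgoing link, pick the sink (virtual queue) with the largest
        positive backlog differential; None if no differential is positive."""
        decisions = {}
        for link, q_next in neighbour_queues.items():
            best = None
            for sink, backlog in own_queues.items():
                diff = backlog - q_next.get(sink, 0.0)
                if diff > 0 and (best is None or diff > best[1]):
                    best = (sink, diff)
            decisions[link] = best
        return decisions

    own = {"t1": 10.0, "t2": 4.0}
    neighbours = {"link_a": {"t1": 7.0, "t2": 6.0}, "link_b": {"t1": 9.5, "t2": 1.0}}
    print(backpressure_choice(own, neighbours))
    # {'link_a': ('t1', 3.0), 'link_b': ('t2', 3.0)}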

    Compute-and-Forward: Harnessing Interference through Structured Codes

    Interference is usually viewed as an obstacle to communication in wireless networks. This paper proposes a new strategy, compute-and-forward, that exploits interference to obtain significantly higher rates between users in a network. The key idea is that relays should decode linear functions of transmitted messages according to their observed channel coefficients rather than ignoring the interference as noise. After decoding these linear equations, the relays simply send them towards the destinations, which, given enough equations, can recover their desired messages. The underlying codes are based on nested lattices whose algebraic structure ensures that integer combinations of codewords can be decoded reliably. Encoders map messages from a finite field to a lattice, and decoders recover equations of lattice points which are then mapped back to equations over the finite field. This scheme is applicable even if the transmitters lack channel state information.
    Comment: IEEE Trans. Info Theory, to appear. 23 pages, 13 figures
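
    The finite-field end of the scheme can be illustrated with a short sketch; it is not the paper's nested-lattice construction. Once K relays have decoded K integer combinations of the K transmitted messages whose coefficient matrix is invertible modulo a prime p, a destination recovers every message by Gaussian elimination over F_p. The prime p = 11, the message length, and the particular coefficient matrix are assumptions.

    import numpy as np

    p = 11  # finite-field size (prime), chosen for illustration

    def solve_mod_p(A, b, p):
        """Solve A w = b over F_p by Gauss-Jordan elimination (A assumed invertible mod p)."""
        A, b = A.copy() % p, b.copy() % p
        n = A.shape[0]
        for col in range(n):
            piv = next(r for r in range(col, n) if A[r, col] != 0)    # pivot row
            A[[col, piv]], b[[col, piv]] = A[[piv, col]], b[[piv, col]]
            inv = pow(int(A[col, col]), -1, p)                        # multiplicative inverse mod p
            A[col], b[col] = (A[col] * inv) % p, (b[col] * inv) % p
            for r in range(n):
                if r != col and A[r, col]:
                    f = A[r, col]
                    A[r], b[r] = (A[r] - f * A[col]) % p, (b[r] - f * b[col]) % p
        return b

    rng = np.random.default_rng(2)
    K, L = 3, 4                                  # 3 transmitters, messages of length 4 over F_p
    W = rng.integers(0, p, size=(K, L))          # true messages
    A = np.array([[1, 2, 1],                     # integer coefficient vectors decoded by the relays
                  [0, 1, 3],                     # (invertible mod p)
                  [1, 0, 1]])
    U = (A @ W) % p                              # equations forwarded to the destination
    W_hat = np.column_stack([solve_mod_p(A, U[:, j], p) for j in range(L)])
    assert np.array_equal(W_hat, W)              # all messages recovered
    print(W_hat)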