
    On network coding for sum-networks

    Full text link
    A directed acyclic network is considered where all the terminals need to recover the sum of the symbols generated at all the sources. We call such a network a sum-network. It is shown that there exists a solvably (and linear solvably) equivalent sum-network for any multiple-unicast network, and thus for any directed acyclic communication network. It is also shown that there exists a linear solvably equivalent multiple-unicast network for every sum-network. It is shown that for any set of polynomials having integer coefficients, there exists a sum-network which is scalar linear solvable over a finite field F if and only if the polynomials have a common root in F. For any finite or cofinite set of prime numbers, a network is constructed which has a vector linear solution of any length if and only if the characteristic of the alphabet field is in the given set. The insufficiency of linear network coding and unachievability of the network coding capacity are proved for sum-networks by using similar known results for communication networks. Under fractional vector linear network coding, a sum-network and its reverse network are shown to be equivalent. However, under non-linear coding, it is shown that there exists a solvable sum-network whose reverse network is not solvable. Comment: Accepted to IEEE Transactions on Information Theory.
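
    To make the sum-network demand concrete, here is a minimal sketch of a scalar linear code on a toy topology: a single bottleneck edge forwards the finite-field sum of three source symbols to one terminal. The topology, field size p = 5, and symbol values are illustrative assumptions, not the constructions from the paper.

```python
# Minimal sketch of scalar linear network coding on a toy sum-network.
# Assumptions (not from the paper): three sources, one terminal, a single
# bottleneck edge, and field size p = 5. All names here are illustrative.

p = 5  # finite field GF(p); the field characteristic matters for solvability in general

def gf_add(*xs):
    """Addition in GF(p)."""
    return sum(xs) % p

# Source symbols generated at the three sources.
sources = {"s1": 2, "s2": 4, "s3": 3}

# Scalar linear code: the bottleneck edge forwards the GF(p) sum of its inputs,
# which is exactly the demand of every terminal in a sum-network.
edge_message = gf_add(sources["s1"], sources["s2"], sources["s3"])

# The terminal receives the edge message and recovers the sum directly.
recovered_sum = edge_message
assert recovered_sum == (2 + 4 + 3) % p
print("terminal recovers the sum over GF(%d): %d" % (p, recovered_sum))
```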

    Capacity of Sum-networks for Different Message Alphabets

    Get PDF
    A sum-network is a directed acyclic network in which all terminal nodes demand the 'sum' of the independent information observed at the source nodes. Many characteristics of the well-studied multiple-unicast network communication problem also hold for sum-networks due to a known reduction between instances of these two problems. Our main result is that unlike a multiple-unicast network, the coding capacity of a sum-network is dependent on the message alphabet. We demonstrate this using a construction procedure and show that the choice of a message alphabet can reduce the coding capacity of a sum-network from 1 to close to 0.
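
    Since the result is stated in terms of coding capacity, the following sketch only makes the underlying rate notion concrete: a (k, n) fractional code delivers k message symbols per n uses of each edge, and the coding capacity is the best achievable k/n. The repetition scheme and parameters below are illustrative assumptions, not the paper's construction.

```python
# Hedged sketch of the rate notion behind "coding capacity". A (k, n)
# fractional code sends k source symbols per n edge uses; capacity is the
# supremum of achievable k/n. The scheme below is a trivial toy, not the
# construction that makes capacity depend on the message alphabet.
p = 2           # message alphabet GF(2) (the quantity the paper varies)
k, n = 1, 2     # each source sends k symbols using n edge symbols

def encode(symbol):
    """A trivial (k, n) = (1, 2) scheme: repeat the symbol on both edge uses."""
    return [symbol] * n

source_symbols = [1, 0, 1]                   # one symbol per source
edge_blocks = [encode(s) for s in source_symbols]

# Terminal demand in a sum-network: the GF(p) sum of the source symbols,
# recovered here from the first coordinate of each edge block.
recovered = sum(block[0] for block in edge_blocks) % p
rate = k / n                                 # 0.5 for this toy scheme
print(recovered, rate)
```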

    Computation Over Gaussian Networks With Orthogonal Components

    Get PDF
    Function computation of arbitrarily correlated discrete sources over Gaussian networks with orthogonal components is studied. Two classes of functions are considered: the arithmetic sum function and the type function. The arithmetic sum function in this paper is defined as a set of multiple weighted arithmetic sums, which includes averaging of the sources and estimating each of the sources as special cases. The type or frequency histogram function counts the number of occurrences of each argument, which yields many important statistics such as the mean, variance, maximum, minimum, and median. The proposed computation coding first abstracts Gaussian networks into the corresponding modulo-sum multiple-access channels via nested lattice codes and linear network coding, and then computes the desired function by using linear Slepian-Wolf source coding. For orthogonal Gaussian networks (with no broadcast and multiple-access components), the computation capacity is characterized for a class of networks. For Gaussian networks with multiple-access components (but no broadcast), an approximate computation capacity is characterized for a class of networks. Comment: 30 pages, 12 figures, submitted to IEEE Transactions on Information Theory.
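
    The two target function classes can be made concrete with a small sketch that evaluates them directly on example source symbols; the lattice-code and Slepian-Wolf machinery of the scheme is deliberately omitted, and the sample values are assumptions.

```python
# Sketch of the two target function classes from the abstract, evaluated
# directly on example source symbols (the network/coding part is omitted).
from collections import Counter
import statistics

# Discrete source observations at 4 nodes (illustrative values).
x = [3, 1, 3, 2]

# Arithmetic sum function: a set of weighted sums of the sources.
# With weights 1/len(x) this is the average; with an indicator weight it
# recovers a single source.
weights = [0.25, 0.25, 0.25, 0.25]
weighted_sum = sum(w * xi for w, xi in zip(weights, x))

# Type (frequency histogram) function: counts occurrences of each value.
type_fn = Counter(x)  # e.g. {3: 2, 1: 1, 2: 1}

# Many statistics are functions of the type alone:
maximum = max(type_fn)           # largest observed value
median = statistics.median(x)    # also recoverable from the histogram
print(weighted_sum, dict(type_fn), maximum, median)
```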

    Computation in Multicast Networks: Function Alignment and Converse Theorems

    Full text link
    The classical problem in network coding theory considers communication over multicast networks: multiple transmitters send independent messages to multiple receivers, which decode the same set of messages. In this work, computation over multicast networks is considered: each receiver decodes an identical function of the original messages. For a countably infinite class of two-transmitter two-receiver single-hop linear deterministic networks, the computing capacity is characterized for a linear function (modulo-2 sum) of Bernoulli sources. Inspired by the geometric concept of interference alignment in networks, a new achievable coding scheme called function alignment is introduced. A new converse theorem is established that is tighter than cut-set based and genie-aided bounds. Computation (vs. communication) over multicast networks requires additional analysis to account for multiple receivers sharing a network's computational resources. We also develop a network decomposition theorem which identifies elementary parallel subnetworks that can constitute an original network without loss of optimality. The decomposition theorem provides a conceptually simpler algebraic proof of achievability that generalizes to $L$-transmitter $L$-receiver networks. Comment: to appear in the IEEE Transactions on Information Theory.
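
    As a minimal illustration of the computation demand, the sketch below generates two Bernoulli sources and forms the modulo-2 sum that every receiver must decode; the linear deterministic channel model and the function alignment scheme themselves are not reproduced, and the block length is an arbitrary assumption.

```python
# Toy illustration of the computation demand in the abstract: every receiver
# must decode the modulo-2 sum (XOR) of independent Bernoulli sources.
import random

random.seed(0)
n = 8  # block length (illustrative)
w1 = [random.randint(0, 1) for _ in range(n)]  # Bernoulli(1/2) source at Tx 1
w2 = [random.randint(0, 1) for _ in range(n)]  # Bernoulli(1/2) source at Tx 2

# The common demand at both receivers: the component-wise mod-2 sum.
target = [(a ^ b) for a, b in zip(w1, w2)]

# If the network delivers the linear combination w1 + w2 over GF(2) intact,
# each receiver recovers the function without needing w1 or w2 individually.
print(target)
```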

    Utility Optimal Coding for Packet Transmission over Wireless Networks - Part II: Networks of Packet Erasure Channels

    Get PDF
    We define a class of multi-hop erasure networks that approximates a wireless multi-hop network. The network carries unicast flows for multiple users, and each information packet within a flow is required to be decoded at the flow destination within a specified delay deadline. The allocation of coding rates amongst flows/users is constrained by network capacity. We propose a proportional fair transmission scheme that maximises the sum utility of flow throughputs. This is achieved by jointly optimising the packet coding rates and the allocation of bits of coded packets across transmission slots. Comment: Submitted to the Forty-Ninth Annual Allerton Conference on Communication, Control, and Computing, Monticello, Illinois, USA.
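
    Proportional fairness corresponds to logarithmic utilities, so a stripped-down version of the allocation problem has a closed form; the sketch below solves this toy problem only, with illustrative costs and capacity, and is not the paper's joint coding-rate and slot-allocation scheme.

```python
# A hedged sketch of proportional fair allocation (log utilities). For the
# toy problem
#   maximize  sum_i log(x_i)   subject to  sum_i a_i * x_i <= C,
# the KKT conditions give the closed form x_i = C / (N * a_i).
import math

a = [1.0, 2.0, 4.0]   # illustrative per-flow resource costs
C = 10.0              # illustrative shared capacity
N = len(a)

x = [C / (N * ai) for ai in a]          # proportional fair throughputs
utility = sum(math.log(xi) for xi in x) # the sum utility being maximised

assert abs(sum(ai * xi for ai, xi in zip(a, x)) - C) < 1e-9  # capacity is tight
print(x, utility)
```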

    On Approximating the Sum-Rate for Multiple-Unicasts

    Full text link
    We study upper bounds on the sum-rate of multiple-unicasts. We approximate the Generalized Network Sharing Bound (GNS cut) of the multiple-unicasts network coding problem with $k$ independent sources. Our approximation algorithm runs in polynomial time and yields an upper bound on the joint source entropy rate, which is within an $O(\log^2 k)$ factor from the GNS cut. It further yields a vector-linear network code that achieves joint source entropy rate within an $O(\log^2 k)$ factor from the GNS cut, but not with independent sources: the code induces a correlation pattern among the sources. Our second contribution is establishing a separation result for vector-linear network codes: for any given field $\mathbb{F}$ there exist networks for which the optimum sum-rate supported by vector-linear codes over $\mathbb{F}$ for independent sources can be multiplicatively separated by a factor of $k^{1-\delta}$, for any constant $\delta>0$, from the optimum joint entropy rate supported by a code that allows correlation between sources. Finally, we establish a similar separation result for the asymmetric optimum vector-linear sum-rates achieved over two distinct fields $\mathbb{F}_p$ and $\mathbb{F}_q$ for independent sources, revealing that the choice of field can heavily impact the performance of a linear network code. Comment: 10 pages; shorter version appeared at ISIT (International Symposium on Information Theory) 2015; some typos corrected.
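
    For readers unfamiliar with vector-linear codes, the sketch below shows the basic edge operation such a code performs: each edge carries a fixed matrix combination of the incoming message vectors over a finite field. The matrices, vector length, and field are illustrative assumptions unrelated to the paper's separation constructions.

```python
# Sketch of what a vector-linear network code does at a single edge: the edge
# symbol is a fixed matrix combination of the incoming message vectors over a
# finite field. Field, matrices, and vectors below are illustrative only.
p = 2  # base field GF(2)

def mat_vec(M, v):
    """Matrix-vector product over GF(p)."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) % p for i in range(len(M))]

def vec_add(u, v):
    return [(a + b) % p for a, b in zip(u, v)]

# Two length-2 message vectors from independent sources.
x1, x2 = [1, 0], [1, 1]

# Local coding matrices assigned to the edge (vector-linear, length L = 2).
A = [[1, 0], [0, 1]]
B = [[0, 1], [1, 0]]

edge_symbol = vec_add(mat_vec(A, x1), mat_vec(B, x2))
print(edge_symbol)  # the edge carries A*x1 + B*x2 over GF(2)
```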

    Utility Optimal Coding for Packet Transmission over Wireless Networks - Part I: Networks of Binary Symmetric Channels

    Get PDF
    We consider multi-hop networks comprising Binary Symmetric Channels ($\mathsf{BSC}$s). The network carries unicast flows for multiple users. The utility of the network is the sum of the utilities of the flows, where the utility of each flow is a concave function of its throughput. Given that the network capacity is shared by the flows, there is contention for network resources, such as coding rate (at the physical layer) and scheduling time (at the MAC layer), among the flows. We propose a proportional fair transmission scheme that maximises the sum utility of flow throughputs subject to the rate and scheduling constraints. This is achieved by jointly optimising the packet coding rates of all the flows through the network. Comment: Submitted to the Forty-Ninth Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA.
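
    The per-hop coding rate of each flow is ultimately limited by the BSC capacity 1 - H_2(eps); the short sketch below just evaluates this bound for a few illustrative crossover probabilities and does not reproduce the utility optimisation itself.

```python
# Hedged sketch: the Shannon capacity of a BSC with crossover probability eps
# is 1 - H2(eps), which caps the physical-layer coding rate of each hop. The
# utility maximisation in the paper allocates rates subject to such limits;
# the optimisation itself is not reproduced here.
import math

def h2(eps):
    """Binary entropy function in bits."""
    if eps in (0.0, 1.0):
        return 0.0
    return -eps * math.log2(eps) - (1 - eps) * math.log2(1 - eps)

def bsc_capacity(eps):
    return 1.0 - h2(eps)

for eps in (0.01, 0.05, 0.1):
    print(f"BSC({eps}): capacity = {bsc_capacity(eps):.3f} bits/channel use")
```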

    Wireless Network Coding with Local Network Views: Coded Layer Scheduling

    Full text link
    One of the fundamental challenges in the design of distributed wireless networks is the large dynamic range of network state. Since continuous tracking of global network state at all nodes is practically impossible, nodes can only acquire limited local views of the whole network to design their transmission strategies. In this paper, we study multi-layer wireless networks and assume that each node has only limited knowledge, namely a 1-local view, where each S-D pair has enough information to perform optimally when other pairs do not interfere, along with connectivity information for the rest of the network. We investigate the information-theoretic limits of communication with such limited knowledge at the nodes. We develop a novel transmission strategy, namely Coded Layer Scheduling, that relies solely on the 1-local view at the nodes and incorporates three different techniques: (1) per-layer interference avoidance, (2) repetition coding to allow overhearing of the interference, and (3) network coding to allow interference neutralization. We show that our proposed scheme can provide a significant throughput gain compared with conventional interference avoidance strategies. Furthermore, we show that our strategy maximizes the achievable normalized sum-rate for some classes of networks, hence characterizing the normalized sum-capacity of those networks with 1-local view. Comment: Technical report. A paper based on the results of this report will appear
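
    Interference neutralization, the third ingredient listed above, can be illustrated with a two-relay toy: the relays forward linear combinations chosen so that the unwanted signal cancels when their transmissions superpose at the destination. The topology and coefficients below are illustrative assumptions, not the paper's scheme.

```python
# Toy illustration of interference neutralization, one of the three
# ingredients of the Coded Layer Scheduling strategy described above.
# Topology and coefficients are illustrative assumptions: two relays both
# hear (X1, X2); the destination hears the sum of the relay transmissions
# and only wants X1.
X1, X2 = 3.0, 5.0   # real-valued signals for a simple linear toy

# Relay transmit signals: linear combinations of what they overheard.
relay_a = 1.0 * X1 + 1.0 * X2     # relay A forwards X1 + X2
relay_b = 0.0 * X1 - 1.0 * X2     # relay B forwards -X2

# Over-the-air superposition at the destination: X2 is neutralized.
received = relay_a + relay_b
assert abs(received - X1) < 1e-9
print("destination recovers X1 =", received)
```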