    Computation Alignment: Capacity Approximation without Noise Accumulation

    Consider several source nodes communicating across a wireless network to a destination node with the help of several layers of relay nodes. Recent work by Avestimehr et al. has approximated the capacity of this network up to an additive gap. The communication scheme achieving this capacity approximation is based on compress-and-forward, resulting in noise accumulation as the messages traverse the network. As a consequence, the approximation gap increases linearly with the network depth. This paper develops a computation alignment strategy that can approach the capacity of a class of layered, time-varying wireless relay networks up to an approximation gap that is independent of the network depth. This strategy is based on the compute-and-forward framework, which enables relays to decode deterministic functions of the transmitted messages. On its own, compute-and-forward is insufficient to approach the capacity, as it incurs a penalty for approximating the wireless channel, whose coefficients are complex-valued, by a channel with integer coefficients. Here, this penalty is circumvented by carefully matching channel realizations across time slots to create integer-valued effective channels that are well suited to compute-and-forward. Unlike prior constant-gap results, the approximation gap obtained in this paper also depends on the fading statistics, which are assumed to be i.i.d. Rayleigh.
    Comment: 36 pages, to appear in IEEE Transactions on Information Theory
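
    For orientation, a minimal sketch of the rate guarantee underlying compute-and-forward, in the standard form due to Nazer and Gastpar; the notation ($P$ for per-node transmit power, $\mathbf{h}$ for the real channel vector, $\mathbf{a}$ for the integer coefficient vector) is ours and not taken from the paper. A relay observing $y = \sum_{l} h_l x_l + z$ can reliably decode the integer combination $\sum_{l} a_l w_l$ of the transmitted messages at any rate up to

    ```latex
    R(\mathbf{h}, \mathbf{a}) \;=\; \frac{1}{2}\,
      \log^{+}\!\left(\left(\|\mathbf{a}\|^{2}
      - \frac{P\,(\mathbf{h}^{\mathsf{T}}\mathbf{a})^{2}}
             {1 + P\,\|\mathbf{h}\|^{2}}\right)^{-1}\right).
    ```

    The mismatch between the best integer vector $\mathbf{a}$ and the actual channel $\mathbf{h}$ is precisely the non-integer penalty that the paper's time-slot matching is designed to eliminate.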

    Classical capacity of a qubit depolarizing channel with memory

    The classical product state capacity of a noisy quantum channel with memory is investigated. A forgetful noise-memory channel is constructed by Markov switching between two depolarizing channels, which introduces non-Markovian noise correlations between successive channel uses. The computation of the capacity is reduced to an entropy computation for a function of a Markov process. A reformulation in terms of algebraic measures then enables its calculation. The effects of the hidden-Markovian memory on the capacity are explored. An increase in noise correlations is found to increase the capacity.
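
    As a point of reference for the memoryless case (a hedged sketch, not the paper's computation): for a single qubit depolarizing channel $\rho \mapsto (1-p)\rho + p\,I/2$, the classical product state capacity is the standard expression $1 - h_2(p/2)$, where $h_2$ is the binary entropy. The sketch below evaluates this benchmark for two illustrative branch channels of the kind the Markov chain could switch between; the specific values of $p$ are assumptions, not taken from the paper.

    ```python
    import numpy as np

    def h2(x: float) -> float:
        """Binary entropy in bits, with h2(0) = h2(1) = 0."""
        if x in (0.0, 1.0):
            return 0.0
        return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

    def depolarizing_capacity(p: float) -> float:
        """Classical product state capacity of the memoryless qubit
        depolarizing channel rho -> (1 - p) * rho + p * I / 2."""
        return 1.0 - h2(p / 2)

    # Illustrative branch channels (p values are hypothetical).
    for p in (0.1, 0.4):
        print(f"p = {p}: C = {depolarizing_capacity(p):.4f} bits per channel use")
    ```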

    The Capacity of Private Computation

    We introduce the problem of private computation, comprised of $N$ distributed and non-colluding servers, $K$ independent datasets, and a user who wants to compute a function of the datasets privately, i.e., without revealing which function he wants to compute to any individual server. This private computation problem is a strict generalization of the private information retrieval (PIR) problem, obtained by expanding the PIR message set (which consists of only independent messages) to also include functions of those messages. The capacity of private computation, $C$, is defined as the maximum number of bits of the desired function that can be retrieved per bit of total download from all servers. We characterize the capacity of private computation for $N$ servers and $K$ independent datasets that are replicated at each server, when the functions to be computed are arbitrary linear combinations of the datasets. Surprisingly, the capacity, $C = \left(1 + 1/N + \cdots + 1/N^{K-1}\right)^{-1}$, matches the capacity of PIR with $N$ servers and $K$ messages. Thus, allowing arbitrary linear computations does not reduce the communication rate compared to pure dataset retrieval. The same insight is shown to hold even for arbitrary non-linear computations when the number of datasets $K \rightarrow \infty$.
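
    The capacity expression is easy to evaluate directly; a minimal sketch (the function name is ours):

    ```python
    def private_computation_capacity(N: int, K: int) -> float:
        """C = (1 + 1/N + ... + 1/N^(K-1))^(-1): the private computation
        capacity from the abstract, which equals the PIR capacity for
        N servers and K messages."""
        return 1.0 / sum(N ** (-k) for k in range(K))

    # e.g. N = 2 servers, K = 3 datasets: C = 1 / (1 + 1/2 + 1/4) = 4/7
    print(private_computation_capacity(2, 3))  # 0.5714...
    ```

    As $K \rightarrow \infty$, this expression tends to $1 - 1/N$, consistent with the asymptotic claim for non-linear computations.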

    Computation Over Gaussian Networks With Orthogonal Components

    Function computation of arbitrarily correlated discrete sources over Gaussian networks with orthogonal components is studied. Two classes of functions are considered: the arithmetic sum function and the type function. The arithmetic sum function in this paper is defined as a set of multiple weighted arithmetic sums, which includes averaging of the sources and estimating each of the sources as special cases. The type or frequency histogram function counts the number of occurrences of each argument, which yields many important statistics, such as the mean, variance, maximum, minimum, and median. The proposed computation coding first abstracts Gaussian networks into the corresponding modulo sum multiple-access channels via nested lattice codes and linear network coding, and then computes the desired function by using linear Slepian-Wolf source coding. For orthogonal Gaussian networks (with no broadcast and multiple-access components), the computation capacity is characterized for a class of networks. For Gaussian networks with multiple-access components (but no broadcast), an approximate computation capacity is characterized for a class of networks.
    Comment: 30 pages, 12 figures, submitted to IEEE Transactions on Information Theory
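
    To make the two function classes concrete, a small sketch under our own notation (the symbol alphabet $\{0, \dots, q-1\}$ and the sample values are illustrative, not from the paper):

    ```python
    import numpy as np
    from collections import Counter

    # x[k] is the current symbol of source k, drawn from {0, ..., q-1}.
    x = np.array([2, 0, 2, 1, 2])

    def arithmetic_sum(x: np.ndarray, weights: np.ndarray) -> float:
        """One weighted arithmetic sum; uniform weights 1/K give the average,
        and a one-hot weight vector recovers a single source."""
        return float(np.dot(weights, x))

    def type_function(x: np.ndarray, q: int) -> list[int]:
        """Frequency histogram b[j] = #{k : x[k] = j}; mean, variance, max,
        min, and median of the sources are all recoverable from it."""
        counts = Counter(x.tolist())
        return [counts.get(j, 0) for j in range(q)]

    print(arithmetic_sum(x, np.full(len(x), 1 / len(x))))  # average = 1.4
    print(type_function(x, q=3))  # [1, 1, 3] -> max = 2, min = 0, median = 2
    ```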