
    Network Coding for Computing: Cut-Set Bounds

    The following network computing problem is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the "computing capacity". The network coding problem for a single-receiver network is a special case of the network computing problem in which all of the source messages must be reproduced at the receiver. For network coding with a single receiver, routing is known to achieve the capacity by achieving the network min-cut upper bound. We extend the definition of min-cut to the network computing problem and show that the min-cut is still an upper bound on the maximum achievable rate and is tight for computing (using coding) any target function in multi-edge tree networks and for computing linear target functions in any network. We also study the bound's tightness for different classes of target functions. In particular, we give a lower bound on the computing capacity in terms of the Steiner tree packing number and a different bound for symmetric functions. We also show that for certain networks and target functions, the computing capacity can be less than an arbitrarily small fraction of the min-cut bound.
    Comment: Submitted to the IEEE Transactions on Information Theory (Special Issue on Facets of Coding Theory: from Algorithms to Networks); Revised on Aug 9, 201
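    As a concrete reference point, the sketch below computes the ordinary graph min-cut separating all sources from the receiver on a made-up unit-capacity DAG. This is the quantity the single-receiver routing result refers to; the paper's extended min-cut for general target functions normalizes cuts by the target function and is not reproduced here.

        # Minimal sketch: classical min-cut for a single-receiver network,
        # computed via max-flow with a super-source.  The toy DAG, node names,
        # and unit capacities are assumptions for illustration only.
        import networkx as nx

        G = nx.DiGraph()
        edges = [("s1", "a", 1), ("s2", "a", 1), ("s1", "t", 1), ("s2", "t", 1), ("a", "t", 1)]
        for u, v, c in edges:
            G.add_edge(u, v, capacity=c)

        # A cut must separate every source from the receiver t; adding a
        # super-source with infinite-capacity edges reduces this to an s-t cut.
        G.add_edge("super", "s1", capacity=float("inf"))
        G.add_edge("super", "s2", capacity=float("inf"))
        print("min-cut separating the sources from t:", nx.minimum_cut_value(G, "super", "t"))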

    Distributed Scalar Quantization for Computing: High-Resolution Analysis and Extensions

    Communication of quantized information is frequently followed by a computation. We consider situations of distributed functional scalar quantization: distributed scalar quantization of (possibly correlated) sources followed by centralized computation of a function. Under smoothness conditions on the sources and function, companding scalar quantizer designs are developed to minimize mean-squared error (MSE) of the computed function as the quantizer resolution is allowed to grow. Striking improvements over quantizers designed without consideration of the function are possible and are larger in the entropy-constrained setting than in the fixed-rate setting. As extensions to the basic analysis, we characterize a large class of functions for which regular quantization suffices, consider certain functions for which asymptotic optimality is achieved without arbitrarily fine quantization, and allow limited collaboration between source encoders. In the entropy-constrained setting, a single bit per sample communicated between encoders can have an arbitrarily large effect on functional distortion. In contrast, such communication has very little effect in the fixed-rate setting.
    Comment: 36 pages, 10 figures
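    The flavor of functional companding can be seen in a small numerical sketch. This is a made-up toy setup, not the paper's construction: a single source X ~ Uniform(0, 1), decoder function f(x) = x^2, and a compander whose point density follows the fixed-rate high-resolution rule lambda(x) proportional to (p(x) |f'(x)|^2)^(1/3), which for this source and function gives the compressor c(x) = x^(5/3).

        # Minimal sketch: functional MSE of an ordinary uniform quantizer vs. a
        # function-aware companding quantizer at the same resolution.  Source,
        # function, and resolution are assumptions for illustration only.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(0.0, 1.0, 200_000)   # source samples
        levels = 32                          # quantizer resolution (5 bits)

        def quantize_unit(u):
            """Midpoint uniform quantizer on [0, 1) with `levels` cells."""
            return (np.floor(u * levels) + 0.5) / levels

        # Ordinary quantizer: quantize x directly.
        x_hat_plain = quantize_unit(x)

        # Companding quantizer: compress, quantize uniformly, expand.
        c = lambda t: t ** (5.0 / 3.0)       # compressor for lambda ~ x^(2/3)
        c_inv = lambda t: t ** (3.0 / 5.0)
        x_hat_comp = c_inv(quantize_unit(c(x)))

        f = lambda t: t ** 2                 # function computed at the decoder
        mse_plain = np.mean((f(x) - f(x_hat_plain)) ** 2)
        mse_comp = np.mean((f(x) - f(x_hat_comp)) ** 2)
        print(f"functional MSE, ordinary quantizer:   {mse_plain:.2e}")
        print(f"functional MSE, companding quantizer: {mse_comp:.2e}")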

    Quantization in acquisition and computation networks

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 151-165).
    In modern systems, it is often desirable to extract relevant information from large amounts of data collected at different spatial locations. Applications include sensor networks, wearable health-monitoring devices and a variety of other systems for inference. Several existing source coding techniques, such as Slepian-Wolf and Wyner-Ziv coding, achieve asymptotic compression optimality in distributed systems. However, these techniques are rarely used in sensor networks because of decoding complexity and prohibitively long code length. Moreover, the fundamental limits that arise from existing techniques are intractable to describe for a complicated network topology or when the objective of the system is to perform some computation on the data rather than to reproduce the data.
    This thesis bridges the technological gap between the needs of real-world systems and the optimistic bounds derived from asymptotic analysis. Specifically, we characterize fundamental trade-offs when the desired computation is incorporated into the compression design and the code length is one. To obtain both performance guarantees and achievable schemes, we use high-resolution quantization theory, which is complementary to the Shannon-theoretic analyses previously used to study distributed systems. We account for varied network topologies, such as those where sensors are allowed to collaborate or the communication links are heterogeneous. In these settings, a small amount of intersensor communication can provide a significant improvement in compression performance. As a result, this work suggests new compression principles and network design for modern distributed systems.
    Although the ideas in the thesis are motivated by current and future sensor network implementations, the framework applies to a wide range of signal processing questions. We draw connections between the fidelity criteria studied in the thesis and distortion measures used in perceptual coding. As a consequence, we determine the optimal quantizer for expected relative error (ERE), a measure that is widely useful but is often neglected in the source coding community. We further demonstrate that applying the ERE criterion to psychophysical models can explain the Weber-Fechner law, a longstanding hypothesis of how humans perceive the external world. Our results are consistent with the hypothesis that human perception is Bayesian optimal for information acquisition conditioned on limited cognitive resources, thereby supporting the notion that the brain is efficient at acquisition and adaptation.
    by John Z. Sun. Ph.D.
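    To illustrate the ERE criterion mentioned above (a toy comparison under an assumed source distribution and resolution, not a quantizer taken from the thesis): under expected relative error, a logarithmic compander, whose constant-relative-step structure echoes the Weber-Fechner law, outperforms a plain uniform quantizer of the same resolution.

        # Minimal sketch: expected relative error (ERE) of a uniform quantizer
        # vs. a logarithmic compander on the same support.  The source
        # distribution, support, and resolution are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(1.0, 10.0, 200_000)   # source bounded away from zero
        levels = 32

        def quantize_unit(u):
            """Midpoint uniform quantizer of u in [0, 1) with `levels` cells."""
            return (np.floor(u * levels) + 0.5) / levels

        # Uniform quantizer on [1, 10].
        x_hat_uni = 1.0 + 9.0 * quantize_unit((x - 1.0) / 9.0)

        # Logarithmic compander: compress with log, quantize uniformly, expand.
        c = lambda t: np.log(t) / np.log(10.0)
        c_inv = lambda t: 10.0 ** t
        x_hat_log = c_inv(quantize_unit(c(x)))

        ere = lambda xh: np.mean(((x - xh) / x) ** 2)
        print(f"ERE, uniform quantizer:     {ere(x_hat_uni):.2e}")
        print(f"ERE, logarithmic compander: {ere(x_hat_log):.2e}")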

    Source Coding with Distortion through Graph Coloring

    We consider the following rate distortion problem: given a source X and correlated decoder side information Y, find the minimum encoding rate for X required to compute f(X, Y) at the decoder within distortion D. This is a generalization of the classical Wyner-Ziv setup and was resolved by Yamamoto (1982). However, this result involved an auxiliary random variable that lacks explicit meaning. To provide a more direct link between this variable and the function f, Orlitsky and Roche (2001) established the minimal rate required in the zero-distortion case as an extension of Körner's graph entropy. Recently, we (with Jaggi) showed that the zero-distortion rate can be achieved by minimum-entropy graph coloring of an appropriate product graph. This leads to a modular architecture for functional source coding with a preprocessing "functional coding" scheme operating on top of a classical Slepian-Wolf source coding scheme. In this paper, we give a characterization of Yamamoto's rate distortion function in terms of a reconstruction function. This (non-single-letter) characterization is an extension of our previous results as well as those of Orlitsky and Roche. We obtain a modular scheme that operates with the Slepian-Wolf scheme for the problem of functional rate distortion. Further, we give an achievable rate (with a single-letter characterization) utilizing this scheme that intuitively extends our previous results.
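    The sketch below shows the scalar, zero-distortion version of the graph-coloring idea described above on a made-up example (alphabets, joint pmf, and f are assumptions): build the characteristic graph on the X alphabet, color it, and send the color; the full scheme colors product graphs and follows with Slepian-Wolf coding, which is not reproduced here.

        # Minimal sketch: characteristic-graph coloring for computing f(X, Y)
        # at a decoder that knows Y.  Toy alphabets, pmf, and function only.
        import itertools
        import math
        import networkx as nx

        X, Y = [0, 1, 2, 3], [0, 1]
        p = {(x, y): 1.0 / 8 for x in X for y in Y}   # uniform joint pmf
        f = lambda x, y: (x + y) % 2                  # decoder wants this

        # Characteristic graph on X: connect x1, x2 if some y that can co-occur
        # with both makes f(x1, y) != f(x2, y).
        G = nx.Graph()
        G.add_nodes_from(X)
        for x1, x2 in itertools.combinations(X, 2):
            if any(p[(x1, y)] > 0 and p[(x2, y)] > 0 and f(x1, y) != f(x2, y) for y in Y):
                G.add_edge(x1, x2)

        coloring = nx.coloring.greedy_color(G, strategy="largest_first")

        # Sanity check: the color of x together with y must determine f(x, y).
        table = {}
        for (x, y), prob in p.items():
            if prob > 0:
                assert table.setdefault((coloring[x], y), f(x, y)) == f(x, y)

        # Rate of the (uncoded) color description: entropy of the color in bits.
        color_pmf = {}
        for (x, y), prob in p.items():
            color_pmf[coloring[x]] = color_pmf.get(coloring[x], 0.0) + prob
        rate = -sum(q * math.log2(q) for q in color_pmf.values())
        print(f"colors used: {len(set(coloring.values()))}, H(color) = {rate:.2f} bits")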
