10 research outputs found

    Distributed Functional Scalar Quantization Simplified

    Distributed functional scalar quantization (DFSQ) theory provides optimality conditions and predicts the performance of data acquisition systems in which a computation on the acquired data is desired. We address two limitations of previous works: prohibitively expensive decoder design and a restriction to sources with bounded distributions. We rigorously show that a much simpler decoder has asymptotic performance equivalent to that of the conditional expectation estimator previously explored, thus reducing decoder design complexity. The simpler decoder has the feature of decoupled communication and computation blocks. Moreover, we extend the DFSQ framework with the simpler decoder to acquire sources with infinite-support distributions such as Gaussian or exponential distributions. Finally, through simulation results we demonstrate that performance at moderate coding rates is well predicted by the asymptotic analysis, and we give new insight into the rate of convergence.
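
    The contrast between the two decoders can be illustrated numerically. The sketch below is a minimal Monte Carlo comparison, not the paper's construction: it assumes two i.i.d. Gaussian sources, an illustrative function g(x1, x2) = x1 + x2^2, and plain 4-bit uniform scalar quantizers. The "simpler" decoder applies g directly to the reproduction values, while the conditional-expectation decoder tabulates an estimate of E[g(X1, X2) | quantizer indices] from training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not the paper's construction): two i.i.d. standard
# Gaussian sources, the function g(x1, x2) = x1 + x2**2, and 4-bit uniform
# scalar quantizers clipped to [-4, 4].
R = 4
levels = 2 ** R
edges = np.linspace(-4.0, 4.0, levels + 1)
reps = 0.5 * (edges[:-1] + edges[1:])            # cell midpoints

def quantize(x):
    """Map samples to cell indices of the uniform quantizer."""
    return np.clip(np.digitize(x, edges) - 1, 0, levels - 1)

def g(x1, x2):
    return x1 + x2 ** 2

# Training data, used only by the conditional-expectation decoder.
n = 200_000
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
i1, i2 = quantize(x1), quantize(x2)

# Conditional-expectation decoder: tabulate E[g(X1, X2) | indices] per index pair.
table = np.zeros((levels, levels))
counts = np.zeros((levels, levels))
np.add.at(table, (i1, i2), g(x1, x2))
np.add.at(counts, (i1, i2), 1)
table = np.divide(table, counts, out=np.zeros_like(table), where=counts > 0)

# Fresh data for evaluation.
y1, y2 = rng.standard_normal(n), rng.standard_normal(n)
j1, j2 = quantize(y1), quantize(y2)
truth = g(y1, y2)

simple = g(reps[j1], reps[j2])    # simpler decoder: apply g to reproduction values
cond = table[j1, j2]              # conditional-expectation decoder

print("MSE, simple decoder    :", np.mean((truth - simple) ** 2))
print("MSE, cond.-expectation :", np.mean((truth - cond) ** 2))
```

    Even at this moderate rate the two MSE values are close, which is the kind of behavior the asymptotic equivalence described in the abstract would suggest.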

    On the Number of Bins in Equilibria for Signaling Games

    We investigate the equilibrium behavior of the decentralized quadratic cheap talk problem, in which an encoder and a decoder, viewed as two decision makers, have misaligned objective functions. In prior work, we have shown that the number of bins under any equilibrium has to be at most countable, generalizing a classical result due to Crawford and Sobel, who considered sources with density supported on [0,1]. In this paper, we refine this result in the context of exponential and Gaussian sources. For exponential sources, a relation between the upper bound on the number of bins and the misalignment in the objective functions is derived, the equilibrium costs are compared, and it is shown that there also exist equilibria with infinitely many bins under certain parametric assumptions. For Gaussian sources, it is shown that there exist equilibria with infinitely many bins. (Comment: 25 pages, single column.)
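
    For orientation, the classical Crawford-Sobel benchmark that this work generalizes can be worked out explicitly. The sketch below assumes the textbook case referenced above (a Uniform[0,1] source, quadratic costs, sender bias b), not the exponential or Gaussian settings studied in the paper: equilibrium bin boundaries satisfy a_{i+1} = 2*a_i - a_{i-1} + 4*b, and an N-bin equilibrium exists only if 2*b*N*(N-1) < 1, so the number of bins is finite for any b > 0.

```python
def cs_partition(b, N):
    """Bin boundaries of an N-bin equilibrium in the classical Crawford-Sobel
    model (Uniform[0,1] source, quadratic costs, sender bias b), using the
    closed form of the recursion a_{i+1} = 2*a_i - a_{i-1} + 4*b.
    Returns None if no N-bin equilibrium exists, i.e. if 2*b*N*(N-1) >= 1."""
    if 2 * b * N * (N - 1) >= 1:
        return None
    t = (1 - 2 * b * N * (N - 1)) / N          # length of the first bin
    return [i * t + 2 * b * i * (i - 1) for i in range(N + 1)]

def max_bins(b):
    """Largest number of bins supported in equilibrium for bias b > 0."""
    N = 1
    while 2 * b * (N + 1) * N < 1:
        N += 1
    return N

b = 0.01
N = max_bins(b)
print("max bins:", N)                          # grows like sqrt(1 / (2*b)) as b -> 0
print("boundaries:", [round(a, 4) for a in cs_partition(b, N)])
```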

    Codecell convexity in optimal entropy-constrained vector quantization

    A Framework for Control System Design Subject to Average Data-Rate Constraints

    Do optimal entropy-constrained quantizers have a finite or infinite number of codewords?

    An entropy-constrained quantizer Q is optimal if it minimizes the expected distortion D(Q) subject to a constraint on the output entropy H(Q). In this paper we use the Lagrangian formulation to show the existence and study the structure of optimal entropy-constrained quantizers that achieve a point on the lower convex hull of the operational distortion-rate function D_h(R) = inf{D(Q) : H(Q) ≤ R}. In general, an optimal entropy-constrained quantizer may have a countably infinite number of codewords.
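
    The Lagrangian formulation mentioned here also underlies a practical design loop. The sketch below is a minimal entropy-constrained scalar quantizer design in the spirit of the Chou-Lookabaugh-Gray iteration, with all settings (Gaussian training data, initial codebook size, lambda values) chosen only for illustration: samples are assigned by a Lagrangian nearest-neighbor rule (squared error plus lambda times codeword length), and codewords whose cells empty out are dropped, so larger lambda prunes the codebook.

```python
import numpy as np

def design_ecsq(samples, k, lam, iters=100, seed=0):
    """Entropy-constrained scalar quantizer design via the Lagrangian cost
    D + lam * R on a training set (a sketch in the spirit of the
    Chou-Lookabaugh-Gray iteration).  Empty cells are dropped, so the
    final codebook may be smaller than the initial size k."""
    rng = np.random.default_rng(seed)
    codewords = np.sort(rng.choice(samples, size=k, replace=False))
    probs = np.full(k, 1.0 / k)
    for _ in range(iters):
        lengths = -np.log2(probs)                         # ideal codeword lengths
        cost = (samples[:, None] - codewords[None, :]) ** 2 + lam * lengths[None, :]
        assign = np.argmin(cost, axis=1)                  # Lagrangian nearest neighbor
        used = np.unique(assign)
        codewords = np.array([samples[assign == j].mean() for j in used])
        probs = np.array([np.mean(assign == j) for j in used])
    # Evaluate the designed quantizer: distortion and output entropy.
    cost = (samples[:, None] - codewords[None, :]) ** 2 - lam * np.log2(probs)[None, :]
    assign = np.argmin(cost, axis=1)
    distortion = np.mean((samples - codewords[assign]) ** 2)
    counts = np.bincount(assign, minlength=len(codewords))
    p = counts[counts > 0] / len(samples)
    entropy = -np.sum(p * np.log2(p))
    return codewords, distortion, entropy

samples = np.random.default_rng(1).standard_normal(20_000)
for lam in (0.01, 0.1, 1.0):
    cw, D, H = design_ecsq(samples, k=32, lam=lam)
    print(f"lam={lam:<4}  codewords={len(cw):2d}  D={D:.4f}  H={H:.2f} bits")
```

    A finite training-set design like this cannot, of course, settle the title question of whether the truly optimal quantizer needs finitely or infinitely many codewords; it only illustrates the Lagrangian trade-off the abstract refers to.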

    Packetized Predictive Control of Stochastic Systems Over Bit-Rate Limited Channels With Packet Loss

    Quantization in acquisition and computation networks

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 151-165).

    In modern systems, it is often desirable to extract relevant information from large amounts of data collected at different spatial locations. Applications include sensor networks, wearable health-monitoring devices and a variety of other systems for inference. Several existing source coding techniques, such as Slepian-Wolf and Wyner-Ziv coding, achieve asymptotic compression optimality in distributed systems. However, these techniques are rarely used in sensor networks because of decoding complexity and prohibitively long code length. Moreover, the fundamental limits that arise from existing techniques are intractable to describe for a complicated network topology or when the objective of the system is to perform some computation on the data rather than to reproduce the data. This thesis bridges the technological gap between the needs of real-world systems and the optimistic bounds derived from asymptotic analysis. Specifically, we characterize fundamental trade-offs when the desired computation is incorporated into the compression design and the code length is one. To obtain both performance guarantees and achievable schemes, we use high-resolution quantization theory, which is complementary to the Shannon-theoretic analyses previously used to study distributed systems. We account for varied network topologies, such as those where sensors are allowed to collaborate or the communication links are heterogeneous. In these settings, a small amount of intersensor communication can provide a significant improvement in compression performance. As a result, this work suggests new compression principles and network design for modern distributed systems.

    Although the ideas in the thesis are motivated by current and future sensor network implementations, the framework applies to a wide range of signal processing questions. We draw connections between the fidelity criteria studied in the thesis and distortion measures used in perceptual coding. As a consequence, we determine the optimal quantizer for expected relative error (ERE), a measure that is widely useful but is often neglected in the source coding community. We further demonstrate that applying the ERE criterion to psychophysical models can explain the Weber-Fechner law, a longstanding hypothesis of how humans perceive the external world. Our results are consistent with the hypothesis that human perception is Bayesian optimal for information acquisition conditioned on limited cognitive resources, thereby supporting the notion that the brain is efficient at acquisition and adaptation.

    By John Z. Sun. Ph.D.
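
    The ERE point in the abstract can be illustrated with a small numerical experiment. The sketch below is only an illustration under assumed settings (a lognormal source, 6-bit quantizers), not the thesis's derivation: it compares a quantizer that is uniform in the linear domain against one that is uniform in the log domain, i.e. logarithmic companding, under the relative squared error ((x - xhat)/x)^2. The log-domain quantizer makes cell widths scale with magnitude, which is the Weber-Fechner-style behavior the abstract connects to ERE-optimal acquisition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, purely illustrative settings: a lognormal source, 6-bit quantizers,
# and expected relative error E[((X - Xhat)/X)^2] as the fidelity criterion.
x = rng.lognormal(mean=0.0, sigma=1.0, size=500_000)
bits = 6
levels = 2 ** bits

def uniform_quantizer(v, lo, hi, levels):
    """Uniform quantizer on [lo, hi] with reproduction at cell midpoints."""
    edges = np.linspace(lo, hi, levels + 1)
    idx = np.clip(np.digitize(v, edges) - 1, 0, levels - 1)
    return 0.5 * (edges[idx] + edges[idx + 1])

# Quantizer A: uniform cells in the linear domain.
xhat_lin = uniform_quantizer(x, x.min(), x.max(), levels)

# Quantizer B: uniform cells in the log domain (logarithmic companding),
# so cell widths grow in proportion to the signal magnitude.
xhat_log = np.exp(uniform_quantizer(np.log(x), np.log(x.min()), np.log(x.max()), levels))

ere = lambda xhat: np.mean(((x - xhat) / x) ** 2)
print("ERE, linear-uniform quantizer:", ere(xhat_lin))
print("ERE, log-companded quantizer :", ere(xhat_log))
```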