
    Distributed Successive Approximation Coding using Broadcast Advantage: The Two-Encoder Case

    Traditional distributed source coding rarely considers the possible link between separate encoders. However, the broadcast nature of wireless communication in sensor networks provides a free gossip mechanism which can be used to simplify encoding/decoding and reduce transmission power. Using this broadcast advantage, we present a new two-encoder scheme which imitates the ping-pong game and has a successive approximation structure. For the quadratic Gaussian case, we prove that this scheme is successively refinable on the (sum-rate, distortion) surface, which is characterized by the rate-distortion region of distributed two-encoder source coding. A potential energy saving over conventional distributed coding is also illustrated. This ping-pong distributed coding idea can be extended to the multiple-encoder case and provides the theoretical foundation for a new class of distributed image coding methods in wireless scenarios. Comment: In Proceedings of the 48th Annual Allerton Conference on Communication, Control and Computing, University of Illinois, Monticello, IL, September 29 - October 1, 2010
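
    The following minimal NumPy sketch is a hypothetical illustration of the ping-pong idea only, not the paper's construction: it assumes unit-variance jointly Gaussian sources with correlation rho, plain uniform quantizers with halving step sizes, and uses the overheard first broadcast to let encoder 2 spend its bits on the innovation of its own source.

# Hypothetical toy of a "ping-pong" successive-approximation exchange between
# two encoders that overhear each other's broadcasts (illustration only).
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, 0.95                       # assumed sample size / correlation
x1 = rng.standard_normal(n)
x2 = rho * x1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

step = 1.0
est1 = step * np.round(x1 / step)            # "ping": encoder 1 broadcasts a coarse description of x1
est2 = rho * est1                            # overheard for free -> encoder 2/decoder predict x2

for rnd in range(1, 9):
    step /= 2.0                              # successive approximation: finer step each round
    if rnd % 2 == 1:                         # "pong": encoder 2 refines its own residual
        est2 += step * np.round((x2 - est2) / step)
    else:                                    # "ping": encoder 1 refines its own residual
        est1 += step * np.round((x1 - est1) / step)
    print(f"round {rnd}: D1={np.mean((x1 - est1)**2):.5f}, "
          f"D2={np.mean((x2 - est2)**2):.5f}")

    In this toy the broadcast advantage shows up only in the initial prediction of x2 from the overheard description; the printed distortions then shrink as the step size halves in alternating rounds.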

    Separate Source-Channel Coding for Broadcasting Correlated Gaussians

    The problem of broadcasting a pair of correlated Gaussian sources using optimal separate source and channel codes is studied. Considerable performance gains over previously known separate source-channel schemes are observed. Although source-channel separation yields suboptimal performance in general, it is shown that the proposed scheme is very competitive for all bandwidth compression/expansion scenarios. In particular, in the high channel SNR regime it can be shown to achieve the optimal power-distortion tradeoff. Comment: 6 pages (with an extra proof), ISIT 2011, to appear
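
    For context only, the standard point-to-point relations below (a single Gaussian source of variance sigma^2 over an AWGN channel, not the paper's two-source broadcast setting) give the separation baseline when b channel uses are available per source sample.

% Standard textbook relations, shown only as context for the separation baseline.
\[
  D(R) = \sigma^2\, 2^{-2R}, \qquad
  C = \tfrac{1}{2}\log_2\!\bigl(1+\mathrm{SNR}\bigr),
  \qquad\Longrightarrow\qquad
  D_{\mathrm{sep}} = \sigma^2\, 2^{-2bC} = \sigma^2\,(1+\mathrm{SNR})^{-b}.
\]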

    Source-Channel Coding for the Multiple-Access Relay Channel

    This work considers reliable transmission of general correlated sources over the multiple-access relay channel (MARC) and the multiple-access broadcast relay channel (MABRC). In MARCs only the destination is interested in a reconstruction of the sources, while in MABRCs both the relay and the destination want to reconstruct the sources. We assume that both the relay and the destination have correlated side information. We find sufficient conditions for reliable communication based on operational separation, as well as necessary conditions on the achievable source-channel rate. For correlated sources transmitted over fading Gaussian MARCs and MABRCs we find conditions under which informational separation is optimal. Comment: Presented at ISWCS 2011, Aachen, Germany
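
    As a reference point, the source-coding half of an operational-separation scheme reduces to Slepian-Wolf coding with decoder side information W; the sketch below states only these standard constraints, while the paper's actual sufficient conditions additionally couple the rates to the MARC/MABRC channel region and are not reproduced here.

% Slepian-Wolf constraints for (S_1, S_2) with decoder side information W
% (source-coding half of a separation-based scheme; channel conditions omitted).
\[
  R_1 \ge H(S_1 \mid S_2, W), \qquad
  R_2 \ge H(S_2 \mid S_1, W), \qquad
  R_1 + R_2 \ge H(S_1, S_2 \mid W).
\]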

    Approximate Decoding Approaches for Network Coded Correlated Data

    This paper considers a framework where data from correlated sources are transmitted with the help of network coding in ad-hoc network topologies. The correlated data are encoded independently at the sensors, and network coding is employed at the intermediate nodes in order to improve the data delivery performance. In such settings, we focus on the problem of reconstructing the sources at the decoder when perfect decoding is not possible due to losses or bandwidth bottlenecks. We first show that the similarity of the source data can be exploited at the decoder to permit decoding based on a novel and simple approximate decoding scheme. We analyze the influence of the network coding parameters, and in particular the size of the finite coding field, on the decoding performance. We further determine the optimal field size that maximizes the expected decoding performance as a trade-off between the information loss incurred by limiting the resolution of the source data and the error probability in the reconstructed data. Moreover, we show that the performance of the approximate decoding improves when the accuracy of the source model increases, even with simple approximate decoding techniques. We provide illustrative examples of possible deployments of our algorithms in sensor networks and distributed imaging applications. In both cases, the experimental results confirm the validity of our analysis and demonstrate the benefits of our low-complexity solution for the delivery of correlated data sources.
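
    The toy below is a hypothetical sketch of the approximate-decoding idea, not the authors' algorithm or their field-size analysis: three correlated sources are mixed over a small prime field GF(p), only two coded packets arrive, and the decoder enumerates the consistent solutions and keeps the one that best matches a simple similarity prior (neighbouring sources take nearby values).

# Hypothetical toy: approximate decoding of network-coded correlated data when
# the system of received combinations is underdetermined.
import itertools
import numpy as np

p = 13                                        # assumed (small) prime coding field size
rng = np.random.default_rng(1)
s_true = np.array([6, 7, 7])                  # correlated sources: similar values
G = rng.integers(1, p, size=(2, 3))           # only 2 of the 3 needed combinations arrive
y = G @ s_true % p                            # received network-coded packets

def similarity_cost(s):
    """Prior: correlated sources should take nearby values."""
    return sum(abs(int(a) - int(b)) for a, b in zip(s[:-1], s[1:]))

best, best_cost = None, None
for cand in itertools.product(range(p), repeat=3):
    cand = np.array(cand)
    if np.all(G @ cand % p == y):             # consistent with the received packets
        c = similarity_cost(cand)
        if best_cost is None or c < best_cost:
            best, best_cost = cand, c

print("true sources       :", s_true)
print("approximate decode :", best)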

    On Joint Source-Channel Coding for Correlated Sources Over Multiple-Access Relay Channels

    We study the transmission of correlated sources over discrete memoryless (DM) multiple-access relay channels (MARCs), in which both the relay and the destination have access to side information arbitrarily correlated with the sources. As the optimal transmission scheme is an open problem, in this work we propose a new joint source-channel coding scheme based on a novel combination of the correlation preserving mapping (CPM) technique with Slepian-Wolf (SW) source coding, and obtain the corresponding sufficient conditions. The proposed coding scheme is based on the decode-and-forward strategy, and utilizes CPM for encoding information simultaneously to the relay and the destination, whereas the cooperation information from the relay is encoded via SW source coding. It is shown that there are cases in which the new scheme strictly outperforms the schemes available in the literature. This is the first instance of a source-channel code that uses CPM for encoding information to two different nodes (relay and destination). In addition to sufficient conditions, we present three different sets of single-letter necessary conditions for reliable transmission of correlated sources over DM MARCs. The newly derived conditions are shown to be at least as tight as the previously known necessary conditions. Comment: Accepted to TIT
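
    For background, the correlation preserving mapping technique goes back to Cover, El Gamal and Salehi's joint source-channel code for correlated sources over a plain multiple-access channel; their sufficient conditions (with the common part omitted, and with no relay or side information) are recalled below as context only, while the paper's MARC conditions are more involved and are not reproduced here.

% Cover-El Gamal-Salehi sufficient conditions for correlated sources (S_1, S_2)
% over a two-user MAC, with inputs generated as p(x_1|s_1) p(x_2|s_2)
% (common part omitted); background for the CPM technique only.
\[
  H(S_1 \mid S_2) < I(X_1; Y \mid X_2, S_2), \qquad
  H(S_2 \mid S_1) < I(X_2; Y \mid X_1, S_1), \qquad
  H(S_1, S_2) < I(X_1, X_2; Y).
\]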

    On some new approaches to practical Slepian-Wolf compression inspired by channel coding

    This paper considers the problem, first introduced by Ahlswede and Körner in 1975, of lossless source coding with coded side information. Specifically, let X and Y be two random variables such that X is desired losslessly at the decoder while Y serves as side information. The random variables are encoded independently, and both descriptions are used by the decoder to reconstruct X. Ahlswede and Körner describe the achievable rate region in terms of an auxiliary random variable. This paper gives a partial solution for the optimal auxiliary random variable, thereby describing part of the rate region explicitly in terms of the distribution of X and Y.
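
    For reference, the Ahlswede-Körner (and Wyner) rate region for this coded side information problem can be written as a union over auxiliary random variables U forming the Markov chain U - Y - X; the paper's contribution concerns identifying the optimal such U, which the generic description below leaves implicit.

% Ahlswede-Körner / Wyner region for source coding with coded side information:
% union over auxiliary U with Markov chain U -- Y -- X.
\[
  \mathcal{R} \;=\; \bigcup_{p(u \mid y)\,:\,U - Y - X}
  \Bigl\{ (R_X, R_Y) \;:\; R_X \ge H(X \mid U),\;\; R_Y \ge I(Y; U) \Bigr\}.
\]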

    Integer-Forcing Source Coding

    Integer-Forcing (IF) is a new framework, based on compute-and-forward, for decoding multiple integer linear combinations from the output of a Gaussian multiple-input multiple-output channel. This work applies the IF approach to arrive at a new low-complexity scheme, IF source coding, for distributed lossy compression of correlated Gaussian sources under a minimum mean squared error distortion measure. All encoders use the same nested lattice codebook. Each encoder quantizes its observation using the fine lattice as a quantizer and reduces the result modulo the coarse lattice, which plays the role of binning. Rather than directly recovering the individual quantized signals, the decoder first recovers a full-rank set of judiciously chosen integer linear combinations of the quantized signals, and then inverts it. In general, the linear combinations have smaller average powers than the original signals. This makes it possible to increase the density of the coarse lattice, which in turn translates into smaller compression rates. We also propose and analyze a one-shot version of IF source coding that is simple enough to potentially lead to a new design principle for analog-to-digital converters that can exploit spatial correlations between the sampled signals. Comment: Submitted to IEEE Transactions on Information Theory
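
    The following one-dimensional sketch is a hypothetical illustration of the mechanics only (scalar fine/coarse lattices, a hand-picked unimodular integer matrix A), assuming a strongly correlated pair of sources; it demonstrates quantize-then-modulo at the encoders and recovery of integer combinations followed by inversion at the decoder, but it does not reproduce the paper's rate analysis or its nested lattice constructions.

# Hypothetical 1-D sketch of the integer-forcing source coding mechanics:
# fine "lattice" = delta*Z (quantizer), coarse lattice = L*delta*Z (binning).
import numpy as np

rng = np.random.default_rng(2)
n, delta, L = 8, 0.25, 256                    # samples, fine step, coarse/fine ratio
x1 = 5.0 * rng.standard_normal(n)
x2 = x1 + 0.3 * rng.standard_normal(n)        # strongly correlated with x1

q = np.stack([np.round(x1 / delta),
              np.round(x2 / delta)]).astype(int)   # fine-lattice indices (2 x n)
s = q % L                                     # what the encoders transmit (mod-lattice binning)

A = np.array([[1, -1],                        # row 1 targets the small difference q1 - q2
              [0,  1]])                       # unimodular (det = 1) => integer inverse
A_inv = np.array([[1, 1],
                  [0, 1]])                    # integer inverse of A

t = (A @ s) % L                               # integer combinations, still reduced mod the coarse lattice
t = ((t + L // 2) % L) - L // 2               # unwrap; valid when |(A @ q)| < L/2 entrywise
q_hat = A_inv @ t                             # invert the integer matrix

print("exact recovery of the quantized signals:", np.array_equal(q_hat, q))
print("reconstruction MSE:", np.mean((np.stack([x1, x2]) - delta * q_hat) ** 2))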