
    Asymptotically Optimal Joint Source-Channel Coding with Minimal Delay

    We present and analyze a joint source-channel coding strategy for the transmission of a Gaussian source across a Gaussian channel in n channel uses per source symbol. Among all such strategies, our scheme has the following properties: i) the resulting mean-squared error scales optimally with the signal-to-noise ratio, and ii) the scheme is easy to implement and the incurred delay is minimal, in the sense that a single source symbol is encoded at a time. Comment: 5 pages, 1 figure, final version accepted at IEEE Globecom 2009 (Communication Theory Symposium)
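    To make the SNR scaling concrete, the sketch below is a toy Monte Carlo of a hybrid scheme in the same spirit (not the paper's exact construction): each Gaussian source symbol is encoded in n = 2 channel uses by sending a scaled uniform quantization of the symbol followed by its quantization error uncoded. The quantizer resolution, power normalizations, and clipping range are all assumptions made for the demo.

        import numpy as np

        rng = np.random.default_rng(0)

        def mse_hybrid(snr_db, n_sym=200_000):
            """Toy 2-channel-use hybrid scheme: quantize, then send the error uncoded."""
            snr = 10 ** (snr_db / 10)                 # channel SNR with unit noise power
            levels = max(3, int(np.sqrt(snr)))        # quantizer resolution grows with SNR (assumption)
            step = 8.0 / levels                       # uniform quantizer covering about +/- 4 sigma
            x = rng.standard_normal(n_sym)            # unit-variance Gaussian source
            q = np.clip(np.round(x / step), -(levels // 2), levels // 2) * step
            err = x - q
            g1, g2 = np.sqrt(snr / np.var(q)), np.sqrt(snr / np.var(err))
            y1 = g1 * q + rng.standard_normal(n_sym)    # channel use 1: quantized value
            y2 = g2 * err + rng.standard_normal(n_sym)  # channel use 2: quantization error
            q_hat = np.clip(np.round(y1 / g1 / step), -(levels // 2), levels // 2) * step
            x_hat = q_hat + y2 / g2
            return np.mean((x - x_hat) ** 2)

        for snr_db in (10, 20, 30):
            print(snr_db, "dB ->", mse_hybrid(snr_db))

    With these assumptions the printed MSE should fall by roughly two orders of magnitude per 10 dB of SNR, i.e., close to the SNR^-2 scaling the abstract refers to.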

    A Tight Bound on the Performance of a Minimal-Delay Joint Source-Channel Coding Scheme

    An analog source is to be transmitted across a Gaussian channel in more than one channel use per source symbol. This paper derives a lower bound on the asymptotic mean squared error for a strategy that consists of repeatedly quantizing the source, transmitting the quantizer outputs in the first channel uses, and sending the remaining quantization error uncoded in the last channel use. The bound coincides with the performance achieved by a suboptimal decoder studied by the authors in a previous paper, thereby establishing that the bound is tight. Comment: 5 pages, submitted to IEEE International Symposium on Information Theory (ISIT) 201
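    For reference, the separation-based benchmark that such bounds are usually measured against is the optimum distortion of a variance-sigma^2 Gaussian source sent over an AWGN channel in n channel uses per source symbol (a standard fact, stated here only as context):

        D_{\mathrm{opt}}(\mathrm{SNR}) \;=\; \sigma^2\,(1+\mathrm{SNR})^{-n},
        \qquad\text{from } R(D)=\tfrac{1}{2}\log_2\!\frac{\sigma^2}{D}
        \ \text{and}\ C=\tfrac{1}{2}\log_2(1+\mathrm{SNR}) \text{ per channel use.}

    Minimal-delay schemes of the kind described in the previous abstract attain the same SNR exponent n, though generally not the same constant, which is exactly the gap this type of bound pins down.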

    Energy Management Policies for Energy-Neutral Source-Channel Coding

    In cyber-physical systems where sensors measure the temporal evolution of a given phenomenon of interest and radio communication takes place over short distances, the energy spent for source acquisition and compression may be comparable with that used for transmission. Additionally, in order to avoid limited-lifetime issues, sensors may be powered via energy harvesting and thus collect all the energy they need from the environment. This work addresses the problem of energy allocation over source acquisition/compression and transmission for energy-harvesting sensors. First, focusing on a single sensor, energy management policies are identified that guarantee a maximum average distortion while at the same time ensuring the stability of the queue connecting the source and channel encoders. It is shown that the identified class of policies is optimal in the sense that it stabilizes the queue whenever this is feasible by any other technique that satisfies the same average distortion constraint. Moreover, this class of policies performs an independent resource optimization for the source and channel encoders. Analog transmission techniques, as well as suboptimal strategies that do not use the energy buffer (battery) or use it only for adapting either the source or the channel encoder energy allocation, are also studied for performance comparison. The problem of optimizing the desired trade-off between average distortion and delay is then formulated and solved via dynamic programming tools. Finally, a system with multiple sensors is considered, and time-division scheduling strategies are derived that are able to maintain the stability of all data queues and to meet the average distortion constraints at all sensors whenever this is feasible. Comment: Submitted to IEEE Transactions on Communications in March 2011; last update in July 201
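    As a rough illustration of the kind of constraint at play (a schematic condition with illustrative notation, not the paper's theorem): with the average harvested energy split between source-encoding energy and transmission energy, the data queue can only be kept stable at average distortion D-bar if the average bit rate produced by the source encoder does not exceed the average rate the channel can carry, i.e.

        R(\bar{D}) \;\le\; C\!\left(\bar{E}_{\mathrm{tx}}\right), \qquad
        \bar{E}_{\mathrm{s}} + \bar{E}_{\mathrm{tx}} \;\le\; \bar{E}_{\mathrm{harv}},

    where R(.) is the rate needed to meet the distortion target and C(.) the rate sustainable with the given transmission energy; all symbols here are assumptions made for the sketch.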

    Joint Wyner-Ziv/Dirty Paper coding by modulo-lattice modulation

    The combination of source coding with decoder side-information (the Wyner-Ziv problem) and channel coding with encoder side-information (the Gel'fand-Pinsker problem) can be optimally solved using the separation principle. In this work we show an alternative scheme for the quadratic-Gaussian case, which merges source and channel coding. This scheme achieves the optimal performance by applying modulo-lattice modulation to the analog source. It thus avoids the complexity of quantization and channel decoding, leaving only the task of "shaping". Furthermore, for high signal-to-noise ratio (SNR), the scheme approaches the optimal performance using an SNR-independent encoder, and is therefore robust to an unknown SNR at the encoder. Comment: Submitted to IEEE Transactions on Information Theory. Presented in part at ISIT-2006, Seattle. New version after review
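    The sketch below illustrates the basic modulo-lattice idea in its simplest scalar ("modulo-Delta") form with decoder side information. It is an illustration of the principle, not the paper's exact Wyner-Ziv/dirty-paper construction, and all parameter values (Delta, beta, noise levels) are assumptions made for the demo: the transmit power stays within the modulo cell regardless of the source power, while the decoder folds its side information back in to recover the source.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        Delta, beta = 5.0, 2.0                     # modulo cell size and source scaling (assumptions)

        def mod_delta(v):
            # centered modulo reduction onto [-Delta/2, Delta/2)
            return (v + Delta / 2) % Delta - Delta / 2

        s = 10.0 * rng.standard_normal(n)          # high-power analog source
        side = s + 0.3 * rng.standard_normal(n)    # decoder side information (Wyner-Ziv setting)
        d = rng.uniform(-Delta / 2, Delta / 2, n)  # dither shared by encoder and decoder

        x = mod_delta(beta * s + d)                # transmit power bounded by the modulo cell
        y = x + 0.1 * rng.standard_normal(n)       # AWGN channel

        v = mod_delta(y - d - beta * side)         # = beta*(s - side) + noise, if inside the cell
        s_hat = side + v / beta                    # fold the side information back in

        print("transmit power      :", np.mean(x ** 2))          # about Delta^2 / 12
        print("MSE of side info    :", np.mean((s - side) ** 2))  # about 0.09
        print("MSE of modulo scheme:", np.mean((s - s_hat) ** 2)) # about (0.1 / beta)^2

    The point of the comparison is that the modulo operation removes the part of the source the decoder already knows, so no quantization or channel decoding is needed, only the (here trivial, scalar) shaping of the transmitted signal.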

    Joint source-channel coding with feedback

    This paper quantifies the fundamental limits of variable-length transmission of a general (possibly analog) source over a memoryless channel with noiseless feedback, under a distortion constraint. We consider excess distortion, average distortion and guaranteed distortion (d-semifaithful codes). In contrast to the asymptotic fundamental limit, a general conclusion is that allowing variable-length codes and feedback leads to a sizable improvement in the fundamental delay-distortion tradeoff. In addition, we investigate the minimum energy required to reproduce k source samples with a given fidelity after transmission over a memoryless Gaussian channel, and we show that the required minimum energy is reduced with feedback and an average (rather than maximal) power constraint. Comment: To appear in IEEE Transactions on Information Theory
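    As context for the minimum-energy question, the classical asymptotic benchmark (no feedback, wideband AWGN with noise spectral density N_0/2) is that reproducing k samples of a variance-sigma^2 Gaussian source within mean-squared distortion D requires energy on the order of

        E_{\min}(k, D) \;\approx\; k\, R(D)\, N_0 \ln 2, \qquad
        R(D) \;=\; \tfrac{1}{2}\log_2\!\frac{\sigma^2}{D},

    since the minimum energy per reliably delivered bit over an AWGN channel is N_0 ln 2. The non-asymptotic behaviour of this quantity, and how feedback and average power constraints improve it, is what the abstract addresses.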

    Zero-Delay Rate Distortion via Filtering for Vector-Valued Gaussian Sources

    We deal with zero-delay source coding of a vector-valued Gauss-Markov source subject to a mean-squared error (MSE) fidelity criterion, characterized by the operational zero-delay vector-valued Gaussian rate distortion function (RDF). We address this problem by considering the nonanticipative RDF (NRDF), which is a lower bound to the causal optimal performance theoretically attainable (OPTA) function and to the operational zero-delay RDF. We recall the realization that corresponds to the optimal "test channel" of the Gaussian NRDF when considering a vector Gauss-Markov source subject to an MSE distortion over a finite time horizon. Then, we introduce sufficient conditions for the existence of a solution to this problem over the infinite time horizon. For the asymptotic regime, we use the asymptotic characterization of the Gaussian NRDF to provide a new equivalent realization scheme with feedback, characterized by a resource allocation (reverse-waterfilling) problem across the dimensions of the vector source. We leverage the new realization to derive a predictive coding scheme via lattice quantization with subtractive dither and joint memoryless entropy coding. This coding scheme offers an upper bound to the operational zero-delay vector-valued Gaussian RDF. With scalar quantization, for "r" active dimensions of the vector Gauss-Markov source, the gap between the obtained lower and theoretical upper bounds is at most 0.254r + 1 bits/vector. We further show that with vector quantization and infinite-dimensional Gauss-Markov sources, this gap becomes negligible, i.e., the Gaussian NRDF approximates the operational zero-delay Gaussian RDF. We also extend our results to vector-valued Gaussian sources with any finite memory under mild conditions. Our theoretical framework is demonstrated with illustrative numerical experiments. Comment: 32 pages, 9 figures, published in IEEE Journal of Selected Topics in Signal Processing
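    The resource-allocation step mentioned above is, in its generic form, a reverse-waterfilling problem. The sketch below shows that generic allocation for a Gaussian vector with per-dimension variances lam and an MSE budget D; the function name, the example numbers, and the bisection setup are assumptions for illustration, not the paper's NRDF characterization itself.

        import numpy as np

        def reverse_waterfill(lam, D, iters=100):
            """Generic reverse waterfilling: distortions D_i = min(theta, lam_i), sum D_i = D."""
            lam = np.asarray(lam, dtype=float)
            lo, hi = 0.0, max(D, lam.max())           # bracket the water level theta
            for _ in range(iters):                    # bisection on theta
                theta = 0.5 * (lo + hi)
                if np.minimum(theta, lam).sum() > D:
                    hi = theta
                else:
                    lo = theta
            theta = 0.5 * (lo + hi)
            Di = np.minimum(theta, lam)                       # per-dimension distortions
            rates = 0.5 * np.log2(np.maximum(lam / Di, 1.0))  # bits per dimension
            return Di, rates

        Di, rates = reverse_waterfill([4.0, 1.0, 0.25], D=1.5)
        print(Di, rates, rates.sum())

    Only the dimensions whose variance exceeds the water level receive positive rate; the number of such dimensions plays the role of the "r" active dimensions in the gap quoted in the abstract.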

    Source-Channel Diversity for Parallel Channels

    We consider transmitting a source across a pair of independent, non-ergodic channels with random states (e.g., slow fading channels) so as to minimize the average distortion. The general problem is unsolved. Hence, we focus on comparing two commonly used source and channel encoding systems, which correspond to exploiting diversity either at the physical layer through parallel channel coding or at the application layer through multiple description source coding. For on-off channel models, source coding diversity offers better performance. For channels with a continuous range of reception quality, we show the reverse is true. Specifically, we introduce a new figure of merit called the distortion exponent, which measures how fast the average distortion decays with SNR. For continuous-state models such as additive white Gaussian noise channels with multiplicative Rayleigh fading, optimal channel coding diversity at the physical layer is more efficient than source coding diversity at the application layer, in that the former achieves a better distortion exponent. Finally, we consider a third decoding architecture: multiple description encoding with joint source-channel decoding. We show that this architecture achieves the same distortion exponent as systems with optimal channel coding diversity for continuous-state channels, and maintains the advantages of multiple description systems for on-off channels. Thus, the multiple description system with joint decoding achieves the best performance among the three architectures considered, on both continuous-state and on-off channels. Comment: 48 pages, 14 figures
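    The distortion exponent referred to above is conventionally defined as the high-SNR decay rate of the expected end-to-end distortion (a standard definition, restated here only for context):

        \Delta \;=\; -\lim_{\mathrm{SNR}\to\infty}
        \frac{\log \mathbb{E}\,[D(\mathrm{SNR})]}{\log \mathrm{SNR}},

    so that the average distortion decays roughly as SNR^{-Delta}, and a larger exponent means a faster decay.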