
    Sampling versus Random Binning for Multiple Descriptions of a Bandlimited Source

    Random binning is an efficient, yet complex, coding technique for the symmetric L-description source coding problem. We propose an alternative approach that uses the quantized samples of a bandlimited source as "descriptions". By the Nyquist condition, the source can be reconstructed if enough samples are received. We examine a coding scheme that combines sampling and noise-shaped quantization for a scenario in which either only K < L descriptions or all L descriptions are received. Some of the received K-sets of descriptions correspond to uniform sampling, while others correspond to non-uniform sampling. This scheme achieves the optimum rate-distortion performance for uniform-sampling K-sets but suffers noise amplification for non-uniform-sampling K-sets. We then show that by increasing the sampling rate and adding a random-binning stage, the optimal operating point is achieved for any K-set.
    Comment: Presented at ITW'13. 5 pages, two-column mode, 3 figures.
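    As a rough illustration of the noise-amplification effect mentioned above, the sketch below reconstructs a periodic bandlimited signal by least squares from K of its L per-period samples and compares a uniform K-set with a bunched, non-uniform one. The periodic signal model, the parameters W, L, K and the least-squares reconstruction are assumptions of this illustration, not the paper's noise-shaped quantization scheme.

```python
import numpy as np

# A minimal sketch (not the paper's scheme): least-squares reconstruction of a
# periodic bandlimited signal from K of its L per-period samples, comparing a
# uniform K-set with a bunched, non-uniform K-set. W, L, K are illustrative.

W = 3        # harmonics -W..W, i.e. 2W+1 = 7 unknown Fourier coefficients
L = 16       # total number of samples per period (one per description)
K = 8        # number of received descriptions/samples, K >= 2W+1

def noise_amplification(kept):
    """Noise gain of least-squares reconstruction from the samples indexed by
    `kept` (a subset of 0..L-1), normalised so that uniform sampling gives 1."""
    t = np.asarray(kept) / L                        # sample times in [0, 1)
    k = np.arange(-W, W + 1)                        # retained harmonics
    A = np.exp(2j * np.pi * np.outer(t, k))         # samples = A @ coefficients
    G = np.linalg.inv(A.conj().T @ A)               # estimator covariance / sigma^2
    return np.real(np.trace(G)) * len(t) / len(k)   # = 1 when A^H A = K * I

uniform_set    = range(0, L, L // K)   # every other sample: uniform K-set
nonuniform_set = range(K)              # K consecutive samples: non-uniform K-set

print("uniform K-set noise amplification:     %.2f" % noise_amplification(uniform_set))
print("non-uniform K-set noise amplification: %.2e" % noise_amplification(nonuniform_set))
```

    Running the sketch gives a normalised gain of 1 for the uniform K-set and a much larger gain for the bunched K-set, which is the kind of noise amplification the added random-binning stage is meant to avoid.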

    Optimal Causal Rate-Constrained Sampling of the Wiener Process

    We consider the following communication scenario. An encoder causally observes the Wiener process and decides when and what to transmit about it. A decoder produces a real-time estimate of the process using the causally received codewords. We determine the causal encoding and decoding policies that jointly minimize the mean-square estimation error under a long-term communication rate constraint of R bits per second. We show that an optimal encoding policy can be implemented as a causal sampling policy followed by a causal compressing policy. We prove that the optimal encoding policy samples the Wiener process once the innovation crosses either √(1/R) or −√(1/R), and compresses the sign of the innovation (SOI) into a 1-bit codeword. The SOI coding scheme achieves the operational distortion-rate function D^(op)(R) = 1/(6R). Surprisingly, this is significantly better than the distortion-rate tradeoff achieved in the limit of infinite delay by the best non-causal code. This is because the SOI coding scheme leverages the free timing information supplied by the zero-delay channel between the encoder and the decoder; the key to unlocking that gain is the event-triggered nature of the SOI sampling policy. In contrast, the distortion-rate tradeoffs achieved with deterministic sampling policies are much worse: we prove that the causal informational distortion-rate function in that scenario is as high as D_(DET)(R) = 5/(6R), achieved by the uniform sampling policy with sampling interval 1/R. In either case, the optimal strategy is to sample the process as fast as possible and to transmit 1-bit codewords to the decoder without delay.
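    The stated performance of the SOI scheme lends itself to a quick numerical check. The sketch below simulates a discretised Wiener process, applies the threshold rule at ±√(1/R) with 1-bit sign codewords, and compares the time-averaged distortion and empirical rate with 1/(6R) and R; the step size, horizon and variable names are assumptions of this illustration, and discretisation introduces a small bias relative to the continuous-time result.

```python
import numpy as np

# A minimal Monte Carlo sketch of the SOI scheme described above; the time step
# `dt`, horizon `T`, and rate `R` are illustrative, and discretisation adds a
# small bias relative to the continuous-time result 1/(6R).

rng = np.random.default_rng(0)
R = 4.0                        # target average rate, bits per second
a = np.sqrt(1.0 / R)           # innovation threshold +/- sqrt(1/R)
dt = 1e-3                      # simulation step (seconds)
T = 500.0                      # simulation horizon (seconds)

w = 0.0                        # Wiener process sample path
w_hat = 0.0                    # decoder's running estimate
sq_err = 0.0                   # integral of the squared error
bits = 0                       # number of 1-bit SOI codewords sent

for _ in range(int(T / dt)):
    w += rng.normal(0.0, np.sqrt(dt))      # Brownian increment
    innovation = w - w_hat
    if abs(innovation) >= a:               # event-triggered sampling rule
        w_hat += np.sign(innovation) * a   # decoder applies the sign codeword
        bits += 1
    sq_err += (w - w_hat) ** 2 * dt

print("empirical rate       : %.3f bits/s (target %.1f)" % (bits / T, R))
print("empirical distortion : %.4f" % (sq_err / T))
print("theory 1/(6R)        : %.4f" % (1.0 / (6.0 * R)))
```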

    Distortion-Rate Function of Sub-Nyquist Sampled Gaussian Sources

    The amount of information lost in sub-Nyquist sampling of a continuous-time Gaussian stationary process is quantified. We consider a combined source coding and sub-Nyquist reconstruction problem in which the input to the encoder is a noisy, sub-Nyquist-sampled version of the analog source. We first derive an expression for the mean squared error in the reconstruction of the process from a noisy and information-rate-limited version of its samples. This expression is a function of the sampling frequency and the average number of bits describing each sample, and is given as the sum of two terms: the minimum mean squared error in estimating the source from its noisy but otherwise fully observed sub-Nyquist samples, and a second term obtained by reverse waterfilling over an average of the spectral densities associated with the polyphase components of the source. We extend this result to multi-branch uniform sampling, where the samples are available through a set of parallel channels, each with a uniform sampler and a pre-sampling filter. We then optimize over the pre-sampling filters to further reduce the distortion, and find an optimal set of filters associated with the statistics of the input signal and the sampling frequency. This yields an expression for the minimal distortion achievable under any analog-to-digital conversion scheme involving uniform sampling and linear filtering. These results thus unify the Shannon-Whittaker-Kotelnikov sampling theorem and Shannon rate-distortion theory for Gaussian sources.
    Comment: Accepted for publication in the IEEE Transactions on Information Theory.
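    The reverse-waterfilling step in the expression above can be illustrated in isolation. The sketch below bisects on the water level θ so that the rate integral of (1/2)log(S(f)/θ) over the frequencies where S(f) > θ matches a target rate, and reports the distortion integral of min(S(f), θ). The flat example PSD is an assumption of this illustration; the paper's full expression (the MMSE term plus waterfilling over the averaged polyphase spectra) is not reproduced here.

```python
import numpy as np

# A minimal sketch of the classical reverse-waterfilling step used in the
# expression above, applied to an illustrative flat PSD; the paper applies it
# to an averaged spectral density of the polyphase components (not shown here).

def reverse_waterfill(psd, freqs, rate):
    """Bisect on the water level theta so that the integral of
    0.5*log2(psd/theta) over {psd > theta} equals `rate` (bits per second),
    and return the resulting distortion, the integral of min(psd, theta)."""
    df = freqs[1] - freqs[0]
    def rate_at(theta):
        return np.sum(0.5 * np.log2(np.maximum(psd / theta, 1.0))) * df
    lo, hi = 1e-12, float(psd.max())
    for _ in range(100):                   # bisection on the water level
        theta = 0.5 * (lo + hi)
        if rate_at(theta) > rate:
            lo = theta                     # rate too high -> raise the level
        else:
            hi = theta
    distortion = np.sum(np.minimum(psd, theta)) * df
    return distortion, theta

# Example: unit-variance Gaussian source with a flat PSD over a 1 Hz band,
# for which the distortion-rate function is D(R) = 2^(-2R).
freqs = np.linspace(-0.5, 0.5, 2001)
psd = np.ones_like(freqs)
for R in (0.5, 1.0, 2.0):
    D, theta = reverse_waterfill(psd, freqs, R)
    print("R = %.1f bit/s -> D = %.4f (theory %.4f)" % (R, D, 2.0 ** (-2 * R)))
```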