Block coding for stationary Gaussian sources with memory under a square-error fidelity criterion
In this paper, we present a new version of the source coding theorem for block coding of stationary Gaussian sources with memory under a square-error distortion criterion. For both time-discrete and time-continuous Gaussian sources, the average square-error distortion of the optimum block source code of rate R > R(D) is shown to decrease at least exponentially in block length to D, where R(D) is the square-error rate-distortion function of the stationary Gaussian source with memory. In both cases, the exponent of convergence of the average distortion is explicitly derived.
Multiple-Description Coding by Dithered Delta-Sigma Quantization
We address the connection between the multiple-description (MD) problem and
Delta-Sigma quantization. The inherent redundancy due to oversampling in
Delta-Sigma quantization, and the simple linear-additive noise model resulting
from dithered lattice quantization, allow us to construct a symmetric and
time-invariant MD coding scheme. We show that the use of a noise shaping filter
makes it possible to trade off central distortion for side distortion.
Asymptotically as the dimension of the lattice vector quantizer and order of
the noise shaping filter approach infinity, the entropy rate of the dithered
Delta-Sigma quantization scheme approaches the symmetric two-channel MD
rate-distortion function for a memoryless Gaussian source and MSE fidelity
criterion, at any side-to-central distortion ratio and any resolution. In the
optimal scheme, the infinite-order noise shaping filter must be minimum phase
and have a piece-wise flat power spectrum with a single jump discontinuity. An
important advantage of the proposed design is that it is symmetric in rate and
distortion by construction, so the coding rates of the descriptions are
identical and there is therefore no need for source splitting.
Comment: Revised, restructured, significantly shortened, and minor typos have been fixed. Accepted for publication in the IEEE Transactions on Information Theory.
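A toy simulation can illustrate the mechanism described above: a scalar first-order dithered Delta-Sigma quantizer with two-times oversampling, where the even and odd output samples form the two descriptions. All parameter values (step size, shaping coefficient) are illustrative assumptions, and a scalar first-order loop is far from the infinite-dimensional lattice scheme the paper analyzes:

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered_ds_quantize(x, delta=0.25, a=0.5):
    """First-order dithered Delta-Sigma quantizer (error feedback).
    a is the noise-shaping coefficient: a = 0 gives plain dithered
    quantization; a > 0 pushes quantization noise toward high
    frequencies, trading central for side distortion."""
    y = np.empty_like(x)
    e_prev = 0.0
    for n in range(len(x)):
        u = x[n] - a * e_prev                    # feed back shaped error
        d = rng.uniform(-delta / 2, delta / 2)   # subtractive dither
        q = delta * np.round((u + d) / delta) - d
        e_prev = q - u                           # quantization error
        y[n] = q
    return y

# Memoryless Gaussian source, oversampled by 2 (sample-and-hold)
x = rng.standard_normal(4096)
x_os = np.repeat(x, 2)
y = dithered_ds_quantize(x_os)

desc0, desc1 = y[0::2], y[1::2]           # the two descriptions
central = 0.5 * (desc0 + desc1)           # central decoder: both received
side_mse = np.mean((desc0 - x) ** 2)      # side decoder: one description
central_mse = np.mean((central - x) ** 2)
print(side_mse, central_mse)
```

With shaping coefficient a = 0.5, each side decoder sees the full shaped noise, while the central decoder's averaging of adjacent samples attenuates the high-frequency (shaped) part of it, so the central MSE comes out below the side MSE; changing a moves the trade-off.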
Lossy joint source-channel coding in the finite blocklength regime
This paper finds new tight finite-blocklength bounds for the best achievable
lossy joint source-channel code rate, and demonstrates that joint
source-channel code design brings considerable performance advantage over a
separate one in the non-asymptotic regime. A joint source-channel code maps a
block of k source symbols onto a length-n channel codeword, and the
fidelity of reproduction at the receiver end is measured by the probability
\epsilon that the distortion exceeds a given threshold d. For memoryless
sources and channels, it is demonstrated that the parameters of the best joint
source-channel code must satisfy nC - kR(d) \approx \sqrt{nV + kV(d)}\, Q^{-1}(\epsilon), where C and V are the channel capacity and channel
dispersion, respectively; R(d) and V(d) are the source
rate-distortion and rate-dispersion functions; and Q is the standard Gaussian
complementary cdf. Symbol-by-symbol (uncoded) transmission is known to achieve
the Shannon limit when the source and channel satisfy a certain probabilistic
matching condition. In this paper we show that even when this condition is not
satisfied, symbol-by-symbol transmission is, in some cases, the best known
strategy in the non-asymptotic regime.
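Numerically, the approximate characterization nC - kR(d) \approx \sqrt{nV + kV(d)} Q^{-1}(\epsilon) can be solved for the largest supportable number of source symbols k at a given channel blocklength n. The parameter values below are purely illustrative assumptions, not figures from the paper:

```python
import math

def Q(x):
    """Standard Gaussian complementary cdf."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def Q_inv(eps, lo=-10.0, hi=10.0):
    """Invert Q by bisection (Q is strictly decreasing)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if Q(mid) > eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def max_source_blocklength(n, C, V, Rd, Vd, eps):
    """Largest k with n*C - k*Rd >= sqrt(n*V + k*Vd) * Q^{-1}(eps),
    the Gaussian approximation to the best joint source-channel code."""
    q = Q_inv(eps)
    k = 0
    while n * C - (k + 1) * Rd >= math.sqrt(n * V + (k + 1) * Vd) * q:
        k += 1
    return k

# Illustrative (hypothetical) parameters, in bits resp. squared bits:
n, C, V, Rd, Vd, eps = 1000, 0.5, 0.25, 0.25, 0.2, 1e-3
k = max_source_blocklength(n, C, V, Rd, Vd, eps)
print(k)
# The asymptotic limit without dispersion terms would be
# k = n*C/Rd = 2000 source symbols; dispersion cuts this down.
```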
Design of a digital compression technique for shuttle television
The performance and hardware complexity of data compression algorithms applicable to color television signals were studied to assess the feasibility of digital compression techniques for shuttle communications applications. For return-link communications, it is shown that a nonadaptive two-dimensional DPCM technique compresses the bandwidth of field-sequential color TV to about 13 Mbps and requires less than 60 watts of secondary power. For forward-link communications, a facsimile coding technique is recommended which provides high-resolution slow-scan television on a 144 kbps channel. The onboard decoder requires about 19 watts of secondary power.
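As a rough illustration of the return-link approach, the following sketches a non-adaptive two-dimensional DPCM loop on a synthetic image; the predictor, quantizer step, and all constants are hypothetical stand-ins, not the actual shuttle design:

```python
import numpy as np

def dpcm_2d(img, levels=16, vmax=255.0):
    """Non-adaptive 2-D DPCM sketch: predict each pixel from the
    average of its left and upper (already reconstructed) neighbours,
    uniformly quantize the prediction error, and reconstruct in a
    closed loop from the quantized residuals only."""
    h, w = img.shape
    step = 2.0 * vmax / levels
    rec = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            left = rec[i, j - 1] if j > 0 else 128.0
            up = rec[i - 1, j] if i > 0 else 128.0
            pred = 0.5 * (left + up)
            e = img[i, j] - pred
            eq = step * np.round(e / step)       # quantized residual
            rec[i, j] = np.clip(pred + eq, 0.0, vmax)
    return rec

rng = np.random.default_rng(1)
# Smooth synthetic "image": a ramp plus mild noise
x = np.add.outer(np.linspace(0, 200, 64), np.linspace(0, 40, 64))
x += rng.normal(0, 2.0, x.shape)
y = dpcm_2d(x)
mse = np.mean((x - y) ** 2)
print(mse)
```

Because the predictor runs on the reconstructed pixels (closed loop), the per-pixel error is bounded by half the quantizer step and does not accumulate across the image.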
Joint source-channel coding with feedback
This paper quantifies the fundamental limits of variable-length transmission
of a general (possibly analog) source over a memoryless channel with noiseless
feedback, under a distortion constraint. We consider excess distortion, average
distortion and guaranteed distortion (d-semifaithful codes). In contrast to
the asymptotic fundamental limit, a general conclusion is that allowing
variable-length codes and feedback leads to a sizable improvement in the
fundamental delay-distortion tradeoff. In addition, we investigate the minimum
energy required to reproduce source samples with a given fidelity after
transmission over a memoryless Gaussian channel, and we show that the required
minimum energy is reduced with feedback and an average (rather than maximal)
power constraint.
Comment: To appear in the IEEE Transactions on Information Theory.
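For context on the energy question, the classical asymptotic baseline without feedback combines the minimum energy per information bit over an AWGN channel, N0 ln 2, with the Gaussian rate-distortion function; the paper's contribution is the non-asymptotic behavior and the reduction achievable with feedback, which this back-of-envelope sketch does not capture:

```python
import math

def min_energy_asymptotic(k, sigma2, d, N0):
    """Asymptotic minimum energy to convey k samples of an i.i.d.
    N(0, sigma2) source at MSE distortion d over an AWGN channel with
    noise spectral density N0 (no feedback, blocklength -> infinity):
    E ~= k * R(d) * N0 * ln 2, using the classical minimum energy per
    bit N0*ln2 and the Gaussian RDF R(d) = 0.5*log2(sigma2/d)."""
    Rd = 0.5 * math.log2(sigma2 / d)   # bits per source sample
    return k * Rd * N0 * math.log(2)

# 100 samples at distortion sigma2/4 need R(d) = 1 bit/sample
E = min_energy_asymptotic(k=100, sigma2=1.0, d=0.25, N0=1.0)
print(E)
```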
Study of information transfer optimization for communication satellites
The results of a study of source coding, modulation/channel coding, and system techniques for teleconferencing over high-data-rate digital communication satellite links are presented. Simultaneous transmission of video, voice, data, and/or graphics is possible in various teleconferencing modes, and one-way, two-way, and broadcast modes are considered. A satellite channel model including filters, a limiter, a TWT, detectors, and an optimized equalizer is treated in detail. A complete analysis is presented for one set of system assumptions which excludes nonlinear gain and phase distortion in the TWT. Modulation, demodulation, and channel coding are considered, based on an additive white Gaussian noise channel model which is an idealization of an equalized channel. Source coding, with emphasis on video data compression, is reviewed, and the experimental facility used to test promising techniques is fully described.
Improved Upper Bounds to the Causal Quadratic Rate-Distortion Function for Gaussian Stationary Sources
We improve the existing achievable rate regions for causal and for zero-delay
source coding of stationary Gaussian sources under an average mean squared
error (MSE) distortion measure. To begin with, we find a closed-form expression
for the information-theoretic causal rate-distortion function (RDF) under such
distortion measure, denoted by Rc^{it}(D), for first-order Gauss-Markov
processes. Rc^{it}(D) is a lower bound to the optimal performance theoretically
attainable (OPTA) by any causal source code, namely Rc^{op}(D). We show that,
for Gaussian sources, the latter can also be upper bounded as Rc^{op}(D)\leq
Rc^{it}(D) + 0.5 log_{2}(2\pi e) bits/sample. In order to analyze
Rc^{it}(D) for arbitrary zero-mean Gaussian stationary sources, we
introduce \bar{Rc^{it}}(D), the information-theoretic causal RDF when the
reconstruction error is jointly stationary with the source. Based upon
\bar{Rc^{it}}(D), we derive three closed-form upper bounds to the additive rate
loss defined as \bar{Rc^{it}}(D) - R(D), where R(D) denotes Shannon's RDF. Two
of these bounds are strictly smaller than 0.5 bits/sample at all rates. These
bounds differ from one another in their tightness and ease of evaluation; the
tighter the bound, the more involved its evaluation. We then show that, for any
source spectral density and any positive distortion D\leq \sigma_{x}^{2},
\bar{Rc^{it}}(D) can be realized by an AWGN channel surrounded by a unique set
of causal pre-, post-, and feedback filters. We show that finding such filters
constitutes a convex optimization problem. In order to solve the latter, we
propose an iterative optimization procedure that yields the optimal filters and
is guaranteed to converge to \bar{Rc^{it}}(D). Finally, by establishing a
connection to feedback quantization we design a causal and a zero-delay coding
scheme which, for Gaussian sources, achieves...
Comment: 47 pages, revised version submitted to IEEE Trans. Information Theory.
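As a point of reference for the bounds above, Shannon's (non-causal) RDF R(D) for a Gaussian stationary source follows from reverse water-filling over its power spectral density; the sketch below evaluates it numerically for a first-order Gauss-Markov source (the parameter values are illustrative):

```python
import numpy as np

def shannon_rdf_gauss_markov(a=0.9, sigma2=1.0, D=0.1, nfreq=4096):
    """Shannon's RDF R(D), in bits per sample, for a first-order
    Gauss-Markov source x_k = a*x_{k-1} + w_k with Var(w) = sigma2,
    computed by reverse water-filling over the power spectral density
    (the standard construction for Gaussian stationary sources)."""
    w = np.linspace(0, np.pi, nfreq, endpoint=False)
    S = sigma2 / np.abs(1 - a * np.exp(1j * w)) ** 2   # source PSD
    # Bisect on the water level theta so the distortion equals D:
    # distortion = mean(min(theta, S)), increasing in theta.
    lo, hi = 1e-9, S.max()
    for _ in range(200):
        theta = 0.5 * (lo + hi)
        if np.mean(np.minimum(theta, S)) < D:
            lo = theta
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    # Rate: integrate 0.5*log2(S/theta) where S exceeds the water level.
    return np.mean(0.5 * np.log2(np.maximum(S / theta, 1.0)))

rate = shannon_rdf_gauss_markov()
print(rate)
```

For a = 0.9, sigma2 = 1 and D = 0.1, the water level sits below the PSD minimum, so the distortion is flat across frequency and R(D) reduces to 0.5 log2(1/D) ≈ 1.66 bits/sample.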