Optimal Multiresolution Quantization for Broadcast Channels with Random Index Assignment
Shannon's classical separation result holds only in the limit of infinite source code dimension and infinite channel code block length. Moreover, Shannon theory does not address the design of good source codes when the probability of channel error is nonzero, which is inevitable with finite-length channel codes. Thus, for practical systems with finite-dimension source codes and finite-block-length channel codes, a joint source-channel code design can improve performance as well as complexity and delay.
Consider a multicast system over a broadcast channel, where different end users typically have different capacities. To support such user or capacity diversity, it is desirable to encode the source to be broadcast into a scalable bit stream from which multiple resolutions of the source can be reconstructed progressively from left to right. This source coding technique is called multiresolution source coding. In wireless communications, joint source-channel coding (JSCC) has attracted wide attention owing to its adaptivity to time-varying channels. However, there are few works on JSCC for network multicast, and in particular on optimal source coding over broadcast channels.
In this work, we design and analyze optimal multiresolution vector quantization (MRVQ) in conjunction with the broadcast channel over which the coded scalable bit stream is transmitted. By adopting random index assignment (RIA) to link MRVQ for the source with superposition coding for the broadcast channel, we establish a closed-form formula for the end-to-end distortion (EED) of the tandem system of MRVQ and a broadcast channel. From this formula we analyze the intrinsic structure of the EED and derive two necessary conditions for optimal MRVQ over broadcast channels with RIA. Based on these two conditions, we propose a greedy iterative algorithm that designs MRVQ jointly with the channel, depending on the channel only through several types of average channel error probabilities rather than complete channel knowledge. Experiments show that MRVQ designed by the proposed algorithm significantly outperforms conventional MRVQ designed without channel information.
The closed-form formula for the weighted EED under RIA also makes the performance analysis computationally feasible. Compared with MRVQ design for a fixed index assignment, random index assignment significantly reduces the computational complexity of the quantizer design. In addition, simulations indicate that the proposed algorithm is more robust against channel mismatch than MRVQ designed with a fixed index assignment, precisely because it uses only average channel information. We therefore conclude that the proposed algorithm is well suited both to wireless communications and to applications where complete channel knowledge is hard to obtain.
Furthermore, we propose two novel algorithms for MRVQ over broadcast channels: one optimizes the two quantizers at the two layers alternately and iteratively, while the other applies under the constraint that each encoding cell is convex and contains its reconstruction point. Finally, we analyze the asymptotic performance of the weighted EED for the optimal joint MRVQ. The asymptotic result provides a theoretically achievable quantizer performance level and sheds light on the design of optimal MRVQ over broadcast channels from a different perspective.
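The channel-aware design principle the abstract describes, reconstruction points that account for index-crossover probabilities, can be illustrated with a toy channel-optimized scalar quantizer in Python. This is a minimal sketch with hypothetical names, not the paper's MRVQ algorithm; it assumes a memoryless index-error matrix `P`:

```python
# Toy channel-optimized reconstruction levels for a scalar quantizer.
# All names are hypothetical; the paper's MRVQ design is more involved.

def channel_optimized_centroids(samples, encode, P):
    """Reconstruction level c_j that minimizes E[(x - c_j)^2] when the
    transmitted index i is received as j with probability P[i][j]."""
    K = len(P)
    sums = [0.0] * K
    counts = [0] * K
    for x in samples:
        i = encode(x)
        sums[i] += x
        counts[i] += 1
    centroids = []
    for j in range(K):
        num = sum(P[i][j] * sums[i] for i in range(K))
        den = sum(P[i][j] * counts[i] for i in range(K))
        centroids.append(num / den if den else 0.0)
    return centroids

# Two-cell quantizer on the sign of x: a noiseless channel yields the
# plain cell means, while a 10% symmetric index error pulls both
# reconstruction points toward the overall mean.
samples = [-1.2, -0.8, -1.0, 0.9, 1.1, 1.0]
encode = lambda x: 0 if x < 0 else 1
print(channel_optimized_centroids(samples, encode, [[1, 0], [0, 1]]))
print(channel_optimized_centroids(samples, encode, [[0.9, 0.1], [0.1, 0.9]]))
```

The second call shrinks both levels toward zero, the same qualitative effect a channel-matched MRVQ exploits: when indices can be confused, hedged reconstruction points lower the expected distortion.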
n-Channel Asymmetric Multiple-Description Lattice Vector Quantization
We present analytical expressions for optimal entropy-constrained multiple-description lattice vector quantizers which, under high-resolution assumptions, minimize the expected distortion for given packet-loss probabilities. We consider the asymmetric case, where packet-loss probabilities and side entropies are allowed to be unequal, and find optimal quantizers for any number of descriptions in any dimension. We show that the normalized second moments of the side quantizers are given by that of a sphere of the corresponding dimension, independent of the choice of lattices. Furthermore, we show that the optimal bit distribution among the descriptions is not unique; in fact, within certain limits, bits can be arbitrarily distributed.
Comment: To appear in the proceedings of the 2005 IEEE International Symposium on Information Theory, Adelaide, Australia, September 4-9, 2005
Optimal Design of Multiple Description Lattice Vector Quantizers
In the design of multiple description lattice vector quantizers (MDLVQ),
index assignment plays a critical role. In addition, one also needs to choose
the Voronoi cell size of the central lattice v, the sublattice index N, and the
number of side descriptions K to minimize the expected MDLVQ distortion, given
the total entropy rate of all side descriptions Rt and description loss
probability p. In this paper we propose a linear-time MDLVQ index assignment
algorithm for any K >= 2 balanced descriptions in any dimension, based on a
new construction of a so-called K-fraction lattice. The algorithm is greedy in
nature but is proven to be asymptotically (N -> infinity) optimal for any K >=
2 balanced descriptions in any dimension, given Rt and p. The result is
stronger when K = 2: the optimality holds for finite N as well, under some mild
conditions. For K > 2, a local adjustment algorithm is developed to augment the
greedy index assignment, and is conjectured to be optimal for finite N.
Our algorithmic study also leads to better understanding of v, N and K in
optimal MDLVQ design. For K = 2 we derive, for the first time, a non-asymptotic closed-form expression for the expected distortion of optimal MDLVQ in terms of p, Rt and N. For K > 2, we tighten the current asymptotic formula for the expected distortion, relating the optimal values of N and K to p and Rt more precisely.
Comment: Submitted to IEEE Trans. on Information Theory, Sep 2006 (30 pages, 7 figures)
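To make the index assignment problem concrete, here is a toy one-dimensional, two-description labeling in Python. It greedily pairs each central lattice point with a distinct pair of sublattice points of small side distortion; all names are hypothetical, and this greedy routine stands in for, rather than reproduces, the paper's K-fraction construction:

```python
# Toy 1-D, two-description index assignment: central lattice Z, sublattice
# N*Z.  Each central point c gets a distinct pair (s1, s2) of sublattice
# points minimizing the side distortion (c - s1)^2 + (c - s2)^2.

def greedy_labels(N, span=3):
    sublattice = [N * k for k in range(-span, span + 1)]
    pairs = [(a, b) for a in sublattice for b in sublattice]
    labels, used = {}, set()
    for c in range(-N, N + 1):           # label a window of central points
        best = min((p for p in pairs if p not in used),
                   key=lambda p: (c - p[0]) ** 2 + (c - p[1]) ** 2)
        labels[c] = best
        used.add(best)                   # injectivity: each pair used once
    return labels

labels = greedy_labels(2)
# Receiving one description alone recovers a nearby sublattice point;
# receiving both recovers the midpoint (s1 + s2) / 2, which stays near c.
```

The two design tensions the abstract alludes to are visible even here: pairs with small spread give good side (one-description) quality, while distinct pairs per central point preserve central (two-description) resolution.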
Multiple Description Quantization via Gram-Schmidt Orthogonalization
The multiple description (MD) problem has received considerable attention as
a model of information transmission over unreliable channels. A general
framework for designing efficient multiple description quantization schemes is
proposed in this paper. We provide a systematic treatment of the El Gamal-Cover
(EGC) achievable MD rate-distortion region, and show that any point in the EGC
region can be achieved via a successive quantization scheme along with
quantization splitting. For the quadratic Gaussian case, the proposed scheme
has an intrinsic connection with the Gram-Schmidt orthogonalization, which
implies that the whole Gaussian MD rate-distortion region is achievable with a
sequential dithered lattice-based quantization scheme as the dimension of the
(optimal) lattice quantizers becomes large. Moreover, this scheme is shown to
be universal for all i.i.d. smooth sources with performance no worse than that
for an i.i.d. Gaussian source with the same variance and asymptotically optimal
at high resolution. A class of low-complexity MD scalar quantizers within the proposed general framework is also constructed and illustrated geometrically; its performance is analyzed in the high-resolution regime and exhibits a noticeable improvement over existing MD scalar quantization schemes.
Comment: 48 pages; submitted to IEEE Transactions on Information Theory
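The Gram-Schmidt orthogonalization invoked in the title is the standard linear-algebra procedure; a minimal pure-Python version is below (illustrative only, since the paper's contribution is the quantization scheme built on top of it, not the routine itself):

```python
# Classical Gram-Schmidt orthogonalization in pure Python.

def gram_schmidt(vectors):
    """Return an orthogonal (not normalized) basis spanning `vectors`."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:                  # subtract projections onto basis
            coef = dot(w, u) / dot(u, u)
            w = [wi - coef * ui for wi, ui in zip(w, u)]
        if dot(w, w) > 1e-12:            # drop (near-)dependent vectors
            basis.append(w)
    return basis

basis = gram_schmidt([[1.0, 1.0], [1.0, 0.0]])
# basis[1] is [0.5, -0.5], orthogonal to basis[0] = [1.0, 1.0]
```

The successive-projection structure is what connects to sequential quantization: each new direction is what remains after the contribution of the earlier ones is removed, mirroring how each description refines the previous ones.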
Robust vector quantization for noisy channels
The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate; about 4.5 dB of gain over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system on a noisy channel, at a small cost in clean-channel performance.
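A pairwise-swap search of the kind used for binary word assignment can be sketched as follows. This is an illustrative stand-in, not the paper's exact algorithms; it assumes b-bit indices sent over a binary symmetric channel with independent bit errors at rate `p`:

```python
# Sketch: index assignment by greedy pairwise swapping of binary words.

def expected_distortion(cb, words, p, b):
    """Expected squared error when codeword i is sent as binary word
    words[i] and every received word is decoded to its codeword."""
    inv = {w: i for i, w in enumerate(words)}
    D = 0.0
    for i in range(len(cb)):
        for e in range(1 << b):          # every bit-error pattern
            d = bin(e).count("1")
            pe = (p ** d) * ((1 - p) ** (b - d))
            j = inv[words[i] ^ e]
            D += pe * (cb[i] - cb[j]) ** 2 / len(cb)
    return D

def binary_switch(cb, p, b, words):
    """Greedily accept any pairwise swap of binary words that lowers the
    expected distortion; stop at a local optimum."""
    words = list(words)
    improved = True
    while improved:
        improved = False
        base = expected_distortion(cb, words, p, b)
        for i in range(len(words)):
            for j in range(i + 1, len(words)):
                trial = list(words)
                trial[i], trial[j] = trial[j], trial[i]
                if expected_distortion(cb, trial, p, b) < base - 1e-12:
                    words = trial
                    base = expected_distortion(cb, words, p, b)
                    improved = True
    return words

cb = [0.0, 1.0, 2.0, 3.0]                # uniform scalar codebook
bad = [0, 3, 1, 2]                       # deliberately poor assignment
good = binary_switch(cb, 0.05, 2, bad)   # improves on `bad`
```

The key point the abstract makes holds here too: the rate is untouched (the same set of binary words is used), only their mapping to codewords changes, so single-bit channel errors land on nearby reconstruction values.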
A Mixed-ADC Receiver Architecture for Massive MIMO Systems
Motivated by the demand for energy-efficient communication solutions in the
next generation cellular network, a mixed-ADC receiver architecture for massive
multiple input multiple output (MIMO) systems is proposed, which differs from
previous works in that herein one-bit analog-to-digital converters (ADCs)
partially replace the conventionally assumed high-resolution ADCs. The
information-theoretic tool of generalized mutual information (GMI) is exploited
to analyze the achievable data rates of the proposed system architecture and an
array of analytical results of engineering interest is obtained. For
deterministic single input multiple output (SIMO) channels, a closed-form
expression of the GMI is derived, based on which the linear combiner is
optimized. Then, the asymptotic behaviors of the GMI in both low and high SNR
regimes are explored, and the analytical results suggest a plausible ADC
assignment scheme. Finally, the analytical framework is applied to the
multi-user access scenario, and the corresponding numerical results demonstrate
that the mixed system architecture with a relatively small number of
high-resolution ADCs is able to achieve a large fraction of the channel
capacity without output quantization.
Comment: 5 pages, 5 figures, to appear in IEEE Information Theory Workshop (ITW2015)
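The mixed-ADC idea, a few full-resolution ADCs alongside one-bit ADCs, can be sketched for a toy real-valued SIMO link. Names, BPSK signalling, and the combiner are all hypothetical simplifications; this illustrates the receiver architecture, not the paper's GMI analysis:

```python
import random

# Toy real-valued SIMO receiver with a mixed-ADC front end: the first
# n_highres antennas keep full resolution, the rest see one-bit ADCs.

def mixed_adc_receive(h, x, noise_std, n_highres):
    y = []
    for k, hk in enumerate(h):
        r = hk * x + random.gauss(0.0, noise_std)
        y.append(r if k < n_highres else (1.0 if r >= 0 else -1.0))
    return y

def detect_bpsk(h, y):
    # Maximal-ratio-style combining; crude for one-bit outputs but
    # adequate for the sketch.
    s = sum(hk * yk for hk, yk in zip(h, y))
    return 1.0 if s >= 0 else -1.0

h = [1.0, 0.8, 0.6, 0.4]                 # per-antenna channel gains
y = mixed_adc_receive(h, -1.0, 0.0, 2)   # noiseless for determinism
assert detect_bpsk(h, y) == -1.0
```

Even in this caricature, the one-bit branches contribute only sign information while the two high-resolution branches carry amplitude, which is the trade-off the GMI analysis in the paper quantifies.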