Convergence-Optimal Quantizer Design of Distributed Contraction-based Iterative Algorithms with Quantized Message Passing
In this paper, we study the convergence behavior of distributed iterative
algorithms with quantized message passing. We first introduce general iterative
function evaluation algorithms for solving fixed-point problems in a distributed
manner. We then analyze the convergence of the distributed algorithms, e.g., the
Jacobi and Gauss-Seidel schemes, under quantized message passing. Based on
the closed-form convergence performance derived, we propose two quantizer
designs, namely the time invariant convergence-optimal quantizer (TICOQ) and
the time varying convergence-optimal quantizer (TVCOQ), to minimize the effect
of the quantization error on the convergence. We also study the tradeoff
between the convergence error and message passing overhead for both TICOQ and
TVCOQ. As an example, we apply the TICOQ and TVCOQ designs to the iterative
waterfilling algorithm for the MIMO interference game.

Comment: 17 pages, 9 figures; accepted by IEEE Transactions on Signal Processing.
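The core of such algorithms is a contraction map iterated with quantized messages. A minimal sketch, assuming a scalar contraction and a uniform quantizer (the map `T`, step `delta`, and iteration count are all invented for illustration, not the paper's setup):

```python
import numpy as np

def jacobi_fixed_point(T, x0, quantize, iters=50):
    """Jacobi-style iteration x_{k+1} = Q(T(x_k)): each round applies the
    fixed-point map, then quantizes the message before the next update."""
    x = x0
    for _ in range(iters):
        x = quantize(T(x))
    return x

# Toy example: contraction T(x) = 0.5*x + 1 with fixed point x* = 2,
# messages quantized onto a uniform grid of step delta.
delta = 1e-3
quantize = lambda v: np.round(v / delta) * delta
T = lambda x: 0.5 * x + 1.0
x_star = jacobi_fixed_point(T, np.array([0.0]), quantize)
# the iterate lands within O(delta / (1 - 0.5)) of the true fixed point 2.0
```

With a contraction modulus of 0.5, the quantization error of each round is damped geometrically, which is the behavior the convergence-optimal quantizer designs exploit.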
Deep Multiple Description Coding by Learning Scalar Quantization
In this paper, we propose a deep multiple description coding framework, whose
quantizers are adaptively learned via the minimization of multiple description
compressive loss. Firstly, our framework is built upon auto-encoder networks,
which consist of a multiple description multi-scale dilated encoder network and
multiple description decoder networks. Secondly, two entropy estimation
networks are learned to estimate the amount of information in the quantized
tensors, which further supervises the learning of the multiple description
encoder network to represent the input image in fine detail. Thirdly, a pair of
scalar quantizers accompanied by two importance-indicator maps is automatically
learned in an end-to-end self-supervised way. Finally, multiple description
structural dissimilarity distance loss is imposed on multiple description
decoded images in pixel domain for diversified multiple description generations
rather than on feature tensors in feature domain, in addition to multiple
description reconstruction loss. Experiments on two commonly used datasets
verify that our method outperforms several state-of-the-art multiple
description coding approaches in terms of coding efficiency.

Comment: 8 pages, 4 figures (DCC 2019: Data Compression Conference). Testing
datasets for "Deep Optimized Multiple Description Image Coding via Scalar
Quantization Learning" can be found at
https://github.com/mdcnn/Deep-Multiple-Description-Codin
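At inference time a scalar quantizer reduces to rounding against a step size; during training, hard rounding is often replaced by a differentiable surrogate. A minimal sketch (the uniform step and the additive-noise surrogate are common assumptions, not the paper's learned quantizers):

```python
import numpy as np

def scalar_quantize(x, step):
    """Hard uniform scalar quantization, as used at inference time."""
    return np.round(x / step) * step

def quantize_train(x, step, rng=None):
    """Training surrogate: additive uniform noise in [-step/2, step/2],
    a standard differentiable stand-in for hard rounding (an assumption
    here; the paper learns its quantizers end-to-end)."""
    rng = rng or np.random.default_rng(0)
    return x + rng.uniform(-step / 2, step / 2, size=x.shape)

x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
xq = scalar_quantize(x, step=0.3)  # -> [-0.9, -0.6, 0.0, 0.6, 0.9]
```

An importance-indicator map, as described above, would scale `step` per tensor element so that salient regions are quantized more finely.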
Robust Lattice Alignment for K-user MIMO Interference Channels with Imperfect Channel Knowledge
In this paper, we consider a robust lattice alignment design for K-user
quasi-static MIMO interference channels with imperfect channel knowledge. With
random Gaussian inputs, the conventional interference alignment (IA) method
suffers from a feasibility problem when the channel is quasi-static. On the other hand,
structured lattices can create structured interference as opposed to the random
interference caused by random Gaussian symbols. The structured interference
space can be exploited to transmit the desired signals over the gaps. However,
the existing alignment methods on the lattice codes for quasi-static channels
either require infinite SNR or symmetric interference channel coefficients.
Furthermore, perfect channel state information (CSI) is required for these
alignment methods, which is difficult to achieve in practice. In this paper, we
propose a robust lattice alignment method for quasi-static MIMO interference
channels with imperfect CSI in all SNR regimes, and a two-stage decoding
algorithm to decode the desired signal from the structured interference space.
We derive the achievable data rate based on the proposed robust lattice
alignment method, where the design of the precoders, decorrelators, scaling
coefficients and interference quantization coefficients is jointly formulated
as a mixed integer and continuous optimization problem. The effect of imperfect
CSI is also accommodated in the optimization formulation, and hence the derived
solution is robust to imperfect CSI. We also design a low-complexity iterative
optimization algorithm for our robust lattice alignment method by using the
existing iterative IA algorithm that was designed for the conventional IA
method. Numerical results verify the advantages of the proposed robust lattice
alignment method.
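The interference quantization coefficients in the joint optimization above are integers. In the simplest possible illustration (a toy sketch, not the paper's formulation; `scale` stands in for the jointly optimized scaling coefficient), cross-channel gains are scaled and rounded, and the rounding residual is the mismatch the robust design must absorb:

```python
import numpy as np

def quantize_interference(h, scale):
    """Round scaled cross-channel gains to the nearest integers (the
    integer coefficients), and report the residual error left over."""
    a = np.round(scale * h).astype(int)
    residual = scale * h - a  # mismatch the robust design must tolerate
    return a, residual

h = np.array([0.48, 1.27, -0.76])   # hypothetical channel gains
a, r = quantize_interference(h, scale=2.0)
# a = [1, 3, -2]; the residuals r quantify the structured-interference leakage
```

Imperfect CSI perturbs `h`, which enlarges the residual; accounting for that in the optimization is what makes the alignment robust.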
Multiple Description Quantization via Gram-Schmidt Orthogonalization
The multiple description (MD) problem has received considerable attention as
a model of information transmission over unreliable channels. A general
framework for designing efficient multiple description quantization schemes is
proposed in this paper. We provide a systematic treatment of the El Gamal-Cover
(EGC) achievable MD rate-distortion region, and show that any point in the EGC
region can be achieved via a successive quantization scheme along with
quantization splitting. For the quadratic Gaussian case, the proposed scheme
has an intrinsic connection with the Gram-Schmidt orthogonalization, which
implies that the whole Gaussian MD rate-distortion region is achievable with a
sequential dithered lattice-based quantization scheme as the dimension of the
(optimal) lattice quantizers becomes large. Moreover, this scheme is shown to
be universal for all i.i.d. smooth sources with performance no worse than that
for an i.i.d. Gaussian source with the same variance and asymptotically optimal
at high resolution. A class of low-complexity MD scalar quantizers in the
proposed general framework is also constructed and illustrated geometrically;
the performance is analyzed in the high-resolution regime and exhibits a
noticeable improvement over existing MD scalar quantization schemes.

Comment: 48 pages; submitted to IEEE Transactions on Information Theory.
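The successive quantization scheme above is tied to Gram-Schmidt orthogonalization; as a reminder of that building block (a generic routine, not the paper's quantization scheme), the classical procedure orthonormalizes columns one at a time by subtracting projections onto the previously produced directions:

```python
import numpy as np

def gram_schmidt(V):
    """Classical Gram-Schmidt: return Q with orthonormal columns spanning
    the same subspace as the columns of V (assumed linearly independent)."""
    Q = np.zeros_like(V, dtype=float)
    for j in range(V.shape[1]):
        v = V[:, j].astype(float)
        for i in range(j):
            # subtract the projection onto each earlier orthonormal direction
            v = v - (Q[:, i] @ V[:, j]) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

V = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Q = gram_schmidt(V)
# Q.T @ Q is (numerically) the identity
```

In the successive quantization view, each stage plays the role of one such projection step: it encodes only the component of the source not already captured by the earlier descriptions.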