Parallel vs. Sequential Belief Propagation Decoding of LDPC Codes over GF(q) and Markov Sources
A sequential updating scheme (SUS) for belief propagation (BP) decoding of
LDPC codes over Galois fields, GF(q), and correlated Markov sources is
proposed and compared with the standard parallel updating scheme (PUS). A
thorough experimental study of various transmission settings indicates that the
convergence rate, in iterations, of the BP algorithm (and consequently its
complexity) for the SUS is about one half of that for the PUS, independent of
the finite field size q. Moreover, this 1/2 factor appears regardless of the
correlations of the source and of the channel's noise model, while the
error-correction performance remains unchanged. These results suggest the
'universality' of the one-half convergence speed-up of SUS decoding.
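The scheduling difference can be made concrete with a minimal sketch, simplified here to a binary LDPC code (the paper treats GF(q) symbols and correlated Markov sources); the function names, message representation, and stopping rule below are our own illustration, not the paper's implementation.

import numpy as np

def check_msgs(H, V):
    # Check-to-variable messages from the current variable-to-check messages V (sum-product).
    C = np.zeros_like(V, dtype=float)
    for c in range(H.shape[0]):
        nbrs = np.flatnonzero(H[c])
        for j, v in enumerate(nbrs):
            others = np.delete(nbrs, j)
            prod = np.prod(np.tanh(V[c, others] / 2.0))
            C[c, v] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    return C

def var_msgs(H, C, llr_ch):
    # Variable-to-check messages: channel LLR plus all other incoming check messages.
    total = llr_ch + C.sum(axis=0)
    return (total[None, :] - C) * H

def bp_decode(H, llr_ch, n_iter=50, schedule="parallel"):
    m, n = H.shape
    C = np.zeros((m, n))
    hard = (llr_ch < 0).astype(int)
    for _ in range(n_iter):
        if schedule == "parallel":
            # PUS (flooding): every check node sees messages from the previous iteration.
            C = check_msgs(H, var_msgs(H, C, llr_ch))
        else:
            # SUS (layered): check nodes are swept one at a time, each using the
            # freshest messages produced earlier in the same sweep.
            for c in range(m):
                V_row = var_msgs(H, C, llr_ch)[c:c + 1]
                C[c:c + 1] = check_msgs(H[c:c + 1], V_row)
        llr_post = llr_ch + C.sum(axis=0)
        hard = (llr_post < 0).astype(int)
        if not np.any((H @ hard) % 2):   # stop once all parity checks are satisfied
            break
    return hard

Running both schedules on the same parity-check matrix and channel LLRs and counting the iterations needed before all checks are satisfied is how the roughly two-to-one convergence gap reported above would be measured.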
HARQ Buffer Management: An Information-Theoretic View
A key practical constraint on the design of hybrid automatic repeat request
(HARQ) schemes is the size of the on-chip buffer available at the
receiver to store previously received packets. In fact, in modern wireless
standards such as LTE and LTE-A, the HARQ buffer size is one of the main
drivers of the modem area and power consumption. This has recently highlighted
the importance of HARQ buffer management, that is, of the use of buffer-aware
transmission schemes and of advanced compression policies for the storage of
received data. This work investigates HARQ buffer management by leveraging
information-theoretic achievability arguments based on random coding.
Specifically, standard HARQ schemes, namely Type-I, Chase Combining and
Incremental Redundancy, are first studied under the assumption of a
finite-capacity HARQ buffer by considering both coded modulation, via Gaussian
signaling, and Bit Interleaved Coded Modulation (BICM). The analysis sheds
light on the impact of different compression strategies, namely the
conventional compression of log-likelihood ratios and the direct digitization of
baseband signals, on the throughput. Then, coding strategies based on layered
modulation and optimized coding blocklength are investigated, highlighting the
benefits of HARQ buffer-aware transmission schemes. The optimization of
baseband compression for multiple-antenna links is also studied, demonstrating
the optimality of a transform coding approach.
Comment: Submitted to IEEE International Symposium on Information Theory (ISIT) 2015. 29 pages, 12 figures; submitted for journal publication.
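As a rough illustration of the buffer constraint, the sketch below runs Chase combining on an uncoded BPSK packet while storing the accumulated log-likelihood ratios at a fixed resolution; the quantizer, SNR, packet length, and buffer width are assumptions made for the example, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

def quantize(llr, bits, max_llr=8.0):
    # Uniform quantization of the stored LLRs, emulating a finite HARQ buffer.
    levels = 2 ** bits
    step = 2.0 * max_llr / levels
    q = np.clip(np.round(llr / step), -(levels // 2), levels // 2 - 1)
    return q * step

def chase_combining(info_bits, snr_db, max_tx=4, buf_bits=4):
    # Chase combining: the same packet is retransmitted and the (quantized)
    # LLRs of all received copies are accumulated before each decoding attempt.
    snr = 10.0 ** (snr_db / 10.0)
    x = 1.0 - 2.0 * info_bits                 # BPSK mapping 0 -> +1, 1 -> -1
    stored = np.zeros_like(x, dtype=float)    # combined LLRs kept in the HARQ buffer
    for tx in range(1, max_tx + 1):
        y = np.sqrt(snr) * x + rng.standard_normal(x.size)
        llr_new = 2.0 * np.sqrt(snr) * y      # channel LLRs of this transmission
        stored = quantize(stored + llr_new, buf_bits)
        decoded = (stored < 0).astype(int)
        if np.array_equal(decoded, info_bits):  # stand-in for a CRC check
            return tx                           # transmissions needed for success
    return max_tx                               # still failing after max_tx rounds

info_bits = rng.integers(0, 2, 1000)
print("transmissions used:", chase_combining(info_bits, snr_db=3.0))

Sweeping buf_bits in such a simulation is one simple way to see the throughput cost of coarser buffer compression that the analysis above quantifies information-theoretically.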
Lossy Compression of Exponential and Laplacian Sources using Expansion Coding
A general method of source coding via expansion is proposed in this paper,
which reduces the problem of compressing an analog (continuous-valued) source
to a set of much simpler problems of compressing discrete sources.
Specifically, the focus is on lossy compression of exponential and Laplacian
sources, which are expanded over a finite alphabet prior to being quantized.
Due to the decomposability property of such sources, the resulting random
variables after expansion are independent and
discrete. Thus, each of the expanded levels corresponds to an independent
discrete source coding problem, and the original problem is reduced to coding
over these parallel sources with a total distortion constraint. Any feasible
solution to the optimization problem is an achievable rate distortion pair of
the original continuous-valued source compression problem. Although finding the
solution to this optimization problem at every distortion is hard, we show that
our expansion coding scheme provides a good solution in the low-distortion
regime. Further, by adopting low-complexity codes designed for discrete source
coding, the total coding complexity can be kept tractable in practice.
Comment: 8 pages, 3 figures.
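The decomposability property the scheme relies on can be checked numerically: the binary digits of an exponential random variable with rate λ are independent Bernoulli variables with P(B_k = 1) = 1/(1 + e^{λ 2^k}). The short sketch below compares empirical digit frequencies against that formula; the rate and the range of retained levels are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(1)
lam = 1.0                                  # rate of the exponential source (assumed)
x = rng.exponential(1.0 / lam, size=200_000)

for k in range(-4, 4):                     # bit levels kept in the expansion
    bits = (np.floor(x / 2.0 ** k) % 2).astype(int)     # k-th binary digit of each sample
    p_emp = bits.mean()
    p_theory = 1.0 / (1.0 + np.exp(lam * 2.0 ** k))     # independent Bernoulli level
    print(f"level {k:+d}: empirical {p_emp:.4f}  theory {p_theory:.4f}")

Each printed level corresponds to one of the parallel discrete source coding problems into which the original continuous-valued problem is decomposed.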
First-Passage Time and Large-Deviation Analysis for Erasure Channels with Memory
This article considers the performance of digital communication systems
transmitting messages over finite-state erasure channels with memory.
Information bits are protected from channel erasures using error-correcting
codes; successful receptions of codewords are acknowledged at the source
through instantaneous feedback. The primary focus of this research is on
delay-sensitive applications, codes with finite block lengths and, necessarily,
non-vanishing probabilities of decoding failure. The contribution of this
article is twofold. A methodology to compute the distribution of the time
required to empty a buffer is introduced. Based on this distribution, the mean
hitting time to an empty queue and delay-violation probabilities for specific
thresholds can be computed explicitly. The proposed techniques apply to
situations where the transmit buffer contains a predetermined number of
information bits at the onset of the data transfer. Furthermore, as additional
performance criteria, large deviation principles are obtained for the empirical
mean service time and the average packet-transmission time associated with the
communication process. This rigorous framework yields a pragmatic methodology
to select code rate and block length for the communication unit as functions of
the service requirements. Examples motivated by practical systems are provided
to further illustrate the applicability of these techniques.
Comment: To appear in IEEE Transactions on Information Theory.
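A Monte Carlo stand-in (the article derives this distribution analytically) clarifies the quantity being studied: starting from a fixed number of buffered information bits, count the codeword slots needed until the buffer empties over a two-state erasure channel with instantaneous acknowledgements. All channel and code parameters below are assumed for the example.

import numpy as np

rng = np.random.default_rng(2)

P_STAY = {"good": 0.95, "bad": 0.80}        # Gilbert-Elliott style state persistence
P_ERASE = {"good": 0.02, "bad": 0.40}       # per-symbol erasure probability in each state
N, K = 64, 48                               # blocklength and information bits per codeword
T_CORR = N - K                              # erasures correctable (idealized MDS-like assumption)

def empty_buffer_time(buffer_bits):
    # Slots (codeword transmissions) needed to drain the buffer with ARQ feedback.
    state, slots = "good", 0
    while buffer_bits > 0:
        slots += 1
        erasures = rng.binomial(N, P_ERASE[state])
        if erasures <= T_CORR:              # decoding success, acknowledged instantly
            buffer_bits -= K
        if rng.random() > P_STAY[state]:    # Markov state transition after each block
            state = "bad" if state == "good" else "good"
    return slots

samples = np.array([empty_buffer_time(10 * K) for _ in range(20_000)])
print("mean hitting time :", samples.mean())
print("P(T > 15 slots)   :", (samples > 15).mean())

The empirical mean and tail probability printed here are simulation counterparts of the mean hitting time and delay-violation probabilities that the article computes explicitly, and varying N and K mimics the rate/blocklength selection problem it addresses.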
Exponential Strong Converse for Successive Refinement with Causal Decoder Side Information
We consider the K-user successive refinement problem with causal decoder
side information and derive an exponential strong converse theorem. The
rate-distortion region for the problem can be derived as a straightforward
extension of the two-user case by Maor and Merhav (2008). We show that for any
rate-distortion tuple outside the rate-distortion region of the K-user
successive refinement problem with causal decoder side information, the joint
excess-distortion probability approaches one exponentially fast. Our proof
follows by judiciously adapting the recently proposed strong converse technique
by Oohama using the information spectrum method, the variational form of the
rate-distortion region and Hölder's inequality. The lossy source coding
problem with causal decoder side information considered by El Gamal and
Weissman is a special case (K = 1) of the current problem. Therefore, the
exponential strong converse theorem for the El Gamal and Weissman problem
follows as a corollary of our result.
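Stated in symbols of our own choosing (the abstract does not fix notation), the exponential strong converse asserts that for any tuple (R_1, ..., R_K, D_1, ..., D_K) lying outside the rate-distortion region, there exists a constant E > 0 such that every code of blocklength n satisfies

  \Pr\{ d_k(X^n, \hat{X}^n_k) \le D_k \ \text{for all}\ k = 1, \ldots, K \} \le e^{-nE},

so the joint excess-distortion probability, the complement of the left-hand side, tends to one exponentially fast in n.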