On distributed coding, quantization of channel measurements and faster-than-Nyquist signaling
This dissertation considers three different aspects of modern digital communication
systems and is therefore divided into three parts.
The first part is distributed coding. It deals with source and source-channel code
design for digital communication systems with many transmitters and one receiver,
or with one transmitter and one receiver where side information is available at the
receiver but not at the transmitter. Such problems have attracted attention recently
because they extend classical point-to-point communication theory to networks. In
this first part, novel source and source-channel codes are designed by converting
each of the considered distributed coding problems into an equivalent classical
channel coding or classical source-channel coding problem. The proposed schemes
come very close to the theoretical limits and are thus able to exhibit some of the
gains predicted by network information theory.
The other two parts of this dissertation consider classical point-to-point digital
communication systems. The second part is quantization of coded channel measurements
at the receiver. Quantization limits the accuracy of continuous-valued measurements
so that they can be processed in the digital domain. Depending on the desired
processing of the quantized data, different quantizer design criteria should be
used. In this second part, the quantized received values from the channel are
processed by the receiver, which tries to recover the transmitted information.
Several quantization criteria for this case are compared exhaustively, providing
illuminating insight into this quantizer design problem.
The third part of this dissertation is faster-than-Nyquist signaling. In classical
point-to-point bandwidth-limited digital communication systems, the Nyquist rate,
equal to twice the bandwidth of the channel, is considered the maximum transmission
rate, or signaling rate. In this last part of the dissertation, we question this
Nyquist rate limitation by transmitting at higher signaling rates through the same
bandwidth. By mitigating the interference incurred by faster-than-Nyquist rates,
gains over Nyquist-rate systems are obtained.
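The loss of orthogonality at faster-than-Nyquist rates can be illustrated with a small sketch (a toy example of ours, not taken from the dissertation): unit-energy sinc pulses sent every tau*T seconds are orthogonal at the Nyquist spacing tau = 1, but interfere for tau < 1.

```python
import numpy as np

def ftn_interference(tau, num_symbols=9):
    """Inter-symbol interference seen by the middle symbol when sinc pulses
    (Nyquist interval T = 1) are sent every tau*T seconds.
    tau = 1 is Nyquist signaling; tau < 1 is faster-than-Nyquist.
    Returns the summed |pulse| contributions of the other symbols at the
    middle symbol's sampling instant (illustrative toy metric)."""
    mid = num_symbols // 2
    t0 = mid * tau  # sampling instant of the middle symbol
    isi = 0.0
    for k in range(num_symbols):
        if k == mid:
            continue
        # np.sinc(x) = sin(pi x)/(pi x); its zeros fall on nonzero integers
        isi += abs(np.sinc(t0 - k * tau))
    return isi

nyquist_isi = ftn_interference(1.0)  # ~0: sinc zeros land on the other symbols
ftn_isi = ftn_interference(0.8)      # > 0: pulses are no longer orthogonal
```

At tau = 1 every other pulse crosses zero at the sampling instant, so the ISI vanishes; at tau = 0.8 the same pulses leak energy into each other, and this is the interference the dissertation mitigates at the receiver.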
On practical design for joint distributed source and network coding
This paper considers the problem of communicating correlated information from multiple source nodes over a network of noiseless channels to multiple destination nodes, where each destination node wants to recover all sources. The problem involves a joint consideration of distributed compression and network information relaying. Although the optimal rate region has been theoretically characterized, it was not clear how to design practical communication schemes with low complexity. This work provides a partial solution to this problem by proposing a low-complexity scheme for the special case with two sources whose correlation is characterized by a binary symmetric channel. Our scheme is based on a careful combination of linear syndrome-based Slepian-Wolf coding and random linear mixing (network coding). It is in general suboptimal; however, its low complexity and robustness to network dynamics make it suitable for practical implementation.
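The syndrome-based Slepian-Wolf component can be sketched in a few lines (a toy (7,4) Hamming-code example of ours, not the paper's actual construction): the encoder transmits only the syndrome of x, and the decoder recovers x from the side information y by finding the minimum-weight error pattern consistent with that syndrome, exploiting the binary-symmetric correlation.

```python
import itertools

# Parity-check matrix of the (7,4) Hamming code over GF(2)
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(v):
    """Syndrome H v over GF(2)."""
    return tuple(sum(h * b for h, b in zip(row, v)) % 2 for row in H)

def sw_decode(y, s_x):
    """Recover x from side information y and the transmitted syndrome s_x.
    The correlation noise e = x XOR y satisfies H e = H y XOR s_x, so we
    search for the minimum-weight e with that syndrome (brute force is fine
    for this tiny code)."""
    n = len(y)
    target = tuple(a ^ b for a, b in zip(syndrome(y), s_x))
    for w in range(n + 1):
        for pos in itertools.combinations(range(n), w):
            e = [0] * n
            for p in pos:
                e[p] = 1
            if syndrome(e) == target:
                return [a ^ b for a, b in zip(y, e)]

x = [1, 0, 1, 1, 0, 0, 1]
y = [1, 0, 1, 0, 0, 0, 1]  # side information: x with one bit flipped (BSC)
s = syndrome(x)            # only 3 syndrome bits are sent instead of 7 bits
x_hat = sw_decode(y, s)
```

The compression comes from sending 3 bits instead of 7; the Hamming code's error-correcting capability absorbs the single-bit correlation noise.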
Orthogonal Multiple Access with Correlated Sources: Feasible Region and Pragmatic Schemes
In this paper, we consider orthogonal multiple access coding schemes, where
correlated sources are encoded in a distributed fashion and transmitted,
through additive white Gaussian noise (AWGN) channels, to an access point (AP).
At the AP, component decoders, associated with the source encoders, iteratively
exchange soft information by taking into account the source correlation. The
first goal of this paper is to investigate the ultimate achievable performance
limits in terms of a multi-dimensional feasible region in the space of channel
parameters, deriving insights on the impact of the number of sources. The
second goal is the design of pragmatic schemes, where the sources use
"off-the-shelf" channel codes. In order to analyze the performance of given
coding schemes, we propose an extrinsic information transfer (EXIT)-based
approach, which makes it possible to determine the corresponding
multi-dimensional feasible regions. On the basis of the proposed analytical
framework, the performance of pragmatic coded schemes based on serially
concatenated convolutional codes (SCCCs) is discussed.
Hamming distance spectrum of DAC codes for equiprobable binary sources
Distributed Arithmetic Coding (DAC) is an effective technique for implementing Slepian-Wolf coding (SWC). It has been shown that a DAC code partitions the source space into unequal-size codebooks, so that the overall performance of DAC codes depends on the cardinality and structure of these codebooks. The problem of DAC codebook cardinality has been solved by the so-called Codebook Cardinality Spectrum (CCS). This paper extends the previous work on CCS by studying the problem of DAC codebook structure. We define the Hamming Distance Spectrum (HDS) to describe DAC codebook structure and propose a mathematical method to calculate the HDS of DAC codes. The theoretical analyses are verified by experimental results.
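The codebook partition that the CCS and HDS describe can be made concrete with a toy overlapped-interval encoder (an illustrative sketch under simplified assumptions, not the paper's exact formulation): when the two symbol sub-intervals overlap, several source sequences map to final intervals containing the same codeword point, and those sequences form one codebook that the decoder must disambiguate using side information.

```python
def dac_interval(bits, q):
    """Final interval of a toy overlapped arithmetic (DAC-style) encoder for
    a binary sequence: symbol 0 maps to [0, q) and symbol 1 to [1-q, 1)
    within the current interval.  For rate < 1 bit/symbol, q > 0.5 and the
    two sub-intervals overlap."""
    lo, w = 0.0, 1.0
    for b in bits:
        if b == 1:
            lo += w * (1 - q)
        w *= q
    return lo, lo + w

def codebook(point, n, q):
    """All length-n sequences whose final interval contains the given
    codeword point: the 'codebook' whose cardinality and Hamming-distance
    structure the CCS and HDS characterize."""
    members = []
    for m in range(2 ** n):
        bits = [(m >> (n - 1 - i)) & 1 for i in range(n)]
        lo, hi = dac_interval(bits, q)
        if lo <= point < hi:
            members.append(tuple(bits))
    return members

# q = 2**-0.5 corresponds to roughly 0.5 bit/symbol: with overlap, the
# codeword point 0.3 is covered by several length-4 sequences at once.
cb = codebook(0.3, 4, 2 ** -0.5)
```

With no overlap (q = 0.5) each point would identify a unique sequence; the overlap trades that uniqueness for compression, which is exactly why codebook cardinality and structure govern performance.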
Neural Distributed Compressor Discovers Binning
We consider lossy compression of an information source when the decoder has
lossless access to a correlated one. This setup, also known as the Wyner-Ziv
problem, is a special case of distributed source coding. To this day, practical
approaches for the Wyner-Ziv problem have neither been fully developed nor
heavily investigated. We propose a data-driven method based on machine learning
that leverages the universal function approximation capability of artificial
neural networks. We find that our neural network-based compression scheme,
based on variational vector quantization, recovers some principles of the
optimum theoretical solution of the Wyner-Ziv setup, such as binning in the
source space as well as optimal combination of the quantization index and side
information, for exemplary sources. These behaviors emerge although no
structure exploiting knowledge of the source distributions was imposed. Binning
is a widely used tool in information theoretic proofs and methods, and to our
knowledge, this is the first time it has been explicitly observed to emerge
from data-driven learning. (Draft of a journal version of our previous ISIT 2023
paper, arXiv:2305.04380.)
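The binning principle the network rediscovers can be sketched with a toy scalar Wyner-Ziv scheme (our own illustration, not the paper's learned compressor): the encoder quantizes the source but transmits only the quantizer index modulo a small number of bins, and the decoder resolves the resulting ambiguity with the correlated side information.

```python
def wz_encode(x, num_levels, num_bins, xmax=1.0):
    """Quantize x to one of num_levels uniform levels on [-xmax, xmax], then
    transmit only the bin index (quantizer index mod num_bins).  Sending
    log2(num_bins) bits instead of log2(num_levels) is the binning gain."""
    step = 2 * xmax / num_levels
    q = min(num_levels - 1, max(0, int((x + xmax) / step)))
    return q % num_bins

def wz_decode(bin_idx, y, num_levels, num_bins, xmax=1.0):
    """Among all quantizer levels sharing the received bin, pick the one
    whose reconstruction point is closest to the side information y."""
    step = 2 * xmax / num_levels
    candidates = range(bin_idx, num_levels, num_bins)
    q = min(candidates, key=lambda i: abs((-xmax + (i + 0.5) * step) - y))
    return -xmax + (q + 0.5) * step

# x and y are correlated; 16 levels but only 4 bins -> 2 bits saved.
x, y = 0.62, 0.58
x_hat = wz_decode(wz_encode(x, 16, 4), y, 16, 4)
```

Because levels in the same bin are far apart, the side information reliably picks the right one; this is the hand-designed analogue of the binning behavior that emerges in the learned compressor.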
Research and developments of distributed video coding
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The recently developed Distributed Video Coding (DVC) is typically suited to applications such as wireless/wired video sensor networks and mobile cameras, where traditional video coding standards are not feasible due to the constrained computation at the encoder. With DVC, the computational burden is moved from the encoder to the decoder, and compression efficiency is achieved via joint decoding at the decoder. The practical realization of DVC is Wyner-Ziv (WZ) video coding, where side information is available at the decoder to perform joint decoding. This joint decoding inevitably leads to a very complex decoder. Much current work on WZ video coding emphasises improving coding performance but neglects the huge complexity incurred at the decoder, even though decoder complexity directly influences system output. The first stage of this research optimises the decoder in pixel-domain WZ video coding (PDWZ) while still achieving similar compression performance. More specifically, four issues are addressed: the input block size, side information generation, the side information refinement process, and the feedback channel.
Transform-domain WZ video coding (TDWZ) clearly outperforms PDWZ because spatial redundancy is exploited during encoding. However, since there is no motion estimation at the encoder in WZ video coding, current WZ schemes do not exploit temporal correlation at the encoder at all. In the middle stage of this research, the 3D DCT is adopted in TDWZ to remove redundancy in both the spatial and temporal directions and thus provide even higher coding performance. The performance of transform-domain Distributed Multiview Video Coding (DMVC) is then investigated. In particular, three transform-domain DMVC frameworks are studied: DMVC using TDWZ based on the 2D DCT, DMVC using TDWZ based on the 3D DCT, and residual DMVC using TDWZ based on the 3D DCT.
One important application of the WZ coding principle is error resilience. There have been several attempts to apply WZ error-resilient coding to current video coding standards such as H.264/AVC or MPEG-2. The final stage of this research is the design of a WZ error-resilient scheme for a wavelet-based video codec. To balance the trade-off between error-resilience capability and bandwidth consumption, the proposed scheme emphasises protection of the Region of Interest (ROI). Efficient bandwidth utilisation is achieved jointly by WZ coding and by sacrificing the quality of unimportant areas. In summary, this research contributes several advances in WZ video coding: an efficient PDWZ with an optimised decoder; an advanced TDWZ based on the 3D DCT, which is then applied to multiview video coding to realise advanced transform-domain DMVC; and an efficient error-resilient scheme for a wavelet video codec, with which the trade-off between bandwidth consumption and error resilience can be better balanced.
Bridging Hamming Distance Spectrum with Coset Cardinality Spectrum for Overlapped Arithmetic Codes
Overlapped arithmetic codes, featured by overlapped intervals, are a variant of
arithmetic codes that can be used to implement Slepian-Wolf coding. To analyze
overlapped arithmetic codes, we have proposed two theoretical tools: the Coset
Cardinality Spectrum (CCS) and the Hamming Distance Spectrum (HDS). The former
describes how the source space is partitioned into cosets (equally or
unequally), and the latter describes how codewords are structured within each
coset (densely or sparsely). Until now, however, these two tools have remained
almost parallel to each other, with seemingly no intersection between them. The
main contribution of this paper is bridging HDS with CCS through a rigorous
mathematical proof. Specifically, HDS can be quickly and accurately calculated
from CCS in some cases. All theoretical analyses are verified by simulation
results.
Codebook cardinality spectrum of distributed arithmetic codes for stationary memoryless binary sources
It has been demonstrated that, as a nonlinear implementation of Slepian-Wolf coding, Distributed Arithmetic Coding (DAC) outperforms traditional Low-Density Parity-Check (LDPC) codes for short code lengths and biased sources. This fact has triggered research efforts into the theoretical analysis of DAC. In our previous work, we proposed two analytical tools, the Codebook Cardinality Spectrum (CCS) and the Hamming Distance Spectrum, to analyze DAC for independent and identically distributed (i.i.d.) binary sources with uniform distribution. This article extends our work on CCS from uniform i.i.d. binary sources to biased i.i.d. binary sources. We begin with the final CCS and then deduce each level of CCS backwards by recursion. The main finding of this article is that the final CCS of biased i.i.d. binary sources is not uniformly distributed over [0, 1). This article derives the final CCS of biased i.i.d. binary sources and proposes a numerical algorithm for calculating the CCS effectively in practice. All theoretical analyses are well verified by experimental results.