33 research outputs found
TTCM-aided rate-adaptive distributed source coding for Rayleigh fading channels
Adaptive turbo-trellis-coded modulation (TTCM)-aided asymmetric distributed source coding (DSC) is proposed, where two correlated sources are transmitted to a destination node. The first source sequence is TTCM encoded and further compressed before transmission through a Rayleigh fading channel, whereas the second source signal is assumed to be perfectly decoded and hence flawlessly available at the destination, where it is exploited as side information for improving the decoding performance of the first source. The proposed scheme is capable of reliable communications within 0.80 dB of the Slepian-Wolf/Shannon (SW/S) theoretical limit at a bit error rate (BER) of 10^-5. Furthermore, its encoder is capable of accommodating time-variant short-term correlation between the two sources.
Distributed coding using punctured quasi-arithmetic codes for memory and memoryless sources
This correspondence considers the use of punctured quasi-arithmetic (QA) codes for the Slepian-Wolf problem. These entropy codes are defined by finite state machines for memoryless and first-order memory sources. Puncturing an entropy-coded bit-stream leads to an ambiguity at the decoder side. The decoder makes use of a correlated version of the original message in order to remove this ambiguity. A complete distributed source coding (DSC) scheme based on QA encoding with side information at the decoder is presented, together with iterative structures based on QA codes. The proposed schemes are adapted to memoryless and first-order memory sources. Simulation results reveal that the proposed schemes are efficient in terms of decoding performance for short sequences compared to well-known DSC solutions using channel codes.
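The side-information principle shared by these DSC abstracts can be illustrated with a minimal syndrome-based sketch (this is the generic channel-code approach to Slepian-Wolf coding, not the QA-code scheme described above): the encoder transmits only the syndrome of the source block, and the decoder combines it with a correlated side-information sequence, here using a (7,4) Hamming code under the assumption of at most one discrepancy per block.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column i is the binary
# representation of i+1, so a single-bit error's syndrome equals its position.
H = np.array([[int(b) for b in format(i + 1, "03b")] for i in range(7)]).T

def encode(x):
    """Encoder: transmit only the 3-bit syndrome of the 7-bit source block."""
    return H @ x % 2

def decode(s, y):
    """Decoder: combine the received syndrome with side information y,
    assumed to differ from the source x in at most one position."""
    e_syn = (s + H @ y) % 2          # syndrome of the error pattern x XOR y
    x_hat = y.copy()
    if e_syn.any():                  # nonzero syndrome -> locate and flip the bit
        pos = int("".join(map(str, e_syn)), 2) - 1
        x_hat[pos] ^= 1
    return x_hat

x = np.array([1, 0, 1, 1, 0, 0, 1])  # source block
y = x.copy(); y[4] ^= 1              # correlated side information (one bit flipped)
s = encode(x)                        # only 3 bits cross the channel instead of 7
assert np.array_equal(decode(s, y), x)
```

The compression comes from transmitting 3 syndrome bits instead of 7 source bits; the correlation with the side information makes up the difference, which is exactly the Slepian-Wolf setting.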
Design techniques for graph-based error-correcting codes and their applications
In Shannon's seminal paper, "A Mathematical Theory of Communication", he defined "channel capacity", which predicted the ultimate performance that transmission systems can achieve and suggested that capacity is achievable by error-correcting (channel) coding. The main idea of error-correcting codes is to add redundancy to the information to be transmitted so that the receiver can exploit the correlation between the transmitted information and the redundancy and afterwards correct or detect errors caused by the channel. The discovery of turbo codes and the rediscovery of Low Density Parity Check (LDPC) codes have revived research in channel coding with novel ideas and techniques on code concatenation, iterative decoding, graph-based construction, and design based on density evolution. This dissertation focuses on the design aspect of graph-based channel codes such as LDPC and Irregular Repeat Accumulate (IRA) codes via density evolution, and uses this technique to design IRA codes for scalable image/video communication and LDPC codes for distributed source coding, which can be considered a channel coding problem.
The first part of the dissertation covers the design and analysis of rate-compatible IRA codes for scalable image transmission systems. It analyzes, via density evolution, the effect of puncturing applied to IRA codes, and presents an asymptotic analysis of the performance of the resulting systems.
In the second part of the dissertation, we consider designing source-optimized IRA codes. The idea is to take advantage of the Unequal Error Protection (UEP) capability that IRA codes derive from their irregularity. In video and image transmission systems, performance is measured by Peak Signal-to-Noise Ratio (PSNR). We propose an approach to designing IRA codes optimized for this criterion.
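The PSNR criterion mentioned above is a simple function of the mean squared error between the original and reconstructed images; a minimal sketch for 8-bit images (the toy image below is purely illustrative):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; peak = 255 for 8-bit images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# toy 8x8 8-bit "image" with a single corrupted pixel
img = np.full((8, 8), 128, dtype=np.uint8)
noisy = img.copy()
noisy[0, 0] = 138                    # one pixel off by 10
print(round(psnr(img, noisy), 2))    # roughly 46 dB for this tiny distortion
```

Because PSNR is a nonlinear function of the error, codes optimized for raw BER are not automatically optimal for it, which is what motivates the source-optimized design above.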
In the third part of the dissertation, we investigate the Slepian-Wolf coding problem using LDPC codes. The problems addressed include coding with multiple sources and non-binary sources, and coding using multi-level and non-binary codes.
Distributed Joint Source-Channel Coding in Wireless Sensor Networks
Given that sensors in wireless sensor networks are energy-limited and must cope with harsh wireless channel conditions, there is an urgent need for a low-complexity coding method offering a high compression ratio and noise resilience. This paper reviews progress in distributed joint source-channel coding, which can address this issue. The main existing deployments, from theory to practice, of distributed joint source-channel coding over independent channels, multiple access channels, and broadcast channels are introduced in turn. We also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency.
Efficient Information Reconciliation for Quantum Key Distribution
Advances in modern cryptography for secret-key agreement are driving the development of new methods and techniques in key distillation. Most of these developments, focusing on information reconciliation and privacy amplification, are of direct benefit to quantum key distribution (QKD). In this context, information reconciliation has historically been done using heavily interactive protocols, i.e. with a high number of channel communications, such as the well-known Cascade. In this work we show how modern coding techniques can improve the performance of these methods for information reconciliation in QKD. We propose the use of low-density parity-check (LDPC) codes, since they are good in both efficiency and throughput. A price to pay, a priori, for using LDPC codes is that good efficiency is attained only for very long codes and in a very narrow range of error rates. This forces the use of several codes when the error rate varies significantly across different uses of the channel, a common situation, for instance, in QKD. To overcome these problems, this study examines various techniques for adapting LDPC codes, thus reducing the number of codes needed to cover the target range of error rates. These techniques are also used to improve the average efficiency of short-length LDPC codes based on a feedback coding scheme. The importance of short codes lies in the fact that they can be used in high-throughput hardware implementations. In a further advancement, a protocol is proposed that avoids the a priori error rate estimation required in other approaches. This blind protocol also has interesting implications for the finite-key analysis.
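The interactivity cost of Cascade that motivates the LDPC approach above is easy to see in a sketch: each parity comparison is one channel message, and locating a single error in a block takes a logarithmic number of them. Below is a minimal one-pass version of the BINARY step (real Cascade runs several passes over shuffled blocks; this simplified pass assumes at most one error per block).

```python
import random

def parity(bits, lo, hi):
    """Parity of bits[lo:hi] -- each call models one channel message."""
    return sum(bits[lo:hi]) % 2

def binary_locate(alice, bob, lo, hi):
    """BINARY step of Cascade: given a block whose parities differ,
    locate one error position with about log2(block size) exchanges."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice, lo, mid) != parity(bob, lo, mid):
            hi = mid                 # error is in the left half
        else:
            lo = mid                 # left halves agree -> error on the right
    return lo

def reconcile_pass(alice, bob, block):
    """One pass: compare block parities, fix one error per mismatched block."""
    bob = bob[:]
    for lo in range(0, len(bob), block):
        hi = min(lo + block, len(bob))
        if parity(alice, lo, hi) != parity(bob, lo, hi):
            bob[binary_locate(alice, bob, lo, hi)] ^= 1
    return bob

random.seed(1)
alice = [random.randint(0, 1) for _ in range(64)]
bob = alice[:]
bob[5] ^= 1; bob[40] ^= 1            # two discrepancies from the noisy channel
bob = reconcile_pass(alice, bob, block=16)
assert bob == alice                  # errors fell in distinct blocks: both fixed
```

An LDPC-based reconciliation replaces all of these round trips with a single syndrome message, which is the throughput advantage the abstract refers to.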
REGION-BASED ADAPTIVE DISTRIBUTED VIDEO CODING CODEC
The recently developed Distributed Video Coding (DVC) is typically suited to applications where conventional video coding is not feasible because of its inherently high-complexity encoding. Examples include video surveillance using wireless/wired video sensor networks and applications using mobile cameras. With DVC, the complexity is shifted from the encoder to the decoder.
The practical application of DVC is referred to as Wyner-Ziv (WZ) video coding, where an estimate of the original frame, called "side information", is generated using motion compensation at the decoder. Compression is achieved by sending only the extra information needed to correct this estimate. An error-correcting code is used under the assumption that the estimate is a noisy version of the original frame, and the rate needed is a certain number of parity bits. The side information is assumed to become available at the decoder through a virtual channel. Owing to the limitations of the compensation method, the predicted frame, or side information, is expected to have varying degrees of accuracy. These limitations stem from location-specific, non-stationary estimation noise. To counter this, conventional video coders, such as MPEG, use frame partitioning to allocate the optimum coder to each partition and hence achieve better rate-distortion performance. The same approach, however, has not been used in DVC as it increases encoder complexity.
This work proposes partitioning the considered frame into many coding units (regions), where each unit is encoded differently. This partitioning is, however, done at the decoder while generating the side information, and the region map is sent to the encoder at a very small rate penalty. The partitioning allows appropriate DVC coding parameters (virtual channel, rate, and quantizer) to be allocated to each region. The resulting region map is compressed using a quadtree algorithm and communicated to the encoder via the feedback channel. Rate control in DVC is performed by channel coding techniques (turbo codes, LDPC, etc.). The performance of the channel code depends heavily on the accuracy of the virtual channel model that models the estimation error for each region. In this work, a turbo code has been used, and an adaptive WZ DVC codec is designed in both the transform domain and the pixel domain. Transform domain WZ video coding (TDWZ) has distinctly superior performance compared to normal pixel domain Wyner-Ziv coding (PDWZ), since it exploits spatial redundancy during encoding. The performance evaluations show that the proposed system is superior to existing distributed video coding solutions. Although the proposed system requires extra bits representing the region map to be transmitted, the rate gain is still noticeable, and it outperforms state-of-the-art frame-based DVC by 0.6-1.9 dB.
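The quadtree compression of the region map can be sketched with a toy encoder (the thesis does not specify its exact bitstream format; this is an illustrative scheme: a uniform square block is emitted as "1" plus its value, a mixed block as "0" followed by its four quadrants):

```python
def quadtree_encode(grid, x, y, size, out):
    """Recursively encode a square region: '1'+bit for a uniform block,
    '0' followed by the four quadrant encodings otherwise."""
    vals = {grid[y + dy][x + dx] for dy in range(size) for dx in range(size)}
    if len(vals) == 1:               # uniform block: 2 bits total
        out.append("1" + str(vals.pop()))
        return
    out.append("0")                  # split marker, then recurse on quadrants
    h = size // 2
    for dy, dx in ((0, 0), (0, h), (h, 0), (h, h)):
        quadtree_encode(grid, x + dx, y + dy, h, out)

# hypothetical 4x4 region map: 1 = a "hard" region needing more parity bits
region_map = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
bits = []
quadtree_encode(region_map, 0, 0, 4, bits)
code = "".join(bits)
print(code, f"({len(code)} bits vs 16 raw)")
```

Large uniform regions collapse to a couple of bits each, which is why the region map costs the encoder only a small rate penalty over the feedback channel.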
The feedback channel (FC) has the role of adapting the bit rate to the changing statistics between the side information and the frame to be encoded. In the unidirectional scenario, the encoder must perform the rate control. To estimate the rate correctly, the encoder must calculate typical side information. However, the rate cannot be calculated exactly at the encoder; it can only be estimated. This work therefore also proposes a feedback-free, region-based adaptive DVC solution in the pixel domain, based on a machine learning approach to estimating the side information. Although the performance evaluations show a rate penalty, it is acceptable given the simplicity of the proposed algorithm.
On distributed coding, quantization of channel measurements and faster-than-Nyquist signaling
This dissertation considers three different aspects of modern digital communication systems and is therefore divided into three parts.
The first part is distributed coding. This part deals with source and source-channel code design issues for digital communication systems with many transmitters and one receiver, or with one transmitter and one receiver but with side information at the receiver that is not available at the transmitter. Such problems have been attracting attention lately, as they constitute a way of extending classical point-to-point communication theory to networks. In this first part of the dissertation, novel source and source-channel codes are designed by converting each of the considered distributed coding problems into an equivalent classical channel coding or classical source-channel coding problem. The proposed schemes come very close to the theoretical limits and are thus able to exhibit some of the gains predicted by network information theory.
In the other two parts of this dissertation, classical point-to-point digital communication systems are considered. The second part is quantization of coded channel measurements at the receiver. Quantization is a way to limit the accuracy of continuous-valued measurements so that they can be processed in the digital domain. Depending on the desired type of processing of the quantized data, different quantizer design criteria should be used. In this second part of the dissertation, the quantized received values from the channel are processed by the receiver, which tries to recover the transmitted information. An exhaustive comparison of several quantization criteria for this case is presented, providing illuminating insight into this quantizer design problem.
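As a concrete baseline for the quantizer design problem above, here is a minimal mid-rise uniform quantizer applied to simulated channel observations (the Gaussian model, bit width, and clipping range are illustrative assumptions, not the dissertation's criteria):

```python
import numpy as np

def uniform_quantize(x, n_bits, max_abs):
    """Mid-rise uniform quantizer: clip to [-max_abs, max_abs] and map each
    sample to one of 2**n_bits reconstruction levels (cell midpoints)."""
    levels = 2 ** n_bits
    step = 2 * max_abs / levels
    idx = np.clip(np.floor((x + max_abs) / step), 0, levels - 1)
    return -max_abs + (idx + 0.5) * step

rng = np.random.default_rng(0)
y = rng.normal(size=10_000)              # simulated noisy channel observations
yq = uniform_quantize(y, n_bits=3, max_abs=3.0)
print("distinct levels:", len(np.unique(yq)))
print("MSE:", round(float(np.mean((y - yq) ** 2)), 4))
```

Minimizing this MSE is only one possible criterion; as the abstract notes, the best criterion depends on what the receiver does with the quantized values afterwards (e.g. soft-decision decoding), which is exactly what the comparison in this part studies.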
The third part of this dissertation is faster-than-Nyquist signaling. The Nyquist rate in classical point-to-point bandwidth-limited digital communication systems is considered the maximum transmission (signaling) rate and is equal to twice the bandwidth of the channel. In this last part of the dissertation, we question this Nyquist rate limitation by transmitting at higher signaling rates through the same bandwidth. By mitigating the interference incurred by the faster-than-Nyquist rates, gains over Nyquist-rate systems are obtained.
Research and developments of distributed video coding
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The recently developed Distributed Video Coding (DVC) is typically suited to applications such as wireless/wired video sensor networks and mobile cameras, where traditional video coding standards are not feasible due to the constrained computation at the encoder. With DVC, the computational burden is moved from the encoder to the decoder, and compression efficiency is achieved via joint decoding at the decoder. The practical application of DVC is referred to as Wyner-Ziv (WZ) video coding, where side information is available at the decoder to perform joint decoding. This joint decoding inevitably results in a very complex decoder. Much current work on WZ video coding emphasises improving coding performance but neglects the huge complexity incurred at the decoder, even though decoder complexity has a direct influence on the system output. The first period of this research targets optimising the decoder in pixel domain WZ video coding (PDWZ) while still achieving similar compression performance. More specifically, four issues are addressed: optimising the input block size, the side information generation, the side information refinement process, and the feedback channel.
Transform domain WZ video coding (TDWZ) has distinctly superior performance to normal PDWZ because it exploits redundancy in the spatial direction during encoding. However, since there is no motion estimation at the encoder in WZ video coding, the temporal correlation is not exploited at the encoder in any current WZ video coding scheme. In the middle period of this research, the 3D DCT is adopted in TDWZ to remove redundancy in both the spatial and temporal directions and thus provide even higher coding performance. In the next step of this research, the performance of transform domain Distributed Multiview Video Coding (DMVC) is also investigated. In particular, three types of transform domain DMVC frameworks are investigated: transform domain DMVC using TDWZ based on the 2D DCT, transform domain DMVC using TDWZ based on the 3D DCT, and transform domain residual DMVC using TDWZ based on the 3D DCT.
One of the important applications of the WZ coding principle is error resilience. There have been several attempts to apply WZ error-resilient coding to current video coding standards, e.g. H.264/AVC or MPEG-2. The final stage of this research is the design of a WZ error-resilient scheme for a wavelet-based video codec. To balance the trade-off between error-resilience ability and bandwidth consumption, the proposed scheme emphasises protection of the Region of Interest (ROI) area. Efficient bandwidth utilisation is achieved through the combined efforts of WZ coding and sacrificing the quality of unimportant areas. In summary, this research achieves several advances in WZ video coding. First, it builds an efficient PDWZ with an optimised decoder. Secondly, it builds an advanced TDWZ based on the 3D DCT, which is then applied to multiview video coding to realise advanced transform domain DMVC. Finally, it designs an efficient error-resilient scheme for a wavelet video codec, with which the trade-off between bandwidth consumption and error resilience can be better balanced.