Coding in 802.11 WLANs
Forward error correction (FEC) coding is widely used in communication systems to correct transmission errors. In IEEE 802.11a/g transmitters, convolutional codes are used for FEC at the physical (PHY) layer. As is typical in wireless systems, only a limited choice of pre-specified coding rates is supported. These are implemented in hardware and thus difficult to change, and the coding rates are selected with point-to-point operation in mind.
This thesis is concerned with using FEC coding in 802.11 WLANs in more interesting ways that are better aligned with application requirements: for example, coding that supports multicast traffic rather than simple point-to-point traffic; coding that is cognisant of the multiuser nature of the wireless channel; and coding that takes account of delay requirements as well as losses. We consider layering additional coding on top of the existing 802.11 PHY layer coding, and investigate the tradeoff between higher layer coding and PHY layer modulation and FEC coding as well as MAC layer scheduling.
Firstly we consider the joint multicast performance of higher-layer fountain coding concatenated with 802.11a/g OFDM PHY modulation/coding. A study on the optimal choice of PHY rates with and without fountain coding is carried out for standard 802.11 WLANs. We find that, in contrast to studies in cellular networks, in 802.11a/g WLANs the PHY rate that optimizes uncoded multicast performance is also close to optimal for fountain-coded multicast traffic. This indicates that in 802.11a/g WLANs cross-layer rate control for higher-layer fountain coding concatenated with physical layer modulation and FEC would bring few benefits.
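The fountain-coding layer considered above can be illustrated with a toy LT-style encoder and peeling decoder. This is a minimal sketch under simplifying assumptions (a uniform degree distribution rather than a robust soliton, and packets modelled as integers); it is not the thesis's construction.

```python
import random

def lt_encode(source, n_coded, rng):
    """Produce n_coded fountain-coded packets, each the XOR of a
    random subset of the k source packets. (The uniform degree draw
    is for brevity; a practical LT code would use a robust-soliton
    degree distribution.)"""
    k = len(source)
    coded = []
    for _ in range(n_coded):
        degree = rng.randint(1, k)
        idx = rng.sample(range(k), degree)
        value = 0
        for i in idx:
            value ^= source[i]
        coded.append((set(idx), value))
    return coded

def lt_decode(coded, k):
    """Peeling decoder: repeatedly resolve degree-1 equations and
    substitute recovered symbols into the remaining equations."""
    eqs = [[set(idx), val] for idx, val in coded]
    recovered = {}
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for eq in eqs:
            idx = eq[0]
            # strip symbols that are already known from this equation
            for i in [i for i in idx if i in recovered]:
                idx.discard(i)
                eq[1] ^= recovered[i]
            if len(idx) == 1:
                recovered[idx.pop()] = eq[1]
                progress = True
    return recovered
```

The receiver simply accumulates coded packets until decoding succeeds, so no coding rate needs to be agreed in advance between sender and receivers.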
Secondly, using experimental measurements taken in an outdoor environment, we model the channel provided by outdoor 802.11 links as a hybrid binary symmetric/packet erasure channel. This hybrid channel offers capacity increases of more than 100% compared to a conventional packet erasure channel (PEC) over a wide range of RSSIs. Based upon the established channel model, we further consider the potential performance gains of adopting a binary symmetric channel (BSC) paradigm for multi-destination aggregation in 802.11 WLANs. We consider two BSC-based higher-layer coding approaches, namely superposition coding and a simpler time-sharing coding, for multi-destination aggregated packets. The performance results for both unicast and multicast traffic, taking account of MAC layer overheads, demonstrate that increases in network throughput of more than 100% are possible over a wide range of channel conditions, and that the simpler time-sharing approach yields most of these gains with only a minor loss of performance.
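The scale of the BSC-versus-PEC gain can be seen with a back-of-envelope rate comparison: under a PEC paradigm a single bit error erases the whole packet, whereas under a BSC paradigm bit errors are corrected at a rate approaching the BSC capacity 1 - H2(p). The bit error rate and packet length below are illustrative assumptions, not figures from the outdoor measurements.

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def pec_rate(ber, packet_bits):
    """PEC paradigm: any bit error erases the whole packet, so the
    useful rate is the probability that a packet arrives error-free."""
    return (1 - ber) ** packet_bits

def bsc_rate(ber):
    """BSC paradigm: bit errors are corrected by higher-layer coding,
    so the achievable per-bit rate is the BSC capacity 1 - H2(ber)."""
    return 1 - h2(ber)
```

For an assumed BER of 1e-3 and 8000-bit packets, `pec_rate` is tiny while `bsc_rate` stays near 0.99, which is the intuition behind the measured capacity gains.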
Finally, we consider the proportional fair allocation of higher-layer coding rates and airtimes in 802.11 WLANs, taking link losses and delay constraints into account. We find that a layered approach of separating MAC scheduling and higher-layer coding rate selection is optimal. The proportional fair coding rate and airtime allocation (i) assigns equal total airtime (i.e. airtime including both successful and failed transmissions) to every station in a WLAN, (ii) ensures that the station airtimes sum to unity (so that operation is at the rate region boundary), and (iii) selects the coding rate that maximises goodput (treating packets decoded after the delay deadline as losses).
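The structure of this allocation can be sketched as follows. The code is illustrative only: `success_prob(r, p)` is a caller-supplied model of in-deadline decoding success, and all names are assumptions rather than the thesis's notation.

```python
def pf_allocation(loss_probs, rates, success_prob):
    """Proportional fair airtime/coding-rate allocation sketch:
    every one of the n stations receives equal total airtime 1/n
    (so airtimes sum to unity), and each station's coding rate is
    chosen independently from a discrete rate set to maximise
    goodput = rate * P(decode within the delay deadline)."""
    n = len(loss_probs)
    airtime = 1.0 / n  # equal total airtime per station
    allocation = []
    for p in loss_probs:
        best = max(rates, key=lambda r: r * success_prob(r, p))
        allocation.append((airtime, best))
    return allocation
```

For instance, under the idealised assumption that a rate-r code decodes in time iff r does not exceed the link's delivery ratio 1 - p, each station simply gets the largest supported rate below its delivery ratio.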
Near-capacity fixed-rate and rateless channel code constructions
Fixed-rate and rateless channel code constructions are designed to satisfy conflicting design tradeoffs, leading to codes that lend themselves to practical implementation whilst offering good bit error ratio (BER) and block error ratio (BLER) performance. More explicitly, two novel low-density parity-check (LDPC) code constructions are proposed; the first constitutes a family of quasi-cyclic protograph LDPC codes, which has a Vandermonde-like parity-check matrix (PCM). The second constitutes a specific class of protograph LDPC codes, termed multilevel structured (MLS) LDPC codes. These codes possess a PCM construction that allows the coexistence of pseudo-randomness and a structure requiring reduced memory. More importantly, it is also demonstrated that these benefits accrue without any compromise in the attainable BER/BLER performance. We also present the novel concept of separating multiple users by means of user-specific channel codes, which is referred to as channel code division multiple access (CCDMA), and provide an example based on MLS LDPC codes. In particular, we circumvent the difficulty of potentially high memory requirements, while ensuring that each user's bits in the CCDMA system are equally protected. With regard to rateless channel coding, we propose a novel family of codes, which we refer to as reconfigurable rateless codes, that are capable not only of varying their code rate but also of adaptively modifying their encoding/decoding strategy according to the near-instantaneous channel conditions. We demonstrate that the proposed reconfigurable rateless codes are capable of shaping their own degree distribution according to the near-instantaneous requirements imposed by the channel, but without any explicit channel knowledge at the transmitter.
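The quasi-cyclic structure underlying such PCMs can be illustrated by expanding a small base matrix of circulant shifts into a full binary matrix. This is a generic QC-LDPC expansion sketch; the thesis's Vandermonde-like shift assignment is not reproduced here.

```python
def expand_qc(base, Z):
    """Expand a QC-LDPC base matrix of circulant shifts into a full
    binary parity-check matrix. An entry s >= 0 becomes the Z x Z
    identity cyclically shifted by s columns; an entry -1 becomes
    the Z x Z all-zero block. Only the shifts need storing, which is
    the source of the reduced memory requirement."""
    m, n = len(base), len(base[0])
    H = [[0] * (n * Z) for _ in range(m * Z)]
    for i in range(m):
        for j in range(n):
            s = base[i][j]
            if s >= 0:
                for r in range(Z):
                    # row r of the shifted identity has its 1 at
                    # column (r + s) mod Z within the block
                    H[i * Z + r][j * Z + (r + s) % Z] = 1
    return H
```

Storing a base matrix of m x n shift values instead of an mZ x nZ binary matrix is what makes hardware implementation of such codes practical.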
Additionally, a generalised transmit preprocessing aided closed-loop downlink multiple-input multiple-output (MIMO) system is presented, in which both the channel coding components and the linear transmit precoder exploit knowledge of the channel state information (CSI). More explicitly, we embed a rateless code in a MIMO transmit preprocessing scheme, in order to attain near-capacity performance across a wide range of channel signal-to-noise ratios (SNRs), rather than only at a specific SNR. The performance of our scheme is further enhanced with the aid of a technique referred to as pilot symbol assisted rateless (PSAR) coding, whereby a predetermined fraction of pilot bits is appropriately interspersed with the original information bits at the channel coding stage, instead of multiplexing pilots at the modulation stage as in classic pilot symbol assisted modulation (PSAM). We subsequently demonstrate that the PSAR code-aided transmit preprocessing scheme succeeds in gleaning more information from the inserted pilots than the classic PSAM technique, because the pilot bits are not only useful for sounding the channel at the receiver but also beneficial for significantly reducing the computational complexity of the rateless channel decoder.
A STUDY OF ERASURE CORRECTING CODES
This work focuses on erasure codes, particularly high-performance ones, and on the related decoding algorithms, especially those with low computational complexity. The work is composed of different pieces, but its main components are developed within the following two main themes.
Ideas from message passing are applied to resolve erasures after transmission. An efficient matrix representation of the belief propagation (BP) decoding algorithm on the binary erasure channel (BEC) is introduced as the recovery algorithm. Gallager's bit-flipping algorithm is further developed into the guess and multi-guess algorithms, applied in particular to recover the erasures left unsolved by the recovery algorithm.
A novel maximum-likelihood decoding algorithm, the In-place algorithm, is proposed with reduced computational complexity. A further study of the marginal number of correctable erasures by the In-place algorithm determines a lower bound on the average number of correctable erasures. Following the spirit of searching for the most likely codeword given the received vector, we propose a new branch-evaluation-search-on-the-code-tree (BESOT) algorithm, which is powerful enough to approach the ML performance for all linear block codes.
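The principle behind maximum-likelihood erasure decoding, namely treating the parity checks as a linear system over GF(2) in the erased positions, can be sketched as follows. Plain Gauss-Jordan elimination is used for illustration; the In-place algorithm's reduced-complexity schedule is not reproduced.

```python
def solve_erasures(H, received):
    """Recover erased symbols by solving the parity-check equations
    over GF(2). H is a list of binary check rows; `received` holds
    bit values 0/1, with None marking an erasure."""
    erased = [j for j, v in enumerate(received) if v is None]
    col = {j: c for c, j in enumerate(erased)}
    # Restrict each check to the erased positions: A x = b over GF(2).
    A, b = [], []
    for h in H:
        a = [0] * len(erased)
        rhs = 0
        for j, hij in enumerate(h):
            if hij:
                if received[j] is None:
                    a[col[j]] = 1
                else:
                    rhs ^= received[j]
        A.append(a)
        b.append(rhs)
    # Gauss-Jordan elimination over GF(2).
    piv, r = [], 0
    for c in range(len(erased)):
        p = next((i for i in range(r, len(A)) if A[i][c]), None)
        if p is None:
            continue  # rank-deficient column: erasure unrecoverable
        A[r], A[p] = A[p], A[r]
        b[r], b[p] = b[p], b[r]
        for i in range(len(A)):
            if i != r and A[i][c]:
                A[i] = [x ^ y for x, y in zip(A[i], A[r])]
                b[i] ^= b[r]
        piv.append(c)
        r += 1
    out = list(received)
    for i, c in enumerate(piv):
        out[erased[c]] = b[i]
    return out
```

Generic elimination costs cubic time in the number of erasures, which is why reduced-complexity schedules and structures such as those studied here matter in practice.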
To maximise the recovery capability of the In-place algorithm in network transmissions, we propose the product packetisation structure to contain the computational complexity of the In-place algorithm. Combined with the proposed product packetisation structure, the computational complexity falls below the quadratic complexity bound. We then extend this to the Rayleigh fading channel, in order to resolve both errors and erasures. By concatenating an outer code, such as a BCH code, the product-packetised RS codes with the hard-decision In-place algorithm perform significantly better than soft-decision iterative algorithms on optimally designed LDPC codes.
Hierarchical colour-shift-keying aided layered video streaming for the visible light downlink
Colour-shift keying (CSK) constitutes an important modulation scheme conceived for visible light communications (VLC). The signal constellation of CSK relies on three different-colour light sources invoked for information transmission. The CSK constellation has been optimized for minimizing the bit error rate, but no effort has been invested in investigating the feasibility of CSK-aided unequal error protection (UEP) schemes conceived for video sources. Hence, in this treatise, we conceive a hierarchical CSK (HCSK) modulation scheme based on traditional CSK, which is capable of generating interdependent signal layers having different error probabilities, and which can be readily reconfigured by changing its parameters. Furthermore, we conceive an HCSK design example for transmitting scalable video sources with the aid of a recursive systematic convolutional (RSC) code. An optimization method is conceived for enhancing the UEP and for improving the quality of the received video. Our simulation results show that the proposed optimized-UEP 16-HCSK-RSC system outperforms the traditional equal error protection scheme by about 1.7 dB of optical SNR at a peak signal-to-noise ratio (PSNR) of 37 dB, while optical SNR savings of up to 6.5 dB are attained at a lower PSNR of 36 dB.
Variable Rate Transmission Over Noisy Channels
Hybrid automatic repeat request transmission (hybrid ARQ) schemes aim to provide
system reliability for transmissions over noisy channels while still maintaining a reasonably
high throughput efficiency by combining retransmissions of automatic repeat
requests with forward error correction (FEC) coding methods. In type-II hybrid ARQ
schemes, the additional parity information required by channel codes to achieve forward
error correction is provided only when errors have been detected. Hence, the
available bits are partitioned into segments, some of which are sent to the receiver immediately, while others are held back and only transmitted upon the detection of errors. This
scheme raises two questions. Firstly, how should the available bits be ordered for optimal
partitioning into consecutive segments? Secondly, how large should the individual
segments be?
This thesis aims to provide an answer to both of these questions for the transmission
of convolutional and Turbo Codes over additive white Gaussian noise (AWGN),
inter-symbol interference (ISI) and Rayleigh channels. Firstly, the ordering of bits is
investigated by simulating the transmission of packets split into segments with a size of
1 bit and finding the critical number of bits, i.e. the number of bits where the output of
the decoder is error-free. This approach provides a maximum, practical performance
limit over a range of signal-to-noise levels. With these practical performance limits, the
attention is turned to the size of the individual segments, since packets of 1 bit cause
an intolerable overhead and delay. An adaptive hybrid ARQ system is investigated, in which the transmitter uses the number of bits sent to the receiver, together with the receiver's decoding results, to adjust the size of the initial packet and of subsequent segments to the conditions of a stationary channel.
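The incremental-redundancy transmission loop described above can be sketched as follows. The code is illustrative: `decode` and `channel` are caller-supplied stand-ins for the channel decoder and the noisy link, and all names are assumptions.

```python
def hybrid_arq_type2(segments, decode, channel):
    """Type-II hybrid ARQ sketch: transmit the first segment, then
    release further parity segments only while the receiver reports
    a decoding failure. `decode` is a predicate on everything
    received so far; `channel` models the noisy link. Returns the
    number of segments actually sent."""
    received = []
    for sent, segment in enumerate(segments, start=1):
        received.append(channel(segment))
        if decode(received):
            return sent          # decoding succeeded, stop early
    return len(segments)         # all redundancy exhausted
```

With 1-bit segments this loop corresponds to the critical-number-of-bits experiment described above; larger segments trade retransmission rounds against overhead and delay.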