Shortened Array Codes of Large Girth
One approach to designing structured low-density parity-check (LDPC) codes
with large girth is to shorten codes with small girth in such a manner that the
deleted columns of the parity-check matrix contain all the variables involved
in short cycles. This approach is especially effective if the parity-check
matrix of a code is a matrix composed of blocks of circulant permutation
matrices, as is the case for the class of codes known as array codes. We show
how to shorten array codes by deleting certain columns of their parity-check
matrices so as to increase their girth. The shortening approach is based on the
observation that for array codes, and in fact for a slightly more general class
of LDPC codes, the cycles in the corresponding Tanner graph are governed by
certain homogeneous linear equations with integer coefficients. Consequently,
we can selectively eliminate cycles from an array code by only retaining those
columns from the parity-check matrix of the original code that are indexed by
integer sequences that do not contain solutions to the equations governing
those cycles. We provide Ramsey-theoretic estimates for the maximum number of
columns that can be retained from the original parity-check matrix with the
property that the sequence of their indices avoid solutions to various types of
cycle-governing equations. This translates to estimates of the rate penalty
incurred in shortening a code to eliminate cycles. Simulation results show that
for the codes considered, shortening them to increase the girth can lead to
significant gains in signal-to-noise ratio in the case of communication over an
additive white Gaussian noise channel.
Comment: 16 pages; 8 figures; to appear in IEEE Transactions on Information
Theory, Aug 200
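The column-retention idea above can be made concrete with a small sketch. Assuming, for illustration, that one cycle-governing equation is the 3-term arithmetic-progression equation x + z = 2y (a representative homogeneous equation with integer coefficients, not necessarily the paper's exact set), a greedy pass retains only those column indices whose sequence contains no solution:

```python
def ap3_free_subset(indices):
    """Greedily retain indices whose set never solves x + z = 2y
    with distinct x, y, z (i.e. no 3-term arithmetic progression)."""
    kept, s = [], set()
    for c in indices:
        ok = True
        for a in kept:
            b_mid = 2 * c - a      # c as the middle term: a + b_mid = 2c
            b_end = 2 * a - c      # c as an endpoint:     c + b_end = 2a
            if (b_mid != a and b_mid in s) or b_end in s:
                ok = False
                break
        if ok:
            kept.append(c)
            s.add(c)
    return kept

# retain columns 0..8 of a hypothetical parity-check matrix
print(ap3_free_subset(range(9)))   # [0, 1, 3, 4]
```

Every retained index set is solution-free, so all cycles governed by that equation are eliminated; the Ramsey-theoretic estimates mentioned in the abstract bound how many columns such a set can keep, i.e. the rate penalty.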
Catalytic quantum error correction
We develop the theory of entanglement-assisted quantum error correcting
(EAQEC) codes, a generalization of the stabilizer formalism to the setting in
which the sender and receiver have access to pre-shared entanglement.
Conventional stabilizer codes are equivalent to dual-containing symplectic
codes. In contrast, EAQEC codes do not require the dual-containing condition,
which greatly simplifies their construction. We show how any quaternary
classical code can be made into an EAQEC code. In particular, efficient modern
codes, like LDPC codes, which attain the Shannon capacity, can be made into
EAQEC codes attaining the hashing bound. In a quantum computation setting,
EAQEC codes give rise to catalytic quantum codes which maintain a region of
inherited noiseless qubits.
We also give an alternative construction of EAQEC codes by making classical
entanglement-assisted codes coherent.
Comment: 30 pages, 10 figures. Notation change: [[n,k;c]] instead of
[[n,k-c;c]]
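The dual-containing condition mentioned above amounts to requiring that all stabilizer generators commute, which in the binary symplectic representation of Pauli operators is a simple inner-product check. A minimal sketch of the standard formalism (not code from the paper):

```python
import numpy as np

def symplectic_ip(a, b):
    """Symplectic inner product of Paulis written as (x|z) bit vectors:
    0 means the operators commute, 1 means they anticommute."""
    n = len(a) // 2
    return int(np.dot(a[:n], b[n:]) + np.dot(a[n:], b[:n])) % 2

# XX and ZZ on two qubits commute, so both fit in one stabilizer group
XX, ZZ = np.array([1, 1, 0, 0]), np.array([0, 0, 1, 1])
assert symplectic_ip(XX, ZZ) == 0

# X and Z on the same qubit anticommute; entanglement assistance lets a
# code use such non-commuting generators anyway, at the cost of ebits
XI, ZI = np.array([1, 0, 0, 0]), np.array([0, 0, 1, 0])
assert symplectic_ip(XI, ZI) == 1
```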
Entanglement-assisted Coding Theory
In this dissertation, I present a general method for studying quantum error
correction codes (QECCs). This method not only provides us with an intuitive
way of understanding QECCs, but also leads to several extensions of standard
QECCs, including operator quantum error correction (OQECC) and
entanglement-assisted quantum error correction (EAQECC). Furthermore, both
OQECC and EAQECC can be combined into a unified formalism, the
entanglement-assisted operator formalism, which provides great flexibility in
designing QECCs for different applications. Finally, I show that the
performance of quantum low-density parity-check codes can be greatly improved
using the entanglement-assisted formalism.
Comment: PhD dissertation, 102 pages
Expanders with Symmetry: Constructions and Applications
Expanders are sparse yet well-connected graphs with numerous theoretical and practical uses. Symmetry is a valuable structure for expanders as it enables efficient algorithms and a richer set of applications. This thesis studies expanders with symmetry, giving new constructions and applications. We extend expander construction techniques to work with symmetry and give explicit constructions of expanders with varying quality of expansion and symmetries of various groups. In particular, we construct graphs with large Abelian group symmetries via the technique of graph lifts. We also give a generic amplification procedure that converts a weak expander to an almost optimal one while preserving symmetries. This procedure is obtained by generalizing prior amplification techniques that work for Cayley graphs over Abelian groups to Cayley graphs over any finite group. In particular, we obtain almost-Ramanujan expanders over every non-Abelian finite simple group. We then explore the utility of having both symmetry and expansion simultaneously. We obtain explicit quantum LDPC codes of almost linear distance and good classical quasi-cyclic codes with varying circulant sizes using prior results and our constructions of graphs with Abelian symmetries. We show how our generic amplification machinery boosts various structured expander-like objects: quantum expanders, dimension expanders, and monotone expanders. Finally, we prove a structural result about expanding Cayley graphs, showing that they satisfy a "degree-2" variant of the expander mixing lemma. As an application of this, we give a randomness-efficient query algorithm for homomorphism testing of unitary-valued functions on finite groups and a derandomized version of the celebrated Babai-Nikolov-Pyber (BNP) lemma.
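For Cayley graphs over Abelian groups such as Z_n, expansion is easy to evaluate because the adjacency eigenvalues are character sums over the generating set. A small illustrative sketch (generic spectral machinery, not the thesis's constructions):

```python
import numpy as np

def cayley_second_eigenvalue(n, S):
    """Normalized second-largest eigenvalue magnitude of Cay(Z_n, S).
    For Z_n the eigenvalues are the character sums
    lambda_k = sum_{s in S} exp(2*pi*i*k*s/n), k = 0..n-1."""
    d = len(S)
    lams = [abs(sum(np.exp(2j * np.pi * k * s / n) for s in S))
            for k in range(1, n)]
    return max(lams) / d

# Z_12 with generators {1, 11} is just a 12-cycle: a poor expander.
# Enlarging the (symmetric) generating set shrinks the second
# eigenvalue, i.e. improves expansion.
cycle = cayley_second_eigenvalue(12, [1, 11])
richer = cayley_second_eigenvalue(12, [1, 11, 2, 10])
assert cycle > richer
```

A smaller normalized second eigenvalue means a larger spectral gap; amplification procedures like the one in the thesis drive this value down while preserving the group symmetry.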
Symmetric rearrangeable networks and algorithms
A class of symmetric rearrangeable nonblocking networks has been considered in this thesis, with a particular focus on Benes networks built from 2 x 2 switching elements. Symmetric rearrangeable networks built with larger switching elements have also been considered. New applications of these networks are found in the areas of System on Chip (SoC) and Network on Chip (NoC). Deterministic routing algorithms used in NoC applications suffer from low scalability and slow execution times. On the other hand, faster algorithms are blocking and thus limit throughput. This is an acceptable trade-off for many applications where achieving "wire speed" on the on-chip network would require extensive optimisation of the attached devices. In this thesis I designed an algorithm that has much lower blocking probabilities than other suboptimal algorithms but a much faster execution time than deterministic routing algorithms. The suboptimal method uses the looping algorithm in its outermost stages and then, in the two distinct subnetworks deeper in the switch, uses a fast but suboptimal path search method to find available paths. The worst-case time complexity of this new routing method is O(N log N) using a single processor, which matches the best known results reported in the literature.
Disruption of the ongoing communications in this class of networks during rearrangements is an open issue. In this thesis I explored a modification of the topology of these networks which gives rise to what are termed repackable networks. A repackable topology allows paths to be rearranged without intermittently losing connectivity by momentarily breaking existing communication paths. The repackable network structure proposed in this thesis is efficient in its use of hardware when compared to other proposals in the literature.
As most of the deterministic algorithms designed for Benes networks implement a permutation of all inputs to find the routing tags for the requested input-output pairs, I proposed a new algorithm that can work with partial permutations. If the network load is defined as ρ, the mean number of active inputs in a partial permutation is m = ρN, where N is the network size. This new method is based on mapping the network stages into a set of sub-matrices and then determining the routing tags for each pair of requests by populating the cells of the sub-matrices without creating a blocking state. Overall, the serial time complexity of this method is O(N log N) when all N inputs are active and O(m log N) with m < N active inputs. With minor modification, the serial algorithm can be made to work in the parallel domain. The time complexity of this routing algorithm on a parallel machine with N completely connected processors is O(log^2 N). With m active requests the time complexity goes down to O(log m log N), which is better than the O(log^2 m + log N) reported in the literature for 2^(0.5((log^2 N - 4 log N)^0.5 - log N)) <= ρ <= 1. I also designed multistage symmetric rearrangeable networks using larger switching elements and implemented a new routing algorithm for these classes of networks.
The network topology and routing algorithms presented in this thesis should allow large scale networks of modest cost, with low setup times and moderate blocking rates, to be constructed. Such switching networks will be required to meet the bandwidth requirements of future communication networks
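For reference, the cost and depth figures underlying the complexity claims above follow directly from the Benes network's recursive structure; a minimal sketch (standard formulas, N assumed a power of two):

```python
import math

def benes_dimensions(N):
    """An N x N Benes network built from 2 x 2 switches has
    2*log2(N) - 1 stages, each containing N/2 switching elements."""
    stages = 2 * int(math.log2(N)) - 1
    return stages, stages * (N // 2)

stages, switches = benes_dimensions(8)
print(stages, switches)   # 5 stages, 20 switches
```

The looping algorithm routes any full permutation through these stages; its serial O(N log N) cost comes from touching each of the N inputs once per recursion level.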
Advanced Design of Binary LDPC Codes for Practical Applications
The design of binary LDPC codes with low error floors is a significant problem that is still not fully resolved in the literature. This thesis aims to design optimal/optimized binary LDPC codes. We make two main contributions to the construction of LDPC codes with low error floors. Our first contribution is an algorithm that enables the design of optimal QC-LDPC codes with maximum girth and minimum sizes. We show by simulations that our algorithm reaches the minimum bounds for regular (3, d_c) QC-LDPC codes with low d_c. Our second contribution is an algorithm for the optimized design of regular LDPC codes by minimizing dominant trapping sets/expansion sets. This minimization is performed by a predictive detection of the dominant trapping sets/expansion sets defined for a regular code C(d_v, d_c) of girth g_t. Through simulations on codes of different rates, we show that codes designed by minimizing dominant trapping sets/expansion sets perform better than codes designed without taking trapping sets/expansion sets into account. The algorithms we propose are based on the generalized RandPEG and take the unseen cycles of quasi-cyclic codes into account to guarantee the predictions.
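Girth, the quantity the first algorithm maximizes, can be checked for any candidate parity-check matrix by breadth-first search over its Tanner graph; a generic sketch (an evaluation utility, not the RandPEG-based design algorithm itself):

```python
from collections import deque

def tanner_girth(H):
    """Girth of the Tanner graph of parity-check matrix H (list of 0/1
    rows): checks are nodes 0..m-1, variables are nodes m..m+n-1."""
    m, n = len(H), len(H[0])
    adj = [[] for _ in range(m + n)]
    for i in range(m):
        for j in range(n):
            if H[i][j]:
                adj[i].append(m + j)
                adj[m + j].append(i)
    best = float("inf")
    for s in range(m + n):   # BFS from every node; the minimum is exact
        dist, parent = {s: 0}, {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif w != parent[u]:   # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[w] + 1)
    return best

assert tanner_girth([[1, 1], [1, 1]]) == 4                     # one 4-cycle
assert tanner_girth([[1, 1, 0], [0, 1, 1], [1, 0, 1]]) == 6    # one 6-cycle
```

Since Tanner graphs are bipartite, every girth value returned is even; design algorithms such as PEG variants call a check like this while adding edges.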
Data Processing in Continuous-Variable Quantum Key Distribution Under Composable Finite-Size Security
Continuous-variable quantum key distribution (CV-QKD) uses amplitude and phase modulation of light, in order to establish secure communications between two remote parties. The laws of quantum mechanics ensure the theoretical security of the protocol, in spite of the noise and losses of the communication channel. In practice, however, the resulting secret key rate depends not only on these two factors, but also on a series of data-processing steps, needed for transforming shared correlations into a final secret binary string.
In this work, we investigate the operation of three Gaussian-modulated coherent-state (GMCS) CV-QKD protocols: the homodyne-detection, heterodyne-detection, and continuous-variable measurement-device-independent (CV-MDI) protocols. We propose a comprehensive strategy covering their entire course, from the preparation and transmission of quantum states to the extraction of a shared secret key. We also provide rigorous security proofs, considering optimal eavesdropper strategies and incorporating the composable framework under finite-size effects, which offers the highest level of security. In addition, we present results exploring the performance of different quantities of interest in the high signal-to-noise regime, and we identify intervals of parameters where communications are regarded as secure. This is achieved with the assistance of our self-developed open-source Python library, which we use to simulate the stage of quantum communications and, afterwards, to process the resulting data via the stages of parameter estimation, information reconciliation and privacy amplification. Here, short-range communications are of particular interest. To enhance data processing in this high signal-to-noise ratio setting, we have combined an appropriate data preprocessing scheme with the use of high-rate, non-binary low-density parity-check (LDPC) codes. This allows us to examine the performance of short-range CV-QKD in practical implementations and to optimize the parameters connected to the aforementioned steps.
Tailoring surface codes: Improvements in quantum error correction with biased noise
For quantum computers to reach their full potential, error correction will be required. We study the surface code, one of the most promising quantum error correcting codes, in the context of predominantly dephasing (Z-biased) noise, as found in many quantum architectures. We find that the surface code is highly resilient to Y-biased noise, and we tailor it to Z-biased noise while retaining its practical features. We demonstrate ultrahigh thresholds for the tailored surface code: ~39% with a realistic bias of η = 100, and ~50% with pure Z noise, far exceeding the known thresholds for the standard surface code: ~11% with pure Z noise, and ~19% with depolarizing noise. Furthermore, we provide strong evidence that the threshold of the tailored surface code tracks the hashing bound for all biases. We reveal the hidden structure of the tailored surface code with pure Z noise that is responsible for these ultrahigh thresholds. As a consequence, we prove that its threshold with pure Z noise is 50%, and we show that its distance to Z errors, and the number of failure modes, can be tuned by modifying its boundary. For codes with appropriately modified boundaries, the distance to Z errors is O(n) compared to O(n^(1/2)) for square codes, where n is the number of physical qubits. We demonstrate that these characteristics yield a significant improvement in logical error rate with pure Z and Z-biased noise. Finally, we introduce an efficient approach to decoding that exploits code symmetries with respect to a given noise model and extends readily to the fault-tolerant context, where measurements are unreliable. We use this approach to define a decoder for the tailored surface code with Z-biased noise. Although the decoder is suboptimal, we observe exceptionally high fault-tolerant thresholds of ~5% with bias η = 100 and exceeding 6% with pure Z noise. Our results open up many avenues of research and, given recent developments in bias-preserving gates, highlight their direct relevance to experiment.
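The Z-biased noise model above is usually parametrized by a bias η; a minimal sketch of the channel probabilities (assuming the common parametrization η = p_Z / (p_X + p_Y) with p_X = p_Y, which may differ in detail from the thesis):

```python
def biased_pauli_probs(p, eta):
    """Z-biased Pauli channel with total error rate p and bias eta,
    assuming eta = p_Z / (p_X + p_Y) and p_X = p_Y; eta -> infinity
    recovers pure Z (dephasing) noise."""
    p_z = eta * p / (eta + 1)
    p_x = p_y = p / (2 * (eta + 1))
    return p_x, p_y, p_z

px, py, pz = biased_pauli_probs(0.1, 100)   # the "realistic" bias quoted above
assert abs(px + py + pz - 0.1) < 1e-12      # rates sum to the total p
assert abs(pz / (px + py) - 100) < 1e-9     # the bias is recovered
```

Threshold studies sweep p at a fixed η and locate the crossing point of logical error rates for increasing code distance.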
Spread-spectrum techniques for environmentally-friendly underwater acoustic communications
PhD Thesis
Anthropogenic underwater noise has been shown to have a negative impact on marine life.
Acoustic data transmissions have also been shown to cause behavioural responses in marine
mammals. A promising approach to address these issues is through reducing the power of
acoustic data transmissions. Firstly, limiting the maximum acoustic transmit power to a safe limit
that causes no injury, and secondly, reducing the radius of the discomfort zone whilst maximising
the receivable range. The discomfort zone is dependent on the signal design as well as the signal
power. To achieve these aims requires a signal and receiver design capable of synchronisation
and data reception at low received SNR, down to around −15 dB, with Doppler effects. These
requirements lead to very high-ratio spread-spectrum signaling with efficient modulation to
maximise data rate, which necessitates effective Doppler correction in the receiver structure.
This thesis examines the state-of-the-art in this area and investigates the design, development
and implementation of a suitable signal and receiver structure, with experimental validation in
a variety of real-world channels. Data signals are designed around M-ary orthogonal signaling
based on bandlimited carrierless PN sequences to create an M-ary Orthogonal Code Keying
(M-OCK) modulation scheme. Synchronisation signal structures combining the energy of
multiple unique PN symbols are shown to outperform single PN sequences of the same bandwidth
and duration in channels with low SNR and significant Doppler effects.
Signals and receiver structures are shown to be capable of reliable communications within a band
of 8 kHz to 16 kHz and transmit power limited to less than 170.8 dB re 1 μPa @ 1 m, or 1 W of
acoustic power, over ranges of 10 km in sea trials, with low received SNR below −10 dB, at
data rates of up to 140.69 bit/s. Channel recordings with AWGN demonstrated limits of signal
and receiver performance of BER 10^-3 at −14 dB for 35.63 bit/s, and −8.5 dB for 106.92 bit/s.
A pilot study of multipath exploitation showed that this performance could be improved to −10.5 dB
for 106.92 bit/s by combining the energy of two arrival paths.
Doppler compensation techniques are explored, with experimental validation showing
synchronisation and data demodulation at velocities over the range ±2.7 m/s.
Non-binary low-density parity-check (LDPC) error correction coding with M-OCK signals is
investigated, showing improved performance over Reed-Solomon (RS) coding of equivalent code
rate in simulations and experiments in real underwater channels.
The receiver structures are implemented on an Android mobile device with experiments
showing live real-time synchronisation and data demodulation of signals transmitted through an
underwater channel.
UK Engineering and Physical Sciences Research Council (EPSRC): PhD Doctoral Training Account (DTA)
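The M-OCK scheme described above maps log2(M) bits to one of M PN sequences and demodulates by correlating against the whole codebook; a toy sketch with random ±1 chips standing in for the thesis's bandlimited carrierless PN sequences (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
M, L = 8, 127   # 8 symbols -> log2(8) = 3 bits/symbol; 127 chips each
codebook = rng.choice([-1.0, 1.0], size=(M, L))   # stand-in PN sequences

def mock_modulate(symbols):
    """Concatenate the PN sequence for each transmitted symbol."""
    return np.concatenate([codebook[s] for s in symbols])

def mock_demodulate(signal):
    """Correlate each L-chip window against all M sequences; the largest
    correlation peak identifies the transmitted symbol."""
    return [int(np.argmax(codebook @ signal[k:k + L]))
            for k in range(0, len(signal), L)]

symbols = [3, 0, 7, 5]
assert mock_demodulate(mock_modulate(symbols)) == symbols
```

The spreading gain of L chips per symbol is what keeps the correlation peaks detectable at received SNRs well below 0 dB, at the price of a low data rate, which is the trade-off the thesis exploits to reduce transmit power.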
Novel OFDM System Based on Dual-Tree Complex Wavelet Transform
The demand for ever-higher capacity in wireless networks, such as cellular,
mobile, and local area networks, is driving the development of new signaling
techniques with improved spectral and power efficiencies. At all stages of a
transceiver, from the bandwidth efficiency of the modulation schemes, through
the highly nonlinear power amplifiers of the transmitters, to the channel
sharing between different users, problems relating to power usage and spectrum
abound. In the future, orthogonal frequency division multiplexing (OFDM)
technology promises to be a ready solution for achieving high data capacity and
better spectral efficiency in wireless communication systems by virtue of its
well-known and desirable characteristics.
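The conventional FFT-based OFDM pipeline that such a system builds on can be sketched in a few lines (a generic illustration, not the proposed wavelet-based design; all parameters are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
N, CP = 64, 16   # subcarrier count and cyclic-prefix length (illustrative)

# one QPSK symbol per subcarrier
bits = rng.integers(0, 2, size=2 * N)
qpsk = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])

tx = np.fft.ifft(qpsk)                   # modulate: IFFT onto subcarriers
tx_cp = np.concatenate([tx[-CP:], tx])   # prepend cyclic prefix

rx = tx_cp[CP:]                          # receiver discards the prefix
recovered = np.fft.fft(rx)               # demodulate: FFT per subcarrier
assert np.allclose(recovered, qpsk)      # symbols come back exactly
```

In the system investigated here, the IFFT/FFT pair is replaced by the dual-tree complex wavelet transform and its inverse.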
Towards these ends, this dissertation investigates a novel OFDM system based on
dual-tree complex wavelet transform (D