System Development and VLSI Implementation of High Throughput and Hardware Efficient Polar Code Decoder
Polar codes are the first channel codes provably achieving the Shannon capacity. Additionally, they exhibit very good performance in terms of a low error floor. These merits make them a potential candidate for future wireless communication and storage standards, and polar codes have received increasing research interest in recent years. However, hardware decoder implementations still fall short of the expectations of practical applications, in terms of both throughput and hardware efficiency. This dissertation presents several system development approaches and hardware structures for three widely known decoding algorithms: successive cancellation (SC), list successive cancellation (LSC) and belief propagation (BP). All the efforts aim to maximize the throughput while minimizing the hardware cost.
A throughput-centric successive cancellation (TCSC) decoder is proposed for SC decoding. By introducing the concept of constituent codes, the decoding latency is significantly reduced with a negligible decoding performance loss. However, the specifically designed computation units dramatically increase the hardware cost, and handling both the conventional polar code sets and the constituent code sets complicates the hardware implementation. By exploiting the natural properties of the conventional SC decoder, datapaths for decoding constituent codes are built compatibly via a computation-unit sharing technique. This approach incurs no additional hardware cost except some multiplexer logic, yet significantly increases the decoding throughput. Other techniques, such as pre-computation and gate-level optimization, are used as well to further increase the decoding throughput. A specifically designed partial sum generator (PSG) is also investigated in this dissertation; this PSG is hardware efficient and timing compatible with the proposed TCSC decoder. Additionally, a polar code construction scheme with constituent-code optimization is presented, which aims to reduce the latency of constituent-code-based SC decoding. Results show that, compared with the state-of-the-art decoder, TCSC achieves at least 60% latency reduction for codes of length n = 1024. Using the Nangate FreePDK 45nm process, the TCSC decoder reaches throughputs of up to 5.81 Gbps and 2.01 Gbps for the (1024, 870) and (1024, 512) polar codes, respectively. Moreover, with the proposed construction scheme, the TCSC decoder is generally able to achieve a further latency reduction of at least around 20% with a negligible performance loss. Overlapped list successive cancellation (OLSC) is proposed as a design approach for LSC decoding. LSC decoding performs better than SC decoding at the cost of hardware consumption.
With this approach, the l (l > 1) instances of the SC decoder required for LSC decoding with list size l can be cut down to only one, resulting in a dramatic reduction of hardware complexity without any decoding performance loss. Meanwhile, approaches to reduce the latency associated with the pipeline scheme are also investigated. Simulation results show that, with the proposed design approach, the hardware efficiency is increased significantly over recently proposed LSC decoders. Express journey belief propagation (XJBP) is proposed for BP decoding. This idea originates from extending the constituent-code concept from SC to BP decoding. An express journey refers to the datapath of specific constituent codes in the factor graph, which accelerates the propagation speed of the belief information. The XJBP decoder achieves a 40.6% computational complexity reduction compared with conventional BP decoding, which enables an energy-efficient hardware implementation.
In summary, this dissertation presents all these efforts to optimize polar code decoders, supported by careful analysis, precise description, extensive numerical simulations, thoughtful discussion and RTL implementation on VLSI design platforms.
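As background for the SC-based architectures above, the core of SC decoding is a recursion over two LLR update functions, commonly called f and g. The following is only an illustrative sketch (min-sum f, natural bit order, a toy block length of 8 with the frozen set {0, 1, 2, 4} chosen for illustration), not the TCSC hardware design itself:

```python
import numpy as np

def f(a, b):
    # check-node update (min-sum approximation)
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g(a, b, u):
    # variable-node update, conditioned on the partial sums u
    return b + (1 - 2 * u) * a

def polar_encode(u):
    # recursive polar transform in natural bit order
    n = len(u)
    if n == 1:
        return u.copy()
    sa, sb = polar_encode(u[:n // 2]), polar_encode(u[n // 2:])
    return np.concatenate([sa ^ sb, sb])

def sc_decode(llr, frozen, u_hat):
    # returns partial sums; decoded bits are appended to u_hat in order
    n = len(llr)
    if n == 1:
        u = 0 if (frozen[0] or llr[0] >= 0) else 1
        u_hat.append(u)
        return np.array([u])
    a, b = llr[:n // 2], llr[n // 2:]
    b1 = sc_decode(f(a, b), frozen[:n // 2], u_hat)
    b2 = sc_decode(g(a, b, b1), frozen[n // 2:], u_hat)
    return np.concatenate([b1 ^ b2, b2])

# toy (8, 4) polar code; the frozen set here is illustrative only
frozen = np.array([1, 1, 1, 0, 1, 0, 0, 0])
u = np.array([0, 0, 0, 1, 0, 1, 1, 0])          # frozen positions carry 0
llr = (1 - 2 * polar_encode(u)).astype(float)   # noiseless BPSK LLRs
u_hat = []
sc_decode(llr, frozen, u_hat)
```

With noiseless LLRs the recursion recovers the message exactly; the TCSC ideas above shorten exactly this recursion by recognizing constituent codes that can be decoded in one shot.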
Quantum Low-Density Parity-Check Codes
Quantum error correction is an indispensable ingredient for scalable quantum computing. In this Perspective we discuss a particular class of quantum codes called “quantum low-density parity-check (LDPC) codes.” The codes we discuss are alternatives to the surface code, which is currently the leading candidate to implement quantum fault tolerance. We introduce the zoo of quantum LDPC codes and discuss their potential for making quantum computers robust with regard to noise. In particular, we explain recent advances in the theory of quantum LDPC codes related to certain product constructions and discuss open problems in the field.
A STUDY OF LINEAR ERROR CORRECTING CODES
Since Shannon's ground-breaking work in 1948, there have been two main development streams
of channel coding in approaching the limit of communication channels, namely classical coding
theory which aims at designing codes with large minimum Hamming distance and probabilistic
coding which places the emphasis on low complexity probabilistic decoding using long codes built
from simple constituent codes. This work presents some further investigations in these two channel
coding development streams.
Low-density parity-check (LDPC) codes form a class of capacity-approaching codes with sparse
parity-check matrices and low-complexity decoders. Two novel methods of constructing algebraic binary
LDPC codes are presented. These methods are based on the theory of cyclotomic cosets, idempotents
and Mattson-Solomon polynomials, and are complementary to each other. The two methods
generate in addition to some new cyclic iteratively decodable codes, the well-known Euclidean and
projective geometry codes. Their extension to non-binary fields is shown to be straightforward.
These algebraic cyclic LDPC codes, for short block lengths, converge considerably well under iterative
decoding. It is also shown that for some of these codes, maximum likelihood performance may
be achieved by a modified belief propagation decoder which uses a different subset of the
codewords of the dual code for each iteration.
Following a property of the revolving-door combination generator, multi-threaded minimum
Hamming distance computation algorithms are developed. Using these algorithms, the previously
unknown minimum Hamming distance of the quadratic residue code for the prime 199 has been evaluated.
In addition, the highest minimum Hamming distance attainable by all binary cyclic codes
of odd lengths from 129 to 189 has been determined, and as many as 901 new binary linear codes
which have a higher minimum Hamming distance than the previously best known linear
codes have been found.
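For illustration only (the work itself uses revolving-door combination generation, which visits message supports in a Gray-code order so each codeword is updated incrementally, and multi-threading), the exhaustive principle behind minimum-distance computation can be sketched on a small code; for the (7,4) Hamming code it yields d = 3:

```python
import numpy as np
from itertools import combinations

# Generator matrix of the (7, 4) Hamming code in systematic form
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

def min_distance(G):
    """For a linear code the minimum distance equals the minimum weight
    over all 2^k - 1 nonzero codewords; enumerate them by message support."""
    k, n = G.shape
    best = n
    for w in range(1, k + 1):
        for rows in combinations(range(k), w):
            cw = np.bitwise_xor.reduce(G[list(rows)], axis=0)
            best = min(best, int(cw.sum()))
    return best
```

This brute force scales as 2^k, which is exactly why the incremental enumeration and structural shortcuts described above matter for codes like the length-199 quadratic residue code.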
It is shown that by exploiting the structure of circulant matrices, the number of codewords
required to compute the minimum Hamming distance and the number of codewords of a given
Hamming weight of binary double-circulant codes based on primes, may be reduced. A means
of independently verifying the exhaustively computed number of codewords of a given Hamming
weight of these double-circulant codes is developed and, in conjunction with this, it is proved that
some published results are incorrect and the correct weight spectra are presented. Moreover, it is
shown that it is possible to estimate the minimum Hamming distance of this family of prime-based
double-circulant codes.
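The circulant symmetry being exploited can be seen in a toy example: for a pure double-circulant code with generator [I_p | B], cyclically shifting the message cyclically shifts both halves of the codeword, so Hamming weight is constant on each cyclic orbit and only one representative per orbit needs to be enumerated. The circulant block below is a hypothetical choice built from the quadratic residues mod 7, not a code taken from the thesis:

```python
import numpy as np

def circulant(first_row):
    # each row is the previous row cyclically shifted by one position
    return np.array([np.roll(first_row, i) for i in range(len(first_row))])

p = 7
b = np.zeros(p, dtype=int)
b[[1, 2, 4]] = 1                      # quadratic residues mod 7 (illustrative)
G = np.hstack([np.eye(p, dtype=int), circulant(b)])   # [I_p | B]

def encode(m):
    return m @ G % 2

m = np.array([1, 0, 1, 1, 0, 0, 0])
c = encode(m)
c_shift = encode(np.roll(m, 1))
```

Since cyclic convolution commutes with cyclic shifts, `c_shift` is just `c` with both halves rotated, so its weight is unchanged; this is the reduction in required codewords referred to above.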
It is shown that linear codes may be efficiently decoded using the incremental correlation Dorsch
algorithm. By extending this algorithm, a list decoder is derived and a novel, CRC-less error detection
mechanism that offers much better throughput and performance than the conventional CRC
scheme is described. Using the same method, it is shown that the performance of the conventional CRC
scheme may be considerably enhanced. Error detection is an integral part of an incremental redundancy
communications system and it is shown that sequences of good error correction codes,
suitable for use in incremental redundancy communications systems, may be obtained using
Constructions X and XX. Examples are given and their performance is presented in comparison to
conventional CRC schemes.
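For reference, the conventional CRC baseline being compared against is GF(2) polynomial long division: the transmitter appends the remainder, and the receiver accepts only if the received word is divisible by the generator. A minimal sketch with a small illustrative generator (x^3 + x + 1, not a polynomial from this work):

```python
def crc_remainder(bits, poly):
    # append deg(g) zeros, then do GF(2) long division; the last
    # deg(g) bits are the CRC
    bits = list(bits) + [0] * (len(poly) - 1)
    for i in range(len(bits) - len(poly) + 1):
        if bits[i]:
            for j, p in enumerate(poly):
                bits[i + j] ^= p
    return bits[-(len(poly) - 1):]

def crc_check(codeword, poly):
    # a valid codeword is divisible by the generator polynomial
    bits = list(codeword)
    for i in range(len(bits) - len(poly) + 1):
        if bits[i]:
            for j, p in enumerate(poly):
                bits[i + j] ^= p
    return not any(bits)

poly = [1, 0, 1, 1]            # x^3 + x + 1 (illustrative generator)
msg = [1, 1, 0, 1, 0]
codeword = msg + crc_remainder(msg, poly)
corrupted = list(codeword)
corrupted[2] ^= 1              # single-bit error, always detected here
```

The CRC-less mechanism above replaces this explicit check with error detection derived from the list decoder itself, avoiding the rate loss of the appended CRC bits.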
Multiple-Input Multiple-Output Detection Algorithms for Generalized Frequency Division Multiplexing
Since its invention, cellular communication has dramatically transformed personal lives, and the evolution of mobile networks is still ongoing. The ever-growing demand for higher data rates has driven the development of 3G and 4G systems, but the foreseen 5G requirements also address diverse characteristics such as low latency or massive connectivity. It is speculated that the plain 4G cyclic prefix (CP)-orthogonal frequency division multiplexing (OFDM) cannot sufficiently fulfill all requirements, and hence alternative waveforms have been investigated, with generalized frequency division multiplexing (GFDM) being one popular option. An important aspect of any modern wireless communication system is the application of multi-antenna, i.e. multiple-input multiple-output (MIMO), techniques, as MIMO can deliver gains in terms of capacity, reliability and connectivity. Due to its channel-independent orthogonality, CP-OFDM straightforwardly supports broadband MIMO techniques, as the resulting inter-antenna interference (IAI) can readily be resolved. In this regard, CP-OFDM is unique among multicarrier waveforms. Other waveforms suffer from additional inter-carrier interference (ICI), inter-symbol interference (ISI) or both. This possibly 3-dimensional interference renders optimal MIMO detection much more complex. In this thesis, we investigate how GFDM can support efficient MIMO operation given its 3-dimensional interference structure. To this end, we first connect the mathematical theory of time-frequency analysis (TFA) with multicarrier waveforms in general, leading to theoretical insights into GFDM. Second, we show that the detection problem can be seen as a detection problem on a large, banded linear model under Gaussian noise. Based on this observation, we propose methods for applying both space-time code (STC) and spatial multiplexing techniques to GFDM.
Subsequently, we propose methods to decode the transmitted signals and numerically and theoretically analyze their performance in terms of complexity and achieved frame error rate (FER). After showing that GFDM modulation and linear demodulation are a direct application of the Gabor expansion and transform, we apply results from TFA to explain singularities of the modulation matrix and to derive low-complexity expressions for receiver filters. We derive two linear detection algorithms for STC-encoded GFDM signals and show that their performance is equal to that of OFDM. In the case of spatial multiplexing, we derive both non-iterative and iterative detection algorithms, which are based on successive interference cancellation (SIC) and minimum mean squared error (MMSE)-parallel interference cancellation (PIC) detection, respectively. By analyzing the error propagation of the SIC algorithm, we explain its significantly inferior performance compared to OFDM. Using feedback information from the channel decoder, we can eventually show that near-optimal GFDM detection can outperform an optimal OFDM detector by up to 3 dB in high SNR regions. We conclude that, given the obtained results, GFDM is not a general-purpose replacement for CP-OFDM, due to its higher complexity and varying performance. Instead, we propose GFDM for scenarios with strong frequency selectivity and stringent spectral and FER requirements.
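The SIC idea referred to above can be sketched generically on a small linear model y = Hx + n. This is a plain zero-forcing SIC on a toy 4x4 BPSK system with n = 0, not the banded GFDM detector or the MMSE-PIC scheme of the thesis:

```python
import numpy as np

def zf_sic_detect(y, H):
    """Zero-forcing SIC for BPSK symbols: detect the most reliable
    remaining stream, subtract its contribution, deflate, repeat."""
    y = y.astype(float).copy()
    n = H.shape[1]
    remaining = list(range(n))
    x_hat = np.zeros(n)
    while remaining:
        Hp = np.linalg.pinv(H[:, remaining])
        i = int(np.argmin(np.sum(Hp ** 2, axis=1)))  # least noise amplification
        s = 1.0 if Hp[i] @ y >= 0 else -1.0          # BPSK hard decision
        col = remaining.pop(i)
        x_hat[col] = s
        y = y - H[:, col] * s                        # cancel detected stream
    return x_hat

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4))
x = np.array([1.0, -1.0, -1.0, 1.0])
y = H @ x                                            # noiseless reception
```

Under noise, a wrong early decision corrupts all later cancellations; that error-propagation effect is what the thesis analyzes to explain SIC's inferior performance compared to OFDM.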
Technologies of information transmission and processing
The collection contains papers devoted to scientific and theoretical developments in the areas of telecommunication networks, information security, and technologies of information transmission and processing. It is intended for researchers in the field of infocommunications, lecturers, postgraduate and master's students, and students of technical universities.
Flexible encoder and decoder of low density parity check codes
The dissertation proposes high speed, flexible and hardware efficient solutions for coding and
decoding of highly irregular low-density parity-check (LDPC) codes, required by many modern
communication standards.
The first part of the dissertation’s contributions is in the novel partially parallel LDPC
encoder architecture for 5G. The architecture was built around the flexible shifting network that
enables parallel processing of multiple parity check matrix elements for short to medium code
lengths, thus providing almost the same level of parallelism as for long code encoding. In addition,
the processing schedule was optimized for minimal encoding time using the genetic algorithm. The
optimization procedure contributes to achieving high throughputs, low latency, and the best
hardware usage efficiency (HUE) reported to date.
The second part proposes a new algorithmic and architectural solution for structured LDPC
code decoding. A widely used approach in LDPC decoders is a layered decoding schedule, which
frequently suffers from pipeline data hazards that reduce the throughput. The decoder proposed in
the dissertation conveniently incorporates both the layered and the flooding schedules in cases when
hazards occur and thus facilitates LDPC decoding without stall cycles caused by pipeline hazards.
Therefore, the proposed architecture enables insertion of many pipeline stages, which consequently
provides a high operating clock frequency. Additionally, the decoding schedule was optimized for
better signal-to-noise ratio (SNR) performance using a genetic algorithm. The obtained results show
that the proposed decoder achieves a significant throughput increase and the best HUE when compared
with the state of the art for the same SNR performance.
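For intuition about iterative parity-check decoding in general (the dissertation's decoder uses soft layered/flooding min-sum schedules; this toy uses the much simpler hard-decision bit-flipping rule on the (7,4) Hamming code, which is not an LDPC code):

```python
import numpy as np

# Parity-check matrix of the (7, 4) Hamming code; column j is the
# binary representation of j + 1, so a single error produces a
# syndrome equal to the erroneous column.
H = np.array([
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
])

def bit_flip_decode(r, H, max_iter=10):
    """Hard-decision bit flipping: repeatedly flip the bit taking part
    in the largest number of unsatisfied checks until the syndrome clears."""
    r = r.copy()
    for _ in range(max_iter):
        syndrome = H @ r % 2
        if not syndrome.any():
            break
        counts = H.T @ syndrome          # unsatisfied-check count per bit
        r[int(np.argmax(counts))] ^= 1
    return r

c = np.zeros(7, dtype=int)               # all-zero codeword
r = c.copy()
r[6] ^= 1                                # single-bit channel error
```

A layered schedule processes the rows of H in groups and reuses updated messages within one iteration, which converges faster than flooding but, in pipelined hardware, creates exactly the data hazards the proposed hybrid schedule avoids.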
Advanced DSP Techniques for High-Capacity and Energy-Efficient Optical Fiber Communications
The rapid proliferation of the Internet has been driving communication networks closer and closer to their limits, while available bandwidth is disappearing due to an ever-increasing network load. Over the past decade, optical fiber communication technology has increased the per-fiber data rate from 10 Tb/s to more than 10 Pb/s. The major leap came with the maturity of coherent detection and advanced digital signal processing (DSP). DSP has played a critical role in mitigating channel impairments, enabling advanced modulation formats for spectrally efficient transmission, and realizing flexible bandwidth. This book aims to explore novel, advanced DSP techniques that enable multi-Tb/s-per-channel optical transmission to address pressing bandwidth and power-efficiency demands. It also provides state-of-the-art advances and future perspectives on DSP.
Compute-and-Forward Relay Networks with Asynchronous, Mobile, and Delay-Sensitive Users
We consider a wireless network consisting of multiple source nodes, a set of relays
and a destination node. Suppose the sources transmit their messages simultaneously
to the relays and the destination aims to decode all the messages. At the physical layer,
a conventional approach would be for the relay to decode the individual message
one at a time while treating the rest of the messages as interference. Compute-and-forward
is a novel strategy which attempts to turn the situation around by treating
the interference as a constructive phenomenon. In compute-and-forward, each relay
attempts to directly compute a combination of the transmitted messages and then
forwards it to the destination. Upon receiving the combinations of messages from the
relays, the destination can recover all the messages by solving the received equations.
When identical lattice codes are employed at the sources, error correction on integer
combinations of messages is a viable option by exploiting the algebraic structure of
lattice codes. Therefore, compute-and-forward with lattice codes enables the relay
to manage interference and perform error correction concurrently. It is shown that
compute-and-forward exhibits substantial improvement in the achievable rate compared
with other state-of-the-art schemes in the medium to high signal-to-noise ratio
regime.
Despite several results that show the excellent performance of compute-and-forward,
there are still important challenges to overcome before we can utilize compute-and-
forward in practice. Some important challenges include the assumptions of "perfect
timing synchronization" and "quasi-static fading", since these assumptions rarely
hold in realistic wireless channels. So far, there are no conclusive answers to whether
compute-and-forward can still provide substantial gains even when these assumptions
are removed. When lattice codewords are misaligned and mixed up, decoding an integer
combination of messages is not straightforward, since the linearity of lattice codes is
generally not invariant to time shifts. When the channel exhibits time selectivity, further
challenges arise for compute-and-forward, since the linearity of lattice codes does not suit
the time-varying nature of the channel. Another challenge comes from the emerging
technologies for future 5G communication, e.g., autonomous driving and virtual
reality, where low-latency communication with high reliability is necessary. In this
regard, powerful short channel codes with reasonable encoding/decoding complexity
are indispensable. Although there are fruitful results on designing short channel
codes for point-to-point communication, studies on short code design specifically for
compute-and-forward are rarely found.
The objective of this dissertation is threefold. First, we study compute-and-forward
with timing-asynchronous users. Second, we consider the problem of compute-and-
forward over block-fading channels. Finally, the problem of compute-and-forward
for low-latency communication is studied. Throughout the dissertation, the research
methods and proposed remedies will center around the design of lattice codes in order
to facilitate the use of compute-and-forward in the presence of these challenges.
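The end-to-end recovery step at the destination can be illustrated with the physical layer abstracted away: the relays deliver full-rank integer combinations of the messages over a small field, and the destination inverts the coefficient matrix over F_q. The field size and coefficients below are arbitrary illustrative choices, and the lattice encoding/decoding at the relays is not modeled:

```python
import numpy as np

q = 5                          # field size (illustrative)
w1 = np.array([2, 4, 1])       # message of source 1 over F_q
w2 = np.array([3, 0, 2])       # message of source 2 over F_q

# Each relay forwards one integer combination of the messages; the
# coefficient rows of A would be chosen from channel state in practice.
A = np.array([[1, 2],
              [1, 3]])         # must be full rank over F_q
eq1 = (A[0, 0] * w1 + A[0, 1] * w2) % q
eq2 = (A[1, 0] * w1 + A[1, 1] * w2) % q

# Destination: invert the 2x2 coefficient matrix over F_q.
det = int(A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]) % q
det_inv = pow(det, -1, q)      # modular inverse (Python 3.8+)
A_inv = (det_inv * np.array([[A[1, 1], -A[0, 1]],
                             [-A[1, 0], A[0, 0]]])) % q
r1 = (A_inv[0, 0] * eq1 + A_inv[0, 1] * eq2) % q
r2 = (A_inv[1, 0] * eq1 + A_inv[1, 1] * eq2) % q
```

The hard part addressed by the dissertation is everything this sketch hides: making the relays' lattice decoding of such combinations survive timing asynchronism, block fading and short-block-length constraints.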
Optimization and Applications of Modern Wireless Networks and Symmetry
Due to the future demands of wireless communications, this book focuses on channel coding, multiple access, network protocols, and related techniques for IoT/5G. Channel coding is widely used to enhance reliability and spectral efficiency. In particular, low-density parity-check (LDPC) codes and polar codes are being optimized for the next wireless standard. Moreover, advanced network protocols are developed to improve wireless throughput. These topics attract a great deal of attention in modern communications.