
    Perturbed Adaptive Belief Propagation Decoding for High-Density Parity-Check Codes

    Algebraic codes such as BCH codes are receiving renewed interest because their short block lengths and low (or absent) error floors make them attractive for ultra-reliable low-latency communications (URLLC) in 5G wireless networks. This article aims to enhance traditional adaptive belief propagation (ABP) decoding, a soft-in-soft-out (SISO) decoding method for high-density parity-check (HDPC) algebraic codes such as Reed-Solomon (RS) codes, Bose-Chaudhuri-Hocquenghem (BCH) codes, and product codes. The key idea of traditional ABP is to sparsify those columns of the parity-check matrix that correspond to the least reliable bits, i.e., the bits with the smallest log-likelihood-ratio (LLR) magnitudes. This sparsification strategy can be suboptimal when some bits have large LLR magnitudes but wrong signs. Motivated by this observation, we propose a perturbed ABP (P-ABP) that incorporates a small number of unstable bits with large LLRs into the sparsification of the parity-check matrix. In addition, we propose partial layered scheduling and hybrid dynamic scheduling to further enhance the performance of P-ABP. Simulation results show that the proposed decoding algorithms achieve better error correction performance and faster convergence than prior-art ABP variants.
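    To make the sparsification step concrete, the sketch below selects the columns to sparsify from the LLR vector and reduces them to weight one by Gaussian elimination over GF(2). This is a minimal illustration under stated assumptions (numpy arrays, a random draw of the perturbed high-|LLR| bits, illustrative function names), not the paper's implementation.

```python
import numpy as np

def select_columns(llr, m, num_perturbed=2, rng=None):
    """Pick m column indices to sparsify: mostly the least reliable bits
    (smallest |LLR|), plus a few bits drawn from the high-|LLR| tail that
    may be confidently wrong -- the 'perturbation' of P-ABP."""
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(np.abs(llr))           # ascending reliability
    chosen = list(order[:m - num_perturbed])  # least reliable bits
    tail = order[m - num_perturbed:]          # remaining, more reliable bits
    chosen += list(rng.choice(tail, size=num_perturbed, replace=False))
    return np.array(chosen)

def sparsify(H, cols):
    """Row-reduce H over GF(2) so each chosen column has weight 1."""
    H = H.copy() % 2
    row = 0
    for c in cols:
        pivots = np.nonzero(H[row:, c])[0]
        if len(pivots) == 0:                  # column depends on earlier ones
            continue
        p = row + pivots[0]
        H[[row, p]] = H[[p, row]]             # move a pivot row into place
        for r in np.nonzero(H[:, c])[0]:      # clear the other 1s in column c
            if r != row:
                H[r] ^= H[row]
        row += 1
    return H
```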

    New Identification and Decoding Techniques for Low-Density Parity-Check Codes

    Error-correction coding schemes are indispensable for today's high-capacity, high-data-rate communication systems. Among the various channel coding schemes, low-density parity-check (LDPC) codes, introduced by Robert G. Gallager, are prominent for their capacity-approaching and superior error-correcting properties. Since there is no hard constraint on the code rate of LDPC codes, it is natural to employ LDPC codes of various code rates and codeword lengths in adaptive modulation and coding (AMC) systems, which change the encoder and the modulator adaptively to improve system throughput. In conventional AMC systems, a dedicated control channel is assigned to coordinate the encoder/decoder changes. A question then arises: can the AMC system still work when such a control channel is absent? This work gives a positive answer to this question by investigating various scenarios comprising different modulation schemes, such as quadrature-amplitude modulation (QAM) and frequency-shift keying (FSK), and different channels, such as additive white Gaussian noise (AWGN) channels and fading channels.
    LDPC decoding is usually carried out by iterative belief-propagation (BP) algorithms, and as LDPC codes become prevalent in advanced communication and storage systems, low-complexity decoding algorithms are favored in practical applications. In the conventional BP decoding algorithm, the stopping criterion is to check whether all parities are satisfied. This single rule cannot identify undecodable blocks, so decoding time and power are wasted on unnecessary iterations. In this work, we propose a new stopping criterion that identifies undecodable blocks at an early stage of the iterative decoding process. Furthermore, in the conventional BP decoding algorithm the variable (check) nodes are updated in parallel, and it is known that the number of iterations can be reduced by serial scheduling. Informed dynamic scheduling (IDS) algorithms have been proposed in the existing literature to reduce the number of iterations further, but the computational complexity of finding the next node to update in existing IDS algorithms cannot be neglected. We propose a new, efficient IDS scheme that provides a better performance-complexity trade-off than existing IDS schemes.
    In addition, this work investigates the iterative decoding threshold (IDT), which is used to compare LDPC codes. A family of LDPC codes called LDPC convolutional codes has drawn much attention from researchers in recent years due to the threshold saturation phenomenon. Computing the IDT of an LDPC convolutional code may be demanding when the termination length grows to the thousands or approaches infinity, especially for AWGN channels. We propose a fast IDT estimation algorithm that greatly reduces the complexity of the IDT calculation for LDPC convolutional codes with arbitrarily large termination length (including infinity), so that their IDTs can be obtained quickly.
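    The early-stopping idea can be pictured with the sketch below, which wraps a generic BP iteration and aborts when the number of unsatisfied parity checks stops improving over several iterations. The stagnation window and the placeholder update_fn are illustrative assumptions; the thesis's actual criterion is not reproduced here.

```python
import numpy as np

def decode_with_early_stop(H, llr, update_fn, max_iter=50, window=5):
    """Belief-propagation loop with an illustrative early-stopping rule.
    Besides the usual all-checks-satisfied test, decoding is aborted when
    the count of unsatisfied checks has not improved for `window`
    consecutive iterations. update_fn(H, llr) stands in for one BP
    iteration and returns the updated LLR vector."""
    best, stall = np.inf, 0
    hard = (llr < 0).astype(np.uint8)
    for it in range(1, max_iter + 1):
        llr = update_fn(H, llr)
        hard = (llr < 0).astype(np.uint8)          # hard decision
        unsat = np.count_nonzero(H.dot(hard) % 2)  # unsatisfied checks
        if unsat == 0:
            return hard, it, "decoded"
        if unsat < best:
            best, stall = unsat, 0
        else:
            stall += 1
            if stall >= window:                    # likely undecodable
                return hard, it, "early-stop"
    return hard, max_iter, "max-iterations"
```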

    ๋‚ธ๋“œํ”Œ๋ž˜์‹œ ๋ฉ”๋ชจ๋ฆฌ ์˜ค๋ฅ˜์ •์ •์„ ์œ„ํ•œ ๊ณ ์„ฑ๋Šฅ LDPC ๋ณตํ˜ธ๋ฐฉ๋ฒ• ์—ฐ๊ตฌ

    ํ•™์œ„๋…ผ๋ฌธ (๋ฐ•์‚ฌ)-- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› : ์ „๊ธฐยท์ปดํ“จํ„ฐ๊ณตํ•™๋ถ€, 2013. 8. ์„ฑ์›์šฉ.๋ฐ˜๋„์ฒด ๊ณต์ •์˜ ๋ฏธ์„ธํ™”์— ๋”ฐ๋ผ ๋น„ํŠธ ์—๋Ÿฌ์œจ์ด ์ฆ๊ฐ€ํ•˜๋Š” ๋‚ธ๋“œ ํ”Œ๋ž˜์‹œ ๋ฉ”๋ชจ๋ฆฌ์—์„œ ๊ณ ์„ฑ๋Šฅ ์—๋Ÿฌ ์ •์ • ๋ฐฉ๋ฒ•์€ ํ•„์ˆ˜์ ์ด๋‹ค. Low-density parity-check (LDPC) ๋ถ€ํ˜ธ์™€ ๊ฐ™์€ ์—ฐํŒ์ • ์—๋Ÿฌ ์ •์ • ๋ถ€ํ˜ธ๋Š” ๋›ฐ์–ด๋‚œ ์—๋Ÿฌ ์ •์ • ์„ฑ๋Šฅ์„ ๋ณด์ด์ง€๋งŒ, ๋†’์€ ๊ตฌํ˜„ ๋ณต์žก๋„๋กœ ์ธํ•ด ํ”Œ๋ž˜์‹œ ๋ฉ”๋ชจ๋ฆฌ ์‹œ์Šคํ…œ์— ์ ์šฉ๋˜๊ธฐ ํž˜๋“  ๋‹จ์ ์ด ์žˆ๋‹ค. ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” LDPC ๋ถ€ํ˜ธ์˜ ํšจ์œจ์ ์ธ ๋ณตํ˜ธ๋ฅผ ์œ„ํ•ด ๊ณ ์„ฑ๋Šฅ ๋ฉ”์‹œ์ง€ ์ „ํŒŒ ์Šค์ผ€์ค„๋ง ๋ฐฉ๋ฒ•๊ณผ ์ € ๋ณต์žก๋„ ๋ณตํ˜ธ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ œ์•ˆํ•œ๋‹ค. ํŠนํžˆ finite geometry (FG) LDPC ๋ถ€ํ˜ธ์— ๋Œ€ํ•œ ํšจ์œจ์ ์ธ ๋””์ฝ”๋” ์•„ํ‚คํ…์ณ๋ฅผ ์ œ์•ˆํ•˜๋ฉฐ, ๊ตฌํ˜„๋œ ๋””์ฝ”๋”๋ฅผ ์ด์šฉํ•˜์—ฌ ๋‚ธ๋“œ ํ”Œ๋ž˜์‹œ ๋ฉ”๋ชจ๋ฆฌ์— ๋Œ€ํ•ด ์—ฐํŒ์ • ๋ณตํ˜ธ์‹œ์˜ ์—๋„ˆ์ง€ ์†Œ๋ชจ๋Ÿ‰์— ๋Œ€ํ•ด ์—ฐ๊ตฌํ•œ๋‹ค. ๋ณธ ๋…ผ๋ฌธ์˜ ์ฒซ ๋ฒˆ์งธ ๋ถ€๋ถ„์—์„œ๋Š” ๋™์  ์Šค์ผ€์ค„๋ง (informed dynamic scheduling, IDS) ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ์„ฑ๋Šฅํ–ฅ์ƒ ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์—ฐ๊ตฌํ•œ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ์šฐ์„  ๊ธฐ์กด์˜ ๊ฐ€์žฅ ๋น ๋ฅธ ์ˆ˜๋ ด ์†๋„๋ฅผ ๋ณด์ด๋Š” IDS ์•Œ๊ณ ๋ฆฌ์ฆ˜์ธ ๋ ˆ์ง€๋“€์–ผ ์‹ ๋ขฐ ์ „ํŒŒ (residual belief propagation, RBP) ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ๋™์ž‘ ํŠน์„ฑ์„ ๋ถ„์„ํ•˜๊ณ , ์ด๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ ํŠน์ • ๋…ธ๋“œ์— ๋ฉ”์‹œ์ง€ ๊ฐฑ์‹ ์ด ์ง‘์ค‘๋˜๋Š” ๊ฒƒ์„ ๋ฐฉ์ง€ํ•˜์—ฌ RBP ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ์ˆ˜๋ ด์†๋„๋ฅผ ์ฆ๊ฐ€์‹œํ‚จ improved RBP (iRBP) ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ œ์•ˆํ•œ๋‹ค. ๋˜ํ•œ iRBP์˜ ๋›ฐ์–ด๋‚œ ์ˆ˜๋ ด์†๋„์™€ ๊ธฐ์กด์˜ NS ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ์šฐ์ˆ˜ํ•œ ์—๋Ÿฌ ์ •์ • ๋Šฅ๋ ฅ์„ ๋ชจ๋‘ ๊ฐ–์ถ˜ ์‹ ๋“œ๋กฌ ๊ธฐ๋ฐ˜์˜ ํ˜ผํ•ฉ ์Šค์ผ€์ค„๋ง (mixed scheduling) ๋ฐฉ๋ฒ•์„ ์ œ์•ˆํ•œ๋‹ค. ๋์œผ๋กœ ๋‹ค์–‘ํ•œ ๋ถ€ํ˜ธ์œจ์˜ LDPC ๋ถ€ํ˜ธ์— ๋Œ€ํ•œ ๋ชจ์˜์‹คํ—˜์„ ํ†ตํ•ด ์ œ์•ˆ๋œ ์‹ ๋“œ๋กฌ ๊ธฐ๋ฐ˜์˜ ํ˜ผํ•ฉ ์Šค์ผ€์ค„๋ง ๋ฐฉ๋ฒ•์ด ๋ณธ ๋…ผ๋ฌธ์—์„œ ์‹œํ—˜๋œ ๋‹ค๋ฅธ ๋ชจ๋“  ์Šค์ผ€์ค„๋ง ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ์„ฑ๋Šฅ์„ ๋Šฅ๊ฐ€ํ•จ์„ ํ™•์ธํ•˜์˜€๋‹ค. ๋…ผ๋ฌธ์˜ ๋‘ ๋ฒˆ์งธ ๋ถ€๋ถ„์—์„œ๋Š” ๋ณตํ˜ธ ์‹คํŒจ์‹œ ๋งŽ์€ ๋น„ํŠธ ์—๋Ÿฌ๋ฅผ ๋ฐœ์ƒ์‹œํ‚ค๋Š” a posteriori probability (APP) ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ๊ฐœ์„  ๋ฐฉ์•ˆ์— ๋ฐฉ์•ˆ์„ ์ œ์•ˆํ•œ๋‹ค. ๋˜ํ•œ ๋น ๋ฅธ ์ˆ˜๋ ด์†๋„์™€ ์šฐ์ˆ˜ํ•œ ์—๋Ÿฌ ๋งˆ๋ฃจ (error-floor) ์„ฑ๋Šฅ์œผ๋กœ ๋ฐ์ดํ„ฐ ์ €์žฅ์žฅ์น˜์— ์ ํ•ฉํ•œ FG-LDPC ๋ถ€ํ˜ธ์— ๋Œ€ํ•ด ์ œ์•ˆ๋œ ์•Œ๊ณ ๋ฆฌ์ฆ˜์ด ์ ์šฉ๋œ ํ•˜๋“œ์›จ์–ด ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ œ์•ˆํ•˜์˜€๋‹ค. ์ œ์•ˆ๋œ ์•„ํ‚คํ…์ฒ˜๋Š” ๋†’์€ ๋…ธ๋“œ ๊ฐ€์ค‘์น˜๋ฅผ ๊ฐ€์ง€๋Š” FG-LDPC ๋ถ€ํ˜ธ์— ์ ํ•ฉํ•˜๋„๋ก ์‰ฌํ”„ํŠธ ๋ ˆ์ง€์Šคํ„ฐ (shift registers)์™€ SRAM ๊ธฐ๋ฐ˜์˜ ํ˜ผํ•ฉ ๊ตฌ์กฐ๋ฅผ ์ฑ„์šฉํ•˜๋ฉฐ, ๋†’์€ ์ฒ˜๋ฆฌ๋Ÿ‰์„ ์–ป๊ธฐ ์œ„ํ•ด ํŒŒ์ดํ”„๋ผ์ธ๋œ ๋ณ‘๋ ฌ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค. ๋˜ํ•œ ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์„ ์ค„์ด๊ธฐ ์œ„ํ•ด ์„ธ ๊ฐ€์ง€์˜ ๋ฉ”๋ชจ๋ฆฌ ์šฉ๋Ÿ‰ ๊ฐ์†Œ ๊ธฐ๋ฒ•์„ ์ ์šฉํ•˜๋ฉฐ, ์ „๋ ฅ ์†Œ๋น„๋ฅผ ์ค„์ด๊ธฐ ์œ„ํ•ด ๋‘ ๊ฐ€์ง€์˜ ์ €์ „๋ ฅ ๊ธฐ๋ฒ•์„ ์ œ์•ˆํ•œ๋‹ค. ๋ณธ ์ œ์•ˆ๋œ ์•„ํ‚คํ…์ฒ˜๋Š” ๋ถ€ํ˜ธ์œจ 0.96์˜ (68254, 65536) Euclidean geometry LDPC ๋ถ€ํ˜ธ์— ๋Œ€ํ•ด 0.13-um CMOS ๊ณต์ •์—์„œ ๊ตฌํ˜„ํ•˜์˜€๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ์—ฐํŒ์ • ๋ณตํ˜ธ๊ฐ€ ์ ์šฉ๋œ ๋‚ธ๋“œ ํ”Œ๋ž˜์‹œ ๋ฉ”๋ชจ๋ฆฌ ์‹œ์Šคํ…œ์˜ ์—๋„ˆ์ง€ ์†Œ๋ชจ๋ฅผ ๋‚ฎ์ถ”๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์ œ์•ˆํ•œ๋‹ค. ์—ฐํŒ์ • ๊ธฐ๋ฐ˜์˜ ์—๋Ÿฌ ์ •์ • ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ๋†’์€ ์„ฑ๋Šฅ์„ ๋ณด์ด์ง€๋งŒ, ์ด๋Š” ํ”Œ๋ž˜์‹œ ๋ฉ”๋ชจ๋ฆฌ์˜ ์„ผ์‹ฑ ์ˆ˜์™€ ์—๋„ˆ์ง€ ์†Œ๋ชจ๋ฅผ ์ฆ๊ฐ€ ์‹œํ‚ค๋Š” ๋‹จ์ ์ด ์žˆ๋‹ค. ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” ์•ž์„œ ๊ตฌํ˜„๋œ LDPC ๋””์ฝ”๋”๊ฐ€ ์ฑ„์šฉ๋œ ๋‚ธ๋“œ ํ”Œ๋ž˜์‹œ ๋ฉ”๋ชจ๋ฆฌ ์‹œ์Šคํ…œ์˜ ์—๋„ˆ์ง€ ์†Œ๋ชจ๋ฅผ ๋ถ„์„ํ•˜๊ณ , LDPC ๋””์ฝ”๋”์™€ BCH ๋””์ฝ”๋” ๊ฐ„์˜ ์นฉ ์‚ฌ์ด์ฆˆ์™€ ์—๋„ˆ์ง€ ์†Œ๋ชจ๋Ÿ‰์„ ๋น„๊ตํ•˜์˜€๋‹ค. 
์ด์™€ ๋”๋ถˆ์–ด ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” LDPC ๋””์ฝ”๋”๋ฅผ ์ด์šฉํ•œ ์„ผ์‹ฑ ์ •๋ฐ€๋„ ๊ฒฐ์ • ๋ฐฉ๋ฒ•์„ ์ œ์•ˆํ•œ๋‹ค. ๋ณธ ์—ฐ๊ตฌ๋ฅผ ํ†ตํ•ด ์ œ์•ˆ๋œ ๋ณตํ˜ธ ๋ฐ ์Šค์ผ€์ค„๋ง ์•Œ๊ณ ๋ฆฌ์ฆ˜, VLSI ์•„ํ‚คํ…์ณ, ๊ทธ๋ฆฌ๊ณ  ์ฝ๊ธฐ ์ •๋ฐ€๋„ ๊ฒฐ์ • ๋ฐฉ๋ฒ•์„ ํ†ตํ•ด ๋‚ธ๋“œ ํ”Œ๋ž˜์‹œ ๋ฉ”๋ชจ๋ฆฌ ์‹œ์Šคํ…œ์˜ ์—๋Ÿฌ ์ •์ • ์„ฑ๋Šฅ์„ ๊ทน๋Œ€ํ™” ํ•˜๊ณ  ์—๋„ˆ์ง€ ์†Œ๋ชจ๋ฅผ ์ตœ์†Œํ™” ํ•  ์ˆ˜ ์žˆ๋‹ค.High-performance error correction for NAND flash memory is greatly needed because the raw bit error rate increases as the semiconductor geometry shrinks for high density. Soft-decision error correction, such as low-density parity-check (LDPC) codes, offers high performance but their implementation complexity hinders wide adoption to consumer products. This dissertation proposes two high-performance message-passing schedules and a low-complexity decoding algorithm for LDPC codes. In particular, an efficient decoder architecture for finite geometry (FG) LDPC codes is proposed, and the energy consumption of soft-decision decoding for NAND flash memory is analyzed. The first part of this dissertation is devoted to improving the informed dynamic scheduling (IDS) algorithms. We analyze the behavior of the residual belief propagation (RBP), which is the fastest IDS algorithm, and develop an improved RBP (iRBP) by avoiding the concentration of message updates at a particular node. We also study the syndrome-based mixed scheduling of the iRBP and the node-wise scheduling (NS). The proposed mixed scheduling outperforms all other scheduling methods tested in this work. The next part of this dissertation is to develop a conditional variable node update scheme for the a posteriori probability (APP) algorithm. The developed algorithm is robust to decoding failures and can reduce the dynamic power consumption by lowering switching activities in the LDPC decoder. To implement the developed algorithm, we propose a memory-efficient pipelined parallel architecture for LDPC decoding. The architecture employs FG-LDPC codes that not only show fast convergence speed and good error-floor performance but also perform well with iterative decoding algorithms, which is especially suitable for data storage devices. We also developed a rate-0.96 (68254, 65536) Euclidean geometry LDPC code and implemented the proposed architecture in 0.13-um CMOS technology. This dissertation also covers low-energy error correction of NAND flash memory through soft-decision decoding. The soft-decision-based error correction algorithms show high performance, but they demand an increased number of flash memory sensing operations and consume more energy for memory access. We examine the energy consumption of a NAND flash memory system equipping an LDPC code-based soft-decision error correction circuit. The sum of energy consumed at NAND flash memory and the LDPC decoder is minimized. In addition, the chip size and energy consumption of the decoder were compared with those of two Bose-Chaudhuri-Hocquenghem (BCH) decoding circuits showing the comparable error performance and the throughput. We also propose an LDPC decoder-assisted precision selection method that needs virtually no overhead. 
    Table of contents:
    1 Introduction
      1.1 NAND Flash Memory
      1.2 LDPC Codes
      1.3 Outline of the Dissertation
    2 LDPC Decoding and Scheduling Algorithms
      2.1 Introduction
      2.2 Decoding Algorithms for LDPC Codes
        2.2.1 Belief Propagation Algorithm
        2.2.2 Simplified Belief Propagation Algorithms
      2.3 Message-Passing Schedules for Decoding of LDPC Codes
        2.3.1 Static Schedules
        2.3.2 Dynamic Schedules
    3 Improved Dynamic Scheduling Algorithms for Decoding of LDPC Codes
      3.1 Introduction
      3.2 Improved Residual Belief Propagation Algorithm
      3.3 Syndrome-Based Mixed Scheduling of iRBP and NS
      3.4 Complexity Analysis and Simulation Results
        3.4.1 Complexity Analysis
        3.4.2 Simulation Results
      3.5 Concluding Remarks
    4 A Pipelined Parallel Architecture for Decoding of Finite-Geometry LDPC Codes
      4.1 Introduction
      4.2 Finite-Geometry LDPC Codes and Conditional Variable Node Update Algorithm
        4.2.1 Finite-Geometry LDPC Codes
        4.2.2 Conditional Variable Node Update Algorithm for Fixed-Point Normalized APP-Based Algorithm
      4.3 Decoder Architecture
        4.3.1 Baseline Sequential Architecture
        4.3.2 Pipelined-Parallel Architecture
        4.3.3 Memory Capacity Reduction
      4.4 Implementation Results
      4.5 Concluding Remarks
    5 Low-Energy Error Correction of NAND Flash Memory through Soft-Decision Decoding
      5.1 Introduction
      5.2 Energy Consumption of Read Operations in NAND Flash Memory
        5.2.1 Voltage Sensing Scheme for Soft-Decision Data Output
        5.2.2 LSB and MSB Concurrent Access Scheme for Low-Energy Soft-Decision Data Output
        5.2.3 Energy Consumption of Read Operations in NAND Flash Memory
      5.3 The Performance of Soft-Decision Error Correction over a NAND Flash Memory Channel
      5.4 Hardware Performance of the (68254, 65536) LDPC Decoder
        5.4.1 Energy Consumption of the LDPC Decoder
        5.4.2 Performance Comparison of the LDPC Decoder and Two BCH Decoders
      5.5 Low-Energy Error Correction Scheme for NAND Flash Memory
        5.5.1 Optimum Precision for Low-Energy Decoding
        5.5.2 Iteration Count-Based Precision Selection
      5.6 Concluding Remarks
    6 Conclusion
    Bibliography
    Abstract in Korean
    Acknowledgements

    Iterative graphical algorithms for phase noise channels.

    Doctoral degree, University of KwaZulu-Natal, Durban.
    This thesis proposes algorithms based on graphical models to detect signals and characterise the performance of communication systems in the presence of Wiener phase noise. The algorithms exploit properties of phase noise and consequently use graphical models to develop low-complexity approaches to signal detection. The contributions are presented in the form of papers.
    The first paper investigates the effect of message scheduling on the performance of graphical algorithms. A serial message schedule is proposed for Orthogonal Frequency Division Multiplexing (OFDM) systems in the presence of carrier frequency offset and phase noise, and the resulting algorithm is shown to converge better than non-serial scheduling algorithms.
    The second paper introduces the concept of circular random variables, which exploits the properties of phase noise. An iterative algorithm is proposed for detecting Low Density Parity Check (LDPC) codes in the presence of Wiener phase noise and is shown to match the performance of existing algorithms at very low complexity.
    The third paper extends circular variables to the detection of coherent optical OFDM signals in the presence of residual carrier frequency offset and Wiener phase noise. The proposed iterative algorithm shows a significant improvement in complexity compared to existing algorithms.
    The fourth paper proposes two methods based on minimising the free energy function of graphical models: the first combines Belief Propagation (BP) with Uniformly Re-weighted BP (URWBP), and the second combines Mean Field (MF) with URWBP. Both are used to detect LDPC codes in Wiener phase noise channels and show a good balance between complexity and performance compared to existing methods.
    The last paper proposes parameter-based computation of the information bounds of the Wiener phase noise channel. The proposed methods compute the information lower and upper bounds using parameters of the Gaussian probability density function and achieve performance similar to existing methods at low complexity.
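    To fix notation, the snippet below simulates the Wiener phase noise channel assumed throughout: the carrier phase performs a random walk, theta[k] = theta[k-1] + delta[k] with Gaussian increments, on top of AWGN. A minimal sketch with assumed parameter values, useful for reproducing the setting rather than any of the proposed detectors.

```python
import numpy as np

def wiener_phase_noise_channel(symbols, sigma_delta, snr_db, rng=None):
    """Rotate unit-energy symbols by a Wiener phase process and add AWGN.
    Returns the received samples and the true phase trajectory."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(symbols)
    theta = np.cumsum(rng.normal(0.0, sigma_delta, n))  # random-walk phase
    noise_var = 10 ** (-snr_db / 10)
    awgn = np.sqrt(noise_var / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
    return symbols * np.exp(1j * theta) + awgn, theta

# example: QPSK through the channel
rng = np.random.default_rng(0)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 1000)))
rx, theta = wiener_phase_noise_channel(qpsk, sigma_delta=0.01, snr_db=10)
```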

    Short-length Low-density Parity-check Codes: Construction and Decoding Algorithms

    Error control coding is an essential part of modern communications systems. LDPC codes have been demonstrated to offer performance near the fundamental limits of channels corrupted by random noise. Optimal maximum-likelihood decoding of LDPC codes is too complex to be practically useful even at short block lengths, so a graph-based message-passing decoder known as the belief propagation algorithm is used instead. On graphs without closed paths (cycles), iterative message-passing decoding is known to be optimal and may converge in a single iteration, although identifying the message update schedule that allows single-iteration convergence is not trivial. At finite block lengths, however, graphs without cycles have poor minimum distance properties and perform poorly even under optimal decoding. LDPC codes with large block lengths have been demonstrated to offer performance close to that predicted for codes of infinite length, as the cycles present in their graphs are quite long.
    In this thesis, LDPC codes of shorter length are considered, as they offer advantages in latency and complexity at the cost of the performance degradation caused by the larger number of short cycles in the graph. For these shorter LDPC codes, three problems are considered. The first is the improved construction of structured and unstructured LDPC code graphs of short length, with a view to reducing the harmful effects of the cycles on error rate performance, based on knowledge of the decoding process; structured code graphs are particularly interesting as they bring benefits in encoding and decoding complexity and speed. The second is the design and construction of LDPC codes for the block fading channel, a particularly challenging scenario for error control code design; both established and novel classes of codes for this channel are considered. The final problem is the decoding of LDPC codes by the belief propagation algorithm, in particular the scheduling of the messages passed in the iterative decoder; a knowledge-aided approach is developed, based on message reliabilities and residuals, that allows fast convergence and significant improvements in error rate performance.
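    Since short cycles are central here, the sketch below computes the girth of a Tanner graph by breadth-first search from every variable node; a construction procedure can reject candidate columns that create cycles shorter than a target. A plain-Python illustration, not the construction algorithms of the thesis.

```python
from collections import deque

def tanner_girth(H):
    """Shortest cycle length in the Tanner graph of 0/1 matrix H
    (rows = check nodes, columns = variable nodes); float('inf') if the
    graph is cycle-free. BFS from each variable node: a non-tree edge
    between nodes at distances d1 and d2 closes a cycle of d1 + d2 + 1."""
    m, n = len(H), len(H[0])
    adj = {}
    for i in range(m):
        for j in range(n):
            if H[i][j]:
                adj.setdefault(('v', j), []).append(('c', i))
                adj.setdefault(('c', i), []).append(('v', j))
    best = float('inf')
    for j in range(n):
        root = ('v', j)
        dist, parent = {root: 0}, {root: None}
        q = deque([root])
        while q:
            u = q.popleft()
            for w in adj.get(u, []):
                if w == parent[u]:
                    continue
                if w in dist:                    # non-tree edge: cycle found
                    best = min(best, dist[u] + dist[w] + 1)
                else:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
    return best

# two checks sharing two variables form a 4-cycle:
print(tanner_girth([[1, 1, 0], [1, 1, 1]]))  # -> 4
```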

    Artificial intelligence methods for security and cyber security systems

    This research addresses threat analysis and countermeasures employing Artificial Intelligence (AI) methods within the civilian domain, where safety and mission-critical aspects are essential. AI faces challenges of repeatable determinism and decision explanation. This research proposes methods for dense and convolutional networks that provide repeatable determinism. In dense networks, the proposed alternative method matched the performance of the existing scheme while producing more structured learnt weights; in convolutional networks, it also learnt earlier and reached higher accuracy. When demonstrated on colour image classification, first-epoch accuracy improved to 67%, from 29% with the existing scheme. Examined in transfer learning with the Fast Gradient Sign Method (FGSM) as an analytical means of controlling the distortion of dissimilarity, the proposed method retained significantly more of the learnt model, with 31% accuracy instead of 9%.
    The research also proposes a threat analysis method with set mappings and first-principles analytical steps, applied to a symbolic AI method using an algebraic expert system with virtualized neurons. The neural expert system demonstrated the infilling of parameters by calculating beamwidths under varying uncertainty about the antenna type. Combined with a proposed formula extraction method, it offers the potential for machine learning of new rules as a neuro-symbolic AI method. The proposed method allocates extra weights to neuron input value ranges as activation strengths, which simplifies the learnt representation and reduces model depth, leaving less significant dropout potential.
    Finally, an image classification method for emitter identification is proposed, together with a synthetic dataset generation method, and is shown to identify accurately between fourteen radar emission modes with high ambiguity between them, achieving 99.8% accuracy. This method would serve as a mechanism to recognize non-threat civil radars, raising a threat alert when deviations from those civilian emitters are detected.
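    As a reference point for the FGSM experiments mentioned above, here is a minimal numpy sketch of the method on a logistic-regression model: a single step of size eps in the sign of the input gradient of the cross-entropy loss. The toy model and numbers are assumptions, unrelated to the thesis's networks.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method on p = sigmoid(w.x + b): perturb x by
    eps in the direction that increases the cross-entropy loss."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w               # d(loss)/dx for cross-entropy
    return x + eps * np.sign(grad_x)

# toy usage: a clean point and its adversarial copy
rng = np.random.default_rng(1)
w, b = rng.normal(size=8), 0.1
x, y = rng.normal(size=8), 1.0
x_adv = fgsm(x, y, w, b, eps=0.1)
```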

    Computational Methods for Protein Inference in Shotgun Proteomics Experiments

    Since the beginning of this millennium, the advent of high-throughput methods in numerous fields of the life sciences has led to a shift in paradigms. A broad variety of technologies emerged that allow comprehensive quantification of the molecules involved in biological processes. Simultaneously, a major increase in data volume has been recorded with these techniques through enhanced instrumentation and other technical advances. By supplying computational methods that automatically process raw data into biological information, the field of bioinformatics plays an increasingly important role in the analysis of this ever-growing mass of data. Computational mass spectrometry in particular is a bioinformatics research field that provides means to gather, analyze and visualize data from high-throughput mass spectrometric experiments.
    For the study of the entirety of proteins in a cell or an environmental sample, even current techniques reach limitations that need to be circumvented by simplifying the samples subjected to the mass spectrometer. These pre-digested (so-called bottom-up) proteomics experiments pose an even bigger computational burden during analysis, since complex ambiguities need to be resolved during protein inference, grouping and quantification.
    In this thesis, we present several developments in pursuit of our goal to provide means for a fully automated analysis of complex and large-scale bottom-up proteomics experiments. Firstly, to address the prohibitive computational complexity of state-of-the-art discrete Bayesian protein inference, so-called convolution trees have recently been employed; however, they previously offered no accurate and simultaneously numerically stable way to perform max-product inference. This dissertation therefore first describes a refined, more stable technique for inference on sums of random variables, based on a piecewise, extrapolating scheme. Building on the integration of this method into a co-developed library for Bayesian inference, an OpenMS tool for protein inference is then presented. The tool performs efficient protein inference on a discrete Bayesian network via a loopy belief propagation algorithm and, despite the strictly probabilistic formulation of the problem, outperforms most established methods in computational efficiency. Its interface also offers unique input and output options, such as regularizing the number of proteins in a group, protein-specific priors, and recalibrated peptide posteriors. Finally, this work presents a complete, easy-to-use, yet scalable workflow for protein inference and quantification built around the new tool. The pipeline is implemented in nextflow and is part of a set of standardized, well-tested, and community-maintained workflows by the nf-core collective. Our workflow runs on large-scale data with complex experimental designs and allows a one-command (re-)analysis of local and publicly available data sets with state-of-the-art accuracy and excellent performance on various high-performance computing environments or the cloud.
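    The quantity at the heart of the convolution-tree machinery is the distribution of a sum of independent indicator variables (e.g., how many of a protein's peptides are present). The sketch below computes it by direct iterated convolution for the ordinary sum-product case; the dissertation's contribution, a numerically stable max-product counterpart, is deliberately not reproduced here.

```python
import numpy as np

def sum_distribution(bernoulli_ps):
    """Distribution of N = X_1 + ... + X_n for independent Bernoulli
    variables with success probabilities bernoulli_ps.
    Returns an array whose entry k is P(N = k)."""
    dist = np.array([1.0])                    # P(N = 0) = 1 before any X_i
    for p in bernoulli_ps:
        dist = np.convolve(dist, [1.0 - p, p])
    return dist

# e.g. three peptides present with probabilities 0.9, 0.5, 0.2
print(sum_distribution([0.9, 0.5, 0.2]))     # P(N = 0..3)
```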

    Advances in Condition Monitoring, Optimization and Control for Complex Industrial Processes

    The book documents 25 papers collected for the Special Issue “Advances in Condition Monitoring, Optimization and Control for Complex Industrial Processes”, highlighting recent research trends in complex industrial processes. It aims to stimulate the research field and to benefit readers from both academic institutes and industrial sectors.
    • โ€ฆ
    corecore