672 research outputs found

    A Study on High-Performance LDPC Decoding Methods for Error Correction of NAND Flash Memory

    Ph.D. dissertation, Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2013. Advisor: Wonyong Sung.

    High-performance error correction is essential for NAND flash memory, whose raw bit error rate grows as semiconductor process geometries shrink. Soft-decision error-correcting codes such as low-density parity-check (LDPC) codes offer excellent error-correction performance, but their high implementation complexity makes them difficult to adopt in flash memory systems. This dissertation proposes high-performance message-passing scheduling methods and a low-complexity decoding algorithm for efficient LDPC decoding. In particular, an efficient decoder architecture for finite geometry (FG) LDPC codes is proposed, and the implemented decoder is used to study the energy consumption of soft-decision decoding for NAND flash memory.
    The first part of the dissertation studies how to improve informed dynamic scheduling (IDS) algorithms. The behavior of the residual belief propagation (RBP) algorithm, the fastest-converging IDS algorithm, is analyzed, and based on this analysis an improved RBP (iRBP) algorithm is proposed that accelerates convergence by preventing message updates from concentrating on particular nodes. A syndrome-based mixed scheduling method is also proposed that combines the fast convergence of iRBP with the strong error-correction capability of node-wise scheduling (NS). Simulations on LDPC codes of various code rates confirm that the proposed syndrome-based mixed scheduling outperforms all other scheduling algorithms tested in this dissertation.
    The second part proposes an improvement to the a posteriori probability (APP) algorithm, which produces many bit errors when decoding fails. A hardware architecture applying the proposed algorithm is presented for FG-LDPC codes, which are well suited to data storage devices owing to their fast convergence and good error-floor performance. The architecture adopts a hybrid structure of shift registers and SRAM to accommodate the high node degrees of FG-LDPC codes, and uses a pipelined parallel organization for high throughput. Three memory-capacity reduction techniques are applied to reduce memory usage, and two low-power techniques are proposed to reduce power consumption. The architecture was implemented in a 0.13-um CMOS process for a rate-0.96 (68254, 65536) Euclidean geometry LDPC code.
    Finally, the dissertation proposes a way to lower the energy consumption of NAND flash memory systems that employ soft-decision decoding. Soft-decision error correction achieves high performance, but it increases the number of flash memory sensing operations and the associated energy consumption. The energy consumption of a NAND flash memory system employing the implemented LDPC decoder is analyzed, and the chip size and energy consumption of the LDPC decoder are compared with those of BCH decoders. In addition, a sensing-precision selection method that exploits the LDPC decoder is proposed. Together, the proposed decoding and scheduling algorithms, VLSI architecture, and read-precision selection method maximize the error-correction performance of NAND flash memory systems while minimizing their energy consumption.

    High-performance error correction for NAND flash memory is greatly needed because the raw bit error rate increases as the semiconductor geometry shrinks for high density. Soft-decision error correction, such as low-density parity-check (LDPC) codes, offers high performance, but its implementation complexity hinders wide adoption in consumer products. This dissertation proposes two high-performance message-passing schedules and a low-complexity decoding algorithm for LDPC codes. In particular, an efficient decoder architecture for finite geometry (FG) LDPC codes is proposed, and the energy consumption of soft-decision decoding for NAND flash memory is analyzed. The first part of this dissertation is devoted to improving the informed dynamic scheduling (IDS) algorithms. We analyze the behavior of residual belief propagation (RBP), which is the fastest IDS algorithm, and develop an improved RBP (iRBP) by avoiding the concentration of message updates at particular nodes. We also study the syndrome-based mixed scheduling of the iRBP and node-wise scheduling (NS). The proposed mixed scheduling outperforms all other scheduling methods tested in this work. The next part of this dissertation develops a conditional variable node update scheme for the a posteriori probability (APP) algorithm. The developed algorithm is robust to decoding failures and can reduce the dynamic power consumption by lowering switching activities in the LDPC decoder. To implement the developed algorithm, we propose a memory-efficient pipelined parallel architecture for LDPC decoding. The architecture employs FG-LDPC codes, which not only show fast convergence speed and good error-floor performance but also perform well with iterative decoding algorithms, making them especially suitable for data storage devices. We also developed a rate-0.96 (68254, 65536) Euclidean geometry LDPC code and implemented the proposed architecture in 0.13-um CMOS technology. This dissertation also covers low-energy error correction of NAND flash memory through soft-decision decoding. Soft-decision-based error correction algorithms show high performance, but they demand an increased number of flash memory sensing operations and consume more energy for memory access. We examine the energy consumption of a NAND flash memory system equipped with an LDPC code-based soft-decision error correction circuit, and minimize the sum of the energy consumed by the NAND flash memory and the LDPC decoder. In addition, the chip size and energy consumption of the decoder were compared with those of two Bose-Chaudhuri-Hocquenghem (BCH) decoding circuits with comparable error performance and throughput. We also propose an LDPC decoder-assisted precision selection method that needs virtually no overhead.
This dissertation is intended to develop high-performance and low-power error correction circuits for NAND flash memory by studying improved decoding and scheduling algorithms, VLSI architecture, and a read precision selection method.

    Table of contents:
    1 Introduction
      1.1 NAND Flash Memory
      1.2 LDPC Codes
      1.3 Outline of the Dissertation
    2 LDPC Decoding and Scheduling Algorithms
      2.1 Introduction
      2.2 Decoding Algorithms for LDPC Codes
        2.2.1 Belief Propagation Algorithm
        2.2.2 Simplified Belief Propagation Algorithms
      2.3 Message-Passing Schedules for Decoding of LDPC Codes
        2.3.1 Static Schedules
        2.3.2 Dynamic Schedules
    3 Improved Dynamic Scheduling Algorithms for Decoding of LDPC Codes
      3.1 Introduction
      3.2 Improved Residual Belief Propagation Algorithm
      3.3 Syndrome-Based Mixed Scheduling of iRBP and NS
      3.4 Complexity Analysis and Simulation Results
        3.4.1 Complexity Analysis
        3.4.2 Simulation Results
      3.5 Concluding Remarks
    4 A Pipelined Parallel Architecture for Decoding of Finite-Geometry LDPC Codes
      4.1 Introduction
      4.2 Finite-Geometry LDPC Codes and Conditional Variable Node Update Algorithm
        4.2.1 Finite-Geometry LDPC Codes
        4.2.2 Conditional Variable Node Update Algorithm for Fixed-Point Normalized APP-Based Algorithm
      4.3 Decoder Architecture
        4.3.1 Baseline Sequential Architecture
        4.3.2 Pipelined-Parallel Architecture
        4.3.3 Memory Capacity Reduction
      4.4 Implementation Results
      4.5 Concluding Remarks
    5 Low-Energy Error Correction of NAND Flash Memory through Soft-Decision Decoding
      5.1 Introduction
      5.2 Energy Consumption of Read Operations in NAND Flash Memory
        5.2.1 Voltage Sensing Scheme for Soft-Decision Data Output
        5.2.2 LSB and MSB Concurrent Access Scheme for Low-Energy Soft-Decision Data Output
        5.2.3 Energy Consumption of Read Operations in NAND Flash Memory
      5.3 The Performance of Soft-Decision Error Correction over a NAND Flash Memory Channel
      5.4 Hardware Performance of the (68254, 65536) LDPC Decoder
        5.4.1 Energy Consumption of the LDPC Decoder
        5.4.2 Performance Comparison of the LDPC Decoder and Two BCH Decoders
      5.5 Low-Energy Error Correction Scheme for NAND Flash Memory
        5.5.1 Optimum Precision for Low-Energy Decoding
        5.5.2 Iteration Count-Based Precision Selection
      5.6 Concluding Remarks
    6 Conclusion
    Bibliography
    Abstract in Korean
    Acknowledgments
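
    To make the scheduling idea concrete, here is a minimal Python sketch of residual-ordered ("RBP-style") message passing on a toy code. The parity-check matrix, the min-sum check update, and the LLR sign convention are illustrative assumptions; this is not the dissertation's iRBP rule or its syndrome-based mixed schedule, which add further selection criteria on top of this greedy residual ordering.

```python
# Minimal sketch of residual-ordered ("RBP-style") scheduling on a toy code.
# Assumptions: the 3x6 parity-check matrix H and the min-sum check update are
# illustrative choices, not taken from the dissertation.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])

def check_to_var(v2c, c, v):
    """Min-sum check-to-variable message on edge (c, v)."""
    others = [u for u in np.nonzero(H[c])[0] if u != v]
    msgs = [v2c[c, u] for u in others]
    return np.prod(np.sign(msgs)) * min(abs(m) for m in msgs)

def rbp_style_decode(chan_llr, max_updates=200):
    m, n = H.shape
    v2c = H * chan_llr                      # initial variable-to-check messages
    c2v = np.zeros((m, n))
    hard = (chan_llr < 0).astype(int)
    for _ in range(max_updates):
        # 1. pick the edge whose recomputed c2v message has the largest residual
        best_edge, best_res = None, -1.0
        for c, v in zip(*np.nonzero(H)):
            res = abs(check_to_var(v2c, c, v) - c2v[c, v])
            if res > best_res:
                best_edge, best_res = (c, v), res
        c, v = best_edge
        c2v[c, v] = check_to_var(v2c, c, v)
        # 2. propagate: refresh v's outgoing messages to its *other* checks
        total = chan_llr[v] + c2v[:, v].sum()
        for c2 in np.nonzero(H[:, v])[0]:
            if c2 != c:
                v2c[c2, v] = total - c2v[c2, v]
        # 3. stop as soon as the hard decision satisfies all parity checks
        hard = ((chan_llr + c2v.sum(axis=0)) < 0).astype(int)
        if not np.any((H @ hard) % 2):
            break
    return hard
```

    In this convention a positive LLR favors bit 0. RBP's weakness, as noted in the abstract, is that such greedy updates can keep concentrating on a few nodes; iRBP counteracts exactly that, and the syndrome-based mixed schedule combines it with node-wise scheduling.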

    Tree-Based Construction of LDPC Codes Having Good Pseudocodeword Weights

    We present a tree-based construction of LDPC codes that have minimum pseudocodeword weight equal to or almost equal to the minimum distance, and perform well with iterative decoding. The construction involves enumerating a d-regular tree for a fixed number of layers and employing a connection algorithm based on permutations or mutually orthogonal Latin squares to close the tree. Methods are presented for degrees d = p^s and d = p^s + 1, for p a prime. One class corresponds to the well-known finite-geometry and finite generalized quadrangle LDPC codes; the other codes presented are new. We also present some bounds on pseudocodeword weight for p-ary LDPC codes. Treating these codes as p-ary LDPC codes rather than binary LDPC codes improves their rates, minimum distances, and pseudocodeword weights, thereby giving a new importance to the finite geometry LDPC codes where p > 2.
    Comment: Submitted to Transactions on Information Theory. Submitted: Oct. 1, 2005; Revised: May 1, 2006, Nov. 25, 200
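
    As a rough, assumption-laden illustration of the construction's first step only, the Python sketch below enumerates a d-regular tree with a fixed number of layers (in the Tanner-graph view the layers alternate between variable and check nodes). The closing step, which connects the last layer using permutations or mutually orthogonal Latin squares, is the paper's contribution and is not reproduced here.

```python
# Toy enumeration of a d-regular tree with a fixed number of layers.
# The root gets d children and every other internal node gets d - 1 children,
# so every internal node has total degree d. Closing the tree into a Tanner
# graph (the paper's contribution) is not shown.
def enumerate_d_regular_tree(d, num_layers):
    layers = [[0]]          # node ids per layer; node 0 is the root
    edges = []
    next_id = 1
    for depth in range(1, num_layers):
        fanout = d if depth == 1 else d - 1
        new_layer = []
        for parent in layers[-1]:
            for _ in range(fanout):
                edges.append((parent, next_id))
                new_layer.append(next_id)
                next_id += 1
        layers.append(new_layer)
    return layers, edges

# Example with d = 3 and four layers: layer sizes are 1, 3, 6, 12.
layers, edges = enumerate_d_regular_tree(3, 4)
print([len(layer) for layer in layers])   # [1, 3, 6, 12]
```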

    Decomposition Methods for Large Scale LP Decoding

    When binary linear error-correcting codes are used over symmetric channels, a relaxed version of the maximum likelihood decoding problem can be stated as a linear program (LP). This LP decoder can be used to decode error-correcting codes at bit error rates comparable to state-of-the-art belief propagation (BP) decoders, but with significantly stronger theoretical guarantees. However, LP decoding, when implemented with standard LP solvers, does not easily scale to the block lengths of modern error correcting codes. In this paper we draw on decomposition methods from optimization theory, specifically the Alternating Directions Method of Multipliers (ADMM), to develop efficient distributed algorithms for LP decoding. The key enabling technical result is a "two-slice" characterization of the geometry of the parity polytope, which is the convex hull of all codewords of a single parity check code. This new characterization simplifies the representation of points in the polytope. Using this simplification, we develop an efficient algorithm for Euclidean norm projection onto the parity polytope. This projection is required by ADMM and allows us to use LP decoding, with all its theoretical guarantees, to decode large-scale error correcting codes efficiently. We present numerical results for LDPC codes of lengths more than 1000. The waterfall region of LP decoding is seen to initiate at a slightly higher signal-to-noise ratio than for sum-product BP; however, an error floor is not observed for LP decoding, unlike for BP. Our implementation of LP decoding using ADMM executes as fast as our baseline sum-product BP decoder, is fully parallelizable, and can be seen to implement a type of message passing with a particularly simple schedule.
    Comment: 35 pages, 11 figures. An early version of this work appeared at the 49th Annual Allerton Conference, September 2011. This version to appear in IEEE Transactions on Information Theory.
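
    To make the constraint set concrete, the Python sketch below tests whether a point lies in the parity polytope, the convex hull of even-weight binary words of length d, using the standard odd-set facet inequalities. This is only a hedged illustration of what the LP relaxation enforces per check; it is not the paper's "two-slice" characterization or its Euclidean-projection routine.

```python
# Membership test for the parity polytope PP_d = conv{x in {0,1}^d : sum(x) even}.
# A point x in [0,1]^d lies in PP_d iff, for every odd-cardinality set S,
#   sum_{i in S} x_i - sum_{i not in S} x_i <= |S| - 1,
# and it suffices to test the single most violated odd set, found greedily.
import numpy as np

def in_parity_polytope(x, tol=1e-9):
    x = np.asarray(x, dtype=float)
    if np.any(x < -tol) or np.any(x > 1 + tol):   # box constraints first
        return False
    S = x > 0.5                                   # unconstrained maximizer
    if S.sum() % 2 == 0:                          # force odd cardinality by
        i = np.argmin(np.abs(x - 0.5))            # flipping the cheapest index
        S[i] = not S[i]
    lhs = x[S].sum() - x[~S].sum()
    return bool(lhs <= S.sum() - 1 + tol)

print(in_parity_polytope([0.5, 0.5, 0.5]))   # True: centroid of PP_3
print(in_parity_polytope([1.0, 1.0, 1.0]))   # False: odd-weight cube vertex
```

    In the ADMM decoder, each parity check contributes one such polytope constraint on the variables it touches, and the per-iteration work is a Euclidean projection onto that polytope rather than this yes/no membership test.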