
    A Study on DNA Memory Encoding Architecture

    The amount of raw data generated is growing at an exponential rate due to the rapidly increasing number of sensors in electronic systems. While the majority of this data is never used, it is often kept for cases such as failure analysis. As such, archival memory storage, where data can be stored at an extremely high density at the cost of read latency, is becoming more popular than ever for long-term storage. In biological organisms, Deoxyribonucleic Acid (DNA) stores information in terms of simple building blocks, allowing larger and more complicated structures to be represented at a density much higher than can currently be realized in modern memory devices. Given the ability of organisms to store this information in a set of four bases for extremely long periods of time with limited degradation, DNA presents itself as a possible way to store data in a manner similar to binary data. This work investigates the use of DNA strands as a storage medium, where system-level data is translated into an efficient encoding that minimizes base-pair errors both at the local level and at the chain level. An encoding method using a Bose-Chaudhuri-Hocquenghem (BCH) pre-coded Raptor scheme is implemented in conjunction with an 8-to-6 binary-to-base translation, yielding an informational density of 1.18 bits/base pair. A Field-Programmable Gate Array (FPGA) is then used in conjunction with a soft-core processor to verify address and key translation abilities, providing strong support that a strand-pool DNA model is reasonable for archival storage.
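    The abstract does not specify the exact 8-to-6 mapping, so the following is only a minimal sketch of the general idea of a binary-to-base translation: one byte is mapped to a fixed 6-base word drawn from a codebook that avoids homopolymer runs of three or more identical bases. The homopolymer constraint and the codebook construction are assumptions for illustration, not the authors' encoder, and the BCH pre-coded Raptor layer is omitted.

        from itertools import product

        BASES = "ACGT"

        def build_codebook(length=6):
            """Collect length-6 base words with no run of three identical bases.
            The 4^6 = 4096 words shrink to 3276 such words, far more than the
            256 needed to give every byte value its own word (8/6 ~ 1.33
            bits/base before any coding overhead)."""
            words = []
            for tup in product(BASES, repeat=length):
                w = "".join(tup)
                if not any(w[i] == w[i + 1] == w[i + 2] for i in range(length - 2)):
                    words.append(w)
            return words[:256]

        CODEBOOK = build_codebook()
        INVERSE = {w: i for i, w in enumerate(CODEBOOK)}

        def encode(data: bytes) -> str:
            """Translate bytes into a DNA strand, 6 bases per byte."""
            return "".join(CODEBOOK[b] for b in data)

        def decode(strand: str) -> bytes:
            """Translate a strand back into bytes, 6 bases at a time."""
            return bytes(INVERSE[strand[i:i + 6]] for i in range(0, len(strand), 6))

        payload = b"archive"
        assert decode(encode(payload)) == payload

    A real pipeline would also constrain GC content and add the BCH/Raptor protection described in the abstract; the sketch covers only the translation step.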

    ์ €๋ฐ€๋„ ๋ถ€ํ˜ธ์˜ ์‘์šฉ: ๋ฌถ์Œ ์ง€๊ทธ์žฌ๊ทธ ํŒŒ์šดํ‹ด ๋ถ€ํ˜ธ์™€ WOM ๋ถ€ํ˜ธ

    Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, February 2017. Advisor: 노종선. This dissertation contains the following two contributions on the applications of sparse codes:
    - Fountain codes: batched zigzag (BZ) fountain codes and two-phase batched zigzag (TBZ) fountain codes
    - Write-once memory (WOM) codes: WOM codes implemented by rate-compatible low-density generator matrix (RC-LDGM) codes
    First, two classes of fountain codes, called batched zigzag fountain codes and two-phase batched zigzag fountain codes, are proposed for the symbol erasure channel. At the cost of slightly lengthened code symbols, the message symbols involved in each batch of the proposed codes can be recovered by a low-complexity zigzag decoding algorithm, so the proposed codes have low buffer occupancy during the decoding process. These features are suitable for receivers with limited hardware resources in the broadcasting channel. A method to obtain degree distributions of code symbols for the proposed codes via ripple size evolution is also proposed, taking into account the code symbols released from the batches. It is shown that the proposed codes outperform Luby transform codes and zigzag decodable fountain codes with respect to intermediate recovery rate and coding overhead when the message length is short, the symbol erasure rate is low, and the available buffer size is limited. In the second part of this dissertation, WOM codes constructed from sparse codes are presented. WOM codes have recently been adopted in NAND flash-based solid-state drives (SSDs) in order to extend their lifetime by reducing the number of erasure operations. Here, a new rewriting scheme for the SSD is proposed, which is implemented by multiple binary erasure quantization (BEQ) codes. The corresponding BEQ codes are constructed from RC-LDGM codes. Moreover, by combining the RC-LDGM codes with a page selection method, writing efficiency can be improved. It is verified via simulation that an SSD with the proposed rewriting scheme outperforms SSDs both without rewriting codes and with conventional WOM codes, for single-level cell (SLC) and multi-level cell (MLC) flash memories.
    Table of contents:
    1 Introduction: 1.1 Background; 1.2 Overview of Dissertation
    2 Sparse Codes: 2.1 Linear Block Codes; 2.2 LDPC Codes; 2.3 Message Passing Decoder
    3 New Fountain Codes with Improved Intermediate Recovery Based on Batched Zigzag Coding: 3.1 Preliminaries (3.1.1 Definitions and Notation; 3.1.2 LT Codes; 3.1.3 Zigzag Decodable Codes; 3.1.4 Bit-Level Overhead); 3.2 New Fountain Codes Based on Batched Zigzag Coding (3.2.1 Construction of Shift Matrix; 3.2.2 Encoding and Decoding of the Proposed BZ Fountain Codes; 3.2.3 Storage and Computational Complexity); 3.3 Degree Distribution of BZ Fountain Codes (3.3.1 Relation Between Ψ(x) and Ω(x); 3.3.2 Derivation of Ω(x) via Ripple Size Evolution); 3.4 Two-Phase Batched Zigzag Fountain Codes with Additional Memory (3.4.1 Code Construction; 3.4.2 Bit-Level Overhead); 3.5 Numerical Analysis
    4 Write-Once Memory Codes Using Rate-Compatible LDGM Codes: 4.1 Preliminaries (4.1.1 NAND Flash Memory; 4.1.2 Rewriting Schemes for Flash Memory; 4.1.3 Construction of Rewriting Codes by BEQ Codes); 4.2 Proposed Rewriting Codes (4.2.1 System Model; 4.2.2 Multi-rate Rewriting Codes; 4.2.3 Page Selection for Rewriting); 4.3 RC-LDGM Codes; 4.4 Numerical Analysis
    5 Conclusions
    Bibliography
    Abstract (in Korean)
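    The RC-LDGM-based construction above is not reproduced here; as background for the write-once constraint it exploits, the following is a small sketch of the classic two-write WOM code of Rivest and Shamir, which stores two successive 2-bit values in three binary cells that may only change from 0 to 1 between erasures.

        # Classic two-write WOM code on three write-once cells (Rivest-Shamir).
        FIRST = {0b00: (0, 0, 0), 0b01: (1, 0, 0), 0b10: (0, 1, 0), 0b11: (0, 0, 1)}
        SECOND = {d: tuple(1 - c for c in w) for d, w in FIRST.items()}  # complements

        def read(cells):
            """Weight <= 1 means a first-generation word, otherwise second."""
            table = FIRST if sum(cells) <= 1 else SECOND
            return next(d for d, w in table.items() if w == cells)

        def write(cells, data):
            """Program `data` using only 0 -> 1 transitions (no erase needed)."""
            if read(cells) == data:
                return cells                              # value already stored
            target = FIRST[data] if sum(cells) == 0 else SECOND[data]
            assert all(t >= c for t, c in zip(target, cells)), "write-once violated"
            return target

        cells = (0, 0, 0)
        cells = write(cells, 0b10)   # first write
        cells = write(cells, 0b01)   # second write, still without erasing
        assert read(cells) == 0b01

    The dissertation's BEQ/RC-LDGM construction pursues the same goal, fewer erasures per stored bit, with long sparse codes and a page selection policy rather than a tiny hand-crafted code.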

    ์—ฐํŒ์ • ์˜ค๋ฅ˜์ •์ •์„ ์œ„ํ•œ ๋‚ฎ์€ ๋ณต์žก๋„์˜ ๋ธ”๋ก ํ„ฐ๋ณด๋ถ€ํ˜ธ ๋ณตํ˜ธํ™” ์—ฐ๊ตฌ

    Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, August 2016. Advisor: 성원용. As the throughput needed for communication systems and storage devices increases, high-performance forward error correction (FEC), especially soft-decision (SD) based techniques, becomes essential. In particular, block turbo codes (BTCs) and low-density parity-check (LDPC) codes are considered as candidate FEC codes for next-generation systems, such as beyond-100Gbps optical networks and under-20nm NAND flash memory devices, which require capacity-approaching performance and a very low error floor. BTCs have definite strengths in diversity and encoding complexity because they generally employ a two-dimensional structure, which enables sub-frame-level decoding of the row or column code-words. This sub-frame-level decoding is a strong advantage for parallel processing. The BTC decoding throughput can be improved by applying a low-complexity algorithm to the sub-frame decoding or by running multiple sub-frame decoding modules simultaneously. In this dissertation, we develop high-throughput BTC decoding software that pursues these advantages. The first part of this dissertation is devoted to finding efficient test patterns in the Chase-Pyndiah algorithm. Although the complexity of this algorithm increases linearly with the number of test patterns, it naively considers all possible patterns containing the least reliable positions; as a result, considering one more position nearly doubles the complexity. To solve this issue, we first introduce a new position selection criterion that excludes candidate positions whose reliability is relatively large. By not selecting sufficiently reliable positions, the complexity is greatly reduced. Secondly, we propose a pattern selection scheme considering the error coverage. We define an error coverage factor that represents the influence on the error-correcting performance and compute it by analyzing error events. Based on the computed factor, we select the patterns with a greedy algorithm. By using these methods, we can flexibly balance complexity against performance. The second part of this dissertation develops the low-complexity soft-output processing methods needed for BTC decoding. In the Chase-Pyndiah algorithm, the soft output is updated in two different ways, according to whether competing code-words exist at the updated positions. If competing code-words exist, the Euclidean distance between the soft-input signal and the code-words generated from the test patterns is used; however, the cost of this distance computation is very high and increases linearly with the sub-frame length. We identify computationally redundant positions and optimize the computing process by ignoring them. If no competing code-word exists, a reliability factor that must be pre-determined by an extensive search is required. To avoid this, we propose adaptive determination methods, which provide even better error-correcting performance. In addition, we investigate Pyndiah's soft-output computation and identify drawbacks that appear during the approximation process. To remove these drawbacks, for the positions that are expected to be seriously damaged by the approximation, we replace the updating method with the much simpler reliability-factor-based one, even when competing code-words exist.
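    As context for the test-pattern discussion above, the following is an illustrative sketch of Chase-II style candidate generation: the p least reliable positions (LRPs) of the hard decision are flipped in all combinations, and an optional reliability cap loosely mimics the idea of skipping sufficiently reliable positions. The dissertation's actual selection criterion and its error-coverage-based greedy pattern selection are not reproduced; the cap parameter is an assumption for illustration.

        import numpy as np
        from itertools import product

        def chase_test_patterns(r, p=4, reliability_cap=None):
            """Generate Chase-II candidate hard-decision vectors from soft input r.
            Each candidate would then be passed to an algebraic (e.g. BCH) decoder."""
            r = np.asarray(r, dtype=float)
            hard = (r < 0).astype(np.uint8)              # hard decision per position
            reliability = np.abs(r)
            lrps = list(np.argsort(reliability)[:p])     # p least reliable positions
            if reliability_cap is not None:              # skip positions that are
                lrps = [i for i in lrps                  # already reliable enough
                        if reliability[i] < reliability_cap]
            patterns = []
            for flips in product((0, 1), repeat=len(lrps)):
                cand = hard.copy()
                for pos, f in zip(lrps, flips):
                    cand[pos] ^= f
                patterns.append(cand)
            return patterns                              # 2^len(lrps) candidates

        # With p = 4 this yields 16 candidates; excluding one sufficiently
        # reliable position halves the set to 8, which is the kind of
        # complexity saving the abstract refers to.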
    This dissertation also develops a graphics processing unit (GPU) based BTC decoding program. In order to hide the latency of arithmetic and memory access operations, this software applies a kernel structure that processes multiple BTC-words and allocates multiple sub-frames to each thread block. Global memory access optimization and data compression, which reduces the required shared memory space, are also employed. For efficient mapping of the Chase-Pyndiah algorithm onto GPUs, we propose parallel processing schemes employing efficient reduction algorithms and provide step-by-step parallel algorithms for the algebraic decoding. The last part of this dissertation is devoted to summarizing the developed decoding method and comparing it with the decoding of LDPC convolutional codes (LDPC-CCs), currently reported as the most powerful candidate for the 100Gbps optical network. We first investigate the complexity reduction and the error-rate performance improvement of the developed method. Then, we analyze the complexity of LDPC-CC decoding and compare it with the developed BTC decoding for 20% overhead codes. This dissertation is intended to develop high-throughput SD decoding software by introducing complexity reduction techniques for the Chase-Pyndiah algorithm and efficient parallel processing methods, and to emphasize the competitiveness of BTCs. The proposed decoding methods and parallel processing algorithms verified on the GPU-based systems are also applicable to hardware-based ones. By implementing hardware-based decoders that employ the methods developed in this dissertation, significant improvements in throughput and energy efficiency can be obtained. Moreover, thanks to the wide rate coverage of BTCs, the developed techniques can be applied to many high-throughput error correction applications, such as next-generation optical networks and storage device systems.
    Table of contents:
    Chapter 1 Introduction: 1.1 Turbo Codes; 1.2 Applications of Turbo Codes; 1.3 Outline of the Dissertation
    Chapter 2 Encoding and Iterative Decoding of Block Turbo Codes: 2.1 Introduction; 2.2 Encoding Procedure of Shortened-Extended BTCs; 2.3 Scheduling Methods for Iterative Decoding (2.3.1 Serial Scheduling; 2.3.2 Parallel Scheduling; 2.3.3 Replica Scheduling); 2.4 Elementary Decoding with Chase-Pyndiah Algorithm (2.4.1 Chase-Pyndiah Algorithm for Extended BTCs; 2.4.2 Reliability Computation of the ML Code-Word; 2.4.3 Algebraic Decoding for SEC and DEC BCH Codes); 2.5 Issues of Chase-Pyndiah Algorithm
    Chapter 3 Complexity Reduction Techniques for Code-Word Set Generation of the Chase-Pyndiah Algorithm: 3.1 Introduction; 3.2 Adaptive Selection of LRPs (3.2.1 Selection Constraints of LRPs; 3.2.2 Simulation Results); 3.3 Test Pattern Selection (3.3.1 The Error Coverage Factor of Test Patterns; 3.3.2 Greedy Selection of Test Patterns; 3.3.3 Simulation Results); 3.4 Concluding Remarks
    Chapter 4 Complexity Reduction Techniques for Soft-Output Update of the Chase-Pyndiah Algorithm: 4.1 Introduction; 4.2 Distance Computation (4.2.1 Position-Index List Based Method; 4.2.2 Double Index Set-Based Method; 4.2.3 Complexity Analysis; 4.2.4 Simulation Results); 4.3 Reliability Factor Determination (4.3.1 Refinement of Distance-Based Reliability Factor; 4.3.2 Adaptive Determination of the Reliability Factor; 4.3.3 Simulation Results); 4.4 Accuracy Improvement in Extrinsic Information Update (4.4.1 Drawbacks of the Sub-Optimal Update; 4.4.2 Low-Complexity Extrinsic Information Update; 4.4.3 Simulation Results); 4.5 Concluding Remarks
    Chapter 5 High-Throughput BTC Decoding on GPUs: 5.1 Introduction; 5.2 BTC Decoder Architecture for GPU Implementations; 5.3 Memory Optimization (5.3.1 Global Memory Access Reduction; 5.3.2 Improvement of Global Memory Access Coalescing; 5.3.3 Efficient Shared Memory Control with Data Compression; 5.3.4 Index Parity Check Scheme); 5.4 Parallel Algorithms with the CUDA Shuffle Function; 5.5 Implementation of Algebraic Decoder (5.5.1 Galois Field Operations with Look-Up Tables; 5.5.2 Error-Locator Polynomial Setting with the LUTs; 5.5.3 Parallel Chien Search with the LUTs); 5.6 Simulation Results; 5.7 Concluding Remarks
    Chapter 6 Competitiveness of BTCs as FEC Codes for the Next-Generation Optical Networks: 6.1 Introduction; 6.2 The Complexity Reduction of the Modified Chase-Pyndiah Algorithm (6.2.1 Summary of the Complexity Reduction; 6.2.2 The Error-Correcting Performance); 6.3 Comparison of BTCs and LDPC-CCs (6.3.1 Complexity Analysis of the LDPC-CC Decoding; 6.3.2 Comparison of the 20% Overhead BTC and LDPC-CC); 6.4 Concluding Remarks
    Chapter 7 Conclusion
    Bibliography
    Abstract (in Korean)
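    The soft-output update discussed in this abstract follows the standard Pyndiah rule; the sketch below is a straightforward, unoptimized version of that rule, shown only to make the distinction between the competing-code-word case and the reliability-factor case concrete. The dissertation's redundant-position pruning, adaptive reliability factor, and GPU parallelization are not reproduced, and the 1/4 normalization assumes antipodal (+1/-1) code symbols.

        import numpy as np

        def pyndiah_soft_output(r, candidates, decided, beta=0.5):
            """Soft output for one row/column code-word (simplified sketch).
            r          : soft input, shape (n,)
            candidates : candidate code-words from the test patterns, in {-1,+1}
            decided    : selected minimum-distance code-word, in {-1,+1}
            beta       : reliability factor used when no competitor exists."""
            r = np.asarray(r, dtype=float)
            candidates = np.asarray(candidates)
            decided = np.asarray(decided)
            d_metric = np.sum((r - decided) ** 2)        # metric of the decision
            soft = np.empty_like(r)
            for j in range(len(r)):
                # competing code-words disagree with the decision at position j
                rivals = candidates[candidates[:, j] != decided[j]]
                if len(rivals):
                    c_metric = np.min(np.sum((r - rivals) ** 2, axis=1))
                    soft[j] = (c_metric - d_metric) / 4.0 * decided[j]
                else:
                    soft[j] = beta * decided[j]          # no competitor: use beta
            extrinsic = soft - r                         # passed to the other dimension
            return soft, extrinsic

    In a full turbo iteration this update is applied to every row and then every column code-word, with the extrinsic part scaled by a per-iteration weighting factor before being fed back.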

    VLSI decoding architectures: flexibility, robustness and performance

    Stemming from previous studies on flexible LDPC decoders, this thesis work has been mainly focused on the development of flexible turbo and LDPC decoder designs, and on narrowing the power, area and speed gap they might present with respect to dedicated solutions. Additional studies have been carried out in the fields of improved code performance and of decoder resiliency to hardware errors. The first chapter groups several main contributions in the design and implementation of flexible channel decoders. The first part concerns the design of a Network-on-Chip (NoC) serving as an interconnection network for a partially parallel LDPC decoder. A best-fit NoC architecture is devised, and a complete multi-standard turbo/LDPC decoder is designed and implemented around it. Every time the code is changed, the decoder must be reconfigured. A number of variables influence the duration of the reconfiguration process, from the involved codes down to decoder design choices; these are taken into account in the designed flexible decoder, and novel traffic reduction and optimization methods are then implemented. In the second chapter a study on the early stopping of iterations for LDPC decoders is presented. The energy expenditure of any LDPC decoder is directly linked to the iterative nature of the decoding algorithm. We propose an innovative multi-standard early stopping criterion for LDPC decoders that observes the evolution of simple metrics and relies on on-the-fly threshold computation. Its effectiveness is evaluated against existing techniques both in terms of saved iterations and, after implementation, in terms of actual energy saving. The third chapter presents a study on the resilience of LDPC decoders under the effect of memory errors. Given that the purpose of channel decoders is to correct errors, LDPC decoders are intrinsically characterized by a certain degree of resistance to hardware faults. This characteristic, together with the soft nature of the stored values, results in LDPC decoders being affected differently according to the meaning of the wrong bits: ad-hoc error protection techniques, like the Unequal Error Protection devised in this chapter, can consequently be applied to different bits according to their significance. In the fourth chapter the serial concatenation of LDPC and turbo codes is presented. The concatenated FEC targets very high error correction capabilities, joining the performance of turbo codes at low SNR with that of LDPC codes at high SNR, and outperforming both current deep-space FEC schemes and concatenation-based FECs. A unified decoder for the concatenated scheme is subsequently proposed.
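    The thesis's multi-standard early stopping criterion with on-the-fly thresholds is not reproduced here; as a simple illustration of the general mechanism (stop the iterative decoder as soon as a metric indicates further iterations are useless), the sketch below wraps a generic message-passing iteration with a zero-syndrome check and a stagnation test on the number of unsatisfied parity checks. The update_fn placeholder, the patience parameter, and the stagnation rule are assumptions for illustration only.

        import numpy as np

        def decode_with_early_stop(H, llr_in, update_fn, max_iter=50, patience=3):
            """Iterative LDPC decoding loop with a simple early-stopping rule.
            H         : parity-check matrix (0/1 numpy array)
            llr_in    : channel LLRs (positive means bit 0 is more likely)
            update_fn : callable performing one message-passing iteration on the LLRs
            Stops when the syndrome is zero or the number of unsatisfied checks
            has not improved for `patience` consecutive iterations."""
            llr = np.asarray(llr_in, dtype=float).copy()
            best, stalled = np.inf, 0
            for it in range(1, max_iter + 1):
                llr = update_fn(H, llr)
                hard = (llr < 0).astype(np.uint8)            # hard decision
                unsatisfied = int(np.sum((H @ hard) % 2))    # failing parity checks
                if unsatisfied == 0:
                    return hard, it, True                    # valid codeword: stop early
                if unsatisfied < best:
                    best, stalled = unsatisfied, 0
                else:
                    stalled += 1
                    if stalled >= patience:                  # metric stopped improving
                        return hard, it, False
            return hard, max_iter, False

    Saved iterations translate directly into energy savings, which is the figure of merit the chapter evaluates after implementation.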

    Introductory Computer Forensics

    INTERPOL (the International Criminal Police Organization) built cybercrime programs to keep up with emerging cyber threats, and aims to coordinate and assist international operations for fighting crimes involving computers. Although significant international efforts are being made in dealing with cybercrime and cyber-terrorism, finding effective, cooperative, and collaborative ways to deal with complicated cases that span multiple jurisdictions has proven difficult in practice.