568 research outputs found

    Correcting Charge-Constrained Errors in the Rank-Modulation Scheme

    We investigate error-correcting codes for the rank-modulation scheme with an application to flash memory devices. In this scheme, a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. The resulting scheme eliminates the need for discrete cell levels, overcomes overshoot errors when programming cells (a serious problem that reduces the writing speed), and mitigates the problem of asymmetric errors. In this paper, we study the properties of error-correcting codes for charge-constrained errors in the rank-modulation scheme. In this error model, the number of errors corresponds to the minimal number of adjacent transpositions required to change a given stored permutation into another, erroneous one, a distance measure known as Kendall's τ-distance. We show bounds on the size of such codes, and use metric-embedding techniques to give constructions that translate a wealth of knowledge of codes in the Lee metric to codes over permutations in Kendall's τ-metric. Specifically, the one-error-correcting codes we construct are at least half the size of the ball-packing upper bound.
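
    As a concrete illustration of the distance measure, and not code from the paper itself, the Python sketch below computes Kendall's τ-distance between two permutations by counting discordant pairs, which equals the minimal number of adjacent transpositions; the function name and test values are our own.

```python
def kendall_tau_distance(p, q):
    """Minimal number of adjacent transpositions turning p into q.

    Equals the number of pairs ordered differently by the two
    permutations (discordant pairs, i.e. inversions).
    """
    # Where each element sits in q, so relative orders can be compared.
    pos_in_q = {v: i for i, v in enumerate(q)}
    # View p through q's coordinates; inversions here are discordant pairs.
    seq = [pos_in_q[v] for v in p]
    n = len(seq)
    return sum(1 for i in range(n) for j in range(i + 1, n) if seq[i] > seq[j])

# One adjacent transposition away:
assert kendall_tau_distance([1, 2, 3, 4], [1, 3, 2, 4]) == 1
# Full reversal is the maximum distance, n*(n-1)/2:
assert kendall_tau_distance([1, 2, 3, 4], [4, 3, 2, 1]) == 6
```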

    Coding theory, information theory and cryptology : proceedings of the EIDMA winter meeting, Veldhoven, December 19-21, 1994


    NASA SERC 1990 Symposium on VLSI Design

    This document contains papers presented at the first annual NASA Symposium on VLSI Design. NASA's involvement in this event demonstrates a need for research and development in high performance computing, which addresses problems faced by the scientific and industrial communities. High performance computing is needed for: (1) real-time manipulation of large data sets; (2) advanced systems control of spacecraft; (3) digital data transmission, error correction, and image compression; and (4) expert system control of spacecraft. Clearly, a valuable technology for meeting these needs is Very Large Scale Integration (VLSI). This conference addresses the following issues in VLSI design: (1) system architectures; (2) electronics; (3) algorithms; and (4) CAD tools.

    Synchronization with permutation codes and Reed-Solomon codes

    D.Ing. (Electrical and Electronic Engineering). We address the issue of synchronization, using sync-words (or markers), for encoded data. We focus on data encoded with permutation codes or Reed-Solomon codes. For each type of code we give a synchronization procedure or algorithm that improves synchronization compared to when the procedure is not employed. The figure of merit for judging performance is the probability of synchronization (acquisition); the word acquisition indicates that a sync-word is acquired, or found in the right place in a frame. A new synchronization procedure for permutation codes is presented. This procedure finds sync-words that can be used specifically with permutation codes, such that acceptable synchronization performance is possible even over channels with frequency-selective fading/jamming, such as the power line communication channel. Our new procedure is tested with permutation codes known as distance-preserving mappings (DPMs), chosen because they have defined encoding and decoding procedures. Another new procedure, for avoiding symbols in Reed-Solomon codes, is also presented; we call it symbol avoidance. The symbol avoidance procedure is then used to improve the synchronization performance of Reed-Solomon codes, where known binary sync-words are used for synchronization. We give performance comparison results, in terms of probability of synchronization, comparing Reed-Solomon codes with and without symbol avoidance applied.
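
    As a hedged sketch of the general idea, and not the thesis's actual algorithm, the Python snippet below simulates marker-based frame synchronization over a binary symmetric channel and estimates the probability of acquisition, i.e. the fraction of frames in which the sync-word is found in the right place; the marker, frame size, and error rate are illustrative assumptions.

```python
import random

def find_marker(bits, marker):
    """First index at which marker occurs in bits, or -1 if absent."""
    m = len(marker)
    for i in range(len(bits) - m + 1):
        if bits[i:i + m] == marker:
            return i
    return -1

def acquisition_probability(marker, payload_len, flip_prob, trials=10_000):
    """Monte Carlo estimate of P(acquisition) on a binary symmetric channel."""
    hits = 0
    for _ in range(trials):
        payload = [random.randint(0, 1) for _ in range(payload_len)]
        frame = marker + payload          # sync-word transmitted at frame start
        received = [b ^ (random.random() < flip_prob) for b in frame]
        hits += find_marker(received, marker) == 0  # found in the right place
    return hits / trials

# A 7-bit Barker-like marker over a channel flipping 1% of bits.
print(acquisition_probability([1, 1, 1, 0, 0, 1, 0], payload_len=64, flip_prob=0.01))
```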

    Resiliency Mechanisms for In-Memory Column Stores

    The key objective of database systems is to reliably manage data, while high query throughput and low query latency are core requirements. To date, database research has mostly concentrated on the latter. However, due to the constant shrinking of transistor feature sizes, integrated circuits are becoming more and more unreliable, and transient hardware errors in the form of multi-bit flips are becoming more prominent. A 2013 study of a large high-performance cluster with around 8,500 nodes measured a failure rate of 40 FIT per DRAM device; for that system, this means a single- or multi-bit flip occurs every 10 hours, which is unacceptably high for enterprise and HPC scenarios. Causes include cosmic rays, heat, and electrical crosstalk, with the latter being actively exploited through the RowHammer attack. It has been shown that memory cells are more prone to bit flips than logic gates, and several surveys have found multi-bit flip events in the main memory modules of today's data centers. Due to the shift towards in-memory data management systems, where all business-related data and query intermediate results are kept solely in fast main memory, such systems are at serious risk of delivering corrupt results to their users. Hardware techniques cannot be scaled to compensate for the exponentially increasing error rates. In other domains there is growing interest in software-based solutions to this problem, but the proposed methods carry large runtime and/or storage overheads, which are unacceptable for in-memory data management systems. In this thesis, we investigate how to integrate bit flip detection mechanisms into in-memory data management systems. To achieve this goal, we first build an understanding of bit flip detection techniques and select two error codes, AN codes and XOR checksums, suited to the requirements of in-memory data management systems. The most important requirement is the codes' effectiveness at detecting bit flips; we meet this goal through AN codes, which exhibit better and adaptable error detection capabilities compared to those found in today's hardware. The second most important goal is efficiency in terms of coding latency; we meet this by introducing fundamental performance improvements to AN codes and by vectorizing both chosen codes' operations. We integrate bit flip detection mechanisms into the lowest storage layer and the query processing layer in such a way that the rest of the data management system and the user can stay oblivious to any error detection. This covers both base columns and pointer-heavy index structures such as the ubiquitous B-Tree. Additionally, our approach allows adaptable, on-the-fly bit flip detection during query processing, with only very little impact on query latency, and AN coding allows intermediate results to be recoded with virtually no performance penalty. We support our claims with exhaustive runtime and throughput measurements throughout the thesis and with an end-to-end evaluation using the Star Schema Benchmark. To the best of our knowledge, we are the first to present such holistic and fast bit flip detection in a large software infrastructure such as an in-memory data management system.
Finally, most of the source code fragments used to obtain the results in this thesis are open source and freely available.

Contents:
1 INTRODUCTION: 1.1 Contributions of this Thesis; 1.2 Outline
2 PROBLEM DESCRIPTION AND RELATED WORK: 2.1 Reliable Data Management on Reliable Hardware; 2.2 The Shift Towards Unreliable Hardware; 2.3 Hardware-Based Mitigation of Bit Flips; 2.4 Data Management System Requirements; 2.5 Software-Based Techniques For Handling Bit Flips (2.5.1 Operating System-Level Techniques; 2.5.2 Compiler-Level Techniques; 2.5.3 Application-Level Techniques); 2.6 Summary and Conclusions
3 ANALYSIS OF CODING TECHNIQUES: 3.1 Selection of Error Codes (3.1.1 Hamming Coding; 3.1.2 XOR Checksums; 3.1.3 AN Coding; 3.1.4 Summary and Conclusions); 3.2 Probabilities of Silent Data Corruption (3.2.1 Probabilities of Hamming Codes; 3.2.2 Probabilities of XOR Checksums; 3.2.3 Probabilities of AN Codes; 3.2.4 Concrete Error Models; 3.2.5 Summary and Conclusions); 3.3 Throughput Considerations (3.3.1 Test Systems Descriptions; 3.3.2 Vectorizing Hamming Coding; 3.3.3 Vectorizing XOR Checksums; 3.3.4 Vectorizing AN Coding; 3.3.5 Summary and Conclusions); 3.4 Comparison of Error Codes (3.4.1 Effectiveness; 3.4.2 Efficiency; 3.4.3 Runtime Adaptability); 3.5 Performance Optimizations for AN Coding (3.5.1 The Modular Multiplicative Inverse; 3.5.2 Faster Softening; 3.5.3 Faster Error Detection; 3.5.4 Comparison to Original AN Coding; 3.5.5 The Multiplicative Inverse Anomaly); 3.6 Summary
4 BIT FLIP DETECTING STORAGE: 4.1 Column Store Architecture (4.1.1 Logical Data Types; 4.1.2 Storage Model; 4.1.3 Data Representation; 4.1.4 Data Layout; 4.1.5 Tree Index Structures; 4.1.6 Summary); 4.2 Hardened Data Storage (4.2.1 Hardened Physical Data Types; 4.2.2 Hardened Lightweight Compression; 4.2.3 Hardened Data Layout; 4.2.4 UDI Operations; 4.2.5 Summary and Conclusions); 4.3 Hardened Tree Index Structures (4.3.1 B-Tree Verification Techniques; 4.3.2 Justification For Further Techniques; 4.3.3 The Error Detecting B-Tree); 4.4 Summary
5 BIT FLIP DETECTING QUERY PROCESSING: 5.1 Column Store Query Processing; 5.2 Bit Flip Detection Opportunities (5.2.1 Early Onetime Detection; 5.2.2 Late Onetime Detection; 5.2.3 Continuous Detection; 5.2.4 Miscellaneous Processing Aspects; 5.2.5 Summary and Conclusions); 5.3 Hardened Intermediate Results (5.3.1 Materialization of Hardened Intermediates; 5.3.2 Hardened Bitmaps); 5.4 Summary
6 END-TO-END EVALUATION: 6.1 Prototype Implementation (6.1.1 AHEAD Architecture; 6.1.2 Diversity of Physical Operators; 6.1.3 One Concrete Operator Realization; 6.1.4 Summary and Conclusions); 6.2 Performance of Individual Operators (6.2.1 Selection on One Predicate; 6.2.2 Selection on Two Predicates; 6.2.3 Join Operators; 6.2.4 Grouping and Aggregation; 6.2.5 Delta Operator; 6.2.6 Summary and Conclusions); 6.3 Star Schema Benchmark Queries (6.3.1 Query Runtimes; 6.3.2 Improvements Through Vectorization; 6.3.3 Storage Overhead; 6.3.4 Summary and Conclusions); 6.4 Error Detecting B-Tree (6.4.1 Single Key Lookup; 6.4.2 Key Value-Pair Insertion); 6.5 Summary
7 SUMMARY AND CONCLUSIONS: 7.1 Future Work
A APPENDIX: A.1 List of Golden As; A.2 More on Hamming Coding (A.2.1 Code examples; A.2.2 Vectorization)
BIBLIOGRAPHY; LIST OF FIGURES; LIST OF TABLES; LIST OF LISTINGS; LIST OF ACRONYMS; LIST OF SYMBOLS; LIST OF DEFINITIONS
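
    As a minimal sketch of the AN-coding idea described in the abstract above, and not the thesis's implementation, the following Python snippet encodes each data word n as the product A·n and detects bit flips with a single modulo check; the constant A below is an arbitrary odd example, not one of the thesis's "golden As".

```python
A = 61  # assumed example constant; any odd A > 1 detects all single-bit flips

def an_encode(n: int) -> int:
    """Store n as the multiple A * n."""
    return A * n

def an_check(word: int) -> bool:
    """A stored word is valid iff it is still a multiple of A."""
    return word % A == 0

def an_decode(word: int) -> int:
    if not an_check(word):
        raise ValueError("bit flip detected: word is not a multiple of A")
    return word // A

stored = an_encode(12345)
assert an_decode(stored) == 12345

# Flipping any single bit changes the word by +/- 2**k, which is never a
# multiple of an odd A, so every single-bit flip is detected; wider flip
# patterns are caught with high probability, tunable via the choice of A.
corrupted = stored ^ (1 << 7)
assert not an_check(corrupted)
```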

    Overcoming CubeSat downlink limits with VITAMIN: a new variable coded modulation protocol

    Thesis (M.S.), University of Alaska Fairbanks, 2013. Many space missions, including low earth orbit CubeSats, communicate in a highly dynamic environment because of variations in geometry, weather, and interference. At the same time, most missions communicate using fixed channel codes, modulations, and symbol rates, resulting in a constant data rate that does not adapt to the dynamic conditions. When conditions are good, the fixed data rate can be far below the theoretical maximum, called the Shannon limit; when conditions are bad, the fixed data rate may not work at all. To move beyond these fixed communications and achieve higher total data volume from emerging high-tech instruments, this thesis investigates the use of error correcting codes and different modulations. Variable coded modulation (VCM) takes advantage of the dynamic link by transmitting more information when the signal-to-noise ratio (SNR) is high. Likewise, VCM can throttle down the information rate when SNR is low without having to stop all communications. VCM outperforms fixed communications, which can only operate at a fixed information rate as long as a certain signal threshold is met. This thesis presents a new VCM protocol and tests its performance in both software and hardware simulations. The protocol is geared towards CubeSat downlinks, as complexity is concentrated in the receiver while the transmission operations are kept simple. This thesis also explores bin-packing as a way to optimize the selection of VCM modes based on expected SNR levels over time. Working end-to-end simulations were created using MATLAB and LabVIEW, while the hardware simulations were done with software defined radios. Results show that a CubeSat using VCM communications will deliver twice the data throughput of a fixed communications system.
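
    To illustrate the core VCM idea from the abstract above, and without claiming anything about VITAMIN's actual mode table, the Python sketch below picks, for each predicted SNR, the highest-rate code/modulation pair whose threshold is still met; the mode names, thresholds, and rates are invented placeholders.

```python
MODES = [
    # (name, minimum required SNR in dB, information rate in bits/symbol)
    ("BPSK rate-1/2",  2.0, 0.50),
    ("QPSK rate-1/2",  5.0, 1.00),
    ("QPSK rate-3/4",  7.5, 1.50),
    ("8PSK rate-3/4", 11.0, 2.25),
]

def select_mode(snr_db):
    """Highest-throughput mode the predicted SNR can support, else None."""
    feasible = [m for m in MODES if snr_db >= m[1]]
    return max(feasible, key=lambda m: m[2]) if feasible else None

# Over a pass, the link throttles up and down with the predicted SNR,
# instead of running one fixed mode sized for the worst case.
for snr in [1.0, 4.0, 8.0, 12.0]:
    mode = select_mode(snr)
    print(f"{snr:5.1f} dB -> {mode[0] if mode else 'no link'}")
```

    On top of this per-SNR selection, the thesis further optimizes which modes to schedule over a pass using bin-packing against the expected SNR profile.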