
    Error control for reliable digital data transmission and storage systems

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple-bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized as 32K × 8 bits. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient, low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. In this paper we present some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high-speed operation. These techniques are designed to find the error locations and the error values directly from the syndrome, without having to use the iterative algorithm to find the error locator polynomial. Two codes are considered: (1) a d_min = 4 single-byte-error-correcting (SBEC), double-byte-error-detecting (DBED) RS code; and (2) a d_min = 6 double-byte-error-correcting (DBEC), triple-byte-error-detecting (TBED) RS code.
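    The direct-from-syndrome idea above lends itself to a short illustration. The following is a minimal sketch, not the paper's exact extended-code construction: a single-byte-error-correcting RS-style decoder over GF(2^8) that reads the error value and location straight out of two syndromes, with no error-locator polynomial. The primitive polynomial (0x11D) and the all-ones first parity row are assumptions.

```python
# Minimal sketch (not the paper's exact construction): correct one byte
# error directly from two syndromes, skipping the iterative error-locator
# step. The primitive polynomial 0x11D for GF(2^8) is an assumption.

EXP = [0] * 512            # antilog table: EXP[i] = alpha^i
LOG = [0] * 256            # log table:     LOG[alpha^i] = i
x = 1
for i in range(255):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D         # reduce modulo the primitive polynomial
for i in range(255, 512):
    EXP[i] = EXP[i - 255]  # doubled table avoids a mod in multiplication

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def syndromes(word):
    # S0 = sum of bytes, S1 = sum of byte_i * alpha^i (GF(2^8) addition is XOR)
    s0 = s1 = 0
    for i, byte in enumerate(word):
        s0 ^= byte
        s1 ^= gf_mul(byte, EXP[i])
    return s0, s1

def correct_single_byte(word):
    # For an error e at position i: S0 = e and S1 = e * alpha^i, so the
    # location is log(S1) - log(S0) and the value is S0 itself.
    s0, s1 = syndromes(word)
    if s0 == 0 and s1 == 0:
        return None                      # no error
    pos = (LOG[s1] - LOG[s0]) % 255
    word[pos] ^= s0                      # XOR out the error value
    return pos

word = [0] * 16                          # the all-zero word is a valid codeword
word[5] ^= 0x2A                          # inject a single-byte error
assert correct_single_byte(word) == 5 and word == [0] * 16
```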

    Design of a fault tolerant airborne digital computer. Volume 1: Architecture

    This volume is concerned with the architecture of a fault-tolerant digital computer for an advanced commercial aircraft. All of the computations of the aircraft, including those presently carried out by analogue techniques, are to be carried out in this digital computer. Among the important qualities of the computer are the following: (1) The capacity is to be matched to the aircraft environment. (2) The reliability is to be selectively matched to the criticality and deadline requirements of each of the computations. (3) The system is to be readily expandable and contractible. (4) The design is to be appropriate to post-1975 technology. Three candidate architectures are discussed and assessed in terms of the above qualities. Of the three candidates, a newly conceived architecture, Software Implemented Fault Tolerance (SIFT), provides the best match to the above qualities. In addition, SIFT is particularly simple and believable. The other candidates, the Bus Checker System (BUCS), also newly conceived in this project, and the Hopkins multiprocessor, are potentially more efficient than SIFT in the use of redundancy, but otherwise are not as attractive.
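    Since SIFT implements its fault tolerance entirely in software, the core mechanism can be suggested in a few lines. The sketch below is illustrative only and is not the SIFT design: each task is run on several processors and the results are majority-voted, so a faulty unit is simply outvoted; the replica functions here are invented for the example.

```python
# Hedged sketch of software-implemented fault tolerance: replicate a task
# across processors, then majority-vote on the results so one faulty
# replica is masked without any special voting hardware.
from collections import Counter

def voted_result(replica_outputs):
    """Return the strict-majority value among replica outputs."""
    value, count = Counter(replica_outputs).most_common(1)[0]
    if count <= len(replica_outputs) // 2:
        raise RuntimeError("no majority: too many faulty replicas")
    return value

# Three replicas of the same (hypothetical) control-law step; one is faulty.
replicas = [lambda x: 2 * x + 1, lambda x: 2 * x + 1, lambda x: 0]
outputs = [f(21) for f in replicas]
assert voted_result(outputs) == 43       # the faulty replica is outvoted
```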

    Reed Solomon codes for error control in byte organized computer memory systems

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple-bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized as 32K × 8 bits. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient, low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high-speed operation are presented. These techniques are designed to find the error locations and the error values directly from the syndrome, without having to use the iterative algorithm to find the error locator polynomial.
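    Complementing the single-error sketch after the first entry above, the hedged fragment below suggests how a d_min = 4 SBEC/DBED code can avoid miscorrecting double errors. The parity rows 1, alpha^i, alpha^(2i) and the field polynomial are assumptions: for a lone error e at position i, S0 = e, S1 = e·alpha^i, S2 = e·alpha^(2i), so S1^2 = S0·S2 holds; a double byte error breaks that consistency and is flagged instead.

```python
# Hedged sketch of single-error correction with double-error detection,
# assuming syndromes against parity rows 1, alpha^i, alpha^(2i) over GF(2^8).
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D                     # primitive polynomial (assumed)
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def classify(s0, s1, s2):
    """Map a syndrome triple to: no error, correctable single, or detected double."""
    if s0 == s1 == s2 == 0:
        return "no error"
    if s0 and s1 and s2 and gf_mul(s1, s1) == gf_mul(s0, s2):
        pos = (LOG[s1] - LOG[s0]) % 255
        return f"single error: value {s0:#x} at byte {pos}"
    return "uncorrectable (likely double) error detected"

e, i = 0x2A, 5                          # a lone error is located and valued
single = (e, gf_mul(e, EXP[i]), gf_mul(e, EXP[2 * i]))
print(classify(*single))                # -> single error: value 0x2a at byte 5
double = (3 ^ 5, 3 ^ gf_mul(5, EXP[7]), 3 ^ gf_mul(5, EXP[14]))
print(classify(*double))                # -> uncorrectable ... detected
```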

    Integer Codes Correcting Single Errors and Detecting Burst Errors Within a Byte

    Correcting single errors and detecting adjacent errors has become important in memory systems using high-density DRAM chips. The reason is that, in these systems, the strike of a single energetic particle can upset one or more adjacent bits. In this article, we present a simple solution for this problem based on integer codes capable of correcting single errors and detecting l-bit burst errors confined to a b-bit byte (1 < l < b). Unlike the classical approach, the proposed one does not rely on dedicated encoding/decoding hardware. Instead, it uses the processor as both encoder and decoder. The effectiveness of such a solution is demonstrated on a theoretical model of an eight-core processor. The obtained results show that it has the potential to be used in future DDR5 systems.

    This is the peer-reviewed version of the paper: Radonjic, A., 2020. Integer Codes Correcting Single Errors and Detecting Burst Errors Within a Byte. IEEE Transactions on Device and Materials Reliability 20, 748–753. [https://doi.org/10.1109/TDMR.2020.3033511] Published version: [https://hdl.handle.net/21.15107/rcub_dais_9998]
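    A toy-scale sketch of the integer-code idea follows. The parameters (b = 8, modulus 2^8 - 1, the coefficient list, data bytes restricted to 0..254) are illustrative assumptions, not the paper's construction, and burst detection is omitted. The point the abstract makes is visible in the code: encoding and decoding are ordinary multiply-add arithmetic plus a table lookup, so the processor itself serves as encoder and decoder.

```python
# Hedged sketch of an integer code correcting single-bit errors using only
# processor arithmetic: a weighted checksum modulo M and a syndrome table.
B = 8
M = (1 << B) - 1                        # check arithmetic is modulo 2^b - 1
COEFFS = [3, 5, 7, 9, 11, 13, M - 1]    # last coefficient covers the check byte

def encode(data):
    # Check byte makes the weighted sum of all bytes vanish modulo M.
    # Data bytes are assumed to lie in 0..254 (255 aliases 0 mod M).
    return data + [sum(c * d for c, d in zip(COEFFS, data)) % M]

# Precompute syndrome -> (position, error) for every single-bit flip; the
# assert documents that these coefficients give collision-free syndromes.
SYND = {}
for pos, c in enumerate(COEFFS):
    for bit in range(B):
        for e in (1 << bit, -(1 << bit)):
            s = (c * e) % M
            assert s not in SYND
            SYND[s] = (pos, e)

def decode(word):
    s = sum(c * x for c, x in zip(COEFFS, word)) % M
    if s == 0:
        return word[:-1]                # clean: strip the check byte
    if s not in SYND:
        return None                     # detected but not correctable
    pos, e = SYND[s]
    fixed = list(word)
    fixed[pos] -= e                     # undo the single-bit flip
    return fixed[:-1]

codeword = encode([17, 42, 0, 99, 250, 8])
codeword[3] ^= 0x10                     # flip one bit of a data byte
assert decode(codeword) == [17, 42, 0, 99, 250, 8]
```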

    Communications and information research: Improved space link performance via concatenated forward error correction coding

    With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability, but also imposes demands on the space-to-ground communication link and the ground data management and communication system. Data compression and error-control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: lossless techniques, which guarantee full reconstruction of the data, and lossy techniques, which generally give a higher data compaction ratio but incur some distortion in the reconstructed data. To satisfy the many science disciplines which NASA supports, lossless data compression becomes a primary focus of the technology development. When transmitting data obtained by lossless compression, it is very important to use an error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To transmit the data obtained by the Rice algorithm more efficiently, the a posteriori probability (APP) of each decoded bit is required. A relevant algorithm for this purpose has been proposed which minimizes the bit error probability in decoding linear block and convolutional codes and provides the APP for each decoded bit. However, recent results on the iterative decoding of 'Turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques. During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data; (2) a new approach to determining the generalized Hamming weights of the algebraic-geometric codes defined by a large class of curves in high-dimensional spaces; (3) some efficient improved geometric Goppa codes for disk memory systems and high-speed mass memory systems; and (4) a tree-based approach to data compression using dynamic programming.
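    Since the Rice algorithm anchors the compression side of this work, a brief sketch of plain Golomb-Rice coding may help; the improved and extended variants the abstract mentions are not reproduced here, and the parameter k = 2 in the demo is an assumption. Each nonnegative sample is split into a unary-coded quotient and a k-bit binary remainder.

```python
# Hedged sketch of baseline Golomb-Rice coding with parameter k.
def rice_encode(values, k):
    # unary(v >> k) terminated by a 0, then the low k bits of v
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0")
        if k:
            bits.append(format(r, f"0{k}b"))
    return "".join(bits)

def rice_decode(bits, k, count):
    values, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == "1":           # read the unary quotient
            q, i = q + 1, i + 1
        i += 1                          # skip the terminating 0
        r = int(bits[i:i + k], 2) if k else 0
        i += k
        values.append((q << k) | r)
    return values

samples = [3, 0, 7, 12, 1]
code = rice_encode(samples, k=2)        # '011' '000' '1011' '111000' '001'
assert rice_decode(code, k=2, count=len(samples)) == samples
```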

    Standard interface definition for avionics data bus systems

    Data bus for the avionics system of the space shuttle, noting the functions of the interface unit, error detection and recovery, redundancy, and bus control philosophy.

    Modifying Hamming code and using the replication method to protect memory against triple soft errors

    As technology scaling increases computer memory's bit-cell density and reduces the voltage of semiconductors, the number of soft errors due to radiation-induced single event upsets (SEU) and multi-bit upsets (MBU) also increases. To address this, error-correcting codes (ECC) can be used to detect and correct soft errors, while x-modular redundancy improves fault tolerance. This paper presents a technique that provides high error-correction performance, high speed, and low complexity. The proposed technique ensures that only correct values are passed to the system output or processed, despite the presence of up to three bit errors. The Hamming code is modified in order to provide a high probability of MBU detection. In addition, the paper describes the new technique and the associated analysis scheme for its implementation. The new technique has been simulated, evaluated, and compared to error-correcting codes of similar decoding complexity to quantify the required overheads, the gained ability to protect data against three-bit errors, and the reduction in the misdetection and false-detection probabilities for four-bit errors.
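    The two ingredients the abstract combines can be suggested in miniature. The sketch below is not the paper's modified code: a standard Hamming(7,4) codeword corrects one bit within a copy, and bitwise 2-of-3 voting across three stored replicas masks further upsets, so up to three bit errors (one per replica in the demo) leave the output intact.

```python
# Hedged sketch: Hamming(7,4) SEC plus triple replication with bit voting.
# Word layout: [d0, d1, d2, d3, p0, p1, p2] (systematic, assumed here).
PARITY_EQS = [(0, 1, 3), (0, 2, 3), (1, 2, 3)]   # data bits each parity covers
SYN_TO_POS = {3: 0, 5: 1, 6: 2, 7: 3, 1: 4, 2: 5, 4: 6}

def encode(nibble):
    return nibble + [sum(nibble[i] for i in eq) % 2 for eq in PARITY_EQS]

def decode(word):
    word = list(word)
    syn = 0
    for k, eq in enumerate(PARITY_EQS):
        if (sum(word[i] for i in eq) + word[4 + k]) % 2:
            syn |= 1 << k
    if syn:
        word[SYN_TO_POS[syn]] ^= 1               # flip the suspect bit
    return word[:4]

def vote(copies):
    """Bitwise 2-of-3 majority over three stored codeword copies."""
    return [1 if sum(bits) >= 2 else 0 for bits in zip(*copies)]

data = [1, 0, 1, 1]
copies = [encode(data) for _ in range(3)]
copies[0][6] ^= 1; copies[1][2] ^= 1; copies[2][0] ^= 1  # three separate upsets
assert decode(vote(copies)) == data
```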