
    Modifying Hamming code and using the replication method to protect memory against triple soft errors

    As technology scaling increases computer memory’s bit-cell density and reduces semiconductor supply voltages, the number of soft errors due to radiation-induced single event upsets (SEUs) and multi-bit upsets (MBUs) also increases. To address this, error-correcting codes (ECC) can be used to detect and correct soft errors, while x-modular redundancy improves fault tolerance. This paper presents a technique that provides high error-correction performance, high speed, and low complexity. The proposed technique ensures that only correct values are passed to the system output or processed, even in the presence of up to three-bit errors. The Hamming code is modified to provide a high probability of MBU detection. In addition, the paper describes the new technique and the associated analysis scheme for its implementation. The new technique has been simulated, evaluated, and compared with error-correcting codes of similar decoding complexity in order to quantify the required overheads, the gained ability to protect data against three-bit errors, and the reduction in the misdetection and false-detection probabilities for four-bit errors.
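    The abstract does not spell out the modified code itself, so the sketch below is only a minimal illustration of the classic Hamming(7,4) single-error-correcting baseline that such schemes start from; the matrices, helper names, and injected error position are assumptions for the example, not the paper's construction.

```python
# Minimal sketch of the Hamming(7,4) baseline (illustrative only; the paper's
# modified code and replication scheme are not reproduced in the abstract).
import numpy as np

# Systematic generator G = [I | P] and parity-check matrix H = [P^T | I].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    """Encode 4 data bits into a 7-bit Hamming codeword (mod-2 arithmetic)."""
    return np.mod(np.array(data4) @ G, 2)

def decode(word7):
    """Correct a single-bit error via the syndrome, then return the data bits."""
    word = np.array(word7).copy()
    syndrome = np.mod(H @ word, 2)
    if syndrome.any():
        # The syndrome equals the column of H at the flipped position.
        for pos in range(7):
            if np.array_equal(H[:, pos], syndrome):
                word[pos] ^= 1
                break
    return word[:4]

# Example: a single-bit upset (SEU) is corrected transparently.
data = [1, 0, 1, 1]
cw = encode(data)
cw[2] ^= 1                      # inject a single-bit error
assert list(decode(cw)) == data
```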

    MCU Tolerance in SRAMs through Low Redundancy Triple Adjacent Error Correction

    (c) 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    Static random access memories (SRAMs) are key components of electronic systems. They are used not only as standalone devices, but also embedded in application-specific integrated circuits. One key challenge for memories is their susceptibility to radiation-induced soft errors that change the value of memory cells. Error correction codes (ECCs) are commonly used to ensure correct data despite soft error effects in semiconductor memories. Single error correction/double error detection (SEC-DED) codes have traditionally been the preferred choice for data protection in SRAMs. During the last decade, the percentage of errors that affect more than one memory cell has increased substantially, mainly due to multiple cell upsets (MCUs) caused by radiation. The bits affected by these errors are physically close. To mitigate their effects, ECCs that correct single errors and double adjacent errors have been proposed. These codes, known as single error correction/double adjacent error correction (SEC-DAEC) codes, require the same number of parity bits as traditional SEC-DED codes and only a moderate increase in decoder complexity. However, MCUs are not limited to double adjacent errors, because they affect more bits as technology scales. In this brief, new codes that can correct triple adjacent errors and 3-bit burst errors are presented. They have been implemented using a 45-nm library and compared with previous proposals, showing that our codes provide better error protection with moderate overhead and low redundancy.

    This work was supported in part by the Universitat Politecnica de Valencia, Valencia, Spain, through the DesTT Research Project under Grant SP20120806; in part by the Spanish Ministry of Science and Education under Project AYA-2009-13300-C03; in part by the Arenes Research Project under Grant TIN2012-38308-C02-01; and in part by the Research Project entitled Manufacturable and Dependable Multicore Architectures at Nanoscale within the framework of COST ICT Action under Grant 1103.

    Saiz-Adalid, L.; Reviriego, P.; Gil, P.; Pontarelli, S.; Maestro, J. A. (2015). MCU Tolerance in SRAMs through Low Redundancy Triple Adjacent Error Correction. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 23(10), 2332-2336. https://doi.org/10.1109/TVLSI.2014.2357476
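    The brief's actual parity-check matrices are not reproduced in the abstract. The sketch below only illustrates the design condition behind SEC-DAEC and triple-adjacent-error-correcting codes: every single and adjacent 2-bit/3-bit error pattern must map to a distinct nonzero syndrome. The helper name and the Hamming(7,4) matrix used to show a failing case are illustrative assumptions.

```python
# Illustrative check (not the codes from the brief): a SEC-DAEC or triple
# adjacent error correcting code needs a parity-check matrix H under which
# all single and adjacent multi-bit error patterns give distinct, nonzero
# syndromes.  Bursts with interior zeros are omitted here for brevity.
import numpy as np

def syndromes_unique(H, max_burst):
    """True if every contiguous error burst of length 1..max_burst produces a
    distinct, nonzero syndrome under H (i.e. the decoder can identify it)."""
    n = H.shape[1]
    seen = set()
    for burst in range(1, max_burst + 1):
        for start in range(n - burst + 1):
            e = np.zeros(n, dtype=int)
            e[start:start + burst] = 1           # adjacent error pattern
            s = tuple(np.mod(H @ e, 2))
            if not any(s) or s in seen:          # zero or aliased syndrome
                return False
            seen.add(s)
    return True

# Plain Hamming(7,4) corrects single errors but aliases double-adjacent
# errors, which is why SEC-DAEC codes require a differently designed H.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
print(syndromes_unique(H, 1))   # True  -> SEC
print(syndromes_unique(H, 2))   # False -> not DAEC
```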

    Low-Power Embedded Design Solutions and Low-Latency On-Chip Interconnect Architecture for System-On-Chip Design

    This dissertation presents three design solutions that address several key system-on-chip (SoC) issues to achieve low power and high performance: 1) joint source and channel decoding (JSCD) schemes for low-power SoCs used in portable multimedia systems, 2) an efficient on-chip interconnect architecture for massive multimedia data streaming on multiprocessor SoCs (MPSoCs), and 3) a data processing architecture for low-power SoCs in distributed sensor system (DSS) applications and its implementation. The first part includes a low-power embedded low-density parity-check (LDPC) - H.264 joint decoding architecture that lowers the baseband energy consumption of a channel decoder using joint source decoding and dynamic voltage and frequency scaling (DVFS). A low-power multiple-input multiple-output (MIMO) and H.264 video joint detector/decoder design that minimizes energy for portable, wireless embedded systems is also presented. In the second part, a link-level quality of service (QoS) scheme using unequal error protection (UEP) for low-power network-on-chip (NoC) designs and low-latency on-chip network designs for MPSoCs are proposed. This part contains WaveSync, a low-latency-focused network-on-chip architecture for globally-asynchronous locally-synchronous (GALS) designs, and a simultaneous dual-path routing (SDPR) scheme that exploits the path diversity present in typical mesh-topology networks-on-chip. SDPR is akin to having a higher link width, but without the significant hardware overhead associated with simple bus-width scaling. The last part presents data processing unit designs for embedded SoCs. We propose a data processing and control logic design for a new radiation detection sensor system generating data at or above the petabit-per-second level. Implementation results show that the intended clock rate is achieved within the power target of less than 200 mW. We also present a digital signal processing (DSP) accelerator supporting configurable MAC, FFT, FIR, and 3-D cross product operations for embedded SoCs. It consumes 12.35 mW and occupies 0.167 mm² at 333 MHz.
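    The abstract does not describe the SDPR mechanism in detail. As a hedged illustration of the mesh path diversity it relies on, the sketch below computes the link-disjoint XY and YX dimension-order routes between two mesh nodes, over which a packet could be split; the routing helpers and node coordinates are assumptions for the example, not the dissertation's algorithm.

```python
# Generic sketch of path diversity in a 2-D mesh NoC: XY and YX dimension-order
# routes are link-disjoint whenever source and destination differ in both
# coordinates, so traffic can be split across two parallel paths.

def xy_route(src, dst):
    """Route along X first, then Y (dimension-order routing)."""
    (x, y), (dx, dy) = src, dst
    hops = [(x, y)]
    while x != dx:
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops

def yx_route(src, dst):
    """Route along Y first, then X, by swapping coordinates."""
    swapped = xy_route(src[::-1], dst[::-1])
    return [hop[::-1] for hop in swapped]

def links(path):
    """Undirected links traversed by a route."""
    return {frozenset(pair) for pair in zip(path, path[1:])}

src, dst = (0, 0), (3, 2)
a, b = xy_route(src, dst), yx_route(src, dst)
print(a)                                   # X-first route
print(b)                                   # Y-first route
print(links(a).isdisjoint(links(b)))       # True -> two link-disjoint paths
```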

    Error detecting decimal codes


    Wavelet-based multi-carrier code division multiple access systems


    Engineering evaluations and studies. Volume 3: Exhibit C

    High-rate multiplexer asymmetry and jitter, data-dependent amplitude variations, and transition density are discussed.

    Electronic systems-1. Lecture notes

    The discipline «Electronic Systems» belongs to the cycle of professional and practical training of bachelors in the educational program «Electronic Components and Systems». It is taught over one semester (the 7th) and is one of the final subjects of the bachelor's degree. In the course, students become acquainted with informational assessments of electronic systems (ES); descriptions of the signals used in ES of different purposes; methods of signal processing, storage and transformation; and the principles of construction and operation of ES for the selection, transformation, transmission, reception, registration and display of information. The basics of device design based on field-programmable gate arrays (FPGAs) are also considered. The lecture notes contain theoretical material for up to 18 lectures and a list of recommended reading.

    On Transmission System Design for Wireless Broadcasting

    This thesis considers aspects related to the design and standardisation of transmission systems for wireless broadcasting, comprising terrestrial and mobile reception. The purpose is to identify which factors influence the technical decisions and which issues could be better considered in the design process in order to assess different use cases, service scenarios and end-user quality. Further, the necessity of cross-layer optimisation for efficient data transmission is emphasised and means to take it into consideration are suggested. The work is mainly related to terrestrial and mobile digital video broadcasting systems, but many of the findings can also be generalised to other transmission systems and design processes. The work has led to three main conclusions. First, it is discovered that there are no sufficiently accurate error criteria for measuring the subjectively perceived audiovisual quality that could be utilised in transmission system design. Means for designing new error criteria for mobile TV (television) services are suggested, and similar work related to other services is recommended. Second, it is suggested that, in addition to commercial requirements, there should be technical requirements setting the framework for the design process of a new transmission system. The technical requirements should include the assessed reception conditions, technical quality of service and service functionalities. Reception conditions comprise radio channel models, receiver types and antenna types. Technical quality of service consists of bandwidth, timeliness and reliability. Of these, the thesis focuses on radio channel models and error criteria (reliability) as two of the most important design challenges and provides means to optimise transmission parameters based on them. Third, the thesis argues that the most favourable development for wireless broadcasting would be a single system suitable for all scenarios of wireless broadcasting. It is claimed that there are no major technical obstacles to achieving this and that the recently published second-generation digital terrestrial television broadcasting system provides a good basis. The challenges and opportunities of a universal wireless broadcasting system are discussed mainly from technical, but briefly also from commercial and regulatory, aspects.

    5th International Probabilistic Workshop: 28-29 November 2007, Ghent, Belgium

    These are the proceedings of the 5th International Probabilistic Workshop. Even though the 5th anniversary of a conference might not be of such importance, it is quite interesting to note the development of this probabilistic conference. Originally, the series started as the 1st and 2nd Dresdner Probabilistic Symposium, which were launched to present research and applications dealt with mainly at Dresden University of Technology. Since then, the conference has grown into an internationally recognised conference dealing with research on and applications of probabilistic techniques, mainly in the field of structural engineering. Other topics have also been dealt with, such as ship safety and natural hazards. Whereas the first conferences in Dresden included about 12 presentations each, the conference in Ghent has attracted nearly 30 presentations. Moving from Dresden to Vienna (University of Natural Resources and Applied Life Sciences) to Berlin (Federal Institute for Material Research and Testing) and finally to Ghent, the conference has steadily evolved towards a truly international level. This can be seen in the language used. The first two conferences were held entirely in German. During the conference in Berlin, however, the change from German to English was especially apparent, as some presentations were given in German and others in English. Now, in Ghent, all papers will be presented in English. Participants now come not only from Europe but also from other continents. Although the conference will move back to Germany next year (2008), to Darmstadt, the international concept will remain, since so much work in the field of probabilistic safety evaluations is carried out internationally. In two years (2009) the conference will move to Delft, The Netherlands, and in 2010 it will probably be held in Szczecin, Poland. Coming back to the present: the editors wish all participants a successful conference in Ghent.

    A multi-configuration approach to reliability based structural integrity assessment for ultimate strength

    Structural reliability treats uncertainties in structural design systematically, evaluating the levels of safety and serviceability of structures. During the past decades, it has been established as a valuable design tool for describing the performance of structures, and it lately underlies most modern design standards, which aim to achieve uniform behaviour within a class of structures. Several methods have been proposed for the estimation of structural reliability, both deterministic (FORM and SORM) and stochastic (Monte Carlo simulation, etc.) in nature. Offshore structures must resist complicated and, in most cases, combined environmental phenomena of greatly uncertain magnitude (e.g. wind, wave, current and operational loads). Failure mechanisms of structural systems and components are expressed through limit state functions, which distinguish a failure region from a safe region of operation. For a jacket offshore structure, which comprises multiple tubular members interconnected in a three-dimensional truss configuration, the limit state function should link the actual load or load combination acting on it locally to the response of each structural member. Cont/d.
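    The thesis itself is not included here; as a generic illustration of how a limit state function and a stochastic (Monte Carlo) reliability estimate fit together, the sketch below evaluates a simple resistance-minus-load limit state g = R - S and counts the fraction of samples that fall in the failure region. The distributions, parameters and variable names are assumed purely for the example.

```python
# Generic Monte Carlo reliability sketch (not from the thesis): g(R, S) = R - S
# separates the safe region (g > 0) from the failure region (g <= 0); the
# failure probability is the fraction of sampled realisations with g <= 0.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 1_000_000

# Assumed distributions, for illustration only.
R = rng.lognormal(mean=np.log(500.0), sigma=0.10, size=n)  # resistance (ultimate strength)
S = rng.normal(loc=300.0, scale=60.0, size=n)              # load effect

g = R - S                              # limit state function
pf = np.mean(g <= 0.0)                 # Monte Carlo estimate of failure probability
beta = -norm.ppf(pf)                   # corresponding reliability index
print(f"Pf ≈ {pf:.4f}, beta ≈ {beta:.2f}")
```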