Improving reconfigurable systems reliability by combining periodical test and redundancy techniques: a case study
This paper revisits some traditional techniques from the fields of fault tolerance and testing of digital circuits and introduces them to reconfigurable computer systems. The target area is on-board spacecraft electronics, as this class of application is a good candidate for the use of reconfigurable computing technology. Fault-tolerant strategies are used so that the system can adapt itself to the severe conditions found in space. In addition, the paper describes some problems, and possible solutions, for the use of reconfigurable components based on programmable logic in space applications.
DeSyRe: on-Demand System Reliability
The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chips (SoCs). As fabrication technology scales down, chips are becoming less reliable, thereby incurring increased power and performance costs for fault tolerance. To make matters worse, power density is becoming a significant limiting factor in SoC design in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect- and fault-free system would impact heavily, even prohibitively, the design, manufacturing, and testing costs, as well as the system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance, and design costs. In our attempt to reduce the overheads of fault tolerance, only a small fraction of the chip is built to be fault-free. This fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (measured using the IEC 61508 functional safety standard) and tight power and performance constraints.
Smart Power Grid Synchronization With Fault Tolerant Nonlinear Estimation
Effective real-time state estimation is essential for smart grid synchronization, as electricity demand continues to grow and renewable energy resources increase their penetration into the grid. In order to provide a more reliable state estimation technique that addresses the problem of bad data in PMU-based power synchronization, this paper presents a novel nonlinear estimation framework to dynamically track frequency, voltage magnitudes, and phase angles. Instead of analyzing directly in the abc coordinate frame, symmetrical component transformation is employed to separate the positive, negative, and zero sequence networks. Then, Clarke's transformation is used to transform the sequence networks into the αβ stationary coordinate frame, which leads to the system model formulation. A novel fault tolerant extended Kalman filter based real-time estimation framework is proposed for smart grid synchronization with noisy bad data measurements. Computer simulation studies have demonstrated that the proposed fault tolerant extended Kalman filter (FTEKF) provides more accurate voltage synchronization results than the extended Kalman filter (EKF). The proposed approach has been implemented with dSPACE DS1103 and National Instruments CompactRIO hardware platforms. Computer simulation and hardware instrumentation results have shown the potential applications of FTEKF in smart grid synchronization.
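As a rough illustration of the transformation step described in this abstract, the amplitude-invariant Clarke transform maps balanced three-phase abc quantities into the αβ stationary frame. This is a generic textbook sketch, not the paper's implementation:

```python
import math

def clarke(a, b, c):
    # Amplitude-invariant Clarke transform: abc phase quantities -> alpha/beta
    # stationary frame. For a balanced set, (alpha, beta) traces a unit circle.
    alpha = (2.0 / 3.0) * (a - 0.5 * b - 0.5 * c)
    beta = (2.0 / 3.0) * (math.sqrt(3.0) / 2.0) * (b - c)
    return alpha, beta

# Balanced three-phase set at electrical angle theta (phases 120 deg apart)
theta = 0.7
a = math.cos(theta)
b = math.cos(theta - 2.0 * math.pi / 3.0)
c = math.cos(theta + 2.0 * math.pi / 3.0)
alpha, beta = clarke(a, b, c)
# For this balanced input, alpha = cos(theta) and beta = sin(theta),
# up to floating-point error.
```

In an estimation pipeline like the one the abstract outlines, the resulting αβ signals would then feed the system model tracked by the (fault tolerant) extended Kalman filter.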
Microprocessor fault-tolerance via on-the-fly partial reconfiguration
This paper presents a novel approach that exploits FPGA dynamic partial reconfiguration to improve the fault tolerance of complex microprocessor-based systems, with no need to statically reserve area to host redundant components. The proposed method not only improves the survivability of the system by allowing the online replacement of defective key parts of the processor, but also provides graceful performance degradation by executing in software the tasks that were executed in hardware before the fault and the subsequent reconfiguration occurred. The advantage of the proposed approach is that, thanks to a hardware hypervisor, the CPU is totally unaware of the reconfiguration happening in real time, and there is no dependency on the CPU to perform it. As a proof of concept, a design using this idea has been developed with the LEON3 open-source processor, synthesized on a Virtex 4 FPGA.
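The graceful-degradation idea in this abstract (re-executing in software what a failed hardware unit used to do) can be sketched in a few lines. This is a hypothetical illustration of the general pattern, not the paper's hypervisor design; the names `AcceleratedTask` and `hw_multiply` are invented for the example:

```python
class AcceleratedTask:
    """Dispatch a task to hardware, degrading to software after a fault."""

    def __init__(self, hw_call, sw_call):
        self.hw_call = hw_call      # fast path: hardware-accelerated version
        self.sw_call = sw_call      # slow path: functionally equivalent software
        self.hw_faulty = False      # would be set by a fault detector/hypervisor

    def run(self, *args):
        if not self.hw_faulty:
            try:
                return self.hw_call(*args)
            except RuntimeError:
                # Fault detected: mark the unit as failed (pending
                # reconfiguration) and fall through to software.
                self.hw_faulty = True
        return self.sw_call(*args)

def hw_multiply(x, y):
    # Stands in for a hardware multiplier that has developed a fault.
    raise RuntimeError("hardware fault")

task = AcceleratedTask(hw_multiply, lambda x, y: x * y)
result = task.run(6, 7)  # transparently degrades to the software path
```

In the paper's scheme the switch is handled below the CPU by a hardware hypervisor, so unlike this sketch, the running software never observes the fault at all.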
Reliable Linear, Sesquilinear and Bijective Operations On Integer Data Streams Via Numerical Entanglement
A new technique is proposed for fault-tolerant linear, sesquilinear and
bijective (LSB) operations on integer data streams, such as:
scaling, additions/subtractions, inner or outer vector products, permutations
and convolutions. In the proposed method, the input integer data streams
are linearly superimposed to form numerically-entangled integer data
streams that are stored in-place of the original inputs. A series of LSB
operations can then be performed directly using these entangled data streams.
The results are extracted from the entangled output streams by additions
and arithmetic shifts. Any soft errors affecting any single disentangled output
stream are guaranteed to be detectable via a specific post-computation
reliability check. In addition, when utilizing a separate processor core for
each of the streams, the proposed approach can recover all outputs after
any single fail-stop failure. Importantly, unlike algorithm-based fault
tolerance (ABFT) methods, the number of operations required for the
entanglement, extraction and validation of the results is linearly related to
the number of the inputs and does not depend on the complexity of the performed
LSB operations. We have validated our proposal in an Intel processor (Haswell
architecture with AVX2 support) via fast Fourier transforms, circular
convolutions, and matrix multiplication operations. Our analysis and
experiments reveal that the proposed approach incurs only a small
reduction in processing throughput for a wide variety of LSB operations. This
overhead is 5 to 1000 times smaller than that of the equivalent ABFT method
that uses a checksum stream. Thus, our proposal can be used in fault-generating
processor hardware or safety-critical applications, where high reliability is
required without the cost of ABFT or modular redundancy.
Comment: to appear in IEEE Trans. on Signal Processing, 201
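For context on the comparison made in this abstract, the classical checksum-based ABFT scheme (in the style of Huang and Abraham) protects a matrix multiplication by bordering the operands with checksums. This is a minimal sketch of that baseline technique, not of the paper's entanglement method:

```python
def matmul(A, B):
    # Plain triple-loop matrix product over lists of lists.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def with_column_checksum(A):
    # Append a row holding the sum of each column of A.
    return A + [[sum(col) for col in zip(*A)]]

def with_row_checksum(B):
    # Append to each row of B the sum of that row.
    return [row + [sum(row)] for row in B]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
# The product of the augmented matrices carries the checksums through:
# its top-left block is A @ B, and the bordering row/column hold the
# column/row sums of that block. A single corrupted output element breaks
# exactly one row sum and one column sum, which locates it for correction.
C = matmul(with_column_checksum(A), with_row_checksum(B))
ok = all(sum(row[:-1]) == row[-1] for row in C)  # row-checksum check passes
```

The abstract's point is that this style of protection costs work proportional to the operation itself (here, extra multiply-accumulates for the checksum row and column), whereas the proposed entanglement costs only work linear in the number of inputs.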
Layered architecture for quantum computing
We develop a layered quantum computer architecture, which is a systematic
framework for tackling the individual challenges of developing a quantum
computer while constructing a cohesive device design. We discuss many of the
prominent techniques for implementing circuit-model quantum computing and
introduce several new methods, with an emphasis on employing surface code
quantum error correction. In doing so, we propose a new quantum computer
architecture based on optical control of quantum dots. The timescales of
physical hardware operations and logical, error-corrected quantum gates differ
by several orders of magnitude. By dividing functionality into layers, we can
design and analyze subsystems independently, demonstrating the value of our
layered architectural approach. Using this concrete hardware platform, we
provide resource analysis for executing fault-tolerant quantum algorithms for
integer factoring and quantum simulation, finding that the quantum dot
architecture we study could solve such problems on the timescale of days.
Comment: 27 pages, 20 figures