The Art of Fault Injection
Classical Greek philosophers considered the foremost virtues to be temperance, justice, courage, and prudence. In this paper we relate these cardinal virtues to the correct methodological approaches that researchers should follow when setting up a fault injection experiment. With this work we try to understand where the "straightforward pathway" lies, in order to highlight the common methodological errors that deeply influence the coherency and meaningfulness of fault injection experiments. Fault injection is like an art, where the success of the experiments depends on a very delicate balance between modeling, creativity, statistics, and patience.
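To make the balance between modeling and statistics concrete, here is a minimal sketch of a software-implemented fault injector that emulates a single-event upset by flipping one randomly chosen bit in a target buffer. The buffer, workload, and fault model are illustrative assumptions, not taken from the paper.

    /* Minimal fault-injection sketch: flip a single random bit in a
     * target buffer to emulate a transient fault (single-event upset). */
    #include <stdint.h>
    #include <stdlib.h>
    #include <time.h>

    static void inject_bit_flip(uint8_t *mem, size_t len)
    {
        size_t byte = (size_t)rand() % len;   /* fault location: byte */
        int    bit  = rand() % 8;             /* fault location: bit  */
        mem[byte] ^= (uint8_t)(1u << bit);    /* the actual bit flip  */
    }

    int main(void)
    {
        uint8_t state[64] = {0};              /* stand-in for target state */
        srand((unsigned)time(NULL));
        inject_bit_flip(state, sizeof state);
        /* ... run the workload, compare its outcome to a golden run ... */
        return 0;
    }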
On-Line Instruction-checking in Pipelined Microprocessors
Microprocessor performance has increased by more than five orders of magnitude over the last three decades. As technology scales down, these components become inherently unreliable, posing major design and test challenges. This paper proposes an instruction-checking architecture to detect erroneous instruction executions caused by both permanent and transient errors in the internal logic of a microprocessor. Monitoring the correct activation sequence of a set of predefined microprocessor control/status signals allows distinguishing between correctly and incorrectly executed instructions.
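As a rough illustration of sequence-based checking (signal names, trace length, and opcode classes are assumed placeholders, not the paper's design): a golden per-cycle trace of control/status signal vectors is stored per instruction class, and the checker flags any deviation.

    /* Hedged sketch of instruction checking by control-signal sequence:
     * compare the observed per-cycle signal vector against a golden trace. */
    #include <stdint.h>
    #include <stdbool.h>

    #define TRACE_LEN 4  /* cycles monitored per instruction (assumed) */

    typedef struct {
        uint8_t expected[TRACE_LEN];  /* golden control/status vectors */
    } golden_trace_t;

    static const golden_trace_t golden[2] = {   /* values are placeholders */
        { {0x1, 0x4, 0x2, 0x8} },   /* e.g., ALU-type instruction  */
        { {0x1, 0x4, 0x6, 0x8} },   /* e.g., load-type instruction */
    };

    /* Returns true iff the observed trace matches the golden one. */
    static bool check_instruction(int opcode_class,
                                  const uint8_t observed[TRACE_LEN])
    {
        for (int c = 0; c < TRACE_LEN; c++)
            if (observed[c] != golden[opcode_class].expected[c])
                return false;   /* erroneous execution detected */
        return true;
    }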
A perturbative probabilistic approach to quantum many-body systems
In the probabilistic approach to quantum many-body systems, the ground-state energy is the solution of a nonlinear scalar equation written either as a cumulant expansion or as an expectation with respect to a probability distribution of the potential and hopping (amplitude and phase) values recorded during an infinitely lengthy evolution. We introduce a perturbative expansion of this probability distribution which conserves, at any order, a multinomial-like structure, typical of uncorrelated systems, but includes, order by order, the statistical correlations provided by the cumulant expansion. The proposed perturbative scheme is successfully tested in the case of pseudo spin 1/2 hard-core boson Hubbard models, also when affected by a phase problem due to an applied magnetic field. (Comment: 39 pages, 1 picture, 5 figures)
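For orientation only, the generic shape of the two ingredients the abstract mentions, in the textbook notation of projection methods and cumulant expansions; this is a standard form, not the paper's specific nonlinear scalar equation.

    % Ground-state energy from an infinitely long imaginary-time evolution,
    % and the generic cumulant expansion of a log-expectation:
    E_0 = -\lim_{t\to\infty} \frac{1}{t}
          \ln \langle \psi_T \,|\, e^{-tH} \,|\, \psi_T \rangle ,
    \qquad
    \ln \mathbb{E}\!\left[ e^{\lambda X} \right]
        = \sum_{n \ge 1} \kappa_n \, \frac{\lambda^n}{n!} .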
EDACs and test integration strategies for NAND flash memories
Mission-critical applications usually present several critical issues: the required level of dependability of the whole mission always implies addressing different and contrasting dimensions and evaluating the trade-offs among them. A mass-memory device is needed in virtually all mission-critical applications, and NAND flash memories can be used for this goal. Error Detection And Correction (EDAC) techniques are needed to improve the dependability of flash-memory devices. However, testing strategies also need to be explored in order to provide highly dependable systems. Integrating these two main aspects results in a fault-tolerant mass-memory device, but no systematic approach has so far been proposed to consider them as a whole. As a consequence, a novel strategy integrating a particular code-based design environment with newly selected testing strategies is presented in this paper.
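To illustrate the EDAC principle on the smallest possible scale, here is a Hamming(7,4) single-error-correcting encoder/decoder over a 4-bit nibble. This is a generic textbook code, not the paper's design environment; real NAND controllers apply much stronger codes (e.g., BCH) over whole pages.

    /* EDAC sketch: Hamming(7,4) single-error correction over one nibble. */
    #include <stdint.h>
    #include <stdio.h>

    static uint8_t hamming74_encode(uint8_t d /* 4 data bits */)
    {
        uint8_t d0 = d & 1, d1 = (d >> 1) & 1, d2 = (d >> 2) & 1, d3 = (d >> 3) & 1;
        uint8_t p0 = d0 ^ d1 ^ d3;   /* parity over codeword positions 1,3,5,7 */
        uint8_t p1 = d0 ^ d2 ^ d3;   /* parity over positions 2,3,6,7 */
        uint8_t p2 = d1 ^ d2 ^ d3;   /* parity over positions 4,5,6,7 */
        /* codeword bit order (positions 1..7): p0 p1 d0 p2 d1 d2 d3 */
        return (uint8_t)(p0 | p1 << 1 | d0 << 2 | p2 << 3 | d1 << 4 | d2 << 5 | d3 << 6);
    }

    static uint8_t hamming74_correct(uint8_t c)
    {
        uint8_t s0 = (c ^ c >> 2 ^ c >> 4 ^ c >> 6) & 1;       /* syndrome bit 1 */
        uint8_t s1 = (c >> 1 ^ c >> 2 ^ c >> 5 ^ c >> 6) & 1;  /* syndrome bit 2 */
        uint8_t s2 = (c >> 3 ^ c >> 4 ^ c >> 5 ^ c >> 6) & 1;  /* syndrome bit 4 */
        uint8_t syn = (uint8_t)(s0 | s1 << 1 | s2 << 2);       /* error position */
        if (syn) c ^= (uint8_t)(1u << (syn - 1));              /* flip faulty bit */
        return (uint8_t)((c >> 2 & 1) | (c >> 3 & 2) | (c >> 3 & 4) | (c >> 3 & 8));
    }

    int main(void)
    {
        uint8_t cw = hamming74_encode(0xB);
        cw ^= 1u << 4;                            /* inject a single-bit error */
        printf("recovered: 0x%X\n", hamming74_correct(cw));   /* prints 0xB */
        return 0;
    }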
FPGA based remote code integrity verification of programs in distributed embedded systems
The explosive growth of networked embedded systems has made ubiquitous and pervasive computing a reality. However, there are still a number of challenges to their widespread adoption, including scalability, availability, and, especially, security of software. Among the different challenges in software security, the problem of remote code integrity verification is still waiting for efficient solutions. This paper proposes the use of reconfigurable computing to build a consistent architecture for generating attestations (proofs) of code integrity for an executing program, as well as for delivering them to the designated verification entity. Remote dynamic update of reconfigurable devices is also exploited to increase the complexity of mounting attacks in a real-world environment. The proposed solution fits embedded devices that are nowadays commonly equipped with reconfigurable hardware components exploited to solve different computational problems.
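A bare-bones sketch of the challenge-response attestation idea follows: the verifier sends a fresh nonce, the device hashes its program memory mixed with that nonce, and the verifier recomputes the digest locally. FNV-1a stands in for a cryptographic hash purely to keep the sketch self-contained; a real design would use a keyed cryptographic primitive anchored in the reconfigurable fabric, and this is not the paper's actual protocol.

    /* Hedged attestation sketch: nonce-seeded hash over program memory. */
    #include <stdint.h>
    #include <stddef.h>

    static uint64_t attest(const uint8_t *code, size_t len, uint64_t nonce)
    {
        uint64_t h = 1469598103934665603ULL ^ nonce;  /* FNV offset ^ challenge */
        for (size_t i = 0; i < len; i++) {
            h ^= code[i];
            h *= 1099511628211ULL;                    /* FNV-1a prime */
        }
        return h;   /* digest sent to the verifier for comparison */
    }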
CLERECO, Cross-Layer Early Reliability Evaluation for the Computing cOntinuum, FP7
Advanced multi-functional computing systems realized in forthcoming manufacturing technologies hold the promise of a significant increase in device integration density, complemented by an increase in system performance and functionality. However, a dramatic reduction in single-device quality and reliability is also expected. The CLERECO research project recognizes early, accurate reliability evaluation as one of the most important and challenging tasks throughout the design cycle of computing systems across all domains. In order to continue harvesting the performance and functionality offerings of technology scaling, we need to dramatically improve current methodologies for evaluating the reliability of a system. On one hand, we need accurate methodologies that reduce the performance and energy tax paid to guarantee correct operation. The rising energy costs needed to compensate for increasing unpredictability are rapidly becoming unacceptable in today's environment, where energy consumption is often the limiting factor on integrated circuit performance. On the other hand, early "budgeting" for reliability has the potential to save significant design effort and resources and has a profound impact on the time to market (TTM) of a product. CLERECO addresses early reliability evaluation with a cross-layer approach: across different computing disciplines, across computing system layers, and across computing market segments, to address reliability for the emerging computing continuum. The CLERECO methodology will consider low-level information, such as raw failure rates, as well as the entire set of hardware and software components of the system that eventually determine the reliability delivered to the end users. The CLERECO methodology for early reliability evaluation will be comprehensively assessed and validated in advanced designs from different application domains provided by the industrial partners, covering the full stack of hardware and software layers.
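One common way such early budgeting is done (shown here as an illustration, not as CLERECO's actual models) is to derate each component's raw technology failure rate by a vulnerability factor that captures how many raw faults actually reach the end user, then sum the effective FIT values across the system.

    /* Illustrative early reliability budgeting: derate raw component FIT
     * values by vulnerability factors and sum them system-wide. */
    #include <stdio.h>

    typedef struct {
        const char *name;
        double raw_fit;   /* raw failure rate from technology data (FIT) */
        double vf;        /* fraction of raw faults visible to the user  */
    } component_t;

    int main(void)
    {
        component_t sys[] = {            /* all numbers are placeholders */
            { "core logic",  120.0, 0.15 },
            { "L1 caches",   300.0, 0.05 },
            { "DRAM",       1000.0, 0.02 },
        };
        int n = (int)(sizeof sys / sizeof sys[0]);
        double total = 0.0;
        for (int i = 0; i < n; i++)
            total += sys[i].raw_fit * sys[i].vf;   /* effective FIT */
        printf("estimated system failure rate: %.1f FIT\n", total);
        return 0;
    }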
Software-Based Self-Test of Set-Associative Cache Memories
Embedded microprocessor cache memories suffer from limited observability and controllability, creating problems during in-system tests. This paper presents a procedure to transform traditional march tests into software-based self-test programs for set-associative cache memories with LRU replacement. Among all the cache blocks in a microprocessor, testing instruction caches represents a major challenge due to limitations in two areas: 1) test patterns, which must be composed of valid instruction opcodes, and 2) test result observability, since the results can only be observed through the outcomes of executed instructions. For these reasons, the proposed methodology concentrates on the implementation of test programs for instruction caches. The main contribution of this work lies in the possibility of applying state-of-the-art memory test algorithms to embedded cache memories without introducing any hardware or performance overheads, while guaranteeing the detection of the typical faults arising in nanometer CMOS technologies.
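For reference, here is the classic March C- element sequence run from software over a cache-sized data range. This is only the generic starting point of such transformations: an instruction-cache version would instead lay out valid opcode patterns and observe results through executed instructions, as the abstract explains, and the range size below is an assumption.

    /* Generic March C- sketch over a memory range (not the paper's
     * instruction-cache programs): {up(w0); up(r0,w1); up(r1,w0);
     * down(r0,w1); down(r1,w0); down(r0)}. */
    #include <stdint.h>
    #include <stdbool.h>

    #define WORDS 1024   /* assumed: one cache way worth of words */

    static bool march_cm(volatile uint32_t *m)
    {
        for (int i = 0; i < WORDS; i++) m[i] = 0x00000000u;          /* up(w0)    */
        for (int i = 0; i < WORDS; i++) {                            /* up(r0,w1) */
            if (m[i] != 0x00000000u) return false; m[i] = 0xFFFFFFFFu; }
        for (int i = 0; i < WORDS; i++) {                            /* up(r1,w0) */
            if (m[i] != 0xFFFFFFFFu) return false; m[i] = 0x00000000u; }
        for (int i = WORDS - 1; i >= 0; i--) {                       /* dn(r0,w1) */
            if (m[i] != 0x00000000u) return false; m[i] = 0xFFFFFFFFu; }
        for (int i = WORDS - 1; i >= 0; i--) {                       /* dn(r1,w0) */
            if (m[i] != 0xFFFFFFFFu) return false; m[i] = 0x00000000u; }
        for (int i = WORDS - 1; i >= 0; i--) {                       /* dn(r0)    */
            if (m[i] != 0x00000000u) return false; }
        return true;
    }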
An FPGA-Based Reconfigurable Software Architecture for Highly Dependable Systems
Nowadays, systems-on-chip are commonly equipped with reconfigurable hardware. The use of hybrid architectures based on a mixture of general-purpose processors and reconfigurable components has gained importance across the scientific community, allowing a significant improvement in computational performance. Along with the demand for performance, the great sensitivity of reconfigurable hardware devices to physical defects leads to the demand for highly dependable and fault-tolerant systems. This paper proposes an FPGA-based reconfigurable software architecture able to abstract the underlying hardware platform, giving a homogeneous view of it. The abstraction mechanism is used to implement fault tolerance mechanisms with a minimum impact on system performance.
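One way to picture such an abstraction layer (a hypothetical sketch, not the paper's architecture) is dispatch through function pointers: callers see one homogeneous operation, while a fault handler can transparently remap it from a faulty FPGA backend to a software fallback.

    /* Hypothetical abstraction-layer sketch: swap backends on a fault. */
    #include <stdio.h>

    typedef int (*op_fn)(int);

    static int op_hw(int x) { return x * 2; }  /* would run on the FPGA fabric */
    static int op_sw(int x) { return x * 2; }  /* software fallback on the CPU */

    static op_fn op = op_hw;                   /* homogeneous view for callers */

    static void report_fault(void) { op = op_sw; }  /* repair: swap backend */

    int main(void)
    {
        printf("%d\n", op(21));   /* served by the "hardware" backend */
        report_fault();           /* e.g., after a configuration-memory error */
        printf("%d\n", op(21));   /* same call, now the software backend */
        return 0;
    }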
Online self-repair of FIR filters
Chip-level failure detection has been a target of research for some time, but today's very deep-submicron technology is forcing such research to move beyond detection. Repair, and especially self-repair, has become very important for containing the susceptibility of today's chips. This article introduces a self-repair solution for the digital FIR filter, one of the key blocks used in DSPs.
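A software model of the detect-then-repair pattern for a FIR filter is sketched below: each output is computed by a primary and a redundant (spare) accumulation path, a mismatch flags the primary as faulty, and subsequent samples use the spare only. In silicon this would be redundant MAC hardware; the coefficients and policy here are illustrative, not the article's circuit.

    /* Hedged FIR self-repair model: duplicate-and-compare, then remap. */
    #include <stdint.h>
    #include <stdbool.h>

    #define TAPS 8

    static const int32_t h[TAPS] = {1, 3, 5, 7, 7, 5, 3, 1};  /* placeholders */
    static bool primary_faulty = false;

    static int32_t mac_primary(const int32_t *x)   /* primary MAC path */
    {
        int32_t acc = 0;
        for (int k = 0; k < TAPS; k++) acc += h[k] * x[k];
        return acc;
    }

    static int32_t mac_spare(const int32_t *x)     /* redundant MAC path */
    {
        int32_t acc = 0;
        for (int k = TAPS - 1; k >= 0; k--) acc += h[k] * x[k];
        return acc;
    }

    static int32_t fir_output(const int32_t *x)
    {
        int32_t spare = mac_spare(x);
        if (!primary_faulty) {
            int32_t prim = mac_primary(x);
            if (prim != spare) primary_faulty = true;  /* detect fault */
            else return prim;
        }
        return spare;   /* self-repaired output path */
    }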
AFSM-based deterministic hardware TPG
This paper proposes a new approach for designing a cost-effective, on-chip, hardware pattern generator of deterministic test sequences. Given a pre-computed test pattern set (obtained by an ATPG tool) with predetermined fault coverage, a hardware Test Pattern Generator (TPG) based on an Autonomous Finite State Machine (AFSM) structure is synthesized to generate it. The approach exploits the "don't care" bits of the deterministic test patterns to lower the area overhead of the TPG. Simulations on benchmark circuits show that the hardware cost is considerably lower when compared with alternative solutions.
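A toy model of the autonomous-generation idea (patterns and next-state logic are placeholders, not the paper's synthesis flow): the pattern register doubles as the FSM state, and a next-state lookup advances it every clock with no external input, replaying the deterministic sequence; in hardware, the don't-care bits of the ATPG patterns are free values that logic synthesis fills to shrink that next-state logic.

    /* Toy AFSM-style TPG model: the state register replays a precomputed
     * deterministic test sequence autonomously, one pattern per clock. */
    #include <stdint.h>
    #include <stdio.h>

    #define N_PAT 4

    /* Deterministic ATPG sequence; don't-care bits assumed already filled. */
    static const uint8_t pattern[N_PAT] = {0x3A, 0xC5, 0x0F, 0x96};

    int main(void)
    {
        uint8_t state = pattern[0];                  /* seed = first pattern */
        for (int clk = 0; clk < N_PAT; clk++) {
            printf("cycle %d: apply 0x%02X\n", clk, state);
            state = pattern[(clk + 1) % N_PAT];      /* next-state lookup */
        }
        return 0;
    }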