FPGA ARCHITECTURE AND VERIFICATION OF BUILT IN SELF-TEST (BIST) FOR 32-BIT ADDER/SUBTRACTER USING DE0-NANO FPGA AND ANALOG DISCOVERY 2 HARDWARE
The integrated circuit (IC) is an integral part of everyday modern technology, and it is very attractive to hardware and software design engineers because of its versatility, integration, power consumption, cost, and board-area reduction. ICs are available in many types, such as Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), System on Chip (SoC) architectures, Digital Signal Processors (DSP), and microcontrollers (μC). With demand focused on faster, lower-power, more efficient IC applications, design engineers face tremendous challenges in developing and testing integrated circuits that guarantee functionality, high fault coverage, and reliability, as transistor technology shrinks to the point where manufacturing defects affect yield and therefore increase the cost of the part. The competitive IC market is pressuring manufacturers to develop and market ICs with a relatively quick turnaround, which in turn requires design and verification engineers to develop integrated self-test structures that ensure a fault-free, high-quality product is delivered to the market. 70-80% of the IC design effort is spent on verification and testing to ensure high quality and reliability for the end user. To test complex and sophisticated IC designs, verification engineers must produce laborious and costly test fixtures, which affect the cost of the part on the competitive market. To avoid increasing the part cost due to yield and test time, and to keep up with the competitive market, many IC design engineers are moving away from complex external test fixtures and are focusing on integrating Built-in Self-Test (BIST) or Design for Test
(DFT) techniques onto ICs, which reduces time to market while still guaranteeing high coverage for the product. BIST, its architecture, and the application of the IC must be understood before the IC is developed. This paper elaborates the architecture of the FPGA, followed by several BIST techniques and applications of those techniques to FPGAs, SoCs, and analog-to-digital (ADC) or digital-to-analog converters (DAC) integrated on ICs. The paper concludes with verification of BIST for a 32-bit adder/subtracter designed in Quartus II software, using the Analog Discovery 2 module as stimulus and the DE0-NANO FPGA board for verification.
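As a rough illustration of the BIST principle summarised above, the following Python sketch models a 32-bit adder/subtracter exercised by LFSR-generated pseudo-random patterns and compacted into a MISR signature; the tap positions, seeds and pattern count are illustrative assumptions, not values taken from the paper (the actual design targets Quartus II and the DE0-NANO board).

```python
# Illustrative BIST model: LFSR stimulus -> 32-bit adder/subtracter -> MISR signature.
# Taps, seeds and pattern count are assumptions for illustration only.

MASK32 = 0xFFFFFFFF

def lfsr32(state, taps=(31, 21, 1, 0)):
    """One step of a 32-bit Fibonacci LFSR (tap positions are illustrative)."""
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & MASK32

def misr32(signature, value, taps=(31, 21, 1, 0)):
    """Multiple-input signature register: fold one response word into the signature."""
    fb = 0
    for t in taps:
        fb ^= (signature >> t) & 1
    return (((signature << 1) | fb) ^ value) & MASK32

def adder_subtracter(a, b, sub):
    """Behavioural model of the circuit under test (sub=1 selects subtraction)."""
    return (a - b) & MASK32 if sub else (a + b) & MASK32

def run_bist(n_patterns=1000, seed_a=0xACE1ACE1, seed_b=0x12345678):
    a, b, sig = seed_a, seed_b, 0
    for i in range(n_patterns):
        a, b = lfsr32(a), lfsr32(b)
        sig = misr32(sig, adder_subtracter(a, b, i & 1))
    return sig  # compared against a golden signature computed offline

if __name__ == "__main__":
    print(hex(run_bist()))
```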
Watermarking FPGA Bitfile for Intellectual Property Protection
Intellectual property protection (IPP) of hardware designs is the most important requirement for many Field Programmable Gate Array (FPGA) intellectual property (IP) vendors. Digital watermarking has become an innovative technology for IPP in recent years. Existing watermarking techniques have successfully embedded watermarks into IP cores. However, many of these techniques share two specific weaknesses: 1) they introduce extra overhead and are likely to degrade design performance; and 2) they are vulnerable to removal attacks. We propose a novel watermarking technique that watermarks the FPGA bitfile to address these weaknesses. Experimental results and analysis show that the proposed technique incurs zero overhead and is robust against removal attacks.
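The abstract does not disclose how the watermark is embedded, so the following Python sketch is only a generic illustration of bitfile watermarking, not the authors' zero-overhead scheme: a keyed signature is written into bytes assumed to be unused padding and later extracted for ownership verification; the offset, length and key handling are hypothetical.

```python
# Generic bitfile watermarking illustration (NOT the authors' method): a keyed HMAC of
# the design data is stored in assumed unused padding bytes and later verified.
import hmac, hashlib

PAD_OFFSET, PAD_LEN = 0x40, 32   # assumed location and size of unused padding bytes

def _signature(bitfile: bytes, key: bytes) -> bytes:
    # Sign everything except the padding region that will hold the watermark.
    payload = bitfile[:PAD_OFFSET] + bitfile[PAD_OFFSET + PAD_LEN:]
    return hmac.new(key, payload, hashlib.sha256).digest()    # 32 bytes = PAD_LEN

def embed_watermark(bitfile: bytes, key: bytes) -> bytes:
    mark = _signature(bitfile, key)
    return bitfile[:PAD_OFFSET] + mark + bitfile[PAD_OFFSET + PAD_LEN:]

def verify_watermark(bitfile: bytes, key: bytes) -> bool:
    return hmac.compare_digest(bitfile[PAD_OFFSET:PAD_OFFSET + PAD_LEN],
                               _signature(bitfile, key))
```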
Transient error mitigation by means of approximate logic circuits
International Mention in the doctoral degree. Technological advances in the manufacturing of electronic circuits have
greatly improved their performance, but they have also increased the sensitivity of electronic
devices to radiation-induced errors. Among them, the most common effects are
the SEEs, i.e., electrical perturbations provoked by the strike of high-energy particles,
which may modify the internal state of a memory element (SEU) or generate erroneous
transient pulses (SET), among other effects. These events pose a threat to the reliability
of electronic circuits, and therefore fault-tolerance techniques must be applied to
deal with them.
The most common fault-tolerance techniques are based on full replication (DWC or
TMR). These techniques are able to cover a wide range of failure mechanisms present
in electronic circuits. However, they suffer from high overheads in terms of area and
power consumption. For this reason, lighter alternatives are often sought at the expense
of slightly reducing reliability for the least critical circuit sections. In this context, a new
paradigm of electronic design is emerging, known as approximate computing, which
is based on improving circuit performance in exchange for slight modifications of the
intended functionality. This is an interesting approach for the design of lightweight
fault-tolerant solutions, which has not yet been studied in depth.
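For reference, a minimal Python sketch of the TMR baseline mentioned above is given below; the replica function and the fault-injection helper are illustrative assumptions, and the voter simply takes a bit-wise two-out-of-three majority.

```python
# Minimal sketch of the TMR baseline: three replicas of the same combinational function
# are evaluated and a bit-wise majority voter masks a single erroneous replica output.

def majority_vote(r0: int, r1: int, r2: int) -> int:
    """Bit-wise 2-out-of-3 majority of three replica outputs."""
    return (r0 & r1) | (r0 & r2) | (r1 & r2)

def tmr(func, x, flipped_replica=None, flip_mask=0):
    """Evaluate func three times, optionally corrupting one replica (SET/SEU model)."""
    outputs = [func(x) for _ in range(3)]
    if flipped_replica is not None:
        outputs[flipped_replica] ^= flip_mask
    return majority_vote(*outputs)

# Example: a single bit flip in one replica is masked by the voter.
assert tmr(lambda v: v + 1, 41, flipped_replica=1, flip_mask=0b100) == 42
```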
The main goal of this thesis is to develop new lightweight fault-tolerant
techniques with partial replication, by means of approximate logic circuits. These
circuits can be designed with great flexibility. This way, the level of protection as
well as the overheads can be adjusted at will depending on the needs of each
application. However, finding optimal approximate circuits for a given application is
still a challenge.
In this thesis a method for approximate circuit generation is proposed, denoted
as fault approximation, which consists of assigning constant logic values to specific
circuit lines. In addition, several criteria are developed to generate the most
suitable approximate circuits for each application using this fault approximation
mechanism. These criteria are based on the idea of approximating the least testable
sections of a circuit, which reduces overheads while minimising the loss of reliability.
Therefore, in this thesis the selection of approximations is linked to testability
measures.
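A minimal sketch of the fault-approximation idea, assuming a toy netlist representation (a dictionary of two-input gates): an approximate circuit is obtained by tying selected internal lines to constant logic values before evaluation.

```python
# Fault approximation sketch: an approximate circuit is obtained by tying selected
# internal lines to constant logic values. The netlist format is an assumption.

def evaluate(netlist, inputs, approximations=None):
    """Evaluate a combinational netlist; 'approximations' maps line -> constant 0/1."""
    approximations = approximations or {}
    values = dict(inputs)
    values.update(approximations)          # approximated lines override their drivers
    for line, (op, operands) in netlist.items():
        if line in values:
            continue                        # already forced to a constant
        a, b = (values[o] for o in operands)
        values[line] = {"AND": a & b, "OR": a | b, "XOR": a ^ b}[op]
    return values

netlist = {"n1": ("AND", ("a", "b")), "n2": ("XOR", ("n1", "c")), "y": ("OR", ("n2", "d"))}
exact = evaluate(netlist, {"a": 1, "b": 1, "c": 0, "d": 0})
approx = evaluate(netlist, {"a": 1, "b": 1, "c": 0, "d": 0}, approximations={"n1": 0})
```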
The first criterion for fault selection developed in this thesis uses static testability
measures. The approximations are generated from the results of a fault simulation of
the target circuit, and from a user-specified testability threshold. The number of approximated
faults depends on the chosen threshold, which makes it possible to generate approximate circuits with different performances. Although this approach was initially intended for
combinational circuits, an extension to sequential circuits has been performed as well,
by considering the flip-flops as both inputs and outputs of the combinational part of
the circuit. The experimental results show that this technique achieves wide scalability
and an acceptable trade-off between reliability and overheads. In addition, its
computational complexity is very low.
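The static criterion can be pictured with the following hedged sketch: given per-fault detectability figures from a prior fault simulation (the values below are invented for illustration), every fault whose detectability falls below the user-specified threshold is selected for approximation.

```python
# Static criterion sketch: faults whose simulated detectability falls below a
# user-chosen threshold are selected for approximation (tying the faulty line to the
# corresponding constant). The detectability values here are illustrative only.

def select_approximations(fault_detectability, threshold):
    """Return the least testable faults (detection probability below the threshold)."""
    return {fault: prob for fault, prob in fault_detectability.items() if prob < threshold}

# fault -> fraction of simulated patterns that detect it (from a prior fault simulation)
detectability = {("n1", "stuck-at-0"): 0.002, ("n2", "stuck-at-1"): 0.35, ("y", "stuck-at-0"): 0.12}
print(select_approximations(detectability, threshold=0.05))
# -> {('n1', 'stuck-at-0'): 0.002}: line n1 would be tied to constant 0 in the approximate circuit
```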
However, the selection criterion based on static testability measures has some drawbacks.
Adjusting the performance of the generated approximate circuits by means of
the approximation threshold is not intuitive, and the static testability measures do not
take into account the changes that occur as faults are approximated. Therefore, an alternative
criterion is proposed, based on dynamic testability measures. With this
criterion, the testability of each fault is computed by means of an implication-based
probability analysis. The probabilities are updated with each new approximated fault,
in such a way that on each iteration the most beneficial approximation is chosen, that
is, the fault with the lowest probability. In addition, the computed probabilities make it possible
to estimate the level of protection against faults that the generated approximate circuits
provide. Therefore, it is possible to generate circuits that meet a target error rate.
By modifying this target, circuits with different performances can be obtained. The
experimental results show that this new approach is able to meet the target error rate
with reasonably good precision. In addition, the approximate circuits generated with
this technique show better performance than those obtained with the approach based on static
testability measures. Furthermore, the fault implications have also been reused to
implement a new type of logic transformation, which consists of substituting functionally
similar nodes.
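The greedy loop behind the dynamic criterion might look like the sketch below; the probability-update function is only a placeholder for the implication-based analysis described above, and the fault probabilities and target error rate are illustrative.

```python
# Dynamic criterion sketch: repeatedly approximate the fault with the lowest estimated
# error probability until the accumulated error rate would exceed the target.
# 'recompute_probabilities' stands in for the implication-based analysis.

def greedy_fault_approximation(fault_probs, target_error_rate, recompute_probabilities):
    selected, accumulated = [], 0.0
    probs = dict(fault_probs)
    while probs:
        fault = min(probs, key=probs.get)            # most beneficial: lowest probability
        if accumulated + probs[fault] > target_error_rate:
            break                                     # stay within the target error rate
        accumulated += probs[fault]                   # simple additive error estimate
        selected.append(fault)
        probs.pop(fault)
        probs = recompute_probabilities(probs, selected)   # update after each approximation
    return selected, accumulated

# Placeholder update: in the thesis the probabilities change as faults are approximated.
identity_update = lambda probs, selected: probs
faults = {"f1": 0.001, "f2": 0.004, "f3": 0.02, "f4": 0.08}
print(greedy_fault_approximation(faults, target_error_rate=0.01,
                                 recompute_probabilities=identity_update))
```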
Once the fault selection criteria have been developed, they are applied to different
scenarios. First, an extension of the proposed techniques to FPGAs is performed,
taking into account the particularities of this kind of circuit. This approach has been
validated by means of radiation experiments, which show that a partial replication with
approximate circuits can be even more robust than a full replication approach, because
a smaller area reduces the probability of SEE occurrence. In addition, the proposed
techniques have been applied to a real application circuit, in particular the
ARM Cortex-M0 microprocessor. A set of software benchmarks is used to generate
the required testability measures. Finally, a comparative study of the proposed approaches
against approximate circuit generation by means of evolutionary techniques has
been performed. Those approaches rely on a high computational capacity to generate
multiple circuits by trial and error, thus reducing the possibility of falling into
local minima. The experimental results demonstrate that the circuits generated with
evolutionary approaches perform slightly better than the circuits generated with
the techniques proposed here, although at a much higher computational cost.
In summary, several original fault mitigation techniques with approximate logic
circuits are proposed. These approaches are demonstrated in various scenarios, showing
that the scalability and adaptability to the requirements of each application are their
main virtues.
Official Doctoral Programme in Electrical, Electronic and Automatic Engineering. President: Raoul Velazco. Secretary: Almudena Lindoso Muñoz. Committee member: Jaume Segura Fuste
Delay Measurements and Self Characterisation on FPGAs
This thesis examines new timing measurement methods for self delay characterisation of Field-Programmable Gate Array (FPGA) components and delay measurement of complex circuits
on FPGAs. Two novel measurement techniques based on analysis of a circuit's output failure
rate and transition probability are proposed for accurate, precise and efficient measurement of
propagation delays. The transition-probability-based method is especially attractive, since
it requires no modification of the circuit under test and few hardware resources,
making it an ideal method for physical delay analysis of FPGA circuits.
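The failure-rate idea can be illustrated with a small, hedged sketch: the clock period of the circuit under test is swept downward and the propagation delay is estimated from the shortest period at which failures remain below a small threshold; the measurement data and the threshold below are assumptions, and the transition-probability variant is not reproduced here.

```python
# Hedged sketch of delay measurement by failure-rate analysis: sweep the clock period of
# the circuit under test and estimate the propagation delay as the shortest period whose
# observed failure rate is still below a small threshold. Data and threshold are invented.

def estimate_delay(failure_rate_by_period, threshold=0.01):
    """failure_rate_by_period: {clock period in ns: fraction of failing output samples}."""
    passing = [p for p, rate in failure_rate_by_period.items() if rate <= threshold]
    return min(passing) if passing else None   # shortest period that still meets timing

measured = {4.0: 0.0, 3.6: 0.0, 3.2: 0.002, 2.8: 0.31, 2.4: 0.87}
print(f"estimated propagation delay ~ {estimate_delay(measured)} ns")
# -> estimated propagation delay ~ 3.2 ns
```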
The relentless advancements in process technology have led to smaller and denser transistors
in integrated circuits. While FPGA users benefit from this in terms of increased hardware
resources for more complex designs, the actual productivity with FPGAs in terms of timing
performance (operating frequency, latency and throughput) has lagged behind the potential
improvements from the improved technology due to delay variability in FPGA components
and the inaccuracy of timing models used in FPGA timing analysis. The ability to measure
the delay of any arbitrary circuit on an FPGA offers many opportunities for on-chip characterisation
and physical timing analysis, allowing delay variability to be accurately tracked and variation-aware optimisations to be developed, reducing the productivity gap observed in today's FPGA
designs.
The measurement techniques are developed into complete self-measurement and characterisation platforms in this thesis, demonstrating their practical use in actual FPGA hardware for
cross-chip delay characterisation and accurate delay measurement of both complex combinational and sequential circuits, further reinforcing their role in solving the delay variability
problem in FPGAs.
An empirical evaluation of High-Level Synthesis languages and tools for database acceleration
High Level Synthesis (HLS) languages and tools are emerging as the most promising technique to make FPGAs more accessible to software developers. Nevertheless, picking the most suitable HLS for a certain class of algorithms depends on requirements such as area and throughput, as well as on programmer experience. In this paper, we explore the different trade-offs present when using a representative set of HLS tools in the context of Database Management Systems (DBMS) acceleration. More specifically, we conduct an empirical analysis of four representative frameworks (Bluespec SystemVerilog, Altera OpenCL, LegUp and Chisel) that we utilize to accelerate commonly-used database algorithms such as sorting, the median operator, and hash joins. Through our implementation experience and empirical results for database acceleration, we conclude that the selection of the most suitable HLS depends on a set of orthogonal characteristics, which we highlight for each HLS framework.
Analysis and Test of the Effects of Single Event Upsets Affecting the Configuration Memory of SRAM-based FPGAs
SRAM-based FPGAs are increasingly relevant in a growing number of safety-critical application fields, ranging from automotive to aerospace. These application fields are characterized by a harsh radiation environment that can cause the occurrence of Single Event Upsets (SEUs) in digital devices. These faults have particularly adverse effects on SRAM-based FPGA systems because not only can they temporarily affect
the behaviour of the system by changing the contents of flip-flops or memories, but they can also permanently change the functionality implemented by the system itself, by changing the content of the configuration memory. Designing safety-critical applications requires accurate methodologies to evaluate the system's sensitivity to SEUs as early as possible during the design process. Moreover, it is necessary to detect the occurrence of SEUs during the system's lifetime. To this purpose, test patterns should be generated during the design process and then applied to the inputs of the system during its operation. In this thesis we propose a set of software tools that can be used by designers of SRAM-based FPGA safety-critical applications to assess the sensitivity of the system to SEUs and to generate test patterns for in-service testing. The main feature of these tools is that they implement a model of SEUs affecting the configuration bits controlling the logic and routing resources of an FPGA device, which has been demonstrated to be much more accurate than the classical stuck-at and open/short models that are
commonly used in the analysis of faults in digital devices. By taking this accurate
fault model into account, the proposed tools are more accurate than similar academic and commercial tools available today for the analysis of faults in digital circuits, which do not take into account the features of the FPGA technology.
In particular, three tools have been designed and developed: (i) ASSESS: Accurate Simulator of SEUs affecting the configuration memory of SRAM-based FPGAs, a simulator of SEUs affecting the configuration memory of an SRAM-based FPGA system
for the early assessment of the sensitivity to SEUs; (ii) UA2TPG: Untestability Analyzer
and Automatic Test Pattern Generator for SEUs Affecting the Configuration Memory of SRAM-based FPGAs, a static analysis tool for the identification of the untestable SEUs and for the automatic generation of test patterns for in-service testing of 100% of the testable SEUs; and (iii) GABES: Genetic Algorithm Based Environment for SEU Testing in SRAM-FPGAs, a Genetic Algorithm-based environment for the generation of an optimized set of test patterns for in-service testing of SEUs. The proposed tools have been applied to some circuits from the ITC'99 benchmark. The results obtained from these experiments have been compared with results
obtained by similar experiments in which we considered the stuck-at fault model, instead
of the more accurate model for SEUs. From the comparison of these experiments we have been able to verify that the proposed software tools are indeed more accurate than similar tools available today. In particular, the comparison of results obtained using ASSESS with those obtained by fault injection has shown that the proposed fault simulator has an average error of 0.1% and a maximum error of 0.5%, while using a stuck-at fault simulator the average error with respect to the fault injection experiment was 15.1% with a maximum error of 56.2%. Similarly, the comparison of the results obtained using UA2TPG for the accurate SEU model with the results obtained for stuck-at faults has shown an average difference in untestability of 7.9% with a maximum of 37.4%. Finally, the comparison between
fault coverages obtained by test patterns generated for the accurate model of SEUs and the fault coverages obtained by test patterns designed for stuck-at faults shows that the former detect 100% of the testable faults, while the latter reach an average fault coverage of 78.9%, with a minimum of 54% and a maximum of 93.16%.
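As a generic illustration (not the ASSESS, UA2TPG or GABES tools themselves), the sketch below measures fault coverage when SEUs are modelled as single bit flips in a LUT's configuration bits: a flip is counted as detected if at least one test pattern produces an output that differs from the fault-free LUT; the 4-input LUT and the pattern sets are assumptions.

```python
# Generic sketch of SEU fault coverage for configuration bits of a single LUT: each SEU
# flips one configuration bit, and it is "detected" if some test pattern yields an output
# differing from the fault-free LUT. The 4-input LUT and pattern sets are illustrative.

def lut_output(config_bits, pattern):
    """A k-input LUT simply indexes its configuration bits with the input pattern."""
    return (config_bits >> pattern) & 1

def seu_fault_coverage(config_bits, patterns, n_inputs=4):
    n_bits = 1 << n_inputs
    detected = 0
    for bit in range(n_bits):                      # one SEU per configuration bit
        faulty = config_bits ^ (1 << bit)
        if any(lut_output(faulty, p) != lut_output(config_bits, p) for p in patterns):
            detected += 1
    return detected / n_bits

AND4 = 0x8000                                      # truth table of a 4-input AND
print(seu_fault_coverage(AND4, patterns=[0b1111, 0b0000, 0b1110]))   # partial pattern set
print(seu_fault_coverage(AND4, patterns=range(16)))                  # exhaustive: 1.0
```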
Fault Tolerant Electronic System Design
Due to technology scaling, which means reduced transistor size, higher density, lower voltage and more aggressive clock frequencies, VLSI devices may become more sensitive to soft errors. Especially for devices used in safety- and mission-critical applications, dependability and reliability are becoming increasingly important constraints during the development of systems built on or around them. Other phenomena (e.g., aging and wear-out effects) also have negative impacts on the reliability of modern circuits. Recent research shows that even at sea level, radiation particles can still induce soft errors in electronic systems.
On one hand, processor-based systems are commonly used in a wide variety of applications, including safety-critical and high-availability missions, e.g., in the automotive, biomedical and aerospace domains. In these fields, an error may produce catastrophic consequences. Thus, dependability is a primary target that must be achieved taking into account tight constraints in terms of cost, performance, power and time to market. With standards and regulations (e.g., ISO 26262, DO-254, IEC 61508) clearly specifying the targets to be achieved and the methods to prove their achievement, techniques working at the system level are particularly attractive.
On the other hand, Field Programmable Gate Array (FPGA) devices are becoming more and more attractive, also in safety- and mission-critical applications, due to the high performance, low power consumption and flexibility for reconfiguration they provide. Two types of FPGAs are commonly used, classified by their configuration memory cell technology: SRAM-based and Flash-based FPGAs. In SRAM-based FPGAs, the SRAM cells of the configuration memory are highly susceptible to radiation-induced effects, which can lead to system failure; in Flash-based FPGAs, even though the non-volatile configuration memory cells are almost immune to Single Event Upsets induced by energetic particles, the floating gate switches and the logic cells in the configuration tiles can still suffer from Single Event Effects when hit by a highly charged particle. Therefore, analysis and mitigation techniques for Single Event Effects on FPGAs are becoming increasingly important in the design flow, especially when reliability is one of the main requirements.
Programmable flexible cores for SoC applications
Master's thesis. Electrical and Computer Engineering. Faculdade de Engenharia, Universidade do Porto. 200