High Quality Delay Testing Scheme for a Self-Timed Microprocessor
The popularity of the Internet and the huge amount of data transferred between devices nowadays require very powerful servers that consume large amounts of power. Since higher power consumption means higher costs for companies, demand for power-efficient processors is rising. One way to increase the power efficiency of processors is to adapt processing speeds and chip activity to the required computation load. Self-timed, or asynchronous, processors are one of the solutions that apply this principle of activity on demand. However, their unconventional design methodology introduces several challenges in terms of testability and design automation.
This work focuses on developing a high quality delay test for a specific self-timed processor architecture called the AnARM. The proposed delay test focuses on catching effective small-delay defects (SDDs) in the AnARM by taking advantage of its built-in configurable delay lines. Such defects are known to escape one of the most commonly used delay fault models (the transition delay fault model). This work mainly targets effective SDDs that escape transition delay fault testing yet are large enough to fail the circuit under normal operating conditions. At the same time, catching very small delay defects is avoided, when possible, so that functional chips are not falsely failed. To build the high quality delay test, this work first develops an SDD test quality metric that is better suited to circuits with adaptable speeds. It then builds a delay test optimizer that adapts the built-in delay line speeds to a preexisting at-speed pattern set to create a high quality SDD test.
This work presents a novel SDD test quality metric called the weighted slack percentage (WeSPer), along with a new SDD testing model (named the ideal SDD test model). WeSPer is built to be a flexible metric capable of adapting to the availability of information about the circuit under test and the test environment. Since the AnARM can use multiple test speeds, the WeSPer computation carefully assesses the effects of test frequency changes on test quality. In particular, care is taken to avoid overtesting the circuit, which would cause circuits under test to fail due to defects too small to affect their functionality in their present state. A computation framework is built to compute WeSPer and compare it with other metrics in the literature over large sets of process-voltage-temperature computation points. Simulations are carried out on a selected set of known benchmark circuits synthesized in the 28nm FD-SOI technology from STMicroelectronics.
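The slack-weighting idea behind WeSPer can be pictured with a toy computation. This is a hypothetical sketch, not the thesis's actual WeSPer formula: the linear weighting, the `FaultSite` fields, and all numbers below are assumed for illustration only.

```python
# Hypothetical sketch of a slack-weighted SDD test-quality metric: fault
# sites with small timing slack (the ones most likely to fail the chip in
# the field) count more than fault sites with large slack. The weighting
# function is assumed; the real WeSPer definition is given in the thesis.
from dataclasses import dataclass

@dataclass
class FaultSite:
    slack: float      # timing slack at the fault site (ns)
    detected: bool    # whether the pattern set detects an SDD here

def weighted_slack_percentage(faults, clock_period):
    """Percentage of slack-weighted fault sites covered by the test."""
    def weight(f):
        # Smaller slack -> weight closer to 1 (assumed linear weighting).
        return max(0.0, 1.0 - f.slack / clock_period)
    total = sum(weight(f) for f in faults)
    covered = sum(weight(f) for f in faults if f.detected)
    return 100.0 * covered / total if total > 0 else 100.0

faults = [FaultSite(0.1, True), FaultSite(0.5, True), FaultSite(0.9, False)]
print(weighted_slack_percentage(faults, clock_period=1.0))  # ~93.3
```

Note how the undetected fault barely hurts the score because its large slack makes it unlikely to matter at speed, which is the intuition the abstract describes for avoiding overtesting.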
AI/ML Algorithms and Applications in VLSI Design and Technology
An evident challenge ahead for the integrated circuit (IC) industry in the
nanometer regime is the investigation and development of methods that can
reduce the design complexity ensuing from growing process variations and
curtail the turnaround time of chip manufacturing. Conventional methodologies
employed for such tasks are largely manual; thus, time-consuming and
resource-intensive. In contrast, the unique learning strategies of artificial
intelligence (AI) provide numerous exciting automated approaches for handling
complex and data-intensive tasks in very-large-scale integration (VLSI) design
and testing. Employing AI and machine learning (ML) algorithms in VLSI design
and manufacturing reduces the time and effort for understanding and processing
the data within and across different abstraction levels via automated learning
algorithms. It, in turn, improves the IC yield and reduces the manufacturing
turnaround time. This paper thoroughly reviews the AI/ML approaches
introduced to date for VLSI design and manufacturing. Moreover, we
discuss the scope of future AI/ML applications at various abstraction
levels to revolutionize the field of VLSI design, aiming for high-speed,
highly intelligent, and efficient implementations.
Addressing Manufacturing Challenges in NoC-based ULSI Designs
Hernández Luz, C. (2012). Addressing Manufacturing Challenges in NoC-based ULSI Designs [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1669
Fault modelling and accelerated simulation of integrated circuits manufacturing defects under process variation
As the silicon manufacturing process scales to and beyond the 65-nm node, process variation can no longer be ignored. The impact of process variation on integrated circuit performance and power has received significant research attention. Variation-aware test, on the other hand, is a relatively new research area that is currently receiving attention worldwide. Research has shown that test without considering process variation may lead to loss of test quality. Fault modelling and simulation serve as a backbone of manufacturing test. This thesis is concerned with developing efficient fault modelling techniques and simulation methodologies that take into account the effect of process variation on manufacturing defects, with particular emphasis on resistive bridges and resistive opens.
The first contribution addresses the long computation time required to generate logic faults of resistive bridges under process variation by developing a fast and accurate technique to model the logic fault behaviour of resistive bridges. The technique employs two efficient voltage calculation algorithms to calculate the logic threshold voltage of driven gates and the critical resistance of a fault site, enabling the computation of bridge logic faults without using SPICE. Simulation results show that the technique is fast (on average 53 times faster) and accurate (worst-case error of 2.64%) when compared with HSPICE.
The second contribution analyses the complexity of delay fault simulation of resistive bridges to reduce its computation time under process variation. An accelerated delay fault simulation methodology for resistive bridges is developed, employing a three-step strategy to speed up the calculation of the transient gate output voltage needed to accurately compute delay faults. Simulation results show that the methodology is on average 17.4 times faster than HSPICE, with a 5.2% error in accuracy.
The final contribution presents an accelerated simulation methodology for resistive opens to address the long simulation time of delay faults under process variation. The methodology uses two efficient algorithms to accelerate the computation of the transient gate output voltage and the timing-critical resistance of an open fault site. Simulation results show that the methodology is on average up to 52 times faster than HSPICE, with a 4.2% error in accuracy.
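The critical-resistance idea behind the first contribution can be illustrated with a simple linear driver model. This is a hedged sketch, not the thesis's algorithm: the voltage-divider model and all resistance/threshold values are assumed round numbers, whereas the thesis computes these quantities with accurate gate-level algorithms in place of SPICE.

```python
# Toy linear model of a resistive bridge between a node driven to logic 1
# (through pull-up resistance r_p) and a node driven to logic 0 (through
# pull-down resistance r_n). A logic fault appears when the bridge lifts
# the logic-0 victim node above the driven gate's logic threshold voltage.
# All device values are illustrative assumptions.

def victim_voltage(r_bridge, vdd=1.0, r_p=2e3, r_n=1e3):
    # Voltage on the logic-0 victim node: a simple resistive divider.
    return vdd * r_n / (r_p + r_bridge + r_n)

def critical_resistance(v_logic_th, vdd=1.0, r_p=2e3, r_n=1e3):
    """Largest bridge resistance that still flips the driven gate:
    solve vdd * r_n / (r_p + R + r_n) = v_logic_th for R."""
    return vdd * r_n / v_logic_th - r_p - r_n

r_crit = critical_resistance(v_logic_th=0.25)
print(r_crit)                  # bridges below this resistance cause a logic fault
print(victim_voltage(r_crit))  # exactly the logic threshold at R = r_crit
```

Under process variation, both the logic threshold and the driver resistances shift per sample, so the critical resistance becomes a distribution rather than a single value, which is what makes naive SPICE-based evaluation so expensive.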
A novel deep submicron bulk planar sizing strategy for low energy subthreshold standard cell libraries
Funding: Engineering and Physical Sciences Research Council (EPSRC) and Arm Ltd, through grants and studentships.
This work investigates bulk planar deep submicron semiconductor physics in an attempt
to improve standard cell libraries aimed at operation in the subthreshold regime and in
Ultra Wide Dynamic Voltage Scaling schemes. The current state of research in the field is
examined, with particular emphasis on how subthreshold physical effects degrade
robustness and performance and increase variability. The prevalence of these
physical effects in a commercial 65nm library is then investigated by extensive
modeling of a BSIM4.5 compact model. Three distinct sizing strategies emerge;
cells of each strategy are laid out, and post-layout parasitically extracted
models are simulated to determine the advantages and disadvantages of each.
Full custom ring oscillators are designed and manufactured. Measured results
reveal a close correlation with the simulated results, with frequency
improvements of up to 2.75X/2.43X observed for RVT/LVT devices
respectively. The experiment provides the first silicon evidence of the improvement
capability of the Inverse Narrow Width Effect over a wide supply voltage range, as well
as a mechanism of additional temperature stability in the subthreshold regime.
A novel sizing strategy is proposed and pursued to determine whether it is able to produce
a superior complex circuit design using a commercial digital synthesis flow. Two
128-bit AES cores are synthesized using the novel sizing strategy and compared
against a third
AES core synthesized from a state-of-the-art subthreshold standard cell library used by
ARM. Results show improvements in energy-per-cycle of up to 27.3% and frequency
improvements of up to 10.25X. The novel subthreshold sizing strategy proves superior
over a temperature range of 0 °C to 85 °C with a nominal (20 °C) improvement in
energy-per-cycle of 24% and frequency improvement of 8.65X.
A comparison with prior art is then performed, and valid cases are presented
where the proposed sizing strategy would be a candidate to produce superior
subthreshold circuits.
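Why sizing and variability dominate in this regime follows from the classic subthreshold drain-current model, in which current depends exponentially on the gate overdrive. A minimal sketch, with I0, the ideality factor n, and all voltage values assumed for illustration (they are not taken from the thesis):

```python
import math

def subthreshold_current(vgs, vth, w_over_l=1.0, i0=1e-7, n=1.5, temp_k=293.0):
    """Classic subthreshold drain-current model (Vds >> kT/q assumed):
        Id = I0 * (W/L) * exp((Vgs - Vth) / (n * vT))
    I0, n, and the bias values are illustrative assumptions."""
    vt = 1.380649e-23 * temp_k / 1.602176634e-19   # thermal voltage kT/q (~25 mV)
    return i0 * w_over_l * math.exp((vgs - vth) / (n * vt))

# A 30 mV threshold-voltage shift, of the order RDF-induced variation can
# produce, changes the on-current by roughly 2.2x at room temperature --
# the kind of spread the sizing strategies above must absorb.
ratio = subthreshold_current(0.3, 0.40) / subthreshold_current(0.3, 0.43)
print(ratio)
```

The exponential dependence also hints at the temperature behaviour studied above: the thermal voltage vT grows with temperature, flattening the exponential and changing relative drive strengths across the 0 °C to 85 °C range.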
A Review of Bayesian Methods in Electronic Design Automation
The utilization of Bayesian methods has been widely acknowledged as a viable
solution for tackling various challenges in electronic integrated circuit (IC)
design under stochastic process variation, including circuit performance
modeling, yield/failure rate estimation, and circuit optimization. As the
post-Moore era brings about new technologies (such as silicon photonics and
quantum circuits), many of the associated issues are similar to those
encountered in electronic IC design and can be addressed using Bayesian
methods. Motivated by this observation, we present a comprehensive review of
Bayesian methods in electronic design automation (EDA). By doing so, we hope to
equip researchers and designers with the ability to apply Bayesian methods in
solving stochastic problems in electronic circuits and beyond.
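One problem class the review names, yield/failure-rate estimation under stochastic process variation, can be sketched as a Beta-Binomial Bayesian estimate over Monte Carlo samples. The "circuit" below is an invented stand-in performance model, and the spec and variation sigma are assumed; real flows use SPICE or surrogate models in its place.

```python
# Minimal sketch: Monte Carlo failure counting under Gaussian process
# variation, with a conjugate Beta(1,1) prior on the failure rate so the
# estimate is a Bayesian posterior mean rather than a raw frequency.
import random

random.seed(0)

def delay_ps(vth_shift_mv):
    # Assumed toy performance model: delay grows with threshold shift.
    return 100.0 + 1.8 * vth_shift_mv

def posterior_failure_rate(n_samples=10_000, spec_ps=110.0, sigma_mv=4.0):
    fails = sum(delay_ps(random.gauss(0.0, sigma_mv)) > spec_ps
                for _ in range(n_samples))
    # Beta(1,1) uniform prior -> Beta(1 + fails, 1 + passes) posterior;
    # return the posterior mean failure rate.
    return (1 + fails) / (2 + n_samples)

print(posterior_failure_rate())   # roughly 0.08 for these assumed numbers
```

The Bayesian framing pays off exactly where the review says it does: for rare failures, the posterior also yields credible intervals, and surrogate models (e.g. Gaussian processes) replace the brute-force sampling shown here.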
Needs, trends, and advances in scintillators for radiographic imaging and tomography
Scintillators are important materials for radiographic imaging and tomography
(RadIT), when ionizing radiations are used to reveal internal structures of
materials. Since its invention by R\"ontgen, RadIT now come in many modalities
such as absorption-based X-ray radiography, phase contrast X-ray imaging,
coherent X-ray diffractive imaging, high-energy X- and ray radiography
at above 1 MeV, X-ray computed tomography (CT), proton imaging and tomography
(IT), neutron IT, positron emission tomography (PET), high-energy electron
radiography, muon tomography, etc. Spatial, temporal resolution, sensitivity,
and radiation hardness, among others, are common metrics for RadIT performance,
which are enabled by, in addition to scintillators, advances in high-luminosity
accelerators and high-power lasers, photodetectors especially CMOS pixelated
sensor arrays, and lately data science. Medical imaging, nondestructive
testing, nuclear safety and safeguards are traditional RadIT applications.
Examples of growing or emerging applications include space, additive
manufacturing, machine vision, and virtual reality or 'metaverse'. Scintillator
metrics such as light yield and decay time are correlated to RadIT metrics.
More than 160 kinds of scintillators and applications are presented during the
SCINT22 conference. New trends include inorganic and organic scintillator
heterostructures, liquid phase synthesis of perovskites and μm-thick films,
use of multiphysics models and data science to guide scintillator development,
structural innovations such as photonic crystals, nanoscintillators enhanced by
the Purcell effect, novel scintillator fibers, and multilayer configurations.
Opportunities exist through optimization of RadIT with reduced radiation dose,
data-driven measurements, photon/particle counting and tracking methods
supplementing time-integrated measurements, and multimodal RadIT.
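As a hedged illustration of how the scintillator metrics named above (light yield, decay time) feed a RadIT metric (temporal resolution): to first order, single-detector timing spread scales as the square root of decay time over detected photons, so a faster, brighter scintillator times events more precisely. The prefactor and photon counts below are assumed round numbers, not SCINT22 data.

```python
import math

def timing_sigma_ps(decay_time_ns, photoelectrons):
    """First-order scaling of timing spread: sigma_t ~ sqrt(tau_decay / N_pe).
    The 1000 prefactor just sets convenient ps units; it is an assumption."""
    return 1000.0 * math.sqrt(decay_time_ns / photoelectrons)

fast = timing_sigma_ps(decay_time_ns=40.0, photoelectrons=4000)   # fast, bright material
slow = timing_sigma_ps(decay_time_ns=300.0, photoelectrons=8000)  # slow material
print(fast, slow)   # the faster scintillator wins despite fewer photons
```

This is the kind of correlation between scintillator metrics and RadIT metrics the abstract refers to; real detector timing also depends on rise time, photodetector jitter, and electronics.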
inSense: A Variation and Fault Tolerant Architecture for Nanoscale Devices
Transistor technology scaling has been the driving force in improving the size, speed, and power consumption of digital systems. As devices approach atomic size, however, their reliability and performance are increasingly compromised due to reduced noise margins, difficulties in fabrication, and emergent nano-scale phenomena. Scaled CMOS devices, in particular, suffer from process variations such as random dopant fluctuation (RDF) and line edge roughness (LER), transistor degradation mechanisms such as negative-bias temperature instability (NBTI) and hot-carrier injection (HCI), and increased sensitivity to single event upsets (SEUs). Consequently, future devices may exhibit reduced performance, diminished lifetimes, and poor reliability.
This research proposes a variation and fault tolerant architecture, the inSense architecture, as a circuit-level solution to the problems induced by the aforementioned phenomena. The inSense architecture augments circuits with introspective and sensory capabilities that dynamically detect and compensate for process variations, transistor degradation, and soft errors. This approach creates "smart" circuits able to function despite the use of unreliable devices and is applicable to current CMOS technology as well as next-generation devices using new materials and structures. Furthermore, this work presents an automated prototype implementation of the inSense architecture targeted to CMOS devices, evaluated on the ISCAS '85 benchmark circuits. The automated prototype implementation is functionally verified and characterized: error detection capability (with error windows from 30-400ps) can be added for less than 2% area overhead for circuits of non-trivial complexity. Single event transient (SET) detection capability (configurable with target set-points) is found to be functional, although it generally tracks the standard DMR implementation with respect to overheads.
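The error-window figures above can be pictured with a window-based detector in the style of double-sampling (shadow-latch) schemes. The abstract does not state that inSense uses exactly this mechanism, so the model below is a generic sketch with assumed timings: a transition arriving after the clock edge but inside the detection window is flagged instead of silently corrupting state.

```python
# Generic window-based timing-error detector sketch (double-sampling style).
# clock_edge_ps and window_ps are assumed values; the 30-400 ps range in the
# abstract is the configurable window size reported for the prototype.

def detects_error(arrival_ps, clock_edge_ps=1000.0, window_ps=200.0):
    """Flag transitions arriving after the edge but within the error window."""
    return clock_edge_ps < arrival_ps <= clock_edge_ps + window_ps

print(detects_error(950.0))    # on-time arrival: no error flagged
print(detects_error(1100.0))   # 100 ps late: caught inside the window
print(detects_error(1500.0))   # beyond the window: invisible to this sensor
```

The last case shows the design trade-off the overhead numbers reflect: a wider window catches larger delay errors but costs more area and constrains short paths.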