Soft error rate estimation in deep sub-micron CMOS
Soft errors resulting from the impact of charged particles are emerging as a major issue in the design of reliable circuits at deep sub-micron dimensions. In this paper, we model the sensitivity of individual circuit classes to single event upsets using predictive technology models over a range of CMOS device sizes from 90 nm down to 32 nm. Modeling the relative position of particle strikes as injected current pulses of varying amplitude and fall time, we find that the critical charge for each technology is an almost linear function of both the fall time of the injected current and the supply voltage. This simple relationship will simplify the task of estimating circuit-level soft error rate (SER) and support the development of an efficient SER modeling and optimization tool that might eventually be integrated into a high-level language design flow.
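The near-linear relationship reported above lends itself to a simple regression. As a sketch, the samples below are hypothetical (fall time, supply voltage, critical charge) triples; in practice they would come from the current-injection simulations the paper describes.

```python
import numpy as np

# Hypothetical (fall time, supply voltage, critical charge) samples; units
# and values are illustrative only.
t_fall = np.array([10.0, 20.0, 30.0, 10.0, 20.0, 30.0])  # injected-pulse fall time, ps
v_dd   = np.array([0.9, 0.9, 0.9, 1.1, 1.1, 1.1])        # supply voltage, V
q_crit = np.array([1.8, 2.6, 3.4, 2.5, 3.3, 4.1])        # critical charge, fC

# Least-squares fit of the near-linear model Q_crit ~ a + b*t_fall + c*V_dd.
A = np.column_stack([np.ones_like(t_fall), t_fall, v_dd])
(a, b, c), *_ = np.linalg.lstsq(A, q_crit, rcond=None)

# Interpolate Q_crit at an unsimulated operating point.
q_pred = a + b * 25.0 + c * 1.0
```

Once calibrated per technology node, such a fit lets an SER tool evaluate critical charge at arbitrary operating points without rerunning circuit simulation.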
Characterizing the influence of neutron fields in causing single-event effects using portable detectors
The malfunction of semiconductor devices caused by cosmic rays is known as Single Event Effects (SEEs).
In the atmosphere, secondary neutrons are the dominant particles causing this effect. The neutron flux density in the atmosphere is very low, so millions of device-hours are required to measure the event rate of a device in the natural environment with good statistical certainty. Event rates obtained from such tests are accurate.
To reduce the cost and time of obtaining the event rate, a device is normally exposed to an artificially accelerated neutron beam to measure its sensitivity to neutrons. By comparing the flux density of the beam with the flux density at a location in the atmosphere, the real-time event rate can be predicted from the event rate measured in the beam. This testing method was standardized as neutron accelerated soft error rate (ASER) testing in the JEDEC JESD89A standard.
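The flux-ratio scaling behind ASER testing can be sketched numerically. All numbers below are assumed for illustration: a hypothetical beam run, and a reference ground-level neutron flux of roughly 13 n/cm²/h (the order of magnitude used at sea level).

```python
# Hypothetical accelerated-beam run.
beam_flux = 1.0e6            # n/cm^2/s in the accelerated beam (assumed)
beam_time_s = 3600.0         # exposure time in the beam (assumed)
beam_events = 120            # upsets observed during the exposure (assumed)

field_flux = 13.0 / 3600.0   # n/cm^2/s at the reference ground-level location (assumed)

# Device upset cross-section: events per unit neutron fluence.
fluence = beam_flux * beam_time_s
cross_section = beam_events / fluence          # cm^2 per device

# Predicted real-time rate, expressed in FIT (failures per 1e9 device-hours).
field_rate_per_s = cross_section * field_flux
fit_rate = field_rate_per_s * 3600.0 * 1.0e9
```

The prediction is only as good as the two flux densities in the ratio, which is exactly the error source the thesis identifies below.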
However, several life tests have indicated that the event rate predictions given by accelerated testing can have large errors; discrepancies of up to a factor of two have been reported in the literature. One of the major error sources is the assumed equivalence of the absolute neutron flux density in the atmosphere and in the accelerated beam.
This thesis proposes an alternative accelerated method of predicting the real-time neutron error rate using proxy devices. This method avoids the error introduced by the uncertainty in the neutron flux density.
The Imaging Single Event Effect Monitor (ISEEM) is one such proxy device. The instrument was originally developed by Z. Török and his co-workers at the University of Central Lancashire; a CCD is used as the sensitive element to detect neutrons. A large amount of data acquired by Török was used in this work. A re-engineered ISEEM has been developed in this work to improve its performance in life tests. Theoretical models have been developed to analyze the response of ISEEM across a wide range of neutron facilities and in the natural environment. The measured and calculated cross-sections agree within the errors quoted by the facilities. Because of alpha contamination and primary-proton direct ionization effects, the performance of ISEEM in life tests appeared to be weak.
Fast and accurate SER estimation for large combinational blocks in early stages of the design
Soft Error Rate (SER) estimation is an important challenge for integrated circuits because of the increased vulnerability brought by technology scaling. This paper presents a methodology to estimate, in the early stages of design, the susceptibility of combinational circuits to particle strikes. At the core of the framework lies MASkIt, a novel approach that combines signal probabilities with technology characterization to swiftly compute the logical, electrical, and timing masking effects of the circuit under study, taking into account all input combinations and pulse widths at once. Signal probabilities are estimated by applying a new hybrid approach that integrates heuristics with selective simulation of reconvergent subnetworks. The experimental results validate our proposed technique, showing a speedup of two orders of magnitude in comparison with traditional fault injection estimation, with an average estimation error of 5 percent. Finally, we analyze the vulnerability of the Decoder, Scheduler, ALU, and FPU of an out-of-order, superscalar processor design. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness and FEDER funds under grant TIN2013-44375-R, by the Generalitat de Catalunya under grant FI-DGR 2016, and by the FP7 program of the EU under contract FP7-611404 (CLERECO).
Investigating the effects of single-event upsets in static and dynamic registers
Radiation-induced single-event upsets (SEUs) pose a serious threat to the reliability of registers. Existing SEU analyses for static CMOS registers focus on the circuit-level impact and may underestimate the pertinent SEU information provided through node analysis. This thesis proposes SEU node analysis to evaluate the sensitivity of static registers and applies the obtained node information to improve the robustness of the register through the selective node hardening (SNH) technique. Unlike previous hardening techniques such as Triple Modular Redundancy (TMR) and the Dual Interlocked Cell (DICE) latch, the SNH method does not introduce large area overhead. Moreover, this thesis also explores the impact of SEUs in dynamic flip-flops, which are appealing for the design of high-performance microprocessors. Previous work either uses the approaches developed for static flip-flops to evaluate SEU effects in dynamic flip-flops or overlooks SEUs injected during the precharge phase. In this thesis, possible SEU-sensitive nodes in dynamic flip-flops are re-examined and their window of vulnerability (WOV) is extended. Simulation results for SEU analysis in non-hardened dynamic flip-flops reveal that the last 55.3% of the precharge time and 100% of the evaluation time are affected by SEUs.
IC design for reliability
As the feature size of integrated circuits goes down to the nanometer scale,
transient and permanent reliability issues are becoming a significant concern for circuit
designers. Traditionally, the reliability issues were mostly handled at the device level as a
device engineering problem. However, the increasing severity of reliability challenges
and higher error rates due to transient upsets favor higher-level design for reliability
(DFR). In this work, we develop several methods for DFR at the circuit level.
A major source of transient errors is the single event upset (SEU). SEUs are
caused by high-energy particles present in cosmic rays or emitted by radioactive
contaminants in the chip packaging materials. When these particles hit an N+/P+ depletion
region of an MOS transistor, they may generate a temporary logic fault. Depending on
where the MOS transistor is located and what state the circuit is in, an SEU may result in
a circuit-level error. We analyze SEUs both in combinational logic and memories
(SRAM). For combinational logic circuits, we propose FASER, a Fast Analysis tool of
Soft ERror susceptibility for cell-based designs. The efficiency of FASER is achieved
through its static and vector-less nature. In order to evaluate the impact of SEUs on SRAM, a theory for estimating dynamic noise margins is developed analytically. The
results allow predicting the transient error susceptibility of an SRAM cell using a closed-form
expression.
Among the many permanent failure mechanisms, which include time-dependent
dielectric breakdown (TDDB), electromigration (EM), the hot carrier effect (HCE), and
negative bias temperature instability (NBTI), NBTI has recently become important.
Therefore, the main focus of our work is NBTI. NBTI occurs when the gate of a PMOS
transistor is negatively biased. The voltage stress across the gate generates interface traps,
which degrade the threshold voltage of the PMOS device. The degraded PMOS may eventually
fail to meet timing requirements and cause functional errors. NBTI becomes severe at elevated
temperatures. In this dissertation, we propose an NBTI degradation model that takes into
account the temperature variation on the chip and gives an accurate estimate of the
degraded threshold voltage.
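The temperature dependence described above can be sketched with a common textbook form: a power law in stress time with an Arrhenius temperature factor. The coefficients below are hypothetical, not the dissertation's fitted values.

```python
import math

# Illustrative NBTI sketch; coefficients (a, ea_ev, n) are hypothetical.
def delta_vth(t_stress_s, temp_k, a=5.0e-3, ea_ev=0.1, n=0.16):
    """Threshold-voltage shift in volts after t_stress_s seconds at temp_k kelvin."""
    k_b = 8.617e-5  # Boltzmann constant, eV/K
    return a * math.exp(-ea_ev / (k_b * temp_k)) * t_stress_s ** n

# The same stress time degrades a hotter chip more.
dv_cool = delta_vth(1.0e7, 300.0)   # roughly four months at 27 C
dv_hot  = delta_vth(1.0e7, 400.0)   # the same time at 127 C
```

A model of this shape makes the role of on-chip temperature explicit, which is what the dissertation's chip-level temperature-aware estimation exploits.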
In order to account for the degradation of devices, traditional design methods add
guard-bands to ensure that the circuit will function properly during its lifetime. However,
the worst-case guard-bands lead to a significant performance penalty. In this
dissertation, we propose an effective macromodel-based reliability tracking and
management framework, based on a hybrid network of on-chip sensors, consisting of
temperature sensors and ring oscillators. The model is concerned specifically with NBTI-induced
transistor aging. The key feature of our work, in contrast to traditional
tracking techniques that rely solely on direct measurement of the increase in threshold
voltage or circuit delay, is an explicit macromodel which maps operating temperature to
circuit degradation (the increase in circuit delay). The macromodel allows for cost-effective
tracking of reliability using temperature sensors and is also essential for
enabling the control loop of the reliability management system.
The developed methods improve on the over-conservatism of device-level, worst-case
reliability estimation techniques. As the severity of reliability challenges continues to
grow with technology scaling, it will become more important for circuit designers/CAD
tools to be equipped with the developed methods.
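The macromodel-based tracking described above can be sketched as a loop that maps sampled sensor temperatures to accumulated delay degradation, rather than measuring delay directly. The functional form and every coefficient below are hypothetical.

```python
import math

# Sketch of the macromodel idea: accumulate aging from a temperature trace.
def delay_degradation(temps_k, dt_s, a=1.0e-6, ea_ev=0.1, k_b=8.617e-5):
    """Relative delay increase accumulated over temperature samples temps_k,
    each held for dt_s seconds; hotter intervals contribute more aging."""
    return sum(a * math.exp(-ea_ev / (k_b * t_k)) * dt_s for t_k in temps_k)

# Hypothetical hourly on-chip temperature-sensor samples (kelvin).
trace = [310.0, 330.0, 350.0, 330.0]
deg = delay_degradation(trace, dt_s=3600.0)
```

Because the input is just a temperature trace, cheap distributed sensors suffice for tracking, and the output feeds the control loop of the reliability management system.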
Soft Error Rate Analysis in Combinatorial Logic
We develop a simple model that computes the probability that a strike at the output of a gate has an impact on any output by traversing the circuit backwards from the outputs, gathering information about logical masking from signal probabilities. The model is validated with fault injection.
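The logical-masking step of the traversal can be sketched on a toy netlist. The gate structure, signal probabilities, and function names below are hypothetical: a glitch on one input of an AND gate is masked unless the other inputs are 1 (for OR, unless they are 0).

```python
# Hypothetical signal probabilities, P(signal = 1).
signal_prob = {"b": 0.5, "c": 0.25}

# gate name -> (type, input signals); g1 drives a primary output.
gates = {"g1": ("AND", ["b", "c"])}

def propagation_prob(gate, struck_input):
    """Probability a glitch on struck_input is not logically masked at gate."""
    gtype, inputs = gates[gate]
    p = 1.0
    for other in inputs:
        if other == struck_input:
            continue
        # AND passes the glitch when the side input is 1, OR when it is 0.
        p *= signal_prob[other] if gtype == "AND" else 1.0 - signal_prob[other]
    return p

# A strike on c reaches the output whenever b = 1:
p_obs = propagation_prob("g1", "c")
```

In the backward traversal, these per-gate factors are multiplied along each path from the struck node to the outputs, accumulating an observability for every internal node.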
Numerical Methods and Smolyak Quadrature for Nonlinear Stochastic Partial Differential Equations
We describe methods for the numerical solution of nonlinear problems with stochastic uncertainties in the operator, boundary conditions, and right hand side. First, we compute statistics of the solution directly as high-dimensional integrals and compare their evaluation by sparse (Smolyak) quadrature and Monte Carlo integration. Subsequently, we employ a Galerkin method to obtain an expansion of the solution in a stochastic ansatz-space. This requires the numerical evaluation of the residual, which is again a high-dimensional integral, and we show that sparse quadrature may be an efficient technique for this. The large nonlinear system resulting from the Galerkin conditions is solved by quasi-Newton methods. Finally, we alternatively compute the expansion of the solution by direct orthogonal projection onto stochastic ansatz-functions. We apply the methods to a prototype nonlinear groundwater-flow situation (pressure equation).
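The quadrature-versus-Monte-Carlo comparison can be illustrated on a toy 2-D expectation E[g(Y)], Y ~ N(0, I), whose exact value is exp(0.005)·exp(-0.005) = 1. As a stand-in for a Smolyak grid, the sketch uses the full tensor Gauss-Hermite rule, which is exactly the kind of tensor product that Smolyak grids sparsify in higher dimensions.

```python
import numpy as np

def g(y1, y2):
    return np.exp(0.1 * y1) * np.cos(0.1 * y2)

# Probabilists' Gauss-Hermite nodes/weights (weight function exp(-x^2/2)).
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
w = weights / np.sqrt(2.0 * np.pi)   # normalize against the N(0,1) density

# Full tensor-product quadrature in two dimensions.
quad = sum(wi * wj * g(yi, yj)
           for yi, wi in zip(nodes, w)
           for yj, wj in zip(nodes, w))

# Plain Monte Carlo for comparison (fixed seed for reproducibility).
rng = np.random.default_rng(0)
ys = rng.standard_normal((200_000, 2))
mc = g(ys[:, 0], ys[:, 1]).mean()
```

With 64 nodes the quadrature is accurate to near machine precision for this smooth integrand, while 200,000 Monte Carlo samples still carry sampling noise; the tensor grid, however, grows exponentially with dimension, which motivates the sparse construction.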
The effects of ionising radiation on implantable MOS electronic devices
Space exploration and the rapid growth of the satellite communications industry have
promoted substantial research into the effects of ionising radiation on modern electronic
technology. Enabled by the same electronics and computer processing, medicine has seen
a commensurate growth in the use of radiation for diagnostic and therapeutic purposes.
Numerous studies exist in both these fields, but an analysis combining the fields of study
to ascertain the effects of radiation on medically implantable electronics is lacking.
A review of significant ground level radiation sources is presented with particular
emphasis on the medical environment. Mechanisms of permanent and transient ionising
radiation damage to Metal Oxide Semiconductor (MOS) devices are summarised. Three significant
sources of radiation are classified as having the ability to damage or alter the behaviour
of implantable electronics: secondary neutron cosmic radiation, alpha particle radiation
from the device packaging, and therapeutic doses of high-energy radiation.
With respect to cosmic radiation, the most sensitive circuit structure within a typical
microcomputer architecture is the Random Access Memory (RAM). A theoretical model
which predicts the susceptibility of a RAM cell to single event upsets from secondary
cosmic ray neutrons is presented. A previously unreported method for calculating the
collection efficiency term in the upset model has been derived along with an extension
of the model to enable estimation of multiple bit upset rates.
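A model of this type ultimately folds a neutron flux spectrum with a per-bit upset cross-section. The sketch below illustrates that folding; every number in it is hypothetical, and the cross-section threshold stands in for the collection-efficiency and critical-charge terms of the actual model.

```python
import numpy as np

# Hypothetical energy-binned neutron flux and per-bit upset cross-section.
energies = np.array([10.0, 50.0, 100.0, 500.0])   # bin centers, MeV
flux     = np.array([2e-4, 1e-4, 5e-5, 1e-5])     # n/cm^2/s per bin (assumed)
sigma    = np.array([0.0, 1e-14, 2e-14, 3e-14])   # upset cross-section, cm^2/bit (assumed)

n_bits = 64 * 1024 * 8                             # a hypothetical 64 KiB RAM

per_bit_rate = float(np.dot(flux, sigma))          # upsets per bit per second
device_rate = per_bit_rate * n_bits                # upsets per device per second
mtbu_years = 1.0 / (device_rate * 3600 * 24 * 365) # mean time between upsets
```

Scaling the flux term by altitude and latitude is what produces the geographical variation against which the clinical data set below is checked.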
An Implantable Cardioverter Defibrillator is used as a case example to demonstrate
model applicability and test against clinical experience. The model correlates well with
clinical experience and is consistent with the expected geographical variations of the
secondary cosmic ray neutron flux. This is the first clinical data set obtained indicating
the effects of cosmic radiation on implantable devices. Importantly, it may be used to
predict the susceptibility of future implantable device designs to cosmic radiation.
The model is also used as a basis for developing radiation hardened circuit techniques
and system design. A review of methods to radiation harden electronics to single event
upsets is used to recommend methods applicable to the low-power/small-area
constraints of implantable systems.
Latent class models and latent transition models for dietary pattern analysis
Dietary patterns (DP) are used to study the effects of overall diet on health outcomes, as opposed to the effects of individual nutrients or foods. DP are mostly derived empirically using factor and cluster analysis. Latent class models (LCM) have been shown empirically to be more appropriate for deriving DP than cluster analysis, but they have not yet been compared to DP derived by factor analysis. We derive DP using LCM and factor analysis on food items, test how well the resulting classes are characterized by the factor scores, and compare subjects' direct classification from LCM with two a posteriori classifications from factor scores: one possible classification using tertiles, and a two-step classification using LCM on previously derived factor scores. In order to study changes in dietary patterns over time, we propose using latent transition models to study change as characterized by movement between discrete dietary patterns. Latent transition models directly classify subjects into mutually exclusive DP at each time point and allow predictors for class membership and for the probabilities of changing classes over time. There are several challenges particular to DP analysis: a large (≥ 80) number of food items, non-standard mixture distributions (continuous with a mass point at zero for non-consumption), and typical assumptions (conditional independence given the class and time point, time-invariant conditional responses, and invariant transition probabilities) that may not be realistic. We compare performance, capabilities, and flexibility between two software packages (Mplus and a user-derived procedure in SAS) that allow fitting latent transition models. A key decision when deriving DP is whether or not to collapse the primary dietary data into a smaller number of items called food groups. Advantages of collapsing include dimension reduction and a decrease in the number of non-consumers, which reduces the mass point at zero.
However, not collapsing helps our understanding of which combinations of specific foods are consumed. Further, food grouping may have an impact on the association between DP and health outcomes. We explore, via a Monte Carlo simulation study, whether food grouping makes a difference when deriving DP using LCM. Methods are illustrated using data from the Pregnancy, Infection and Nutrition (PIN) Study.
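The transition-matrix idea at the heart of latent transition models can be sketched with known rather than latent class labels. In a real LTM, class memberships and transition probabilities are estimated jointly (e.g. by EM in Mplus or SAS); the label sequences below are hypothetical.

```python
import numpy as np

# Hypothetical dietary-pattern class labels for 8 subjects at two time points.
classes_t1 = np.array([0, 0, 1, 1, 2, 2, 0, 1])
classes_t2 = np.array([0, 1, 1, 1, 2, 0, 0, 2])
n_classes = 3

# Count movements between classes across the two time points.
counts = np.zeros((n_classes, n_classes))
for i, j in zip(classes_t1, classes_t2):
    counts[i, j] += 1

# Row-normalize: transition[i, j] = P(class j at time 2 | class i at time 1).
transition = counts / counts.sum(axis=1, keepdims=True)
```

The invariant-transition-probability assumption mentioned above amounts to using one such matrix for every pair of adjacent time points.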