125 research outputs found
Testing of leakage current failure in ASIC devices exposed to total ionizing dose environment using design for testability techniques
Due to advances in technology, electronic devices are increasingly relied upon to operate under harsh conditions. Radiation is one of the main causes of failure in electronic devices. Depending on the operating environment, the radiation sources can be terrestrial or extraterrestrial. Terrestrially, devices may be used in nuclear reactors or biomedical equipment, where the radiation is man-made; extraterrestrially, devices may be used in satellites, the International Space Station, or spacecraft, where the radiation comes from sources such as the Sun. The effects of radiation differ with the operating environment and fall into two categories: total ionizing dose (TID) effects and single event effects (SEEs). TID effects can degrade the delay and leakage current of CMOS circuits and can therefore hinder the integrated circuits' operation. Before circuits are deployed, particularly in radiation-heavy critical applications such as military and space systems, they must be tested under radiation to avoid failures during operation. The standard approach to testing electronic devices is to generate worst-case test vectors (WCTVs) and test the circuits under radiation using these vectors. However, generating WCTVs has proven very challenging, so this approach is rarely used for TID effects. Design for testability (DFT) has been widely used in industry for digital circuit testing. DFT is usually combined with automatic test pattern generation software to generate test vectors against fault models of manufacturing defects in application-specific integrated circuits (ASICs). However, it has never been used to generate test vectors for leakage current testing in ASICs exposed to a TID radiation environment. The purpose of this thesis is to use DFT to identify WCTVs for leakage current failures in sequential circuits of ASIC devices exposed to TID.
A novel methodology was devised to identify these test vectors. The methodology is validated against, and compared with, previous non-DFT methods, and is shown to overcome their limitations.
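The flavor of searching for a worst-case leakage vector can be sketched with a toy state-dependent leakage model; the NAND leakage table and netlist below are hypothetical illustrations, not the thesis's DFT-based methodology:

```python
from itertools import product

# Hypothetical per-state leakage (nA) for a 2-input NAND gate:
# leakage varies with the input state because different transistors are off.
NAND_LEAKAGE = {(0, 0): 0.4, (0, 1): 1.1, (1, 0): 0.9, (1, 1): 2.3}

def nand(a, b):
    return 1 - (a & b)

def total_leakage(vector, netlist):
    """Sum the state-dependent leakage of every gate for one input vector."""
    values = dict(vector)           # net name -> logic value
    leakage = 0.0
    for out, (a, b) in netlist:     # gates listed in topological order
        ia, ib = values[a], values[b]
        leakage += NAND_LEAKAGE[(ia, ib)]
        values[out] = nand(ia, ib)
    return leakage

# Toy combinational netlist: (gate output, (input net, input net)).
NETLIST = [("n1", ("a", "b")), ("n2", ("b", "c")), ("y", ("n1", "n2"))]

def worst_case_vector(inputs, netlist):
    """Exhaustively find the vector maximizing total leakage (a WCTV)."""
    best = max(product([0, 1], repeat=len(inputs)),
               key=lambda bits: total_leakage(list(zip(inputs, bits)), netlist))
    return dict(zip(inputs, best))

wctv = worst_case_vector(["a", "b", "c"], NETLIST)
```

Exhaustive search only scales to toy circuits, which is precisely why the thesis turns to DFT-based generation instead.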
Algorithms for Power Aware Testing of Nanometer Digital ICs
At-speed testing of deep-submicron digital very large scale integrated (VLSI) circuits
has become mandatory to catch small delay defects. Now, due to continuous shrinking
of complementary metal oxide semiconductor (CMOS) transistor feature size, power
density grows geometrically with technology scaling. Additionally, power dissipation
inside a digital circuit during the testing phase (for test vectors under all fault models
(Potluri, 2015)) is several times higher than its power dissipation during the normal
functional phase of operation. Due to this, the currents that flow in the power grid during
the testing phase are much higher than what the power grid is designed for (the
functional phase of operation). As a result, during at-speed testing, the supply grid
experiences unacceptable supply IR-drop, ultimately leading to delay failures during
at-speed testing. Since these failures are specific to testing and do not occur during
functional phase of operation of the chip, they are usually referred to as false
failures, and they reduce the yield of the chip, which is undesirable.
In the nanometer regime, process parameter variations have become a major problem.
Due to the variation in signalling delays caused by these variations, it is important to
perform at-speed testing even for stuck faults, to reduce the test escapes (McCluskey
and Tseng, 2000; Vorisek et al., 2004). In this context, the problem of excessive peak
power dissipation causing false failures, that was addressed previously in the context of
at-speed transition fault testing (Saxena et al., 2003; Devanathan et al., 2007a,b,c), also
becomes prominent in the context of at-speed testing of stuck faults (Maxwell et al.,
1996; McCluskey and Tseng, 2000; Vorisek et al., 2004; Prabhu and Abraham, 2012;
Potluri, 2015; Potluri et al., 2015). It is well known that excessive supply IR-drop during
at-speed testing can be kept under control by minimizing switching activity during
testing (Saxena et al., 2003). There is a rich collection of techniques proposed in the past
for reduction of peak switching activity during at-speed testing of transition/delay faults
in both combinational and sequential circuits. As far as at-speed testing of stuck faults
is concerned, while some techniques were proposed in the past for combinational
circuits (Girard et al., 1998; Dabholkar et al., 1998), there are no such techniques
for sequential circuits. This thesis addresses this open problem. We
propose algorithms for minimization of peak switching activity during at-speed testing
of stuck faults in sequential digital circuits under the combinational state preservation
scan (CSP-scan) architecture (Potluri, 2015; Potluri et al., 2015). First, we show that,
under this CSP-scan architecture, when the test set is completely specified, the peak
switching activity during testing can be minimized by solving the Bottleneck Traveling
Salesman Problem (BTSP). This mapping of peak test switching activity minimization
problem to BTSP is novel, and proposed for the first time in the literature.
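The mapping can be illustrated on a toy instance: treat each test vector as a city and the Hamming distance between consecutive vectors as a proxy for switching activity, then seek the ordering minimizing the maximum adjacent distance. The brute-force solver below is only a didactic sketch (BTSP is NP-hard, and the vectors are made up):

```python
from itertools import permutations

def hamming(u, v):
    """Input transitions when vector v follows u (proxy for switching activity)."""
    return sum(a != b for a, b in zip(u, v))

def btsp_order(vectors):
    """Brute-force bottleneck ordering: minimize the MAXIMUM adjacent Hamming
    distance over all orderings (a path, not a cycle). Toy sizes only."""
    best_order, best_peak = None, float("inf")
    for perm in permutations(range(len(vectors))):
        peak = max(hamming(vectors[i], vectors[j]) for i, j in zip(perm, perm[1:]))
        if peak < best_peak:
            best_order, best_peak = perm, peak
    return list(best_order), best_peak

tests = ["0000", "1111", "0011", "1100", "0110"]
order, peak = btsp_order(tests)
```

Here the naive order would place "0000" next to "1111" (cost 4), while the bottleneck-optimal path keeps every adjacent pair at distance 2.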
Usually, as circuit size increases, the percentage of don’t cares in the test set increases.
As a result, test vector ordering for any arbitrary filling of don’t care bits
is insufficient for producing effective reduction in switching activity during testing of
large circuits. Since don’t cares dominate the test sets for larger circuits, don’t care
filling plays a crucial role in reducing switching activity during testing. Taking this
into consideration, we propose an algorithm, XStat, which is capable of performing test
vector ordering while preserving don’t care bits in the test vectors, following which, the
don’t cares are filled in an intelligent fashion for minimizing input switching activity,
which effectively minimizes switching activity inside the circuit (Girard et al., 1998).
Through empirical validation on benchmark circuits, we show that XStat minimizes
peak switching activity significantly, during testing.
Although XStat is a very powerful heuristic for minimizing peak input-switching-activity,
it will not guarantee optimality. To address this issue, we propose an algorithm
that uses Dynamic Programming to calculate the lower bound for a given sequence
of test vectors, and subsequently uses a greedy strategy for filling don’t cares in this
sequence to achieve this lower bound, thereby guaranteeing optimality. This algorithm,
which we refer to as DP-fill in this thesis, provides the globally optimal solution for
minimizing peak input-switching-activity and is the best known in the literature
for this purpose. A proof of the optimality of DP-fill in minimizing peak
input-switching-activity is also provided in this thesis.
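As a simplified illustration of the role of don't-care filling (the classic adjacent-fill heuristic, not the DP-fill algorithm itself), each X can copy the bit already chosen in the previous filled vector, so X positions contribute no input toggles:

```python
def adjacent_fill(cubes):
    """Fill 'X' bits for a FIXED vector sequence: each X copies the bit chosen
    in the previously filled vector (adjacent fill), so it adds no toggle.
    A didactic stand-in for don't-care filling, not DP-fill itself."""
    filled = []
    prev = None
    for cube in cubes:
        bits = []
        for i, c in enumerate(cube):
            if c != "X":
                bits.append(c)
            elif prev is not None:
                bits.append(prev[i])   # copy previous vector's bit -> no toggle
            else:
                bits.append("0")       # arbitrary choice for the first vector
        prev = "".join(bits)
        filled.append(prev)
    return filled

def peak_toggles(vectors):
    """Maximum Hamming distance between consecutive vectors."""
    return max(sum(a != b for a, b in zip(u, v))
               for u, v in zip(vectors, vectors[1:]))

seq = ["1XX0", "X1X0", "XX11"]
filled = adjacent_fill(seq)
```

Unlike this greedy pass, DP-fill first computes a lower bound on the achievable peak for the sequence and then fills to meet it, which is what yields the optimality guarantee.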
AI/ML Algorithms and Applications in VLSI Design and Technology
An evident challenge ahead for the integrated circuit (IC) industry in the
nanometer regime is the investigation and development of methods that can
reduce the design complexity ensuing from growing process variations and
curtail the turnaround time of chip manufacturing. Conventional methodologies
employed for such tasks are largely manual; thus, time-consuming and
resource-intensive. In contrast, the unique learning strategies of artificial
intelligence (AI) provide numerous exciting automated approaches for handling
complex and data-intensive tasks in very-large-scale integration (VLSI) design
and testing. Employing AI and machine learning (ML) algorithms in VLSI design
and manufacturing reduces the time and effort for understanding and processing
the data within and across different abstraction levels via automated learning
algorithms. It, in turn, improves the IC yield and reduces the manufacturing
turnaround time. This paper thoroughly reviews the AI/ML automated approaches
introduced in the past towards VLSI design and manufacturing. Moreover, we
discuss the scope of AI/ML applications in the future at various abstraction
levels to revolutionize the field of VLSI design, aiming for high-speed, highly
intelligent, and efficient implementations.
Fast jitter tolerance testing for high-speed serial links in post-silicon validation
Post-silicon electrical validation of high-speed input/output (HSIO) links is a critical process in the product qualification schedules of high-performance computer platforms under current aggressive time-to-market (TTM) commitments. Improvements in signaling methods, circuits, and process technologies have allowed HSIO data rates to scale well beyond 10 Gb/s. Noise and electromagnetic effects can create multiple signal integrity problems, which are aggravated by ever-faster bus technologies. The goal of post-silicon validation for HSIO links is to ensure design robustness of both receiver (Rx) and transmitter (Tx) circuitry in real system environments. One of the most common ways to evaluate the performance of an HSIO link is to characterize the Rx jitter tolerance (JTOL) by measuring the bit error rate (BER) of the link under worst-case stress conditions. However, JTOL testing is extremely time-consuming when executed at the specification BER, considering manufacturing process, voltage, and temperature (PVT) test coverage. To significantly accelerate this process, we propose a novel approach to JTOL testing based on an efficient direct-search optimization methodology. Our approach exploits the fast execution of a modified golden-section search at a high BER, while overcoming the lack of correlation between different BERs by performing a downward linear search at the actual target BER until no errors are found. The proposed methodology is validated on a realistic industrial server post-silicon validation platform for three different HSIO links: SATA, USB3, and PCIe3.
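The two-phase idea can be sketched abstractly; the pass/fail oracle below is a hypothetical stand-in for actual tester measurements, and the thresholds and step sizes are invented:

```python
def jtol_search(passes_at, lo=0.0, hi=1.0, step=0.02, tol=1e-3):
    """Two-phase JTOL boundary search (simplified sketch).

    passes_at(amplitude, relaxed) -> True if the link runs error-free at the
    given jitter amplitude; relaxed=True means the fast, high-BER screen.
    Phase 1: bisection-style search of the failure boundary at the relaxed BER.
    Phase 2: step the amplitude DOWN at the actual target BER until the link
    passes, compensating for the lack of correlation between the two BERs.
    """
    # Phase 1: coarse boundary at the relaxed (high) BER.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if passes_at(mid, relaxed=True):
            lo = mid
        else:
            hi = mid
    amp = lo
    # Phase 2: downward linear search at the target BER.
    while amp > 0 and not passes_at(amp, relaxed=False):
        amp -= step
    return max(amp, 0.0)

# Hypothetical device: tolerates 0.62 UI at high BER, 0.55 UI at target BER.
mock = lambda a, relaxed: a <= (0.62 if relaxed else 0.55)
jtol = jtol_search(mock)
```

The fast screen does almost all the narrowing at a BER cheap to measure; only the short downward walk runs at the slow specification BER.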
Fault Tolerant Electronic System Design
Due to technology scaling, which brings reduced transistor size, higher density, lower voltage, and more aggressive clock frequencies, VLSI devices are becoming more susceptible to soft errors. Especially for devices used in safety- and mission-critical applications, dependability and reliability are becoming increasingly important constraints during system development. Other phenomena (e.g., aging and wear-out effects) also have negative impacts on the reliability of modern circuits. Recent research shows that even at sea level, radiation particles can still induce soft errors in electronic systems.
On the one hand, processor-based systems are commonly used in a wide variety of applications, including safety-critical and high-availability missions, e.g., in the automotive, biomedical, and aerospace domains. In these fields, an error may produce catastrophic consequences. Thus, dependability is a primary target that must be achieved while taking into account tight constraints in terms of cost, performance, power, and time to market. With standards and regulations (e.g., ISO-26262, DO-254, IEC-61508) clearly specifying the targets to be achieved and the methods to prove their achievement, techniques working at the system level are particularly attractive.
On the other hand, Field Programmable Gate Array (FPGA) devices are becoming more and more attractive, also in safety- and mission-critical applications, due to the high performance, low power consumption, and reconfiguration flexibility they provide. Two types of FPGAs are commonly used, distinguished by their configuration memory cell technology: SRAM-based and Flash-based. For SRAM-based FPGAs, the SRAM cells of the configuration memory are highly susceptible to radiation-induced effects, which can lead to system failure; for Flash-based FPGAs, even though their non-volatile configuration memory cells are almost immune to Single Event Upsets induced by energetic particles, the floating-gate switches and the logic cells in the configuration tiles can still suffer from Single Event Effects when hit by a highly charged particle. Analysis and mitigation techniques for Single Event Effects on FPGAs are therefore becoming increasingly important in the design flow, especially when reliability is one of the main requirements.
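As background on the mitigation techniques mentioned above, one classic system-level scheme (a generic illustration, not specific to this thesis) is triple modular redundancy (TMR), where a bitwise 2-of-3 majority vote masks an upset in any single replica:

```python
def tmr_vote(a, b, c):
    """Bitwise 2-of-3 majority vote over three redundant module outputs."""
    return (a & b) | (b & c) | (a & c)

# A single event upset (one bit-flip) in one replica is masked by the voter:
golden = 0b1011
upset = golden ^ 0b0100      # SEU flips one bit in one copy
assert tmr_vote(golden, golden, upset) == golden
```

On FPGAs this voting is typically applied to replicated logic together with configuration memory scrubbing, since the voter itself cannot repair an upset configuration bit.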
Optimal Don’t Care Filling for Minimizing Peak Toggles During At-Speed Stuck-At Testing
Due to the increase in manufacturing/environmental uncertainties in the nanometer regime, testing digital chips under different operating conditions has become mandatory. Traditionally, stuck-at tests were applied at slow speed to detect structural defects, and transition fault tests were applied at speed to detect delay defects. Recently, it was shown that certain cell-internal defects can only be detected using at-speed stuck-at testing. Stuck-at test patterns are power hungry, thereby causing excessive voltage droop on the power grid, delaying the test response, and finally leading to false delay failures on the tester. This motivates the need for peak power minimization during at-speed stuck-at testing. In this article, we use input toggle minimization as a means to minimize a circuit's power dissipation during at-speed stuck-at testing under the Combinational State Preservation scan (CSP-scan) Design-for-Testability (DFT) scheme. For circuits whose test sets are dominated by don't cares, this article maps the problem of optimal X-filling for peak input toggle minimization to a variant of the interval coloring problem and proposes a Dynamic Programming (DP) algorithm (DP-fill) for the same, along with a theoretical proof of its optimality. For circuits whose test sets are not dominated by don't cares, we propose a max-scatter Hamiltonian path algorithm, which ensures that the don't cares are evenly distributed in the final ordering of test cubes, thereby leading to better input toggle savings than DP-fill. The proposed algorithms, when run on ITC99 benchmarks, produced peak power savings of up to 48% over the best-known algorithms in the literature. We have also pruned the solutions thus obtained using Greedy and Simulated Annealing strategies with an iterative 1-bit neighborhood, validating our idea of optimal input toggle minimization as an effective technique for minimizing peak power dissipation during at-speed stuck-at testing.
Coherent Receiver Arrays for Astronomy and Remote Sensing
Monolithic Millimeter-wave Integrated Circuits (MMICs) provide a level of integration that makes possible
the construction of large focal plane arrays of radio-frequency detectors—effectively the first “Radio
Cameras”—and these will revolutionize radio-frequency observations with single dishes, interferometers,
spectrometers, and spacecraft over the next two decades. The key technological advances have been
made at the Jet Propulsion Laboratory (JPL) in collaboration with the Northrop Grumman Corporation
(NGC). Although dramatic progress has been made in the last decade in several important areas, including
(i) packaging that enables large coherent detector arrays, (ii) extending the performance of amplifiers
to much higher frequencies, and (iii) reducing room-temperature noise at high frequencies, funding to
develop MMIC performance at cryogenic temperatures and at frequencies below 150 GHz has dropped nearly
to zero over the last five years. This has severely hampered the advance of the field. Moreover, because
of the high visibility of < 150 GHz cryogenic detectors in astrophysics and cosmology, lack of progress in
this area has probably had a disproportionate impact on perceptions of the potential of coherent detectors
in general.
One of the prime objectives of the Keck Institute for Space Studies (KISS) is to select crucial areas of
technological development in their embryonic stages, when relatively modest funding can have a highly
significant impact by catalyzing collaborations between key institutions world-wide, supporting in-depth
studies of the current state and potential of emerging technologies, and prototyping development of key
components—all potentially leading to strong agency follow-on funding.
The KISS large program “Coherent Instrumentation for Cosmic Microwave Background Observations”
was initiated in order to investigate the scientific potential and technical feasibility of these “Radio
Cameras.” This opens up the possibility of bringing support to this embryonic area of detector development
at a critical phase during which KISS can catalyze and launch a coherent, coordinated, worldwide
effort on the development of MMIC Arrays. A number of key questions, regarding (i) the importance and
breadth of the scientific drivers, (ii) realistic limits on sensitivity, (iii) the potential of miniaturization into
receiver “modules,” and (iv) digital signal processing, needed to be studied carefully before embarking on
a major MMIC Array development effort led by Caltech/JPL/NGC and supported by KISS, in the hope
of attracting adequate subsequent government funding. For this purpose a large study was undertaken
under the sponsorship and aegis of KISS. The study began with a workshop in Pasadena on “MMIC
Array Receivers and Spectrographs” (July 21–25, 2008), immediately after an international conference,
“CMB Component Separation and the Physics of Foregrounds” (July 14–18, 2008), that was organized in
conjunction with the MMIC workshop. There was then an eight-month study period, culminating in a
final “MMIC 2 Workshop” (March 23–27, 2009). These workshops were very well attended, and brought
together the major international groups and scientists in the field of coherent radio-frequency detector
arrays. A notable aspect of the workshops was that they were well attended by young scientists—there
are many graduate students and post-doctoral fellows coming into this area. The two workshops focused
both on detailed discussions of key areas of interest and on the writing of this report. They were
conducted in a spirit of full and impartial scrutiny of the pros and cons of MMICs, in order to make an
objective assessment of their potential. It serves no useful purpose to pursue lines of technology development
based on unrealistic and over-optimistic projections. This is crucially important for KISS, Caltech,
and JPL, which can only have real impact if they deliver on the promise of the technologies they develop.
A broad range of opinions was evident at the start of the first workshop, but in the end a strong consensus
was achieved on the most important questions that had emerged. This report reflects the workshop
deliberations and that consensus.
The key scientific drivers for the development of the MMIC technology are: (i) large angular-scale B-mode
polarization observations of the cosmic microwave background—here MMICs are one of two key
technologies under development at JPL, both of which are primary detectors on the recently-launched
Planck mission; (ii) large-field spectroscopic surveys of the Galaxy and nearby galaxies at high spectral
resolution, and of galaxy clusters at low resolution; (iii) wide-field imaging via deployment as focal plane
arrays on interferometers; (iv) remote sensing of the atmosphere and Earth; and (v) wide-field imaging in
planetary missions. These science drivers are discussed in the report.
The most important single outcome of the workshops, and a sine qua non of this whole program,
is that consensus was reached that it should be possible to reduce the noise of individual HEMTs or
MMICs operating at cryogenic temperatures to less than three times the quantum limit at frequencies up
to 150 GHz, by working closely with a foundry (in this case NGC) and providing rapid feedback on the
performance of the devices they are fabricating, thus enabling tests of the effects of small changes in the
design of these transistors. This kind of partnership has been very successful in the past, but can now be
focused more intensively on cryogenic performance by carrying out tests of MMIC wafers, including tests
on a cryogenic probe station. It was felt that a properly outfitted university laboratory dedicated to this
testing and optimization would be an important element in this program, which would include MMIC
designs, wafer runs, and a wide variety of tests of MMIC performance at cryogenic temperatures.
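For reference, the "three times the quantum limit" target can be expressed in kelvin using the standard relation T_q = hf/k_B (a back-of-envelope check, not a figure taken from the report):

```python
# Exact SI-defined values of the constants.
h = 6.62607015e-34    # Planck constant, J*s
kB = 1.380649e-23     # Boltzmann constant, J/K

def quantum_limit_K(freq_hz):
    """Quantum-limited noise temperature T_q = h*f / k_B."""
    return h * freq_hz / kB

tq = quantum_limit_K(150e9)   # ~7.2 K at 150 GHz
target = 3 * tq               # ~21.6 K: the 3x quantum-limit goal
```

So the consensus goal amounts to amplifier noise temperatures of roughly 20 K at 150 GHz, with proportionally lower targets at lower frequencies.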
This Study identified eight primary areas of technology development, including the one singled out
above, which must be actively pursued in order to exploit the full potential of MMIC Arrays in a timely
fashion:
1. Reduce the noise levels of individual transistors and MMICs to three times the quantum limit or
lower at cryogenic temperatures at frequencies up to 150 GHz.
2. Integrate high-performing MMICs into the building blocks of large arrays without loss of performance.
Currently factors of two in both noise and bandwidth are lost at this step.
3. Develop high performance, low mass, inexpensive feed arrays.
4. Develop robust interconnects and wiring that allow easy fabrication and integration of large arrays.
5. Develop mass production techniques suitable for arrays of differing sizes.
6. Reduce mass and power. (Requirements will differ widely with application. In the realm of planetary
instruments, this is often the most important single requirement.)
7. Develop planar orthomode transducers with low crosstalk and broad bandwidth.
8. Develop high power and high efficiency MMIC amplifiers for LO chains, etc.
Another important outcome of the two workshops was that a number of new collaborations were
forged between leading groups worldwide with the objective of focusing on the development of MMIC
arrays.
An advanced Framework for efficient IC optimization based on analytical models engine
Based on the challenges arising as a result of technology scaling, this thesis develops and evaluates a complete framework for analyzing sensitivity to the propagation of single event transients (SETs) in microelectronic circuits. The framework comprises a number of processing tools capable of handling highly complex circuits in an efficient way. Several SET propagation metrics have been proposed, considering the impact of logic, electrical, and combined logic-electrical masking. These metrics provide a valuable means of grading both the in-circuit regions most susceptible to propagating SETs toward the circuit outputs and the outputs most susceptible to receiving them. A highly efficient and customizable true-path-finding algorithm, together with a specific logic system and several circuit simplification techniques, has been constructed, and its efficacy demonstrated on large benchmark circuits. It has been shown that the delay of a given path depends on the sensitization vectors applied to the gates along it; in some cases, this delay variation is comparable to that caused by process parameter variations.
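The notion of logic masking used in such metrics can be illustrated with a tiny fault-injection simulation; the netlist and gate set below are hypothetical, not part of the thesis framework:

```python
def simulate(netlist, inputs, flip=None):
    """Evaluate a gate-level netlist; optionally invert one internal net
    to model a single event transient (SET) at that node."""
    values = dict(inputs)
    for net, (op, ins) in netlist:
        a, b = (values[i] for i in ins)
        v = {"and": a & b, "or": a | b, "xor": a ^ b}[op]
        if net == flip:
            v ^= 1                      # inject the transient
        values[net] = v
    return values

# Toy circuit: y = (a AND b) OR (c AND d)
NETLIST = [("n1", ("and", ("a", "b"))),
           ("n2", ("and", ("c", "d"))),
           ("y",  ("or",  ("n1", "n2")))]

def set_propagates(vector, node):
    """True if an SET at `node` changes output y (i.e., no logic masking)."""
    return simulate(NETLIST, vector)["y"] != simulate(NETLIST, vector, flip=node)["y"]

# With c=d=1, n2=1 forces y=1, so a transient on n1 is logically masked;
# with c=d=0, the same transient reaches the output.
masked_vec = {"a": 1, "b": 1, "c": 1, "d": 1}
exposed_vec = {"a": 1, "b": 1, "c": 0, "d": 0}
```

Counting, over many vectors, how often a transient at each node escapes masking gives exactly the kind of per-region propagation metric the abstract describes (here for logic masking only; electrical masking needs timing and amplitude models).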
- …