
    Fast and accurate SER estimation for large combinational blocks in early stages of the design

    Soft Error Rate (SER) estimation is an important challenge for integrated circuits because of the increased vulnerability brought by technology scaling. This paper presents a methodology to estimate, in the early stages of the design, the susceptibility of combinational circuits to particle strikes. At the core of the framework lies MASkIt, a novel approach that combines signal probabilities with technology characterization to swiftly compute the logical, electrical, and timing masking effects of the circuit under study, taking into account all input combinations and pulse widths at once. Signal probabilities are estimated by applying a new hybrid approach that integrates heuristics with selective simulation of reconvergent subnetworks. The experimental results validate the proposed technique, showing a speedup of two orders of magnitude over traditional fault-injection estimation, with an average estimation error of 5 percent. Finally, we analyze the vulnerability of the decoder, scheduler, ALU, and FPU of an out-of-order, superscalar processor design. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness and FEDER funds under grant TIN2013-44375-R, by the Generalitat de Catalunya under grant FI-DGR 2016, and by the FP7 program of the EU under contract FP7-611404 (CLERECO). Peer reviewed. Postprint (author's final draft).
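The signal-probability idea at the heart of such frameworks can be illustrated with a minimal sketch. Assuming independent inputs (reconvergent fan-out is exactly what the paper's hybrid simulation step addresses), gate output probabilities and a simple logical-masking estimate look like this; the netlist, gates, and numbers are illustrative, not taken from MASkIt itself:

```python
# Hedged sketch: propagating signal probabilities through a tiny netlist
# under an input-independence assumption. All names and values are
# illustrative; this is not the MASkIt algorithm.

def and_p(a, b):   # P(output = 1) of an AND gate
    return a * b

def or_p(a, b):    # P(output = 1) of an OR gate
    return a + b - a * b

def not_p(a):      # P(output = 1) of an inverter
    return 1.0 - a

# P(c = 1) for c = (a AND b) OR (NOT b), with P(a=1) = P(b=1) = 0.5
pa, pb = 0.5, 0.5
pc = or_p(and_p(pa, pb), not_p(pb))        # 0.625

# Logical masking of a glitch on 'a': the AND gate passes the glitch
# only when its side input b is 1, so the propagation probability is pb.
p_propagate = pb
```

The same probability bookkeeping, repeated gate by gate, is what makes such approaches fast compared with exhaustive fault injection over all input vectors.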

    Analysis and Design of Resilient VLSI Circuits

    The reliable operation of Integrated Circuits (ICs) has become increasingly difficult to achieve in the deep sub-micron (DSM) era. With continuously decreasing device feature sizes, combined with lower supply voltages and higher operating frequencies, the noise immunity of VLSI circuits is decreasing alarmingly. Thus, VLSI circuits are becoming more vulnerable to noise effects such as crosstalk, power supply variations, and radiation-induced soft errors. Among these noise sources, soft errors (errors caused by radiation particle strikes) have become an increasingly troublesome issue for memory arrays as well as combinational logic circuits. Also, in the DSM era, process variations are increasing at an alarming rate, making it more difficult to design reliable VLSI circuits. Hence, it is important to efficiently design robust VLSI circuits that are resilient to radiation particle strikes and process variations. This dissertation presents several analysis and design techniques with the goal of realizing VLSI circuits which are tolerant to radiation particle strikes and process variations. The dissertation consists of two parts. The first part proposes four analysis and two design approaches to address radiation particle strikes. The analysis techniques for radiation particle strikes include: an approach to analytically determine the pulse width and the pulse shape of a radiation-induced voltage glitch in combinational circuits, a technique to model the dynamic stability of SRAMs, and a 3D device-level analysis of the radiation tolerance of voltage-scaled circuits. Experimental results demonstrate that the proposed techniques for analyzing radiation particle strikes in combinational circuits and SRAMs are fast and accurate compared to SPICE. Therefore, these analysis approaches can be easily integrated into a VLSI design flow to analyze the radiation tolerance of such circuits, and harden them early in the design flow.
From the 3D device-level analysis of the radiation tolerance of voltage-scaled circuits, several non-intuitive observations are made, and correspondingly a set of guidelines is proposed that is important to consider when realizing radiation-hardened circuits. Two circuit-level hardening approaches are also presented to harden combinational circuits against a radiation particle strike. These hardening approaches significantly improve the tolerance of combinational circuits against low and very high energy radiation particle strikes, respectively, with modest area and delay overheads. The second part of this dissertation addresses process variations. A technique is developed to perform sensitizable statistical timing analysis of a circuit, and thereby improve the accuracy of timing analysis under process variations. Experimental results demonstrate that this technique significantly reduces the pessimism due to two sources of inaccuracy which plague current statistical static timing analysis (SSTA) tools. Two design approaches are also proposed to improve the process variation tolerance of combinational circuits and of voltage level shifters (which are used in circuits with multiple interacting power supply domains), respectively. The variation-tolerant design approach for combinational circuits significantly improves the resilience of these circuits to random process variations, with a reduction in the worst-case delay and a low area penalty. The proposed voltage level shifter is faster, requires lower dynamic power and area, has lower leakage currents, and is more tolerant to process variations, compared to the best known previous approach. In summary, this dissertation presents several analysis and design techniques which significantly augment the existing work in the area of resilient VLSI circuit design.
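The statistical timing problem the second part addresses can be illustrated with a Monte Carlo baseline, which is what SSTA methods approximate analytically. This sketch is not the dissertation's technique; the path, nominal delays, and variation model are invented for illustration:

```python
import random

# Hedged sketch: Monte Carlo timing of a three-gate path under random
# process variation. Nominal delays (ps) and the 10% relative sigma are
# illustrative, not values from the thesis.
random.seed(0)
nominal = [10.0, 12.0, 8.0]     # per-gate nominal delays, ps
sigma = 0.1                      # relative standard deviation

samples = []
for _ in range(10000):
    # each gate's delay varies independently (a simplifying assumption;
    # real SSTA also models spatial and path correlations)
    samples.append(sum(d * random.gauss(1.0, sigma) for d in nominal))

samples.sort()
p99 = samples[int(0.99 * len(samples))]   # 99th-percentile path delay
```

SSTA tools replace this sampling loop with closed-form propagation of delay distributions; the pessimism reduction the abstract mentions comes from discarding variation scenarios that no sensitizable path can actually exercise.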

    MOCAST 2021

    The 10th International Conference on Modern Circuit and System Technologies on Electronics and Communications (MOCAST 2021) will take place in Thessaloniki, Greece, from July 5th to July 7th, 2021. The MOCAST technical program includes all aspects of circuit and system technologies, from modeling to design, verification, implementation, and application. This Special Issue presents extended versions of top-ranking papers from the conference. The topics of MOCAST include: analog/RF and mixed-signal circuits; digital circuits and systems design; nonlinear circuits and systems; device and circuit modeling; high-performance embedded systems; systems and applications; sensors and systems; machine learning and AI applications; communication and network systems; power management; imagers, MEMS, medical, and displays; radiation front ends (nuclear and space applications); and education in circuits, systems, and communications.

    Automated detection and analysis of fluorescence changes evoked by molecular signalling

    Fluorescent dyes and genetically encoded fluorescence indicators (GEFI) are common tools for visualizing concentration changes of specific ions and messenger molecules during intra- as well as intercellular communication. While fluorescent dyes have to be loaded directly into target cells and function only transiently, the expression of GEFIs can be controlled in a cell- and time-specific fashion, even allowing long-term analysis in living organisms. Dye- and GEFI-based fluorescence fluctuations, recorded using advanced imaging technologies, are the foundation for the analysis of physiological molecular signaling. Analyzing the plethora of complex fluorescence signals is a laborious and time-consuming task. An automated analysis of fluorescent signals circumvents user bias and time constraints. However, it requires overcoming several challenges, including correct estimation of fluorescence fluctuations at basal concentrations of messenger molecules, detection and extraction of the events themselves, proper segmentation of neighboring events, and tracking of propagating events. Moreover, event detection algorithms need to be sensitive enough to accurately capture localized, low-amplitude events with a limited spatial extent. This thesis presents three novel algorithms, PBasE, CoRoDe, and KalEve, for the automated analysis of fluorescence events, developed to overcome the aforementioned challenges. The algorithms are integrated into a graphical application called MSparkles, specifically designed for the analysis of fluorescence signals and developed in MATLAB. The capabilities of the algorithms are demonstrated by analyzing astroglial Ca2+ events, recorded in anesthetized and awake mice, visualized using the genetically encoded Ca2+ indicators (GECIs) GCaMP3 and GCaMP5. The results were compared to those obtained by other software packages.
In addition, the analysis of neuronal Na+ events recorded in acute brain slices using SBFI-AM serves to indicate the putatively broad application range of the presented algorithms. Finally, due to increasing evidence of the pivotal role of astrocytes in neurodegenerative diseases such as epilepsy, a metric to assess the synchronous occurrence of fluorescence events is introduced. In a proof-of-principle analysis, this metric is used to correlate astroglial Ca2+ events with EEG measurements.
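The generic detection pipeline that tools of this kind build on can be sketched as follows. This is not PBasE, CoRoDe, or KalEve; it is a minimal baseline-and-threshold illustration on synthetic data, with the percentile and threshold values chosen arbitrarily:

```python
# Hedged sketch of a generic fluorescence event-detection pipeline:
# estimate a baseline F0 from a low percentile of the trace, convert to
# dF/F0, and flag events by thresholding. Trace values are synthetic.
trace = [1.0, 1.1, 0.9, 1.0, 2.5, 2.8, 1.2, 1.0, 1.05, 0.95]

f0 = sorted(trace)[len(trace) // 5]        # 20th percentile as baseline
dff = [(f - f0) / f0 for f in trace]       # relative change, dF/F0

threshold = 0.5                            # event if dF/F0 exceeds 50%
events = [i for i, v in enumerate(dff) if v > threshold]
```

Real recordings add the complications the abstract lists: drifting baselines, overlapping neighboring events, and propagating events that must be segmented and tracked over space and time rather than per-pixel thresholded.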

    EUV-induced Plasma, Electrostatics and Particle Contamination Control


    Phase 1 of the automated array assembly task of the low cost silicon solar array project

    The results of a study of process variables and solar cell variables are presented. Interactions between variables, and their effects upon the control ranges of the variables, are identified. The results of a cost analysis for manufacturing solar cells are discussed; the cost analysis includes a sensitivity analysis of a number of cost factors.
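A one-at-a-time cost sensitivity analysis of the kind the abstract mentions can be sketched as follows; the factor names and values are hypothetical, not taken from the report:

```python
# Hedged sketch: one-at-a-time sensitivity of total manufacturing cost
# to individual cost factors. Factor names and shares are illustrative.
base = {"silicon": 0.40, "labor": 0.25, "materials": 0.20, "overhead": 0.15}

def total_cost(factors):
    return sum(factors.values())

sensitivity = {}
for name in base:
    perturbed = dict(base)
    perturbed[name] *= 1.10                        # +10% in one factor
    delta = total_cost(perturbed) - total_cost(base)
    sensitivity[name] = delta / total_cost(base)   # relative cost change
```

A factor's sensitivity here is simply its cost share scaled by the perturbation, which is why identifying the dominant cost factors matters for low-cost manufacturing targets.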

    SYSTEM-LEVEL APPROACHES FOR IMPROVING PERFORMANCE OF CANTILEVER-BASED CHEMICAL SENSORS

    This work presents the development of different technologies and techniques for enhancing the performance of cantilever-based MEMS chemical sensors. The developed methods address specifically the sensor metrics of sensitivity, selectivity, and stability. Different techniques for improving the quality and uniformity of deposited sorbent polymer films onto MEMS-based micro-cantilever chemical sensors are presented. A novel integrated recess structure for constraining the sorbent polymer layer to a fixed volume with uniform thickness was developed. The recess structure is used in conjunction with localized polymer deposition techniques, such as inkjet printing and spray coating using shadow masking, to deposit controlled, uniform sorbent layers onto specific regions of chemical sensors, enhancing device performance. The integrated recess structure enhances the stability of a cantilever-based sensor by constraining the deposited polymer layers away from high-strain regions of the device, reducing Q-factor degradation. Additionally, the integrated recess structure enhances the sensitivity of the sensor by replacing chemically-inert silicon mass with ‘active’ sorbent polymer mass. Finally, implementation of localized polymer deposition enables the use of sensor arrays, where each sensor in the array is coated with a different sorbent, leading to improved selectivity. In addition, transient signal generation and analysis for mass-sensitive chemical sensing of volatile organic compounds (VOCs) in the gas phase is investigated. It is demonstrated that transient signal analysis can be employed to enhance the selectivity of individual sensors leading to improved analyte discrimination. As an example, elements of a simple alcohol series and elements of a simple aromatic ring series are distinguished with a single sensor (i.e. without an array) based solely on sorption transients. 
Transient signals are generated by the rapid switching of mechanical valves, and also by thermal methods. Thermally generated transients utilize a novel sensor design which incorporates integrated heating units onto the cantilever and enables transient signal generation without the need for an external fluidic system. It is expected that the thermal generation of transient signals will allow for future operation in a pulsed-mode configuration, leading to reduced drift and enhanced stability without the need for a reference device. Finally, a MEMS-based micro thermal pre-concentration (µTPC) system for improving sensor sensitivity and selectivity is presented. The µTPC enhances sensor sensitivity by amplifying low-level chemical concentrations, and is designed to enable coarse pre-filtering (e.g. for injection into a GC system) by means of arrayed and individually addressable µTPC devices. The system implements a suspended membrane geometry, enhancing thermal isolation and enabling high temperature elevations even for low levels of heating power. The membranes have a large surface area-to-volume ratio but low thermal mass (and therefore a low thermal time constant), with arrays of 3-D high aspect-ratio features formed via DRIE of silicon. Integrated onto the membrane are sets of diffused resistors designed for performing thermal desorption (via joule heating) and for measuring the temperature elevation of the device via the temperature-dependent resistivity of doped silicon. The novel system features integrated real-time chemical sensing technology, which allows for reduced sampling time and a reduced total system dead volume of approximately 10 µL. The system is capable of operating in both a traditional flow-through configuration and a diffusion-based quasi-static configuration, which requires no external fluidic flow system, thereby enabling novel measurement methods and applications.
The ability to operate without a forced-flow fluidic system is a distinct advantage and can considerably enhance the portability of a sensing system, facilitating deployment on mobile airborne platforms as well as long-term monitoring stations in remote locations. Initial tests of the system have demonstrated a pre-concentration factor of 50% for toluene.
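The transient-based discrimination idea described above can be illustrated with a first-order sorption model: analytes with different sorption time constants produce distinguishable response transients on a single sensor. The time constants and analyte names below are hypothetical, chosen only to show the mechanism:

```python
import math

# Hedged sketch: first-order approach to sorption equilibrium after a
# concentration step. Two analytes with different (hypothetical) time
# constants give different transients on the same sensor.
def sorption_transient(t, tau, amplitude=1.0):
    # fractional approach to the equilibrium response at time t
    return amplitude * (1.0 - math.exp(-t / tau))

tau_fast, tau_slow = 0.5, 2.0      # seconds; illustrative values
t = 1.0
r_fast = sorption_transient(t, tau_fast)   # closer to equilibrium
r_slow = sorption_transient(t, tau_slow)   # still rising
```

Because the two responses separate well before either reaches steady state, the transient shape alone carries analyte identity, which is what allows discrimination with a single coated cantilever instead of an array.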

    Multilevel Modeling, Formal Analysis, and Characterization of Single Event Transients Propagation in Digital Systems

    The exponential growth in the number of transistors per chip has brought tremendous progress in the performance and functionality of semiconductor devices, with reduced physical dimensions and higher speeds. Electronic devices used in a wide range of applications, such as personal entertainment systems, the automotive industry, medical electronic systems, and the financial sector, have changed the way we live. However, recent studies reveal that further downscaling of transistor size at nano-scale technology nodes leads to major challenges. Reliability (i.e., the ability to provide the intended functionality) is one of them: a system designed in a nano-scale node is expected to experience more failures in its lifetime than if it were designed in a larger technology node. Such failures can lead to serious consequences, ranging from financial losses to loss of human life.
Soft errors induced by radiation, which were initially considered a rather exotic failure mechanism causing anomalies in satellites, have become one of the most challenging issues affecting the reliability of modern microelectronic systems, including devices at terrestrial altitudes. For instance, in the medical industry, soft errors have been responsible for the failure and recall of many implantable cardiac pacemakers. Depending on the affected transistor in the design, a particle strike can manifest as a bit flip in a state element (i.e., a Single Event Upset (SEU)) or temporarily change the output of a combinational gate (i.e., a Single Event Transient (SET)). SEUs have been widely studied over the last three decades, as they were considered to be the main source of soft errors. However, recent experiments show that with further technology downscaling, the contribution of SETs to the overall soft error rate is remarkable, and in high-frequency systems it might exceed that of SEUs [1], [2]. In order to minimize the impact of soft errors, the effect of SETs needs to be modeled, predicted, and mitigated. However, despite considerable progress towards efficient methodologies for the functional verification of digital designs, advances in non-functional verification (e.g., soft error analysis) have been lagging. This is because the modeling and analysis of non-functional properties related to SETs is very challenging, owing to the random nature of these faults and the difficulty of modeling the variation in their characteristics as they propagate. Moreover, many details about the design structure and the SET characteristics may not be available at high abstraction levels. Thus, in high-level analysis, many assumptions about SET behavior are usually made, which impacts the accuracy of the generated results.
Consequently, the low-cost detection of soft errors due to SETs is very challenging and requires more sophisticated techniques.
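At a high level of abstraction, the masking mechanisms that determine whether a SET becomes a soft error are often combined, under an independence assumption, into a rough per-strike error probability. The numbers below are purely illustrative, not derived from the thesis:

```python
# Hedged sketch: combining logical, electrical, and timing masking into
# a per-strike soft-error probability. Values are illustrative, and the
# independence assumption is itself one of the simplifications that
# high-level SET analyses must make.
p_logical    = 0.5   # side-input values let the pulse propagate
p_electrical = 0.7   # pulse amplitude/width survives attenuation
p_timing     = 0.2   # pulse overlaps the capturing clock edge

p_soft_error = p_logical * p_electrical * p_timing
```

Multi-level modeling refines each of these factors with circuit- and technology-specific detail, since the masking probabilities in reality depend on the strike location, the pulse shape, and how both evolve along each propagation path.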