
    Domino Logic Testing Systems and Methods

    A domino logic test circuit includes a dynamic node, a precharge device for charging the dynamic node, and an output inverter for inverting the output of the dynamic node. A logic network is coupled to the dynamic node and discharges it in accordance with the logic. A footer device enables and disables the logic network. A keeper device is coupled to the dynamic node and retains its charge state while the logic network evaluates. A test mode selection device is coupled to the dynamic node and is configured to enable a latch in test mode. A phase selection device is configured to receive at least a wait signal and to enable selection of at least a precharge phase, for charging the dynamic node to a voltage level; a write phase, for generating a value into the latch based on the logic and the voltage level of the dynamic node; and a wait phase, for enabling the value to be read. The selection is based, at least in part, on the state of the wait signal.
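    A minimal behavioral sketch (Python) of the three phases described above; the class, method, and signal names are illustrative assumptions, not the actual circuit implementation:

```python
# Behavioral sketch of a domino logic test stage (illustrative names).

class DominoTestCell:
    def __init__(self, pulldown):
        self.pulldown = pulldown   # logic network: inputs -> True if it discharges
        self.dynamic_node = 1      # dynamic node, precharged high
        self.latch = None          # latch enabled in test mode

    def precharge_phase(self):
        # Precharge device charges the dynamic node; the footer keeps
        # the logic network disabled so no discharge path exists.
        self.dynamic_node = 1

    def write_phase(self, inputs, test_mode=True):
        # Footer enables the logic network, which may discharge the node;
        # otherwise the keeper retains the precharged state.
        if self.pulldown(inputs):
            self.dynamic_node = 0
        value = 1 - self.dynamic_node      # output inverter
        if test_mode:
            self.latch = value             # capture the value for later reading
        return value

    def wait_phase(self):
        # Wait phase (selected via the wait signal): the latched value
        # can now be read out.
        return self.latch

# Example: a 2-input AND evaluated through the domino stage.
cell = DominoTestCell(lambda x: x[0] and x[1])
cell.precharge_phase()
cell.write_phase((1, 1))
print(cell.wait_phase())   # 1: the node discharged, so the inverter output is high
```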

    Integrated circuit outlier identification by multiple parameter correlation

    Semiconductor manufacturers must ensure that chips conform to their specifications before they are shipped to customers. This is achieved by testing various parameters of a chip to determine whether it is defective or not. Separating defective chips from fault-free ones is relatively straightforward for functional or other Boolean tests that produce a go/no-go type of result. However, making this distinction is extremely challenging for parametric tests. Owing to the continuous distributions of parameters, any pass/fail threshold results in yield loss and/or test escapes. Continuous advances in process technology, increased process variations, and inaccurate fault models all make this even harder. The pass/fail thresholds for such tests are usually set using prior experience or by a combination of visual inspection and engineering judgment. Many chips have parameters that exceed certain thresholds yet pass the Boolean tests. Owing to the imperfect nature of tests, determining whether these chips (called "outliers") are indeed defective is nontrivial. To avoid wasted investment in packaging or further testing, it is important to screen out defective chips early in the test flow. Moreover, if the seemingly strange behavior of outlier chips can be explained with the help of certain process parameters or by correlating additional test data, such chips can be retained in the test flow until they are proven to be fatally flawed. In this research, we investigate several methods to distinguish true outliers (defective chips, or chips that lead to functional failure) from apparent outliers (seemingly defective, but fault-free chips). The outlier identification methods in this research primarily rely on wafer-level spatial correlation, but also use additional test parameters. These methods are evaluated and validated using industrial test data, and their potential to reduce burn-in is discussed.
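    A minimal sketch of the wafer-level spatial correlation idea: a die whose parametric value deviates strongly from its spatial neighbors is a likely true outlier, while a die that merely follows a smooth wafer-wide gradient is probably an apparent outlier. The neighbor-median residual heuristic and all names below are illustrative assumptions, not the specific methods of this research:

```python
import numpy as np

def spatial_outliers(wafer, k=1, n_sigma=4.0):
    """Flag dice whose parametric value deviates strongly from the median
    of their (2k+1)x(2k+1) spatial neighborhood on the wafer.

    wafer: 2D float array of one parametric test value per die (NaN = untested).
    Returns a boolean map of likely true outliers.
    """
    rows, cols = wafer.shape
    residual = np.full_like(wafer, np.nan)
    for r in range(rows):
        for c in range(cols):
            if np.isnan(wafer[r, c]):
                continue
            patch = wafer[max(0, r - k):r + k + 1, max(0, c - k):c + k + 1]
            neighbors = patch[~np.isnan(patch)]
            # Residual vs. the local neighborhood removes smooth,
            # wafer-wide process gradients.
            residual[r, c] = wafer[r, c] - np.median(neighbors)
    sigma = np.nanstd(residual)
    # A die far from its own neighborhood is a likely true outlier;
    # a die that tracks the local gradient is not.
    return np.abs(residual) > n_sigma * sigma
```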

    Cross-Layer Optimization for Power-Efficient and Robust Digital Circuits and Systems

    With the increasing demand for digital services, performance and power efficiency become vital requirements for digital circuits and systems. However, the CMOS technology scaling that enables them has been facing significant device uncertainties, such as process, voltage, and temperature variations. To ensure system reliability, worst-case corner assumptions are usually made at each design level. However, this over-pessimistic worst-case margin leads to unnecessary power waste and performance loss as high as 2.2x. Since optimizations are traditionally confined to each specific level, these safety margins can hardly be exploited properly. To tackle this challenge, this Ph.D. thesis advocates cross-layer optimization of digital signal processing circuits and systems, to achieve a global balance between power consumption and output quality. To conclude, the traditional over-pessimistic worst-case approach leads to huge power waste. In contrast, the adaptive voltage scaling approach saves power (25% for the CORDIC application) by providing a just-needed supply voltage. The power saving is maximized (46% for CORDIC) when a more aggressive voltage over-scaling scheme is applied. The sparse circuit errors produced by aggressive voltage over-scaling are mitigated by error-resilient designs at higher levels. For functions like FFT and CORDIC, smart error mitigation schemes were proposed to enhance reliability against soft errors and timing errors, respectively. Applications like massive MIMO systems are robust against lower-level errors, thanks to their intrinsically redundant antennas. This property makes it possible to employ digital hardware that trades quality for power savings.
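    A minimal sketch of an adaptive voltage scaling control loop of the kind described above: the supply voltage is lowered step by step while an error monitor stays quiet, and raised when errors appear; a non-zero error budget corresponds to voltage over-scaling, where the remaining errors are left to higher-level error-resilient designs. All names, hooks, and thresholds are illustrative assumptions:

```python
V_MIN, V_MAX, V_STEP = 0.6, 1.1, 0.01   # illustrative supply range (volts)

def avs_step(read_error_count, set_vdd, vdd, error_budget=0):
    """One control step of an adaptive voltage scaling loop.

    read_error_count / set_vdd are assumed platform hooks (error monitor
    and voltage regulator). error_budget = 0 gives classic AVS (a
    just-needed supply voltage); error_budget > 0 gives voltage
    over-scaling, relying on higher-level error resilience (e.g. in
    FFT/CORDIC datapaths) to absorb the sparse errors.
    """
    errors = read_error_count()
    if errors > error_budget:
        vdd = min(V_MAX, vdd + 2 * V_STEP)   # back off quickly on errors
    else:
        vdd = max(V_MIN, vdd - V_STEP)       # creep down toward the margin
    set_vdd(vdd)
    return vdd
```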

    The 1992 4th NASA SERC Symposium on VLSI Design

    Papers from the fourth annual NASA Symposium on VLSI Design, co-sponsored by the IEEE, are presented. Each year this symposium is organized by the NASA Space Engineering Research Center (SERC) at the University of Idaho and is held in conjunction with a quarterly meeting of the NASA Data System Technology Working Group (DSTWG). One task of the DSTWG is to develop new electronic technologies that will meet next-generation electronic data system needs. The symposium provides insights into developments in VLSI and digital systems which can be used to increase data system performance. The NASA SERC is proud to offer, at its fourth symposium on VLSI design, presentations by an outstanding set of individuals from national laboratories, the electronics industry, and universities. These speakers share insights into next-generation advances that will serve as a basis for future VLSI design.

    Timing error detection and correction techniques for increased reliability of integrated circuits in nanometer technologies

    As technology scales down, timing errors are a real concern in high-complexity, high-frequency integrated circuits. Process, voltage, and temperature variations lead to large spreads in delay at the system level, which undermine circuit reliability. Moreover, crosstalk, power supply disturbances, and resistive (IR) or inductive supply voltage drops affect circuit performance, increasing the overall impact of timing errors. In addition, aging mechanisms cause gradual speed degradation of designs over their service life. In this context, it is evident that timing error tolerance techniques are becoming necessary to provide robustness against timing violations and to meet system reliability requirements. This thesis presents three concurrent online timing error detection and correction techniques which enhance circuit reliability. To validate them, the three techniques were applied to the design of a 32-bit MIPS R2000 pipelined microprocessor. The experimental results show that the proposed techniques detect and correct the induced timing errors efficiently, with low power consumption and low silicon area overhead.
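    The abstract does not detail the three techniques, but a common building block for concurrent online timing error detection is a shadow-latch (Razor-style) comparison: the main flip-flop samples at the clock edge, a shadow element samples slightly later, and a mismatch flags a timing violation whose late-arriving value can be used for correction. A minimal behavioral sketch of that generic idea, with illustrative names:

```python
def shadow_latch_check(main_sample, shadow_sample):
    """Razor-style detection sketch: main_sample is the value captured by
    the main flip-flop at the clock edge; shadow_sample is the value seen
    by a shadow element a small delay later. A mismatch means the
    combinational logic had not settled by the clock edge, i.e. a
    timing error occurred."""
    error = main_sample != shadow_sample
    # On error, the shadow value is the late-arriving correct result;
    # a real pipeline would also stall or replay the following stage.
    corrected = shadow_sample if error else main_sample
    return corrected, error

# Example: the clock edge captured 0, but the logic settled to 1 shortly after.
print(shadow_latch_check(0, 1))   # (1, True): corrected value, error flagged
```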

    Adaptive Distributed Architectures for Future Semiconductor Technologies.

    Year after year, semiconductor manufacturing has been able to integrate more components into a single computer chip. These improvements have been possible through systematic shrinking of its basic computational element, the transistor. This trend has allowed computers to progressively become faster, more efficient, and less expensive. As this trend continues, experts foresee that current computer designs will face new challenges in utilizing the minuscule devices made available by future semiconductor technologies. Today's microprocessor designs are not fit to overcome these challenges, since they are constrained by their inability to handle component failures, by their lack of adaptability to a wide range of custom modules optimized for specific applications, and by their limited design modularity. The focus of this thesis is to develop original computer architectures that can not only survive these new challenges, but also leverage the vast number of transistors available to unlock better performance and efficiency. The work explores and evaluates new software and hardware techniques to enable the development of novel adaptive and modular computer designs. The thesis first explores an infrastructure to quantitatively assess the fallacies of current systems and their inadequacy to operate on unreliable silicon. In light of these findings, specific solutions are then proposed to strengthen digital system architectures, through both hardware and software techniques. The thesis culminates with the proposal of a radically new architecture design that can fully adapt dynamically to operate on the hardware resources available on chip, however limited or abundant those may be.
    PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/102405/1/apellegr_1.pd

    Optical fibre distributed access transmission systems (OFDATS)

    Predictable and runtime-adaptable on-chip network for mixed-criticality real-time systems

    The industry of safety-critical and dependable embedded systems calls for ever cheaper, high-performance platforms that allow flexibility and an efficient verification of safety and real-time requirements. To cope with the increasing complexity of interconnected functions and to reduce the cost and power consumption of the system, multicore systems are used to efficiently integrate different processing units on the same chip. Networks-on-chip (NoCs), as a modular interconnect, are a promising solution for such multiprocessor systems-on-chip (MPSoCs), due to their scalability and performance. For safety-critical systems, a major goal is the avoidance of hazards. To this end, safety-critical systems are qualified or even certified to prove correct functioning in all possible cases. A predictable behaviour of the NoC can help to ease the qualification process of the system. To achieve the required predictability, designers have two classes of solutions: quality-of-service mechanisms and (formal) analysis. For mixed-criticality systems, isolation and analysis approaches must be combined to achieve the desired predictability efficiently. Traditional NoC analysis and architecture concepts tackle only a subset of these challenges: they focus on either performance or predictability. Existing predictable NoCs are deemed too expensive and inflexible to host a variety of applications with opposing constraints, and state-of-the-art analyses neglect or simplify certain platform properties in order to verify the behaviour. Together, this leads to heavy over-provisioning of hardware resources as well as adverse impacts on system performance and flexibility. In this work we tackle these challenges and develop a predictable and runtime-adaptable NoC architecture that efficiently integrates mixed-critical applications with opposing constraints. Additionally, we present a modelling and analysis framework for NoCs that accounts for backpressure. This framework enables the evaluation of performance and reliability early at design time, so the designer can assess multiple design decisions using abstract models and formal approaches.
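    A minimal sketch of why backpressure matters for such an analysis: with limited downstream buffers, a router may forward a flit only when the next buffer has a free slot, so congestion propagates upstream and inflates end-to-end latency. The two-hop discrete-time model below (all names and parameters are illustrative assumptions, not the thesis's framework) makes this effect visible:

```python
from collections import deque

def simulate_two_hop(arrivals, buf_size=2, downstream_rate=2):
    """Tiny discrete-time backpressure model: router A may forward a flit
    to router B only if B's buffer has space; B drains one flit toward
    the sink every `downstream_rate` cycles. Returns per-flit latency."""
    a, b = deque(), deque()
    latency, t, injected = [], 0, 0
    while injected < len(arrivals) or a or b:
        # Inject the next flit into router A when its arrival time is due.
        if injected < len(arrivals) and arrivals[injected] <= t:
            a.append(arrivals[injected])
            injected += 1
        # B drains toward the (slow) sink every downstream_rate cycles.
        if b and t % downstream_rate == 0:
            latency.append(t - b.popleft())
        # Backpressure: A forwards only if B has a free buffer slot.
        if a and len(b) < buf_size:
            b.append(a.popleft())
        t += 1
    return latency

# A burst of 6 flits injected back-to-back: later flits see steadily
# growing latency because the slow downstream link backpressures router A.
print(simulate_two_hop(list(range(6))))   # [2, 3, 4, 5, 6, 7]
```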