
    E-QED: Electrical Bug Localization During Post-Silicon Validation Enabled by Quick Error Detection and Formal Methods

    During post-silicon validation, manufactured integrated circuits are extensively tested in actual system environments to detect design bugs. Bug localization involves identifying a bug trace (a sequence of inputs that activates and detects the bug) and the hardware design block where the bug is located. Existing bug localization practices during post-silicon validation are mostly manual and ad hoc, and hence extremely expensive and time-consuming. This is particularly true for subtle electrical bugs caused by unexpected interactions between a design and its electrical state. We present E-QED, a new approach that automatically localizes electrical bugs during post-silicon validation. Our results on the OpenSPARC T2, an open-source 500-million-transistor multicore chip design, demonstrate the effectiveness and practicality of E-QED: starting with a failed post-silicon test, in a few hours (9 hours on average) we can automatically narrow the location of the bug to (the fan-in logic cone of) a handful of candidate flip-flops (18 flip-flops on average for a design with ~1 million flip-flops) and also obtain the corresponding bug trace. The area impact of E-QED is ~2.5%. In contrast, determining this same information might take weeks (or even months) of mostly manual work using traditional approaches.
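    The abstract does not spell out the Quick Error Detection transformation itself; as a rough illustration of the idea, QED-style tests duplicate the computation of the original test and compare the two result streams at short intervals, so an electrical bug is detected within a few operations of its activation rather than millions of cycles later. A minimal C sketch under that reading, with `work` and the check interval as hypothetical placeholders:

```c
#include <stdio.h>
#include <stdlib.h>

/* QED-style duplicate-and-compare: every original operation is mirrored
 * on shadow values, and the two result streams are compared at short
 * intervals so an error is detected close to where it was activated. */
#define CHECK_INTERVAL 4            /* hypothetical check frequency */

static long work(long x, int i) {   /* stand-in for test operations */
    return x * 7 + i;
}

int main(void) {
    long orig = 1, shadow = 1;
    for (int i = 0; i < 1000; i++) {
        orig   = work(orig, i);     /* original instruction stream  */
        shadow = work(shadow, i);   /* duplicated (shadow) stream   */
        if (i % CHECK_INTERVAL == 0 && orig != shadow) {
            /* short error-detection latency -> easier localization */
            fprintf(stderr, "QED check failed at op %d\n", i);
            return EXIT_FAILURE;
        }
    }
    puts("no mismatch detected");
    return EXIT_SUCCESS;
}
```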

    Methods for Robust and Energy-Efficient Microprocessor Architectures

    Technology scaling has enabled improvements in the three major design optimization objectives: increased performance, lower power consumption, and lower die cost, while system design has focused on bringing more functionality into products at lower cost. Today's microprocessors are much faster and far more versatile than their predecessors, but they also consume much more power. As operating frequency and integration density increase, total chip power dissipation increases: the demand for more functionality on a single chip packs ever more transistors onto a single die, and switching frequency rises with every technology generation. By developing aggressive and sophisticated mechanisms to boost performance and enhance energy efficiency in conjunction with shrinking transistor sizes, microprocessors have become extremely complex systems, making microprocessor verification and manufacturing testing a major challenge for the semiconductor industry. Manufacturers therefore spend extra effort, time, budget, and chip area to ensure that the delivered products operate correctly. To meet high-dependability requirements, manufacturers apply a sequence of verification tasks throughout the entire life-cycle of the microprocessor to guard the chips against the various types of errors that may occur after the products are released to the market. To this end, this work presents novel methods for ensuring the correctness of the microprocessor during the post-silicon validation phase and for improving the energy efficiency of modern microprocessors. These methods can be applied during the prototyping phase of the microprocessors or after their release to the market. More specifically, in the first part of the thesis we present two different ISA-independent, software-based post-silicon validation methods, which contribute to the formalization and modeling as well as the acceleration of the post-silicon validation process and expose difficult-to-find bugs in the address translation mechanisms (ATM) of modern microprocessors. Both methods improve the detection and diagnosis of hardware design bugs in the ATM structures and significantly accelerate bug detection during the post-silicon validation phase. In the second part of the thesis we present a detailed system-level voltage scaling characterization study for two state-of-the-art ARMv8-based multicore CPUs. This extensive characterization study identifies the pessimistic voltage guardbands (the increased voltage margins set by the manufacturer) of each individual microprocessor core and analyzes any abnormal behavior that may occur in off-nominal voltage conditions. Towards the formalization of such abnormal behavior, we also present a simple consolidated function, the Severity function, which aggregates the effects of reduced-voltage operation. We then introduce dedicated programs (diagnostic micro-viruses) that aim to accelerate the time-consuming voltage-margin characterization studies by stressing the fundamental hardware components.
    Finally, we present a comprehensive exploration of how two server-grade systems behave under different frequency and core-allocation configurations beyond nominal voltage operation in multicore executions. This analysis aims (1) to identify the best performance-per-watt operating points, (2) to reveal how and why different core-allocation options affect energy consumption, and (3) to enhance the default Linux scheduler to take task-allocation decisions for balanced performance and energy efficiency.
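    The thesis abstract does not show the micro-virus code; as a hedged illustration, a diagnostic micro-virus is typically a small known-answer kernel that saturates one hardware structure (here, the integer ALU) so that silent data corruptions surface quickly as the supply voltage is lowered. All constants and the calibration protocol below are illustrative assumptions:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical ALU micro-virus: a tight known-answer loop. Run it once
 * at nominal voltage to record the golden result, then re-run at each
 * reduced voltage step; a mismatch flags a silent data corruption at
 * that voltage margin. */
#define ITERATIONS 100000000ULL

int main(int argc, char **argv) {
    uint64_t acc = 0x9e3779b97f4a7c15ULL;   /* arbitrary seed */
    for (uint64_t i = 0; i < ITERATIONS; i++) {
        acc = acc * 6364136223846793005ULL + 1442695040888963407ULL;
        acc ^= acc >> 33;          /* exercise multiplier and shifter */
    }
    if (argc < 2) {                /* calibration run: print golden   */
        printf("golden=%llu\n", (unsigned long long)acc);
    } else {                       /* characterization run: compare   */
        uint64_t golden = strtoull(argv[1], NULL, 0);
        printf("%s\n", acc == golden ? "match" : "MISMATCH (possible SDC)");
    }
    return 0;
}
```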

    Study and Benchmarking of modern computing architectures

    Modern computing architectures change rapidly and exhibit high levels of complexity and heterogeneity. For example, the Barcelona Supercomputing Center (BSC) will have one general-purpose cluster and three smaller clusters with emerging technologies: Many Core architecture from Intel, Power from IBM, and ARMv8 from Fujitsu. These technologies are currently being developed to accelerate the arrival of the new generation of pre-exascale supercomputers. We have analysed these architectures in order to gain the knowledge and experience needed to get the best possible performance from them when running HPC applications that exploit the strengths of each architecture. Additionally, we have run a benchmark suite on each available computer architecture with the purpose of characterizing them. To complement the benchmark results, we analysed the execution of each benchmark with a performance analysis tool. Several performance analysis tools already exist, covering different performance areas such as hardware event counting, performance simulation, and tracing. However, these tools depend on being ported to modern computing architectures, and they often have different usages on different architectures. We propose a solution based on the perf_event interface, which is included in the Linux kernel, that allows us to analyse benchmark execution on the different modern computing architectures and lets users easily obtain performance information about their applications. The information our solution provides is a sampling trace of hardware events, a profiling report, and monitoring information for CPU and memory. Our solution enhances the performance results of other tools with extra features that support a better understanding of the bottlenecks of applications and/or architectures, and thus benchmarking. In addition, our proposal can be used on any system running Linux.
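    The abstract names the kernel's perf_event interface; a minimal hardware-event counting example against that interface (not the thesis's actual tool) might look like the following. perf_event_open(2) has no glibc wrapper, so it is invoked via syscall(2); the measured loop is a placeholder workload:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* perf_event_open has no glibc wrapper, so call it via syscall(2). */
static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
                           int cpu, int group_fd, unsigned long flags) {
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void) {
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_INSTRUCTIONS;  /* retired instructions */
    attr.disabled = 1;                         /* start stopped        */
    attr.exclude_kernel = 1;                   /* user space only      */

    int fd = perf_event_open(&attr, 0 /* this process */, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    volatile uint64_t x = 0;                   /* placeholder workload */
    for (int i = 0; i < 1000000; i++) x += i;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    uint64_t count;
    read(fd, &count, sizeof(count));
    printf("instructions: %llu\n", (unsigned long long)count);
    close(fd);
    return 0;
}
```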

    PowerCool: Simulation of Cooling and Powering of 3D MPSoCs with Integrated Flow Cell Arrays

    Integrated Flow-Cell Arrays (FCAs) combine integrated liquid cooling with on-chip power generation, converting the chemical energy of flowing electrolyte solutions into electrical energy. FCA technology provides a promising way to address both heat removal and power delivery issues in 3D Multiprocessor Systems-on-Chips (MPSoCs). In this paper we motivate the benefits of FCAs in 3D MPSoCs via a qualitative analysis and explore the capabilities of the proposed technology using our extended PowerCool simulator. PowerCool is a tool that performs combined compact thermal and electrochemical simulation of 3D MPSoCs with inter-tier FCA-based cooling and power generation. We validate our electrochemical model against experimental data obtained with a micro-scale FCA, and extend PowerCool with a compact thermal model (3D-ICE) and subthreshold leakage estimation. We show the sensitivity of FCA cooling and power generation to design-time (FCA geometry) and run-time (fluid inlet temperature, flow rate) parameters. Our results show that the FCA can be optimized to keep maximum chip temperature below 95 °C for an average chip power consumption of 50 W/cm² while generating up to 3.6 W per cm² of chip area.
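    Using only the figures quoted above, a quick back-of-the-envelope check (our arithmetic, not a number stated in the paper) shows what fraction of the chip's power draw the flow cells can recover:

```latex
\[
\frac{P_{\text{generated}}}{P_{\text{consumed}}}
  = \frac{3.6\ \mathrm{W/cm^2}}{50\ \mathrm{W/cm^2}}
  = 0.072 \approx 7\%
\]
```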

    Design methodology and productivity improvement in high speed VLSI circuits

    2017 Spring. Includes bibliographical references. To view the abstract, please see the full text of the document.

    ControlPULP: A RISC-V On-Chip Parallel Power Controller for Many-Core HPC Processors with FPGA-Based Hardware-In-The-Loop Power and Thermal Emulation

    High-Performance Computing (HPC) processors are nowadays integrated Cyber-Physical Systems demanding complex and high-bandwidth closed-loop power and thermal control strategies. To efficiently satisfy real-time multi-input multi-output (MIMO) optimal power requirements, high-end processors integrate an on-die power controller system (PCS). While traditional PCSs are based on a simple microcontroller (MCU)-class core, more scalable and flexible PCS architectures are required to support advanced MIMO control algorithms for managing the ever-increasing number of cores, power states, and process, voltage, and temperature variability. This paper presents ControlPULP, an open-source, HW/SW RISC-V parallel PCS platform consisting of a single-core MCU with fast interrupt handling, coupled with a scalable multi-core programmable cluster accelerator and a specialized DMA engine for the parallel acceleration of real-time power management policies. ControlPULP relies on FreeRTOS to schedule a reactive power control firmware (PCF) application layer. We demonstrate ControlPULP in a power management use case targeting a next-generation 72-core HPC processor. We first show that the multi-core cluster accelerates the PCF, achieving a 4.9x speedup compared to single-core execution and enabling more advanced power management algorithms within the control hyper-period at a small area overhead, about 0.1% of the area of a modern HPC CPU die. We then assess the PCS and PCF by designing an FPGA-based, closed-loop emulation framework that leverages the heterogeneous SoC paradigm, achieving DVFS tracking with a mean deviation within 3% of the plant's thermal design power (TDP) against a software-equivalent model-in-the-loop approach. Finally, we show that the proposed PCF compares favorably with an industry-grade control algorithm under computationally intensive workloads. (Comment: 33 pages, 11 figures.)
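    The abstract says FreeRTOS schedules the reactive PCF; as a hedged sketch (not ControlPULP's actual firmware), a fixed-period control task in FreeRTOS typically looks like the following, with the sensor-read, control-step, and DVFS-apply helpers as hypothetical placeholders:

```c
#include <FreeRTOS.h>
#include <task.h>

/* Hypothetical placeholders -- not part of the ControlPULP codebase. */
extern void read_power_thermal_sensors(void);
extern void compute_mimo_control_step(void);
extern void apply_dvfs_setpoints(void);

#define CONTROL_PERIOD_MS 1   /* hypothetical control hyper-period */

/* Periodic, reactive power-control task: read sensors, run the MIMO
 * control step, apply new voltage/frequency operating points. */
static void pcf_control_task(void *arg) {
    (void)arg;
    TickType_t last_wake = xTaskGetTickCount();
    for (;;) {
        read_power_thermal_sensors();
        compute_mimo_control_step();   /* offloadable to the cluster */
        apply_dvfs_setpoints();
        /* Sleep until exactly one period after the previous release,
         * so the control loop does not drift. */
        vTaskDelayUntil(&last_wake, pdMS_TO_TICKS(CONTROL_PERIOD_MS));
    }
}

int start_pcf(void) {
    xTaskCreate(pcf_control_task, "PCF", configMINIMAL_STACK_SIZE,
                NULL, tskIDLE_PRIORITY + 2, NULL);
    vTaskStartScheduler();   /* does not return on success */
    return -1;
}
```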

    Decompose and Conquer: Addressing Evasive Errors in Systems on Chip

    Modern computer chips comprise many components, including microprocessor cores, memory modules, on-chip networks, and accelerators. Such system-on-chip (SoC) designs are deployed in a variety of computing devices: from internet-of-things devices, to smartphones, to personal computers, to data centers. In this dissertation, we discuss evasive errors in SoC designs and how these errors can be addressed efficiently. In particular, we focus on two types of errors: design bugs and permanent faults. Design bugs originate from the limited amount of time allowed for design verification and validation, so they are often found in functional features that are rarely activated. Complete functional verification, which could eliminate design bugs, is extremely time-consuming and thus impractical for modern complex SoC designs. Permanent faults are caused by failures of fragile transistors in nano-scale semiconductor manufacturing processes. Indeed, weak transistors may wear out unexpectedly within the lifespan of the design. Hardware structures that reduce the occurrence of permanent faults incur significant silicon area or performance overheads, making them infeasible for most cost-sensitive SoC designs. To tackle these evasive errors efficiently, we propose to leverage the principle of decomposition to lower the complexity of the software analysis or hardware structures involved. To this end, we present several decomposition techniques, specific to major SoC components. We first focus on microprocessor cores, presenting a lightweight bug-masking analysis that decomposes a program into individual instructions to identify whether a design bug would be masked by the program's execution. We then move to memory subsystems: there, we offer an efficient memory consistency testing framework to detect buggy memory-ordering behaviors, which decomposes the memory-ordering graph into small components based on incremental differences. We also propose a microarchitectural patching solution for memory subsystem bugs, which augments each core node with small distributed programmable logic instead of a global patching module. In the context of on-chip networks, we propose two routing reconfiguration algorithms that bypass faulty network resources. The first computes short-term routes in a distributed fashion, localized to the fault region. The second decomposes application-aware routing computation into simple routing rules so as to quickly find deadlock-free, application-optimized routes in a fault-ridden network. Finally, we consider general accelerator modules in SoC designs. When a system includes many accelerators, the variety of interactions among them must be verified to catch buggy behaviors. To this end, we decompose such inter-module communication into basic interaction elements, which can be reassembled into new, interesting tests. Overall, we show that decomposing complex software algorithms and hardware structures can significantly reduce overheads: up to three orders of magnitude in the bug-masking analysis and the application-aware routing, approximately 50 times in the routing reconfiguration latency, and 5 times on average in the memory-ordering graph checking. These overhead reductions come with losses in error coverage: 23% undetected bug-masking incidents, 39% non-patchable memory bugs, and occasionally overlooked rare patterns of multiple faults.
    In this dissertation, we discuss the ideas and their trade-offs, and present future research directions. PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147637/1/doowon_1.pd
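    The abstract does not give the memory-consistency checking algorithm itself; a common core of such checkers is a cycle search over the observed ordering graph, since an execution whose happens-before constraints form a cycle cannot be explained by any consistent order. A generic DFS sketch under that assumption (not the dissertation's incremental-decomposition method):

```c
#include <stdio.h>

#define MAX_OPS 64   /* memory operations observed in one test */

/* adj[u][v] != 0 means "u must happen before v" (program order,
 * coherence order, reads-from, etc.). A cycle in this graph means
 * the observed execution violates the consistency model. */
static int adj[MAX_OPS][MAX_OPS];
static int state[MAX_OPS];   /* 0 = unvisited, 1 = on stack, 2 = done */

static int has_cycle_from(int u, int n) {
    state[u] = 1;
    for (int v = 0; v < n; v++) {
        if (!adj[u][v]) continue;
        if (state[v] == 1) return 1;              /* back edge: cycle */
        if (state[v] == 0 && has_cycle_from(v, n)) return 1;
    }
    state[u] = 2;
    return 0;
}

int ordering_graph_is_buggy(int n) {
    for (int u = 0; u < n; u++)
        if (state[u] == 0 && has_cycle_from(u, n)) return 1;
    return 0;
}

int main(void) {
    /* Toy example: op0 -> op1 -> op2 -> op0 is cyclic, hence buggy. */
    adj[0][1] = adj[1][2] = adj[2][0] = 1;
    printf("buggy ordering: %s\n", ordering_graph_is_buggy(3) ? "yes" : "no");
    return 0;
}
```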