
    Innovative Techniques for Testing and Diagnosing SoCs

    We rely on the continued functioning of many electronic devices for our everyday welfare, and these usually embed integrated circuits that are becoming ever cheaper and smaller while offering improved features. Nowadays, microelectronics can integrate a complete computer, with CPU, memories, and even GPUs, on a single die, namely a System-on-Chip (SoC). SoCs are also employed in safety-critical automotive applications, and they need to be tested thoroughly to comply with reliability standards, in particular ISO 26262 on functional safety for road vehicles. The goal of this PhD thesis is to improve SoC reliability by proposing innovative techniques for testing and diagnosing its internal modules: CPUs, memories, peripherals, and GPUs. The proposed approaches, in the order in which they appear in this thesis, are as follows: 1. Embedded Memory Diagnosis: memories are dense and complex circuits which are susceptible to design and manufacturing errors, so it is important to understand how faults occur in the memory array. In practice, the logical and physical array representations differ because of design optimizations collectively known as scrambling. This part proposes an accurate memory diagnosis flow built around a software tool able to analyze test results, unscramble the memory array, map failing syndromes to physical cell locations, perform cumulative analyses, and derive a final fault-model hypothesis. Several SRAM failing syndromes, gathered from an industrial automotive 32-bit SoC developed by STMicroelectronics, were analyzed as case studies. The tool localized defects virtually, and the results were confirmed by photographs taken through a microscope. 2. Functional Test Pattern Generation: the key to a successful test is the pattern applied to the device. Patterns can be structural or functional; the former usually rely on embedded test modules targeting manufacturing defects and are only effective before the component is shipped to the customer, while the latter can be applied in mission mode with minimal impact on performance but are penalized by long generation times. Functional test patterns, however, can be tailored to different goals in mission mode. Part III of this PhD thesis proposes three functional test pattern generation methods for CPU cores embedded in SoCs, each targeting a different test purpose: a. Functional Stress Patterns, suitable for maximizing functional stress during operational-life tests and burn-in screening, for an optimal device reliability characterization; b. Functional Power-Hungry Patterns, suitable for determining the functional peak power, used to strictly bound the power of structural patterns during manufacturing test, thus reducing premature device over-kill while delivering high test coverage; c. Software-Based Self-Test (SBST) Patterns, which combine the strengths of structural and functional patterns and can be executed periodically in mission mode. In addition, an external hardware module communicating with the devised SBST was proposed; it increases fault coverage by 3% by testing critical hardly functionally testable faults not covered by conventional SBST patterns. In all three approaches, an automatic functional test pattern generator based on an evolutionary algorithm that maximizes metrics related to stress, power, and fault coverage was employed to quickly produce the desired patterns.
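    The following is a minimal sketch of such an evolutionary loop, assuming a user-supplied fitness() that runs a candidate test program on a simulator and returns the metric being maximized (stress activity, peak power, or fault coverage); the toy instruction set and all parameters are illustrative assumptions, not the thesis's generator.

        # Minimal evolutionary test-program generation sketch (illustrative).
        import random

        INSTRUCTIONS = ["ADD", "SUB", "MUL", "LOAD", "STORE", "XOR"]  # toy ISA

        def random_program(length=32):
            return [random.choice(INSTRUCTIONS) for _ in range(length)]

        def mutate(program, rate=0.05):
            # Replace each instruction with a random one with probability `rate`.
            return [random.choice(INSTRUCTIONS) if random.random() < rate else op
                    for op in program]

        def crossover(a, b):
            # Single-point crossover of two parent programs.
            cut = random.randrange(1, len(a))
            return a[:cut] + b[cut:]

        def evolve(fitness, pop_size=20, generations=100):
            population = [random_program() for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=fitness, reverse=True)
                elite = population[: pop_size // 4]  # keep the best quarter
                children = [mutate(crossover(random.choice(elite),
                                             random.choice(elite)))
                            for _ in range(pop_size - len(elite))]
                population = elite + children
            return max(population, key=fitness)

    In such flows the fitness evaluation (simulation) dominates runtime, so reducing the number of evaluations needed to converge directly cuts generation time.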
The approaches were evaluated on two industrial cases developed by STMicroelectronics: an 8051-based SoC and a 32-bit Power Architecture SoC. Results show that generation time was reduced by up to 75% compared with earlier methodologies, while significantly increasing the desired metrics. 3. Fault Injection in GPGPUs: fault injection mechanisms in semiconductor devices are suitable for generating structural patterns, testing and activating mitigation techniques, and validating robust hardware and software applications. GPGPUs are known for the fast parallel computation used in high-performance computing and advanced driver assistance systems, where reliability is the key point. Moreover, GPGPU manufacturers do not disclose their design descriptions for reasons of secrecy, so commercial fault injectors that rely on a GPGPU model are unfeasible, leaving radiation tests as the only available resource, and those are costly. In the last part of this thesis, we propose a software-implemented fault injector able to inject bit-flips into the memory elements of a real GPGPU. It exploits a software debugger tool combined with the C-CUDA grammar to judiciously determine fault spots and apply bit-flip operations to program variables. The goal is to validate robust parallel algorithms by studying fault propagation or by activating any redundancy mechanisms they embed. The effectiveness of the tool was evaluated on two robust applications: a redundant parallel matrix multiplication and a floating-point Fast Fourier Transform.
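    One plausible realization of such a debugger-based injector, assuming a gdb-compatible debugger such as cuda-gdb driven in batch mode, is sketched below; the binary name, breakpoint location, and variable are hypothetical placeholders, not the thesis's tool.

        # Hypothetical debugger-driven bit-flip injection (illustrative sketch).
        import random
        import subprocess

        def inject_bitflip(binary, location, var, bit=None, debugger="cuda-gdb"):
            """Run `binary` under the debugger, stop at `location`, and flip
            one bit of `var` via XOR before resuming execution."""
            bit = random.randrange(32) if bit is None else bit
            commands = [
                f"break {location}",                           # stop where `var` is live
                "run",
                f"set variable {var} = {var} ^ (1 << {bit})",  # the actual bit-flip
                "continue",
            ]
            argv = [debugger, "-batch"]
            for c in commands:
                argv += ["-ex", c]
            argv.append(binary)
            return subprocess.run(argv, capture_output=True, text=True)

        # Hypothetical usage: flip bit 7 of `acc` at one kernel source line, then
        # inspect the program's own error-detection output.
        result = inject_bitflip("./matmul_redundant", "matmul.cu:42", "acc", bit=7)
        print(result.stdout)

    A campaign then repeats this over randomly chosen sites and bits, classifying each run as masked, detected by the redundancy mechanism, or silent data corruption.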

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual, and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the automated AI/ML approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the future scope of AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    On Energy Efficient Computing Platforms

    In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations, and data centres, are becoming an indispensable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, the Network-on-Chip (NoC) has proven to be an efficient communication architecture which can further improve system performance and scalability while reducing design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend for future computing platforms, 3D systems have many benefits, including higher integration density, smaller footprint, and heterogeneous integration. Moreover, 3D technology can significantly improve network communication and effectively avoid long wires, thereby providing higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance for achieving higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach in which agents are on-chip components that monitor and control system parameters such as supply voltage and operating frequency. With this approach, we have analysed implementation alternatives for dynamic voltage and frequency scaling (DVFS) and power gating at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes. A Honeycomb NoC architecture with turn-model-based deadlock-free routing algorithms is proposed in this thesis. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance, and energy efficiency.
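    A minimal sketch of such an agent, assuming per-core utilization sampling and an illustrative table of voltage/frequency operating points (the thresholds and operating points are assumptions, not taken from the thesis):

        # Minimal agent-based DVFS sketch (illustrative operating points).
        from dataclasses import dataclass

        OPERATING_POINTS = [(0.8, 200e6), (0.9, 400e6), (1.0, 800e6)]  # (V, Hz)

        @dataclass
        class DVFSAgent:
            """On-chip agent monitoring one core and adapting its operating point."""
            level: int = len(OPERATING_POINTS) - 1  # start at full speed

            def step(self, utilization):
                # Scale down when the core is mostly idle, up when it saturates;
                # lowering V and f cuts dynamic power, roughly C * V^2 * f.
                if utilization < 0.3 and self.level > 0:
                    self.level -= 1
                elif utilization > 0.8 and self.level < len(OPERATING_POINTS) - 1:
                    self.level += 1
                return OPERATING_POINTS[self.level]

        agent = DVFSAgent()
        for u in (0.9, 0.2, 0.1, 0.95):  # sampled utilization trace
            volts, hertz = agent.step(u)
            print(f"util={u:.2f} -> {volts:.1f} V, {hertz/1e6:.0f} MHz")

    Power gating extends the same idea with an "off" state for idle components, trading wake-up latency for leakage savings.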

    Techniques for Improving Security and Trustworthiness of Integrated Circuits

    The integrated circuit (IC) development process is becoming increasingly vulnerable to malicious activities because untrusted parties may be involved in the IC development flow. Four typical problems impact the security and trustworthiness of ICs used in military, financial, transportation, and other critical systems: (i) Malicious inclusions and alterations, known as hardware Trojans, can be inserted into a design during GDSII development and fabrication. Hardware Trojans may cause malfunctions, lower the reliability of ICs, leak confidential information to adversaries, or even destroy the system under specifically designed conditions. (ii) The number of circuit-related counterfeiting incidents reported by component manufacturers has increased significantly over the past few years, with recycled ICs contributing the largest share of reported incidents. Since recycled ICs have already been used in the field, their performance and reliability have been degraded by aging effects and the harsh recycling process. (iii) Reverse engineering (RE) is the process of extracting a circuit's gate-level netlist and/or inferring its functionality. RE threatens a design because attackers can steal and pirate it (IP piracy), identify the device technology, or facilitate other hardware attacks. (iv) Traditional tools for uniquely identifying devices are vulnerable to non-invasive or invasive physical attacks. Securing the ID/key is of utmost importance, since the leakage of even a single device ID/key could be exploited by an adversary to hack other devices or produce pirated ones. In this work, we have developed a series of design and test methodologies to address these four challenges and thus enhance the security, trustworthiness, and reliability of ICs. The techniques proposed in this thesis include: a path-delay fingerprinting technique for the detection of hardware Trojans, recycled ICs, and other types of counterfeit ICs, including remarked, overproduced, and cloned ICs with their unique identifiers; a Built-In Self-Authentication (BISA) technique to prevent hardware Trojan insertion by untrusted fabrication facilities; an efficient and secure split manufacturing flow via an Obfuscated Built-In Self-Authentication (OBISA) technique to prevent reverse engineering by untrusted fabrication facilities; and a novel bit-selection approach for obtaining the most reliable bits for SRAM-based physical unclonable functions (PUFs) across environmental conditions and silicon aging effects.
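    The intuition behind path-delay fingerprinting is that extra load from a Trojan payload, or aging from recycling, measurably shifts path delays. A minimal sketch, assuming per-path delay measurements from delay-test patterns and an illustrative 3-sigma screening threshold (the data and threshold are assumptions, not the thesis's method):

        # Path-delay fingerprinting sketch: flag dies whose measured path
        # delays deviate from golden (known-good) statistics.
        import statistics

        def build_fingerprint(golden_measurements):
            """Per-path mean and standard deviation from known-good ICs."""
            paths = zip(*golden_measurements)  # transpose: one tuple per path
            return [(statistics.mean(p), statistics.stdev(p)) for p in paths]

        def is_suspect(device_delays, fingerprint, k=3.0):
            """Flag a device whose delay on any path deviates beyond k sigma."""
            return any(abs(d - mu) > k * sigma
                       for d, (mu, sigma) in zip(device_delays, fingerprint))

        golden = [[1.02, 2.11, 0.98],  # delays in ns, one row per golden die
                  [1.00, 2.09, 1.01],
                  [0.99, 2.13, 1.00]]
        fp = build_fingerprint(golden)
        print(is_suspect([1.01, 2.55, 0.99], fp))  # True: path 2 slowed down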

    Thermal Issues in Testing of Advanced Systems on Chip


    Low-Power Embedded Design Solutions and Low-Latency On-Chip Interconnect Architecture for System-On-Chip Design

    This dissertation presents three design solutions that address several key system-on-chip (SoC) issues to achieve low power and high performance: 1) joint source and channel decoding (JSCD) schemes for low-power SoCs used in portable multimedia systems, 2) an efficient on-chip interconnect architecture for massive multimedia data streaming on multiprocessor SoCs (MPSoCs), and 3) a data processing architecture for low-power SoCs in distributed sensor systems (DSS) and its implementation. The first part includes a low-power embedded low-density parity-check (LDPC) - H.264 joint decoding architecture that lowers the baseband energy consumption of a channel decoder using joint source decoding and dynamic voltage and frequency scaling (DVFS). A low-power multiple-input multiple-output (MIMO) and H.264 video joint detector/decoder design that minimizes energy for portable, wireless embedded systems is also presented. In the second part, a link-level quality-of-service (QoS) scheme using unequal error protection (UEP) for low-power network-on-chip (NoC) designs and low-latency on-chip network designs for MPSoCs is proposed. This part contains WaveSync, a low-latency-focused network-on-chip architecture for globally-asynchronous locally-synchronous (GALS) designs, and a simultaneous dual-path routing (SDPR) scheme exploiting the path diversity present in typical mesh-topology network-on-chips. SDPR is akin to having a higher link width but without the significant hardware overhead associated with simple bus-width scaling. The last part presents data processing unit designs for embedded SoCs. We propose a data processing and control logic design for a new radiation detection sensor system generating data at or above the peta-bits-per-second level. Implementation results show that the intended clock rate is achieved within the power target of less than 200 mW. We also present a digital signal processing (DSP) accelerator supporting configurable MAC, FFT, FIR, and 3-D cross-product operations for embedded SoCs. It consumes 12.35 mW within a 0.167 mm² area at 333 MHz.
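    A minimal sketch of the idea behind SDPR, assuming dimension-ordered XY and YX routes on a mesh and an illustrative even/odd flit-splitting policy (the coordinates and splitting rule are assumptions, not the dissertation's design):

        # Simultaneous dual-path routing sketch on a 2D mesh (illustrative).
        def steps(a, b):
            """Integer walk from a to b, excluding a, including b."""
            step = 1 if b >= a else -1
            return list(range(a + step, b + step, step)) if a != b else []

        def xy_route(src, dst):
            (sx, sy), (dx, dy) = src, dst
            return [(x, sy) for x in steps(sx, dx)] + [(dx, y) for y in steps(sy, dy)]

        def yx_route(src, dst):
            (sx, sy), (dx, dy) = src, dst
            return [(sx, y) for y in steps(sy, dy)] + [(x, dy) for x in steps(sx, dx)]

        def sdpr_split(flits, src, dst):
            """Send even flits on the XY path and odd flits on the YX path.
            When source and destination differ in both dimensions, the two
            routes share only the destination, roughly doubling usable
            bandwidth without physically widening the links."""
            return (flits[0::2], xy_route(src, dst)), (flits[1::2], yx_route(src, dst))

        (half_a, path_a), (half_b, path_b) = sdpr_split(list(range(8)), (0, 0), (2, 3))
        print(path_a)  # [(1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
        print(path_b)  # [(0, 1), (0, 2), (0, 3), (1, 3), (2, 3)]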

    Case Study: First-Time Success ASIC Design Methodology Applied to a Multi-Processor System-on-Chip

    Achieving first-time success is crucial in the ASIC design league, considering the soaring costs, tight time-to-market windows, and competitive business environment. One key factor in ensuring first-time success is a well-defined ASIC design methodology. Here we propose a novel ASIC design methodology that has been proven on the RUMPS401 (Rahman University Multi-Processor System 401) multiprocessor system-on-chip (MPSoC) project. The MPSoC project was initiated by the Universiti Tunku Abdul Rahman (UTAR) VLSI design center. The proposed methodology includes the use of the Universal Verification Methodology (UVM). The use of electronic design automation (EDA) software during each step of the design methodology is also presented. The first-time-success RUMPS401 demonstrates the use of the proposed ASIC design methodology and the benefit of using one, especially since the project was carried out in an educational environment that is even more limited in budget, resources, and know-how than its business and industrial counterparts. In short, a novel ASIC design methodology tailored to first-time-success MPSoC design is presented.

    Design-for-Test and Test Optimization Techniques for TSV-based 3D Stacked ICs

    As integrated circuits (ICs) continue to scale to smaller dimensions, long interconnects have become the dominant contributor to circuit delay and a significant component of power consumption. In order to reduce the length of these interconnects, 3D integration and 3D stacked ICs (3D SICs) are active areas of research in both academia and industry. 3D SICs not only have the potential to reduce average interconnect length and alleviate many of the problems caused by long global interconnects, but they can offer greater design flexibility over 2D ICs, significant reductions in power consumption and footprint in an era of mobile applications, increased on-chip data bandwidth through delay reduction, and improved heterogeneous integration.

    Compared to 2D ICs, the manufacture and test of 3D SICs is significantly more complex. Through-silicon vias (TSVs), which constitute the dense vertical interconnects in a die stack, are a source of additional and unique defects not seen before in ICs. At the same time, testing these TSVs, especially before die stacking, is recognized as a major challenge. The testing of a 3D stack is constrained by limited test access, test pin availability, power, and thermal constraints. Therefore, efficient and optimized test architectures are needed to ensure that pre-bond, partial, and complete stack testing are not prohibitively expensive.

    Methods of testing TSVs prior to bonding continue to be a difficult problem due to test access and testability issues. Although some built-in self-test (BIST) techniques have been proposed, these techniques have numerous drawbacks that render them impractical. In this dissertation, a low-cost test architecture is introduced to enable pre-bond TSV test through TSV probing. This has the benefit of not needing large analog test components on the die, which is a significant drawback of many BIST architectures. Coupled with an optimization method described in this dissertation to create parallel test groups for TSVs, test time for pre-bond TSV tests can be significantly reduced. The pre-bond probing methodology is expanded upon to allow for pre-bond scan test as well, bringing both pre-bond TSV test and structural known-good-die (KGD) test under a single test paradigm.

    The addition of boundary registers on functional TSV paths required for pre-bond probing results in an increase in delay on inter-die functional paths. This cost of test architecture insertion can be a significant drawback, especially considering that one benefit of 3D integration is that critical paths can be partitioned between dies to reduce their delay. This dissertation derives a retiming flow that is used to recover the additional delay added to TSV paths by test cell insertion.

    Reducing the cost of test for 3D SICs is crucial considering that more tests are necessary during 3D SIC manufacturing. To reduce test cost, the test architecture and test scheduling for the stack must be optimized to reduce test time across all necessary test insertions. This dissertation examines three paradigms for 3D integration (hard dies, firm dies, and soft dies) that give varying degrees of control over the 2D test architectures on each die while optimizing the 3D test architecture. Integer linear programming (ILP) models are developed to provide an optimal 3D test architecture and test schedule for the dies in the 3D stack considering any or all post-bond test insertions. Results show that the ILP models outperform other optimization methods across a range of 3D benchmark circuits.

    In summary, this dissertation targets testing and design-for-test (DFT) of 3D SICs. The proposed techniques enable pre-bond TSV and structural test while maintaining a relatively low test cost. Future work will continue to enable testing of 3D SICs to move industry closer to realizing the true potential of 3D integration.
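    To give a flavor of the optimization, here is a toy integer linear program, assuming per-die test times that shrink with allocated test-access-mechanism (TAM) width, a serial test schedule, and a stack-wide wire budget; the numbers and the use of the PuLP package are illustrative assumptions, not the dissertation's models.

        # Toy ILP: pick one TAM width per die to minimize total stack test
        # time under a wire budget (requires the PuLP package).
        import pulp

        test_time = {                    # die -> {TAM width: test cycles}
            "die1": {8: 900, 16: 500, 32: 300},
            "die2": {8: 700, 16: 400, 32: 250},
            "die3": {8: 1100, 16: 600, 32: 350},
        }
        WIRE_BUDGET = 48                 # total TAM wires through the stack

        prob = pulp.LpProblem("stack_test_time", pulp.LpMinimize)
        x = {(d, w): pulp.LpVariable(f"x_{d}_{w}", cat="Binary")
             for d, opts in test_time.items() for w in opts}

        # Objective: total test time of the serial schedule.
        prob += pulp.lpSum(t * x[d, w] for d, opts in test_time.items()
                           for w, t in opts.items())

        # Each die gets exactly one TAM width.
        for d, opts in test_time.items():
            prob += pulp.lpSum(x[d, w] for w in opts) == 1

        # Allocated widths must fit the stack-wide wire budget.
        prob += pulp.lpSum(w * x[d, w] for (d, w) in x) <= WIRE_BUDGET

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        for (d, w), var in x.items():
            if var.value() > 0.5:
                print(d, "-> TAM width", w)

    The real models add session-based scheduling, per-insertion constraints (pre-bond, partial-stack, final), and thermal/power limits, but the structure is the same: binary architecture choices under shared-resource constraints.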

    Solutions for the Self-Adaptation of Wireless Systems

    The current demand for ubiquitous connectivity imposes stringent requirements on the fabrication of radio-frequency (RF) circuits. Designs are consequently transferred to the most advanced CMOS technologies, which were initially introduced to improve digital performance. In addition, as technology scales down, RF circuits become increasingly susceptible to variations during their lifetime, such as manufacturing process variability, temperature, environmental conditions, and aging. As a result, the usual worst-case circuit design leads to sub-optimal conditions, in terms of power and/or performance, most of the time. In order to counteract these variations, increase performance, and reduce power consumption, adaptation strategies must be put in place. More importantly, the fabrication process introduces more and more performance variability, which can have a dramatic impact on fabrication yield. This is why RF designs are not easily fabricated in the most advanced CMOS technologies, such as the 32nm or 22nm nodes. In this context, the performance of RF circuits needs to be calibrated after fabrication so as to take these variations into account and recover the yield loss.

    This thesis presents a post-fabrication calibration technique for RF circuits. The technique is performed during production testing with minimal extra cost, which is critical since the cost of test is already comparable to the cost of fabrication for RF circuits and cannot be raised further. Power consumption is also taken into account, so that the impact of calibration on consumption is minimized. Calibration is enabled by equipping the circuit with tuning knobs and sensors. The optimal tuning knob setting is identified in one shot, based on a single test step that involves measuring the sensor outputs once. For this purpose, we rely on variation-aware sensors whose measurements remain invariant under tuning knob changes. As an auxiliary benefit, the variation-aware sensors are non-intrusive and totally transparent to the circuit.

    Our proposed methodology was first demonstrated on simulation data, with an RF power amplifier as a case study. A silicon demonstrator was then fabricated in a 65nm technology to fully demonstrate the methodology. The fabricated dataset of circuits was extracted from typical and corner wafers, i.e., with slow, typical, and fast transistors. This feature is very important since corner circuits are the worst design cases and therefore the most difficult to calibrate. In our case, corner circuits represent more than two thirds of the overall dataset, and the calibration is still proven effective. In detail, fabrication yield based on 3-sigma performance specifications increases from 21% to 93%. This is a major result for the technique, given that worst-case circuits are very rare in industrial fabrication.
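    A minimal sketch of the one-shot identification step, assuming a characterization set of (sensor readings, optimal knob code) pairs and an illustrative scikit-learn regressor; the synthetic data and model choice are assumptions and do not reflect the thesis's actual sensors or mapping.

        # One-shot calibration sketch: learn a map from variation-aware sensor
        # outputs to the best tuning-knob code, then calibrate each new die
        # from a single sensor measurement.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        # Characterization set: sensor readings (process monitors) and the
        # knob code found to be optimal for each characterized die.
        sensors = rng.normal(size=(200, 3))            # 3 sensor outputs per die
        best_knob = (2.0 * sensors[:, 0] - sensors[:, 1] + 4).round().clip(0, 7)

        model = RandomForestRegressor(n_estimators=50, random_state=0)
        model.fit(sensors, best_knob)

        # Production test: one sensor measurement -> one predicted knob code.
        new_die = rng.normal(size=(1, 3))
        knob_code = int(round(model.predict(new_die)[0]))
        print("apply knob code:", knob_code)

    Because the sensor outputs are invariant under knob changes, the measurement never needs to be repeated after applying the predicted setting, which is what keeps the calibration to a single test step.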