
    DP-fill: a dynamic programming approach to X-filling for minimizing peak test power in scan tests

    At-speed testing is crucial to catch small delay defects that occur during the manufacture of high-performance digital chips. Launch-Off-Capture (LOC) and Launch-Off-Shift (LOS) are the two prevalently used schemes for this purpose. The LOS scheme achieves higher fault coverage and consumes less test time than the LOC scheme, but dissipates more power during the capture phase of the at-speed test. Excessive IR-drop on the power grid during the capture phase causes false delay failures, leading to unwarranted and significant yield loss. As reported in the literature, intelligent filling of don't-care bits (X-filling) in test cubes has yielded significant power reduction. Given that the tests produced by automatic test pattern generation (ATPG) tools for large circuits contain a large number of don't-care bits, X-filling is very effective for them. Assuming that the design-for-testability (DFT) scheme preserves the state of the combinational logic between capture phases of successive patterns, this paper maps the problem of optimal X-filling for peak power minimization in the LOS scheme to a variant of the interval coloring problem and proposes a dynamic programming (DP) algorithm for it, along with a theoretical proof of its optimality. To the best of our knowledge, this is the first reported X-filling algorithm that is optimal. On the ITC99 benchmarks, the proposed algorithm produced peak power savings of up to 34% over the best known low-power X-filling algorithm for LOS testing. Interestingly, the power savings are observed to increase with the size of the circuit.
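
    For illustration, the sketch below conveys the general flavour of DP-based X-filling on a single test cube. It is not the paper's DP-fill, which minimizes peak capture power through an interval-coloring formulation; instead it solves the simpler textbook objective of filling don't-care bits to minimize scan-chain transitions, with the DP state being the value assigned to the previous bit. The function name and example cube are illustrative.

```python
# Minimal DP X-filling sketch (illustrative; NOT the paper's DP-fill).
# Fills 'X' bits of a test cube to minimize adjacent 0<->1 transitions.

def dp_xfill(cube):
    """cube: string over {'0', '1', 'X'}; returns a fully specified string."""
    INF = float("inf")
    n = len(cube)
    allowed = lambda c: (0, 1) if c == 'X' else (int(c),)
    # cost[i][b]: fewest transitions in cube[:i+1] when bit i is set to b
    cost = [[INF, INF] for _ in range(n)]
    prev = [[0, 0] for _ in range(n)]        # back-pointers
    for b in allowed(cube[0]):
        cost[0][b] = 0
    for i in range(1, n):
        for b in allowed(cube[i]):
            for p in allowed(cube[i - 1]):
                c = cost[i - 1][p] + (b != p)
                if c < cost[i][b]:
                    cost[i][b], prev[i][b] = c, p
    # Trace back the optimal filling.
    b = 0 if cost[n - 1][0] <= cost[n - 1][1] else 1
    out = [b]
    for i in range(n - 1, 0, -1):
        b = prev[i][b]
        out.append(b)
    return ''.join(str(v) for v in reversed(out))

print(dp_xfill("1XX0X1"))   # -> '100001' (2 transitions, the minimum here)
```

    The same "last value as state" idea extends to richer objectives once each assignment is given a per-cycle cost, which is where an interval-based formulation like the paper's becomes useful.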

    Network-on-Chip

    Addresses the challenges associated with system-on-chip integration. Network-on-Chip: The Next Generation of System-on-Chip Integration examines the current issues restricting chip-on-chip communication efficiency, and explores Network-on-Chip (NoC), a promising alternative that equips designers with the capability to produce a scalable, reusable, and high-performance communication backbone by allowing for the integration of a large number of cores on a single system-on-chip (SoC). This book provides a basic overview of topics associated with NoC-based design: communication infrastructure design, communication methodology, evaluation framework, and mapping of applications onto NoC. It details the design and evaluation of different proposed NoC structures, low-power techniques, signal integrity and reliability issues, application mapping, testing, and future trends. Utilizing examples of chips that have been implemented in industry and academia, this text presents the full architectural design of components verified through implementation in industrial CAD tools. It describes NoC research and developments, incorporates theoretical proofs strengthening the analysis procedures, and includes algorithms used in NoC design and synthesis. In addition, it considers other upcoming NoC issues, such as low-power NoC design, signal integrity issues, NoC testing, reconfiguration, synthesis, and 3-D NoC design. This text comprises 12 chapters and covers:
    - The evolution of NoC from SoC, with its research and development challenges
    - NoC protocols, elaborating flow control, available network topologies, routing mechanisms, fault tolerance, quality-of-service support, and the design of network interfaces
    - The router design strategies followed in NoCs
    - The evaluation mechanism of NoC architectures
    - The application mapping strategies followed in NoCs
    - Low-power design techniques specifically followed in NoCs
    - The signal integrity and reliability issues of NoC
    - The details of NoC testing strategies reported so far
    - The problem of synthesizing application-specific NoCs
    - Reconfigurable NoC design issues
    - Directions of future research and development in the field of NoC
    Network-on-Chip: The Next Generation of System-on-Chip Integration covers the basic topics, technology, and future trends relevant to NoC-based design, and can be used by engineers, students, researchers, and other industry professionals interested in computer architecture, embedded systems, and parallel/distributed systems.
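
    As a concrete taste of the routing mechanisms such books survey, the sketch below implements dimension-order (XY) routing, the textbook deadlock-free routing algorithm for 2D-mesh NoCs. The coordinates and function name are illustrative; this is not code from the book.

```python
# Dimension-order (XY) routing on a 2D-mesh NoC: route fully along the
# X dimension first, then along Y. Illustrative sketch only.

def xy_route(src, dst):
    """src, dst: (x, y) router coordinates; returns the list of hops."""
    x, y = src
    path = [src]
    while x != dst[0]:                  # move east/west first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:                  # then move north/south
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 1)))   # [(0, 0), (1, 0), (2, 0), (2, 1)]
```

    Because every packet exhausts its X moves before turning onto Y, the channel dependency graph is acyclic, which is what makes dimension-order routing deadlock-free.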

    Quantifiable Assurance: From IPs to Platforms

    Hardware vulnerabilities are generally considered more difficult to fix than software ones because they persist after fabrication. Thus, it is crucial to assess security and fix vulnerabilities at the earlier design phases, such as the Register Transfer Level (RTL) and gate level. The focus of existing security assessment techniques is mainly twofold: first, they check the security of Intellectual Property (IP) blocks separately; second, they assess security against individual threats, assuming the threats are orthogonal. We argue that IP-level security assessment is not sufficient. Eventually, the IPs are placed in a platform, such as a system-on-chip (SoC), where each IP is surrounded by other IPs connected through glue logic and shared/private buses. Hence, we must develop a methodology to assess platform-level security by considering both the IP-level security and the impact of the additional parameters introduced during platform integration. Another important factor is that the threats are not always orthogonal: improving security against one threat may affect security against other threats. Hence, to build a secure platform, we must first answer the following questions. What additional parameters are introduced during platform integration? How do we define and characterize the impact of these parameters on security? How do the mitigation techniques for one threat impact the others? This paper aims to answer these important questions and proposes techniques for quantifiable assurance by quantitatively estimating and measuring the security of a platform at the pre-silicon stages. We also touch upon the notion of security optimization and present challenges for future research directions.
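
    To make the idea of platform-level, cross-threat scoring concrete, here is a deliberately toy sketch. Everything in it (the scores, the weakest-link composition, the interaction table) is a hypothetical modelling choice for illustration, not the paper's metric.

```python
# Toy platform-level security aggregation (hypothetical; not the paper's
# metric). Per-IP scores are composed per threat, and a cross-threat
# interaction table models mitigations of one threat weakening another.

# Per-IP, per-threat security scores in [0, 1]; higher is better (made up).
ip_scores = {
    "cpu":    {"fault_injection": 0.8, "side_channel": 0.6},
    "crypto": {"fault_injection": 0.5, "side_channel": 0.9},
}

# Mitigating the first threat shifts the platform score of the second,
# e.g. redundancy against fault injection may add side-channel leakage.
interaction = {("fault_injection", "side_channel"): -0.1}

def platform_score(ip_scores, interaction):
    threats = sorted({t for scores in ip_scores.values() for t in scores})
    result = {}
    for t in threats:
        # Weakest-link composition: the platform is only as secure as its
        # most exposed IP for this threat (one possible modelling choice).
        base = min(scores[t] for scores in ip_scores.values())
        shift = sum(d for (_, target), d in interaction.items() if target == t)
        result[t] = max(0.0, min(1.0, base + shift))
    return result

print(platform_score(ip_scores, interaction))
# {'fault_injection': 0.5, 'side_channel': 0.5}
```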

    Innovative Techniques for Testing and Diagnosing SoCs

    We rely upon the continued functioning of many electronic devices for our everyday welfare, most of them embedding integrated circuits that keep becoming cheaper and smaller while offering improved features. Nowadays, microelectronics can integrate a working computer with CPU, memories, and even GPUs on a single die, namely a System-on-Chip (SoC). SoCs are also employed in automotive safety-critical applications, but they need to be tested thoroughly to comply with reliability standards, in particular ISO 26262, the functional safety standard for road vehicles. The goal of this PhD thesis is to improve SoC reliability by proposing innovative techniques for testing and diagnosing its internal modules: CPUs, memories, peripherals, and GPUs. The proposed approaches, in the order they appear in this thesis, are as follows:
    1. Embedded Memory Diagnosis: Memories are dense and complex circuits that are susceptible to design and manufacturing errors, so it is important to understand how faults occur in the memory array. In practice, the logical and physical array representations differ because of design optimizations, known as scrambling. This part proposes accurate memory diagnosis through a software tool able to analyze test results, unscramble the memory array, map failing syndromes to cell locations, elaborate cumulative analyses, and formulate a final fault model hypothesis. Several SRAM failing syndromes, gathered on an industrial automotive 32-bit SoC developed by STMicroelectronics, were analyzed as case studies. The tool displayed the defects virtually, and the results were confirmed by photos taken with a microscope.
    2. Functional Test Pattern Generation: The key to a successful test is the pattern applied to the device. Patterns can be structural or functional: the former usually rely on embedded test modules targeting manufacturing errors and are effective only before shipping the component to the client; the latter can be applied in mission mode with minimal performance impact, but are penalized by high generation times. Functional test patterns can, moreover, serve different goals in functional mission mode. Part III of this thesis proposes three functional test pattern generation methods for CPU cores embedded in SoCs, each targeting a different test purpose:
    a. Functional stress patterns, suitable for optimizing functional stress during operational-life tests and burn-in screening, for an optimal device reliability characterization.
    b. Functional power-hungry patterns, suitable for determining the functional peak power used to strictly limit the power of structural patterns during manufacturing tests, thus reducing premature device over-kill while delivering high test coverage.
    c. Software-Based Self-Test (SBST) patterns, which combine the potential of structural patterns with functional ones, allowing periodic execution during mission. In addition, external hardware communicating with a devised SBST was proposed; it increases fault coverage by 3% by testing critical hardly functionally testable faults not covered by conventional SBST patterns.
    An automatic functional test pattern generator exploiting an evolutionary algorithm that maximizes metrics related to stress, power, and fault coverage was employed in the above approaches to quickly generate the desired patterns. The approaches were evaluated on two industrial cases developed by STMicroelectronics: an 8051-based SoC and a 32-bit Power Architecture SoC. Results show that generation time was reduced by up to 75% compared to older methodologies, while the desired metrics increased significantly.
    3. Fault Injection in GPGPUs: Fault injection mechanisms in semiconductor devices are suitable for generating structural patterns, testing and activating mitigation techniques, and validating robust hardware and software applications. GPGPUs are known for fast parallel computation and are used in high-performance computing and advanced driver assistance, where reliability is the key point. Moreover, GPGPU manufacturers do not disclose their design description code due to content secrecy, so commercial fault injectors relying on a GPGPU model are unfeasible, leaving costly radiation tests as the only available resource. In the last part of this thesis, we propose a software-implemented fault injector able to inject bit-flips into memory elements of a real GPGPU. It exploits a software debugger tool combined with the C-CUDA grammar to determine fault spots and apply bit-flip operations to program variables. The goal is to validate robust parallel algorithms by studying fault propagation or by activating the redundancy mechanisms they may embed. The effectiveness of the tool was evaluated on two robust applications: redundant parallel matrix multiplication and a floating-point Fast Fourier Transform.
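
    Since the thesis's GPGPU fault injector works by flipping bits in memory elements through a software debugger, the sketch below emulates just that core mechanism on the CPU with NumPy: flip one bit of a result matrix and let a duplicated (redundant) computation detect the mismatch. It illustrates the principle only; the thesis tool operates on real GPGPU state via a debugger.

```python
import random
import numpy as np

# Emulated single-bit-flip fault injection (illustrative; the thesis tool
# injects into real GPGPU memory elements through a software debugger).

def flip_bit(arr, index, bit):
    """Flip one bit of a float32 element by toggling its raw 32-bit view."""
    raw = arr.view(np.uint32)
    raw[index] ^= np.uint32(1) << np.uint32(bit)

a = np.random.rand(4, 4).astype(np.float32)
b = np.random.rand(4, 4).astype(np.float32)

c1 = a @ b                      # primary computation
c2 = a @ b                      # redundant copy for comparison

# Inject a random bit-flip into the primary result.
flip_bit(c1.reshape(-1), random.randrange(c1.size), random.randrange(32))

# Duplication-with-comparison detects the corrupted element.
print("fault detected:", not np.array_equal(c1, c2))
```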

    Identification and Reduction of Scattered Light Noise in LIGO

    We ushered in a new era of gravitational wave astronomy in 2015, when the Advanced LIGO gravitational wave detectors in Livingston, Louisiana and Hanford, Washington observed a gravitational wave signal from the merger of binary black holes. The first detection, GW150914, was part of the first Observing run (O1), and there have since been a total of three Observing runs. The Advanced Virgo detector in Cascina, Italy joined the efforts in the third Observing run (O3), which spanned April 1, 2019 to March 27, 2020. The run was split into O3a and O3b, with a month-long break between them during October 2019 for commissioning upgrades. The first half of the run, O3a, from April 1, 2019 to October 1, 2019, resulted in the detection of 39 gravitational-wave events. Gravitational wave data quality is hurt by environmental or instrumental noise artifacts in the data. These short-duration noise transients can mask or mimic a gravitational wave, so identifying transient noise couplings, which may lead to a reduced rate of noise, is of primary concern. This dissertation focuses on my work during O3 on identifying and reducing noise transients associated with scattered light in the detector. Light scattering adversely affects LIGO data quality and is linked to multiple retractions of gravitational wave signals. The noise impacts the detector sensitivity in the 10-150 Hz frequency band critical to the discovery of collisions of compact objects, especially heavier black holes. The rate of scattered light noise is correlated with increased ground motion near the detectors. During O3, two different populations of transients due to light scattering were observed: Slow Scattering and Fast Scattering. In this dissertation, I document the research that led to the identification of Slow Scattering noise couplings in the detector, which was followed by instrument hardware changes resulting in noise mitigation. The dissertation also discusses transient noise data quality studies I performed during and after O3; these studies shed light on environmental and instrumental correlations with the transient noise in the detector. Improved noise characterization is a significant step toward recognizing noise couplings in the detector and consequently reducing them, one of the main objectives of detector characterization. Finally, I examine the importance of Machine Learning (ML) in gravitational-wave data analysis and discuss my work on training an ML algorithm to identify Fast Scattering noise in the data. I also discuss how this identification led to an improved understanding of Fast Scattering noise and its dependence on ground motion in two different frequency bands.
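
    The physics behind scattering noise admits a compact predictor that is standard in LIGO detector characterization: light back-scattered off a surface moving with velocity v(t) produces noise at the fringe frequency f(t) = 2N|v(t)|/lambda, where lambda is the laser wavelength and N the number of bounces. The sketch below evaluates this relation for an illustrative ground-motion velocity; the numbers are assumptions, not values from the dissertation.

```python
import numpy as np

# Predicted scattered-light fringe frequency: f(t) = 2 * N * |v(t)| / lambda.
# Standard relation in LIGO scattering studies; parameters here are
# illustrative, not taken from this dissertation.

LAMBDA = 1.064e-6          # Nd:YAG laser wavelength [m]

def fringe_frequency(velocity, n_bounces=1):
    """velocity: array of scatterer velocities [m/s]; returns f(t) in Hz."""
    return 2.0 * n_bounces * np.abs(velocity) / LAMBDA

# Example: 0.3 Hz ground motion with 5 micrometre displacement amplitude.
t = np.linspace(0.0, 10.0, 1000)
v = 2 * np.pi * 0.3 * 5e-6 * np.cos(2 * np.pi * 0.3 * t)   # d/dt of x(t)
print("peak fringe frequency: %.1f Hz" % fringe_frequency(v).max())
```

    With multiple bounces (N > 1) the predicted frequency scales up accordingly, which is one way scattering arches can reach further into the sensitive band when ground motion increases.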

    Virgo Detector Characterization and Data Quality: results from the O3 run

    The Advanced Virgo detector has contributed with its data to the rapid growth of the number of detected gravitational-wave (GW) signals in the past few years, alongside the two Advanced LIGO instruments. First during the last month of the Observation Run 2 (O2) in August 2017 (with, most notably, the compact binary mergers GW170814 and GW170817), and then during the full Observation Run 3 (O3): an 11-month data-taking period, between April 2019 and March 2020, that led to the addition of about 80 events to the catalog of transient GW sources maintained by LIGO, Virgo and now KAGRA. These discoveries and the manifold exploitation of the detected waveforms require an accurate characterization of the quality of the data, such as the continuous study and monitoring of the detector noise sources. These activities, collectively named detector characterization and data quality, or DetChar, span the whole workflow of the Virgo data, from the instrument front-end hardware to the final analyses. They are described in detail in the following article, with a focus on the results achieved by the Virgo DetChar group during the O3 run. Concurrently, a companion article describes the tools that have been used by the Virgo DetChar group to perform this work.
    Comment: 57 pages, 18 figures. To be submitted to Class. and Quantum Grav. This is the "Results" part of preprint arXiv:2205.01555 [gr-qc], which has been split into two companion articles: one about the tools and methods, the other about the analyses of the O3 Virgo data.

    Virgo Detector Characterization and Data Quality during the O3 run

    The Advanced Virgo detector has contributed with its data to the rapid growth of the number of detected gravitational-wave signals in the past few years, alongside the two LIGO instruments. First, during the last month of the Observation Run 2 (O2) in August 2017 (with, most notably, the compact binary mergers GW170814 and GW170817), and then during the full Observation Run 3 (O3): an 11-month data-taking period, between April 2019 and March 2020, that led to the addition of about 80 events to the catalog of transient gravitational-wave sources maintained by LIGO, Virgo and KAGRA. These discoveries and the manifold exploitation of the detected waveforms require an accurate characterization of the quality of the data, such as the continuous study and monitoring of the detector noise. These activities, collectively named detector characterization, or DetChar, span the whole workflow of the Virgo data, from the instrument front-end to the final analysis. They are described in detail in the following article, with a focus on the associated tools, the results achieved by the Virgo DetChar group during the O3 run, and the main prospects for future data-taking periods with an improved detector.
    Comment: 86 pages, 33 figures. This paper has been divided into two articles which supersede it and were posted to arXiv in October 2022. Please use these new preprints as references: arXiv:2210.15634 (tools and methods) and arXiv:2210.15633 (results from the O3 run).

    Virgo detector characterization and data quality: results from the O3 run

    Get PDF
    The Advanced Virgo detector has contributed with its data to the rapid growth of the number of detected GW signals in the past few years, alongside the two Advanced LIGO instruments. First during the last month of the Observation Run 2 (O2) in August 2017 (with, most notably, the compact binary mergers GW170814 and GW170817), and then during the full Observation Run 3 (O3): an 11-month data-taking period, between April 2019 and March 2020, that led to the addition of 79 events to the catalog of transient GW sources maintained by LIGO, Virgo and now KAGRA. These discoveries and the manifold exploitation of the detected waveforms benefit from an accurate characterization of the quality of the data, such as the continuous study and monitoring of the detector noise sources. These activities, collectively named detector characterization and data quality, or DetChar, span the whole workflow of the Virgo data, from the instrument front-end hardware to the final analyses. They are described in detail in the following article, with a focus on the results achieved by the Virgo DetChar group during the O3 run. Concurrently, a companion article describes the tools that have been used by the Virgo DetChar group to perform this work.
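
    Much of DetChar work rests on time-frequency inspection of the strain data to spot transient noise (glitches). The generic sketch below uses SciPy to build a spectrogram of a simulated time series with an injected glitch and flags times of excess band-limited power; it illustrates the idea only, not the dedicated Virgo DetChar tooling (e.g. Omega scans), and all numbers are assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

# Generic time-frequency glitch inspection (illustrative only; not the
# Virgo DetChar pipelines). Flag times with excess band-limited power.

fs = 4096                                    # sample rate [Hz]
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(0)
data = rng.normal(size=t.size)               # stand-in for whitened strain
# Inject a short sine-Gaussian "glitch" at t = 4 s, centred at 60 Hz.
data += 5 * np.exp(-((t - 4) / 0.1) ** 2) * np.sin(2 * np.pi * 60 * t)

f, seg_t, sxx = spectrogram(data, fs=fs, nperseg=fs // 4)
band = (f > 10) & (f < 150)                  # band most affected by scattering
power = sxx[band].sum(axis=0)
threshold = power.mean() + 5 * power.std()
print("excess power near t =", seg_t[power > threshold])
```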

    Delay Measurements and Self Characterisation on FPGAs

    This thesis examines new timing measurement methods for self delay characterisation of Field-Programmable Gate Array (FPGA) components and for delay measurement of complex circuits on FPGAs. Two novel measurement techniques, based on analysis of a circuit's output failure rate and of its transition probability, are proposed for accurate, precise and efficient measurement of propagation delays. The transition-probability-based method is especially attractive, since it requires no modification of the circuit under test and uses little hardware resource, making it an ideal method for physical delay analysis of FPGA circuits. Relentless advancements in process technology have led to smaller and denser transistors in integrated circuits. While FPGA users benefit from this in terms of increased hardware resources for more complex designs, actual FPGA productivity in terms of timing performance (operating frequency, latency and throughput) has lagged behind the potential improvements of the technology, due to delay variability in FPGA components and the inaccuracy of the timing models used in FPGA timing analysis. The ability to measure the delay of any arbitrary circuit on an FPGA offers many opportunities for on-chip characterisation and physical timing analysis, allowing delay variability to be accurately tracked and variation-aware optimisations to be developed, reducing the productivity gap observed in today's FPGA designs. The measurement techniques are developed into complete self measurement and characterisation platforms in this thesis, demonstrating their practical use on actual FPGA hardware for cross-chip delay characterisation and accurate delay measurement of both complex combinatorial and sequential circuits, further reinforcing their role in solving the delay variability problem in FPGAs.
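
    The failure-rate method lends itself to a compact illustration: sweep the sampling clock period across the propagation delay of the circuit under test, record how often the output is sampled before it settles, and read the delay off the failure-rate curve. The sketch below does this on a simulated circuit; the delay and jitter values are made-up model parameters, and this is not the thesis implementation.

```python
import numpy as np

# Illustrative model of failure-rate-based delay measurement: the output
# "arrives" after a jittery propagation delay, and a sample taken before
# arrival counts as a failure. Parameters are assumptions, not thesis data.

def failure_rate(period_ns, true_delay_ns=2.50, jitter_ns=0.02,
                 trials=10_000, rng=np.random.default_rng(1)):
    arrival = true_delay_ns + jitter_ns * rng.normal(size=trials)
    return np.mean(arrival > period_ns)     # sampled too early -> failure

periods = np.arange(2.40, 2.60, 0.005)      # swept clock periods [ns]
rates = np.array([failure_rate(p) for p in periods])

# Estimate the propagation delay as the period giving a 50% failure rate.
estimate = periods[np.argmin(np.abs(rates - 0.5))]
print("estimated propagation delay: %.3f ns" % estimate)
```

    The thesis's transition-probability method exploits a similar measurement principle but, as noted above, requires no modification of the circuit under test.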