777 research outputs found

    Scalable Analysis, Verification and Design of IC Power Delivery

    Due to recent aggressive process scaling into the nanometer regime, power delivery network design faces many challenges that impose more stringent and specific requirements on EDA tools. For example, from the analysis perspective, simulation efficiency for large grids must be improved, and the entire network, including off-chip models and nonlinear devices, must be analyzable. Gated power delivery networks have multiple on/off operating conditions that need to be fully verified against the design requirements. A good power delivery network design must not only conserve wiring resources for signal routing but also assign optimal parameters to various system components such as decoupling capacitors, voltage regulators, and converters. This dissertation presents new methodologies to address these challenging problems. First, a novel parallel partitioning-based approach, which provides a flexible network partitioning scheme based on locality, is proposed for power grid static analysis. In addition, a fast combined CPU-GPU analysis engine that adopts a boundary-relaxation method to encompass several simulation strategies is developed to simulate power delivery networks with off-chip models and active circuits. These two analysis approaches achieve scalable simulation runtime. Then, for gated power delivery networks, the challenge posed by the large verification space is addressed by developing a strategy that efficiently identifies a small number of candidates for the worst-case operating condition, reducing the computational complexity from O(2^N) to O(N). Finally, motivated by a proposed two-level hierarchical optimization, this dissertation presents a novel locality-driven partitioning scheme to facilitate divide-and-conquer-based scalable wire sizing for large power delivery networks; simultaneous sizing of multiple partitions is allowed, which leads to substantial runtime improvement. Moreover, the electrical interactions between active regulators/converters and passive networks, and their influence on key system design specifications, are analyzed comprehensively. With the derived design insights, the system-level co-design of a complete power delivery network is facilitated by an automatic optimization flow. Results show significant performance enhancement brought by the co-design.
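
    The abstract does not detail how the worst-case candidates are selected, but the O(2^N)-to-O(N) reduction it cites can be illustrated with a minimal sketch: rather than simulating every on/off combination of N power-gated blocks, each block is toggled once against a baseline state and only the N single-toggle scenarios are ranked as candidates for detailed verification. All names below (worst_case_candidates, evaluate_drop) are hypothetical placeholders, not the dissertation's algorithm.

```python
# Illustrative sketch only: ranks N single-block-toggle scenarios instead of
# enumerating all 2**N on/off combinations of a gated power grid.
# `evaluate_drop` is a hypothetical stand-in for a full grid simulation.

from typing import Callable, List, Sequence, Tuple

def worst_case_candidates(
    n_blocks: int,
    evaluate_drop: Callable[[Sequence[bool]], float],
    baseline_on: bool = True,
    top_k: int = 4,
) -> List[Tuple[float, Tuple[bool, ...]]]:
    """Return the top_k single-toggle operating conditions by estimated drop.

    Requires O(N) grid evaluations instead of O(2**N).
    """
    candidates = []
    for i in range(n_blocks):
        state = [baseline_on] * n_blocks
        state[i] = not baseline_on          # toggle exactly one block
        drop = evaluate_drop(state)         # hypothetical grid solve for this condition
        candidates.append((drop, tuple(state)))
    candidates.sort(reverse=True)           # largest estimated drop first
    return candidates[:top_k]
```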

    Design and Analysis of Power Distribution Networks in VLSI Circuits.

    Rapidly switching currents of the on-chip devices can cause fluctuations in the supply voltage, which can be classified as IR and Ldi/dt drops. The voltage fluctuations in a supply network can inject noise into a circuit, which may lead to functional failures of the design. Power supply integrity verification is, therefore, a critical concern in high-performance designs. Also, with decreasing supply voltages, gate delay is becoming increasingly sensitive to supply voltage variation. With ever-diminishing clock periods, accurate analysis of the impact of supply voltage on circuit performance has also become critical. Increasing power consumption and clock frequency have exacerbated the Ldi/dt drop in every new technology generation. The Ldi/dt drop has become the dominant portion of the overall supply drop in high-performance designs. On-die passive decap, which has traditionally been used for suppressing Ldi/dt, has become expensive due to its area and leakage power overhead. This has created an urgent need for novel circuit techniques to suppress the Ldi/dt drop in power distribution networks. We provide accurate algorithmic solutions for determining the worst-case supply drop and the impact of supply noise on circuit performance. We propose a path-based and a block-based approach for computing the maximum circuit delay under power supply fluctuations. We also propose an early-mode supply-drop estimation approach and a statistical approach for power grid analysis. All the proposed approaches are vectorless and account for both IR and Ldi/dt drops. We also propose a performance-aware decoupling capacitance allocation technique which uses timing slacks to drive the optimization. Finally, we present analog as well as all-digital circuit techniques for inductive supply noise suppression. The proposed all-digital circuit techniques were implemented in a test chip fabricated in a 0.13µm CMOS process. Measurements on the test chip demonstrate a reduction in the supply fluctuations by 57% for ramp loads and by 75% during resonance. We also present a low-power, all-digital on-chip oscilloscope for accurate measurement of supply noise. Supply noise measurements obtained from the on-chip oscilloscope were validated to conform well to those obtained from a traditional supply-drop monitor and direct on-chip probing. Ph.D. Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/58508/1/spant_1.pd
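
    The abstract mentions vectorless analysis that accounts for IR drop; one common formulation of that idea (not necessarily the thesis's exact method) bounds the worst-case static drop at a node with a small linear program over constrained, pattern-independent load currents. The grid values below are invented for illustration.

```python
# Minimal vectorless IR-drop sketch (illustrative; not the thesis's exact method).
# Worst-case drop at one node = max over feasible load currents of row_k(G^-1) @ i,
# subject to simple per-node and global current budgets.

import numpy as np
from scipy.optimize import linprog

# Tiny 3-node resistive grid: G is the conductance matrix (made-up values, siemens).
G = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  2.0]])
R = np.linalg.inv(G)                 # node drop v = R @ i for load current vector i

i_max = np.array([0.5, 0.8, 0.6])    # assumed per-node current upper bounds (A)
i_total = 1.2                        # assumed global current budget (A)

node = 1                             # node whose worst-case drop we bound
# linprog minimizes, so negate the objective to maximize R[node] @ i.
res = linprog(c=-R[node],
              A_ub=np.ones((1, 3)), b_ub=[i_total],
              bounds=list(zip(np.zeros(3), i_max)))
print(f"worst-case IR drop at node {node}: {-res.fun:.4f} V")
```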

    Parametric Yield of VLSI Systems under Variability: Analysis and Design Solutions

    Variability has become one of the vital challenges that designers of integrated circuits encounter. The imperfect manufacturing process manifests itself as variations in the design parameters. These variations, together with those in the operating environment of VLSI circuits, result in unexpected changes in the timing, power, and reliability of the circuits. With scaling transistor dimensions, process and environmental variations have become significantly important in modern VLSI design. A smaller feature size means that the physical characteristics of a device are more prone to these unaccounted-for changes. To achieve a robust design, the random and systematic fluctuations in the manufacturing process and the variations in the environmental parameters should be analyzed, and their impact on the parametric yield should be addressed. This thesis studies the challenges of, and offers solutions for, designing robust VLSI systems in the presence of variations. Initially, to gain insight into system design under variability, the parametric yield is examined for a small circuit. Understanding the impact of variations on the yield at the circuit level is vital to accurately estimating and optimizing the yield at the system granularity. Motivated by the observations and results found at the circuit level, statistical analyses are performed, and solutions are proposed, at the system level of abstraction to reduce the impact of the variations and increase the parametric yield. At the circuit level, the impact of the supply and threshold voltage variations on the parametric yield is discussed. Here, a design centering methodology is proposed to maximize the parametric yield and optimize the power-performance trade-off under variations. In addition, the scaling trend in the yield loss is studied, and some considerations for design centering in current and future CMOS technologies are explored. The investigation at the circuit level suggests that the operating temperature significantly affects the parametric yield, and that the yield is very sensitive to the magnitude of the variations in supply and threshold voltage. The spatial nature of process and environmental variations therefore makes it necessary to analyze the yield at a higher granularity. Here, temperature and voltage variations are mapped across the chip to accurately estimate the yield loss at the system level. At the system level, the impact of process-induced temperature variations on the power grid design is first analyzed, and an efficient verification method is provided that ensures the robustness of the power grid in the presence of variations. Then, a statistical analysis of the timing yield is conducted, taking into account both the process and environmental variations. By considering the statistical profile of the temperature and supply voltage, the process variations are mapped to delay variations across a die, ensuring an accurate estimation of the timing yield. In addition, a method is proposed to accurately estimate the power yield considering process-induced temperature and supply voltage variations; this helps check the robustness of circuits early in the design process. Lastly, design solutions are presented to reduce the power consumption and increase the timing yield under the variations. In the first solution, a guideline for floorplanning optimization in the presence of temperature variations is offered. Non-uniformity in the thermal profiles of integrated circuits impacts the parametric yield and threatens chip reliability. Therefore, the correlation between the total power consumption and the temperature variations across a chip is examined, and floorplanning guidelines are proposed that use this correlation to efficiently optimize the chip's total power while taking thermal uniformity into account. The second design solution provides an optimization methodology for assigning the power supply pads across the chip to maximize the timing yield. A mixed-integer nonlinear programming (MINLP) optimization problem, subject to voltage drop and current constraints, is efficiently solved to find the optimum number and locations of the pads.
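
    As a rough illustration of the circuit-level yield question discussed above (not the thesis's actual models or data), parametric yield under supply and threshold voltage variation can be estimated by Monte Carlo sampling with a simple alpha-power delay model; every parameter below is assumed.

```python
# Illustrative Monte Carlo parametric-yield sketch under Vdd/Vth variation.
# Uses an alpha-power-law delay model; all parameters are invented, not from the thesis.

import numpy as np

rng = np.random.default_rng(0)
N = 100_000

vdd = rng.normal(1.0, 0.03, N)       # supply voltage samples (V)
vth = rng.normal(0.30, 0.02, N)      # threshold voltage samples (V)
alpha = 1.3                          # assumed velocity-saturation exponent

delay = vdd / np.maximum(vdd - vth, 1e-3) ** alpha     # relative gate delay
power = vdd ** 2                                       # relative dynamic power

delay_spec = np.quantile(delay, 0.5) * 1.10            # 10% timing margin over nominal
power_spec = np.quantile(power, 0.5) * 1.15            # 15% power margin over nominal

# A die "yields" only if it meets both the timing and the power specification.
yield_est = np.mean((delay <= delay_spec) & (power <= power_spec))
print(f"estimated parametric yield: {yield_est:.3f}")
```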

    CubeSat Data Transmission and Storage Throughput Optimization Through the Use of a Zynq SoC Based CubeSat Science Instrument Interface Electronics Board

    The CubeSat standard sprang from the desire to create a satellite standard that would open the doors for universities and other lower-budget research institutions by making it more feasible to get their work into space. Since then, many other institutions and industries have adopted variations on the standard for their own use. As more people seek to use the CubeSat standard as their main bus, the standards and practices of the community have grown and expanded, and with this growth new challenges have been created. One such challenge is the bandwidth limitation of the RF downlink. Even when carrying payloads with what might seem to be a relatively small science-data bandwidth requirement (on the order of thousands of bps), the RF link to ground is overloaded. Many approaches have been put forth in the past to help alleviate this issue; unfortunately, none have been fully adopted. This paper presents a solution that takes advantage of new technology yet to be fully exploited in space applications. The key to the solution lies in removing the bandwidth bottleneck by enabling onboard post-processing and compression of the data. To meet the high computational needs while minimizing power consumption, a Xilinx Zynq-7000 SoC is used, creating a highly programmable, open integration device. This report outlines the design, fabrication, and testing of this solution. The completion of the Zynq Processing System CubeSat Science Instrument Interface Electronics Board (or ZPS-Board) ultimately demonstrates the feasibility of this solution. Additionally, this research is funded by NASA's JPL, with the secondary motive of creating a space-application product based on the Zynq-7000 SoC. Upon successful completion of the ZPS-Board, the product creates a platform for JPL to perform environmental testing in order to study the effects and performance characteristics of the Zynq in space applications.
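
    The bandwidth argument can be made concrete with back-of-the-envelope numbers (all values assumed, not taken from the ZPS-Board project): a payload generating a few kilobits per second of science data quickly outpaces what a handful of short ground-station passes can return, which is why onboard processing and compression are attractive.

```python
# Back-of-the-envelope CubeSat downlink budget (illustrative numbers only).

raw_rate_bps = 4_000            # assumed payload science-data generation rate
downlink_bps = 9_600            # assumed downlink data rate
pass_seconds = 8 * 60           # assumed usable contact time per ground-station pass
passes_per_day = 4              # assumed passes per day

generated_per_day = raw_rate_bps * 86_400
downlinked_per_day = downlink_bps * pass_seconds * passes_per_day

print(f"generated : {generated_per_day / 8e6:6.1f} MB/day")
print(f"downlinked: {downlinked_per_day / 8e6:6.1f} MB/day")

# Onboard processing/compression must close the gap between the two:
required_reduction = generated_per_day / downlinked_per_day
print(f"required onboard data reduction: {required_reduction:.1f}x")
```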

    Structured, technology independent VLSI design

    Journal Article. Rapid advancement in new semiconductor technologies has created a need to redesign existing integrated circuits in these new technologies, which are expected to provide improved performance, smaller feature sizes, and lower costs. The conversion of an integrated circuit from an existing technology to a new technology, however, is very difficult with existing CAD tools. In this research, we have concentrated on developing a structured, technology-independent VLSI design methodology, with the goal of theoretically quantifying technology independence and systematically performing technology transformation. We have identified the nature of the problems, using techniques developed during our past research, within the context of particular semiconductor technologies such as CMOS and GaAs.

    Architectural Support for High-Performance, Power-Efficient and Secure Multiprocessor Systems

    High-performance systems have been widely adopted in many fields, and the demand for better performance is constantly increasing. The need for powerful yet flexible systems is also growing to meet varying application requirements from diverse domains. In addition, power efficiency in high-performance computing has been one of the major issues to be resolved: the power density of core components has become significantly higher, and the power supply accounts for a dominant fraction of the total management cost. Providing dependability is also a main concern in large-scale systems, since more hardware resources can be abused by attackers. Therefore, designing high-performance, power-efficient, and secure systems is crucial to provide adequate performance as well as reliability to users. Adhering to traditional design methodologies for large-scale computing systems cannot meet these demands under restricted resource budgets. Interconnecting a large number of uniprocessor chips to build parallel processing systems is not an efficient solution in terms of performance and power. A chip multiprocessor (CMP) integrates multiple processing cores and caches on a chip and is considered a good alternative to previous design trends. In this dissertation, we deal with various design issues of high-performance multiprocessor systems based on CMPs to achieve both performance and power efficiency while maintaining security. First, we propose a fast and secure off-chip interconnect that minimizes network overheads and provides an efficient security mechanism. Second, we propose architectural support for fast and efficient memory protection in CMP systems, making the best use of the characteristics of CMP environments and multi-threaded workloads. Third, we propose a new router design for network-on-chip (NoC) based on a new memory technology: hybrid input buffers that use both SRAM and STT-MRAM for better performance as well as power efficiency. Simulation results show that the proposed schemes improve the performance of off-chip networks by reducing the message size by 54% on average. The schemes also diminish the overheads of bounds-checking operations, enhancing the overall performance by 11% on average. Adopting hybrid buffers in NoC routers increases the network throughput by up to 21%.

    Performance and Memory Space Optimizations for Embedded Systems

    Embedded systems share three common requirements: real-time performance, low power consumption, and low cost (limited hardware). Embedded computers use chip multiprocessors (CMPs) to meet these expectations. However, one of the major problems is the lack of efficient software support for CMPs; in particular, automated code parallelizers are needed. The aim of this study is to explore various ways to increase performance, as well as to reduce resource usage and energy consumption, for embedded systems. We use code restructuring, loop scheduling, data transformation, code and data placement, and scratch-pad memory (SPM) management as our tools in different embedded system scenarios. The majority of our work is focused on loop scheduling. The main contributions of our work are as follows. We propose a memory-saving strategy that exploits the value locality in array data by storing arrays in a compressed form; based on the compressed forms of the input arrays, our approach automatically determines the compressed forms of the output arrays and also automatically restructures the code. We propose and evaluate a compiler-directed code scheduling scheme which considers both parallelism and data locality; it analyzes the code using a locality-parallelism graph representation and assigns the nodes of this graph to processors. We also introduce an Integer Linear Programming based formulation of the scheduling problem. We propose a compiler-based, SPM-conscious loop scheduling strategy for array/loop-based embedded applications; the method distributes loop iterations across parallel processors in an SPM-conscious manner, with the compiler identifying potential SPM hits and misses and distributing loop iterations such that the processors have close execution times. We present an SPM management technique using Markov chain based data access. We propose a compiler-directed integrated code and data placement scheme for 2-D mesh based CMP architectures; using a Code-Data Affinity Graph (CDAG) to represent the relationship between loop iterations and array data, it assigns sets of loop iterations to processing cores and sets of data blocks to on-chip memories. Finally, we present a memory-bank-aware dynamic loop scheduling scheme for array-intensive applications whose goal is to minimize the number of memory banks needed for executing the group of loop iterations.
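
    The SPM-conscious loop scheduling idea can be sketched with a hypothetical cost model (this is not the dissertation's compiler algorithm): estimate each iteration chunk's cost from its predicted scratch-pad hits and misses, then assign chunks greedily so that processors finish at roughly the same time.

```python
# Illustrative SPM-conscious loop distribution (not the dissertation's algorithm).
# Each chunk of iterations gets a cost from predicted scratch-pad hits/misses,
# then chunks are assigned greedily to the least-loaded processor.

import heapq

HIT_CYCLES, MISS_CYCLES = 1, 20          # assumed SPM hit/miss latencies

def chunk_cost(hits: int, misses: int) -> int:
    return hits * HIT_CYCLES + misses * MISS_CYCLES

def schedule(chunks, num_procs):
    """chunks: list of (chunk_id, predicted_hits, predicted_misses)."""
    heap = [(0, p) for p in range(num_procs)]      # (current load, processor id)
    assignment = {p: [] for p in range(num_procs)}
    # Longest-processing-time-first greedy balancing.
    for cid, hits, misses in sorted(chunks, key=lambda c: -chunk_cost(c[1], c[2])):
        load, proc = heapq.heappop(heap)           # least-loaded processor so far
        assignment[proc].append(cid)
        heapq.heappush(heap, (load + chunk_cost(hits, misses), proc))
    return assignment

if __name__ == "__main__":
    chunks = [(0, 900, 10), (1, 400, 60), (2, 800, 5), (3, 100, 90)]
    print(schedule(chunks, num_procs=2))
```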

    Network-on-Chip

    Addresses the challenges associated with system-on-chip integration. Network-on-Chip: The Next Generation of System-on-Chip Integration examines the current issues restricting chip-on-chip communication efficiency, and explores network-on-chip (NoC), a promising alternative that equips designers with the capability to produce a scalable, reusable, and high-performance communication backbone by allowing for the integration of a large number of cores on a single system-on-chip (SoC). This book provides a basic overview of topics associated with NoC-based design: communication infrastructure design, communication methodology, evaluation framework, and mapping of applications onto NoC. It details the design and evaluation of different proposed NoC structures, low-power techniques, signal integrity and reliability issues, application mapping, testing, and future trends. Utilizing examples of chips that have been implemented in industry and academia, this text presents the full architectural design of components verified through implementation in industrial CAD tools. It describes NoC research and developments, incorporates theoretical proofs strengthening the analysis procedures, and includes algorithms used in NoC design and synthesis. In addition, it considers other upcoming NoC issues, such as low-power NoC design, signal integrity issues, NoC testing, reconfiguration, synthesis, and 3-D NoC design. This text comprises 12 chapters and covers:
    - The evolution of NoC from SoC, including its research and developmental challenges
    - NoC protocols, elaborating flow control, available network topologies, routing mechanisms, fault tolerance, quality-of-service support, and the design of network interfaces
    - The router design strategies followed in NoCs
    - The evaluation mechanism of NoC architectures
    - The application mapping strategies followed in NoCs
    - Low-power design techniques specifically followed in NoCs
    - The signal integrity and reliability issues of NoC
    - The details of NoC testing strategies reported so far
    - The problem of synthesizing application-specific NoCs
    - Reconfigurable NoC design issues
    - Directions of future research and development in the field of NoC
    Network-on-Chip: The Next Generation of System-on-Chip Integration covers the basic topics, technology, and future trends relevant to NoC-based design, and can be used by engineers, students, researchers, and other industry professionals interested in computer architecture, embedded systems, and parallel/distributed systems.
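
    As a concrete taste of the routing mechanisms surveyed in the book, the sketch below implements plain XY (dimension-order) routing on a 2-D mesh. It is a generic textbook illustration under assumed mesh coordinates, not code taken from the book.

```python
# Generic XY (dimension-order) routing on a 2-D mesh NoC: route fully in the X
# dimension, then in Y. Deadlock-free on a mesh because it never takes a Y->X turn.

from typing import List, Tuple

def xy_route(src: Tuple[int, int], dst: Tuple[int, int]) -> List[str]:
    (x, y), (dx, dy) = src, dst
    hops = []
    while x != dx:                       # resolve the X dimension first
        step = 1 if dx > x else -1
        x += step
        hops.append("E" if step > 0 else "W")
    while y != dy:                       # then the Y dimension
        step = 1 if dy > y else -1
        y += step
        hops.append("N" if step > 0 else "S")
    return hops

print(xy_route((0, 0), (2, 3)))          # ['E', 'E', 'N', 'N', 'N']
```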