3,129 research outputs found

    Effect of Clock and Power Gating on Power Distribution Network Noise in 2D and 3D Integrated Circuits

    In this work, the power-supply-noise contribution at a particular node of the power grid from clock- and power-gated blocks is maximized at a specified target time, and the synthetic gating patterns of the blocks that produce the maximum noise are obtained for the interval from 0 to the target time. We use wavelet-based analysis, as wavelets are a natural way of characterizing the time-frequency behavior of the power grid. The gating patterns for the blocks and the maximum supply noise at the point of interest at the target time are obtained via a Linear Programming (LP) formulation for clock gating and a Genetic Algorithm-based formulation for power gating.
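
    A minimal sketch of the worst-case-noise idea. The coefficients, the single cardinality-style constraint, and the function name are illustrative assumptions, not the paper's formulation: if each block b contributes noise_coeff[b] to the point of interest when it toggles, and at most k blocks may toggle in the interval, the LP relaxation with box constraints 0 <= x_b <= 1 and sum(x_b) <= k is solved exactly by greedily taking the k largest positive coefficients.

```python
# Hypothetical per-block noise contributions at the point of interest;
# NOT the wavelet-based coefficients from the paper.
def worst_case_noise(noise_coeff, k):
    """Maximize sum(c_b * x_b) s.t. 0 <= x_b <= 1 and sum(x_b) <= k.

    With only box and one budget constraint, the LP optimum sets x_b = 1
    for the k largest positive coefficients and 0 elsewhere.
    """
    gains = sorted((c for c in noise_coeff if c > 0), reverse=True)
    return sum(gains[:k])

# Example: five gated blocks with assumed noise contributions (mV);
# at most two blocks may switch in the same interval.
print(worst_case_noise([3.0, -1.0, 2.5, 0.7, 4.2], k=2))  # 7.2
```

A Genetic Algorithm, as used for the power-gating case, would search the same binary gating space when the objective is no longer linear.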

    Quantifying the relationship between the power delivery network and architectural policies in a 3D-stacked memory device

    Many of the pins on a modern chip are used for power delivery. If fewer pins were used to supply the same current, the wires and pins used for power delivery would have to carry larger currents over longer distances. This results in an "IR-drop" problem, where some of the voltage is dropped across the long resistive wires making up the power delivery network, and the eventual circuits experience fluctuations in their supplied voltage. The same problem also manifests if the pin count is the same, but the current draw is higher. IR-drop can be especially problematic in 3D DRAM devices because (i) low cost (few pins and TSVs) is a high priority, (ii) 3D-stacking increases current draw within the package without providing proportionate room for more pins, and (iii) TSVs add to the resistance of the power delivery network. This paper is the first to characterize the relationship between the power delivery network and the maximum supported activity in a 3D-stacked DRAM memory device. The design of the power delivery network determines if some banks can handle less activity than others. It also determines the combinations of bank activities that are permissible. Both of these attributes can feed into architectural policies. For example, if some banks can handle more activity than others, the architecture benefits by placing data from high-priority threads or data from frequently accessed pages into those banks. The memory controller can also derive higher performance if it schedules requests to specific combinations of banks that do not violate the IR-drop constraint.
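
    A back-of-the-envelope sketch of the pin-count argument above, with purely illustrative numbers (the supply voltage, current, and per-pin resistance are assumptions, not values from the paper): the droop across a resistive delivery path is V = I * R, so halving the pin count roughly doubles the per-pin current and hence the drop.

```python
def ir_drop(total_current_a, n_pins, r_per_pin_ohm):
    """Voltage drop when total_current_a is shared equally across n_pins,
    each with series resistance r_per_pin_ohm (simple V = I * R model)."""
    return (total_current_a / n_pins) * r_per_pin_ohm

supply = 1.2  # assumed nominal rail voltage (V)

# Same 10 A package draw, same per-pin resistance, half the power pins:
drop_many = ir_drop(10.0, n_pins=40, r_per_pin_ohm=0.2)  # ~0.05 V droop
drop_few = ir_drop(10.0, n_pins=20, r_per_pin_ohm=0.2)   # ~0.10 V droop
print(drop_many, drop_few)
```

The same doubling occurs if the pin count stays fixed but the stacked dies double the current draw, which is the 3D-DRAM case the paper characterizes.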

    Safety-aware Semi-end-to-end Coordinated Decision Model for Voltage Regulation in Active Distribution Network

    Prediction plays a vital role in active distribution network voltage regulation under high penetration of photovoltaics. Current prediction models aim at minimizing individual prediction errors but overlook their collective impact on downstream decision-making. Hence, this paper proposes a safety-aware semi-end-to-end coordinated decision model to bridge the gap from the downstream voltage regulation to the upstream multiple prediction models in a coordinated, differentiable way. The semi-end-to-end model maps the input features to the optimal var decisions via prediction, decision-making, and decision-evaluating layers. It leverages a neural network and a second-order cone program (SOCP) to formulate the stochastic PV/load predictions and the var decision-making/evaluating, respectively. The var decision quality is then evaluated via the weighted sum of the power loss (for economy) and the voltage-violation penalty (for safety), denoted the regulation loss. Based on the regulation loss and the prediction errors, this paper proposes a hybrid loss and a hybrid stochastic gradient descent algorithm to back-propagate the gradients of the hybrid loss with respect to the multiple predictions, enhancing decision quality. Case studies verify the effectiveness of the proposed model, with lower power loss for economy and a lower voltage-violation rate for safety awareness.
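
    A minimal sketch of the hybrid-loss structure described above. The weights, the blending factor, and the function names are assumptions for illustration; the actual model evaluates decisions through an SOCP layer rather than scalar inputs. The point is only that the training objective blends prediction error with downstream regulation loss, so decision quality influences the predictor's gradients.

```python
def regulation_loss(power_loss, violation, w_econ=1.0, w_safe=10.0):
    """Weighted sum of power loss (economy) and voltage-violation
    penalty (safety); weights are assumed, not the paper's."""
    return w_econ * power_loss + w_safe * violation

def hybrid_loss(pred_error, power_loss, violation, alpha=0.5):
    """Blend prediction error with the regulation loss so that
    back-propagation sees both terms."""
    return alpha * pred_error + (1 - alpha) * regulation_loss(power_loss, violation)

# Illustrative values: 4% prediction error, 0.30 p.u. power loss,
# 0.01 p.u. voltage violation.
print(hybrid_loss(pred_error=0.04, power_loss=0.30, violation=0.01))  # 0.22
```

In the paper this scalar blend is back-propagated through the SOCP decision layer to multiple upstream predictors via the hybrid stochastic gradient descent algorithm.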

    Revamping Timing Error Resilience to Tackle Choke Points at NTC

    The growing market of portable devices and smart wearables has driven innovation and development of systems with longer battery life. While Near-Threshold Computing (NTC) systems address the need for longer battery life, they have certain limitations. NTC systems are significantly affected by variations in the fabrication process, commonly called process variation (PV). This dissertation explores an intriguing effect of PV called choke points. Choke points are especially important due to their multifarious influence on the functional correctness of an NTC system. This work shows why novel research is required in this direction and proposes two techniques to resolve the problems created by choke points while maintaining NTC's reduced power needs.

    Scalable Bilevel Optimization for Generating Maximally Representative OPF Datasets

    New generations of power systems, containing high shares of renewable energy resources, require improved data-driven tools which can swiftly adapt to changes in system operation. Many of these tools, such as ones using machine learning, rely on high-quality training datasets to construct probabilistic models. Such models should be able to accurately represent the system when operating at its limits (i.e., with a high degree of "active constraints"). However, generating training datasets that accurately represent the many possible combinations of these active constraints is a particularly challenging task, especially within the realm of nonlinear AC Optimal Power Flow (OPF), since most active constraints cannot be enforced explicitly. Using bilevel optimization, this paper introduces a data collection routine that sequentially solves for OPF solutions which are "optimally far" from previously acquired voltage, power, and load profile data points. The routine, termed RAMBO, samples critical data close to a system's boundaries much more effectively than a random sampling benchmark. Simulated test results are collected on the 30-, 57-, and 118-bus PGLib test cases.
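
    A greedy stand-in for the "optimally far" selection step, under simplifying assumptions: the paper solves a bilevel optimization over the OPF feasible set, whereas this sketch merely picks, from a finite candidate pool, the point with the maximum minimum Euclidean distance to all previously collected operating points. The data values and function name are illustrative.

```python
import math

def farthest_candidate(collected, candidates):
    """Return the candidate whose minimum distance to every previously
    collected point is largest (max-min distance selection)."""
    def min_dist(p):
        return min(math.dist(p, q) for q in collected)
    return max(candidates, key=min_dist)

# Toy 2-D stand-ins for voltage/power/load profile points.
collected = [(1.0, 0.0), (0.0, 1.0)]             # previously acquired data
candidates = [(0.5, 0.5), (2.0, 2.0), (1.1, 0.1)]
print(farthest_candidate(collected, candidates))  # (2.0, 2.0)
```

RAMBO's bilevel formulation effectively performs this maximization over the continuous OPF solution space itself, which is why it reaches boundary regions that random sampling rarely hits.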

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches previously introduced for VLSI design and manufacturing. Moreover, we discuss the scope of AI/ML applications at various abstraction levels in the future to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.