
    A market-based transmission planning for HVDC grid—case study of the North Sea

    There is significant interest in building HVDC transmission to carry out transnational power exchange and deliver cheaper electricity from renewable energy sources located far from load centers. This paper presents a market-based approach to solving long-term transmission expansion planning (TEP) for meshed VSC-HVDC grids that connect regional markets. This is in general a nonlinear, non-convex, large-scale optimization problem with a high computational burden, partly due to the many possible combinations of wind and load. We developed a two-step iterative algorithm that first selects a subset of operating hours using a clustering technique, and then seeks to maximize the social welfare of all regions and minimize the investment capital of the transmission infrastructure subject to technical and economic constraints. The outcome of the optimization is an optimal grid design, with a topology and transmission capacities such that congestion revenue pays off the investment by the end of the project's economic lifetime. Approximations are made to allow an analytical solution to the problem and to demonstrate that an HVDC pricing mechanism can be consistent with its AC counterpart. The model is used to investigate the development of the offshore grid in the North Sea. Simulation results are interpreted in economic terms and show the effectiveness of the proposed two-step approach.
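The first step of the two-step algorithm, selecting a representative subset of operating hours by clustering, can be sketched as follows. This is a minimal illustration assuming a plain k-means over (wind, load) points; the paper does not specify which clustering technique it uses, and the data below is synthetic.

```python
import random

def kmeans(points, k, iters=30, seed=0):
    """Cluster 2-D operating points (wind, load); return one
    representative centroid per cluster plus the cluster size, which
    serves as the weight of that representative hour."""
    rng = random.Random(seed)
    centroids = list(rng.sample(points, k))
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each hour to its nearest centroid
            j = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster empties out
                centroids[j] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    weights = [len(cl) for cl in clusters]
    return centroids, weights

# Synthetic year of hourly operating points: (wind capacity factor, load in GW)
rng = random.Random(1)
hours = [(rng.random(), 20 + 10 * rng.random()) for _ in range(8760)]
reps, weights = kmeans(hours, k=8)
```

The optimization in the second step would then be solved over the 8 weighted representative hours instead of all 8760, which is what cuts the computational burden.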

    PowerPlanningDL: Reliability-Aware Framework for On-Chip Power Grid Design using Deep Learning

    With the increase in the complexity of chip designs, VLSI physical design has become a time-consuming, iterative design process. Power planning is the part of floorplanning in VLSI physical design where power grid networks are designed to provide adequate power to all the underlying functional blocks. Power planning also requires multiple iterative steps to create the power grid network while satisfying the allowed worst-case IR drop and electromigration (EM) margins. For the first time, this paper introduces a deep learning (DL)-based framework to approximately predict the initial design of the power grid network, considering different reliability constraints. The proposed framework reduces the number of iterative design steps and speeds up the total design cycle. A neural-network-based multi-target regression technique is used to create the DL model. Features are extracted, and the training dataset is generated, from the floorplans of power grid designs taken from an IBM processor. The DL model is trained using the generated dataset. The proposed DL-based framework is validated using a new set of power grid specifications (obtained by perturbing the designs used in the training phase). The results show that the predicted power grid design is close to the original design, with a minimal prediction error (~2%). The proposed DL-based approach also improves the design cycle time, with a speedup of ~6X for standard power grid benchmarks. (Comment: published in the proceedings of the IEEE/ACM Design, Automation and Test in Europe Conference (DATE) 2020, 6 pages.)
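The core idea, one model that maps floorplan features to several power-grid parameters at once, can be sketched with a multi-target regression. This is a linear least-squares stand-in for the paper's neural network; the feature and target names and all numbers are invented for illustration, not taken from the paper's IBM-derived dataset.

```python
import numpy as np

# Hypothetical training set: each row of X is a floorplan feature vector
# [total block power (W), block area (mm^2)]; each row of Y is a pair of
# power-grid parameters [stripe width (um), stripe pitch (um)].
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 2.0]])
Y = np.array([[2.0, 8.0], [3.0, 5.0], [4.0, 11.0], [5.0, 8.0]])

# Multi-target regression: one least-squares fit predicts all targets
# jointly, so a single model call yields a full initial grid design.
Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)

def predict(features):
    """Predict every power-grid target for one floorplan feature vector."""
    return np.append(features, 1.0) @ W

pred = predict([2.5, 2.0])  # initial grid guess for an unseen floorplan
```

In the paper's flow, the prediction replaces the designer's first manual grid, and the usual IR-drop/EM verification loop then only refines it.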

    Space power distribution system technology. Volume 1: Reference EPS design

    The multihundred-kilowatt electrical power aspects of a mannable space platform in low Earth orbit are analyzed from a cost and technology viewpoint. At the projected orbital altitudes, Shuttle launch and servicing are technically and economically viable. Power generation is specified as photovoltaic, consistent with projected planning. The cost models and trades are based upon a zero interest rate (the government taxes concurrently as required), constant dollars (1980), and costs derived in the first half of 1980. Space platform utilization of up to 30 years is evaluated to fully understand the impact of resupply and replacement as satellite missions are extended. Such lifetimes are potentially realizable with Shuttle servicing capability and are economically desirable.
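Under the report's zero-interest, constant-dollar assumptions, life-cycle cost is a plain sum with no discounting, which makes the resupply-versus-replacement trade easy to sketch. All dollar figures and intervals below are hypothetical, not the report's actual cost data.

```python
def life_cycle_cost(initial, event_cost, interval_yr, lifetime_yr):
    """Constant-dollar total: the initial build plus every servicing or
    replacement event occurring before end of life (at interval_yr,
    2*interval_yr, ...). Zero interest rate => no discounting."""
    n_events = (lifetime_yr - 1) // interval_yr
    return initial + n_events * event_cost

# Expendable satellite rebuilt every 10 years over a 30-year mission
expendable = life_cycle_cost(initial=500, event_cost=500,
                             interval_yr=10, lifetime_yr=30)
# Shuttle-serviced platform: costlier build, cheap 5-yearly servicing
serviced = life_cycle_cost(initial=800, event_cost=50,
                           interval_yr=5, lifetime_yr=30)
```

With these illustrative numbers the serviced platform wins over 30 years, which is the qualitative conclusion the abstract draws about extended lifetimes with Shuttle servicing.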

    Distributed Generation as Voltage Support for Single Wire Earth Return Systems

    Key issues for distributed generation (DG) inclusion in a distribution system include operation, control, protection, harmonics, and transients. This paper analyzes two of the main issues for DG installation: operation and control. Inclusion of DG in distribution networks has the potential to adversely affect voltage control. Both DG and tap changers aim to improve the voltage profile of the network, and hence they can interact, causing unstable operation or increased losses. Simulations show that a fast-responding DG with appropriate voltage references is capable of reducing such problems in the network. A DG control model is developed based on the voltage sensitivity of lines and evaluated on a single wire earth return (SWER) system. An investigation of the voltage interaction between DG controllers is conducted, and an interaction index is developed to predict the degree of interaction. The simulations show that the best power factor for DG injection to achieve voltage correction becomes higher for high-resistance lines. A drastic reduction in power losses can be achieved in SWER systems if DG is installed. Multiple DG units can aid the voltage profile of a feeder and should provide higher reliability. Setting the voltage references of separate DG units can provide a graduated response to voltage correction.
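The finding that the best power factor for voltage correction rises on high-resistance lines follows from the standard feeder approximation dV ≈ (R·P + X·Q)/V. The sketch below uses that textbook approximation, not the paper's full sensitivity model, and the per-unit line parameters are illustrative.

```python
def voltage_rise(R, X, P, Q, V=1.0):
    """Approximate per-unit voltage rise at the DG connection point of a
    radial feeder for active injection P and reactive injection Q."""
    return (R * P + X * Q) / V

injection = dict(P=0.05, Q=0.02)  # DG injection (per-unit)
swer = voltage_rise(R=0.8, X=0.3, **injection)   # high-resistance SWER line
urban = voltage_rise(R=0.2, X=0.6, **injection)  # lower-resistance urban line
```

On the high-R SWER line the R·P term dominates, so active power is the effective lever for voltage correction, i.e., the optimal DG power factor is closer to unity, consistent with the abstract.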

    An Experimental Study of Reduced-Voltage Operation in Modern FPGAs for Neural Network Acceleration

    We empirically evaluate an undervolting technique, i.e., underscaling the circuit supply voltage below the nominal level, to improve the power efficiency of Convolutional Neural Network (CNN) accelerators mapped to Field Programmable Gate Arrays (FPGAs). Undervolting below a safe voltage level can lead to timing faults due to excessive increases in circuit latency. We evaluate the reliability-power trade-off for such accelerators. Specifically, we experimentally study the reduced-voltage operation of multiple components of real FPGAs, characterize the corresponding reliability behavior of CNN accelerators, propose techniques to minimize the drawbacks of reduced-voltage operation, and combine undervolting with architectural CNN optimization techniques, i.e., quantization and pruning. We also investigate the effect of environmental temperature on the reliability-power trade-off of such accelerators. We perform experiments on three identical samples of the modern Xilinx ZCU102 FPGA platform with five state-of-the-art image classification CNN benchmarks. This approach allows us to study the effects of our undervolting technique across both software and hardware variability. We achieve more than 3X power-efficiency (GOPs/W) gain via undervolting. 2.6X of this gain is the result of eliminating the voltage guardband region, i.e., the safe voltage region below the nominal level that is set by the FPGA vendor to ensure correct functionality under worst-case environmental and circuit conditions. 43% of the power-efficiency gain is due to further undervolting below the guardband, which comes at the cost of accuracy loss in the CNN accelerator. We evaluate an effective frequency underscaling technique that prevents this accuracy loss, and find that it reduces the power-efficiency gain from 43% to 25%. (Comment: to appear at the DSN 2020 conference.)
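The reported figures compose roughly as follows, assuming (our interpretation, since the abstract only states "more than 3X" overall) that the below-guardband percentage multiplies the guardband-elimination gain.

```python
# Power-efficiency (GOPs/W) gain factors reported in the abstract.
guardband_gain = 2.6      # from eliminating the voltage guardband
below_guardband = 1.43    # +43% from further undervolting (costs accuracy)
with_freq_scaling = 1.25  # +25% once frequency underscaling restores accuracy

aggressive = guardband_gain * below_guardband      # max gain, accuracy loss
accuracy_safe = guardband_gain * with_freq_scaling # gain with accuracy kept
```

Both combined figures exceed 3X, so the trade the paper offers is between the extra efficiency of aggressive undervolting and the accuracy preserved by frequency underscaling.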