Determination of Harmonics for Modeling Integration of Solar Generation to The Electric Grid
The purpose of this study is to determine a model for analyzing the integration of solar generation into the electric grid. The model is then used to determine the harmonics of integrating solar panels into the electric grid, based on parallel or series combinations of solar cells. To study the integration of solar generation into the grid, we used solar series and solar parallel models in the EMTP (Electro Magnetic Transient Program) real-time simulation software. When solar generation models are integrated into the grid, the DC-to-AC conversion and the variation in solar energy intensity cause the electric utility to experience undesired harmonics that may impact the quality of service to other customers on the grid. This study identifies one method of analysis for determining the harmonic content of solar panels before solar generation is integrated into the electric grid.
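The kind of harmonic determination the abstract describes can be illustrated with a minimal sketch: sample an inverter output waveform, take its FFT, and read off the magnitudes at integer multiples of the fundamental. This is a generic spectral-analysis example, not the study's EMTP models; the 60 Hz fundamental, sampling rate, and the synthetic 5th-harmonic distortion are all assumptions for illustration.

```python
import numpy as np

def harmonic_content(signal, fs, f0=60.0, max_harmonic=13):
    """Estimate harmonic magnitudes and THD of a sampled waveform.

    signal: samples of the inverter output voltage or current
    fs: sampling rate in Hz
    f0: fundamental grid frequency (60 Hz assumed here)
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n * 2   # single-sided amplitudes
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Pick the bin nearest each integer multiple of the fundamental.
    mags = [spectrum[np.argmin(np.abs(freqs - k * f0))]
            for k in range(1, max_harmonic + 1)]
    fundamental, harmonics = mags[0], mags[1:]
    thd = np.sqrt(sum(h * h for h in harmonics)) / fundamental
    return mags, thd

# Synthetic test signal: 60 Hz fundamental plus a 5th harmonic at 10%
# amplitude, mimicking distortion introduced by DC-to-AC conversion.
fs = 6000
t = np.arange(0, 1, 1 / fs)
v = np.sin(2 * np.pi * 60 * t) + 0.1 * np.sin(2 * np.pi * 300 * t)
mags, thd = harmonic_content(v, fs)
```

With a one-second window the FFT bins land exactly on the harmonic frequencies, so the recovered THD matches the injected 10% distortion.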
Distributed Game Theoretic Optimization and Management of Multichannel ALOHA Networks
The problem of distributed rate maximization in multi-channel ALOHA networks
is considered. First, we study the problem of constrained distributed rate
maximization, where user rates are subject to total transmission probability
constraints. We propose a best-response algorithm, where each user updates its
strategy to increase its rate according to the channel state information and
the current channel utilization. We prove the convergence of the algorithm to a
Nash equilibrium in both homogeneous and heterogeneous networks using the
theory of potential games. The performance of the best-response dynamic is
analyzed and compared to a simple transmission scheme, where users transmit
over the channel with the highest collision-free utility. Then, we consider the
case where users are not restricted by transmission probability constraints.
Distributed rate maximization under uncertainty is considered to achieve both
efficiency and fairness among users. We propose a distributed scheme where
users adjust their transmission probability to maximize their rates according
to the current network state, while maintaining the desired load on the
channels. We show that our approach plays an important role in achieving the
Nash bargaining solution among users. Sequential and parallel algorithms are
proposed to achieve the target solution in a distributed manner. The
efficiencies of the algorithms are demonstrated through both theoretical and
simulation results.

Comment: 34 pages, 6 figures, accepted for publication in the IEEE/ACM Transactions on Networking; part of this work was presented at IEEE CAMSAP 201
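The best-response dynamic described above can be sketched in a toy form: each user repeatedly moves to the channel that maximizes its expected collision-free rate given the other users' current choices, until no user wants to deviate. The rate matrix and fixed per-user transmission probabilities below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def best_response_dynamics(rates, p, max_iters=100):
    """Distributed channel selection for slotted multi-channel ALOHA.

    rates: (N, K) array, rates[i, k] = rate user i would get alone on channel k
    p: length-N transmission probabilities (fixed per-user constraints)
    """
    n_users, n_channels = rates.shape
    choice = np.zeros(n_users, dtype=int)  # all users start on channel 0
    for _ in range(max_iters):
        changed = False
        for i in range(n_users):
            utility = np.empty(n_channels)
            for k in range(n_channels):
                # Probability channel k is free of every other user on it.
                others = [j for j in range(n_users) if j != i and choice[j] == k]
                free = np.prod([1 - p[j] for j in others]) if others else 1.0
                utility[k] = p[i] * rates[i, k] * free
            best = int(np.argmax(utility))
            if best != choice[i]:
                choice[i] = best
                changed = True
        if not changed:  # no user can improve: an equilibrium point
            break
    return choice

rng = np.random.default_rng(0)
choice = best_response_dynamics(rng.uniform(1, 2, size=(4, 3)), [0.3] * 4)
```

The expected-rate expression (own transmission probability times the probability every co-channel user stays silent) is the standard slotted-ALOHA throughput term; convergence of such updates is what the paper establishes via potential-game arguments.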
Topology-aware GPU scheduling for learning workloads in cloud environments
Recent advances in hardware, such as systems with multiple GPUs and their availability in the cloud, are enabling deep learning in various domains including health care, autonomous vehicles, and Internet of Things. Multi-GPU systems exhibit complex connectivity among GPUs and between GPUs and CPUs. Workload schedulers must consider hardware topology and workload communication requirements in order to allocate CPU and GPU resources for optimal execution time and improved utilization in shared cloud environments.
This paper presents a new topology-aware workload placement strategy to schedule deep learning jobs on multi-GPU systems. The placement strategy is evaluated with a prototype on a Power8 machine with Tesla P100 cards, showing speedups of up to ≈1.30x compared to state-of-the-art strategies; the proposed algorithm achieves this result by allocating GPUs that satisfy workload requirements while preventing interference. Additionally, a large-scale simulation shows that the proposed strategy provides higher resource utilization and performance in cloud systems.

This project is supported by the IBM/BSC Technology Center for Supercomputing collaboration agreement. It has also received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 639595). It is also partially supported by the Ministry of Economy of Spain under contract TIN2015-65316-P and Generalitat de Catalunya under contract 2014SGR1051, by the ICREA Academia program, and by the BSC-CNS Severo Ochoa program (SEV-2015-0493). We thank our IBM Research colleagues Alaa Youssef and Asser Tantawi for the valuable discussions. We also thank SC17 committee member Blair Bethwaite of Monash University for his constructive feedback on the earlier drafts of this paper.
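A minimal sketch of topology-aware placement: score every candidate set of free GPUs by the aggregate bandwidth of the links a communicating job would use, and allocate the best-scoring set. The bandwidth matrix below is a hypothetical 4-GPU node, not the paper's Power8 topology or algorithm.

```python
from itertools import combinations

def place_job(bandwidth, free_gpus, n_needed):
    """Pick the set of free GPUs maximizing pairwise interconnect bandwidth.

    bandwidth[i][j]: link bandwidth between GPU i and GPU j (e.g. NVLink
    vs. slower cross-socket links); the values are illustrative only.
    """
    best_set, best_score = None, float('-inf')
    for cand in combinations(free_gpus, n_needed):
        score = sum(bandwidth[a][b] for a, b in combinations(cand, 2))
        if score > best_score:
            best_set, best_score = cand, score
    return best_set

# Hypothetical node: GPU pairs (0,1) and (2,3) share a fast link (80),
# while cross-pair traffic goes over a slower path (16).
bw = [[0, 80, 16, 16],
      [80, 0, 16, 16],
      [16, 16, 0, 80],
      [16, 16, 80, 0]]
print(place_job(bw, [0, 1, 2, 3], 2))  # -> (0, 1)
```

Exhaustive search is fine at node scale (a handful of GPUs); a cluster-level scheduler would combine such a score with interference and fragmentation considerations, as the paper discusses.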
21st Century Simulation: Exploiting High Performance Computing and Data Analysis
This paper identifies, defines, and analyzes the limitations imposed on Modeling and Simulation by outmoded
paradigms in computer utilization and data analysis. The authors then discuss two emerging capabilities to
overcome these limitations: High Performance Parallel Computing and Advanced Data Analysis. First, parallel
computing, in supercomputers and Linux clusters, has proven effective by providing users an advantage in
computing power. This has been characterized as a ten-year lead over the use of single-processor computers.
Second, advanced data analysis techniques are both necessitated and enabled by this leap in computing power.
JFCOM's JESPP project is one of the few simulation initiatives to effectively embrace these concepts. The
challenges facing the defense analyst today have grown to include the need to consider operations among non-combatant
populations, to focus on impacts to civilian infrastructure, to differentiate combatants from non-combatants,
and to understand non-linear, asymmetric warfare. These requirements stretch both current
computational techniques and data analysis methodologies. In this paper, documented examples and potential
solutions will be advanced. The authors discuss the paths to successful implementation based on their experience.
database advances, data mining, evolutionary computing, genetic algorithms, and Monte Carlo sensitivity analyses.
Reviewed technologies include parallel computing, cluster computing, grid computing, data logging, operations research,
The modeling and simulation community has significant potential to provide more opportunities for training and
analysis. Simulations must include increasingly sophisticated environments, better emulations of foes, and more
realistic civilian populations. Overcoming the implementation challenges will produce dramatically better insights,
for trainees and analysts. High Performance Parallel Computing and Advanced Data Analysis promise increased
understanding of future vulnerabilities to help avoid unneeded mission failures and unacceptable personnel losses.
The authors set forth road maps for rapid prototyping and adoption of advanced capabilities. They discuss the
beneficial impact of embracing these technologies, as well as the risk mitigation required to ensure success.
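Of the techniques surveyed, Monte Carlo sensitivity analysis lends itself to a short sketch: sample each uncertain input over its range while holding the others at their midpoints, and rank inputs by the spread they induce in the simulation output. The "attrition" model and parameter ranges below are purely hypothetical stand-ins for a real simulation.

```python
import random

def monte_carlo_sensitivity(model, param_ranges, n_samples=10000, seed=42):
    """Crude one-at-a-time Monte Carlo sensitivity estimate.

    For each parameter, sample it uniformly within its range (others held
    at their midpoints) and record the spread of the model output. The
    spreads give a rough ranking of which inputs drive outcome variability.
    """
    rng = random.Random(seed)
    mids = {k: (lo + hi) / 2 for k, (lo, hi) in param_ranges.items()}
    spreads = {}
    for name, (lo, hi) in param_ranges.items():
        outputs = []
        for _ in range(n_samples):
            params = dict(mids)
            params[name] = rng.uniform(lo, hi)
            outputs.append(model(**params))
        spreads[name] = max(outputs) - min(outputs)
    return spreads

# Hypothetical model: losses scale with exposure time and threat rate.
model = lambda exposure, threat: exposure * threat
spreads = monte_carlo_sensitivity(
    model, {"exposure": (1.0, 3.0), "threat": (0.1, 0.2)})
```

One-at-a-time sampling ignores parameter interactions; variance-based methods (e.g. Sobol indices) are the usual next step when interactions matter.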
Rare-Event Sampling: Occupation-Based Performance Measures for Parallel Tempering and Infinite Swapping Monte Carlo Methods
In the present paper we identify a rigorous property of a number of
tempering-based Monte Carlo sampling methods, including parallel tempering as
well as partial and infinite swapping. Based on this property we develop a
variety of performance measures for such rare-event sampling methods that are
broadly applicable, informative, and straightforward to implement. We
illustrate the use of these performance measures with a series of applications
involving the equilibrium properties of simple Lennard-Jones clusters,
applications for which the performance levels of partial and infinite swapping
approaches are found to be higher than those of conventional parallel
tempering.

Comment: 18 figures
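The parallel tempering mechanism the abstract refers to can be sketched on a toy problem: replicas at several temperatures do local Metropolis moves, and neighbouring replicas attempt configuration swaps accepted with the standard criterion min(1, exp[(β_i − β_j)(E_i − E_j)]). The one-dimensional double-well potential below is an illustrative stand-in for the multi-funnel landscapes of Lennard-Jones clusters, not the paper's systems.

```python
import math
import random

def parallel_tempering(energy, betas, n_steps=5000, step=0.5, seed=1):
    """Minimal parallel tempering on a 1-D potential.

    energy: potential energy function U(x)
    betas: inverse temperatures, one replica per temperature (cold to hot)
    High-temperature replicas cross barriers easily; swaps ferry those
    configurations down to the cold replicas (the rare-event mechanism).
    """
    rng = random.Random(seed)
    x = [0.0] * len(betas)
    for _ in range(n_steps):
        for i, beta in enumerate(betas):           # local Metropolis moves
            prop = x[i] + rng.uniform(-step, step)
            d_e = energy(prop) - energy(x[i])
            if rng.random() < math.exp(min(0.0, -beta * d_e)):
                x[i] = prop
        i = rng.randrange(len(betas) - 1)          # attempt one neighbour swap
        delta = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
        if rng.random() < math.exp(min(0.0, delta)):
            x[i], x[i + 1] = x[i + 1], x[i]
    return x

# Double-well potential with minima at x = +/-1.
U = lambda x: (x * x - 1.0) ** 2
final = parallel_tempering(U, betas=[8.0, 4.0, 2.0, 1.0])
```

The occupation-based performance measures the paper develops would track, for example, how evenly each replica visits the two wells; the sketch above only implements the sampler itself.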
Utility spot pricing study: Wisconsin
Spot pricing covers a range of electric utility pricing structures that relate the marginal costs of electric generation to the prices seen by utility customers. At the shortest time frames, prices change every five minutes (the same time frame as used in utility dispatch); longer time frames might include 24-hour updating, in which prices are set one day in advance but vary hourly as a function of projected system operating costs. The critical concept is that customers see and respond to marginal rather than average costs. In addition, the concept of spot pricing includes a "quality of supply" component by which prices are increased at times when the system is approaching maximum capacity, thus providing a pricing mechanism to replace or augment rationing.

This research project evaluated the potential for spot pricing of industrial customers from the perspective of both the utility and its customers. A prototype Wisconsin utility (based on the WFPCO system) and its industrial customers were evaluated assuming 1980 demand levels and tariff structures. The utility system was simplified to include limited interconnection and exchange of power with surrounding utilities. The analysis was carried out using an hourly simulation model, ENPRO, to evaluate the marginal operating cost for any hour. The industrial energy demand was adjusted to reflect the price (relative to the present time-of-use pricing system). The simulation was then rerun to calculate the change in revenues (and customer bills) and the amount of consumer surplus generated.

A second analysis assumed a 5 percent increase in demand with no increase in capacity. Each analysis was carried out for an assumed low and high industrial response to price changes.

In an effort to generalize beyond the Wisconsin data and to evaluate the likely implications of a flexible pricing scheme for a utility system with a greater level of oil generation, particularly on the margin, the system capacity of the study utility was altered by substituting a limited number of coal plants with identical but higher fuel-cost oil-fired plants. The analyses for the modified utility structure parallel those for the standard utility structure discussed above.

The results of the analysis showed that the flexible pricing system produced both utility and customer savings. At lower capacity utilization the utility recovered less revenue than it did under the present time-of-use rates, while at higher utilization it recovered more. Under all scenarios tested, consumer surplus benefits were five to ten times greater than the simple fuel savings for the utility. While these results must be evaluated in additional testing of specific customer response patterns, it is significant to note that the customer's ability to choose his consumption pattern more flexibly holds a significant potential for customers to achieve greater surplus, even if their bills may in fact increase. These results are discussed in detail in the report, as are a number of customer bill impact considerations and the issues associated with revenue reconciliation.
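The two ingredients of a spot price described above (marginal operating cost plus a "quality of supply" adder that grows as demand approaches capacity) can be sketched numerically. The quadratic-over-shortfall adder shape below is purely illustrative; the study's ENPRO simulation computed marginal costs hourly from actual dispatch.

```python
def spot_price(marginal_cost, demand, capacity, adder_scale=0.5):
    """Hourly spot price: marginal generation cost plus a quality-of-supply
    component that rises steeply as demand approaches system capacity.

    The adder functional form is an assumption for illustration only.
    """
    utilization = demand / capacity
    adder = (adder_scale * marginal_cost * utilization ** 2
             / max(1e-9, 1 - utilization))
    return marginal_cost + adder

# Off-peak hour: system lightly loaded, price stays near marginal cost.
off_peak = spot_price(30.0, demand=5000, capacity=10000)
# Near-peak hour: same marginal cost, but the shortage premium dominates,
# signalling customers to curtail instead of being rationed.
near_peak = spot_price(30.0, demand=9500, capacity=10000)
```

The point of the exercise is the one the study makes: the same marginal cost yields very different prices depending on how close the system is to its capacity limit.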