
    Wind Power Integration Control Technology for Sustainable, Stable and Smart Trend: A Review

    The key to achieving the sustainable development of wind power is grid integration and absorption, which involves generation, transmission, distribution, operation, and scheduling across multiple stages of electricity production. Based on an analysis of the current state of wind power development and the grid-integration requirements for wind power, this paper summarizes the development, characteristics, applicability, and trends of wind power integration technologies from five aspects: grid mode, control technology, transmission technology, scheduling, and forecasting techniques. Friendly integration, intelligent control, reliable transmission, and accurate prediction will be the major trends in wind power integration; these five aspects, interacting with and reinforcing one another, can realize the joint development of the grid and wind power, both economically and ecologically.

    On Reinforcement Learning for Full-length Game of StarCraft

    StarCraft II poses a grand challenge for reinforcement learning. Its main difficulties include a huge state and action space and a long time horizon. In this paper, we investigate a hierarchical reinforcement learning approach for StarCraft II. The hierarchy involves two levels of abstraction. The first is macro-actions automatically extracted from expert trajectories, which reduce the action space by an order of magnitude while remaining effective. The second is a two-layer hierarchical architecture that is modular and easy to scale, enabling curriculum transfer from simpler tasks to more complex ones. The reinforcement learning training algorithm for this architecture is also investigated. On a 64x64 map with restricted units, we achieve a winning rate of more than 99% against the level-1 built-in AI. Through the curriculum transfer learning algorithm and a mixture of combat models, we achieve an over 93% winning rate with Protoss against the most difficult non-cheating built-in AI (level-7) playing Terran, trained within two days on a single machine with only 48 CPU cores and 8 K40 GPUs. The approach also shows strong generalization when tested against previously unseen opponents, including cheating-level built-in AIs and all levels of the Zerg and Protoss built-in AIs. We hope this study sheds some light on future research in large-scale reinforcement learning. Comment: Appeared in AAAI 2019
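    The abstract does not spell out how macro-actions are mined from expert trajectories. Below is a minimal, hypothetical Python sketch of one plausible scheme, treating frequent fixed-length action n-grams in replays as macro-action candidates; the function name, parameters, and toy data are illustrative assumptions, not the paper's actual procedure.

    ```python
    from collections import Counter

    def extract_macro_actions(expert_trajectories, n=3, top_k=50):
        # Hypothetical sketch: count every length-n window of atomic
        # action IDs across all expert replays.
        counts = Counter()
        for traj in expert_trajectories:
            for i in range(len(traj) - n + 1):
                counts[tuple(traj[i:i + n])] += 1
        # Keep the top_k most frequent n-grams as macro-action candidates,
        # shrinking the effective action space the agent must explore.
        return [gram for gram, _ in counts.most_common(top_k)]

    # Toy usage: two short "replays" of atomic action IDs.
    replays = [[1, 4, 4, 2, 1, 4, 4, 2], [1, 4, 4, 2, 3]]
    print(extract_macro_actions(replays, n=3, top_k=5))
    ```

    Replacing thousands of atomic actions with a few dozen such macro-actions is one way to get the order-of-magnitude action-space reduction the abstract describes.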

    Excited Heavy Quarkonium Production at the LHC through W-Boson Decays

    A sizable number of heavy-quarkonium events can be produced through $W$-boson decays at the LHC. Such channels provide a suitable platform for studying heavy-quarkonium properties. The "improved trace technology", which treats the amplitude $\mathcal{M}$ directly at the amplitude level, is helpful for deriving compact analytical results for complex processes. As an important new application, in addition to the production of the lower-level Fock states $|(Q\bar{Q}')[1S]\rangle$ and $|(Q\bar{Q}')[1P]\rangle$, we further study the production of the higher excited $|(Q\bar{Q}')\rangle$-quarkonium Fock states $|(Q\bar{Q}')[2S]\rangle$, $|(Q\bar{Q}')[3S]\rangle$ and $|(Q\bar{Q}')[2P]\rangle$. Here $|(Q\bar{Q}')\rangle$ stands for the $|(c\bar{c})\rangle$ charmonium, the $|(c\bar{b})\rangle$ quarkonium, and the $|(b\bar{b})\rangle$ bottomonium, respectively. We show that a sizable number of events for these higher excited states can also be produced at the LHC; therefore, they need to be taken into consideration for a sound estimation. Comment: 7 pages, 9 figures and 6 tables. Typo errors are corrected; more discussions and two new figures have been added

    Characterizing and Subsetting Big Data Workloads

    Big data benchmark suites must include a diversity of data and workloads to be useful in fairly evaluating big data systems and architectures. However, using truly comprehensive benchmarks poses great challenges for the architecture community. First, we need to thoroughly understand the behaviors of a variety of workloads. Second, our usual simulation-based research methods become prohibitively expensive for big data. As big data is an emerging field, more and more software stacks are being proposed to facilitate the development of big data applications, which aggravates these challenges. In this paper, we first use Principal Component Analysis (PCA) to identify the most important characteristics from 45 metrics used to characterize big data workloads from BigDataBench, a comprehensive big data benchmark suite. Second, we apply a clustering technique to the principal components obtained from the PCA to investigate the similarity among big data workloads, and we verify the importance of including different software stacks in big data benchmarking. Third, we select seven representative big data workloads by removing redundant ones and release the BigDataBench simulation version, which is publicly available from http://prof.ict.ac.cn/BigDataBench/simulatorversion/. Comment: 11 pages, 6 figures, 2014 IEEE International Symposium on Workload Characterization
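    As a rough illustration of the described pipeline (standardize per-workload metrics, run PCA, cluster in the principal-component space, and keep one representative workload per cluster), here is a hedged Python sketch using scikit-learn. The synthetic data, the K-means choice, and all parameter values are assumptions for illustration; the paper's exact metric set, component selection, and clustering algorithm may differ.

    ```python
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(32, 45))  # stand-in: 32 workloads x 45 metrics

    # Normalize each metric, then keep enough principal components
    # to explain 90% of the variance (threshold is an assumption).
    X_std = StandardScaler().fit_transform(X)
    pcs = PCA(n_components=0.9).fit_transform(X_std)

    # Cluster workloads in PC space; 7 clusters mirrors the paper's
    # seven representative workloads.
    kmeans = KMeans(n_clusters=7, n_init=10, random_state=0).fit(pcs)

    # One representative per cluster: the workload nearest its centroid.
    reps = [int(np.argmin(np.linalg.norm(pcs - c, axis=1)))
            for c in kmeans.cluster_centers_]
    print(sorted(set(reps)))
    ```

    The representatives then stand in for the full suite in expensive simulation-based studies, which is the redundancy-removal step the abstract refers to.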