1,029 research outputs found

    Harnessing Flexible and Reliable Demand Response Under Customer Uncertainties

    Full text link
    Demand response (DR) is a cost-effective and environmentally friendly approach for mitigating the uncertainties of renewable energy integration by taking advantage of the flexibility in customers' demands. However, existing DR programs suffer either from low participation due to strict commitment requirements or from unreliability when participation is voluntary. In addition, capacity planning for energy storage and reserves is traditionally done separately from DR program design, which incurs inefficiencies. Moreover, customers often face high uncertainty in their own costs of providing demand response, which is not well studied in the literature. This paper first models joint capacity planning and DR program design as a stochastic optimization problem that incorporates the uncertainties from renewable energy generation, customer power demands, and the customers' costs of providing DR. We propose online DR control policies based on the structure of the optimal offline solution. A distributed algorithm is then developed to implement the control policies without efficiency loss. We further offer an enhanced policy design that allows flexibility in the commitment level. Numerical simulations based on real-world traces demonstrate that the proposed algorithms achieve near-optimal social cost and significant social cost savings compared with baseline methods.
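    To make the "joint capacity planning and DR program design as a stochastic optimization problem" concrete, the sketch below shows a minimal scenario-based formulation. It is not the paper's model: the cost coefficients, the single balance constraint, and the variable names (reserve capacity C, per-scenario DR dispatch d) are illustrative assumptions.

    ```python
    # Minimal scenario-based sketch of joint capacity planning + DR dispatch.
    # NOT the paper's formulation: costs, scenarios, and constraints are assumed
    # purely for illustration of the two-stage stochastic structure.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    S = 50                                  # number of scenarios
    shortfall = rng.uniform(0.0, 10.0, S)   # net demand minus renewables (MW), per scenario
    dr_cost = rng.uniform(20.0, 60.0, S)    # customers' uncertain DR cost ($/MW), per scenario
    cap_cost = 40.0                         # reserve/storage capacity cost ($/MW), first stage

    C = cp.Variable(nonneg=True)            # capacity planned before uncertainty is realized
    d = cp.Variable(S, nonneg=True)         # DR dispatched in each scenario

    constraints = [d <= shortfall,          # cannot shed more load than the shortfall
                   shortfall - d <= C]      # remaining shortfall must be covered by reserves

    expected_dr_cost = cp.sum(cp.multiply(dr_cost, d)) / S
    cp.Problem(cp.Minimize(cap_cost * C + expected_dr_cost), constraints).solve()

    print(f"planned capacity: {float(C.value):.2f} MW, "
          f"avg DR dispatch: {d.value.mean():.2f} MW")
    ```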

    Incentivizing Reliable Demand Response with Customers' Uncertainties and Capacity Planning

    Full text link
    One of the major issues with integrating renewable energy sources into the power grid is the increased uncertainty and variability they bring. If this uncertainty is not sufficiently addressed, it will limit the further penetration of renewables into the grid and can even result in blackouts. Compared with energy storage, Demand Response (DR) offers a cost-effective and environmentally friendly way to provide reserves to load serving entities (LSEs). DR programs work by changing customers' loads when the power grid experiences a contingency such as a mismatch between supply and demand. Uncertainties on both the customer side and the LSE side make designing DR algorithms a major challenge. This paper makes the following main contributions: (i) we propose DR control policies based on the structure of the optimal offline solution; (ii) we develop a distributed algorithm for implementing the control policies without efficiency loss; (iii) we further offer an enhanced policy design that allows flexibility in the commitment level; and (iv) we perform numerical simulations based on real-world traces, which demonstrate that the proposed algorithms achieve near-optimal social cost. Details can be found in our extended version. (Comment: arXiv admin note: substantial text overlap with arXiv:1704.0453.)
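    As a companion to the planning sketch above, the snippet below illustrates the online side: dispatching DR when a contingency occurs while customers' realized costs are uncertain. The greedy cost-threshold rule, the 1 MW per-customer cap, and all numbers are assumptions for illustration; the abstract does not state that the paper's control policies take this form.

    ```python
    # Generic illustration of online DR dispatch under uncertain customer costs.
    # The threshold rule and parameters are assumptions, not the paper's policy.
    import numpy as np

    def dispatch_dr(shortfall_mw, customer_costs, price_threshold):
        """Call on the cheapest customers whose realized cost is below the threshold,
        until the supply-demand mismatch is covered or no eligible customer remains."""
        served, total_cost = 0.0, 0.0
        for i in np.argsort(customer_costs):
            if served >= shortfall_mw or customer_costs[i] > price_threshold:
                break
            reduction = min(1.0, shortfall_mw - served)   # assume at most 1 MW per customer
            served += reduction
            total_cost += reduction * customer_costs[i]
        return served, total_cost

    rng = np.random.default_rng(1)
    costs = rng.uniform(10.0, 80.0, size=200)             # realized (uncertain) DR costs, $/MW
    served, cost = dispatch_dr(30.0, costs, price_threshold=50.0)
    print(f"served {served:.1f} MW of a 30 MW shortfall at ${cost:.0f}")
    ```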

    Generalized Nash Equilibrium Seeking Algorithm Design for Distributed Constrained Multi-Cluster Games

    Full text link
    This paper addresses multi-cluster games, in which every player cooperates with the players in its own cluster and competes against the players in other clusters to minimize its cluster's cost function. Each player's decision is constrained by coupling inequality constraints, local inequality constraints, and local convex set constraints. The problem extends well-known noncooperative game problems and resource allocation problems by considering competition between clusters and cooperation within clusters at the same time. Because existing game algorithms and resource allocation algorithms do not simultaneously handle the resource allocation within clusters, the noncooperative game between clusters, and the aforementioned constraints, they cannot solve this problem. To seek the variational generalized Nash equilibrium (GNE) of the multi-cluster game, we design a distributed algorithm based on gradient descent and projections, and we analyze its convergence with the help of variational analysis and Lyapunov stability theory. Under the algorithm, all players asymptotically converge to the variational GNE of the multi-cluster game. Simulation examples verify the effectiveness of the algorithm.
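    The core "gradient descent plus projection" primitive mentioned above can be illustrated on a toy game. The sketch below runs projected pseudo-gradient iterations for a two-player quadratic game with box constraints; it deliberately omits the clusters, coupling constraints, and distributed communication that the paper's algorithm handles, and the game data are assumptions.

    ```python
    # Toy projected pseudo-gradient iteration for a two-player quadratic game.
    # Only illustrates "gradient step + projection"; not the paper's algorithm.
    import numpy as np

    # Player i minimizes f_i(x_i, x_-i) = 0.5*a_i*x_i**2 + b_i*x_i*x_other + c_i*x_i
    a = np.array([2.0, 3.0])
    b = np.array([0.5, 0.8])
    c = np.array([-4.0, -6.0])
    lo, hi = 0.0, 5.0            # local box constraint [0, 5] for each player
    step = 0.1

    x = np.zeros(2)
    for _ in range(500):
        grad = a * x + b * x[::-1] + c          # partial derivative of f_i w.r.t. x_i
        x = np.clip(x - step * grad, lo, hi)    # gradient step followed by projection

    print("approximate Nash equilibrium:", np.round(x, 3))
    ```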

    Hydraulic retention time and pressure affect anaerobic digestion process treating synthetic glucose wastewater

    Get PDF
    High-pressure anaerobic digestion (HPAD) can directly upgrade biogas (CH4 content up to 90%) within a reactor. How HPAD-related microbiomes are shaped by operational parameters (hydraulic retention time (HRT) and pressure), and how these parameters interact within the biochemical process, remains underexplored. In this study, an HPAD reactor was operated at five different HRTs (from 40 to 13 d) at pressures of around 10–13 bar. In HPAD, pressure was the driving force behind CH4 content. Low HRTs (13–20 d) led to volatile fatty acid accumulation, which occurred earlier than in normal-pressure digestion. HRT mainly affected the archaeal community, whereas pressure mostly affected the bacterial community. The hydrogenotrophic methanogen Methanobacterium prevailed at low HRTs (13–20 d). When operating continuous HPAD, attention should be paid to HRT optimization, as low HRTs (e.g., 13 d) impaired the activity of the CH4-synthesizing enzyme methyl-coenzyme M reductase.

    Automatic Curriculum Learning With Over-repetition Penalty for Dialogue Policy Learning

    Full text link
    Dialogue policy learning based on reinforcement learning is difficult to apply to real users when training dialogue agents from scratch because of the high cost. User simulators, which choose random user goals for the dialogue agent to train on, have been considered an affordable substitute for real users. However, this random sampling ignores how humans actually learn, making the learned dialogue policy inefficient and unstable. We propose a novel framework, Automatic Curriculum Learning-based Deep Q-Network (ACL-DQN), which replaces the traditional random sampling with a teacher policy model that realizes automatic curriculum learning for the dialogue policy. The teacher model arranges a meaningfully ordered curriculum and automatically adjusts it by monitoring the learning progress of the dialogue agent and an over-repetition penalty, without requiring any prior knowledge. The learning progress reflects the relationship between the dialogue agent's ability and the difficulty of the sampled goals, improving sample efficiency, while the over-repetition penalty guarantees the diversity of sampled goals. Experiments show that ACL-DQN improves the effectiveness and stability of dialogue tasks by a statistically significant margin. Furthermore, the framework can be improved further by equipping it with different curriculum schedules, which demonstrates its strong generalizability.
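    The teacher described above, which favors goals where the agent is improving and discourages repeats, can be sketched as a simple goal sampler. The scoring rule, softmax sampling, and penalty weight below are illustrative assumptions, not the exact ACL-DQN teacher model.

    ```python
    # Minimal sketch of a curriculum teacher: sample goals by learning progress,
    # penalize over-repetition. Scoring and sampling details are assumptions.
    import numpy as np

    class CurriculumTeacher:
        def __init__(self, n_goals, penalty_weight=0.5, seed=0):
            self.progress = np.zeros(n_goals)      # recent change in success rate per goal
            self.counts = np.zeros(n_goals)        # how often each goal was sampled recently
            self.last_success = np.zeros(n_goals)  # last observed success rate per goal
            self.penalty_weight = penalty_weight
            self.rng = np.random.default_rng(seed)

        def sample_goal(self):
            # Prefer goals where the agent improves fastest; discourage repeats.
            score = np.abs(self.progress) - self.penalty_weight * self.counts
            probs = np.exp(score - score.max())
            probs /= probs.sum()
            goal = self.rng.choice(len(probs), p=probs)
            self.counts[goal] += 1
            return goal

        def update(self, goal, success_rate):
            # Learning progress = change in the agent's success rate on this goal.
            self.progress[goal] = success_rate - self.last_success[goal]
            self.last_success[goal] = success_rate

    teacher = CurriculumTeacher(n_goals=5)
    g = teacher.sample_goal()
    teacher.update(g, success_rate=0.4)   # feedback from one batch of dialogues
    ```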

    Microstructure Evolution and Surface Cracking Behavior of Superheavy Forgings during Hot Forging

    Get PDF
    In recent years, superheavy forgings manufactured from 600 t grade ingots have been applied in the latest generation of nuclear power plants to ensure a high level of safety. However, producing such components is pushing the limits of the current free-forging industry. Large initial grain sizes and a low strain rate are the main factors governing the deformation of superheavy forgings during forging. In this study, 18Mn18Cr0.6N steel with a coarse grain structure was selected as a model material, and hot compression and hot tension tests were conducted at a strain rate of 10⁻⁴ s⁻¹. The essential nucleation mechanism of dynamic recrystallization involved low-angle grain boundary formation and subgrain rotation, and was independent of the bulging of original high-angle grain boundaries and the presence of twins. Twins formed during the growth of dynamically recrystallized grains. Grain refinement was not obvious at 1150 °C; lowering the deformation temperature to 1050 °C produced a fine grain structure, but the stress increased significantly. Crack-propagation paths comprised high-angle grain boundaries, twin boundaries, and grain interiors, in that order. For superheavy forgings, the ingot should have a larger height and a smaller diameter.