Cost Efficient Distributed Load Frequency Control in Power Systems
The introduction of new technologies and the increased penetration of renewable resources are altering the power distribution landscape, which now includes a larger number of micro-generators. The centralized strategies currently employed for cost-efficient frequency control need to be revisited and decentralized to accommodate the growth of distributed generation in the grid. In this paper, we propose the use of Multi-Agent and Multi-Objective Reinforcement Learning techniques to train models that perform cost-efficient frequency control through decentralized decision making. More specifically, we cast the frequency control problem as a Markov Decision Process, propose reward composition and action composition as multi-objective techniques, and compare the results of the two. Reward composition is achieved by increasing the dimensionality of the reward function, while action composition is achieved through a linear combination of the actions produced by multiple single-objective models. The proposed framework is validated by comparing the observed dynamics with the acceptable limits enforced in the industry and with the cost-optimal setups.
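The action composition idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two single-objective policies and their proposed actions are hypothetical, and the blend is a simple convex combination.

```python
import numpy as np

def compose_actions(actions, weights):
    """Action composition: blend the actions proposed by several
    single-objective policies into one control signal via a
    weighted (convex) linear combination."""
    actions = np.asarray(actions, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalise so the weights sum to 1
    return weights @ actions           # weighted sum over policies

# Two hypothetical single-objective policies: one minimises frequency
# deviation, the other minimises generation cost.
a_freq = [0.8]   # power adjustment favoured by the frequency objective
a_cost = [0.2]   # power adjustment favoured by the cost objective
blended = compose_actions([a_freq, a_cost], weights=[0.5, 0.5])  # -> [0.5]
```

Varying the weights trades off frequency regulation against cost without retraining either single-objective model.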
Reinforcement Learning for Hybrid and Plug-In Hybrid Electric Vehicle Energy Management: Recent Advances and Prospects
Reinforcement machine learning for predictive analytics in smart cities
The digitization of our lives causes a shift in data production as well as in the required data management. Numerous nodes are capable of producing huge volumes of data during our everyday activities. Sensors, personal smart devices, and the Internet of Things (IoT) paradigm lead to a vast infrastructure covering all aspects of activity in modern societies. In most cases, the critical issue for public authorities (usually local ones, such as municipalities) is the efficient management of data in support of novel services. The reason is that analytics provided on top of the collected data can help deliver new applications that facilitate citizens' lives. However, the provision of analytics demands intelligent techniques for the underlying data management. The best-known technique is the separation of huge volumes of data into a number of parts and their parallel management, to limit the time required to deliver analytics. Analytics requests, in the form of queries, can then be executed to derive the knowledge necessary to support intelligent applications. In this paper, we define the concept of a Query Controller (QC) that receives queries for analytics and assigns each of them to a processor placed in front of each data partition. We discuss an intelligent process for query assignment that adopts Machine Learning (ML). We adopt two learning schemes, i.e., Reinforcement Learning (RL) and clustering. We report on the comparison of the two schemes and elaborate on their combination. Our aim is to provide an efficient framework to support the decision making of the QC, which should swiftly select the appropriate processor for each query. We provide mathematical formulations for the discussed problem and present simulation results.
Through a comprehensive experimental evaluation, we reveal the advantages of the proposed models and describe their outcomes, comparing them with a deterministic framework.
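The RL scheme for the QC can be sketched as a simple bandit-style learner: pick a processor per query, observe a reward (e.g. negative latency), and update a running estimate. This is a hypothetical sketch under an epsilon-greedy assumption, not the paper's actual model; the class and its parameters are illustrative.

```python
import random

class QueryController:
    """Hypothetical sketch of the QC: an epsilon-greedy bandit that
    learns which processor answers queries fastest on average."""

    def __init__(self, n_processors, epsilon=0.1):
        self.epsilon = epsilon
        self.q = [0.0] * n_processors  # estimated reward (e.g. -latency) per processor
        self.n = [0] * n_processors    # how often each processor was chosen

    def assign(self):
        """Choose a processor for the next incoming query."""
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))                 # explore
        return max(range(len(self.q)), key=self.q.__getitem__)   # exploit

    def update(self, processor, reward):
        """Incorporate the observed reward via an incremental mean."""
        self.n[processor] += 1
        self.q[processor] += (reward - self.q[processor]) / self.n[processor]

qc = QueryController(n_processors=3)
p = qc.assign()              # pick a processor for the next query
qc.update(p, reward=-0.25)   # reward = negative observed latency
```

The clustering scheme discussed in the abstract could replace `assign` with a lookup that maps a query's feature cluster to a preferred processor; the update loop stays the same.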
Avoiding Braess' Paradox through Collective Intelligence
In an Ideal Shortest Path Algorithm (ISPA), at each moment each router in a network sends all of its traffic down the path that will incur the lowest cost to that traffic. In the limit of an infinitesimally small amount of traffic for a particular router, routing that traffic via an ISPA is optimal as far as the cost incurred by that traffic is concerned. We demonstrate, though, that in many cases, due to the side effects of one router's actions on another router's performance, having routers use ISPAs is suboptimal as far as global aggregate cost is concerned, even when they are used only to route infinitesimally small amounts of traffic. As a particular example, we present an instance of Braess' paradox for ISPAs, in which adding new links to a network decreases overall throughput. We also demonstrate that load balancing, in which routing decisions are made to optimize the global cost incurred by all traffic currently being routed, is suboptimal as far as global cost averaged across time is concerned. This too is due to "side effects", in this case of current routing decisions on future traffic.
The theory of COllective INtelligence (COIN) is concerned precisely with the issue of avoiding such deleterious side effects. We present key concepts from that theory and use them to derive an idealized algorithm whose performance is better than that of the ISPA, even in the infinitesimal limit. We present experiments verifying this, and also showing that a machine-learning-based version of this COIN algorithm, in which costs are only imprecisely estimated (a version potentially applicable in the real world), also outperforms the ISPA, despite having access to less information. In particular, this COIN algorithm avoids Braess' paradox.
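Braess' paradox can be verified numerically on the textbook two-route network (not necessarily the network used in the paper): each route has one flow-dependent link with cost x/100 and one fixed-cost link with cost 45, and a free shortcut is then added between the routes. The demand value and link costs below are the standard illustrative ones, assumed for this sketch.

```python
def braess_costs(demand=4000.0):
    """Per-unit travel cost at selfish equilibrium, before and after
    adding a zero-cost shortcut, in the classic Braess network."""
    # Without the shortcut: symmetry splits the demand evenly, so each
    # route costs (demand/2)/100 on the variable link plus 45 fixed.
    cost_without = (demand / 2) / 100 + 45   # 20 + 45 = 65

    # With a zero-cost shortcut: every unit of traffic can chain the two
    # variable links, and at equilibrium all of it does, so each unit
    # pays demand/100 twice.
    cost_with = 2 * (demand / 100)           # 40 + 40 = 80
    return cost_without, cost_with

before, after = braess_costs()
assert after > before  # adding the link makes every unit of traffic worse off
```

The same effect drives the routing result above: each router's locally optimal choice over the enlarged network imposes side-effect costs on the others, lowering aggregate performance.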