7,685 research outputs found
Intelligent Agents for Disaster Management
ALADDIN [1] is a multi-disciplinary project developing novel techniques, architectures, and mechanisms for multi-agent systems in uncertain and dynamic environments, with disaster management as its application focus. Research is being pursued within a number of themes, considering different aspects of the interaction between autonomous agents and the decentralised system architectures that support those interactions. The aim of the research is to contribute to building more robust multi-agent systems for future applications in disaster management and other similar domains.
A survey of machine learning techniques applied to self organizing cellular networks
In this paper, a survey of the literature of the past fifteen years on Machine Learning (ML) algorithms applied to self-organizing cellular networks is performed. For future networks to overcome current limitations and address the issues of today's cellular systems, it is clear that more intelligence needs to be deployed, so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self Organizing Networks (SON) solutions: it provides an overview of the most common ML techniques encountered in cellular networks and classifies each surveyed paper by its learning solution, giving examples along the way. The authors also classify each paper by its self-organizing use case and discuss how each proposed solution performed. In addition, the most commonly found ML algorithms are compared in terms of certain SON metrics, and general guidelines are proposed on when to choose each ML algorithm for each SON function. Lastly, this work presents future research directions and the new paradigms that more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, fully enabling the concept of SON in the near future.
A reinforcement learning based discrete supplementary control for power system transient stability enhancement
This paper proposes an application of a Reinforcement Learning (RL) method to the control of a dynamic brake aimed at enhancing power system transient stability. The control law of the resistive brake takes the form of switching strategies. In particular, the paper focuses on a model-based RL method known as prioritized sweeping, a method proven suitable in applications where computation is considered cheap. The curse-of-dimensionality problem is resolved by reducing the dimensionality of the system state through the One Machine Infinite Bus (OMIB) transformation. Results obtained on a synthetic four-machine power system are given to illustrate the performance of the proposed methodology.
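Prioritized sweeping itself can be illustrated outside the power-system setting. The sketch below runs it on a deterministic five-state toy chain, purely as an assumption for illustration; the paper's actual state space comes from the OMIB brake-switching problem and is not reproduced here. The key idea is the max-priority queue ordered by Bellman error, with updates propagated back to predecessor state-action pairs.

```python
import heapq

# Toy deterministic MDP standing in for a discretized state space (illustrative,
# NOT the paper's OMIB setup): states 0..4 on a line, actions 0 (left) / 1 (right),
# reward 1.0 whenever the agent lands on state 4.
N_STATES, ACTIONS, GAMMA, THETA = 5, (0, 1), 0.9, 1e-4

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + (1 if a else -1)))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

# Learned model part used by prioritized sweeping: which (s, a) lead into each state.
predecessors = {s: set() for s in range(N_STATES)}
for s in range(N_STATES):
    for a in ACTIONS:
        predecessors[step(s, a)[0]].add((s, a))

# Seed the priority queue with the initial Bellman errors (negated: heapq is a min-heap).
pq = []
for s in range(N_STATES):
    for a in ACTIONS:
        s2, r = step(s, a)
        p = abs(r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        if p > THETA:
            heapq.heappush(pq, (-p, s, a))

# Sweep: always back up the highest-error pair, then re-prioritize its predecessors.
while pq:
    _, s, a = heapq.heappop(pq)
    s2, r = step(s, a)
    Q[(s, a)] = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
    for ps, pa in predecessors[s]:
        ns, nr = step(ps, pa)
        p = abs(nr + GAMMA * max(Q[(ns, b)] for b in ACTIONS) - Q[(ps, pa)])
        if p > THETA:
            heapq.heappush(pq, (-p, ps, pa))

print(round(Q[(3, 1)], 2))  # approaches 1 / (1 - GAMMA) = 10 for this toy chain
```

Because the model is known, each backup is a full expected update; the priority queue simply orders them so that computation (assumed cheap, as the abstract notes) is spent where values are changing most.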
Smart grids as distributed learning control
The topic of smart grids has received a lot of attention, but from a scientific point of view it is a highly imprecise concept. This paper attempts to describe what could ultimately work as a control process to fulfill the aims usually stated for such grids, without throwing away important principles established by the pioneers of power system control. In modern terms, we need distributed (or multi-agent) learning control, which is suggested to work with a certain consensus mechanism that appears to leave room for achieving cyber-physical security, robustness, and performance goals. © 2012 IEEE.
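The consensus mechanism mentioned in this abstract can be sketched with a minimal averaging iteration. Everything concrete below is a hypothetical illustration, not the paper's scheme: four agents on a ring, doubly stochastic weights, and made-up local frequency-deviation estimates that the agents must agree on.

```python
# Hypothetical local measurements (e.g. frequency-deviation estimates, in Hz)
# held by four distributed controllers on a ring network.
x = [0.04, -0.02, 0.01, 0.05]
n = len(x)

# Consensus iteration: each agent repeatedly averages with its two ring neighbours.
# The weights (0.5 self, 0.25 per neighbour) are doubly stochastic, so the
# network-wide mean is preserved at every step and all agents converge to it.
for _ in range(200):
    x = [0.5 * x[i] + 0.25 * x[(i - 1) % n] + 0.25 * x[(i + 1) % n]
         for i in range(n)]

print([round(v, 6) for v in x])  # every agent ends at the global mean, 0.02
```

No agent ever sees the full state, which is what makes this style of update attractive for the decentralized, robustness-oriented setting the abstract describes.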
Physics Informed Reinforcement Learning for Power Grid Control using Augmented Random Search
Wide adoption of deep reinforcement learning in the energy system domain must overcome several challenges, including scalability, learning from limited samples, and high-dimensional continuous state and action spaces. In this paper, we integrated physics-based information from the generator operation state formula, also known as the Swing Equation, into the reinforcement learning agent's neural network loss function, and applied an augmented random search agent to optimize generator control under dynamic contingencies. Simulation results demonstrated improvements in training speed and reward convergence, as well as future potential in transferability and scalability.
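One common way a physics term like this enters a loss function is as a squared residual of the governing equation evaluated on a sampled trajectory. The sketch below follows that pattern for the swing equation M·δ̈ + D·δ̇ = Pm − Pe; the constants, the penalty weight, and the test trajectory are illustrative assumptions, not the paper's actual network or parameters.

```python
import numpy as np

# Illustrative constants (inertia, damping, sample period) -- not from the paper.
M, D, DT = 0.2, 0.05, 0.01

def swing_residual(delta, p_m, p_e):
    """Mean squared residual of the discretised swing equation
    M * d2(delta)/dt2 + D * d(delta)/dt - (Pm - Pe) = 0."""
    d1 = np.gradient(delta, DT)   # rotor speed deviation
    d2 = np.gradient(d1, DT)      # rotor acceleration
    r = M * d2 + D * d1 - (p_m - p_e)
    return float(np.mean(r ** 2))

def total_loss(task_loss, delta, p_m, p_e, lam=1.0):
    # Physics-informed loss: the agent's ordinary objective plus a weighted
    # penalty for trajectories that violate the swing dynamics.
    return task_loss + lam * swing_residual(delta, p_m, p_e)

# Sanity check: an equilibrium trajectory (constant angle, balanced power)
# satisfies the equation exactly, so the physics penalty vanishes.
t = np.arange(0.0, 1.0, DT)
delta = np.full_like(t, 0.3)
residual = swing_residual(delta, p_m=np.full_like(t, 0.8), p_e=np.full_like(t, 0.8))
print(residual)  # 0.0
```

In training, the penalty would be computed on the trajectories the agent generates, steering the policy search (here, augmented random search over policy weights) toward physically consistent behavior.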
Power System Stability Analysis using Neural Network
This work focuses on the design of modern power system controllers for automatic voltage regulators (AVR) and on the application of machine learning (ML) algorithms to correctly classify the stability of the IEEE 14-bus system. The LQG controller exhibits the best time-domain characteristics compared to PID and LQR while the sensor and amplifier gains are varied dynamically. The IEEE 14-bus system is then modeled, and contingency scenarios are simulated in the Modelica Dymola environment. Application of the Monte Carlo principle with a modified Poisson probability distribution, reviewed from the literature, reduces the total number of contingencies from 1000k to 20k. The damping ratio of each contingency is then extracted, pre-processed, and fed to ML algorithms such as logistic regression, support vector machines, decision trees, random forests, Naive Bayes, and k-nearest neighbors. Neural networks (NN) with one, two, three, five, seven, and ten hidden layers, trained on 25%, 50%, 75%, and 100% of the data, are considered to observe and compare prediction time, accuracy, precision, and recall. At the lower data size of 25%, the networks with two hidden layers and a single hidden layer reach accuracies of 95.70% and 97.38%, respectively. Increasing the number of hidden layers beyond two does not improve the overall score and takes much longer at prediction time, so deeper networks can be discarded for similar analyses. Moreover, with five, seven, and ten hidden layers the F1 score decreases. However, in practical scenarios where the data set contains more features and a greater variety of classes, a larger data size is required for proper NN training. This research provides more insight into damping-ratio-based system stability prediction with traditional ML algorithms and neural networks.
Comment: Master's Thesis Dissertation