13,251 research outputs found

    Deep Reinforcement Learning for Long-Term Voltage Stability Control

    Deep reinforcement learning (DRL) is a machine-learning method suited to complex, high-dimensional control problems. In this study, a real-time control system based on DRL is developed for long-term voltage stability events. The possibility of using system services from demand response (DR) and energy storage systems (ESS) as control measures to stabilize the system is investigated. The performance of the DRL control is evaluated on a modified Nordic32 test system. The results show that the DRL control quickly learns an effective control policy that can handle the uncertainty involved in using DR and ESS. Compared with a rule-based load shedding scheme, the DRL control stabilizes the system both significantly faster and with less load curtailment. Finally, when tested on load and disturbance scenarios that were not included in the training data, the control showed good robustness and generalization capability.
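As a loose illustration of the idea in this abstract — not the paper's actual method, agent architecture, or Nordic32 model — the sketch below trains a tabular Q-learning agent to curtail load (standing in for DR/ESS services) on a hypothetical one-dimensional voltage model. All dynamics, coefficients, and action magnitudes are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical curtailment actions (MW) available from DR and ESS.
ACTIONS = [0.0, 5.0, 10.0]

def step(voltage, curtail_mw):
    """Toy post-disturbance dynamics: voltage sags unless load is curtailed.
    Coefficients are illustrative, not from any real test system."""
    new_v = voltage - 0.005 + 0.0012 * curtail_mw
    # Penalise voltage deviation heavily and curtailed load lightly.
    reward = -100.0 * abs(1.0 - new_v) - 0.1 * curtail_mw
    return new_v, reward

def discretize(v):
    # Map voltage (p.u.) onto 20 buckets covering roughly 0.85-1.05.
    return max(0, min(19, int((v - 0.85) / 0.01)))

Q = [[0.0] * len(ACTIONS) for _ in range(20)]
alpha, gamma, eps = 0.1, 0.95, 0.2

for _ in range(2000):                     # training episodes
    v = 0.93                              # post-disturbance sagging voltage
    for _ in range(30):
        s = discretize(v)
        a = (random.randrange(len(ACTIONS)) if random.random() < eps
             else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
        v, r = step(v, ACTIONS[a])
        Q[s][a] += alpha * (r + gamma * max(Q[discretize(v)]) - Q[s][a])

# After training, the greedy policy at the sagging state should curtail
# load rather than do nothing.
best = max(range(len(ACTIONS)), key=lambda i: Q[discretize(0.93)][i])
```

A real implementation would replace the toy dynamics with time-domain simulations of the test system and the Q-table with a deep network, but the learn-by-reward structure is the same.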

    Data-driven methods for real-time dynamic stability assessment and control

    Electric power systems are becoming increasingly complex to operate; a trend driven by increased demand for electricity, large-scale integration of renewable energy resources, and new system components with power electronic interfaces. In this thesis, a new real-time monitoring and control tool has been developed that can support system operators and allow more efficient utilization of the transmission grid. The tool comprises four methods aimed at the following complementary tasks in power system operation: 1) preventive monitoring, 2) preventive control, 3) emergency monitoring, and 4) emergency control. The methods are based on recent advances in machine learning and deep reinforcement learning to allow real-time assessment and optimized control while taking into account the dynamic stability of a power system. The method for preventive monitoring is proposed to ensure secure operation by providing real-time estimates of a power system’s dynamic security margins. It is based on a two-step approach, where neural networks first estimate the security margin, after which the estimates are validated using a search algorithm and actual time-domain simulations. The two-step approach is proposed to mitigate the inconsistency issues associated with neural networks under new or unseen operating conditions. The method is shown to reduce the total computation time of the security margin by approximately 70% for the given test system. Whenever the security margins fall below a certain threshold, another developed method, aimed at preventive control, is used to determine the optimal control actions that can restore the security margins to a level above a pre-defined threshold. This method is based on deep reinforcement learning and uses a hybrid control scheme capable of simultaneously adjusting both discrete and continuous action variables.
The results show that the developed method quickly learns an effective control policy that ensures a sufficient security margin for a range of different system scenarios. In case of severe disturbances, when the preventive methods have not been sufficient to guarantee stable operation, system operators must rely on emergency monitoring and control methods. In the thesis, a method for emergency monitoring is developed that can quickly detect the onset of instability and predict whether the present system state is stable or whether it will evolve into an alert or an emergency state in the near future. As time progresses and new events occur in the system, the network can update the assessment continuously. Case study results show good performance: the network accurately predicts voltage instability within only a few seconds after a disturbance in almost all test cases. Finally, a method for emergency control is developed, based on deep reinforcement learning and aimed at mitigating long-term voltage instability in real time. Once trained, the method can continuously assess system stability and suggest fast and efficient control actions to system operators in case of voltage instability. The control is trained to use load curtailment supplied by demand response and energy storage systems as an efficient and flexible alternative to stabilize the system. The results show that the developed method learns an effective control policy that stabilizes the system quickly while also minimizing the amount of required load curtailment.
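The two-step preventive-monitoring idea — a fast estimator first, then validation by search against expensive simulation — can be sketched roughly as follows. Here a hypothetical constant surrogate stands in for the trained neural network, a boolean stability check stands in for a time-domain simulation, and an invented 1250 MW limit replaces any real security boundary; only the structure of the approach is from the abstract.

```python
def simulate_secure(loading_mw):
    """Stand-in for an expensive time-domain simulation: returns True if
    the operating point is dynamically secure. The limit is invented."""
    return loading_mw <= 1250.0

def nn_margin_estimate():
    """Stand-in for the trained neural network; deliberately imperfect,
    which is exactly why step 2 exists."""
    return 1290.0

def validated_margin(tol_mw=1.0, bracket_mw=100.0):
    """Step 2: validate and refine the NN estimate with a bisection
    search bracketed around it, paying for only ~log2(2*bracket/tol)
    simulations instead of a full sweep of operating points."""
    est = nn_margin_estimate()
    lo, hi = est - bracket_mw, est + bracket_mw
    while hi - lo > tol_mw:
        mid = 0.5 * (lo + hi)
        if simulate_secure(mid):
            lo = mid          # mid verified secure: true margin is higher
        else:
            hi = mid          # mid insecure: true margin is lower
    return lo                 # largest loading verified secure

margin = validated_margin()
```

The search starting from the NN estimate rather than from scratch is what plausibly produces the reported reduction in total computation time: the simulations are spent only on a narrow bracket around the estimate.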

    A deep reinforcement learning based homeostatic system for unmanned position control

    Deep Reinforcement Learning (DRL) has been proven capable of deriving optimal control policies by minimising the error in dynamic systems. However, in many real-world operations, the exact behaviour of the environment is unknown. In such environments, random changes cause the system to reach different states for the same action. Hence, applying DRL to unpredictable environments is difficult, as the states of the world cannot be known under non-stationary transition and reward functions. In this paper, a mechanism to encapsulate the randomness of the environment is proposed using a novel bio-inspired homeostatic approach based on a hybrid of the Receptor Density Algorithm (an anomaly detection application based on artificial immune systems) and a plastic spiking neuronal model. DRL is then introduced to run in conjunction with this hybrid model. The system is tested on a vehicle autonomously re-positioning itself in an unpredictable environment. Our results show that the DRL-based process control raised the accuracy of the hybrid model by 32%.

    A Survey of Prediction and Classification Techniques in Multicore Processor Systems

    In multicore processor systems, being able to accurately predict the future provides new optimization opportunities that could not otherwise be exploited. For example, an oracle able to predict a certain application's behavior running on a smart phone could direct the power manager to switch to appropriate dynamic voltage and frequency scaling (DVFS) modes that guarantee minimum levels of desired performance while saving energy and thereby prolonging battery life. Using predictions enables systems to become proactive rather than continuing to operate in a reactive manner. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and multicore processor systems. Prediction has evolved from simple forecasting to sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior. These predictions can be exploited by novel optimization techniques spanning all layers of the computing stack. In this survey paper, we discuss the most popular techniques for prediction and classification in the general context of computing systems, with emphasis on multicore processors. The paper is far from comprehensive, but it will help the reader interested in employing prediction in the optimization of multicore processor systems.
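The DVFS example in this abstract can be illustrated with a deliberately simple predictor: an exponentially weighted moving average of recent utilisation drives the choice of the lowest frequency expected to keep the core under a target load. The frequencies, the 80% target, and the EWMA predictor are all assumptions for illustration, not techniques from the surveyed works.

```python
# Hypothetical available frequency levels (GHz), lowest first.
FREQS_GHZ = [0.8, 1.6, 2.4]

def ewma_predict(history, alpha=0.5):
    """Predict next-interval utilisation from a history of measurements
    (each in [0, 1], taken at the maximum frequency)."""
    pred = history[0]
    for u in history[1:]:
        pred = alpha * u + (1 - alpha) * pred
    return pred

def pick_freq(predicted_util, target=0.8):
    """Choose the lowest frequency whose scaled utilisation stays under
    the target; utilisation scales inversely with frequency."""
    for f in FREQS_GHZ:
        if predicted_util * FREQS_GHZ[-1] / f <= target:
            return f
    return FREQS_GHZ[-1]

trace = [0.10, 0.12, 0.11, 0.13, 0.12]   # lightly loaded core
pred = ewma_predict(trace)
freq = pick_freq(pred)                   # low load -> lowest frequency
```

A real governor would add hysteresis and per-core state, and the survey's point is precisely that richer machine-learned predictors can replace the EWMA here — but the proactive structure (predict, then act) is the same.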