1,497 research outputs found

    Joint Control of Manufacturing and Onsite Microgrid System Via Novel Neural-Network Integrated Reinforcement Learning Algorithms

    Get PDF
    A microgrid is a promising distributed energy supply technology consisting of storage devices, generation capacity (including renewable sources), and controllable loads. It has been widely investigated and applied for residential and commercial end-use customers as well as critical facilities. In this paper, we propose a joint state-based dynamic control model for microgrids and manufacturing systems in which optimal controls on both sides coordinate energy demand and supply so that the overall production cost is minimized subject to a production target constraint. A Markov Decision Process (MDP) is used to formulate the decision-making procedure. The main computational challenge in solving the formulated MDP lies in the coexistence of discrete and continuous parts of the high-dimensional state/action space, which are intertwined with constraints. A novel reinforcement learning algorithm that leverages both Temporal Difference (TD) and Deterministic Policy Gradient (DPG) methods is proposed to address this challenge. Experiments on a manufacturing system with an onsite microgrid with renewable sources demonstrate the effectiveness of the proposed method.
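    The sketch below is only an illustration of the kind of hybrid TD/DPG update the abstract describes, not the authors' algorithm: a critic trained by temporal-difference learning with one Q-head per discrete action, and an actor trained by a deterministic policy gradient for the continuous part of the action. All dimensions, network sizes, and the update interface are assumptions.

```python
# Illustrative sketch only: hybrid TD (critic) + DPG (actor) update for a mixed
# discrete/continuous action space. Sizes and interfaces are assumed, not the paper's.
import torch
import torch.nn as nn

STATE_DIM, CONT_ACT_DIM, N_DISCRETE = 6, 2, 3   # assumed dimensions
GAMMA = 0.95

class Critic(nn.Module):
    """Q(s, a_cont)[k]: one Q-value per discrete action k, continuous action as input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + CONT_ACT_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_DISCRETE))
    def forward(self, s, a_cont):
        return self.net(torch.cat([s, a_cont], dim=-1))

class Actor(nn.Module):
    """Deterministic policy for the continuous part of the action (bounded by tanh)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, CONT_ACT_DIM), nn.Tanh())
    def forward(self, s):
        return self.net(s)

critic, actor = Critic(), Actor()
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)

def update(s, a_disc, a_cont, r, s_next):
    """One TD step for the critic and one DPG step for the actor (batched tensors)."""
    # TD target: bootstrap with the greedy discrete action and the actor's continuous action.
    with torch.no_grad():
        q_next = critic(s_next, actor(s_next)).max(dim=-1).values
        target = r + GAMMA * q_next
    q = critic(s, a_cont).gather(-1, a_disc.unsqueeze(-1)).squeeze(-1)
    critic_loss = (q - target).pow(2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # DPG step: adjust the continuous action to increase the critic's Q-values.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```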

    Joint Manufacturing and Onsite Microgrid System Control using Markov Decision Process and Neural Network Integrated Reinforcement Learning

    Get PDF
    Onsite microgrid generation systems with renewable sources are considered a promising complementary energy supply for manufacturing plants, especially during outages when energy from the grid is unavailable. Compared with the widely recognized resilience benefits of using such a system as a backup energy source, its operation alongside the electricity grid to support manufacturing in non-emergency mode has been less investigated. In this paper, we propose a joint dynamic decision-making model for the optimal control of both the manufacturing system and the onsite generation system. A Markov Decision Process (MDP) is used to formulate the decision-making model, and a neural-network-integrated reinforcement learning algorithm is proposed to approximate the value function of the MDP under a given policy. A case study based on a manufacturing system and a typical onsite microgrid generation system validates the proposed MDP model and the solution strategy.
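    As a minimal sketch of the core idea mentioned in the abstract (neural-network approximation of the value function under a fixed policy), the snippet below performs TD(0) policy evaluation. The network size, state dimension, and environment interface are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: TD(0) policy evaluation with a neural-network value approximator.
import torch
import torch.nn as nn

STATE_DIM, GAMMA, LR = 8, 0.99, 1e-3   # assumed values

value_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(value_net.parameters(), lr=LR)

def td0_update(s, r, s_next, done):
    """Single TD(0) step: move V(s) toward r + gamma * V(s') on a batch of transitions."""
    with torch.no_grad():
        target = r + GAMMA * (1.0 - done) * value_net(s_next).squeeze(-1)
    v = value_net(s).squeeze(-1)
    loss = nn.functional.mse_loss(v, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```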

    A systematic review of machine learning techniques related to local energy communities

    Get PDF
    In recent years, digitalisation has rendered machine learning a key tool for improving processes in several sectors, as in the case of electrical power systems. Machine learning algorithms are data-driven models based on statistical learning theory and are employed to exploit the data generated by the power system and its users. Energy communities are emerging as novel organisations for consumers and prosumers in the distribution grid. These communities may operate differently depending on their objectives and the potential services the community wants to offer to the distribution system operator. This paper presents the conceptualisation of a local energy community on the basis of a review of 25 energy community projects. Furthermore, an extensive literature review of machine learning algorithms for local energy community applications was conducted, and these algorithms were categorised according to forecasting, storage optimisation, energy management systems, power stability and quality, security, and energy transactions. The main algorithms reported in the literature were analysed and classified as supervised, unsupervised, and reinforcement learning algorithms. The findings demonstrate how supervised learning can provide accurate models for forecasting tasks, while reinforcement learning presents interesting capabilities for control-related applications.
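    To make the forecasting category concrete, the sketch below shows the kind of supervised model the review refers to: a gradient-boosted regressor predicting next-hour community load from the previous 24 hourly values. The synthetic data and feature choices are assumptions for demonstration only.

```python
# Illustrative only: supervised short-term load forecasting on synthetic hourly data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 90)                       # 90 days of hourly load (synthetic)
load = 50 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

# Lag features: the previous 24 hours predict the next hour.
X = np.stack([load[i:i + 24] for i in range(load.size - 24)])
y = load[24:]

split = int(0.8 * len(y))
model = GradientBoostingRegressor().fit(X[:split], y[:split])
print("held-out R^2:", model.score(X[split:], y[split:]))
```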

    A Stochastic Game Framework for Efficient Energy Management in Microgrid Networks

    Full text link
    We consider the problem of energy management in microgrid networks. A microgrid is capable of generating a limited amount of energy from a renewable resource and is responsible for handling the demands of its dedicated customers. Owing to the variable nature of renewable generation and customer demand, it is imperative that each microgrid optimally manage its energy. This involves intelligently scheduling demands on the customer side and selling power to (when there is a surplus) or buying power from (when there is a deficit) neighboring microgrids, depending on current and future needs. Typically, the transaction of power among microgrids happens at a price pre-decided by the central grid. In this work, we formulate the problems of demand and battery scheduling, energy trading, and dynamic pricing (where we allow the microgrids to set the transaction price depending on their current demand and renewable-energy configuration) in the framework of stochastic games. Subsequently, we propose a novel approach that uses an independent-learners Deep Q-learning algorithm to solve this problem. Through extensive empirical evaluation, we show that our proposed framework is more beneficial to the majority of the microgrids, and we provide a detailed analysis of the results.
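    The sketch below illustrates the independent-learners idea in general terms, not the paper's implementation: each microgrid trains its own Deep Q-network on its local observations and rewards, treating the other agents as part of the environment. Network sizes, action counts, and the training loop are assumptions.

```python
# Hedged sketch of independent-learners Deep Q-learning: one DQN per microgrid agent.
import random
import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS, GAMMA = 10, 5, 0.99   # assumed per-microgrid sizes

class IndependentDQNAgent:
    def __init__(self, eps=0.1, lr=1e-3):
        self.q = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                               nn.Linear(64, N_ACTIONS))
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.eps = eps

    def act(self, obs):
        """Epsilon-greedy action over this agent's own Q-values."""
        if random.random() < self.eps:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            return int(self.q(obs).argmax())

    def learn(self, obs, action, reward, next_obs):
        """One-step Q-learning update on the agent's local experience only."""
        with torch.no_grad():
            target = reward + GAMMA * self.q(next_obs).max()
        pred = self.q(obs)[action]
        loss = (pred - target).pow(2)
        self.opt.zero_grad(); loss.backward(); self.opt.step()

# One agent per microgrid; each learns independently from its own transitions.
agents = [IndependentDQNAgent() for _ in range(3)]
```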