
    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to help readers understand the motivation and methodology of the various ML algorithms, so that they can be invoked for hitherto unexplored services and scenarios of future wireless networks. Comment: 46 pages, 22 figures

    Demand Response Strategy Based on Reinforcement Learning and Fuzzy Reasoning for Home Energy Management

    As energy demand continues to increase, demand response (DR) programs in the electricity distribution grid are gaining momentum, and their adoption is set to grow steadily in the years ahead. Demand response schemes seek to incentivise consumers to use green energy and to reduce their electricity usage during peak periods, which helps balance supply and demand on the grid and generates revenue by selling surplus energy back to the grid. This paper proposes an effective energy management system for residential demand response using Reinforcement Learning (RL) and Fuzzy Reasoning (FR). RL is a model-free control strategy that learns from interaction with its environment by performing actions and evaluating the results. The proposed algorithm accounts for human preference by directly integrating user feedback into its control logic through fuzzy-reasoning reward functions. Q-learning, an RL strategy based on a reward mechanism, is used to make optimal decisions for scheduling the operation of smart home appliances, shifting controllable appliances from peak periods, when electricity prices are high, to off-peak hours, when electricity prices are lower, without affecting the customer's preferences. The proposed approach works with a single agent to control 14 household appliances and uses a reduced number of state-action pairs, with fuzzy logic providing the reward functions that evaluate an action taken in a given state. The simulation results show that the proposed appliance scheduling approach can smooth the power consumption profile and minimise the electricity cost while respecting the user's preference settings and feedback on each action taken. A user interface is developed in MATLAB/Simulink for the Home Energy Management System (HEMS) to demonstrate the proposed DR scheme. The simulation tool includes features such as smart appliances, electricity pricing signals, smart meters, solar photovoltaic generation, battery energy storage, an electric vehicle and grid supply.
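    The abstract does not disclose the paper's exact state-action design or fuzzy membership functions, so the following is only a minimal sketch of the core idea: tabular Q-learning that learns when to run a single controllable appliance, with a reward that trades an assumed time-of-use tariff against a triangular, fuzzy-style comfort score centred on an assumed preferred start hour. The tariff, appliance rating, preferred hour and hyperparameters are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Minimal Q-learning sketch (not the paper's model): learn when to run one
# controllable appliance under a time-of-use tariff, with a fuzzy-style
# comfort term in the reward. All numbers below are illustrative assumptions.

HOURS = 24
PRICE = np.array([0.10] * 7 + [0.25] * 4 + [0.15] * 6 + [0.30] * 4 + [0.10] * 3)  # $/kWh, assumed tariff
PREFERRED_HOUR = 20      # assumed user-preferred start hour
APPLIANCE_KW = 2.0       # assumed appliance load for a one-hour cycle
ACTIONS = [0, 1]         # 0 = defer, 1 = run the appliance this hour

def comfort(hour: int) -> float:
    """Triangular, fuzzy-style comfort membership: 1.0 at the preferred hour,
    decreasing linearly to 0 six hours away."""
    return max(0.0, 1.0 - abs(hour - PREFERRED_HOUR) / 6.0)

def reward(hour: int, action: int) -> float:
    """Trade off energy cost against comfort when the appliance runs."""
    if action == 0:
        return 0.0
    return -PRICE[hour] * APPLIANCE_KW + comfort(hour)

# Tabular Q-values over (hour, appliance-already-run) states.
Q = np.zeros((HOURS, 2, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

for _ in range(5000):                      # one episode = one simulated day
    done_flag = 0                          # 0 = appliance not yet run today
    for hour in range(HOURS):
        if rng.random() < eps:
            action = int(rng.integers(len(ACTIONS)))
        else:
            action = int(np.argmax(Q[hour, done_flag]))
        if done_flag == 1:
            action = 0                     # the appliance runs at most once per day
        r = reward(hour, action)
        next_flag = 1 if (done_flag or action) else 0
        if hour == HOURS - 1:              # end of day: no bootstrap
            target = r
        else:
            target = r + gamma * np.max(Q[hour + 1, next_flag])
        Q[hour, done_flag, action] += alpha * (target - Q[hour, done_flag, action])
        done_flag = next_flag

best_hour = int(np.argmax([Q[h, 0, 1] for h in range(HOURS)]))
print(f"Learned start hour: {best_hour}, price {PRICE[best_hour]:.2f}, comfort {comfort(best_hour):.2f}")
```

    With these assumed numbers the learned start hour tends to land just after the evening peak, close to the preferred hour, which is the qualitative behaviour the paper describes: load shifted away from peak prices without overriding user preference. The paper's single agent additionally coordinates 14 appliances and derives its reward from fuzzy rules over user feedback rather than a fixed membership function.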

    Achieving High Renewable Energy Integration in Smart Grids with Machine Learning

    The integration of high levels of renewable energy into smart grids is crucial for achieving a sustainable and efficient energy infrastructure. However, this integration presents significant technical and operational challenges due to the intermittent nature and inherent uncertainty of renewable energy sources (RES). Energy storage systems (ESS) are therefore closely tied to renewable energy, and their charge and discharge control has become an important part of the integration. The addition of RES and ESS brings complex control, communication, and monitoring capabilities, which also make the grid more vulnerable to attacks and pose new cybersecurity challenges. A large body of work has been devoted to optimally integrating RES and ESS into the traditional grid, and to combining ESS scheduling control with traditional Optimal Power Flow (OPF) control. Cybersecurity problems in RES-integrated grids have also gradually attracted researchers' interest. In recent years, machine learning techniques have emerged in different research fields, including the optimization of renewable energy integration in smart grids. Reinforcement learning (RL), which trains an agent to interact with its environment by making sequential decisions that maximize the expected future reward, is used as an optimization tool. This dissertation explores the application of RL algorithms and models to achieve high renewable energy integration in smart grids. The research questions focus on the effectiveness and benefits of renewable energy integration for individual consumers and electricity utilities, and on applying machine learning techniques to optimize the behavior of the ESS, the generators, and other components of the grid. The objectives of this research are to investigate current algorithms for renewable energy integration in smart grids, explore RL algorithms, develop novel RL-based models and algorithms for optimization control and cybersecurity, evaluate their performance through simulations on real-world data sets, and provide practical recommendations for implementation. The research approach includes a comprehensive literature review to understand the challenges and opportunities associated with renewable energy integration. Various optimization algorithms, such as linear programming (LP) and dynamic programming (DP), and various RL algorithms, such as Deep Q-Learning (DQN) and Deep Deterministic Policy Gradient (DDPG), are applied to solve problems arising during renewable energy integration in smart grids. Simulation studies on real-world data, including different types of loads and solar and wind energy profiles, are used to evaluate the performance and effectiveness of the proposed machine learning techniques. The results provide insights into the capabilities and limitations of machine learning in solving optimization problems in the power system. Compared with traditional optimization tools, the RL approach has the advantage of real-time implementation, at the cost of training time and a lack of guaranteed model performance. Recommendations and guidelines for the practical implementation of RL algorithms on power systems are provided in the appendix.
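    The abstract names DQN and DDPG but does not spell out its environment, so the following is a minimal, self-contained DQN sketch of the kind of problem it describes: hourly charge/discharge control of a home battery against an assumed time-of-use tariff, load and solar profile, with the reward equal to the negative cost of grid imports. The network size, tariff, battery parameters and training schedule are illustrative assumptions, not the dissertation's setup.

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

# Minimal DQN sketch (illustrative, not the dissertation's case study):
# hourly charge/discharge control of a battery under a time-of-use tariff
# with assumed load and solar profiles. Reward = -cost of grid imports.

PRICE = np.array([0.10] * 7 + [0.30] * 4 + [0.15] * 6 + [0.35] * 4 + [0.10] * 3)  # $/kWh, assumed
SOLAR = np.clip(np.sin(np.linspace(0, np.pi, 24)), 0, None) * 3.0                 # kW, assumed
LOAD  = np.array([0.5] * 7 + [1.5] * 10 + [2.5] * 5 + [0.8] * 2)                  # kW, assumed
BATT_KWH, BATT_KW, ETA = 10.0, 3.0, 0.95
ACTIONS = [-BATT_KW, 0.0, BATT_KW]   # discharge / idle / charge (kW)

def step(hour, soc, action_idx):
    """One environment step: returns (next_soc, reward). Reward is the negative
    cost of energy imported from the grid (export is not remunerated here)."""
    power = ACTIONS[action_idx]
    if power > 0:                                    # charging
        power = min(power, (BATT_KWH - soc) / ETA)
        soc += power * ETA
    else:                                            # discharging
        power = max(power, -soc * ETA)
        soc += power / ETA
    grid = LOAD[hour] - SOLAR[hour] + power          # net grid import (kW ~ kWh per hour)
    return soc, -PRICE[hour] * max(grid, 0.0)

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, len(ACTIONS)))
    def forward(self, x):
        return self.net(x)

def encode(hour, soc):
    return torch.tensor([hour / 24.0, soc / BATT_KWH, PRICE[hour] / PRICE.max()],
                        dtype=torch.float32)

policy, target = QNet(), QNet()
target.load_state_dict(policy.state_dict())
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
buffer, gamma, eps = deque(maxlen=10_000), 0.99, 0.1

for episode in range(300):
    soc = BATT_KWH / 2
    for hour in range(24):
        s = encode(hour, soc)
        if random.random() < eps:
            a = random.randrange(len(ACTIONS))
        else:
            with torch.no_grad():
                a = int(policy(s).argmax())
        soc, r = step(hour, soc, a)
        done = hour == 23
        buffer.append((s, a, r, encode((hour + 1) % 24, soc), done))

        if len(buffer) >= 256:                       # replay-buffer update
            batch = random.sample(buffer, 64)
            s_b  = torch.stack([b[0] for b in batch])
            a_b  = torch.tensor([b[1] for b in batch])
            r_b  = torch.tensor([b[2] for b in batch], dtype=torch.float32)
            s2_b = torch.stack([b[3] for b in batch])
            d_b  = torch.tensor([b[4] for b in batch], dtype=torch.float32)
            with torch.no_grad():
                y = r_b + gamma * (1 - d_b) * target(s2_b).max(dim=1).values
            q = policy(s_b).gather(1, a_b.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(q, y)
            opt.zero_grad(); loss.backward(); opt.step()
    if episode % 20 == 0:                            # periodic target-network sync
        target.load_state_dict(policy.state_dict())

with torch.no_grad():
    print("Greedy action at hour 18:", ACTIONS[int(policy(encode(18, BATT_KWH / 2)).argmax())], "kW")
```

    DDPG extends the same loop to a continuous charge/discharge setpoint by adding an actor network in place of the discrete action set, which is why the two algorithms are often compared for ESS dispatch. The dissertation's actual formulation also couples this control with OPF and considers cybersecurity, which this sketch does not attempt.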
