8,600 research outputs found

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks hold substantial potential for supporting a broad range of complex, compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML, elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling wireless-network applications, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in understanding the motivation and methodology of the various ML algorithms, so that they can be invoked for hitherto unexplored services and scenarios of future wireless networks.
    Comment: 46 pages, 22 figures
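    The reinforcement learning surveyed above can be illustrated with a minimal sketch: a tabular Q-learner picking between two wireless channels with different (hypothetical, invented for illustration) success rates, learning to prefer the better one. All constants here are assumptions, not from the article.

```python
import random

# Minimal tabular Q-learning sketch: an agent repeatedly picks one of two
# channels; channel 1 succeeds more often. All parameters are illustrative.
random.seed(0)
SUCCESS_PROB = {0: 0.3, 1: 0.8}   # hypothetical per-channel success rates
q = {0: 0.0, 1: 0.0}              # Q-value per action (single-state problem)
alpha, epsilon = 0.1, 0.1         # learning rate, exploration rate

for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        a = random.randrange(2)
    else:
        a = max(q, key=q.get)
    reward = 1.0 if random.random() < SUCCESS_PROB[a] else 0.0
    # single-state update: Q(a) <- Q(a) + alpha * (r - Q(a))
    q[a] += alpha * (reward - q[a])

best = max(q, key=q.get)
print(best, round(q[1], 2))
```

    The learned Q-values approach each channel's success probability, so the greedy choice settles on the better channel; the same update rule underlies the richer multi-state methods the survey covers.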

    A Deep Reinforcement Learning-Based Model for Optimal Resource Allocation and Task Scheduling in Cloud Computing

    The advent of cloud computing has dramatically altered how information is stored and retrieved. However, the effectiveness and speed of cloud-based applications can be significantly impacted by inefficiencies in resource distribution and task scheduling. Such issues have long been challenging, but machine and deep learning methods have shown great potential in recent years. This paper proposes a new technique, the Deep Q-Network and Actor-Critic (DQNAC) model, which enhances cloud computing efficiency by optimizing resource allocation and task scheduling. We evaluate our approach on a dataset of real-world cloud workload traces and demonstrate that it can significantly improve resource utilization and overall performance compared to traditional approaches. Furthermore, our findings indicate that deep reinforcement learning (DRL)-based methods can be potent and effective for optimizing cloud computing, leading to improved cloud-based application efficiency and flexibility.
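    The actor-critic half of the approach can be sketched in miniature: a softmax policy (actor) assigns tasks to one of two machines while a scalar value baseline (critic) supplies the TD error for the policy-gradient update. The two-machine environment and all constants are invented for illustration, not the paper's DQNAC model.

```python
import math
import random

# Toy actor-critic sketch for assigning tasks to one of two machines;
# machine 1 yields a higher mean reward. Illustrative only.
random.seed(1)
REWARD_MEAN = {0: 0.2, 1: 0.9}   # hypothetical mean reward per machine
theta = [0.0, 0.0]               # actor: action preferences
v = 0.0                          # critic: value baseline
alpha_pi, alpha_v = 0.05, 0.1    # actor and critic learning rates

def softmax(prefs):
    exps = [math.exp(p) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

for _ in range(3000):
    probs = softmax(theta)
    a = 0 if random.random() < probs[0] else 1
    r = REWARD_MEAN[a] + random.uniform(-0.1, 0.1)
    delta = r - v                      # TD error against the critic baseline
    v += alpha_v * delta               # critic update
    for i in range(2):                 # actor update: softmax policy gradient
        grad = (1 - probs[i]) if i == a else -probs[i]
        theta[i] += alpha_pi * delta * grad

probs = softmax(theta)
print(round(probs[1], 2))
```

    The critic's baseline keeps the gradient signal centred, so the policy's probability mass shifts toward the higher-reward machine; a DQN component would replace the tabular pieces here with learned Q-networks.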

    Application of Machine Learning Methods for Asset Management on Power Distribution Networks

    This study examines the different kinds of Machine Learning (ML) models and their working principles for asset management in power networks. It also investigates the challenges behind asset management and its maintenance activities. The proper functioning, maintenance and control of electrical components are key challenges in power distribution systems, and asset management plays an essential role in determining the quality and profitability of the elements in the power network. In this review article, ML models are analyzed for their ability to improve the lifespan of electrical components based on maintenance management and assessment planning policies. The articles are categorized according to their purpose: 1) classification, 2) machine learning, and 3) artificial intelligence mechanisms. Moreover, the importance of using ML models for proper decision making based on the asset management plan is illustrated in detail. In addition, a comparative analysis between the ML models is performed, identifying the advantages and disadvantages of these techniques. The challenges and operation of asset management strategies are then discussed with respect to technical and economic factors. Based on this investigation, the most suitable and optimal machine learning technique can be identified and used for future work.
    Doi: 10.28991/ESJ-2022-06-04-017

    Research on operation optimization of building energy systems based on machine learning

    Doctoral thesis, Doctor of Engineering, The University of Kitakyushu. In this study, we focus on applying machine learning to optimize the operation of building energy systems, with a primary emphasis on reducing the operational costs of these systems and enhancing the self-sufficiency of renewable energy. This series of research outcomes has brought new insights to the field and contributes to improving the economic efficiency of building energy systems.

    Reinforcement Learning and Its Applications in Modern Power and Energy Systems: A Review


    Enhancing Dynamic Production Scheduling And Resource Allocation Through Adaptive Control Systems With Deep Reinforcement Learning

    Traditional production scheduling and resource allocation methods often struggle to adapt to changing conditions in manufacturing environments. To address this challenge, this study leverages an adaptive control system that integrates a Deep Deterministic Policy Gradient (DDPG) with a particle swarm optimization algorithm to enable real-time production scheduling and resource allocation. The system continuously learns from generated production data and adjusts production schedules and resource allocations based on evolving conditions such as demand fluctuations and resource availability. By harnessing the capabilities of deep reinforcement learning, the proposed approach applies the DDPG algorithm in a simulated environment to improve production efficiency, minimize delays, and optimize resource utilization. Through conducted experiments, the effectiveness of the DDPG-Particle Swarm Optimization technique (DRPO) was demonstrated in enhancing dynamic production scheduling and resource allocation in simulated manufacturing settings. This study represents a significant step towards intelligent, self-improving production control systems that can navigate complex and dynamic manufacturing environments.
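    The particle swarm optimization component paired with DDPG above can be sketched in its simplest form: a swarm of candidate solutions pulled toward personal and global bests. The one-dimensional quadratic cost stands in for a scheduling objective; every constant here is illustrative, not from the study.

```python
import random

# Minimal particle swarm optimization (PSO) sketch minimizing a toy cost.
random.seed(2)

def cost(x):
    return (x - 3.0) ** 2          # toy objective, optimum at x = 3

n, iters = 10, 100
w, c1, c2 = 0.5, 1.5, 1.5          # inertia, cognitive and social coefficients
pos = [random.uniform(-10, 10) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]                     # per-particle best position
gbest = min(pbest, key=cost)       # swarm-wide best position

for _ in range(iters):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        # velocity update: inertia + pull toward personal and global bests
        vel[i] = (w * vel[i]
                  + c1 * r1 * (pbest[i] - pos[i])
                  + c2 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=cost)

print(round(gbest, 2))
```

    In a scheduling setting the scalar position would become a vector encoding a candidate schedule, with the same velocity and best-position updates applied per dimension.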

    Deep Reinforcement Learning for Artificial Upwelling Energy Management

    The potential of artificial upwelling (AU) as a means of lifting nutrient-rich bottom water to the surface, stimulating seaweed growth, and consequently enhancing ocean carbon sequestration has been gaining increasing attention in recent years. This has led to the development of the first solar-powered and air-lifted AU system (AUS) in China. However, efficient scheduling of air injection remains a crucial challenge in operating the AUS, as it holds the potential to significantly improve system efficiency. Conventional approaches based on rules or models are often impractical due to the complex and heterogeneous nature of the marine environment and its associated disturbances. To address this challenge, we propose a novel energy management approach that utilizes a deep reinforcement learning (DRL) algorithm to develop efficient strategies for operating the AUS. Through extensive simulations, we evaluate the performance of our algorithm and demonstrate its superior effectiveness over traditional rule-based approaches and other DRL algorithms in reducing energy wastage while ensuring the stable and efficient operation of the AUS. Our findings suggest that a DRL-based approach offers a promising way of improving the efficiency of the AUS and enhancing the sustainability of seaweed cultivation and carbon sequestration in the ocean.
    Comment: 31 pages, 13 figures
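    The scheduling idea can be caricatured with a tiny state-dependent learner: a tabular Q-agent chooses whether to inject air based on a discretized solar-power state. The two-state, two-action environment and its rewards are invented for illustration, not the paper's AUS model.

```python
import random

# Toy multi-state Q-learning sketch: inject air (action 1) only when solar
# power is high (state 1); injecting in the low-solar state wastes energy.
random.seed(3)
# REWARD[state][action]: hypothetical payoffs
REWARD = [[0.5, -1.0],   # low solar:  idling ok, injection wasteful
          [0.0,  1.0]]   # high solar: injection productive
q = [[0.0, 0.0], [0.0, 0.0]]
alpha, gamma, eps = 0.1, 0.9, 0.1

s = 0
for _ in range(8000):
    # epsilon-greedy action in the current state
    if random.random() < eps:
        a = random.randrange(2)
    else:
        a = max((0, 1), key=lambda x: q[s][x])
    r = REWARD[s][a]
    s2 = random.randrange(2)          # solar state evolves randomly
    # Q-learning update with bootstrapped next-state value
    q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
    s = s2

policy = [max((0, 1), key=lambda x: q[st][x]) for st in (0, 1)]
print(policy)
```

    The learned greedy policy idles in the low-solar state and injects in the high-solar state; the paper's DRL approach plays the same role over a far richer state and action space.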