
    Applications of Soft Computing in Mobile and Wireless Communications

    Soft computing is a synergistic combination of artificial intelligence methodologies used to model and solve real-world problems that are impossible or too difficult to model mathematically. The use of conventional modeling techniques demands rigor, precision and certainty, which carry a computational cost. Soft computing, on the other hand, uses computation, reasoning and inference to reduce computational cost by exploiting tolerance for imprecision, uncertainty, partial truth and approximation. In addition to these cost savings, soft computing is an excellent platform for autonomic computing, owing to its roots in artificial intelligence. Wireless communication networks involve considerable uncertainty and imprecision due to a number of stochastic processes, such as the escalating number of access points, constantly changing propagation channels, sudden variations in network load and the random mobility of users. This reality has fuelled numerous applications of soft computing techniques in mobile and wireless communications. This paper reviews applications of the core soft computing methodologies in mobile and wireless communications.
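
    As a flavour of one core methodology such a review covers (fuzzy logic), the sketch below scores a candidate access point from imprecise inputs. It is purely illustrative: the membership ranges, the min combination rule, and the handover_score function are assumptions, not material from the paper.

        def ramp_up(x, a, b):
            # Membership rising linearly from 0 at a to 1 at b (shoulder function).
            if x <= a:
                return 0.0
            if x >= b:
                return 1.0
            return (x - a) / (b - a)

        def ramp_down(x, a, b):
            # Membership falling linearly from 1 at a to 0 at b.
            return 1.0 - ramp_up(x, a, b)

        def handover_score(rssi_dbm, load):
            # Toy fuzzy scoring of a candidate access point; ranges are assumptions.
            strong_signal = ramp_up(rssi_dbm, -85.0, -60.0)   # dBm
            lightly_loaded = ramp_down(load, 0.3, 0.8)        # fraction of capacity
            # Mamdani-style AND: attractive when the signal is strong AND the load is light.
            return min(strong_signal, lightly_loaded)

    For example, handover_score(-62.0, 0.2) evaluates to 0.92, so that access point would be favoured even though both inputs are imprecise.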

    Reinforcement learning and A* search for the unit commitment problem

    Previous research has combined model-free reinforcement learning with model-based tree search methods to solve the unit commitment problem with stochastic demand and renewables generation. This approach was limited to shallow search depths and suffered from significant variability in run time across problem instances of varying complexity. To mitigate these issues, we extend this methodology to more advanced search algorithms based on A* search. First, we develop a problem-specific heuristic based on priority list unit commitment methods and apply it in Guided A* search, reducing run time by up to 94% with negligible impact on operating costs. In addition, we address the run time variability issue by employing a novel anytime algorithm, Guided IDA*, replacing the fixed search depth parameter with a time budget constraint. We show that Guided IDA* mitigates the run time variability of previous guided tree search algorithms and enables further operating cost reductions of up to 1%.
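
    The anytime behaviour described above (a time budget in place of a fixed search depth) can be sketched as an iterative-deepening A* loop that keeps the best complete schedule found before the deadline. This is a generic illustration, not the authors' implementation: expand, heuristic, is_terminal and time_budget_s are hypothetical placeholders, and the paper's actual heuristic is derived from priority-list unit commitment methods.

        import time

        def guided_ida_star(root, expand, heuristic, is_terminal, time_budget_s=2.0):
            # Anytime IDA*-style search sketch: deepen the cost bound until the
            # wall-clock budget runs out, keeping the best plan found so far.
            deadline = time.monotonic() + time_budget_s
            best_plan, best_cost = None, float("inf")
            bound = heuristic(root)

            def dfs(state, g, path, bound):
                nonlocal best_plan, best_cost
                f = g + heuristic(state)
                if f > bound:
                    return f                      # candidate for the next bound
                if time.monotonic() > deadline:
                    return float("inf")           # budget exhausted: stop descending
                if is_terminal(state):
                    if g < best_cost:             # record the best schedule so far
                        best_cost, best_plan = g, list(path)
                    return float("inf")
                nxt = float("inf")
                for child, step_cost in expand(state):
                    nxt = min(nxt, dfs(child, g + step_cost, path + [child], bound))
                return nxt

            while time.monotonic() < deadline:
                next_bound = dfs(root, 0.0, [root], bound)
                if next_bound == float("inf"):
                    break                         # nothing exceeded the bound: done
                bound = next_bound                # deepen the bound and search again

            return best_plan, best_cost

    Because the best schedule found so far is always retained, the search can be interrupted at the deadline and still return a usable commitment plan.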

    Reinforcement Learning

    Brains rule the world, and brain-like computation is increasingly used in computers and electronic devices. Brain-like computation is about processing and interpreting data, or directly proposing and performing actions, and learning is a very important aspect of it. This book is about reinforcement learning, which involves performing actions to achieve a goal. The first 11 chapters describe and extend the scope of reinforcement learning; the remaining 11 chapters show that it is already widely used in numerous fields. Reinforcement learning can tackle control tasks that are too complex for traditional, hand-designed, non-learning controllers. As learning computers can deal with technical complexities, the task left to human operators is to specify goals at increasingly higher levels. This book shows that reinforcement learning is a very dynamic area in terms of both theory and applications, and it should stimulate and encourage new research in the field.

    Attributes of Big Data Analytics for Data-Driven Decision Making in Cyber-Physical Power Systems

    Big data analytics is a relatively new term in power system terminology. The concept concerns the way a massive volume of data is acquired, processed and analyzed to extract insight. In particular, big data analytics alludes to applications of artificial intelligence, machine learning, data mining and time-series forecasting methods. Decision-makers in power systems have long been hampered by the weakness of classical methods on large-scale practical cases: thousands or millions of variables, long solution times, a high computational burden, divergence of results, unjustifiable errors and poor model accuracy. Big data analytics is an ongoing topic that pinpoints how to extract insights from these large data sets. This article enumerates the applications of big data analytics in future power systems across several layers, from grid scale to local scale. Big data analytics has many applications in the areas of smart grid implementation, electricity markets, execution of collaborative operation schemes, enhancement of microgrid operation autonomy, management of electric vehicle operations in smart grids, active distribution network control, district hub system management, multi-agent energy systems, electricity theft detection, stability and security assessment by PMUs, and better exploitation of renewable energy sources. Employing big data analytics entails some prerequisites, such as the proliferation of IoT-enabled devices, easily accessible cloud space and blockchain. This paper provides an extensive review of the applications of big data analytics along with the prevailing challenges and solutions.

    Reinforcement Learning and Mixed-Integer Programming for Power Plant Scheduling in Low Carbon Systems: Comparison and Hybridisation

    Decarbonisation is driving dramatic growth in renewable power generation. This increases uncertainty in the load to be served by power plants and makes their efficient scheduling, known as the unit commitment (UC) problem, more difficult. UC is solved in practice by mixed-integer programming (MIP) methods; however, there is growing interest in emerging data-driven methods, including reinforcement learning (RL). In this paper, we extensively test two MIP (deterministic and stochastic) and two RL (model-free and with lookahead) scheduling methods over a large set of test days and problem sizes, for the first time comparing the state of the art of these two approaches on a level playing field. We find that deterministic and stochastic MIP consistently produce lower-cost UC schedules than RL, exhibiting better reliability and scalability with problem size. Average operating costs of RL are more than 2 times those of stochastic MIP for a 50-generator test case, and 13 times larger in the worst instance. However, the key strength of RL is its ability to produce solutions practically instantly, irrespective of problem size. We leverage this advantage to produce various initial solutions for warm starting concurrent stochastic MIP solves. By producing several near-optimal solutions simultaneously and then evaluating them using Monte Carlo methods, the differences between the true cost function and the discrete approximation required to formulate the MIP are exploited. The resulting hybrid technique outperforms both the RL and MIP methods individually, reducing total operating costs by 0.3% on average.
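
    A rough sketch of the warm-starting idea described above: sample several candidate schedules from the trained RL policy, score each with Monte Carlo roll-outs of the true (non-discretised) cost, and hand the best-ranked candidates to concurrent MIP solves as initial solutions. The callables policy_sample, sample_scenario and simulate_cost are hypothetical placeholders rather than the paper's actual interfaces.

        import numpy as np

        def rank_warm_starts(policy_sample, simulate_cost, sample_scenario,
                             n_candidates=8, n_scenarios=100, seed=0):
            # policy_sample() -> one candidate commitment schedule from the RL policy
            # sample_scenario(rng) -> one net-demand realisation
            # simulate_cost(schedule, scenario) -> "true" operating cost, evaluated
            # by simulation rather than by the MIP's discrete approximation
            rng = np.random.default_rng(seed)
            candidates = [policy_sample() for _ in range(n_candidates)]
            # Common random numbers: score every candidate on the same scenarios.
            scenarios = [sample_scenario(rng) for _ in range(n_scenarios)]
            scores = [float(np.mean([simulate_cost(c, w) for w in scenarios]))
                      for c in candidates]
            order = np.argsort(scores)
            # The best-ranked schedules can seed concurrent (stochastic) MIP solves.
            return [(candidates[i], scores[i]) for i in order]

    Ranking on simulated cost rather than the MIP objective is what lets the hybrid exploit the gap between the two cost functions mentioned in the abstract.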

    Ensemble Reinforcement Learning: A Survey

    Reinforcement Learning (RL) has emerged as a highly effective technique for addressing various scientific and applied problems. Despite its success, certain complex tasks remain difficult to address with a single model and algorithm alone. In response, ensemble reinforcement learning (ERL), a promising approach that combines the benefits of both RL and ensemble learning (EL), has gained widespread popularity. ERL leverages multiple models or training algorithms to comprehensively explore the problem space and possesses strong generalization capabilities. In this study, we present a comprehensive survey of ERL to provide readers with an overview of recent advances and challenges in the field. First, we introduce the background and motivation for ERL. Second, we analyze in detail the strategies that have been successfully applied in ERL, including model averaging, model selection and model combination. Subsequently, we summarize the datasets and algorithms used in relevant studies. Finally, we outline several open questions and discuss future research directions for ERL. By providing a guide for future scientific research and engineering applications, this survey contributes to the advancement of ERL.
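
    As a minimal illustration of the model-averaging strategy the survey mentions, the sketch below averages the value estimates of several independently trained Q-functions and acts greedily on the mean. The q_functions interface is an assumption for illustration, not an API from the survey.

        import numpy as np

        def ensemble_greedy_action(q_functions, state, n_actions):
            # q_functions: list of callables q(state, action) -> estimated value,
            # e.g. several independently trained Q-networks (assumption).
            mean_q = np.array([
                np.mean([q(state, a) for q in q_functions])
                for a in range(n_actions)
            ])
            # Act greedily on the ensemble-averaged value estimates.
            return int(np.argmax(mean_q)), mean_q

    Averaging over members reduces the variance of any single model's value estimate, which is the usual motivation for this family of ERL strategies.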

    Machine learning techniques implementation in power optimization, data processing, and bio-medical applications

    The rapid progress and development of machine-learning algorithms has become a key factor in determining the future of humanity. These algorithms and techniques have been used to solve a wide spectrum of problems, extending from data mining and knowledge discovery to unsupervised learning and optimization. This dissertation covers two study areas. The first area investigates the use of reinforcement learning and adaptive critic design algorithms in the field of power grid control. The second area, consisting of three papers, focuses on developing and applying clustering algorithms to biomedical data. The first paper presents a novel modelling approach for demand-side management of electric water heaters using Q-learning and action-dependent heuristic dynamic programming; the implemented approaches provide an efficient load management mechanism that reduces the overall power cost and smooths the grid load profile. The second paper implements an ensemble statistical and subspace-clustering model for analyzing the heterogeneous data of the autism spectrum disorder, introducing a novel k-dimensional algorithm that handles heterogeneous datasets efficiently. The third paper provides a unified learning model for clustering neuroimaging data to identify potential risk factors for suboptimal brain aging. In the last paper, clustering and clustering validation indices are used to identify the groups of compounds responsible for plant uptake and for contaminant transport from roots to the plants' edible parts.
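
    For the first paper's water-heater controller, a minimal tabular Q-learning sketch is given below. The state discretisation (temperature bin, price bin), the on/off action set, and the env_step simulator are assumptions made for illustration, not the dissertation's exact formulation.

        import numpy as np

        def q_learning_water_heater(env_step, n_temp_bins=10, n_price_bins=3,
                                    episodes=500, alpha=0.1, gamma=0.95, eps=0.1,
                                    seed=0):
            # env_step(state, action) -> (next_state, reward, done); states are
            # (temperature_bin, price_bin) tuples and reward is the negative of
            # electricity cost plus a comfort penalty (assumptions).
            rng = np.random.default_rng(seed)
            Q = np.zeros((n_temp_bins, n_price_bins, 2))   # actions: 0 = off, 1 = on
            for _ in range(episodes):
                state, done = (n_temp_bins // 2, 0), False
                while not done:
                    # epsilon-greedy action selection
                    if rng.random() < eps:
                        a = int(rng.integers(2))
                    else:
                        a = int(np.argmax(Q[state]))
                    nxt, r, done = env_step(state, a)
                    # one-step Q-learning update
                    Q[state][a] += alpha * (r + gamma * np.max(Q[nxt]) - Q[state][a])
                    state = nxt
            return Q

    The learned table can then be queried greedily at each control step to decide whether the heater should draw power in the current price period.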

    Auto-scaling techniques for cloud-based Complex Event Processing

    One key topic in cloud computing is elasticity, the ability of a cloud environment to adapt resource assignment to the workload demand in a timely manner. According to the cloud on-demand model, the infrastructure should be able to scale up and down in response to unpredictable workloads, in order to achieve both a guaranteed service level and cost efficiency. This work addresses the cloud elasticity problem, with particular reference to Complex Event Processing (CEP) systems. CEP systems are designed to process large volumes of event-driven data streams and to continuously provide results with low latency and in real time. CEP systems need to adapt to changing query and event loads. Because of their high computational requirements and varying loads, CEP systems are distributed systems running on cloud infrastructures. In this work we review cloud computing auto-scaling solutions and study their suitability for the CEP model. We implement selected solutions in a CEP prototype and evaluate the experimental results.
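
    A minimal sketch of the kind of reactive, threshold-based scaling policy such a CEP prototype might start from; the utilisation metric, thresholds and replica bounds are assumptions, not the solutions evaluated in this work.

        def reactive_scaler(utilisation, replicas, upper=0.8, lower=0.3,
                            min_replicas=1, max_replicas=16):
            # utilisation: average CPU or event-queue utilisation in [0, 1]
            # replicas: number of CEP operator instances currently running
            if utilisation > upper and replicas < max_replicas:
                return replicas + 1          # scale out under load
            if utilisation < lower and replicas > min_replicas:
                return replicas - 1          # scale in to save cost
            return replicas                  # within the target band: no change

    More elaborate auto-scalers replace the fixed thresholds with predictive or learning-based policies, which is precisely the design space surveyed in this work.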