
    Review of trends and targets of complex systems for power system optimization

    Optimization systems (OSs) allow operators of electrical power systems (PSs) to operate PSs optimally and to create optimal PS development plans. Integrating OSs into PSs is a major current trend, and demand for PS optimization tools and for PS-OS experts is growing. The aim of this review is to define the current dynamics and trends in PS optimization research and to present several papers that clearly and comprehensively describe PS OSs whose characteristics correspond to the identified main trends in this research area. The current dynamics and trends were defined through an analysis of a database of 255 papers presenting PS OSs, published from December 2015 to July 2019. Eleven main characteristics of current PS OSs were identified. The statistical analyses yield the four characteristics of PS OSs most frequently presented in research papers: OSs for minimizing the price of electricity or reducing PS operation costs, OSs for optimizing the operation of renewable energy sources, OSs that regulate power consumption during the optimization process, and OSs that regulate energy storage system operation during the optimization process. Finally, each identified characteristic of current PS OSs is briefly described. The analysis covered all PS OSs presented in the observed period, regardless of the part of the PS whose operation was optimized, the voltage level of the optimized PS part, or the optimization goal of the PS OS.
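
    The review's trend statistics come from tagging each reviewed paper with the OS characteristics it exhibits and counting tag frequencies across the database. A minimal sketch of that kind of frequency analysis follows; the tag vocabulary is illustrative, not the review's actual label set.

```python
# Hypothetical frequency analysis over a tagged paper database; the tag
# names below are illustrative stand-ins for the review's characteristics.
from collections import Counter

papers = [
    {"id": 1, "tags": ["cost_minimization", "renewables"]},
    {"id": 2, "tags": ["demand_response", "storage_control"]},
    # ... one record per reviewed paper (255 in the review)
]

counts = Counter(tag for paper in papers for tag in paper["tags"])
for tag, n in counts.most_common():
    print(f"{tag}: {n} papers ({100 * n / len(papers):.0f}% of database)")
```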

    Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents

    Evolution strategies (ES) are a family of black-box optimization algorithms able to train deep neural networks roughly as well as Q-learning and policy gradient methods on challenging deep reinforcement learning (RL) problems, but much faster (e.g., hours vs. days) because they parallelize better. However, many RL problems require directed exploration because they have reward functions that are sparse or deceptive (i.e., contain local optima), and it is unknown how to encourage such exploration with ES. Here we show that algorithms invented to promote directed exploration in small-scale evolved neural networks via populations of exploring agents, specifically novelty search (NS) and quality diversity (QD) algorithms, can be hybridized with ES to improve its performance on sparse or deceptive deep RL tasks while retaining scalability. Our experiments confirm that the resulting algorithms (NS-ES and the two QD algorithms NSR-ES and NSRA-ES) avoid local optima encountered by ES and achieve higher performance on Atari and on simulated robots learning to walk around a deceptive trap. This paper thus introduces a family of fast, scalable RL algorithms capable of directed exploration. It also adds this new family of exploration algorithms to the RL toolbox and raises the interesting possibility that analogous algorithms with multiple simultaneous paths of exploration might also combine well with existing RL algorithms outside ES.
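
    A minimal sketch of the NS-ES idea described above: perturb the policy parameters, score each perturbation by novelty (mean distance to the k nearest behaviors in an archive) rather than by reward, and apply the standard rank-weighted ES update. The `behavior` rollout below is a toy placeholder for running the policy in the environment; NSR-ES would instead rank by the average of reward and novelty.

```python
# NS-ES sketch for a single meta-population member; `behavior` is a toy
# stand-in for an environment rollout that returns the policy's behavior
# characterization (e.g., the agent's final (x, y) position).
import numpy as np

def behavior(theta):
    """Placeholder rollout: returns a toy 2-D behavior characterization."""
    return np.tanh(theta[:2])

def novelty(b, archive, k=10):
    """Mean distance to the k nearest behaviors in the archive
    (the archive must be seeded with at least one behavior)."""
    dists = np.sort([np.linalg.norm(b - a) for a in archive])
    return dists[:k].mean()

def ns_es_step(theta, archive, sigma=0.02, alpha=0.01, n_perturb=100):
    """One ES update driven by novelty instead of reward (NS-ES)."""
    eps = np.random.randn(n_perturb, theta.size)
    scores = np.array([novelty(behavior(theta + sigma * e), archive)
                       for e in eps])
    ranks = (scores - scores.mean()) / (scores.std() + 1e-8)  # normalize scores
    grad = (ranks[:, None] * eps).mean(axis=0) / sigma        # ES gradient estimate
    theta = theta + alpha * grad
    archive.append(behavior(theta))  # the archive grows as exploration proceeds
    return theta

theta = np.zeros(8)
archive = [behavior(theta)]
for _ in range(10):
    theta = ns_es_step(theta, archive)
```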

    Multi-criteria Evolution of Neural Network Topologies: Balancing Experience and Performance in Autonomous Systems

    The majority of Artificial Neural Network (ANN) implementations in autonomous systems use a fixed, user-prescribed network topology, leading to sub-optimal performance and low portability. The existing NeuroEvolution of Augmenting Topologies (NEAT) paradigm offers a powerful alternative by allowing the network topology and the connection weights to be optimized simultaneously through an evolutionary process. However, most NEAT implementations consider only a single objective. There also persists the question of how to tractably introduce topological diversification that mitigates overfitting to training scenarios. To address these gaps, this paper develops a multi-objective neuro-evolution algorithm. While adopting the basic elements of NEAT, important modifications are made to the selection, speciation, and mutation processes. Against the backdrop of small-robot path-planning applications, an experience-gain criterion is derived to encapsulate the amount of diverse local environment encountered by the system. This criterion facilitates the evolution of genes that support exploration, thereby seeking to generalize from a smaller set of mission scenarios than is possible with performance maximization alone. The effectiveness of the single-objective (performance-optimizing) and multi-objective (performance- and experience-gain-optimizing) neuro-evolution approaches is evaluated on two different small-robot cases, with the ANNs obtained by multi-objective optimization observed to provide superior performance in unseen scenarios.
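
    A minimal sketch of the two-objective selection at the heart of such an approach, assuming each candidate network is scored on performance and experience-gain; both scoring fields stand in for the paper's actual criteria. Non-dominated candidates form the Pareto front from which NEAT-style speciation and mutation would then proceed.

```python
# Two-objective (performance, experience-gain) Pareto selection sketch;
# the genome field stands in for a NEAT genome (nodes + connections).
from dataclasses import dataclass

@dataclass
class Candidate:
    genome: object                # e.g., a NEAT genome
    performance: float = 0.0      # task performance (maximize)
    experience_gain: float = 0.0  # diversity of local environments seen (maximize)

def dominates(a: Candidate, b: Candidate) -> bool:
    """a Pareto-dominates b: no worse in both objectives, better in one."""
    return (a.performance >= b.performance
            and a.experience_gain >= b.experience_gain
            and (a.performance > b.performance
                 or a.experience_gain > b.experience_gain))

def pareto_front(pop: list[Candidate]) -> list[Candidate]:
    """Non-dominated candidates survive into the next generation."""
    return [c for c in pop
            if not any(dominates(o, c) for o in pop if o is not c)]
```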

    Online Build-Order Optimization for Real-Time Strategy Agents Using Multi-Objective Evolutionary Algorithms

    This investigation introduces a novel approach to online build-order optimization in real-time strategy (RTS) games. The goal of our research is to develop an artificial intelligence (AI) RTS planning agent for military critical decision-making education that can perform at an expert human level and assess a player's critical decision-making ability or skill level. Build-order optimization is modeled as a multi-objective problem (MOP), and solutions are generated with a multi-objective evolutionary algorithm (MOEA) that provides a set of good build-orders to an RTS planning agent. We define three research objectives: (1) design, implement, and validate a capability to determine the skill level of an RTS player; (2) design, implement, and validate a strategic planning tool that produces near-expert-level build-orders, which are ordered sequences of actions a player can issue to achieve a goal; and (3) integrate the strategic planning tool into our existing RTS agent framework and an RTS game engine. The skill-level metric we selected provides an original and needed method of evaluating an RTS player's skill level during game play. This metric is a high-level description of how quickly a player executes a strategy compared to known players executing the same strategy. Our strategic planning tool combines a game simulator and an MOEA to produce a set of diverse, good build-orders for an RTS agent. Through the integration of case-based reasoning (CBR), planning goals are derived and expert build-orders are injected into the MOEA population. The MOEA then produces a diverse approximate Pareto front that is integrated into our AI RTS agent framework. The planning tool thus provides an innovative online approach to strategic planning in RTS games. Experimentation with the Balanced Annihilation game on the Spring Engine reveals that the strategic planner discovers build-orders that are better than those of an expert scripted agent, achieving faster strategy-execution times.
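
    A hypothetical sketch of how a build-order might be encoded and varied inside an MOEA, and how CBR-derived expert build-orders can seed the initial population. The action vocabulary, the toy objective model, and `seed_population` are illustrative assumptions, not the thesis's implementation, which evaluates candidates with a game simulator.

```python
# Build-order encoding and variation for an MOEA; ACTIONS and the toy
# objective model below are illustrative, not the actual game model.
import random

ACTIONS = ["worker", "supply", "barracks", "soldier", "factory"]

def random_build_order(length=12):
    """Genome: an ordered sequence of actions the player will issue."""
    return [random.choice(ACTIONS) for _ in range(length)]

def mutate(order, rate=0.1):
    """Point mutation: occasionally swap one action for another."""
    return [random.choice(ACTIONS) if random.random() < rate else a
            for a in order]

def evaluate(order):
    """Stand-in for the game-simulator evaluation, returning the two
    objectives the MOEA trades off, e.g. time-to-goal and resources
    spent (both minimized)."""
    time_to_goal = sum(1.0 + 0.5 * (a == "factory") for a in order)
    resources = sum(10 for a in order if a != "worker")
    return time_to_goal, resources

def seed_population(expert_orders, size=50):
    """CBR-style injection: expert build-orders enter the initial
    population; the remainder is filled with random individuals."""
    pop = [list(o) for o in expert_orders]
    while len(pop) < size:
        pop.append(random_build_order())
    return pop
```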

    A Multi-Objective Approach to Tactical Maneuvering Within Real Time Strategy Games

    The real-time strategy (RTS) environment is a strong platform for simulating complex tactical problems. The overall research goal is to develop artificial intelligence (AI) RTS planning agents for military critical decision-making education. These agents should be able to perform at an expert level as well as to assess a player's critical decision-making ability or skill level. The time sensitivity of the RTS environment creates very complex situations: each situation must be analyzed and orders must be given to each tactical unit before the scenario on the battlefield changes and renders the decisions irrelevant. This particular research effort in RTS AI development focuses on constructing a unique approach to tactical unit positioning within an RTS environment. By using multi-objective evolutionary algorithms (MOEAs) to search for an optimal positioning solution, an AI agent can rapidly determine an effective unit-positioning response. The development of such an RTS AI agent goes through three distinct phases. The first is mathematically describing the problem space of tactical unit positioning within a combat scenario; such a definition allows the development of a generic MOEA search algorithm applicable to nearly every scenario. The next major phase requires the development and integration of this algorithm into the Air Force Institute of Technology RTS AI agent. The last phase involves experimenting with the positioning agent to determine its effectiveness and efficiency against various other tactical options. Experimental results validate that controlling the position of units within a tactical situation is an effective alternative for an RTS AI agent seeking to win a battle.
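
    A minimal sketch of the positioning encoding such an agent could search over, assuming each genome is a list of (x, y) positions, one per friendly unit, and a toy two-objective combat model (targets reachable vs. own units exposed); both are illustrative stand-ins for the agent's actual mathematical problem definition.

```python
# Unit-positioning genome and a toy two-objective evaluation; the ranges
# and exposure model are illustrative, not the agent's actual combat model.
import math
import random

def random_positions(n_units, width=100.0, height=100.0):
    """Genome: one (x, y) position per friendly unit on the battlefield."""
    return [(random.uniform(0, width), random.uniform(0, height))
            for _ in range(n_units)]

def evaluate(positions, enemy_positions, own_range=5.0, enemy_range=4.0):
    """Objective 1: enemy units our units can hit (maximize).
    Objective 2: friendly units inside enemy weapon range (minimize)."""
    targets = sum(1 for p in positions for e in enemy_positions
                  if math.dist(p, e) <= own_range)
    exposed = sum(1 for p in positions
                  if any(math.dist(p, e) <= enemy_range
                         for e in enemy_positions))
    return targets, exposed
```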