5 research outputs found

    Computational complexity of evolutionary algorithms, hybridizations, and swarm intelligence

    Bio-inspired randomized search heuristics such as evolutionary algorithms, hybridizations with local search, and swarm intelligence are very popular among practitioners because they can be applied when the problem is not well understood or when there is not enough knowledge, time, or expertise to design problem-specific algorithms. Evolutionary algorithms simulate the natural evolution of species by iteratively applying evolutionary operators such as mutation, recombination, and selection to a set of solutions for a given problem. A recent trend is to hybridize evolutionary algorithms with local search to refine newly constructed solutions by hill climbing. Swarm intelligence comprises ant colony optimization as well as particle swarm optimization. These modern search paradigms rely on the collective intelligence of many simple agents to find good solutions for the problem at hand. Many empirical studies demonstrate the usefulness of these heuristics for a large variety of problems, but a thorough understanding is still far away. We regard these algorithms from the perspective of theoretical computer science and analyze the random time these heuristics need to optimize pseudo-Boolean problems. This is done in a mathematically rigorous sense, using tools known from the analysis of randomized algorithms, and it leads to asymptotic bounds on their computational complexity. This approach has been followed successfully for evolutionary algorithms, but the theory of hybrid algorithms and swarm intelligence is still in its infancy. Our results shed light on the asymptotic performance of these heuristics, increase our understanding of their dynamic behavior, and contribute to a rigorous theoretical foundation of randomized search heuristics.
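    The Python sketch below makes the object of study concrete: a (1+1) evolutionary algorithm optimizing the OneMax pseudo-Boolean benchmark while counting the iterations needed to reach the optimum. The algorithm, benchmark, and parameters are illustrative assumptions chosen for the demo, not details taken from this work.

    import random

    def one_max(x):
        # Pseudo-Boolean benchmark: number of 1-bits (to be maximised).
        return sum(x)

    def one_plus_one_ea(n, fitness, max_iters=10**6, seed=0):
        # (1+1) EA: flip each bit independently with probability 1/n and
        # accept the offspring if it is at least as good as the parent.
        # Returns the number of iterations needed to reach fitness n.
        rng = random.Random(seed)
        x = [rng.randint(0, 1) for _ in range(n)]
        fx = fitness(x)
        for t in range(1, max_iters + 1):
            y = [bit ^ (rng.random() < 1.0 / n) for bit in x]
            fy = fitness(y)
            if fy >= fx:
                x, fx = y, fy
            if fx == n:
                return t
        return max_iters

    if __name__ == "__main__":
        n = 100
        runs = [one_plus_one_ea(n, one_max, seed=s) for s in range(10)]
        # Known theory: expected optimisation time Theta(n log n) for this setting.
        print(f"average iterations over 10 runs: {sum(runs) / len(runs):.0f}")

    Averaging the observed iteration counts over repeated runs is the empirical counterpart of the asymptotic bounds derived in runtime analysis; for this particular combination the expected optimization time is known to be Theta(n log n).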

    Robust approach for capacity benefit margin computation with wind energy consideration for large multi-area power systems

    Capacity benefit margin (CBM) represents the tie-line transfer capability margin reserved for power interchange between interconnected areas. Accurate evaluation of CBM is essential for available transfer capability (ATC) determination. Most existing methods for CBM computation rely on complex optimization techniques in which, for every step increase in power transfer made to improve the supply reliability of a deficient area, the reliability must be recalculated and checked through optimization. For a large number of interconnected areas, these techniques therefore might not scale well. Another shortcoming is the simplifying assumption of a single deficient area within a fully connected network (i.e., every area has a direct tie line with every other area). In this thesis, a robust graph-theoretic approach is proposed to calculate CBM in a multi-area network with multiple deficient areas that are not directly connected; unlike the existing approaches, multiple deficient areas are considered and the network need not be fully connected. Previous techniques reported in the literature considered only conventional generating units in the loss of load expectation (LOLE) computation. A strategy for incorporating wind power generating units is proposed using the Weibull probability distribution. This is important because the supply reliability of an area is measured by its LOLE, and the random nature of wind generation has a strong effect on that reliability. In addition, LOLE, which is commonly used as the index for CBM computation, is usually evaluated from the area peak load demand and the available reserve capacity. Since the system peak demand occurs during only a few weeks of the year, the off-peak period is not efficiently accounted for in the LOLE evaluation. Hence, demand side management (DSM) resources, namely peak clipping and valley filling, are employed to modify the chronological load model of the system, which subsequently enhances the CBM quantification. Finally, the computed CBM values are incorporated into the ATC computation to study their influence on the ATC evaluation. The proposed technique has been evaluated on the IEEE RTS-96 test system because it provides all the reliability data required for LOLE computation. The technique can evaluate and allocate CBM among multi-area systems containing two deficient areas. The influence of renewable energy on LOLE has been efficiently evaluated, and the DSM technique was employed to improve the generation reliability of a three-area test system; the generation reliability of the interconnected areas was improved by an average of 35%. This improvement is significant in terms of the generation facilities, and the associated financial outlay, that would otherwise have to be put in place without the proposed DSM technique. The results and the performance evaluation show that the proposed technique is simple and robust compared to existing methods. It can also be used by utilities as a feasibility tool to verify the possibility of wheeling power to a deficient area using a maximum-flow algorithm.
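    As a rough illustration of how a Weibull wind model can enter an LOLE calculation, the Python sketch below estimates LOLE by Monte Carlo sampling over a chronological load profile. The Weibull shape and scale, the turbine power curve, the unit ratings and outage rates, and the 24-hour load profile are all illustrative assumptions, not data from the thesis.

    import random

    # Illustrative (assumed) parameters for a single area.
    WEIBULL_SHAPE, WEIBULL_SCALE = 2.0, 8.0    # wind-speed model (m/s)
    CUT_IN, RATED, CUT_OUT = 3.0, 12.0, 25.0   # turbine power curve (m/s)
    WIND_CAPACITY = 50.0                       # MW
    CONVENTIONAL_UNITS = [(100.0, 0.05)] * 4   # (capacity in MW, forced outage rate)

    def wind_power(rng):
        # Sample a wind speed from the Weibull distribution and map it through
        # a simplified cut-in/rated/cut-out power curve to available capacity (MW).
        v = rng.weibullvariate(WEIBULL_SCALE, WEIBULL_SHAPE)
        if v < CUT_IN or v > CUT_OUT:
            return 0.0
        if v >= RATED:
            return WIND_CAPACITY
        return WIND_CAPACITY * (v - CUT_IN) / (RATED - CUT_IN)

    def lole(hourly_load, samples=1000, seed=0):
        # Monte Carlo estimate of loss of load expectation (hours per period):
        # an hour contributes whenever the available capacity is below the load.
        rng = random.Random(seed)
        expected_loss_hours = 0.0
        for load in hourly_load:
            short = 0
            for _ in range(samples):
                cap = wind_power(rng)
                cap += sum(c for c, q in CONVENTIONAL_UNITS if rng.random() > q)
                short += cap < load
            expected_loss_hours += short / samples
        return expected_loss_hours

    if __name__ == "__main__":
        # A toy chronological load model with an evening peak (MW).
        load = [300 + 80 * (h in range(17, 21)) for h in range(24)]
        print(f"LOLE over the 24-hour profile: {lole(load):.2f} h")

    In the setting described above, the DSM measures (peak clipping and valley filling) would first flatten the chronological load profile, and the resulting LOLE would then feed the CBM and ATC computations.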

    A Computational View on Natural Evolution: On the Rigorous Analysis of the Speed of Adaptation

    Inspired by Darwin's ideas, Turing (1948) proposed evolutionary search as an automated problem-solving approach. Mimicking natural evolution, evolutionary algorithms evolve a set of solutions through the repeated application of the evolutionary operators (mutation, recombination, and selection). Evolutionary algorithms belong to the family of black-box algorithms, which are general-purpose optimisation tools. They are typically used when no good problem-specific algorithm is known for the problem at hand, and they have been reported to be surprisingly effective (Eiben and Smith, 2015; Sarker et al., 2002). Interestingly, although evolutionary algorithms are heavily inspired by natural evolution, their study has deviated from the study of evolution by the population genetics community. We believe that this is a missed opportunity and that both fields can benefit from an interdisciplinary collaboration. The question of how long it takes for a natural population to evolve complex adaptations has fascinated researchers for decades. We will argue that this is an equivalent research question to the runtime analysis of algorithms. By making use of the methods and techniques of both fields, we will derive a wealth of meaningful results for both communities, demonstrating that this interdisciplinary approach is effective and relevant. We will apply the tools used in the theoretical analysis of evolutionary algorithms to quantify the complexity of adaptive walks on many landscapes, illustrating how the structure of the fitness landscape and the parameter conditions can impose limits on adaptation. Furthermore, as geneticists use diffusion theory to track the change in the allele frequencies of a population, we will develop a new model to analyse the dynamics of evolutionary algorithms. Our model, based on stochastic differential equations, will allow us not only to describe the expected behaviour, but also to measure how much the process might deviate from that expectation.
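    As a toy illustration of the stochastic-differential-equation viewpoint mentioned above, the Python sketch below integrates a one-dimensional SDE with the Euler-Maruyama scheme and contrasts the mean final state with the spread across trajectories. The logistic drift, constant noise level, and step size are illustrative assumptions and not the model developed in this work.

    import math
    import random

    def euler_maruyama(drift, diffusion, x0, dt, steps, seed=0):
        # Simulate dX_t = drift(X_t) dt + diffusion(X_t) dW_t with the
        # Euler-Maruyama scheme and return the sampled trajectory.
        rng = random.Random(seed)
        x, path = x0, [x0]
        for _ in range(steps):
            dw = rng.gauss(0.0, math.sqrt(dt))
            x = x + drift(x) * dt + diffusion(x) * dw
            path.append(x)
        return path

    if __name__ == "__main__":
        # Assumed toy dynamics: logistic drift towards 1.0 plus small constant noise.
        drift = lambda x: 0.5 * x * (1.0 - x)
        diffusion = lambda x: 0.05
        finals = [euler_maruyama(drift, diffusion, x0=0.1, dt=0.01, steps=2000, seed=s)[-1]
                  for s in range(20)]
        mean = sum(finals) / len(finals)
        spread = max(finals) - min(finals)
        # The mean tracks the expected behaviour, while the spread indicates how far
        # individual trajectories can deviate from that expectation.
        print(f"final mean {mean:.3f}, spread across runs {spread:.3f}")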

    Theoretical and Empirical Evaluation of Diversity-preserving Mechanisms in Evolutionary Algorithms: On the Rigorous Runtime Analysis of Diversity-preserving Mechanisms in Evolutionary Algorithms

    Evolutionary algorithms (EAs) simulate the natural evolution of species by iteratively applying evolutionary operators such as mutation, recombination, and selection to a set of solutions for a given problem. One of the major advantages of these algorithms is that they can be implemented easily when the optimisation problem is not well understood and problem-specific algorithms cannot be designed due to a lack of time, knowledge, or expertise. EAs can also be used as a first step to gain insights when the problem is a black box to the developer: by evaluating candidate solutions it is possible to gain knowledge about the problem at hand. EAs are well suited to multimodal problems due to their use of a population. A diverse population can explore several hills in the fitness landscape simultaneously and offer several good solutions to the user, a feature desirable for decision making, multi-objective optimisation, and dynamic optimisation. However, a major difficulty when applying EAs is that the population may converge to a sub-optimal individual before the fitness landscape has been explored properly. Many diversity-preserving mechanisms have been developed to reduce the risk of such premature convergence, and given such a variety of mechanisms to choose from, it is often unclear which one is the best choice for a particular problem. We study the expected time such algorithms need to find satisfactory solutions for multimodal and multi-objective problems, in order to extract guidelines for the informed design of efficient and effective EAs. The resulting runtime bounds are used to predict and judge the performance of algorithms for arbitrary problem sizes, and further to clarify important design issues from a theoretical perspective. We combine theoretical research with empirical applications to test the practicality of the theoretical recommendations and to enable rapid knowledge transfer from theory to practice. With this approach, we provide a better understanding of the working principles of EAs with diversity-preserving mechanisms: we lay theoretical foundations and explain when and why certain diversity mechanisms are effective, and when they are not. This work thus contributes to the informed design of better EAs.
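    For concreteness, the Python sketch below runs a (mu+1) EA with one simple diversity-preserving mechanism, rejection of duplicate genotypes, on the bimodal TwoMax benchmark. The mechanism, benchmark, and parameter values are assumptions chosen for illustration and are not necessarily the mechanisms analysed in this work.

    import random

    def two_max(x):
        # A simple bimodal benchmark with optima at the all-zeros and all-ones strings.
        ones = sum(x)
        return max(ones, len(x) - ones)

    def mu_plus_one_ea(n, mu, fitness, generations, avoid_duplicates=True, seed=0):
        # (mu+1) EA with a simple diversity mechanism: offspring that duplicate a
        # genotype already present in the population are rejected.
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
        for _ in range(generations):
            parent = rng.choice(pop)
            child = [bit ^ (rng.random() < 1.0 / n) for bit in parent]
            if avoid_duplicates and child in pop:
                continue  # diversity mechanism: keep all genotypes distinct
            pop.append(child)
            pop.remove(min(pop, key=fitness))  # drop one worst individual
        return pop

    if __name__ == "__main__":
        final_pop = mu_plus_one_ea(n=30, mu=10, fitness=two_max, generations=5000)
        peaks = {"ones" if sum(x) > 15 else "zeros" for x in final_pop}
        # With the duplicate check, the population stands a better chance of keeping
        # individuals near both peaks instead of converging prematurely to one.
        print("peaks represented in the final population:", peaks)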