
    Evolutionary Algorithms

    Evolutionary algorithms (EAs) are population-based metaheuristics, originally inspired by aspects of natural evolution. Modern varieties incorporate a broad mixture of search mechanisms and tend to blend inspiration from nature with pragmatic engineering concerns; however, all EAs essentially operate by maintaining a population of candidate solutions and in some way artificially 'evolving' that population over time. Particularly well-known categories of EAs include genetic algorithms (GAs), Genetic Programming (GP), and Evolution Strategies (ES). EAs have proven very successful in practical applications, particularly those requiring solutions to combinatorial problems. EAs are highly flexible and can be configured to address any optimization task, without the reformulation and/or simplification that other techniques would require. However, this flexibility comes at a cost: tailoring an EA's configuration and parameters to provide robust performance on a given class of tasks is often a complex and time-consuming process. This tailoring process is one of the many ongoing research areas associated with EAs. (To appear in R. Marti, P. Pardalos, and M. Resende, eds., Handbook of Heuristics, Springer.)
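    The maintain-a-population-and-evolve-it loop that the abstract identifies as common to all EAs can be sketched in a few lines. The sketch below is illustrative only: the bitstring encoding, tournament selection, one-point crossover, and the OneMax toy fitness are assumptions, not details taken from the chapter.

```python
import random

def evolve(fitness, n_bits=32, pop_size=50, generations=100,
           crossover_rate=0.9, mutation_rate=0.02):
    """Skeleton shared by EAs: keep a population and 'evolve' it over time."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def select():
        # Tournament selection: the fitter of two random individuals.
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            if random.random() < crossover_rate:
                cut = random.randrange(1, n_bits)        # one-point crossover
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]                     # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy run on OneMax (fitness = number of ones).
print(sum(evolve(fitness=sum)), "ones out of 32")
```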

    Adaptive algorithms for history matching and uncertainty quantification

    Numerical reservoir simulation models are the basis for many decisions regarding the prediction, optimisation, and improvement of production performance of oil and gas reservoirs. Because of uncertainty in model parameters, history matching is required to calibrate models to the dynamic behaviour of the reservoir. Finally, a set of history-matched models is used for reservoir performance prediction and for economic and risk assessment of different development scenarios. Various algorithms are employed to search and sample parameter space in history matching and uncertainty quantification problems. The choice of algorithm and its implementation, expressed through a number of control parameters, have a significant impact on the algorithm's effectiveness and efficiency, and thus on the quality of the results and the speed of the process. This thesis is concerned with the investigation, development, and implementation of improved and adaptive algorithms for reservoir history matching and uncertainty quantification problems. A set of evolutionary algorithms are considered and applied to history matching. The shared characteristic of the applied algorithms is adaptation by balancing exploration and exploitation of the search space, which can lead to improved convergence and diversity. This includes the use of estimation of distribution algorithms, which implicitly adapt their search mechanism to the characteristics of the problem. Hybridising them with genetic algorithms, multi-objective sorting algorithms, and real-coded, multi-model and multivariate Gaussian-based models can help these algorithms adapt even further and improve their performance. Finally, diversity measures are used to develop an explicit, adaptive algorithm and to control the algorithm's performance based on the structure of the problem. Uncertainty quantification in a Bayesian framework can be carried out by resampling the search space using Markov chain Monte Carlo sampling algorithms. Common criticisms of these are their low efficiency and their need for control-parameter tuning. A Metropolis-Hastings sampling algorithm with an adaptive multivariate Gaussian proposal distribution and a K-nearest-neighbour approximation has been developed and applied.
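    The adaptive sampler described in the final sentence can be illustrated with a minimal sketch. Assumptions to note: this follows the standard Haario-style recipe of fitting the Gaussian proposal covariance to the chain history, uses a toy 2-D Gaussian log-posterior, glosses over the diminishing-adaptation conditions needed for exactness, and omits the thesis's K-nearest-neighbour approximation entirely; all parameter values are illustrative.

```python
import numpy as np

def adaptive_mh(log_post, x0, n_steps=5000, adapt_after=500, eps=1e-6):
    """Metropolis-Hastings whose multivariate Gaussian proposal covariance
    is adapted from the chain history (Haario-style sketch)."""
    d = len(x0)
    chain = [np.asarray(x0, dtype=float)]
    lp = log_post(chain[0])
    cov = np.eye(d)                       # initial proposal covariance
    scale = 2.38 ** 2 / d                 # standard adaptive-MH scaling
    for t in range(1, n_steps):
        if t > adapt_after:               # fit the proposal to past samples
            cov = np.cov(np.array(chain).T) + eps * np.eye(d)
        prop = np.random.multivariate_normal(chain[-1], scale * cov)
        lp_prop = log_post(prop)
        # Accept with the usual Metropolis ratio (proposal is symmetric).
        if np.log(np.random.rand()) < lp_prop - lp:
            chain.append(prop)
            lp = lp_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)

# Toy posterior: a standard 2-D Gaussian; the mean estimate should be near 0.
samples = adaptive_mh(lambda x: -0.5 * float(x @ x), x0=np.zeros(2))
print(samples.mean(axis=0))
```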

    An analysis of the local optima storage capacity of Hopfield network based fitness function models

    A Hopfield Neural Network (HNN) with a new weight update rule can be treated as a second-order Estimation of Distribution Algorithm (EDA) or Fitness Function Model (FFM) for solving optimisation problems. The HNN models promising solutions and has a capacity for storing a certain number of local optima as low-energy attractors. Solutions are generated by sampling the patterns stored in the attractors. The number of attractors a network can store (its capacity) has an impact on solution diversity and, consequently, on solution quality. This paper introduces two new HNN learning rules and presents the Hopfield EDA (HEDA), which learns weight values from samples of the fitness function. It investigates the attractor storage capacity of the HEDA and shows it to be equal to that reported in the literature for a standard HNN. The relationship between HEDA capacity and linkage order is also investigated.
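    For readers unfamiliar with the attractor storage the abstract relies on, the sketch below shows the classical Hebbian HNN baseline: patterns are stored as low-energy attractors and recalled by asynchronous updates. The paper's two new learning rules are not reproduced here; the patterns, network size, and corruption level are all illustrative assumptions.

```python
import numpy as np

def hebbian_weights(patterns):
    """Store +/-1 patterns as low-energy attractors via the Hebb rule."""
    n = patterns.shape[1]
    W = (patterns.T @ patterns) / n
    np.fill_diagonal(W, 0.0)              # no self-connections
    return W

def recall(W, state, steps=200):
    """Asynchronous updates descend the energy toward a stored attractor."""
    state = state.copy()
    for _ in range(steps):
        i = np.random.randint(len(state))
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1,  1, 1, -1, -1, -1]])
W = hebbian_weights(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])    # pattern 0 with one bit corrupted
print(recall(W, noisy))                   # typically recovers pattern 0
```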

    Adaptive scaling of evolvable systems

    Neo-Darwinian evolution is an established natural inspiration for computational optimisation, with a diverse range of forms. A particular feature of models such as Genetic Algorithms (GA) [18, 12] is the incremental combination of partial solutions distributed within a population of solutions. In principle, this mechanism allows certain problems to be solved that would not be amenable to simple local search. Such problems require these partial solutions, generally known as building blocks, to be handled without disruption. The traditional means for this is a combination of a suitable chromosome ordering with a sympathetic recombination operator. More advanced algorithms attempt to adapt to accommodate these dependencies during the search. The recent approach of Estimation of Distribution Algorithms (EDA) aims to directly infer a probabilistic model of a promising population distribution from a sample of fitter solutions [23]. This model is then sampled to generate a new solution set. A symbiotic view of evolution is behind the recent development of Compositional Search Evolutionary Algorithms (CSEA) [49, 19, 8], which build up an incremental model of variable dependencies conditional on a series of tests. Building blocks are retained as explicit genetic structures and conditionally joined to form higher-order structures. These have been shown to be effective on special classes of hierarchical problems but are unproven on less tightly structured problems. We propose that there exists a simple yet powerful combination of the above approaches: the persistent, adapting dependency model of a compositional pool with the expressive and compact variable weighting of probabilistic models. We review and deconstruct some of the key methods above in order to determine their individual drawbacks and their common principles. By this reasoned approach we aim to arrive at a unifying framework that can adaptively scale to span a range of problem structure classes. This is implemented in a novel algorithm called the Transitional Evolutionary Algorithm (TEA), which is empirically validated in an incremental manner, verifying its various facets and comparing it with related algorithms on an increasingly structured series of benchmark problems. This prompts some refinements, resulting in a simple and general algorithm that is nevertheless competitive with state-of-the-art methods.
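    The EDA step the abstract summarises, inferring a probabilistic model from a sample of fitter solutions and sampling it to generate the next population [23], is easiest to see in its univariate form. The sketch below is a UMDA-style illustration under assumed parameters, not the TEA itself, whose dependency model is richer.

```python
import random

def umda(fitness, n_bits=32, pop_size=100, n_select=50, generations=60):
    """Univariate EDA: estimate per-bit marginals from the fitter half
    of the population, then sample the model to produce the next one."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:n_select]
        # The model: probability that each bit is 1 among selected solutions.
        p = [sum(ind[i] for ind in elite) / n_select for i in range(n_bits)]
        p = [min(max(pi, 1 / n_bits), 1 - 1 / n_bits) for pi in p]  # keep diversity
        pop = [[1 if random.random() < pi else 0 for pi in p]
               for _ in range(pop_size)]
    return max(pop, key=fitness)

# Toy run on OneMax (fitness = number of ones).
print(sum(umda(fitness=sum)), "ones out of 32")
```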

    An overview of population-based algorithms for multi-objective optimisation

    In this work we present an overview of the most prominent population-based algorithms and the methodologies used to extend them to multiple-objective problems. Although not exact in the mathematical sense, population-based multi-objective optimisation techniques have long been recognised as immensely valuable and versatile for real-world applications. These techniques are usually employed when exact optimisation methods are not easily applicable, or when, due to sheer problem complexity, exact methods would be very costly. Another advantage is that, since a population of decision vectors is considered in each generation, these algorithms are implicitly parallelisable and can generate an approximation of the entire Pareto front at each iteration. A critique of their capabilities is also provided.
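    The claim that a population approximates the entire Pareto front rests on the dominance relation, sketched below for minimisation; the objective values are made-up toy data.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimisation assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """The non-dominated subset of a population: its Pareto front estimate."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

objectives = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(non_dominated(objectives))   # (3.0, 4.0) is dominated by (2.0, 3.0)
```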

    TEDA: A Targeted Estimation of Distribution Algorithm

    This thesis discusses the development and performance of a novel evolutionary algorithm, the Targeted Estimation of Distribution Algorithm (TEDA). TEDA takes the concept of targeting, an idea previously shown to be effective as part of a Genetic Algorithm (GA) called Fitness Directed Crossover (FDC), and introduces it into a novel hybrid algorithm that transitions from a GA to an Estimation of Distribution Algorithm (EDA). Targeting applies to optimisation problems with a concept of control points, genes that can be said to be active, where the total number of control points in a solution is as important as where they are located. When generating a new solution, an algorithm that uses targeting must first choose how many control points to set before choosing which ones to set. The hybrid approach is designed to take advantage of the ability of EDAs to exploit patterns within the population to effectively locate the global optimum, while avoiding the tendency of EDAs to converge prematurely. This is achieved by initially using a GA to explore the search space before transitioning into an EDA as the population converges on the region of the global optimum. As targeting places an extra restriction on the solutions produced by specifying their size, combining it with the hybrid approach allows TEDA to produce solutions that are of an optimal size and of a higher quality than would be found using a GA alone, without risking a loss of diversity. TEDA is tested on three problem domains: optimal control of cancer chemotherapy, network routing, and Feature Subset Selection (FSS). Of these, TEDA showed a consistent advantage over standard EAs in the routing problem and demonstrated that it can find good solutions faster than untargeted EAs and non-evolutionary approaches in the FSS problem. It did not demonstrate any advantage over other approaches when applied to chemotherapy. The FSS domain demonstrated that, in large and noisy problems, TEDA's targeting-derived ability to reduce the size of the search space significantly increased the speed with which good solutions could be found. The routing domain demonstrated that, where the ideal number of control points is deceptive, both targeting and the exploitative capabilities of an EDA are needed, making TEDA a more effective approach than both untargeted approaches and FDC. Additionally, in none of the problems was TEDA seen to perform significantly worse than any of the alternative approaches.
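    The two-step generation rule described above, choose the number of control points first and only then choose which to set, can be sketched as follows. The size-selection rule here (a fitness-weighted blend of the parents' sizes) is a made-up stand-in, not the actual TEDA or FDC rule; everything in this sketch is an illustrative assumption.

```python
import random

def targeted_offspring(parent1, parent2, fit1, fit2):
    """Targeting sketch: decide how many genes to activate, then which."""
    n = len(parent1)
    # Step 1: choose the child's size, biased toward the fitter parent's size.
    w = fit1 / (fit1 + fit2) if fit1 + fit2 > 0 else 0.5
    target = round(w * sum(parent1) + (1 - w) * sum(parent2))
    # Step 2: choose which genes to set, preferring genes active in a parent.
    candidates = [i for i in range(n) if parent1[i] or parent2[i]]
    random.shuffle(candidates)
    chosen = set(candidates[:target])
    while len(chosen) < target:           # top up if the parents are too sparse
        chosen.add(random.randrange(n))
    return [1 if i in chosen else 0 for i in range(n)]

child = targeted_offspring([1, 0, 1, 0], [1, 1, 0, 0], fit1=3.0, fit2=1.0)
print(child, "-> size", sum(child))
```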

    DEUM: a framework for an estimation of distribution algorithm based on Markov random fields.

    Estimation of Distribution Algorithms (EDAs) belong to the class of population-based optimisation algorithms. They are motivated by the idea of discovering and exploiting the interactions between variables in the solution. They estimate a probability distribution from a population of solutions and sample it to generate the next population. Many EDAs use probabilistic graphical modelling techniques for this purpose; in particular, directed graphical models (Bayesian networks) have been widely used. This thesis proposes an undirected graphical model (Markov Random Field, or MRF) approach to estimating and sampling the distribution in EDAs. The interaction between variables in the solution is modelled as an undirected graph, and the joint probability of a solution is factorised as a Gibbs distribution. The thesis describes a model of the fitness function that approximates the energy in the Gibbs distribution, and shows how this model can be fitted to a population of solutions to estimate the parameters of the MRF. The estimated MRF is then sampled to generate the next population. This approach is applied to distribution estimation in a general EDA framework called Distribution Estimation using Markov Random Fields (DEUM). The thesis then proposes several variants of DEUM using different sampling techniques and tests their performance on a range of optimisation problems. The results show that, for most of the tested problems, the DEUM algorithms significantly outperform other EDAs, both in the number of fitness evaluations and in the quality of the solutions found. There are two main explanations for the success of DEUM algorithms. Firstly, DEUM builds a model of the fitness function to approximate the MRF; this contrasts with other EDAs, which build a model of selected solutions, and allows DEUM to use fitness directly in the variation part of evolution. Secondly, DEUM exploits the temperature coefficient in the Gibbs distribution to regulate the behaviour of the algorithm: at higher temperatures the distribution is closer to uniform, while at lower temperatures it concentrates near global optima. This gives DEUM explicit control over the convergence of the algorithm, resulting in better optimisation.
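    The temperature effect the abstract describes, near-uniform sampling at high T and concentration near optima at low T, follows from p(x) proportional to exp(-E(x)/T) and can be seen in a small Gibbs-sampling sketch. The negative-OneMax energy and all parameters are toy assumptions; this is not the DEUM fitness-model fitting procedure itself.

```python
import math, random

def bit_probability(energy, x, i, T):
    """P(x_i = 1 | rest of x) under p(x) ~ exp(-E(x)/T): one Gibbs step."""
    x1, x0 = x.copy(), x.copy()
    x1[i], x0[i] = 1, 0
    return 1.0 / (1.0 + math.exp((energy(x1) - energy(x0)) / T))

def gibbs_sample(energy, n, T, sweeps=50):
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            x[i] = 1 if random.random() < bit_probability(energy, x, i, T) else 0
    return x

energy = lambda x: -sum(x)    # toy energy: low energy = many ones
for T in (5.0, 0.2):          # high T: near-uniform; low T: near the optimum
    print("T =", T, "->", sum(gibbs_sample(energy, n=20, T=T)), "ones of 20")
```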

    An improved MOEA/D algorithm for multi-objective multicast routing with network coding

    Network coding enables higher network throughput, more balanced traffic, and more secure data transmission. However, complicated mathematical operations are incurred when packets are combined at intermediate nodes, and these, if not handled properly, lead to very high network resource consumption and unacceptable delay. It is therefore of vital importance to minimize various network resources and end-to-end delays while exploiting the promising benefits of network coding. Multicast is used in a growing number of applications, such as video conferencing and remote education. In this paper, the multicast routing problem with network coding is formulated as a multi-objective optimization problem (MOP) in which the total coding cost, the total link cost, and the end-to-end delay are minimized simultaneously. We adapt the multi-objective evolutionary algorithm based on decomposition (MOEA/D) to this MOP by hybridizing it with a population-based incremental learning technique that uses the global and historical information collected to provide additional guidance to the evolutionary search. Three new schemes are devised to improve performance: a probability-based initialization scheme, a problem-specific population updating rule, and a hybridized reproduction operator. Experimental results clearly demonstrate that the proposed algorithm outperforms a number of state-of-the-art MOEAs in both solution quality and computational time.
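    The decomposition that gives MOEA/D its name turns the MOP into many single-objective subproblems, one per weight vector. A common choice is the Tchebycheff scalarisation sketched below; the cost and delay numbers and the ideal point are invented toy values, and the paper's hybrid PBIL-based schemes are not reproduced.

```python
def tchebycheff(objectives, weights, ideal):
    """MOEA/D scalarisation: one weight vector maps a multi-objective
    point to a single value to minimize."""
    return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

ideal = (1.0, 1.0)                                # best value seen per objective
solutions = {"A": (2.0, 6.0), "B": (4.0, 3.0)}    # (link cost, delay)
for weights in [(0.8, 0.2), (0.2, 0.8)]:          # two subproblems
    best = min(solutions, key=lambda s: tchebycheff(solutions[s], weights, ideal))
    print(weights, "->", best)    # different weights pick different trade-offs
```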

    Evolvable Neuronal Paths: A Novel Basis for Information and Search in the Brain

    We propose a previously unrecognized kind of informational entity in the brain that is capable of acting as the basis for unlimited hereditary variation in neuronal networks. This unit is a path of activity through a network of neurons, analogous to a path taken through a hidden Markov model. To prove in principle the capabilities of this new kind of informational substrate, we show how a population of paths can be used as the hereditary material for a neuronally implemented genetic algorithm (the Swiss-army knife of black-box optimization techniques), which we have proposed elsewhere could operate at somatic timescales in the brain. We compare this to the same genetic algorithm using a standard 'genetic' informational substrate, i.e. non-overlapping discrete genotypes, on a range of optimization problems. A path evolution algorithm (PEA) is defined as any algorithm that implements natural selection of paths in a network substrate. A PEA is a previously unrecognized type of natural selection that is well suited for implementation by biological neuronal networks with structural plasticity. The important similarities and differences between a standard genetic algorithm and a PEA are considered. Whilst most experiments are conducted on an abstract network model, at the conclusion of the paper a slightly more realistic neuronal implementation of a PEA is outlined, based on Izhikevich spiking neurons. Finally, experimental predictions are made for the identification of such informational paths in the brain.
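    What it means for a path to serve as hereditary material is easiest to see with a toy network: a genotype is a walk from a start node to a goal node, and variation reroutes part of the walk. The sketch below shows only this representation-and-mutation step; the adjacency structure is invented, and selection, the heart of a full PEA, is omitted.

```python
import random

def random_path(adj, start, goal, max_len=50):
    """A candidate genotype: a random walk from start toward goal."""
    path = [start]
    while path[-1] != goal and len(path) < max_len:
        path.append(random.choice(adj[path[-1]]))
    return path

def mutate_path(adj, path, goal):
    """Heritable variation: keep a prefix, reroute the rest to the goal."""
    cut = random.randrange(1, len(path))
    return path[:cut] + random_path(adj, path[cut - 1], goal)[1:]

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # toy network
parent = random_path(adj, 0, 3)
print(parent, "->", mutate_path(adj, parent, 3))
```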

    Mixed Order Hyper-Networks for Function Approximation and Optimisation

    Many systems take inputs, which can be measured and sometimes controlled, and produce outputs, which can also be measured and which depend on the inputs. Taking numerous measurements from such systems produces data, which may be used either to model the system with the goal of predicting the output associated with a given input (function approximation, or regression) or to find the input settings required to produce a desired output (optimisation, or search). Approximating or optimising a function is central to the field of computational intelligence. There are many existing methods for performing regression and optimisation based on samples of data, but they all have limitations. Multi-layer perceptrons (MLPs) are universal approximators, but they suffer from the black-box problem, meaning that their structure and the function they implement are opaque to the user. They also have a propensity to become trapped in local minima or large plateaux in the error function during learning. A regression method with a structure that allows models to be compared, human knowledge to be extracted, optimisation searches to be guided, and model complexity to be controlled is desirable. This thesis presents such a method: a single framework for both regression and optimisation, the mixed order hyper-network (MOHN). A MOHN implements a function f: {-1,1}^n -> R to arbitrary precision. The structure of a MOHN makes explicit the ways in which input variables interact to determine the function output, which allows human insight and complexity control that are very difficult in neural networks with hidden units. The explicit structure representation also allows efficient algorithms for searching for an input pattern that leads to a desired output. A number of learning rules for estimating the weights from a sample of data are presented, along with a heuristic method for choosing which connections to include in a model. Several methods for searching a MOHN for inputs that lead to a desired output are compared. Experiments compare a MOHN to an MLP on regression tasks. The MOHN is found to achieve a comparable level of accuracy to an MLP but suffers less from local minima in the error function and shows less variance across multiple training trials. It is also easier to interpret and to combine into an ensemble. The trade-off between the fit of a model to its training data and its fit to an independent set of test data is shown to be easier to control in a MOHN than in an MLP. A MOHN is also compared to a number of existing optimisation methods, including those using estimation of distribution algorithms, genetic algorithms, and simulated annealing. The MOHN is able to find optimal solutions in far fewer function evaluations than these methods on tasks selected from the literature.
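    The explicit structure the abstract credits with interpretability can be made concrete: a MOHN's output is a weighted sum of products of +/-1 inputs over an explicit set of connections, and with the connections given, the weights can be estimated by least squares. The connection set, target function, and fitting choice below are illustrative assumptions, not the thesis's own learning rules or structure-selection heuristic.

```python
import itertools
import numpy as np

def mohn_output(x, weights):
    """f(x) = sum over connections S of w_S * prod(x_i for i in S), x in {-1,1}^n."""
    return sum(w * np.prod([x[i] for i in S]) for S, w in weights.items())

def fit_mohn(X, y, connections):
    """Least-squares weight estimate: each connection is one product column."""
    Phi = np.array([[np.prod([x[i] for i in S]) for S in connections] for x in X])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return dict(zip(connections, w))

# Recover f(x) = x0*x1 - 0.5*x2 from exhaustive samples of {-1,1}^3.
X = list(itertools.product([-1, 1], repeat=3))
y = np.array([x[0] * x[1] - 0.5 * x[2] for x in X])
connections = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]
weights = fit_mohn(X, y, connections)
print({S: round(float(w), 3) for S, w in weights.items()})
print(round(float(mohn_output((1, 1, -1), weights)), 3))   # 1*1 - 0.5*(-1) = 1.5
```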