
    Sub-structural Niching in Estimation of Distribution Algorithms

    We propose a sub-structural niching method that fully exploits the problem decomposition capability of linkage-learning methods such as estimation of distribution algorithms and concentrates on maintaining diversity at the sub-structural level. The proposed method consists of three key components: (1) problem decomposition and sub-structure identification, (2) sub-structure fitness estimation, and (3) sub-structural niche preservation. The sub-structural niching method is compared to restricted tournament selection (RTS), a niching method used in the hierarchical Bayesian optimization algorithm, with special emphasis on sustained preservation of multiple global solutions of a class of boundedly difficult, additively separable multimodal problems. The results show that sub-structural niching successfully maintains multiple global optima over a large number of generations and does so with a significantly smaller population than RTS. Additionally, the market share of each niche is much closer to the expected level under sub-structural niching than under RTS.
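    The RTS baseline mentioned above has a simple, well-known form. Below is a minimal sketch of restricted tournament replacement for bit-string genotypes; the function names, window size, and Hamming-distance measure are illustrative assumptions and are not taken from the paper.

        import random

        def hamming(a, b):
            # Number of positions at which two equal-length bit-strings differ.
            return sum(x != y for x, y in zip(a, b))

        def rts_insert(population, fitness, offspring, offspring_fitness, window=16):
            # Restricted tournament replacement: sample a window of individuals,
            # find the genotypically closest one, and replace it only if the
            # offspring is fitter, which tends to preserve distinct niches.
            window = min(window, len(population))
            candidates = random.sample(range(len(population)), window)
            closest = min(candidates, key=lambda i: hamming(population[i], offspring))
            if offspring_fitness > fitness[closest]:
                population[closest] = offspring
                fitness[closest] = offspring_fitness

    Here population would be a list of bit-strings (e.g. lists of 0/1 values) and fitness a parallel list of scores.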

    Learning to Generate Genotypes with Neural Networks

    Neural networks and evolutionary computation have a rich, intertwined history. They most commonly appear together when an evolutionary algorithm optimises the parameters and topology of a neural network for reinforcement learning problems, or when a neural network is applied as a surrogate fitness function to aid the evolutionary optimisation of expensive fitness functions. In this paper we take a different approach, asking whether a neural network can be used to provide a mutation distribution for an evolutionary algorithm, and what advantages this approach may offer. Two modern neural network models are investigated: a Denoising Autoencoder modified to produce stochastic outputs, and the Neural Autoregressive Distribution Estimator. Results show that the neural network approach to learning genotypes is able to solve many difficult discrete problems, such as MaxSat and HIFF, and regularly outperforms other evolutionary techniques.
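    As a rough illustration of the denoising-autoencoder idea described above, the sketch below trains a single-hidden-layer DAE with tied weights on selected bit-string individuals and then uses it as a stochastic mutation operator; the class name, network size, corruption rate, and training rule are assumptions made for illustration, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        class DAEMutator:
            # Denoising autoencoder over bit-string genotypes: trained on selected
            # individuals, then used to propose offspring by corrupting a parent,
            # reconstructing it, and sampling each bit from the output probability.
            def __init__(self, n_bits, n_hidden=32, corruption=0.1, lr=0.1):
                self.W = rng.normal(0.0, 0.1, (n_bits, n_hidden))
                self.b_h = np.zeros(n_hidden)
                self.b_v = np.zeros(n_bits)
                self.corruption = corruption
                self.lr = lr

            def _forward(self, v):
                h = sigmoid(v @ self.W + self.b_h)
                return h, sigmoid(h @ self.W.T + self.b_v)

            def train(self, selected, epochs=50):
                for _ in range(epochs):
                    for v in selected:
                        noisy = np.where(rng.random(v.shape) < self.corruption, 1 - v, v)
                        h, out = self._forward(noisy)
                        err = v - out                          # cross-entropy output gradient
                        dh = (err @ self.W) * h * (1.0 - h)    # backprop through tied weights
                        self.W += self.lr * (np.outer(err, h) + np.outer(noisy, dh))
                        self.b_v += self.lr * err
                        self.b_h += self.lr * dh

            def mutate(self, parent):
                noisy = np.where(rng.random(parent.shape) < self.corruption, 1 - parent, parent)
                _, out = self._forward(noisy)
                return (rng.random(out.shape) < out).astype(parent.dtype)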

    An exploration of evolutionary computation applied to frequency modulation audio synthesis parameter optimisation

    With the ever-increasing complexity of sound synthesisers, there is a growing demand for automated parameter estimation and sound space navigation techniques. This thesis explores the potential for evolutionary computation to automatically map known sound qualities onto the parameters of frequency modulation synthesis. Within this exploration are original contributions in the domain of synthesis parameter estimation and, within the developed system, in evolutionary computation, in the form of the evolutionary algorithms that drive the underlying optimisation process. Based upon the requirement for the parameter estimation system to deliver multiple search space solutions, existing evolutionary algorithmic architectures are augmented to enable niching while maintaining the strengths of the original algorithms. Two novel evolutionary algorithms are proposed in which cluster analysis is used to identify and maintain species within the evolving populations. A conventional evolution strategy and a cooperative coevolution strategy are defined, with cluster-orientated operators that enable the simultaneous optimisation of multiple search space solutions at distinct optima. A test methodology is developed that enables components of the synthesis matching problem to be identified and isolated, allowing the performance of different optimisation techniques to be compared quantitatively. A system is consequently developed that evolves sound matches using conventional frequency modulation synthesis models, and the effectiveness of different evolutionary algorithms is assessed and compared in application to both static and time-varying sound matching problems. Performance of the system is then evaluated by interview with expert listeners. The thesis closes with a reflection on the algorithms and systems that have been developed, discussing possibilities for the future of automated synthesis parameter estimation techniques and how they might be employed.
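    One cluster-oriented niching step of the kind described above can be sketched briefly: partition the population with k-means and run an elitist Gaussian-mutation update within each cluster, so that several optima are refined in parallel. The function names, cluster count, and mutation strength below are hypothetical choices and are not taken from the thesis.

        import numpy as np

        rng = np.random.default_rng(0)

        def kmeans(points, k, iters=20):
            # Plain k-means; returns the cluster label of each point.
            centers = points[rng.choice(len(points), k, replace=False)]
            for _ in range(iters):
                labels = np.argmin(((points[:, None] - centers[None]) ** 2).sum(-1), axis=1)
                for c in range(k):
                    if np.any(labels == c):
                        centers[c] = points[labels == c].mean(axis=0)
            return labels

        def cluster_es_generation(pop, fitness_fn, k=5, sigma=0.05, offspring_per_parent=4):
            # One generation of a cluster-oriented evolution strategy: the best
            # member of each cluster is kept as that niche's parent and mutated
            # with Gaussian noise, refining distinct optima simultaneously.
            labels = kmeans(pop, k)
            next_pop = []
            for c in range(k):
                members = pop[labels == c]
                if len(members) == 0:
                    continue
                fits = np.array([fitness_fn(x) for x in members])
                parent = members[np.argmax(fits)]
                next_pop.append(parent)                        # elitist niche parent
                for _ in range(offspring_per_parent):
                    next_pop.append(parent + rng.normal(0.0, sigma, parent.shape))
            return np.array(next_pop)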

    A clustering particle swarm optimizer for locating and tracking multiple optima in dynamic environments

    This article is posted here with permission from the IEEE. Copyright © 2010 IEEE. In the real world, many optimization problems are dynamic. This requires an optimization algorithm not only to find the global optimal solution under a specific environment but also to track the trajectory of the changing optima over dynamic environments. To address this requirement, this paper investigates a clustering particle swarm optimizer (PSO) for dynamic optimization problems. This algorithm employs a hierarchical clustering method to locate and track multiple peaks. A fast local search method is also introduced to search for optimal solutions in a promising subregion found by the clustering method. An experimental study is conducted based on the moving peaks benchmark to test the performance of the clustering PSO in comparison with several state-of-the-art algorithms from the literature. The experimental results show the efficiency of the clustering PSO for locating and tracking multiple optima in dynamic environments in comparison with other particle swarm optimization models based on the multiswarm method. This work was supported by the Engineering and Physical Sciences Research Council of the U.K. under Grant EP/E060722/1.
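    A rough sketch of the two ingredients described above, splitting the swarm into sub-swarms by hierarchical clustering and running a canonical PSO update inside each sub-swarm, is given below. The function names, single-linkage cut distance, and PSO coefficients are illustrative assumptions rather than the paper's exact procedure.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        rng = np.random.default_rng(0)

        def split_into_subswarms(positions, cut_distance):
            # Single-linkage hierarchical clustering of particle positions; particles
            # closer than cut_distance share a sub-swarm, so each sub-swarm can
            # converge on (and later track) its own peak.
            labels = fcluster(linkage(positions, method="single"), cut_distance,
                              criterion="distance")
            return [np.flatnonzero(labels == c) for c in np.unique(labels)]

        def pso_step(pos, vel, pbest, pbest_fit, idx, fitness_fn, w=0.72, c1=1.49, c2=1.49):
            # One canonical PSO update restricted to the particles in one sub-swarm;
            # the social attractor is the best personal best inside that sub-swarm.
            gbest = pbest[idx[np.argmax(pbest_fit[idx])]].copy()
            for i in idx:
                r1, r2 = rng.random(pos.shape[1]), rng.random(pos.shape[1])
                vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) + c2 * r2 * (gbest - pos[i])
                pos[i] += vel[i]
                f = fitness_fn(pos[i])
                if f > pbest_fit[i]:
                    pbest[i], pbest_fit[i] = pos[i].copy(), f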