
    Reducing the Computational Effort Associated with Evolutionary Optimisation in Single Component Design

    The dissertation presents innovative Evolutionary Search (ES) methods for reducing the computational expense associated with the optimisation of high-dimensional design spaces. The objective is to develop a semi-automated system which successfully negotiates complex search spaces. Such a system would be highly desirable to a human designer, providing optimised design solutions in realistic time. The design domain represents a real-world industrial problem concerning the optimal material distribution on the underside of a flat roof tile with varying load and support conditions. The designs utilise a large number of design variables (circa 400). Due to the high computational expense of detailed evaluation techniques such as finite element analysis, the number of calls to the evaluation model must be kept to a minimum if "good" design solutions are to be produced within an acceptable period of time. The objective therefore is to minimise the number of calls to the analysis tool whilst also achieving an optimal design solution. To minimise the number of model evaluations for detailed shape optimisation, several evolutionary algorithms are investigated. The better-performing algorithms are combined with multi-level search techniques developed to further reduce the number of evaluations and improve the quality of design solutions. Multi-level techniques utilise a number of levels of design representation; the solutions of the coarse representations are injected into the more detailed designs for fine-grained refinement. The techniques developed include Dynamic Shape Refinement (DSR), the Modified Injection Island Genetic Algorithm (MiiGA) and the Dynamic Injection Island Genetic Algorithm (DiiGA). The multi-level techniques are able to handle large numbers of design variables (i.e. >100). Based on the performance characteristics of the individual algorithms and multi-level search techniques, distributed search techniques are proposed. These techniques utilise different evolutionary strategies in a multi-level environment and were developed as a way of further reducing computational expense and improving design solutions. The results indicate considerable potential for a significant reduction in the number of evaluation calls during evolutionary search. In general this allows more efficient integration with computationally intensive analytical techniques during detailed design and contributes significantly to those preliminary stages of the design process where a greater degree of analysis is required to validate results from simpler preliminary design models.
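
    The coarse-to-fine injection idea behind these multi-level techniques can be sketched as follows. This is a minimal illustration rather than the thesis's implementation: the `upsample` mapping and the replace-worst injection policy are assumptions.

```python
def upsample(coarse, factor):
    """Map a coarse design vector onto a finer representation by
    repeating each design variable `factor` times (a stand-in for the
    thesis's coarse-to-fine mapping, whose details are not given here)."""
    return [gene for gene in coarse for _ in range(factor)]


def inject(fine_population, coarse_elites, factor):
    """Replace the worst members of the fine-level population with
    upsampled coarse-level elites, so fine-grained refinement starts
    from good coarse solutions. Assumes `fine_population` is sorted
    best-first; fitness evaluation happens elsewhere."""
    injected = [upsample(e, factor) for e in coarse_elites]
    return fine_population[:len(fine_population) - len(injected)] + injected
```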

    From Understanding Genetic Drift to a Smart-Restart Mechanism for Estimation-of-Distribution Algorithms

    Estimation-of-distribution algorithms (EDAs) are optimization algorithms that learn a distribution on the search space from which good solutions can be sampled easily. A key parameter of most EDAs is the sample size (population size). If the population size is too small, the update of the probabilistic model builds on few samples, leading to the undesired effect of genetic drift. Overly large population sizes avoid genetic drift but slow down the process. Building on a recent quantitative analysis of how the population size leads to genetic drift, we design a smart-restart mechanism for EDAs. By stopping runs when the risk of genetic drift is high, it automatically runs the EDA in good parameter regimes. Via a mathematical runtime analysis, we prove a general performance guarantee for this smart-restart scheme. This in particular shows that in many situations where the optimal (problem-specific) parameter values are known, the restart scheme automatically finds these, leading to asymptotically optimal performance. We also conduct an extensive experimental analysis. On four classic benchmark problems, we clearly observe the critical influence of the population size on performance, and we find that the smart-restart scheme leads to performance close to that obtainable with optimal parameter values. Our results also show that previous theory-based suggestions for the optimal population size can be far from optimal, leading to performance clearly inferior to that obtained via the smart-restart scheme. We also conduct experiments with PBIL (cross-entropy algorithm) on two combinatorial optimization problems from the literature, the max-cut problem and the bipartition problem. Again, we observe that the smart-restart mechanism finds much better values for the population size than those suggested in the literature, leading to much better performance.
    Comment: Accepted for publication in "Journal of Machine Learning Research". Extended version of our GECCO 2020 paper. This article supersedes arXiv:2004.0714.
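
    A minimal sketch of the restart idea, assuming a generic `run_eda(mu, max_evals)` interface (an assumption, not the paper's API). The evaluation budget growing with the population size stands in for the paper's drift-based stopping criterion, whose exact form is not reproduced here.

```python
def smart_restart_eda(run_eda, mu0=16, growth=2, budget_factor=4):
    """Smart-restart sketch: run the EDA with population size mu under
    an evaluation budget that grows with mu (a stand-in for the paper's
    drift-based stopping rule); if the run is cut off before the
    optimum is found, restart with a larger population so genetic
    drift becomes less likely. `run_eda(mu, max_evals)` is assumed to
    return (best_solution, found_optimum)."""
    mu = mu0
    while True:
        best, found = run_eda(mu, max_evals=budget_factor * mu * mu)
        if found:
            return best
        mu *= growth  # larger population -> weaker genetic drift
```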

    Generation and optimisation of real-world static and dynamic location-allocation problems with application to the telecommunications industry.

    The location-allocation (LA) problem concerns the location of facilities and the allocation of demand, to minimise or maximise a particular function such as cost, profit or a measure of distance. Many formulations of LA problems have been presented in the literature to capture and study the unique aspects of real-world problems. However, some real-world aspects, such as resilience, are still lacking in the literature. Resilience ensures uninterrupted supply of demand and enhances the quality of service. Due to population shifts and changes in market size and the economic and labour markets - which often cause demand to be stochastic - a reasonable LA problem formulation should consider some aspect of future uncertainty. Almost all LA problem formulations in the literature that capture some aspect of future uncertainty fall in the domain of dynamic optimisation problems, where new facilities are located every time the environment changes. However, considering the substantial cost associated with locating a new facility, it becomes infeasible to locate facilities each time the environment changes. In this study, we propose and investigate variations of LA problem formulations. Firstly, we develop and study new LA formulations, which extend the location of facilities and the allocation of demand to add a layer of resilience. We apply the population-based incremental learning algorithm for the first time in the literature to solve these novel LA formulations. Secondly, we propose and study a new dynamic formulation of the LA problem where facilities are opened once at the start of a defined period and are expected to be satisfactory in servicing customers' demands irrespective of changes in customer distribution. The problem is based on the idea that customers will change locations over a defined period and that these changes have to be taken into account when establishing facilities to service changing customer distributions. Thirdly, we employ a simulation-based optimisation approach to tackle the new dynamic formulation. Owing to the high computational costs associated with simulation-based optimisation, we investigate the concept of Racing, an approach used in model selection, to reduce the high computational cost by employing the minimum number of simulations for solution selection.
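
    For reference, a minimal population-based incremental learning (PBIL) sketch over binary open/close decisions for candidate facility sites. The thesis's allocation, resilience and cost objectives are abstracted into a user-supplied `fitness` function, and all parameter values are illustrative.

```python
import random

def pbil(fitness, n_sites, pop_size=50, lr=0.1, generations=200):
    """PBIL over binary strings: sample a population from a probability
    vector, then shift the vector towards the best sample. Each bit
    decides whether a candidate facility site is opened; `fitness`
    scores a full open/close vector (higher is better)."""
    p = [0.5] * n_sites                      # one open-probability per site
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        samples = [[int(random.random() < pi) for pi in p]
                   for _ in range(pop_size)]
        elite = max(samples, key=fitness)
        # Move the probability vector towards the elite sample.
        p = [(1 - lr) * pi + lr * ei for pi, ei in zip(p, elite)]
        if fitness(elite) > best_fit:
            best, best_fit = elite, fitness(elite)
    return best
```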

    TEDA: A Targeted Estimation of Distribution Algorithm

    This thesis discusses the development and performance of a novel evolutionary algorithm, the Targeted Estimation of Distribution Algorithm (TEDA). TEDA takes the concept of targeting, an idea that has previously been shown to be effective as part of a Genetic Algorithm (GA) called Fitness Directed Crossover (FDC), and introduces it into a novel hybrid algorithm that transitions from a GA to an Estimation of Distribution Algorithm (EDA). Targeting is a process for solving optimisation problems where there is a concept of control points, genes that can be said to be active, and where the total number of control points found within a solution is as important as where they are located. When generating a new solution, an algorithm that uses targeting must first choose the number of control points to set in the new solution before choosing which to set. The hybrid approach is designed to take advantage of the ability of EDAs to exploit patterns within the population to effectively locate the global optimum, while avoiding the tendency of EDAs to prematurely converge. This is achieved by initially using a GA to effectively explore the search space before transitioning into an EDA as the population converges on the region of the global optimum. As targeting places an extra restriction on the solutions produced by specifying their size, combining it with the hybrid approach allows TEDA to produce solutions that are of an optimal size and of a higher quality than would be found using a GA alone, without risking a loss of diversity. TEDA is tested on three different problem domains: optimal control of cancer chemotherapy, network routing and Feature Subset Selection (FSS). Of these problems, TEDA showed a consistent advantage over standard EAs on the routing problem and demonstrated that it is able to find good solutions faster than untargeted EAs and non-evolutionary approaches on the FSS problem. It did not demonstrate any advantage over other approaches when applied to chemotherapy. The FSS domain demonstrated that in large and noisy problems TEDA’s targeting-derived ability to reduce the size of the search space significantly increased the speed with which good solutions could be found. The routing domain demonstrated that, where the ideal number of control points is deceptive, both targeting and the exploitative capabilities of an EDA are needed, making TEDA a more effective approach than both untargeted approaches and FDC. Additionally, in none of the problems was TEDA seen to perform significantly worse than any alternative approaches.
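
    A rough sketch of the targeting step described above, under stated assumptions: the child's control-point count is drawn from the range spanned by two parents (an FDC-style choice), and the points themselves are chosen from per-position marginal frequencies over the parent pool (the EDA side). TEDA's actual operators and its GA-to-EDA transition are not reproduced here.

```python
import random

def targeted_sample(parents):
    """Generate one child from a pool of binary parent solutions:
    first fix HOW MANY control points to set, then decide WHICH
    positions to set using marginal frequencies over the parents."""
    n = len(parents[0])
    p1, p2 = random.sample(parents, 2)
    # Step 1: target number of control points, between the two parents' counts.
    k = random.randint(min(sum(p1), sum(p2)), max(sum(p1), sum(p2)))
    # Step 2: set the k positions most frequently active across the pool.
    marginals = [sum(par[i] for par in parents) for i in range(n)]
    chosen = sorted(range(n), key=lambda i: marginals[i], reverse=True)[:k]
    child = [0] * n
    for i in chosen:
        child[i] = 1
    return child
```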

    Machine learning for corporate failure prediction : an empirical study of South African companies

    The research objective of this study was to construct an empirical model for the prediction of corporate failure in South Africa through the application of machine learning techniques using information generally available to investors. The study began with a thorough review of the corporate failure literature, breaking the process of prediction model construction into the following steps:
    * Defining corporate failure
    * Sample selection
    * Feature selection
    * Data pre-processing
    * Feature subset selection
    * Classifier construction
    * Model evaluation
    These steps were applied to the construction of a model, using a sample of failed companies that were listed on the JSE Securities Exchange between 1 January 1996 and 30 June 2003. A paired sample of non-failed companies was selected. Pairing was performed on the basis of year of failure, industry and asset size (total assets per the company financial statements, excluding intangible assets). A minimum of two years and a maximum of three years of financial data were collated for each company. Such data was mainly sourced from BFA McGregor RAID Station, although the BFA McGregor Handbook and JSE Handbook were also consulted for certain data items. A total of 75 financial and non-financial ratios were calculated for each year of data collected for every company in the final sample. Two databases of ratios were created - one for all companies with at least two years of data and another for those companies with three years of data. Missing and undefined data items were rectified before all the ratios were normalised. The set of normalised values was then imported into MatLab Version 6 and input into a Population-Based Incremental Learning (PBIL) algorithm. PBIL was then used to identify those subsets of features that best separated the failed and non-failed data clusters for one-, two- and three-year forward forecast periods. Thornton's Separability Index (SI) was used to evaluate the degree of separation achieved by each feature subset, as sketched below.
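
    Thornton's Separability Index admits a compact sketch: the fraction of instances whose nearest neighbour, measured over the selected features only, shares their class label. The interface below (`X` as ratio vectors, `y` as failed/non-failed labels) is an assumption, not the study's code.

```python
def separability_index(X, y, subset):
    """Thornton's Separability Index: the proportion of instances whose
    nearest neighbour (Euclidean distance restricted to the features in
    `subset`) has the same class label. Serves as the score PBIL would
    maximise when searching for well-separating feature subsets."""
    def dist(a, b):
        return sum((a[i] - b[i]) ** 2 for i in subset)
    same = 0
    for i, xi in enumerate(X):
        nearest = min((j for j in range(len(X)) if j != i),
                      key=lambda j: dist(xi, X[j]))
        same += y[i] == y[nearest]
    return same / len(X)
```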

    Advances in Evolutionary Algorithms

    With the recent trends towards massive data sets and significant computational power, combined with advances in evolutionary algorithms, evolutionary computation is becoming much more relevant to practice. The aim of the book is to present recent improvements, innovative ideas and concepts from one part of the huge field of evolutionary algorithms.

    Evolutionary approaches for portfolio optimization

    Portfolio optimization involves the optimal assignment of limited capital to different available financial assets to achieve a reasonable trade-off between profit and risk objectives. Markowitz’s mean variance (MV) model is widely regarded as the foundation of modern portfolio theory and provides a quantitative framework for portfolio optimization problems. In real markets, investors commonly face trading restrictions, and constructed portfolios are required to meet these constraints. When additional constraints are added to the basic MV model, the problem becomes more complex, and exact optimization approaches struggle to deliver solutions within reasonable time for large problem sizes. Introducing the cardinality constraint alone already transforms the classic quadratic optimization model into a mixed-integer quadratic programming problem, which is NP-hard. Evolutionary algorithms, a class of metaheuristics, are one of the known alternatives for optimization problems that are too complex to be solved using deterministic techniques. This thesis focuses on single-period portfolio optimization problems with practical trading constraints and two different risk measures. Four hybrid evolutionary algorithms are presented to efficiently solve these problems with gradually more complex real-world constraints. In the first part of the thesis, the mean variance portfolio model is investigated by taking into account real-world constraints. A hybrid evolutionary algorithm (PBILDE) for portfolio optimization with cardinality and quantity constraints is presented. The proposed PBILDE is able to achieve a strong synergetic effect through hybridization of PBIL and DE. A partially guided mutation and an elitist update strategy are proposed in order to promote the efficient convergence of PBILDE. Its effectiveness is evaluated and compared with other existing algorithms over a number of datasets. A multi-objective scatter search with archive (MOSSwA) algorithm for portfolio optimization with cardinality, quantity and pre-assignment constraints is then presented. New subset generation and solution combination methods are proposed to generate efficient and diverse portfolios. A learning-guided multi-objective evolutionary (MODEwAwL) algorithm for portfolio optimization problems with cardinality, quantity, pre-assignment and round lot constraints is presented. A learning mechanism is introduced in order to extract important features from the set of elite solutions. Problem-specific selection heuristics are introduced in order to identify high-quality solutions with a reduced computational cost. An efficient and effective candidate generation scheme utilizing a learning mechanism, problem-specific heuristics and effective direction-based search methods is proposed to guide the search towards the promising regions of the search space. In the second part of the thesis, an alternative risk measure, Value-at-Risk (VaR), is considered. A non-parametric mean-VaR model with six practical trading constraints is investigated. A multi-objective evolutionary algorithm with guided learning (MODE-GL) is presented for the mean-VaR model. Two different variants of DE mutation schemes in the solution generation scheme are proposed in order to promote the exploration of the search towards the least crowded region of the solution space. Experimental results using historical daily financial market data from the S&P 100 and S&P 500 indices are presented. When cardinality constraints are considered, incorporating a learning mechanism significantly promotes the efficient convergence of the search.
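
    For reference, the cardinality- and quantity-constrained mean variance model investigated in the first part of the thesis is commonly written as follows (a standard formulation; the notation, including the risk-return trade-off parameter λ, is assumed rather than taken from the thesis):

```latex
\begin{align*}
\min_{w,\,z} \quad & \lambda\, w^{\top} \Sigma\, w \;-\; (1-\lambda)\, \mu^{\top} w \\
\text{s.t.} \quad  & \sum_{i=1}^{N} w_i = 1,
                     \qquad \sum_{i=1}^{N} z_i = K \quad \text{(cardinality)}, \\
                   & \varepsilon\, z_i \le w_i \le \delta\, z_i,
                     \qquad z_i \in \{0,1\}, \quad i = 1,\dots,N \quad \text{(quantity)}.
\end{align*}
```

    Here Σ is the covariance matrix of asset returns, μ the vector of expected returns, K the number of assets held, and ε, δ the minimum and maximum proportions of capital for any held asset; the binary variables z_i are what make the problem mixed-integer and NP-hard, motivating the hybrid evolutionary algorithms above.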

    Nuclear Power

    The world of the twenty-first century is an energy-consuming society. Due to an increasing population and rising living standards, each year the world requires more energy and new, efficient systems for delivering it. Furthermore, the new systems must be inherently safe and environmentally benign. These realities of today's world are among the reasons that have led to serious interest in deploying nuclear power as a sustainable energy source. Today's nuclear reactors are safe and highly efficient energy systems that offer electricity and a multitude of co-generation energy products ranging from potable water to heat for industrial applications. The goal of the book is to show the current state of the art in the covered technical areas as well as to demonstrate how general engineering principles and methods can be applied to nuclear power systems.

    Evolutionary mechanism design using agent-based models
