
    Computational steering of a multi-objective genetic algorithm using a PDA

    The execution process of a genetic algorithm typically involves some trial-and-error. This is due to the difficulty in setting the initial parameters of the algorithm – especially when little is known about the problem domain. The problem is magnified when applied to multi-objective optimisation, as care is needed to ensure that the final population of candidate solutions is representative of the trade-off surface. We propose a computational steering system that allows the engineer to interact with the optimisation routine during execution. This interaction can be as simple as monitoring the values of some parameters during the execution process, or could involve altering those parameters to influence the quality of the solutions produced by the optimisation process.
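
    As a rough illustration of the steering idea described above, the sketch below exposes the optimiser's parameters between generations through a callback so they can be monitored or altered mid-run. The routine names (`evolve_one_generation`, `steered_run`) and the parameters (`mutation_rate`, `mutation_sigma`) are hypothetical stand-ins for the PDA-based system in the paper.

```python
# Sketch of computational steering: the optimiser exposes its parameters
# between generations via a callback so an engineer (e.g. from a PDA client)
# can monitor or adjust them while the run is in progress.
# All names and parameter values here are illustrative.
import random
from typing import Callable, Dict, List

def evolve_one_generation(population: List[List[float]],
                          params: Dict[str, float]) -> List[List[float]]:
    """Placeholder for one MOGA generation (selection, crossover, mutation)."""
    next_gen = []
    for individual in population:
        child = [gene + random.gauss(0.0, params["mutation_sigma"])
                 if random.random() < params["mutation_rate"] else gene
                 for gene in individual]
        next_gen.append(child)
    return next_gen

def steered_run(population: List[List[float]], generations: int,
                steer: Callable[[int, Dict[str, float]], None]) -> List[List[float]]:
    params = {"mutation_rate": 0.1, "mutation_sigma": 0.5}
    for gen in range(generations):
        population = evolve_one_generation(population, params)
        steer(gen, params)  # steering hook: inspect or alter parameters here
    return population

# Example steering policy: halve the mutation strength halfway through the run.
def example_steer(gen: int, params: Dict[str, float]) -> None:
    if gen == 50:
        params["mutation_sigma"] *= 0.5

final_population = steered_run(
    [[random.random() for _ in range(5)] for _ in range(20)],
    generations=100, steer=example_steer)
```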

    Mutual benefits of two multicriteria analysis methodologies: A case study for batch plant design

    This paper presents a MultiObjective Genetic Algorithm (MOGA) optimization framework for batch plant design. For this purpose, two approaches are implemented and compared with respect to three criteria, i.e., investment cost, equipment number and a flexibility indicator based on work in process (the so-called WIP) computed by use of a discrete-event simulation model. The first approach involves a genetic algorithm in order to generate acceptable solutions, from which the best ones are chosen by using a Pareto Sort algorithm. The second approach combines the previous Genetic Algorithm with a multicriteria analysis methodology, i.e., the Electre method, in order to find the best solutions. The performance of the two procedures is studied for a large-size problem, and a comparison between the procedures is then made.
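
    The Pareto sort step of the first approach can be sketched as below; the candidate values and the assumption that all three criteria (investment cost, equipment number, WIP) are minimised are illustrative, not taken from the case study.

```python
# Pareto sort over GA-generated candidates, assuming every criterion
# (investment cost, equipment number, WIP) is to be minimised.
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """a dominates b if it is no worse on every criterion and strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions: List[Sequence[float]]) -> List[Sequence[float]]:
    """Keep only the non-dominated solutions."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# Illustrative candidates: (investment cost, equipment number, WIP)
candidates = [(1.2e6, 12, 30.0), (1.0e6, 14, 28.5),
              (1.3e6, 11, 35.0), (1.3e6, 14, 36.0)]
print(pareto_front(candidates))  # the last candidate is dominated and dropped
```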

    The Emergence of Canalization and Evolvability in an Open-Ended, Interactive Evolutionary System

    Natural evolution has produced a tremendous diversity of functional organisms. Many believe an essential component of this process was the evolution of evolvability, whereby evolution speeds up its ability to innovate by generating a more adaptive pool of offspring. One hypothesized mechanism for evolvability is developmental canalization, wherein certain dimensions of variation become more likely to be traversed and others are prevented from being explored (e.g. offspring tend to have similarly sized legs, and mutations affect the length of both legs, not each leg individually). While ubiquitous in nature, canalization almost never evolves in computational simulations of evolution. Not only does that deprive us of in silico models in which to study the evolution of evolvability, but it also raises the question of which conditions give rise to this form of evolvability. Answering this question would shed light on why such evolvability emerged naturally and could accelerate engineering efforts to harness evolution to solve important engineering challenges. In this paper we reveal a unique system in which canalization did emerge in computational evolution. We document that genomes entrench certain dimensions of variation that were frequently explored during their evolutionary history. The genetic representation of these organisms also evolved to be highly modular and hierarchical, and we show that these organizational properties correlate with increased fitness. Interestingly, the type of computational evolutionary experiment that produced this evolvability was very different from traditional digital evolution in that there was no objective, suggesting that open-ended, divergent evolutionary processes may be necessary for the evolution of evolvability.
    Comment: SI can be found at: http://www.evolvingai.org/files/SI_0.zi
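
    A toy contrast of what developmental canalization means at the representation level (this is not the evolutionary system studied in the paper): when one gene controls both legs, mutation can only explore the "both legs together" dimension, whereas a one-gene-per-leg genome lets the legs drift apart.

```python
# Toy contrast between an uncanalized and a canalized genome (illustrative only):
# with one gene per leg, mutation can make the legs unequal; with a single
# shared gene, offspring always have equally sized legs.
import random

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0.0, sigma) for g in genome]

# Uncanalized representation: independent genes for each leg.
uncanalized_parent = [1.0, 1.0]          # [left_leg, right_leg]
left, right = mutate(uncanalized_parent)
print(f"uncanalized offspring legs: {left:.3f}, {right:.3f}")   # usually unequal

# Canalized representation: one gene sets both legs.
canalized_parent = [1.0]                 # [leg_length]
leg = mutate(canalized_parent)[0]
print(f"canalized offspring legs:   {leg:.3f}, {leg:.3f}")      # always equal
```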

    Searching for test data with feature diversity

    There is an implicit assumption in software testing that more diverse and varied test data is needed for effective testing and to achieve different types and levels of coverage. Generic approaches based on information theory to measure and thus, implicitly, to create diverse data have also been proposed. However, if the tester is able to identify features of the test data that are important for the particular domain or context in which the testing is being performed, the use of generic diversity measures such as these may be neither sufficient nor efficient for creating test inputs that show diversity in terms of these features. Here we investigate different approaches to find data that are diverse according to a specific set of features, such as length, depth of recursion, etc. Even though these features will be less general than measures based on information theory, their use may provide a tester with more direct control over the type of diversity that is present in the test data. Our experiments are carried out in the context of a general test data generation framework that can generate both numerical and highly structured data. We compare random sampling for feature diversity to different approaches based on search and find a hill climbing search to be efficient. The experiments highlight many trade-offs that need to be taken into account when searching for diversity. We argue that recurrent test data generation motivates building statistical models that can then help to more quickly achieve feature diversity.
    Comment: This version was submitted on April 14th 201
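
    A minimal sketch of the feature-diversity search, assuming hill climbing over a small test suite and two illustrative features (input length and bracket-nesting depth); the diversity score and the input generator below are placeholders for the paper's test data generation framework.

```python
# Hill climbing for feature diversity of a small test suite. The features
# (input length, bracket-nesting depth) and the generator are illustrative;
# diversity is the mean pairwise distance between feature vectors.
import random
from itertools import combinations
from typing import List, Tuple

def features(s: str) -> Tuple[int, int]:
    depth = max_depth = 0
    for c in s:
        if c == "(":
            depth += 1
            max_depth = max(max_depth, depth)
        elif c == ")":
            depth = max(depth - 1, 0)
    return (len(s), max_depth)

def diversity(suite: List[str]) -> float:
    pairs = list(combinations(suite, 2))
    if not pairs:
        return 0.0
    return sum(abs(fa[0] - fb[0]) + abs(fa[1] - fb[1])
               for fa, fb in ((features(a), features(b)) for a, b in pairs)) / len(pairs)

def random_input() -> str:
    return "".join(random.choice("ab()") for _ in range(random.randint(1, 20)))

def hill_climb(suite_size: int = 5, steps: int = 200) -> Tuple[List[str], float]:
    suite = [random_input() for _ in range(suite_size)]
    best = diversity(suite)
    for _ in range(steps):
        candidate = list(suite)
        candidate[random.randrange(suite_size)] = random_input()  # mutate one input
        score = diversity(candidate)
        if score > best:                                          # accept improvements only
            suite, best = candidate, score
    return suite, best

suite, score = hill_climb()
print(f"diversity = {score:.2f}: {suite}")
```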

    Non-parametric inversion of gravitational lensing systems with few images using a multi-objective genetic algorithm

    Galaxies acting as gravitational lenses are surrounded by, at most, a handful of images. This apparent paucity of information forces one to make the best possible use of what information is available to invert the lens system. In this paper, we explore the use of a genetic algorithm to invert in a non-parametric way strong lensing systems containing only a small number of images. Perhaps the most important conclusion of this paper is that it is possible to infer the mass distribution of such gravitational lens systems using a non-parametric technique. We show that including information about the null space (i.e. the region where no images are found) is a prerequisite to avoid predicting a large number of spurious images and to reliably reconstruct the lens mass density. While the total mass of the lens is usually constrained within a few percent, the fidelity of the reconstruction of the lens mass distribution depends on the number and position of the images. The technique employed to include null space information can be extended in a straightforward way to add additional constraints, such as weak lensing data or time delay information.
    Comment: 9 pages, accepted for publication by MNRA
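
    The role of the null-space constraint can be sketched as a pair of objectives for the genetic algorithm: reproduce the observed image positions while predicting no images where none are observed. Everything below is illustrative; `predict_images`, `in_null_space` and the toy positions are hypothetical stand-ins for the lens-equation solver and the gridded mass model used in the paper.

```python
# Two objectives for a candidate mass map in the multi-objective GA:
# (1) reproduce the observed image positions, (2) predict no spurious images
# in the null space. `predict_images` and `in_null_space` are hypothetical
# stand-ins for the lens-equation solver and the observed null-space mask.
from typing import Callable, List, Tuple

Position = Tuple[float, float]

def lens_objectives(mass_map,
                    source_position: Position,
                    predict_images: Callable[..., List[Position]],
                    observed: List[Position],
                    in_null_space: Callable[[Position], bool]) -> Tuple[float, int]:
    predicted = predict_images(mass_map, source_position)
    # Objective 1: squared distance from each observed image to its nearest prediction.
    position_error = sum(min((px - ox) ** 2 + (py - oy) ** 2 for px, py in predicted)
                         for ox, oy in observed)
    # Objective 2: number of predicted images falling where no image is observed.
    spurious_count = sum(1 for p in predicted if in_null_space(p))
    return position_error, spurious_count

# Toy usage: a fake predictor returns three images, one lying in the null space x > 1.
toy_predict = lambda mass_map, src: [(0.1, 0.2), (-0.3, 0.1), (1.5, 0.0)]
print(lens_objectives(None, (0.0, 0.0), toy_predict,
                      observed=[(0.1, 0.2), (-0.3, 0.1)],
                      in_null_space=lambda p: p[0] > 1.0))   # -> (0.0, 1)
```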

    Automatic surrogate model type selection during the optimization of expensive black-box problems

    The use of Surrogate Based Optimization (SBO) has become commonplace for optimizing expensive black-box simulation codes. A popular SBO method is the Efficient Global Optimization (EGO) approach. However, the performance of SBO methods critically depends on the quality of the guiding surrogate. In EGO the surrogate type is usually fixed to Kriging even though this may not be optimal for all problems. In this paper the authors propose to extend the well-known EGO method with an automatic surrogate model type selection framework that is able to dynamically select the best model type (including hybrid ensembles) depending on the data available so far. Hence, the expected improvement criterion will always be based on the best approximation available at each step of the optimization process. The approach is demonstrated on a structural optimization problem, i.e., reducing the stress on a truss-like structure. Results show that the proposed algorithm consistently finds better optima than traditional Kriging-based infill optimization.
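
    A simplified sketch of the selection idea: at each EGO iteration, the surrogate whose cross-validation error on the data gathered so far is lowest is refit and used to compute expected improvement. For brevity the "model types" here are just two Gaussian-process kernels (via scikit-learn), whereas the paper's framework also considers non-Kriging models and hybrid ensembles.

```python
# Simplified EGO loop with automatic surrogate selection. The "model types"
# are two GP kernels only; the expensive simulator is replaced by a cheap
# 1-D test function. All names and settings are illustrative.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern
from sklearn.model_selection import cross_val_score

def expensive_black_box(x):
    return float(np.sin(3 * x) + 0.5 * x ** 2)      # stand-in for the simulator

def expected_improvement(model, X_cand, f_best):
    mu, sigma = model.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(5, 1))                  # initial design
y = np.array([expensive_black_box(x[0]) for x in X])

model_types = {"kriging-rbf": RBF(), "kriging-matern": Matern(nu=2.5)}

for iteration in range(15):
    # 1. Select the surrogate type that generalises best on the data so far.
    scores = {name: cross_val_score(GaussianProcessRegressor(kernel=k, normalize_y=True),
                                    X, y, cv=3,
                                    scoring="neg_mean_squared_error").mean()
              for name, k in model_types.items()}
    best_name = max(scores, key=scores.get)
    surrogate = GaussianProcessRegressor(kernel=model_types[best_name],
                                         normalize_y=True).fit(X, y)

    # 2. Maximise expected improvement over a candidate grid (1-D for brevity).
    X_cand = np.linspace(-2, 2, 401).reshape(-1, 1)
    x_next = X_cand[np.argmax(expected_improvement(surrogate, X_cand, y.min()))]

    # 3. Evaluate the expensive function and augment the data set.
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_black_box(x_next[0]))

print("best point found:", X[np.argmin(y)], "value:", y.min())
```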

    Machine Learning for Fluid Mechanics

    The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from field measurements, experiments and large-scale simulations at multiple spatiotemporal scales. Machine learning offers a wealth of techniques to extract information from data that could be translated into knowledge about the underlying fluid mechanics. Moreover, machine learning algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of the history, current developments, and emerging opportunities of machine learning for fluid mechanics. It outlines fundamental machine learning methodologies and discusses their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry that considers data as an inherent part of modeling, experimentation, and simulation. Machine learning provides a powerful information processing framework that can enrich, and possibly even transform, current lines of fluid mechanics research and industrial applications.
    Comment: To appear in the Annual Reviews of Fluid Mechanics, 202