
    A multi-objective evolutionary approach to simulation-based optimisation of real-world problems.

    This thesis presents a novel evolutionary optimisation algorithm that can improve the quality of solutions in simulation-based optimisation. Simulation-based optimisation is the process of finding optimal parameter settings without explicitly examining each possible configuration of settings. An optimisation algorithm generates potential configurations and sends them to the simulation, which acts as an evaluation function. The evaluation results are used to refine the optimisation so that it eventually returns a high-quality solution. The algorithm described in this thesis integrates multi-objective optimisation, parallelism, surrogate usage, and noise handling in a unique way to deal with the difficulties these characteristics impose on simulation-based optimisation. To handle multiple, conflicting optimisation objectives, the algorithm uses a Pareto approach in which the set of best trade-off solutions is sought and presented to the user. The algorithm supports a high degree of parallelism by adopting an asynchronous master-slave parallelisation model in combination with an incremental population refinement strategy. A surrogate evaluation function is adopted to quickly identify promising candidate solutions and filter out poor ones. A novel technique based on inheritance compensates for the uncertainties associated with the approximative surrogate evaluations. Furthermore, a novel technique for multi-objective problems that effectively reduces noise through a dynamic resampling procedure is used to tackle the problem of real-world unpredictability (noise). The proposed algorithm is evaluated on benchmark problems and on two complex real-world manufacturing optimisation problems. The first real-world problem concerns the optimisation of a production cell at Volvo Aero, while the second concerns the optimisation of a camshaft machining line at Volvo Cars Engine. The optimisation results show that the algorithm finds better solutions for all the problems considered than existing, similar algorithms. The new techniques for dealing with surrogate imprecision and noise are identified as key reasons for the good performance. University of Skövde; Knowledge Foundation, Sweden.
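
    The Pareto approach mentioned above rests on dominance comparisons between objective vectors. As a point of reference only (this is not the thesis algorithm), a minimal sketch of extracting the non-dominated set for a minimisation problem might look as follows; in the full algorithm such comparisons would be applied to surrogate- or simulation-based objective estimates rather than exact values:

```python
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(front: List[Sequence[float]]) -> List[Sequence[float]]:
    """Return the subset of objective vectors not dominated by any other."""
    return [p for p in front if not any(dominates(q, p) for q in front if q is not p)]

# Example: three trade-off solutions for two objectives (e.g. cost, cycle time)
print(non_dominated([(1.0, 5.0), (2.0, 3.0), (2.5, 3.5)]))
# -> [(1.0, 5.0), (2.0, 3.0)]   ((2.5, 3.5) is dominated by (2.0, 3.0))
```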

    Systems for AutoML Research


    Numerical and Evolutionary Optimization 2020

    This book grew out of the 8th International Workshop on Numerical and Evolutionary Optimization (NEO) and collects papers at the intersection of the two research areas covered by the workshop: numerical optimization and evolutionary search techniques. While focusing on the design of fast and reliable methods lying across these two paradigms, the resulting techniques are applicable to a broad class of real-world problems, such as pattern recognition, routing, energy, production lines, prediction, and modeling, among others. This volume is intended to serve as a useful reference for mathematicians, engineers, and computer scientists exploring current issues and solutions emerging from these mathematical and computational methods and their applications.

    Evolutionary multiobjective optimization for automatic agent-based model calibration: A comparative study

    This work was supported by the Spanish Agencia Estatal de Investigación, the Andalusian Government, the University of Granada, and European Regional Development Funds (ERDF) under grants EXASOCO (PGC2018-101216-B-I00), SIMARK (P18-TP-4475), and AIMAR (A-TIC-284-UGR18). Manuel Chica was also supported by the Ramón y Cajal programme (RYC-2016-19800). The authors would like to thank the Centro de Servicios de Informática y Redes de Comunicaciones (CSIRC), University of Granada, for providing the computing resources (Alhambra supercomputer). Complex problems can be analyzed using model simulation, but its use is not straightforward, since modelers must carefully calibrate and validate their models before using them. This is especially relevant for models with multiple outputs, as their calibration requires handling different criteria jointly. This can be achieved with automated calibration and evolutionary multiobjective optimization methods, which are the state of the art in multiobjective optimization as they can find a set of representative Pareto solutions under these restrictions in a single run. However, selecting the best algorithm for automated calibration can be overwhelming. We propose to deal with this issue by conducting an exhaustive analysis of the performance of several evolutionary multiobjective optimization algorithms when calibrating several instances of an agent-based model for marketing with multiple outputs. We analyze the calibration results using multiobjective performance indicators and attainment surfaces, including a statistical test for studying the significance of the indicator values, and benchmark their performance against a classical mathematical method. The results of our experimentation show that the algorithms based on decomposition perform significantly better than the remaining methods in most instances. Besides, we also identify how different properties of the problem instances (i.e., the shape of the feasible region, the shape of the Pareto front, and increased dimensionality) erode the behavior of the algorithms to different degrees.
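
    In automated calibration of this kind, each objective is typically an error measure between one simulated output and its observed counterpart, and the evolutionary multiobjective algorithm minimises all error measures jointly. A minimal sketch of such a multi-output calibration objective is given below; the simulator, output names and data are illustrative placeholders, not taken from the paper:

```python
import numpy as np

def calibration_objectives(params, simulate, observed_outputs):
    """Return one RMSE per model output; an EMO algorithm minimises this vector.

    params           -- candidate parameter vector for the agent-based model
    simulate         -- callable: params -> dict of simulated output series (placeholder)
    observed_outputs -- dict of observed series with the same keys
    """
    simulated = simulate(params)
    return np.array([
        np.sqrt(np.mean((np.asarray(simulated[k]) - np.asarray(observed_outputs[k])) ** 2))
        for k in observed_outputs
    ])

# Toy usage with a fake two-output "simulator"
fake_sim = lambda p: {"awareness": p[0] * np.arange(5), "sales": p[1] * np.arange(5)}
obs = {"awareness": 0.5 * np.arange(5), "sales": 0.8 * np.arange(5)}
print(calibration_objectives(np.array([0.6, 0.8]), fake_sim, obs))  # approx. [0.245, 0.0]
```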

    A survey on metaheuristics for stochastic combinatorial optimization

    Metaheuristics are general algorithmic frameworks, often nature-inspired, designed to solve complex optimization problems, and they have been a growing research area for a few decades. In recent years, metaheuristics have emerged as successful alternatives to more classical approaches also for solving optimization problems whose mathematical formulation includes uncertain, stochastic, and dynamic information. In this paper, metaheuristics such as Ant Colony Optimization, Evolutionary Computation, Simulated Annealing, Tabu Search and others are introduced, and their applications to the class of Stochastic Combinatorial Optimization Problems (SCOPs) are thoroughly reviewed. Issues common to all metaheuristics, open problems, and possible directions of research are proposed and discussed. In this survey, the reader familiar with metaheuristics will also find pointers to classical algorithmic approaches to optimization under uncertainty and useful information for starting work in this problem domain, while the reader new to metaheuristics should find a good tutorial on those metaheuristics that are currently being applied to optimization under uncertainty, as well as motivations for interest in this field.
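
    A device commonly used when metaheuristics are applied to SCOPs is to estimate the noisy objective by averaging several independent evaluations of each solution. A generic simulated-annealing sketch along these lines (an illustration of the idea only, not a specific algorithm from the survey):

```python
import math
import random

def estimate(noisy_cost, solution, samples=10):
    """Sample-average estimate of a stochastic objective."""
    return sum(noisy_cost(solution) for _ in range(samples)) / samples

def simulated_annealing(noisy_cost, neighbour, start, t0=1.0, cooling=0.95, iters=200):
    """Minimise a noisy cost using sample-averaged evaluations."""
    current, current_cost = start, estimate(noisy_cost, start)
    best, best_cost, t = current, current_cost, t0
    for _ in range(iters):
        cand = neighbour(current)
        cand_cost = estimate(noisy_cost, cand)
        # Accept improvements, and worse moves with a temperature-dependent probability
        if cand_cost < current_cost or random.random() < math.exp((current_cost - cand_cost) / t):
            current, current_cost = cand, cand_cost
        if current_cost < best_cost:
            best, best_cost = current, current_cost
        t *= cooling
    return best, best_cost

# Toy usage: minimise |x| under additive noise, neighbours are +/- 1 steps
noisy = lambda x: abs(x) + random.gauss(0, 0.1)
print(simulated_annealing(noisy, lambda x: x + random.choice([-1, 1]), start=10))
```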

    Preventing premature convergence and proving the optimality in evolutionary algorithms

    http://ea2013.inria.fr//proceedings.pdf
    Evolutionary Algorithms (EA) usually carry out an efficient exploration of the search space, but they often get trapped in local minima and do not prove the optimality of the solution. Interval-based techniques, on the other hand, yield a numerical proof of optimality of the solution. However, they may fail to converge within a reasonable time due to their inability to quickly compute a good approximation of the global minimum and to their exponential complexity. The contribution of this paper is a hybrid algorithm called Charibde in which a particular EA, Differential Evolution, cooperates with a Branch and Bound algorithm endowed with interval propagation techniques. It prevents premature convergence toward local optima and outperforms both deterministic and stochastic existing approaches. We demonstrate its efficiency on a benchmark of highly multimodal problems, for which we provide previously unknown global minima and certification of optimality.
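
    For orientation, the evolutionary component of Charibde is Differential Evolution. A minimal DE/rand/1/bin sketch is shown below; the cooperation with the interval Branch and Bound algorithm, which is what provides the proof of optimality, is deliberately not reproduced here:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.7, CR=0.9, generations=200, seed=0):
    """Plain DE/rand/1/bin for box-constrained minimisation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # at least one component from the mutant
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fitness[i]:                     # greedy one-to-one replacement
                pop[i], fitness[i] = trial, ft
    return pop[np.argmin(fitness)], fitness.min()

# Toy usage: 5-D Rastrigin, whose global minimum is 0 at the origin
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
print(differential_evolution(rastrigin, [(-5.12, 5.12)] * 5))
```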

    Development of an optimization framework for solving engineering design problems.

    The integration of optimization methodologies with computational simulations plays a profound role in product design. Such integration, however, faces multiple challenges arising from computation-intensive simulations, unknown function properties (i.e., black-box functions), complex constraints, and the high dimensionality of problems. To address these challenges, metamodel-based methods, which apply metamodels as a cheaper alternative to costly analysis tools, have proven to be a practical approach in design optimization and have seen continuous development. In this thesis, an intrinsically linear function (ILF) assisted and trust region based optimization method (IATRO) is proposed first for solving low-dimensional constrained black-box problems. Then, the economical sampling strategy (ESS), a modified trust region strategy and the self-adaptive normalization strategy (SANS) are developed to enhance the overall optimization capability. Moreover, as radial basis function (RBF) interpolation is found to approximate both objective and constraint functions better than ILF, an RBF-assisted optimization framework is established by combining the balanced trust region strategy (BTRS), the global intelligence selection strategy (GIS) and the early termination strategy (ETS). Following that, the fast computation strategy (FCS) and the successive refinement strategy (SRS) are proposed for solving large-scale constrained black-box problems, and the final optimization framework is called RATRLO (radial basis function assisted and trust region based large-scale optimization framework). By testing a set of well-known benchmark problems including 22 G-problems, 4 engineering design problems and 1 high-dimensional automotive problem, RATRLO shows remarkable advantages in achieving high-quality results with very few function evaluations and slight parameter tuning. Compared with various state-of-the-art algorithms, RATRLO can be considered one of the best global optimizers for solving constrained optimization problems. Furthermore, RATRLO provides valuable insight into the development of algorithms for efficient large-scale optimization.
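
    The RBF interpolation mentioned above builds a surrogate by passing a weighted sum of radial kernels exactly through the sampled points, which reduces to solving one linear system. A minimal sketch of a generic multiquadric RBF interpolant follows (illustrative only, not the RATRLO implementation; the shape parameter c is an assumption):

```python
import numpy as np

def fit_rbf(X, y, c=1.0):
    """Fit a multiquadric RBF interpolant through samples X (n x d) with values y (n,)."""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    weights = np.linalg.solve(np.sqrt(r**2 + c**2), y)           # exact interpolation conditions
    return lambda x: np.sqrt(np.linalg.norm(x - X, axis=-1)**2 + c**2) @ weights

# Toy usage: cheap surrogate of a 2-D quadratic built from 40 samples
rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(40, 2))
y = X[:, 0]**2 + X[:, 1]**2
surrogate = fit_rbf(X, y)
print(surrogate(np.array([0.5, -0.5])), "vs true value", 0.5)
```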

    Development of Methods for Solving Bilevel Optimization Problems

    Bilevel optimization, also referred to as bilevel programming, involves solving an upper level problem subject to the optimality of a corresponding lower level problem. The upper and lower level problems are also referred to as the leader and follower problems, respectively. Both levels have their associated objective(s), variable(s) and constraint(s). Such problems model real-life scenarios in which the performance of an upper level authority is realizable/sustainable only if the corresponding lower level objective is optimum. A number of practical applications in the fields of engineering, logistics, economics and transportation have an inherent nested structure that is suited to this type of modelling. The range of applications as well as a rapid increase in the size and complexity of such problems has prompted active interest in the design of efficient algorithms for bilevel optimization. Bilevel optimization problems present a number of unique and interesting challenges to algorithm design. The nested nature of the problem requires optimization of a lower level problem to evaluate each upper level solution, which makes it computationally exorbitant. Theoretically, an upper level solution is considered valid/feasible only if the corresponding lower level variables are the true global optimum of the lower level problem. Global optimality can be reliably asserted in very limited cases, for example convex and linear problems. In deceptive cases, an inaccurate lower level optimum may result in an objective value better than the true optimum at the upper level, which poses a severe challenge for the ranking/selection strategies used within any optimization technique. In turn, this also makes performance evaluation very difficult, since performance cannot be judged based on the objective values alone. While the area of bilevel (or, more generally, multilevel) programming itself is not very new, most reports in this direction up until about a decade ago considered solving linear or at most quadratic problems at both levels. Correspondingly, the focus was on the development of exact methods to solve such problems. However, such methods typically require assumptions on mathematical properties, which may not always hold in practical applications. With the increasing use of computer simulation-based evaluations in a number of disciplines in science and engineering, there is more need than ever to handle problems that are highly nonlinear or even black-box in nature. Metaheuristic algorithms, such as evolutionary algorithms, are more suited to this emerging paradigm. The foray of evolutionary algorithms into bilevel programming is relatively recent, and there remains scope for substantial development in the field in terms of addressing the aforementioned challenges. The work presented in this thesis is directed towards improving evolutionary techniques to enable them to solve generic bilevel problems more accurately using a lower number of function evaluations than existing methods. Three key approaches are investigated towards accomplishing this: (a) effective hybridization of global and local search methods during different stages of the overall search; (b) use of surrogate models to guide the search using approximations in lieu of true function evaluations; and (c) use of a non-nested re-formulation of the problem. While most of the work is focused on single-objective problems, preliminary studies are also presented on multi-objective bilevel problems.
The performance of the proposed approaches is evaluated on a comprehensive suite of mathematical test problems available in the literature, as well as some practical problems. The proposed approaches are observed to achieve a favourable balance between accuracy and computational expense for solving bilevel optimization problems, and thus exhibit suitability for use in real-life applications.
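
    The nested structure described in the abstract means that every upper level candidate is evaluated only after the lower level problem has been (approximately) solved for it. A deliberately naive sketch of this nested evaluation on a toy linear-quadratic instance (brute-force search at both levels, not one of the proposed algorithms):

```python
import numpy as np

def lower_level_response(x, f_lower, y_grid):
    """Approximate the follower's optimum y*(x) by brute force over a grid."""
    values = np.array([f_lower(x, y) for y in y_grid])
    return y_grid[np.argmin(values)]

def upper_level_value(x, F_upper, f_lower, y_grid):
    """Leader's objective evaluated at the (approximate) follower response."""
    y_star = lower_level_response(x, f_lower, y_grid)
    return F_upper(x, y_star), y_star

# Toy instance: the follower minimises (y - x)^2, so y*(x) = x; the leader then
# minimises (x - 3)^2 + y^2, which reduces to 2 (x - 1.5)^2 + 4.5, optimal at x = 1.5.
F_upper = lambda x, y: (x - 3.0) ** 2 + y ** 2
f_lower = lambda x, y: (y - x) ** 2
y_grid = np.linspace(-5, 5, 201)
x_grid = np.linspace(-5, 5, 201)
best_x = min(x_grid, key=lambda x: upper_level_value(x, F_upper, f_lower, y_grid)[0])
print(best_x, upper_level_value(best_x, F_upper, f_lower, y_grid))  # x and y* close to 1.5
```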

    JavaEvA : a Java based framework for Evolutionary Algorithms

    The package JavaEvA (a Java implementation of Evolutionary Algorithms) is a general modular framework with an inherent client-server structure for solving practical optimization problems. The package was designed especially for testing and developing new approaches to Evolutionary Algorithms and for applying them in real-world applications. JavaEvA already provides implementations of the most common Evolutionary Algorithms, such as Genetic Algorithms, CHC Adaptive Search, Population Based Incremental Learning, Evolution Strategies, Model-Assisted Evolution Strategies, Genetic Programming and Grammatical Evolution. In addition, the modular framework of JavaEvA allows everyone to add their own, possibly problem-specific, optimization modules to meet their specific requirements and to compare them with the implemented methods. The JavaEvA package uses a generic GUI framework that allows GUI access to any member of a class if get and set methods are provided and an editor is defined for the given data type. This approach allows very fast development cycles, since hardly any additional effort is necessary for implementing GUI elements, while at the same time user-specific GUI elements can be developed and integrated to increase usability. Since we cannot anticipate every specific optimization problem and its requirements, it is usually necessary for users to define their own optimization problems.
Therefore, we provide an additional framework with detailed examples and explain how one can include JavaEvA in an existing Java project, or how one can implement one's own optimization problem and optimize it using JavaEvA as an optimization toolbox. This gives users total control over the optimization algorithms used and over the application-specific presentation of the optimization results.

    Performance assessment of Surrogate model integrated with sensitivity analysis in multi-objective optimization

    This thesis develops a new multi-objective heuristic algorithm. The search for the optimum is performed by a standard genetic algorithm, assisted by a Response Surface Methodology surrogate model and by two sensitivity analysis methods: the variance-based method, also known as Sobol’ analysis, and the Elementary Effects method. Once the entire method is built, it is compared with several other algorithms on a number of multi-objective problems.
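
    Of the two sensitivity analysis methods mentioned, the Elementary Effects (Morris) method is the simpler: each effect is a one-at-a-time finite difference, and inputs are ranked by the mean absolute effect. A minimal sketch of such a screening (generic Morris-style computation, not the thesis implementation; the base-point sampling and delta step are simplified assumptions):

```python
import numpy as np

def elementary_effects(f, bounds, n_trajectories=20, delta=0.1, seed=0):
    """Mean absolute elementary effect per input (larger = more influential)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds).T
    dim = len(lo)
    effects = np.zeros((n_trajectories, dim))
    for t in range(n_trajectories):
        x = rng.uniform(lo, hi - delta * (hi - lo))      # leave room for the step
        fx = f(x)
        for i in range(dim):
            x_step = x.copy()
            x_step[i] += delta * (hi[i] - lo[i])         # one-at-a-time perturbation
            effects[t, i] = (f(x_step) - fx) / delta
    return np.abs(effects).mean(axis=0)                  # Morris mu* statistic

# Toy usage: the second input matters ~10x more than the first, the third not at all
g = lambda x: x[0] + 10.0 * x[1] + 0.0 * x[2]
print(elementary_effects(g, [(0, 1)] * 3))               # roughly [1, 10, 0]
```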
    • 

    corecore