
    Towards efficient multiobjective optimization: multiobjective statistical criterions

    The use of Surrogate-Based Optimization (SBO) is widespread in engineering design to reduce the number of computationally expensive simulations. However, "real-world" problems often consist of multiple, conflicting objectives leading to a set of equivalent solutions (the Pareto front). The objectives are often aggregated into a single cost function to reduce the computational cost, though a better approach is to use multiobjective optimization methods to directly identify a set of Pareto-optimal solutions, which can be used by the designer to make more efficient design decisions (instead of making those decisions upfront). Most of the work in multiobjective optimization is focused on MultiObjective Evolutionary Algorithms (MOEAs). While MOEAs are well-suited to handle large, intractable design spaces, they typically require thousands of expensive simulations, which is prohibitively expensive for the problems under study. Therefore, the use of surrogate models in multiobjective optimization, denoted as MultiObjective Surrogate-Based Optimization (MOSBO), may prove to be even more worthwhile than SBO methods to expedite the optimization process. In this paper, the authors propose the Efficient Multiobjective Optimization (EMO) algorithm, which uses Kriging models and multiobjective versions of the expected improvement and probability of improvement criteria to identify the Pareto front with a minimal number of expensive simulations. The EMO algorithm is applied to multiple standard benchmark problems and compared against the well-known NSGA-II and SPEA2 multiobjective optimization methods, with promising results.
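
    As a concrete illustration of the building block behind such criteria, the sketch below computes the classical single-objective expected improvement on a Kriging (Gaussian-process) surrogate. The toy objective, kernel choice, and candidate grid are illustrative assumptions; the paper's multiobjective EI/PoI generalizations are not reproduced here.

```python
# Minimal sketch: expected improvement (EI) on a Kriging/GP surrogate,
# single-objective only. The EMO algorithm uses multiobjective EI/PoI variants.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(X_cand, gp, y_best, xi=0.01):
    """EI for minimization at candidate points X_cand."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)          # avoid division by zero
    imp = y_best - mu - xi                    # predicted improvement over the incumbent
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

# Toy usage: fit the surrogate on a few "expensive" evaluations, rank a pool of
# candidates by EI, and simulate only the most promising point next.
rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x[:, 0]) + 0.1 * x[:, 0] ** 2   # stand-in for an expensive simulation
X = rng.uniform(-3, 3, size=(8, 1))
y = f(X)
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

X_cand = np.linspace(-3, 3, 200).reshape(-1, 1)
x_next = X_cand[np.argmax(expected_improvement(X_cand, gp, y_best=y.min()))]
```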

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners across multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers have adopted these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain well-generalized FNNs for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution of NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future work to cope with the present information-processing era.
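
    To make the contrast with gradient-based training concrete, the hedged sketch below evolves the flattened weight vector of a tiny fixed-architecture FNN with a simple (mu + lambda) evolution strategy instead of backpropagation. The network size, toy task, and ES settings are illustrative assumptions, not taken from the review.

```python
# Minimal sketch: training a tiny feedforward network by evolving its flattened
# weight vector with a (mu + lambda) evolution strategy instead of backpropagation.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 2, 8, 1
n_w = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out   # total weight count

def forward(w, X):
    """Decode a flat weight vector and run the 2-8-1 tanh network."""
    i = 0
    W1 = w[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = w[i:i + n_out]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(w, X, y):
    return np.mean((forward(w, X) - y) ** 2)     # mean squared error, lower is better

# Toy regression target (XOR-like sign structure).
X = rng.uniform(-1, 1, size=(200, n_in))
y = (X[:, :1] * X[:, 1:2] > 0).astype(float)

mu, lam, sigma = 10, 40, 0.1
pop = rng.normal(0.0, 1.0, size=(mu, n_w))
for gen in range(200):
    parents = pop[rng.integers(0, mu, size=lam)]
    offspring = parents + sigma * rng.normal(size=(lam, n_w))   # Gaussian mutation
    union = np.vstack([pop, offspring])
    scores = np.array([fitness(w, X, y) for w in union])
    pop = union[np.argsort(scores)[:mu]]                        # (mu + lambda) survivor selection

best_weights = pop[0]
```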

    Search based software engineering: Trends, techniques and applications

    In the past five years there has been a dramatic increase in work on Search-Based Software Engineering (SBSE), an approach to Software Engineering (SE) in which Search-Based Optimization (SBO) algorithms are used to address problems in SE. SBSE has been applied to problems throughout the SE lifecycle, from requirements and project planning to maintenance and reengineering. The approach is attractive because it offers a suite of adaptive automated and semi-automated solutions in situations typified by large, complex problem spaces with multiple competing and conflicting objectives. This article provides a review and classification of the literature on SBSE. The work identifies research trends and relationships between the techniques applied and the applications to which they have been applied, and highlights gaps in the literature and avenues for further research.

    Solving dynamic multi-objective problems with a new prediction-based optimization algorithm

    Funding Information: This work is supported by the National Natural Science Foundation of China under Grants 62006103 and 61872168, and in part by the Jiangsu natural science research of higher education under Grant 20KJB110021. The authors express their sincere appreciation to the anonymous reviewers for their helpful opinions.

    A novel hybrid multi-objective metamodel-based evolutionary optimization algorithm

    Optimization via Simulation (OvS) is a useful optimization tool for finding a solution to an optimization problem that is difficult to model analytically. OvS consists of evaluating potential solutions through simulation executions; however, its high computational cost is a factor that can make its implementation infeasible. This issue also occurs in multi-objective problems, which tend to be expensive to solve. In this work, we present a new hybrid multi-objective OvS algorithm, which uses Kriging-type metamodels to estimate the simulation results and a multi-objective evolutionary algorithm to manage the optimization process. Our proposal succeeds in reducing the computational cost significantly without affecting the quality of the results obtained. The evolutionary part of the hybrid algorithm is based on the popular NSGA-II. The hybrid method is compared to the canonical NSGA-II and other hybrid approaches, showing good performance not only in the quality of the solutions but also in computational cost savings.
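
    A hedged sketch of the metamodel pre-screening idea follows: fit one Kriging/GP model per objective on the simulations run so far, predict the objectives of a batch of offspring, and send only the predicted non-dominated ones to the expensive simulator. The function names, the one-model-per-objective choice, and the simple O(n^2) dominance filter are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of surrogate pre-screening inside a metamodel-assisted NSGA-II loop.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def non_dominated(F):
    """Boolean mask of Pareto non-dominated rows of objective matrix F (minimization)."""
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominators = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        keep[i] = not dominators.any()
    return keep

def prescreen(archive_X, archive_F, offspring_X):
    """Return only the offspring that the metamodels predict to be non-dominated."""
    models = [GaussianProcessRegressor(normalize_y=True).fit(archive_X, archive_F[:, j])
              for j in range(archive_F.shape[1])]
    F_hat = np.column_stack([m.predict(offspring_X) for m in models])
    return offspring_X[non_dominated(F_hat)]

# Usage inside the evolutionary loop (the expensive part in outline only):
#   survivors = prescreen(X_evaluated, F_evaluated, offspring)
#   F_new = run_expensive_simulation(survivors)   # hypothetical simulator call
#   archive the new (survivors, F_new) pairs and refit the metamodels next generation
```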

    Grammar-based Representation and Identification of Dynamical Systems

    In this paper we propose a novel approach to identifying dynamical systems. The method estimates the model structure and the parameters of the model simultaneously, automating the critical decisions involved in identification, such as model structure and complexity selection. In order to solve the combined model structure and model parameter estimation problem, a new representation of dynamical systems is proposed. The proposed representation is based on Tree Adjoining Grammar, a formalism that was developed from linguistic considerations. Using the proposed representation, the identification problem can be interpreted as a multi-objective optimization problem, and we propose an Evolutionary Algorithm-based approach to solve it. A benchmark example is used to demonstrate the proposed approach. The results were found to be comparable to those obtained by state-of-the-art non-linear system identification methods, without making use of knowledge of the system description. Comment: Submitted to European Control Conference (ECC) 201
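
    For intuition only, the sketch below scores a candidate model structure on the two competing objectives (prediction error and complexity) that such a multi-objective identification search trades off. It uses a plain regressor-subset structure rather than a Tree Adjoining Grammar derivation, so the representation, names, and scoring are illustrative assumptions rather than the paper's method.

```python
# Minimal sketch of a bi-objective fitness for an evolutionary structure search:
# objective 1 = one-step prediction error, objective 2 = model complexity.
# A candidate "structure" here is just a subset of columns of a regressor matrix Phi,
# not a Tree Adjoining Grammar derivation.
import numpy as np

def fit_and_score(structure_mask, Phi, y):
    """Least-squares fit of the selected regressors; return (error, complexity)."""
    cols = np.flatnonzero(structure_mask)
    if cols.size == 0:
        return np.inf, 0
    theta, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
    error = np.mean((Phi[:, cols] @ theta - y) ** 2)   # objective 1: fit error
    return error, int(cols.size)                       # objective 2: number of terms
```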