
    A new memetic strategy for the numerical treatment of multi-objective optimization problems

    In this paper we propose a novel iterative search procedure for multi-objective optimization problems. The iteration process, though derivative free, utilizes the geometry of the directional cones of such optimization problems and is capable of moving both toward and along the (local) Pareto set, depending on the distance of the current iterate from this set. Next, we give one possible way of integrating this local search procedure into a given EMO algorithm, resulting in a novel memetic strategy. Finally, we present numerical results on some well-known benchmark problems that indicate the strength of both the local search strategy and the new hybrid approach.
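
    As background, a standard statement of the problem class and of the descent-cone geometry the abstract alludes to (textbook definitions, not taken from the paper itself): a multi-objective problem asks to minimize F(x) = (f_1(x), ..., f_k(x)) over x in Q, and a direction ν is a common descent direction at x if it decreases all objectives at once, i.e.

```latex
\min_{x \in Q \subseteq \mathbb{R}^{n}} F(x) = \bigl(f_1(x), \ldots, f_k(x)\bigr)^{T},
\qquad
\langle \nabla f_i(x), \nu \rangle < 0 \quad \text{for all } i = 1, \ldots, k .
```

    The set of such directions forms an open cone that is wide far from the Pareto set and collapses as a (locally) Pareto-optimal point is approached, where no common descent direction remains; a derivative-free procedure has to recover this geometric information from function values alone.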

    Meta-heuristic algorithms in car engine design: a literature survey

    Meta-heuristic algorithms are often inspired by natural phenomena, including the evolution of species in Darwinian natural selection theory, ant behaviors in biology, the flocking behavior of some birds, and annealing in metallurgy. Due to their great potential in solving difficult optimization problems, meta-heuristic algorithms have found their way into automobile engine design. Different optimization problems arise in different areas of car engine management, including calibration, control systems, fault diagnosis, and modeling. In this paper we review state-of-the-art applications of different meta-heuristic algorithms in engine management systems. The review covers a wide range of research, including the application of meta-heuristic algorithms to engine calibration, the optimization of engine control systems, engine fault diagnosis, the optimization of different engine components, and modeling. The meta-heuristic algorithms reviewed in this paper include evolutionary algorithms, evolution strategies, evolutionary programming, genetic programming, differential evolution, estimation of distribution algorithms, ant colony optimization, particle swarm optimization, memetic algorithms, and artificial immune systems.
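
    To make the survey's subject concrete, the following is a minimal particle swarm optimization sketch applied to a stand-in objective; the actual engine-calibration objectives (fuel consumption, emissions, torque maps) and the tuned algorithm variants reviewed in the paper are considerably more elaborate.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a black-box objective (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))   # particle positions
    v = np.zeros_like(x)                                    # particle velocities
    pbest = x.copy()                                        # personal best positions
    pbest_f = np.array([objective(p) for p in x])           # personal best values
    gbest = pbest[pbest_f.argmin()].copy()                  # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Hypothetical stand-in for a calibration cost; a real one would wrap an engine model.
best_x, best_f = pso(lambda p: float(np.sum(p ** 2)), bounds=[(-5.0, 5.0)] * 4)
```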

    Multi agent collaborative search based on Tchebycheff decomposition

    This paper presents a novel formulation of Multi Agent Collaborative Search, for multi-objective optimization, based on Tchebycheff decomposition. A population of agents combines heuristics that aim at exploring the search space both globally (social moves) and in a neighborhood of each agent (individualistic moves). In this novel formulation the selection process is based on a combination of Tchebycheff scalarization and Pareto dominance. Furthermore, while in the previous implementation social actions were applied to the whole population of agents and individualistic actions only to an elite sub-population, in this novel formulation the mechanism is inverted. The novel agent-based algorithm is tested first on a standard benchmark of difficult problems and then on two specific problems in space trajectory design. Its performance is compared against a number of state-of-the-art multi-objective optimization algorithms. The results demonstrate that this novel agent-based search performs better than its predecessor in a number of cases and converges better than the other state-of-the-art algorithms, with a better spreading of the solutions.
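
    For reference, the weighted Tchebycheff scalarization underlying such decomposition-based selection, written in its usual textbook form rather than in the paper's exact notation:

```latex
g^{\mathrm{tch}}(x \mid \lambda, z^{*}) = \max_{1 \le i \le m} \lambda_i \left| f_i(x) - z_i^{*} \right| ,
```

    where z* is a reference (ideal) point and λ a non-negative weight vector; each agent minimizes its own scalarized value, and combining this with Pareto dominance gives the selection criterion described above.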

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain generalized FNNs for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future research to cope with the present information processing era.
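
    As a small illustration of the weight-optimization viewpoint, the sketch below (a toy example, not a method from the surveyed literature) encodes the weights of a tiny FNN as a flat vector and trains it with differential evolution instead of backpropagation:

```python
import numpy as np
from scipy.optimize import differential_evolution

def forward(w, X):
    """Tiny 2-4-1 feedforward network; w is the flat weight vector being optimized."""
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16].reshape(4, 1), w[16]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

# XOR as a purely illustrative training set.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def mse(w):
    return float(np.mean((forward(w, X) - y) ** 2))

# The metaheuristic sees only the scalar loss, no gradients.
result = differential_evolution(mse, bounds=[(-5, 5)] * 17, seed=1, maxiter=300)
print(result.fun)  # training error of the evolved weight vector
```

    Architecture search, activation-node evolution, and the other directions mentioned above extend this idea by encoding structural choices alongside, or instead of, the weights.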

    A new hybrid evolutionary algorithm for the treatment of equality constrained MOPs

    Multi-objective evolutionary algorithms are widely used by researchers and practitioners to solve multi-objective optimization problems (MOPs), since they require minimal assumptions and are capable of computing a finite-size approximation of the entire solution set in one run of the algorithm. So far, however, the adequate treatment of equality constraints has played a minor role. Equality constraints are particular since they typically reduce the dimension of the search space, which causes problems for stochastic search algorithms such as evolution strategies. In this paper, we show that multi-objective evolutionary algorithms hybridized with continuation-like techniques lead to fast and reliable numerical solvers. For this, we first propose three new problems with different characteristics that are indeed hard to solve by evolutionary algorithms. Next, we develop a variant of NSGA-II coupled with a continuation method. We present numerical results on several equality-constrained MOPs to show that the resulting method is highly competitive with state-of-the-art evolutionary algorithms.
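
    The problem class in question can be written, in standard notation, as

```latex
\min_{x \in \mathbb{R}^{n}} F(x) = \bigl(f_1(x), \ldots, f_k(x)\bigr)^{T}
\quad \text{subject to} \quad h_j(x) = 0, \;\; j = 1, \ldots, p .
```

    Generically, the feasible set {x : h(x) = 0} is an (n - p)-dimensional manifold and thus has measure zero in the search space, which is why purely stochastic variation operators almost never produce feasible offspring; continuation-like steps that move along this manifold address exactly this difficulty.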

    On the detection of nearly optimal solutions in the context of single-objective space mission design problems

    When making decisions, having multiple options available for a possible realization of the same project can be advantageous. One way to increase the number of interesting choices is to consider, in addition to the optimal solution x*, also nearly optimal or approximate solutions; these alternative solutions differ from x* and can lie in different regions of the design space, but are within a certain proximity of its objective value f(x*). The scope of this article is the efficient computation and discretization of the set E of ε-approximate solutions for scalar optimization problems. To accomplish this task, two strategies to archive and update the data of the search procedure are suggested and investigated. To emphasize data storage efficiency, a way to manage significant and insignificant parameters is also presented. Further on, differential evolution is used together with the new archivers for the computation of E. Finally, the behaviour of the archivers, as well as the efficiency of the resulting search procedure, is demonstrated on some academic functions as well as on three models related to space mission design.
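
    For a minimization problem, one common way to define the set of nearly optimal solutions is E = { x in Q : f(x) <= f(x*) + ε }. The sketch below shows one possible archive-update rule in this spirit; the archivers proposed in the article differ in their details, in particular in how significant and insignificant parameters are handled.

```python
import numpy as np

def update_archive(archive, x_cand, f_cand, eps, dx):
    """One possible update of an archive of nearly optimal solutions:
    keep points whose value is within eps of the best value seen so far,
    and discretize by requiring a minimum spacing dx in design space."""
    f_best = min([f for _, f in archive] + [f_cand])
    # Prune members that are no longer within eps of the (possibly improved) best value.
    archive = [(x, f) for x, f in archive if f <= f_best + eps]
    # Insert the candidate if it is nearly optimal and not too close to an archived point.
    if f_cand <= f_best + eps and all(np.linalg.norm(x_cand - x) >= dx for x, _ in archive):
        archive.append((np.asarray(x_cand, dtype=float), f_cand))
    return archive

# Feed the archive with candidates generated by any search procedure,
# e.g. the trial vectors of a differential evolution run.
A = []
for x in np.random.default_rng(0).uniform(-2.0, 2.0, size=(500, 3)):
    A = update_archive(A, x, float(np.sum(x ** 2)), eps=0.1, dx=0.2)
```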

    Parallel surrogate-assisted global optimization with expensive functions – a survey

    Surrogate-assisted global optimization is gaining popularity. Similarly, modern advances in computing power increasingly rely on parallelization rather than on faster processors. This paper examines some of the methods used to take advantage of parallelization in surrogate-based global optimization. A key issue focused on in this review is how different algorithms balance exploration and exploitation. Most of the papers surveyed describe adaptive samplers that employ Gaussian process or Kriging surrogates. These allow sophisticated approaches for balancing exploration and exploitation and even allow the development of algorithms with a calculable rate of convergence as a function of the number of parallel processors. In addition to optimization based on adaptive sampling, surrogate-assisted parallel evolutionary algorithms are also surveyed. Beyond a review of the present state of the art, the paper also argues that methods that provide easy parallelization, like multiple parallel runs, or methods that rely on a population of designs for diversity, deserve more attention.
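
    To illustrate one of the surveyed ingredients, here is a minimal, assumption-heavy sketch of batch-parallel surrogate-assisted optimization: a Gaussian process surrogate proposes a batch of points via the kriging-believer heuristic, and the expensive evaluations in each batch then run concurrently. The methods in the survey use more refined acquisition functions, batching rules, and convergence arguments; the objective f below is only a stand-in for an expensive simulation.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X_cand, f_best):
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def propose_batch(X, y, bounds, q, n_cand=2000, rng=None):
    """Kriging-believer batch selection: choose q points one at a time, temporarily
    treating the surrogate mean at each chosen point as if it were its true value."""
    rng = rng or np.random.default_rng()
    lo, hi = np.asarray(bounds, dtype=float).T
    batch, Xa, ya = [], X.copy(), y.copy()
    for _ in range(q):
        gp = GaussianProcessRegressor(normalize_y=True).fit(Xa, ya)
        cand = rng.uniform(lo, hi, size=(n_cand, lo.size))
        x_next = cand[expected_improvement(gp, cand, ya.min()).argmax()]
        batch.append(x_next)
        Xa = np.vstack([Xa, x_next])
        ya = np.append(ya, gp.predict(x_next[None, :])[0])  # "believe" the surrogate
    return np.array(batch)

def f(x):
    """Stand-in for an expensive simulation."""
    return float(np.sum((x - 0.3) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bounds = [(0.0, 1.0)] * 3
    X = rng.uniform(0.0, 1.0, size=(8, 3))
    y = np.array([f(x) for x in X])
    for _ in range(5):                       # five batches of four expensive runs
        batch = propose_batch(X, y, bounds, q=4, rng=rng)
        with ProcessPoolExecutor() as pool:  # the four evaluations run in parallel
            y_new = list(pool.map(f, batch))
        X, y = np.vstack([X, batch]), np.append(y, y_new)
    print(y.min())
```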

    Technological Innovations and Advances in Hydropower Engineering

    It has been more than 140 years since water was first used to generate electricity. Especially since the 1970s, with the advancement of science and technology, new technologies, new processes, and new materials have been widely used in hydropower construction. Engineering equipment and technology, as well as cascade development, have become increasingly mature, making possible the construction of many high dams and large reservoirs around the world. However, with the passage of time, hydropower infrastructure such as the reservoirs, dams, and power stations built in large numbers in the past is aging. This, coupled with the singular use of hydropower, limits its future development. This book reports the achievements in hydropower construction and the efforts toward sustainable hydropower development made by various countries around the globe. These innovative studies and applications stimulate new ideas for the renewal of hydropower infrastructure and the further improvement of hydropower development and utilization efficiency.