A Convergence indicator for Multi-Objective Optimisation Algorithms
Multi-objective optimisation algorithms have grown considerably in recent years, which calls for some way of comparing their results. In this sense, performance measures play a key role. In general, certain properties of these algorithms are considered, such as capacity, convergence, diversity or convergence-diversity. There are some well-known measures such as generational distance (GD), inverted generational distance (IGD), hypervolume (HV), Spread (Δ), averaged Hausdorff distance (Δp) and the R2 indicator, among others. In this paper, we focus on proposing a new indicator to measure convergence, based on the traditional formula for Shannon entropy. The main features of this measure are: 1) it does not require knowing the true Pareto set, and 2) it has a medium computational cost when compared with hypervolume.
Comment: Submitted to TEM
An artificial immune systems based predictive modelling approach for the multi-objective elicitation of Mamdani fuzzy rules: a special application to modelling alloys
In this paper, a systematic multi-objective Mamdani fuzzy modelling approach is proposed, which can be viewed as an extended version of the previously proposed singleton fuzzy modelling paradigm. A set of new back-error propagation (BEP) updating formulas is derived to replace the old set developed in the singleton version. With this substitution, the extension to multi-objective Mamdani Fuzzy Rule-Based Systems (FRBS) is almost immediate. Due to the carefully chosen output membership functions, inference method and defuzzification method, a closed-form integral can be deduced for the defuzzification, which ensures the efficiency of the developed Mamdani FRBS. Some important factors, such as the variable-length coding scheme and rule alignment, are also discussed. Experimental results for a real data set from the steel industry suggest that the proposed approach is capable of eliciting FRBS that are not only accurate but also transparent, with good generalization ability.
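For readers unfamiliar with the Mamdani inference pipeline the abstract refers to, the following is a minimal sketch of a two-rule Mamdani system with triangular membership functions, min implication, max aggregation and numerical centroid defuzzification. All rule parameters here are invented for illustration; the paper itself derives a closed-form defuzzification instead of the numerical one shown.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_infer(x_in, universe):
    """Two-rule Mamdani inference (hypothetical rules, made-up parameters)."""
    # Rule firing strengths from the input membership degrees
    w_low = tri(x_in, -0.5, 0.0, 0.5)   # IF x is LOW  THEN y is LOW
    w_high = tri(x_in, 0.5, 1.0, 1.5)   # IF x is HIGH THEN y is HIGH
    # Min implication clips each output set; max aggregates them
    y_low = np.minimum(w_low, tri(universe, -0.5, 0.0, 0.5))
    y_high = np.minimum(w_high, tri(universe, 0.5, 1.0, 1.5))
    agg = np.maximum(y_low, y_high)
    # Numerical centroid defuzzification over the output universe
    return float(np.sum(universe * agg) / np.sum(agg))

u = np.linspace(0.0, 1.0, 1001)
y = mamdani_infer(0.2, u)  # an input near LOW yields an output near LOW
```

The efficiency argument in the abstract hinges on replacing the numerical centroid step above with a closed-form expression, which removes the discretisation loop entirely.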
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory. It sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Its increasing interest arises from the fact that a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time ("efficient") algorithms, while most of them are NP-hard, i.e. it is not proved that they can be solved in polynomial time. Mainly, this means that it is not possible to guarantee that an exact solution to the problem can be found, and one has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find "quickly" (in reasonable run-times), with "high" probability, provably "good" solutions (low error from the true optimal solution). In the last 20 years, a new kind of algorithm, commonly called metaheuristics, has emerged in this class; these basically try to combine heuristics in high-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two very significant forces of intensification and diversification, which mainly determine the behavior of a metaheuristic, will be pointed out. The report concludes by exploring the importance of hybridization and integration methods.
Fast calculation of multiobjective probability of improvement and expected improvement criteria for Pareto optimization
The use of surrogate-based optimization (SBO) is widespread in engineering design to reduce the number of computationally expensive simulations. However, "real-world" problems often consist of multiple, conflicting objectives leading to a set of competitive solutions (the Pareto front). The objectives are often aggregated into a single cost function to reduce the computational cost, though a better approach is to use multiobjective optimization methods to directly identify a set of Pareto-optimal solutions, which can be used by the designer to make more efficient design decisions (instead of weighting and aggregating the costs upfront). Most of the work in multiobjective optimization is focused on multiobjective evolutionary algorithms (MOEAs). While MOEAs are well-suited to handle large, intractable design spaces, they typically require thousands of expensive simulations, which is prohibitively expensive for the problems under study. Therefore, the use of surrogate models in multiobjective optimization, denoted as multiobjective surrogate-based optimization, may prove to be even more worthwhile than SBO methods to expedite the optimization of computationally expensive systems. In this paper, the authors propose the efficient multiobjective optimization (EMO) algorithm, which uses Kriging models and multiobjective versions of the probability of improvement and expected improvement criteria to identify the Pareto front with a minimal number of expensive simulations. The EMO algorithm is applied to multiple standard benchmark problems and compared against the well-known NSGA-II, SPEA2 and SMS-EMOA multiobjective optimization methods.
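The expected improvement criterion mentioned above is, in its standard single-objective form (for minimisation), EI(x) = (y* - μ(x))Φ(z) + σ(x)φ(z) with z = (y* - μ(x))/σ(x), where μ and σ come from the Kriging model and y* is the incumbent best. A minimal sketch of that building block (the paper's multiobjective generalisations are more involved and not shown here):

```python
import math

def expected_improvement(mu, sigma, best):
    """Single-objective expected improvement for minimisation.

    mu, sigma: the surrogate's predictive mean and standard deviation at x.
    best: the incumbent best (lowest) observed objective value.
    """
    if sigma <= 0.0:
        # No predictive uncertainty: improvement is deterministic.
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # normal PDF
    return (best - mu) * Phi + sigma * phi

# A point predicted well below the incumbent scores much higher than one
# predicted above it, even at equal uncertainty.
print(expected_improvement(mu=0.5, sigma=0.1, best=1.0) >
      expected_improvement(mu=1.5, sigma=0.1, best=1.0))  # True
```

Balancing the (best - mu) term against the sigma term is what lets such criteria trade off exploitation of the surrogate's prediction against exploration of uncertain regions with few expensive simulations.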