A Convergent Differential Evolution Algorithm with Hidden Adaptation Selection for Engineering Optimization
Improved differential evolution (DE) algorithms have emerged over more than a decade as a very competitive class of evolutionary computation; however, few of them guarantee global convergence in theory. This paper develops a provably convergent DE algorithm that employs a self-adaptation scheme for the parameters together with two operators: a uniform mutation operator and a hidden adaptation selection (haS) operator. Parameter self-adaptation and uniform mutation enhance population diversity and guarantee ergodicity. The haS operator automatically removes some inferior individuals while population diversity is being enhanced: it breaks the loop of the current generation with a small probability that is hidden and adaptive, proportional to the change in the number of inferior individuals. The proposed algorithm is tested on ten engineering optimization problems taken from IEEE CEC2011.
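The interplay of the three ingredients named in the abstract can be illustrated with a minimal sketch. Everything below is a loose interpretation, not the authors' specification: the parameter values, the DE/rand/1/bin baseline, the uniform-mutation probability, and in particular the haS rule (here rendered as "cut the generation short by resampling the worst individual, with probability proportional to the count of inferior trials") are all assumptions. The `sphere` objective is a toy stand-in for the CEC2011 problems.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy objective (not from the paper): minimize the sum of squares."""
    return np.sum(x**2)

def convergent_de(f, dim=5, pop_size=20, bounds=(-5.0, 5.0), max_gen=200,
                  p_uniform=0.05, p_break_scale=0.01):
    """Illustrative DE loop: DE/rand/1/bin plus a small probability of
    uniform (box-wide) mutation, and an haS-style step that fires with a
    probability proportional to how many inferior trials this generation
    produced. All names and rates are assumptions for the sketch."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    F, CR = 0.5, 0.9
    for gen in range(max_gen):
        inferior = 0
        for i in range(pop_size):
            if rng.random() < p_uniform:
                # uniform mutation keeps the search ergodic over the box
                trial = rng.uniform(lo, hi, dim)
            else:
                idx = [j for j in range(pop_size) if j != i]
                a, b, c = pop[rng.choice(idx, 3, replace=False)]
                mutant = a + F * (b - c)
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True   # force one mutant gene
                trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft       # greedy selection
            else:
                inferior += 1
        # haS-style step (our interpretation): with small probability
        # proportional to the inferior-trial count, resample the worst
        # individual uniformly, removing it from the population
        if rng.random() < p_break_scale * inferior / pop_size:
            w = np.argmax(fit)
            pop[w] = rng.uniform(lo, hi, dim)
            fit[w] = f(pop[w])
    best = np.argmin(fit)
    return pop[best], fit[best]
```

Greedy selection protects the best-so-far individual, so the occasional uniform resampling adds exploration without destroying convergence behaviour on this toy problem.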
Multiobjective programming for type-2 hierarchical fuzzy inference trees
This paper proposes a design of hierarchical fuzzy inference tree (HFIT). An HFIT produces an optimal tree-like structure: a natural hierarchy that achieves simplicity by combining several low-dimensional fuzzy inference systems (FISs), and in doing so provides a high degree of approximation accuracy. The construction of an HFIT takes place in two phases. First, nondominated-sorting-based multiobjective genetic programming (MOGP) is applied to obtain a simple tree structure (low model complexity) with high accuracy. Second, a differential evolution algorithm is applied to optimize the parameters of the obtained tree. Each node in the tree has a different combination of inputs, and the evolutionary process governs that combination. Hence, HFIT nodes are heterogeneous in nature, which leads to high diversity among the rules the HFIT generates. Additionally, the HFIT performs automatic feature selection, because the MOGP used for structural optimization accepts only the inputs relevant to the knowledge contained in the data. The HFIT was studied in the context of both type-1 and type-2 FISs, and its performance was evaluated on six application problems. Moreover, the proposed multiobjective HFIT was compared both theoretically and empirically with recently proposed FIS methods from the literature, such as McIT2FIS, TSCIT2FNN, SIT2FNN, RIT2FNS-WB, eT2FIS, MRIT2NFS, and IT2FNN-SVR. The results show that the HFIT produced less complex and more accurate models than most of the other methods. Hence, the proposed HFIT is an efficient and competitive alternative to the other FISs for function approximation and feature selection.
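The low-dimensional building block that an HFIT composes hierarchically can be sketched as a tiny fuzzy inference node. The version below is a two-input, two-rule, zero-order Takagi-Sugeno node with Gaussian membership functions and a product t-norm; these are illustrative choices for the sketch, not the paper's exact node definition, and the parameters shown are the kind that the differential evolution phase would tune.

```python
import numpy as np

def gauss(x, c, sigma):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def sugeno_node(x1, x2, centers, sigmas, consequents):
    """A two-input zero-order Takagi-Sugeno FIS node: each rule pairs one
    membership function per input with a constant consequent, and the
    output is the firing-strength-weighted average of the consequents.
    The product t-norm and parameter shapes are illustrative choices."""
    w = np.array([gauss(x1, c1, s1) * gauss(x2, c2, s2)
                  for (c1, c2), (s1, s2) in zip(centers, sigmas)])
    return float(w @ consequents / w.sum())

# two rules over inputs in [0, 1]; in an HFIT the evolutionary phases
# would choose which inputs feed the node and tune these parameters
centers = [(0.2, 0.2), (0.8, 0.8)]
sigmas = [(0.3, 0.3), (0.3, 0.3)]
consequents = np.array([0.0, 1.0])
y = sugeno_node(0.8, 0.8, centers, sigmas, consequents)
```

A tree is then built by feeding the outputs of such nodes, possibly together with raw inputs, into higher-level nodes, which is what keeps each individual FIS low-dimensional.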
Development of a Multi-Objective Optimization Capability for Heterogeneous Light Water Reactor Fuel Assemblies
As pressure grows on developed nations to move away from fossil fuel-based energy sources, so does the potential for nuclear energy to make a resurgence. However, the complex nature of the design process in nuclear engineering and a regulatory culture of ever-increasing safety standards create unique challenges for the nuclear industry. As in many engineering disciplines, the question is one of trade-offs between safety, performance, cost, and the time required to take a design from paper to real-life operation. The possibilities facing a designer are virtually unlimited, with fuel choice, layout and operating conditions just three of the many categories which interact with one another in a highly non-linear manner, making it difficult to quantitatively define these trade-offs. Deciding upon an ‘optimal’ design is therefore traditionally done through expert judgement and an iterative design process. Mathematical optimization methods offer a more formal alternative, employing algorithms to explore the myriad possibilities in a methodical manner, which can yield increased performance over expert designs. In this thesis, an extensive review of the literature revealed gaps which present opportunities for novel research. Two new algorithms are created with the ability to solve optimization problems with multiple objectives simultaneously, without requiring weighting or bias from the designer. They are then applied to a series of problems drawn from both the literature and real-world designs. The results demonstrate the algorithms’ effectiveness and robustness, as well as their ability to handle complex multi-physics problems with reasonably low computational requirements. This research offers an original and effective tool for performing optimization on nuclear fuel assembly design problems and advances the state of the art in both multi-objective optimization and its application to the nuclear engineering industry.
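Solving for multiple objectives "without requiring weighting" rests on the notion of Pareto dominance: a design is kept if no other design is at least as good in every objective and strictly better in one. A minimal sketch of that primitive (the sample objective vectors are made up; both objectives are minimized):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the Pareto front of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# e.g. two competing objectives such as cost vs. peaking factor (made up)
designs = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
front = non_dominated(designs)
# (3.0, 4.0) is dominated by (2.0, 3.0); (5.0, 5.0) by several points
```

Algorithms like those developed in the thesis return the whole front, leaving the final safety/performance/cost trade-off to the designer rather than baking it into a weighted sum.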
Contributions on evolutionary computation for statistical inference
Evolutionary Computation (EC) techniques were introduced in the 1960s for dealing with complex situations, one example being optimization problems that have no analytical solution or are computationally intractable; in many such cases these methods, named Evolutionary Algorithms (EAs), have been successfully applied. In statistics many complex problems arise, in particular concerning optimization. A general example is when the statistician needs to select, from a prohibitively large discrete set, just one element, which could be a model, a partition, an experiment, and so on: this is the case in model selection, cluster analysis, or design of experiments. In other situations there may be an intractable function of the data, such as a likelihood, which needs to be maximized, as happens in model parameter estimation. These kinds of problems are naturally well suited to EAs, and in the last 20 years a large number of papers have been concerned with applications of EAs to statistical issues.
The present dissertation is set within this part of the literature: it reports several implementations of EAs in statistics, focusing mainly on statistical inference problems. Original results are proposed, as well as overviews and surveys of several topics. EAs are employed and analyzed from various statistical points of view, showing and confirming their efficiency and flexibility.
The first proposal is devoted to parametric estimation problems. When EAs are employed in such analyses, a novel form of variability related to their stochastic elements is introduced. We analyze both the variability due to sampling, associated with the selected estimator, and the variability due to the EA itself. This analysis is set within the framework of the statistical-computational trade-off, crucial in present-day problems, by introducing cost functions related to both data acquisition and EA iterations. The proposed method is illustrated by means of model-building examples.
The subsequent chapter is concerned with EAs employed in Markov Chain Monte Carlo (MCMC) sampling. When sampling from a multimodal or highly correlated distribution, a possible strategy is to run several chains in parallel in order to improve their mixing. If these chains are allowed to interact with each other, many analogies with EC techniques can be observed, and this has led to research in many fields. The chapter reviews various methods from the literature that combine EC techniques with MCMC sampling, in order to identify specific and common procedures and to unify them within an EC framework.
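One concrete member of the family of interacting-chain samplers the chapter surveys is population MCMC with tempered chains and exchange (swap) moves; the swap is the simplest population-level interaction that EC-flavoured samplers generalize with crossover-like proposals. The sketch below uses a toy bimodal target (not from the thesis) and standard Metropolis updates per chain:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    """Toy bimodal target: equal mixture of N(3, 1) and N(-3, 1),
    up to a constant. A single Metropolis chain mixes poorly here."""
    return np.logaddexp(-0.5 * (x - 3.0)**2, -0.5 * (x + 3.0)**2)

def population_mcmc(n_iter=5000, temps=(1.0, 0.5, 0.2), step=1.0):
    """Chains at temperatures t target pi(x)^t; hotter chains cross
    between modes easily. Besides the per-chain Metropolis update,
    adjacent chains attempt an exchange move whose acceptance ratio
    preserves the joint tempered target."""
    chains = rng.normal(0.0, 1.0, len(temps))
    samples = []
    for _ in range(n_iter):
        for k, t in enumerate(temps):
            prop = chains[k] + step * rng.normal()
            if np.log(rng.random()) < t * (log_target(prop) - log_target(chains[k])):
                chains[k] = prop
        # exchange move between a random adjacent pair of temperatures
        k = rng.integers(len(temps) - 1)
        d = (temps[k] - temps[k + 1]) * (log_target(chains[k + 1]) - log_target(chains[k]))
        if np.log(rng.random()) < d:
            chains[k], chains[k + 1] = chains[k + 1], chains[k]
        samples.append(chains[0])  # the cold chain targets pi itself
    return np.array(samples)
```

Replacing the swap with recombination of chain states (subject to a valid acceptance ratio) is exactly where the EC analogy enters.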
In the last proposal we present a complex time series model and an identification procedure based on Genetic Algorithms (GAs). The model is capable of dealing with seasonality, through Periodic AutoRegressive (PAR) modelling, and with structural changes over time, leading to a nonstationary structure. Since the number of parameters and of possible change points is very large, GAs are appropriate for identifying such a model. The effectiveness of the procedure is shown on both simulated data and real examples, the latter referring to river flow data in hydrology.
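The PAR backbone of that model is easy to sketch: a PAR(1) process lets the autoregressive coefficient depend on the season. The coefficients below are made up for the illustration; the thesis model additionally allows them to change at unknown time points, which is what blows up the discrete search space and motivates a GA-based identification.

```python
import numpy as np

rng = np.random.default_rng(2)

# PAR(1) with s seasons: x_t = phi[t mod s] * x_{t-1} + eps_t
s = 4
phi = np.array([0.9, -0.3, 0.5, 0.1])   # illustrative seasonal coefficients
n = 4000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t % s] * x[t - 1] + rng.normal(scale=0.5)

# per-season least-squares estimates of the phi's: one simple
# regression of x_t on x_{t-1} restricted to each season
phi_hat = np.zeros(s)
for k in range(s):
    ts = np.array([t for t in range(1, n) if t % s == k])
    prev, cur = x[ts - 1], x[ts]
    phi_hat[k] = prev @ cur / (prev @ prev)
```

Once change points are admitted, each candidate segmentation requires refitting these regressions, so a GA chromosome can encode the change-point positions while the fitness is a penalized likelihood of the resulting fit.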
The thesis concludes with some final remarks, also concerning future work.
Sufficient Conditions for Global Convergence of Differential Evolution Algorithm
The differential evolution (DE) algorithm is one of the most powerful stochastic real-parameter optimization algorithms. Theoretical studies of DE have gradually attracted the attention of more and more researchers; however, little theoretical work has addressed the convergence conditions for DE. In this paper, a sufficient condition and a corollary for the convergence of DE to the global optima are derived using infinite products. A DE algorithm framework satisfying the convergence conditions is then established, and two common mutation operators are proved to satisfy this framework. Numerical experiments are conducted in two parts. One visualizes how five convergent DE variants, based on the classical DE algorithms, escape from a local optimal set on two low-dimensional functions. The other tests the performance of a modified DE algorithm, inspired by the convergent algorithm framework, on the CEC2005 benchmarks.
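The infinite-product argument behind such sufficient conditions can be sketched as follows; this is our reconstruction from the abstract, not the paper's exact statement:

```latex
% Let S_eps be an eps-neighborhood of the global optimum and p_j a lower
% bound on the probability that generation j places an individual in S_eps.
P\big(X_j \notin S_\varepsilon,\ j = 1,\dots,t\big)
  \;\le\; \prod_{j=1}^{t}\,(1 - p_j)
  \;\le\; \exp\!\Big(-\sum_{j=1}^{t} p_j\Big).
% If \sum_j p_j diverges (e.g. p_j >= c > 0, as under a mutation with full
% support on the search box), the bound tends to 0, so with elitist
% selection the best-so-far solution converges to the global optimum
% in probability.
```

The sufficient condition then reduces to checking that a given mutation operator keeps the per-generation hitting probability bounded away from zero (or at least non-summable).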