Global Optima for Size Optimization Benchmarks by Branch and Bound Principles
This paper searches for global optima of size optimization benchmarks using a method based on branch and bound principles. The goal is to demonstrate the process of finding these global optima on the basis of two examples. A suitable parallelization strategy is used to reduce the computational demands. The optima reported in the literature are compared with the optima obtained in this work.
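Branch and bound certifies global optimality by recursively fixing discrete design variables and pruning any subtree whose bound rules it out. The following is a minimal sketch of that principle on a toy discrete sizing problem; the member lengths, forces, area catalogue, and compliance budget are all invented for illustration, and the paper's actual benchmarks and parallelization strategy are not reproduced here.

```python
# Hypothetical toy data (not from the paper): a 4-member structure with
# member lengths L, axial forces F, and a discrete catalogue of cross-
# section areas. Weight is proportional to sum(L[i] * A[i]); a single
# coupling constraint caps total compliance sum(F[i]**2 * L[i] / A[i]).
L = [1.0, 1.0, 1.4, 1.4]          # member lengths (assumed)
F = [10.0, 8.0, 6.0, 12.0]        # member axial forces (assumed)
CATALOGUE = [1.0, 2.0, 4.0, 8.0]  # available cross-section areas (assumed)
C_MAX = 120.0                     # compliance budget (assumed)

def weight(areas):
    return sum(l * a for l, a in zip(L, areas))

def compliance(areas):
    return sum(f * f * l / a for f, l, a in zip(F, L, areas))

def branch_and_bound():
    best_w, best_x = float("inf"), None
    a_min, a_max = min(CATALOGUE), max(CATALOGUE)

    def recurse(partial):
        nonlocal best_w, best_x
        rest = len(L) - len(partial)
        # Bound prune: even the lightest completion cannot beat the incumbent.
        if weight(partial + [a_min] * rest) >= best_w:
            return
        # Feasibility prune: even the stiffest completion violates C_MAX.
        if compliance(partial + [a_max] * rest) > C_MAX:
            return
        if rest == 0:                       # feasible leaf that improves the incumbent
            best_w, best_x = weight(partial), list(partial)
            return
        for a in sorted(CATALOGUE):         # branch on the next member's area
            recurse(partial + [a])

    recurse([])
    return best_w, best_x

if __name__ == "__main__":
    w, x = branch_and_bound()
    print(f"global optimum: weight={w:.2f}, areas={x}")
```

Because the pruning rules discard only completions that are provably infeasible or provably no lighter than the incumbent, the returned design is a certified global optimum of the toy problem; parallelization, as in the paper, would distribute the subtrees across workers.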
Recursive Percentage-Based Hybrid Pattern Training for Supervised Learning
Supervised learning algorithms, often used to find the I/O relationship in data, tend to become trapped in local optima rather than the desired global optima. In this paper, we discuss the recursive percentage-based hybrid pattern (RPHP) learning algorithm. The algorithm uses real-coded genetic algorithm based global and local searches to find a set of pseudo global optimal solutions. Each pseudo global optimum is a local optimal solution from the point of view of all the patterns, but globally optimal from the point of view of a subset of patterns. Together with RPHP, a k-nearest-neighbor algorithm is used as a second-level pattern distributor that routes each test pattern to the appropriate solution. We also show theoretically the condition under which finding several pseudo global optimal solutions requires a shorter training time than finding a single global optimal solution. Since the difficulty of curve-fitting problems is easily estimated, we evaluate the RPHP algorithm on such problems and compare it with three counterparts to show the benefits of hybrid learning and active recursive subset selection. RPHP shows clearly superior performance. We conclude by identifying weaknesses of the RPHP algorithm and proposing possible remedies.
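To make the distributor idea concrete, the sketch below keeps only the second-level k-nearest-neighbor routing: several sub-models each "own" a subset of training patterns, and a test pattern is handled by the sub-model that owns the majority of its nearest training neighbors. The GA-driven recursive subset selection of the paper is replaced by a fixed split, and the sub-models are simple least-squares polynomials; all data and model choices are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy curve-fitting task (assumed): a piecewise target that one low-order
# polynomial fits poorly but two specialized sub-models fit well.
x = np.sort(rng.uniform(-1.0, 1.0, 200))
y = np.where(x < 0.0, np.sin(6 * x), 0.5 * x + 1.0)

# Stand-in for RPHP's recursive subset selection: a fixed split into two
# subsets; each sub-model plays the role of a "pseudo global optimum"
# for its own subset of patterns.
subsets = [x < 0.0, x >= 0.0]
models = [np.polyfit(x[m], y[m], 3) for m in subsets]
labels = np.select(subsets, list(range(len(subsets))))  # owner of each pattern

def predict(x_test, k=5):
    """Route each test pattern to a sub-model via k-NN over training x."""
    out = np.empty_like(x_test)
    for i, xt in enumerate(x_test):
        nn = np.argsort(np.abs(x - xt))[:k]       # k nearest training patterns
        owner = np.bincount(labels[nn]).argmax()  # majority-vote distributor
        out[i] = np.polyval(models[owner], xt)
    return out

x_test = np.linspace(-1.0, 1.0, 9)
print(np.round(predict(x_test), 3))
```

The design point this illustrates is the one the abstract argues for: each sub-model only needs to be optimal for its own subset, which is an easier search problem than fitting all patterns with a single globally optimal model.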
Self-Adaptation and Global Convergence: A Counter-Example
The self-adaptation of the mutation distribution is a distinguishing feature of evolutionary algorithms that optimize over continuous variables. It is widely recognized that self-adaptation accelerates the search for optima and enhances the ability to locate optima accurately, but it is generally unclear whether these optima are global ones. Here, it is proven that the probability of convergence to the global optimum is in general less than one, even if the objective function is continuous.
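The phenomenon behind the proof is easy to reproduce empirically. The sketch below runs a standard (1,λ) evolution strategy with log-normal self-adaptation of the mutation strength on a bimodal objective; started near the inferior local minimum, the step size usually shrinks faster than the search can jump the barrier, so only a fraction of runs reach the global optimum. The objective and all parameters are invented for illustration and are not the paper's formal counter-example construction.

```python
import math
import random

def f(x):
    # Bimodal toy objective (assumed): lower envelope of two parabolas,
    # global minimum f(-1) = -1.0, local minimum f(1.5) = -0.5.
    return min((x + 1.0) ** 2 - 1.0, (x - 1.5) ** 2 - 0.5)

def es_run(x0=1.5, sigma0=0.3, lam=10, tau=0.5, gens=200, seed=1):
    """(1,lambda)-ES with log-normal self-adaptation of sigma."""
    random.seed(seed)
    x, sigma = x0, sigma0
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            # each offspring mutates its own step size, then its position
            s = sigma * math.exp(tau * random.gauss(0.0, 1.0))
            offspring.append((s, x + s * random.gauss(0.0, 1.0)))
        # comma selection: the best offspring replaces the parent
        sigma, x = min(offspring, key=lambda o: f(o[1]))
    return x

# Count how many independent runs end near the global optimum at x = -1.
hits = sum(abs(es_run(seed=s) + 1.0) < 0.1 for s in range(100))
print(f"runs reaching the global optimum: {hits}/100")
```

Because the hit rate stays strictly below 100%, the experiment matches the abstract's claim: self-adaptation yields fast, precise local convergence, but convergence to the global optimum occurs only with probability less than one.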
