MIDACO parallelization scalability on 200 MINLP benchmarks
This contribution presents a numerical evaluation of the impact of parallelization on
the performance of an evolutionary algorithm for mixed-integer nonlinear programming
(MINLP). On a set of 200 MINLP benchmarks, the performance of the MIDACO solver is
assessed with a parallelization factor that is gradually increased from one to three hundred.
The results demonstrate that parallelized function evaluation can significantly improve
the efficiency of the algorithm. Furthermore, the results indicate that the scale-up in
efficiency is approximately linear, which suggests that this approach remains promising
even for very large parallelization factors. The presented research is especially
relevant to CPU-time-intensive real-world applications, where only a small number
of serially processed function evaluations can be calculated in reasonable time.
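The parallelization scheme described above can be sketched as concurrent evaluation of one generation's candidate solutions. The pool type, worker count, and toy objective below are illustrative assumptions for the sketch, not MIDACO's actual implementation; the point is only that with P workers the wall-clock cost of L evaluations per generation drops toward L/P evaluation periods, the near-linear scale-up the study reports.

```python
from concurrent.futures import ThreadPoolExecutor

def sphere(x):
    # Toy objective standing in for an expensive black-box evaluation.
    return sum(v * v for v in x)

def evaluate_population(population, objective, workers=4):
    """Evaluate all candidates of one generation concurrently.

    `workers` corresponds to the parallelization factor; the executor
    type is an illustrative choice for this sketch.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves the order of the population, so fitness
        # values line up with the candidates that produced them.
        return list(pool.map(objective, population))

fitness = evaluate_population([[1, 2], [3, 4], [0, 0]], sphere)
```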
Chi-square matrix: An approach for building-block identification
Abstract. This paper presents a line of research in genetic algorithms (GAs) called building-block identification. Building blocks (BBs) are common structures inferred from a set of solutions. In the simple GA, the crossover operator plays an important role in mixing BBs. However, crossover may disrupt the BBs because the cut point is chosen at random. Therefore, the BBs need to be identified explicitly so that solutions can be mixed efficiently. Let S be a set of binary solutions, with each solution s = b1 ... bℓ, bi ∈ {0, 1}. We construct a symmetric matrix whose element in row i and column j, denoted mij, is the chi-square statistic of the variables bi and bj. The larger mij is, the stronger the dependency between bit i and bit j. If mij is high, bit i and bit j should be passed together to prevent BB disruption. Our approach is validated on additively decomposable functions (ADFs) and hierarchically decomposable functions (HDFs). In terms of scalability, our approach shows a polynomial relationship between the number of function evaluations required to reach the optimum and the problem size. A comparison between the chi-square matrix and the hierarchical Bayesian optimization algorithm (hBOA) shows that the matrix computation is 10 times faster and uses 10 times less memory than constructing the Bayesian network.
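The matrix construction described in the abstract can be sketched directly: for each pair of bit positions (i, j), tally a 2×2 contingency table of co-occurring bit values over the solution set S and compute the standard chi-square statistic against the counts expected under independence. The function below is a minimal sketch of that computation (the function name and use of NumPy are this sketch's choices, not the paper's code).

```python
import numpy as np

def chi_square_matrix(S):
    """Pairwise chi-square matrix m_ij for a set of binary solutions.

    S has shape (n_solutions, ell); a larger m_ij indicates a stronger
    dependency between bit positions i and j.
    """
    S = np.asarray(S)
    n, ell = S.shape
    M = np.zeros((ell, ell))
    for i in range(ell):
        for j in range(i + 1, ell):
            # 2x2 contingency table of observed co-occurrence counts.
            obs = np.zeros((2, 2))
            for a in (0, 1):
                for b in (0, 1):
                    obs[a, b] = np.sum((S[:, i] == a) & (S[:, j] == b))
            # Expected counts under the independence hypothesis.
            row = obs.sum(axis=1, keepdims=True)
            col = obs.sum(axis=0, keepdims=True)
            exp = row @ col / n
            # Chi-square statistic; skip cells with zero expectation.
            mask = exp > 0
            M[i, j] = M[j, i] = np.sum((obs[mask] - exp[mask]) ** 2 / exp[mask])
    return M
```

For example, on solutions where bits 0 and 1 always agree but bit 2 varies freely, m01 is maximal (equal to n for a balanced sample) while m02 is zero, so a BB-preserving crossover would keep bits 0 and 1 together.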