Global optimization methods tend to focus on exploitation of known optima and often get stuck in local optima. For problems with costly evaluations, it is more effective to first identify the high-performance regions. This thesis proposes an algorithm designed to provide good coverage of the high-performance regions of an objective function using few objective function evaluations. The algorithm performs consecutive Metropolis-Hastings random walks on an RBFN meta-model of the objective function. After each walk, it adds the endpoint to the training set and retrains the RBFN. Experiments show that the algorithm explores good solutions in significantly fewer objective function evaluations than state-of-the-art algorithms such as Niching ES. The efficiency of the algorithm can be significantly increased by raising the acceptance function to some power. The resulting map of the high-performance regions can be used to initialize a greedier optimization method. Moreover, the MIMH algorithm can readily be used to sample efficiently from a distribution whose shape is determined by a costly evaluation.
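The loop described above — a Metropolis-Hastings walk on an RBFN surrogate, followed by evaluating the walk's endpoint and retraining — can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the Gaussian kernel width, step size, walk length, and the `power` exponent on the acceptance probability are all assumed placeholder values, and the helper names (`rbf_fit`, `mh_walk`, `mimh`) are hypothetical.

```python
import numpy as np

GAMMA = 1.0  # assumed Gaussian-RBF kernel width

def rbf_fit(X, y):
    # Fit interpolation weights w by solving Phi w = y for a Gaussian RBFN.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-GAMMA * d2)
    return np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)

def rbf_predict(X, w, Xq):
    # Evaluate the surrogate at query points Xq.
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-GAMMA * d2) @ w

def mh_walk(X, w, x0, steps=50, step_size=0.2, power=1.0, rng=None):
    # Metropolis-Hastings random walk on the surrogate (minimization),
    # with the acceptance probability raised to `power` to sharpen it.
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    fx = rbf_predict(X, w, x[None, :])[0]
    for _ in range(steps):
        xp = x + rng.normal(0.0, step_size, size=x.shape)
        fxp = rbf_predict(X, w, xp[None, :])[0]
        accept = np.exp(min(0.0, -(fxp - fx))) ** power
        if rng.random() < accept:
            x, fx = xp, fxp
    return x

def mimh(f, lo, hi, n_init=5, n_iters=20, seed=0):
    # Consecutive walks on the meta-model; each endpoint is evaluated
    # with the costly objective f, added to the training set, and the
    # RBFN is retrained before the next walk.
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n_init, len(lo)))
    y = np.array([f(x) for x in X])
    for _ in range(n_iters):
        w = rbf_fit(X, y)
        endpoint = mh_walk(X, w, X[np.argmin(y)], rng=rng)
        X = np.vstack([X, endpoint])
        y = np.append(y, f(endpoint))  # one costly evaluation per walk
    return X, y
```

On a cheap test function such as a 2-D quadratic, `mimh(lambda x: float((x**2).sum()), np.array([-2.0, -2.0]), np.array([2.0, 2.0]))` spends only `n_init + n_iters` objective evaluations while the walks themselves run entirely on the surrogate.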