3 research outputs found

    OPT-GAN: Black-Box Global Optimization via Generative Adversarial Nets

    Black-box optimization (BBO) algorithms are concerned with finding the best solutions for problems whose analytical details are unavailable. Most classical methods for such problems rely on strong, fixed a priori assumptions, such as Gaussianity. However, complex real-world problems, especially when the global optimum is sought, can deviate substantially from these a priori assumptions because of their diversity, creating unexpected obstacles for such methods. In this study, we propose OPT-GAN, a generative adversarial net-based broad-spectrum global optimizer that gradually estimates the distribution of the optimum, with strategies to balance the exploration-exploitation trade-off. It has the potential to adapt better to the regularity and structure of diversified landscapes than methods with a fixed prior, e.g., a Gaussian assumption or separability. Experiments on BBO benchmarking problems and several other benchmarks with diversified landscapes show that OPT-GAN outperforms other traditional and neural net-based BBO algorithms.

    Comment: M. Lu and S. Ning contribute equally. Submitted to IEEE Transactions on Neural Networks and Learning Systems.
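
    The abstract describes the general mechanism: a GAN learns the distribution of promising solutions while the optimizer alternates between sampling from the generator (exploitation) and random sampling (exploration). Below is a minimal sketch of that idea in PyTorch, assuming a toy sphere objective; it is not the authors' implementation, and the architecture, archive size, and hyperparameters are illustrative.

```python
# Hedged sketch of a GAN-based black-box optimizer in the spirit of OPT-GAN:
# a GAN is trained to model the distribution of elite solutions, and new
# candidates mix generator samples (exploitation) with uniform noise
# (exploration). All names and settings are illustrative, not the paper's.
import torch
import torch.nn as nn

def sphere(x):                       # toy black-box objective (minimize)
    return (x ** 2).sum(dim=1)

dim, latent, n_eval = 5, 8, 64
G = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, dim))
D = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

archive = torch.rand(256, dim) * 10 - 5            # random initial designs
for it in range(100):
    # Exploitation: candidates from G; exploration: uniform random points.
    cand = torch.cat([G(torch.randn(n_eval, latent)).detach(),
                      torch.rand(n_eval, dim) * 10 - 5])
    archive = torch.cat([archive, cand])
    archive = archive[sphere(archive).argsort()][:256]   # keep elites only

    # Train D to separate elites from generated samples, G to fool D,
    # so G gradually concentrates on the region of the optimum.
    real, fake = archive[:64], G(torch.randn(64, latent))
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

best = archive[0]
print("best value:", sphere(best.unsqueeze(0)).item())
```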

    Benchmarking GNN-CMA-ES on the BBOB noiseless testbed

    Popular machine learning estimators involve regularization parameters that can be challenging to tune, and standard strategies rely on grid search for this task. In this paper, we revisit techniques for approximating the regularization path up to a predefined tolerance $\epsilon$ in a unified framework and show that its complexity is $O(1/\sqrt[d]{\epsilon})$ for uniformly convex losses of order $d \geq 2$ and $O(1/\sqrt{\epsilon})$ for generalized self-concordant functions. This framework encompasses not only least squares but also logistic regression, a case that, as far as we know, was not handled as precisely in previous works. We leverage our technique to provide refined bounds on the validation error as well as a practical algorithm for hyperparameter tuning. The latter has a global convergence guarantee when targeting a prescribed accuracy on the validation set. Last but not least, our approach relieves the practitioner of the (often neglected) task of selecting a stopping criterion when optimizing over the training set: our method automatically calibrates this criterion based on the targeted accuracy on the validation set.
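
    The core mechanism the abstract builds on is traversing the regularization path with warm starts, so that each solve along the path reuses the previous solution instead of starting from scratch. Below is a minimal sketch of that mechanism using scikit-learn's logistic regression, assuming a synthetic dataset; the fixed geometric grid and argmin selection stand in for the paper's adaptive, tolerance-driven step-size scheme, which this sketch does not implement.

```python
# Hedged sketch: warm-started traversal of a regularization path, the basic
# mechanism behind tolerance-driven path approximation. Grid and stopping
# rule are illustrative, not the paper's adaptive scheme.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

clf = LogisticRegression(penalty="l2", solver="lbfgs",
                         warm_start=True, max_iter=200)
best_err, best_C = np.inf, None
for C in np.logspace(-3, 3, 25):            # geometric grid over 1/lambda
    clf.C = C
    clf.fit(X_tr, y_tr)                     # warm start reuses previous coef_
    err = 1.0 - clf.score(X_val, y_val)     # validation error along the path
    if err < best_err:
        best_err, best_C = err, C
print(f"best C={best_C:.4g}, validation error={best_err:.3f}")
```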

    Markov models of biomolecular systems
