
    The SOS Platform: Designing, Tuning and Statistically Benchmarking Optimisation Algorithms

    We present Stochastic Optimisation Software (SOS), a Java platform facilitating the algorithmic design process and the evaluation of metaheuristic optimisation algorithms. SOS reduces the burden of coding miscellaneous methods for dealing with several bothersome and time-demanding tasks such as parameter tuning, implementation of comparison algorithms and testbed problems, collecting and processing data to display results, measuring algorithmic overhead, etc. SOS provides numerous off-the-shelf methods including: (1) customised implementations of statistical tests, such as the Wilcoxon rank-sum test and the Holm–Bonferroni procedure, for comparing the performances of optimisation algorithms and automatically generating result tables in PDF and LaTeX formats; (2) the implementation of an original advanced statistical routine for accurately comparing pairs of stochastic optimisation algorithms; (3) the implementation of a novel testbed suite for continuous optimisation, derived from the IEEE CEC 2014 benchmark, allowing for controlled activation of the rotation on each testbed function. Moreover, we briefly comment on the current state of the literature in stochastic optimisation and highlight similarities shared by modern metaheuristics inspired by nature. We argue that the vast majority of these algorithms are simply a reformulation of the same methods and that metaheuristics for optimisation should be treated as stochastic processes, with less emphasis on the inspiring metaphor behind them.
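    The statistical workflow described above, comparing repeated runs of optimisation algorithms with a rank-based test and a multiple-comparison correction, can be illustrated with a minimal Python sketch. This is not the SOS API: the data, algorithm names and helper function below are hypothetical, using scipy's Wilcoxon rank-sum test and a hand-rolled Holm–Bonferroni step-down.

```python
# Minimal sketch (not the SOS API): compare final fitness values of a reference
# optimiser against several competitors with the Wilcoxon rank-sum test, then
# apply the Holm–Bonferroni procedure to the resulting p-values.
import numpy as np
from scipy.stats import ranksums

def holm_bonferroni(p_values, alpha=0.05):
    """Return a list of booleans: True where the null hypothesis is rejected."""
    order = np.argsort(p_values)   # indices of p-values, smallest first
    m = len(p_values)
    reject = [False] * m
    for rank, idx in enumerate(order):
        # Compare the k-th smallest p-value against alpha / (m - k)
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break                  # stop at the first non-rejection
    return reject

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 30)   # 30 runs of the reference algorithm (hypothetical data)
competitors = {name: rng.normal(mu, 1.0, 30)
               for name, mu in [("DE", 0.2), ("PSO", 0.8), ("GA", 1.5)]}

p_values = [ranksums(reference, runs).pvalue for runs in competitors.values()]
for (name, _), p, rej in zip(competitors.items(), p_values, holm_bonferroni(p_values)):
    print(f"{name}: p={p:.4f}, significant after Holm-Bonferroni: {rej}")
```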

    Artificial Intelligence for the design of symmetric cryptographic primitives

    Algorithms and the Foundations of Software Technology

    A Classification of Hyper-heuristic Approaches

    The current state of the art in hyper-heuristic research comprises a set of approaches that share the common goal of automating the design and adaptation of heuristic methods to solve hard computational search problems. The main goal is to produce more generally applicable search methodologies. In this chapter we present an overview of previous categorisations of hyper-heuristics and provide a unified classification and definition which captures the work being undertaken in this field. We distinguish between two main hyper-heuristic categories: heuristic selection and heuristic generation. Some representative examples of each category are discussed in detail. Our goal is both to clarify the main features of existing techniques and to suggest new directions for hyper-heuristic research.
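    To make the first category concrete, the following toy Python sketch shows what a *selection* hyper-heuristic looks like: a high-level loop that repeatedly picks one low-level heuristic and keeps the result only if it improves the objective. It is not taken from the chapter; the problem, heuristics and acceptance rule are illustrative placeholders.

```python
# Toy sketch (not from the chapter): a simple selection hyper-heuristic that
# chooses among low-level heuristics at random and accepts only improvements.
import random

def selection_hyper_heuristic(initial, objective, low_level_heuristics, steps=1000):
    current, best = initial, objective(initial)
    for _ in range(steps):
        heuristic = random.choice(low_level_heuristics)   # heuristic selection
        candidate = heuristic(current)
        value = objective(candidate)
        if value < best:                                   # accept only improvements
            current, best = candidate, value
    return current, best

# Hypothetical usage: minimise a quadratic with two simple perturbation heuristics.
perturb_small = lambda x: x + random.uniform(-0.1, 0.1)
perturb_large = lambda x: x + random.uniform(-1.0, 1.0)
solution, value = selection_hyper_heuristic(10.0, lambda x: x * x,
                                            [perturb_small, perturb_large])
print(solution, value)
```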

    Genetic programming benchmarks: looking back and looking forward

    Mcdermott, J., Kronberger, G., Orzechowski, P., Vanneschi, L., Manzoni, L., Kalkreuth, R., & Castelli, M. (2022). Genetic programming benchmarks: looking back and looking forward. ACM SIGEVOlution, 15(3), 1-19. https://doi.org/10.1145/3578482.3578483 The top image shows a set of scales, intended to bring to mind the ideas of balance and fair experimentation that are the focus of our article on genetic programming benchmarks in this issue. Image by Elena Mozhvilo, made available under the Unsplash license at https://unsplash.com/photos/j06gLuKK0GM

    Hybrid optimizer for expeditious modeling of virtual urban environments

    Master's thesis. Informatics Engineering. Faculdade de Engenharia, Universidade do Porto. 200

    Benchmarking and analyzing iterative optimization heuristics with IOHprofiler

    Algorithms and the Foundations of Software Technology

    Automatic machine learning: methods, systems, challenges

    This open access book presents the first comprehensive overview of general methods in Automatic Machine Learning (AutoML), collects descriptions of existing systems based on these methods, and discusses the first international challenge of AutoML systems. The book serves as a point of entry into this quickly-developing field for researchers and advanced students alike, as well as providing a reference for practitioners aiming to use AutoML in their work. The recent success of commercial ML applications and the rapid growth of the field have created a high demand for off-the-shelf ML methods that can be used easily and without expert knowledge. Many of the recent machine learning successes crucially rely on human experts, who select appropriate ML architectures (deep learning architectures or more traditional ML workflows) and their hyperparameters; however, the field of AutoML targets a progressive automation of machine learning, based on principles from optimization and machine learning itself.
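    The core task that the book describes, automatically selecting model configurations and hyperparameters by treating the choice as an optimisation problem, can be sketched with plain random search in scikit-learn. This is not an example from the book; the dataset, estimator and search space below are illustrative placeholders.

```python
# Minimal sketch (not from the book): hyperparameter optimisation of the kind
# AutoML systems automate, here as a random search over a small random-forest
# configuration space with cross-validated scoring.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

search_space = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 3, 5, 10],
    "min_samples_split": [2, 5, 10],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=search_space,
    n_iter=10,          # evaluate 10 random configurations
    cv=3,               # 3-fold cross-validation per configuration
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```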