
    A statistical learning based approach for parameter fine-tuning of metaheuristics

    Metaheuristics are approximation methods used to solve combinatorial optimization problems. Their performance usually depends on a set of parameters that must be adjusted. Selecting appropriate parameter values is costly: it demands time as well as advanced analytical and problem-specific skills. This paper provides an overview of the principal approaches to the Parameter Setting Problem, focusing on the statistical procedures employed so far by the scientific community. In addition, a novel methodology is proposed and tested on an existing algorithm for solving the Multi-Depot Vehicle Routing Problem.
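    As a rough illustration of the kind of statistical procedure the abstract alludes to (not the paper's own methodology), the sketch below tunes a single parameter of a toy simulated-annealing solver: it samples candidate cooling rates, fits a quadratic response-surface model by least squares, and reports the predicted-best value. The solver, the objective function, and the parameter range are hypothetical placeholders.

```python
# Hedged sketch: response-surface style tuning of one parameter (the cooling
# rate) of a toy simulated-annealing solver. A generic illustration of
# statistical parameter tuning, not the paper's method.
import random
import numpy as np

def objective(x):
    # Toy 1-D minimisation target (Rastrigin-like).
    return x * x - 10 * np.cos(2 * np.pi * x) + 10

def simulated_annealing(cooling, steps=200, seed=0):
    rng = random.Random(seed)
    x, best, temp = rng.uniform(-5, 5), float("inf"), 1.0
    for _ in range(steps):
        cand = x + rng.gauss(0, 0.5)
        delta = objective(cand) - objective(x)
        if delta < 0 or rng.random() < np.exp(-delta / max(temp, 1e-9)):
            x = cand
        best = min(best, objective(x))
        temp *= cooling
    return best

# 1) Sample candidate parameter values and measure average performance.
coolings = np.linspace(0.80, 0.999, 15)
scores = [np.mean([simulated_annealing(c, seed=s) for s in range(5)]) for c in coolings]

# 2) Fit a quadratic response surface by ordinary least squares.
X = np.column_stack([np.ones_like(coolings), coolings, coolings ** 2])
beta, *_ = np.linalg.lstsq(X, np.array(scores), rcond=None)

# 3) Suggest the parameter value the fitted model predicts to be best.
grid = np.linspace(0.80, 0.999, 400)
pred = beta[0] + beta[1] * grid + beta[2] * grid ** 2
print("suggested cooling rate:", grid[np.argmin(pred)])
```

    The same pattern extends to several parameters at once; replicated runs per configuration (as above) help average out the stochasticity of the metaheuristic.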

    One-Step or Two-Step Optimization and the Overfitting Phenomenon: A Case Study on Time Series Classification

    For the last few decades, optimization has been developing at a fast rate. Bio-inspired optimization algorithms are metaheuristics inspired by nature. These algorithms have been applied to solve problems in engineering, economics, and other domains, as well as in branches of information technology such as networking and software engineering. Time series data mining is a field of information technology that has its share of these applications too. In previous work we showed how bio-inspired algorithms such as genetic algorithms and differential evolution can be used to find the locations of the breakpoints used in the symbolic aggregate approximation (SAX) representation of time series, and how particle swarm optimization, another well-known bio-inspired algorithm, can be used to assign weights to the different segments of that representation. In this paper we present, in two different approaches, a new meta-optimization process that produces optimal locations of the breakpoints together with optimal weights of the segments. The time series classification experiments we conducted give an interesting example of how overfitting, a frequently encountered problem in data mining that occurs when a model fits the training set too closely, can interfere with the optimization process and hide the superior performance of an optimization algorithm.
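    The following is a minimal sketch of the general setup the abstract describes, under the assumption that a candidate solution encodes SAX breakpoints together with per-segment weights and is scored by 1-NN classification accuracy. Random search stands in for the genetic, differential-evolution, and particle-swarm optimizers mentioned above, and the synthetic two-class dataset is purely illustrative.

```python
# Hedged sketch: a candidate = (SAX breakpoints, per-segment weights),
# scored by 1-NN accuracy. Random search replaces GA/DE/PSO for brevity.
import numpy as np

def paa(series, n_segments):
    # Piecewise Aggregate Approximation: mean of each equal-length segment.
    return np.array([seg.mean() for seg in np.array_split(series, n_segments)])

def symbolise(series, breakpoints, n_segments):
    # Map each segment mean to a symbol index via the breakpoints.
    return np.searchsorted(breakpoints, paa(series, n_segments))

def weighted_distance(sym_a, sym_b, weights):
    return float(np.sum(weights * np.abs(sym_a - sym_b)))

def one_nn_accuracy(train, labels, test, test_labels, breakpoints, weights, n_segments):
    train_syms = [symbolise(s, breakpoints, n_segments) for s in train]
    correct = 0
    for s, y in zip(test, test_labels):
        sym = symbolise(s, breakpoints, n_segments)
        dists = [weighted_distance(sym, t, weights) for t in train_syms]
        correct += labels[int(np.argmin(dists))] == y
    return correct / len(test)

# Tiny synthetic two-class problem (sine vs. noisy ramp), purely illustrative.
rng = np.random.default_rng(0)
def make(cls):
    t = np.linspace(0, 1, 64)
    base = np.sin(6 * t) if cls == 0 else t
    return base + 0.1 * rng.standard_normal(64)

train = [make(c) for c in (0, 1) for _ in range(10)]
labels = [0] * 10 + [1] * 10
test = [make(c) for c in (0, 1) for _ in range(10)]
test_labels = [0] * 10 + [1] * 10

n_segments, alphabet = 8, 4
best_acc, best_cand = -1.0, None
for _ in range(200):
    breakpoints = np.sort(rng.uniform(-1, 1, alphabet - 1))  # candidate breakpoints
    weights = rng.uniform(0, 1, n_segments)                   # candidate segment weights
    acc = one_nn_accuracy(train, labels, test, test_labels, breakpoints, weights, n_segments)
    if acc > best_acc:
        best_acc, best_cand = acc, (breakpoints, weights)
print("best accuracy:", best_acc)
print("best breakpoints:", np.round(best_cand[0], 2))
```

    Scoring candidates directly on the test set, as done here for brevity, is exactly the setting in which the overfitting discussed in the abstract can mask the true ranking of optimizers; a separate validation split would be used in practice.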

    A comparative study of adaptive mutation operators for metaheuristics

    Genetic algorithms (GAs) are a class of stochastic optimization methods inspired by the principles of natural evolution. Adaptation of strategy parameters and genetic operators has become an important and promising research area in GAs, and many researchers apply adaptive techniques to guide the search toward optimal solutions. Mutation is a key component of GAs: it is a variation operator that creates diversity in the population. This paper investigates several adaptive mutation operators for GAs, including population-level and gene-level adaptive mutation operators, and compares their performance on a set of uni-modal and multi-modal benchmark problems. The experimental results show that the gene-level adaptive mutation operators are usually more efficient than the population-level ones.
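    As a hedged sketch of the two adaptation levels (generic illustrations, not the specific operators compared in the paper), the snippet below contrasts a population-level operator, which adapts one shared mutation rate from overall gene diversity, with a gene-level operator, which adapts a separate rate for each gene position.

```python
# Hedged sketch: population-level vs. gene-level adaptive mutation for a
# binary-encoded GA. Both rules raise mutation where diversity has been lost.
import random

def population_level_mutation(pop, base_rate=0.01, max_rate=0.2):
    # One shared rate for all genes, increased when the population as a
    # whole has converged (low average gene-wise diversity).
    n = len(pop[0])
    freq = [sum(ind[i] for ind in pop) / len(pop) for i in range(n)]
    diversity = sum(4 * f * (1 - f) for f in freq) / n  # 1 = maximally diverse
    rate = base_rate + (max_rate - base_rate) * (1 - diversity)
    return [[1 - g if random.random() < rate else g for g in ind] for ind in pop]

def gene_level_mutation(pop, base_rate=0.01, max_rate=0.2):
    # A separate rate per gene position: positions that have nearly converged
    # across the population are mutated more aggressively.
    n = len(pop[0])
    freq = [sum(ind[i] for ind in pop) / len(pop) for i in range(n)]
    rates = [base_rate + (max_rate - base_rate) * (1 - 4 * f * (1 - f)) for f in freq]
    return [[1 - g if random.random() < rates[i] else g
             for i, g in enumerate(ind)] for ind in pop]

# Usage on a random binary population.
random.seed(0)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
print(sum(map(sum, population_level_mutation(pop))),
      sum(map(sum, gene_level_mutation(pop))))
```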