
    Metaheuristics “In the Large”

    Get PDF
    Many people have generously given their time to the various activities of the MitL initiative. Particular gratitude is due to Adam Barwell, John A. Clark, Patrick De Causmaecker, Emma Hart, Zoltan A. Kocsis, Ben Kovitz, Krzysztof Krawiec, John McCall, Nelishia Pillay, Kevin Sim, Jim Smith, Thomas Stützle, Eric Taillard and Stefan Wagner. J. Swan acknowledges the support of UK EPSRC grant EP/J017515/1 and the EU H2020 SAFIRE Factories project. P. García-Sánchez and J. J. Merelo acknowledge the support of grant TIN2017-85727-C4-2-P from the Spanish Ministry of Economy and Competitiveness. M. Wagner acknowledges the support of Australian Research Council grants DE160100850 and DP200102364.

    Following decades of sustained improvement, metaheuristics are one of the great success stories of optimization research. However, in order for research in metaheuristics to avoid fragmentation and a lack of reproducibility, there is a pressing need for stronger scientific and computational infrastructure to support the development, analysis and comparison of new approaches. To this end, we present the vision and progress of the Metaheuristics “In the Large” project. The conceptual underpinnings of the project are: truly extensible algorithm templates that support reuse without modification, white-box problem descriptions that provide generic support for the injection of domain-specific knowledge, and remotely accessible frameworks, components and problems that will enhance reproducibility and accelerate the field’s progress. We argue that, via such principled choice of infrastructure support, the field can pursue a higher level of scientific enquiry. We describe our vision and report on progress, showing how the adoption of common protocols for all metaheuristics can help liberate the potential of the field, easing the exploration of the design space of metaheuristics.
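    The “truly extensible algorithm templates” idea can be made concrete with a minimal sketch (an illustration only, not the project’s actual API; all names below are hypothetical): a local-search template that is itself never modified, with new behaviour obtained purely by injecting different components.

```python
from typing import Callable, TypeVar

S = TypeVar("S")  # solution type

# Hypothetical component signatures; the template below never changes,
# and new metaheuristics are obtained by swapping these functions.
Perturb = Callable[[S], S]               # propose a neighbouring solution
Accept = Callable[[float, float], bool]  # (incumbent cost, candidate cost) -> accept?
Finish = Callable[[int], bool]           # iteration count -> stop?

def local_search(initial: S, cost: Callable[[S], float],
                 perturb: Perturb, accept: Accept, finish: Finish) -> S:
    """A fixed template: behaviour varies only via the injected components."""
    incumbent, inc_cost = initial, cost(initial)
    iteration = 0
    while not finish(iteration):
        candidate = perturb(incumbent)
        cand_cost = cost(candidate)
        if accept(inc_cost, cand_cost):
            incumbent, inc_cost = candidate, cand_cost
        iteration += 1
    return incumbent
```

    On this reading, hill climbing, simulated annealing and late-acceptance variants differ only in the accept component (an annealing accept can carry its temperature in a closure), which is the reuse-without-modification property the abstract describes.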

    Offline Learning for Selection Hyper-heuristics with Elman Networks

    Get PDF
    This is the author accepted manuscript; the final version is available from the publisher via the link in this record. Offline selection hyper-heuristics are machine learning methods that are trained on heuristic selections to create an algorithm that is tuned for a particular problem domain. In this work, a simple selection hyper-heuristic is executed on a number of computationally hard benchmark optimisation problems, and the resulting sequences of low-level heuristic selections and objective function values are used to construct an offline learning database. An Elman network is trained on sequences of heuristic selections chosen from the offline database, and the network’s ability to learn and generalise from these sequences is evaluated. The networks are trained using a leave-one-out cross-validation methodology and the sequences of heuristic selections they produce are tested on benchmark problems drawn from the HyFlex set. The results demonstrate that the Elman network is capable of intra-domain learning and generalisation with 99% confidence, and produces better results than the training sequences in many cases. When the network was trained using an inter-domain training set, it did not exhibit generalisation, indicating that inter-domain generalisation is a harder problem and that strategies learned in one domain cannot necessarily be transferred to another.
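    As a rough sketch of this training setup (not the paper’s code; the heuristic-set size, network width and optimiser settings below are placeholder assumptions), note that PyTorch’s nn.RNN is an Elman-style recurrent network, so next-heuristic prediction over selection sequences can be prototyped as:

```python
import torch
import torch.nn as nn

NUM_HEURISTICS = 10  # hypothetical size of the low-level heuristic set

class NextHeuristic(nn.Module):
    """Predict the next low-level heuristic from the selections so far."""
    def __init__(self, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(NUM_HEURISTICS, 16)
        self.rnn = nn.RNN(16, hidden, batch_first=True)  # Elman RNN (tanh)
        self.out = nn.Linear(hidden, NUM_HEURISTICS)

    def forward(self, seq):               # seq: (batch, time) heuristic ids
        h, _ = self.rnn(self.embed(seq))  # (batch, time, hidden)
        return self.out(h)                # per-timestep logits

model = NextHeuristic()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy stand-in for a batch of sequences from the offline database.
seqs = torch.randint(0, NUM_HEURISTICS, (8, 20))
opt.zero_grad()
logits = model(seqs[:, :-1])              # predict each following selection
loss = loss_fn(logits.reshape(-1, NUM_HEURISTICS), seqs[:, 1:].reshape(-1))
loss.backward()
opt.step()
```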

    A harmony search algorithm for nurse rostering problems

    Get PDF
    Harmony search algorithm (HSA) is a relatively new nature-inspired algorithm. It evolves solutions in the problem search space by mimicking the musical improvisation process in seeking agreeable harmony as measured by aesthetic standards. The nurse rostering problem (NRP) is a well-known NP-hard scheduling problem that aims at allocating the required workload to the available staff nurses at healthcare organizations, meeting both operational requirements and a range of preferences. This work investigates research issues of parameter settings in HSA and the application of HSA to effectively solve complex NRPs. Since most NRP algorithms are highly problem- (or even instance-) dependent, the performance of our proposed HSA is evaluated on two sets of very different nurse rostering problems. The first set is a real-world dataset obtained from a large hospital in Malaysia. Experimental results show that our proposed HSA produces better-quality rosters for all considered instances than a genetic algorithm (implemented herein). The second is a set of well-known benchmark NRPs which are widely used by researchers in the literature. The proposed HSA obtains good results (and new lower bounds for a few instances) when compared to the current state of the art of metaheuristic algorithms in the recent literature.
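    The improvisation process can be sketched on a continuous test function as follows (an illustration only: the paper’s HSA operates on nurse rosters, and the parameter values here are placeholders, not those tuned in the paper). Each decision variable is drawn from harmony memory with probability HMCR and then pitch-adjusted with probability PAR, or improvised at random; the new harmony replaces the worst member of memory if it is better.

```python
import random

HMS, HMCR, PAR, BW = 10, 0.9, 0.3, 0.05  # memory size, rates, bandwidth

def harmony_search(cost, dim, lo, hi, iters=5000):
    # Harmony memory: HMS randomly initialised candidate solutions.
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(HMS)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < HMCR:             # memory consideration
                x = random.choice(memory)[d]
                if random.random() < PAR:          # pitch adjustment
                    x = min(hi, max(lo, x + random.uniform(-BW, BW)))
            else:                                  # random improvisation
                x = random.uniform(lo, hi)
            new.append(x)
        worst = max(memory, key=cost)
        if cost(new) < cost(worst):                # keep the better harmony
            memory[memory.index(worst)] = new
    return min(memory, key=cost)

best = harmony_search(lambda v: sum(x * x for x in v), dim=5, lo=-5.0, hi=5.0)
print(best)
```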

    Towards ‘Metaheuristics in the Large’

    Get PDF
    There is a pressing need for a higher-level architectural perspective in metaheuristics research. This article proposes a purely functional collection of component signatures as a basis for the scalable and automatic construction of metaheuristics. We claim that this is an important step for scientific progress because: i) it is increasingly accepted that newly proposed metaheuristics should be grounded in terms of well-defined frameworks and components, and standardized descriptions help to distinguish novelty from minor variation; ii) greater reproducibility is needed, particularly to facilitate comparison with the state of the art; iii) interoperable descriptions are a prerequisite for a data model supporting large-scale knowledge discovery across frameworks and problems. A key obstacle is that metaheuristic components suffer from an intrinsic lack of modularity, so we present some design options for dealing with this and use them to provide a roadmap for addressing the above issues.
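    Point i) can be illustrated with a toy sketch (hypothetical names and components, not the article’s actual signatures): once move and acceptance components share standardized signatures, candidate metaheuristics become points in a design space that can be constructed and compared automatically.

```python
import random
from itertools import product
from typing import Callable, List

Bits = List[int]

def flip_one(s: Bits) -> Bits:
    t = s[:]; i = random.randrange(len(t)); t[i] ^= 1
    return t

def swap_two(s: Bits) -> Bits:
    t = s[:]; i, j = random.sample(range(len(t)), 2)
    t[i], t[j] = t[j], t[i]
    return t

moves = {"flip-one": flip_one, "swap-two": swap_two}
accepts: dict[str, Callable[[int, int], bool]] = {
    "improving":   lambda old, new: new >= old,      # never worsen
    "threshold-1": lambda old, new: new >= old - 1,  # tolerate small loss
}
onemax = sum  # objective: number of ones (to be maximised)

# Enumerate the 2 x 2 design space and run each combination on OneMax.
for (mn, move), (an, acc) in product(moves.items(), accepts.items()):
    s = [random.randint(0, 1) for _ in range(32)]
    for _ in range(2000):
        c = move(s)
        if acc(onemax(s), onemax(c)):
            s = c
    print(f"{mn} + {an}: {onemax(s)}/32")
```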

    Tuning Parallel Applications in Parallel

    Get PDF
    Auto-tuning has recently received significant attention from the High Performance Computing community. Most auto-tuning approaches are specialized to work either on specific domains, such as dense linear algebra and stencil computations, or only at certain stages of program execution, such as compile time and runtime. Real scientific applications, however, demand a cohesive environment that can efficiently provide auto-tuning solutions at all stages of application development and deployment. Towards that end, we describe a unified end-to-end approach to auto-tuning scientific applications. Our system, Active Harmony, takes a search-based collaborative approach to auto-tuning. Application programmers, library writers and compilers collaborate to describe and export a set of performance-related tunable parameters to the Active Harmony system. These parameters define a tuning search space. The auto-tuner monitors program performance and suggests adaptation decisions. The decisions are made by a central controller using a parallel search algorithm, which leverages parallel architectures to search across a set of optimization parameter values: different nodes of a parallel system evaluate different configurations at each timestep. Active Harmony supports runtime adaptive code generation and tuning for parameters that require new code (e.g. unroll factors). Effectively, we merge traditional feedback-directed optimization and just-in-time compilation. This feature also enables application developers to write applications once and have the auto-tuner adjust the application behavior automatically when run on new systems. We evaluated our system on multiple large-scale parallel applications and showed that it can improve execution time by up to 46% compared to the original version of the program. Finally, we believe that the success of any auto-tuning research depends on how effectively application developers, domain experts and auto-tuners communicate and work together. To that end, we have developed and released a simple and extensible language that standardizes the parameter-space representation. Using this language, developers and researchers can collaborate to export tunable parameters to tuning frameworks. Relationships (e.g. ordering, dependencies, constraints, ranking) between tunable parameters, as well as search hints, can also be expressed.
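    The parallel-search idea generalises readily; the sketch below (generic, and not Active Harmony’s actual API or parameter-description language) shows a central controller giving each worker a different candidate configuration per timestep and keeping the best measurement:

```python
import random
from concurrent.futures import ProcessPoolExecutor

# Hypothetical tunable-parameter space in the spirit of the abstract.
SPACE = {"unroll": [1, 2, 4, 8], "tile": [16, 32, 64], "threads": [1, 2, 4]}

def measure(cfg):
    # Stand-in for building/running the application with `cfg` and
    # returning its execution time (lower is better).
    return 8 / cfg["unroll"] + 64 / cfg["tile"] + 4 / cfg["threads"]

def sample():
    return {k: random.choice(v) for k, v in SPACE.items()}

if __name__ == "__main__":
    best_cfg, best_time = None, float("inf")
    with ProcessPoolExecutor(max_workers=4) as pool:
        for _ in range(10):                       # 10 tuning timesteps
            cands = [sample() for _ in range(4)]  # one candidate per worker
            for cfg, t in zip(cands, pool.map(measure, cands)):
                if t < best_time:
                    best_cfg, best_time = cfg, t
    print(best_cfg, best_time)
```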

    An analysis of heuristic subsequences for offline hyper-heuristic learning

    Get PDF
    This is the final version, available on open access from Springer Verlag via the DOI in this record. A selection hyper-heuristic is used to minimise the objective functions of a well-known set of benchmark problems. The resulting sequences of low-level heuristic selections and objective function values are used to generate a database of heuristic selections. The sequences in the database are broken down into subsequences, and the mathematical concept of a logarithmic return is used to discriminate between “effective” subsequences, which tend to decrease the objective value, and “disruptive” subsequences, which tend to increase it. These subsequences are then employed in a sequence-based hyper-heuristic and evaluated on an unseen set of benchmark problems. Empirical results demonstrate that the “effective” subsequences perform significantly better than the “disruptive” subsequences across a number of problem domains with 99% confidence. The identification of subsequences of heuristic selections that can be shown to be effective across a number of problems or problem domains could have important implications for the design of future sequence-based hyper-heuristics.
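    The logarithmic-return idea can be stated compactly: for the objective values f_i and f_{i+w} spanning a subsequence of w heuristic selections, the score is ln(f_{i+w} / f_i), which is negative when the subsequence decreased the objective. A minimal sketch (the window length and labelling rule here are illustrative assumptions; the paper’s exact procedure may differ):

```python
import math

def log_return(f_start: float, f_end: float) -> float:
    return math.log(f_end / f_start)  # assumes positive objective values

def classify(objectives, window=3):
    """Label every length-`window` subsequence of an optimisation run."""
    labels = []
    for i in range(len(objectives) - window):
        r = log_return(objectives[i], objectives[i + window])
        labels.append("effective" if r < 0 else "disruptive")
    return labels

run = [100.0, 96.0, 97.5, 90.0, 91.0, 85.0]  # toy objective-value trace
print(classify(run))  # ['effective', 'effective', 'effective']
```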

    A Survey on Compiler Autotuning using Machine Learning

    Full text link
    Since the mid-1990s, researchers have been trying to use machine-learning-based approaches to solve a number of different compiler optimization problems. These techniques primarily enhance the quality of the obtained results and, more importantly, make it feasible to tackle two main compiler optimization problems: optimization selection (choosing which optimizations to apply) and phase-ordering (choosing the order in which to apply them). The compiler optimization space continues to grow due to the advancement of applications, the increasing number of compiler optimizations, and new target architectures. Generic optimization passes in compilers cannot fully leverage newly introduced optimizations and, therefore, cannot keep up with the pace of increasing options. This survey summarizes and classifies the recent advances in using machine learning for compiler optimization, particularly on the two major problems of (1) selecting the best optimizations and (2) the phase-ordering of optimizations. The survey highlights the approaches taken so far, the obtained results, the fine-grained classification among different approaches and, finally, the influential papers of the field.
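    The phase-ordering problem is easy to state concretely: find the permutation of optimization passes that minimises measured runtime. The toy sketch below (a mock cost model, not a real compiler) uses blind random search over orderings; the ML approaches the survey covers replace such blind search with learned predictors of good orders:

```python
import random

PASSES = ["inline", "unroll", "vectorize", "dce"]

def mock_runtime(order):
    # Stand-in for compile + benchmark: rewards inlining before
    # unrolling and dead-code elimination last, so order matters.
    t = 10.0
    if order.index("inline") < order.index("unroll"):
        t -= 2.0
    if order[-1] == "dce":
        t -= 1.5
    return t + random.uniform(0.0, 0.1)  # measurement noise

best = min((random.sample(PASSES, len(PASSES)) for _ in range(50)),
           key=mock_runtime)
print(best)
```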

    Evaluating an automated procedure of machine learning parameter tuning for software effort estimation

    Get PDF
    Software effort estimation requires accurate prediction models. Machine learning algorithms have been used to create more accurate estimation models. However, these algorithms are sensitive to factors such as the choice of hyper-parameters. To reduce this sensitivity, automated approaches for hyper-parameter tuning have recently been investigated. There is a need for further research on the effectiveness of such approaches in the context of software effort estimation. These evaluations could help understand which hyper-parameter settings can be adjusted to improve model accuracy, and in which specific contexts tuning can benefit model performance. The goal of this work is to develop an automated procedure for machine learning hyper-parameter tuning in the context of software effort estimation. The automated procedure builds and evaluates software effort estimation models to determine the most accurate evaluation schemes. The methodology followed in this work consists of first performing a systematic mapping study to characterize existing hyper-parameter tuning approaches in software effort estimation, then developing the procedure to automate the evaluation of hyper-parameter tuning, and finally conducting controlled quasi-experiments to evaluate the automated procedure. From the systematic mapping study we discovered that the effort estimation literature has favored the use of grid search. The results obtained in our quasi-experiments demonstrated that fast, less exhaustive tuners are viable replacements for grid search: randomly evaluating 60 hyper-parameter settings can be as good as grid search, and multiple state-of-the-art tuners were more effective than this random search in only 6% of the evaluated dataset-model combinations. We endorse random search, genetic algorithms, FLASH, differential evolution, and tabu and harmony search as effective tuners.
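    The 60-evaluation figure has a simple statistical reading: 60 independent uniform samples include at least one point from the best 5% of a search space with probability 1 - 0.95^60, about 0.95. A minimal sketch comparing the two tuners on a stand-in objective (the parameter space and accuracy function below are illustrative assumptions, not the paper’s experimental setup):

```python
import random
from itertools import product

def accuracy(cfg):
    # Stand-in for training/validating an effort-estimation model.
    lr, depth = cfg
    return 1.0 - (lr - 0.1) ** 2 - 0.001 * (depth - 6) ** 2

grid = list(product([0.01, 0.05, 0.1, 0.5], range(2, 12, 2)))  # 20 points
rand = [(random.uniform(0.01, 0.5), random.randint(2, 12)) for _ in range(60)]

print("grid search  :", max(map(accuracy, grid)))
print("random search:", max(map(accuracy, rand)))
```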

    Feature Grouping-based Feature Selection

    Get PDF