
    A New Initialisation Method for Examination Timetabling Heuristics

    This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record. Timetabling problems are widespread, but are particularly prevalent in the educational domain. When sufficiently large, these are often only effectively tackled by timetabling meta-heuristics. The effectiveness of these, in turn, often depends largely on their initialisation protocols. A number of different initialisation approaches are used in the literature for starting examination timetabling heuristics. Here we present a new iterative initialisation algorithm which attempts to generate high-quality, legal solutions to feed into a heuristic optimiser. The proposed approach is empirically verified on the ITC 2007 and Yeditepe benchmark sets. It is compared to popular initialisation approaches commonly employed in exam timetabling heuristics: the largest degree, largest weighted degree, largest enrollment, and saturation degree graph-colouring approaches, and random schedule allocation. The effectiveness of these approaches is also compared via incorporation in an exemplar evolutionary algorithm. The results show that the proposed method is capable of producing feasible solutions for all instances, with better quality and diversity compared to the alternative methods. It also leads to improved optimiser performance. Saudi Arabia Cultural Bureau.
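    The saturation degree graph-colouring heuristic named above is one of the baseline initialisers compared against. Below is a minimal sketch of that baseline (not the authors' proposed iterative method); the exam/enrolment data structures and the slot count are illustrative assumptions.

```python
# Sketch of a saturation-degree (DSatur-style) constructive initialiser for exam
# timetabling, one of the baselines named in the abstract. The data structures
# (exam list, student enrolments, slot count) are assumptions for illustration.
from collections import defaultdict

def saturation_degree_init(exams, enrolments, n_slots):
    """exams: iterable of exam ids; enrolments: dict student -> set of exam ids."""
    # Conflict graph: two exams clash if any student sits both.
    conflicts = defaultdict(set)
    for taken in enrolments.values():
        for e in taken:
            conflicts[e] |= taken - {e}

    assignment, unscheduled = {}, set(exams)
    while unscheduled:
        # Saturation degree: distinct slots already used by an exam's neighbours.
        def saturation(e):
            return len({assignment[n] for n in conflicts[e] if n in assignment})
        # Schedule the most constrained exam first (ties broken by conflict degree).
        exam = max(unscheduled, key=lambda e: (saturation(e), len(conflicts[e])))
        used = {assignment[n] for n in conflicts[exam] if n in assignment}
        free = [s for s in range(n_slots) if s not in used]
        if not free:
            return None          # no clash-free slot: a restart or repair is needed
        assignment[exam] = free[0]
        unscheduled.remove(exam)
    return assignment
```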

    Automated and Surrogate Multi-Resolution Approaches in Genetic Algorithms

    This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record. Recent work on multi-resolution optimisation (varying the fidelity of a design during a search) has developed approaches for automated resolution change depending on the population characteristics. This used the standard deviation of the population, or the marginal probability density estimation per variable, to automatically determine the resolution to apply to a design in the next generation. Here we build on this methodology in a number of new directions. We investigate the use of a complete estimated probability density function for resolution determination, enabling the dependencies between variables to be represented. We also explore the use of the multi-resolution transformation to assign a surrogate fitness to population members, but without modifying their location, and discuss the fitness landscape implications of this approach. Results are presented on a range of popular uni-objective continuous test functions. These demonstrate the performance improvements that can be gained using an automated multi-resolution approach, and surprisingly indicate that the simplest resolution indicator is often the most effective, but that relative performance is often problem dependent. We also observe how population duplicates grow in multi-resolution approaches, and discuss the implications of this when comparing algorithms (and efficiently implementing them). Shaqra University, Saudi Arabia.
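    One way to read the "standard deviation" resolution indicator mentioned above: the more converged the population, the finer the grid to which each design variable is snapped. The sketch below illustrates this for continuous variables; the specific mapping from spread to grid step (the largest power of ten not exceeding the per-variable spread) is an assumption, not the paper's exact rule.

```python
# Sketch of a standard-deviation-driven multi-resolution transformation for a GA
# population. The spread-to-step mapping is an assumed, illustrative choice.
import numpy as np

def snap_to_resolution(population, min_step=1e-6):
    """Round each design variable to a grid whose step shrinks as the population
    converges, as measured by the per-variable standard deviation."""
    pop = np.asarray(population, dtype=float)
    spread = pop.std(axis=0)
    # Grid step = largest power of ten not exceeding the spread (floored at min_step).
    step = np.maximum(10.0 ** np.floor(np.log10(np.maximum(spread, min_step))), min_step)
    return np.round(pop / step) * step
```

    The snapped individuals can either replace the originals (a true resolution change) or be evaluated only to provide a surrogate fitness for the unmodified individuals, the variant explored in the abstract.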

    The Bayesian Decision Tree Technique with a Sweeping Strategy

    The uncertainty of classification outcomes is of crucial importance for many safety-critical applications including, for example, medical diagnostics. In such applications the uncertainty of classification can be reliably estimated within a Bayesian model averaging technique that allows the use of prior information. Decision Tree (DT) classification models used within such a technique give experts additional information by making the classification scheme observable. The use of the Markov Chain Monte Carlo (MCMC) methodology of stochastic sampling makes the Bayesian DT technique feasible to perform. However, in practice, the MCMC technique may become stuck in a particular DT which is far away from a region with a maximal posterior. Sampling such DTs causes bias in the posterior estimates, and as a result the evaluation of classification uncertainty may be incorrect. In particular cases, the negative effect of such sampling may be reduced by giving additional prior information on the shape of DTs. In this paper we describe a new approach based on sweeping the DTs, without additional priors on the preferred shape of DTs. The performances of Bayesian DT techniques with the standard and sweeping strategies are compared on synthetic data as well as on real datasets. Quantitatively evaluating the uncertainty in terms of the entropy of class posterior probabilities, we found that the sweeping strategy is superior to the standard strategy.
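    The entropy used above to quantify classification uncertainty can be computed from the model-averaged class posteriors of the sampled trees. A minimal sketch follows; the array layout of the per-tree predictions is an assumed convention.

```python
# Sketch: entropy of the Bayesian-model-averaged class posterior, the uncertainty
# measure referred to in the abstract. The (points, trees, classes) layout of the
# per-tree predictions is an assumption.
import numpy as np

def predictive_entropy(tree_class_probs):
    """tree_class_probs: array (n_points, n_sampled_trees, n_classes) of class
    probabilities predicted by each MCMC-sampled decision tree."""
    p = np.asarray(tree_class_probs, dtype=float).mean(axis=1)   # model average
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)                          # higher = less certain
```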

    Shape optimisation using Computational Fluid Dynamics and Evolutionary Algorithms

    This is the author accepted manuscript. Optimisation of designs using Computational Fluid Dynamics (CFD) is frequently performed across many fields of research, such as the optimisation of an aircraft wing to reduce drag, or to increase the efficiency of a heat exchanger. General optimisation strategies involve altering design variables with a view to improving the appropriate objective function(s). Often the objective function(s) are non-linear and multi-modal, and thus polynomial-time algorithms for solving such problems may not be available. In such cases, applying Evolutionary Algorithms (EAs, a class of stochastic global optimisation techniques inspired by natural evolution) may locate good solutions within a practical time frame. The traditional CFD design optimisation process is often based on a ‘trial-and-error’ type approach. Starting from an initial geometry, Computer Aided Design changes are introduced manually based on results from a limited number of design iterations and CFD analyses. The process is usually complex, time-consuming and relies heavily on engineering experience, thus making the overall design procedure inconsistent, i.e. different ‘best’ solutions are obtained from different designers. [...] This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant (reference number: EP/M017915/1) for the University of Exeter’s College of Engineering, Mathematics, and Physical Sciences.
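    As a concrete illustration of the EA-driven loop described above, the sketch below wraps a placeholder objective standing in for a CFD evaluation; the encoding, operators and parameter values are assumptions for illustration, not the project's actual setup.

```python
# Sketch of an evolutionary loop around a (placeholder) CFD objective.
import numpy as np

rng = np.random.default_rng(0)

def run_cfd(design):
    """Placeholder for a CFD evaluation of a parameterised geometry (e.g. the drag
    of a wing section). A real run would mesh the shape and call the flow solver."""
    return float(np.sum(design ** 2))            # stand-in objective to minimise

def evolve(n_vars=5, pop_size=20, generations=50, sigma=0.1, bounds=(-1.0, 1.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, n_vars))
    fitness = np.array([run_cfd(x) for x in pop])
    for _ in range(generations):
        # Binary tournament selection, Gaussian mutation, one-to-one replacement.
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        winners = np.where(fitness[idx[:, 0]] < fitness[idx[:, 1]], idx[:, 0], idx[:, 1])
        children = np.clip(pop[winners] + rng.normal(0.0, sigma, (pop_size, n_vars)), lo, hi)
        child_fitness = np.array([run_cfd(x) for x in children])
        improved = child_fitness < fitness
        pop[improved], fitness[improved] = children[improved], child_fitness[improved]
    best = int(np.argmin(fitness))
    return pop[best], float(fitness[best])
```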

    Automatic shape optimisation of the turbine-99 draft tube

    This is the author accepted manuscript. INTRODUCTION: The performance of a hydraulic reaction turbine is significantly affected by the efficiency of its draft tube. Factors which impede the tube’s performance include the geometrical shape (profile), and the velocity distribution at the inflow. So far, the design of draft tubes has been improved through experimental observations resulting in empirical formulae or ‘rules of thumb’. The use of Computational Fluid Dynamics (CFD) in this design process is only a recent addition, owing to its robustness and cost-effectiveness given the increasing availability of computational power. The flexibility of CFD, allowing for comprehensive analysis of complex profiles, is especially appealing for optimising the design. Hence, there is a need to develop an accurate and reliable CFD approach together with an efficient optimisation strategy. Flows through a turbine draft tube are characterised as turbulent with a range of flow phenomena, e.g. unsteadiness, flow separation, and swirling flow. With the aim of improving the techniques for analysing such flows, the turbomachinery community has proposed a standard test case in the form of the Turbine-99 draft tube [1]. Along with this standard geometry, and with the aim of simulating the swirling inflow, an additional runner proposed by Cervantes [2] is included in the present work. The draft tube geometry is shown in Fig. 1. The purpose of this abstract is to outline the framework developed to achieve the automated shape optimisation of this draft tube. This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant (reference number: EP/M017915/1) for the University of Exeter’s College of Engineering, Mathematics, and Physical Sciences.

    A Toolkit for Generating Scalable Stochastic Multiobjective Test Problems

    Real-world optimization problems typically include uncertainties over various aspects of the problem formulation. Some existing algorithms are designed to cope with stochastic multiobjective optimization problems, but in order to benchmark them, a proper framework still needs to be established. This paper presents a novel toolkit that generates scalable, stochastic, multiobjective optimization problems. A stochastic problem is generated by transforming the objective vectors of a given deterministic test problem into random vectors. All random objective vectors are bounded by the feasible objective space defined by the deterministic problem. Therefore, the global solution for the deterministic problem can also serve as a reference for the stochastic problem. A simple parametric distribution for the random objective vector is defined in a radial coordinate system, allowing for direct control over the dual challenges of convergence towards the true Pareto front and diversity across the front. An example of a stochastic test problem generated by the toolkit is provided.
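    To make the radial construction concrete, the sketch below draws noisy bi-objective vectors around a deterministic point, with radial noise degrading convergence and angular noise spreading draws along the front. The particular distribution and the ideal-point parameterisation are illustrative assumptions, not the toolkit's actual definitions.

```python
# Sketch of turning a deterministic bi-objective vector into random draws using a
# radial coordinate system centred on an ideal point (minimisation assumed).
import numpy as np

rng = np.random.default_rng(1)

def stochastic_objectives(f_det, ideal, radial_scale=0.1, angular_scale=0.05, n_draws=100):
    f_det, ideal = np.asarray(f_det, float), np.asarray(ideal, float)
    d = f_det - ideal
    r, theta = np.hypot(d[0], d[1]), np.arctan2(d[1], d[0])
    # Radial noise only ever increases the radius, so a draw never improves on the
    # deterministic point along its own ray (controls the convergence challenge).
    r_draws = r * (1.0 + radial_scale * np.abs(rng.standard_normal(n_draws)))
    # Angular noise shifts draws along the front (controls the diversity challenge).
    t_draws = theta + angular_scale * rng.standard_normal(n_draws)
    return ideal + np.stack([r_draws * np.cos(t_draws), r_draws * np.sin(t_draws)], axis=1)
```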

    Trading-off Data Fit and Complexity in Training Gaussian Processes with Multiple Kernels

    This is the author accepted manuscript. The final version is available from Springer Verlag via the DOI in this record. LOD 2019: Fifth International Conference on Machine Learning, Optimization, and Data Science, 10-13 September 2019, Siena, Italy. Gaussian processes (GPs) belong to a class of probabilistic techniques that have been successfully used in different domains of machine learning and optimization. They are popular because they provide uncertainties in predictions, which sets them apart from other modelling methods providing only point predictions. The uncertainty is particularly useful for decision making, as we can gauge how reliable a prediction is. One of the fundamental challenges in using GPs is that the efficacy of a model is conferred by selecting an appropriate kernel and the associated hyperparameter values for a given problem. Furthermore, the training of GPs, that is, optimizing the hyperparameters using a data set, is traditionally performed using a cost function that is a weighted sum of data fit and model complexity, and the underlying trade-off is completely ignored. Addressing these challenges and shortcomings, in this article we propose the following automated training scheme. Firstly, we use a weighted product of multiple kernels, with a view to relieving users of the need to choose an appropriate kernel for the problem at hand without any domain-specific knowledge. Secondly, for the first time, we modify GP training by using a multi-objective optimizer to tune the hyperparameters and weights of multiple kernels, and extract an approximation of the complete trade-off front between data fit and model complexity. We then propose a novel solution selection strategy based on mean standardized log loss (MSLL) to select a solution from the estimated trade-off front and finalise training of a GP model. The results on three data sets, and comparison with the standard approach, clearly show the potential benefit of the proposed approach of using multi-objective optimization with multiple kernels. Natural Environment Research Council (NERC).
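    The "weighted product of multiple kernels" can be written as K(x, x') = Π_i k_i(x, x')^{w_i}, with the weights w_i tuned alongside the hyperparameters; in the standard GP log marginal likelihood the data-fit and complexity terms being traded off are -½ yᵀK⁻¹y and -½ log|K| respectively. The sketch below is a minimal plain-NumPy form; the choice of component kernels (RBF and Matérn 3/2) is an assumption for illustration.

```python
# Sketch of a weighted product of kernels, K = prod_i k_i ** w_i, the combination
# described in the abstract. Component kernels and implementation are illustrative.
import numpy as np

def _sqdist(x1, x2):
    return np.sum((x1[:, None, :] - x2[None, :, :]) ** 2, axis=-1)

def rbf(x1, x2, lengthscale):
    return np.exp(-0.5 * _sqdist(x1, x2) / lengthscale ** 2)

def matern32(x1, x2, lengthscale):
    a = np.sqrt(3.0) * np.sqrt(_sqdist(x1, x2)) / lengthscale
    return (1.0 + a) * np.exp(-a)

def weighted_product_kernel(x1, x2, weights, lengthscales):
    """K(x, x') = prod_i k_i(x, x')^{w_i}; the weights and lengthscales are the
    quantities a (multi-objective) optimiser would tune during GP training."""
    parts = (rbf(x1, x2, lengthscales[0]), matern32(x1, x2, lengthscales[1]))
    K = np.ones((x1.shape[0], x2.shape[0]))
    for k, w in zip(parts, weights):
        K *= k ** w
    return K
```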

    An Evolutionary Approach to Active Robust Multiobjective Optimisation

    An Active Robust Optimisation Problem (AROP) aims at finding robust adaptable solutions, i.e. solutions that actively gain robustness to environmental changes through adaptation. Existing AROP studies have considered only a single performance objective. This study extends the Active Robust Optimisation methodology to deal with problems with more than one objective. Once multiple objectives are considered, the optimal performance for every uncertain parameter setting is a set of configurations, offering different trade-offs between the objectives. To evaluate and compare solutions to this type of problem, we suggest a robustness indicator that uses a scalarising function combining the main aims of multi-objective optimisation: proximity, diversity and pertinence. The Active Robust Multi-objective Optimisation Problem is formulated in this study, and an evolutionary algorithm that uses the hypervolume measure as a scalarising function is suggested in order to solve it. Proof-of-concept results are demonstrated using a simplified gearbox optimisation problem for an uncertain load demand.
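    The hypervolume-based scalarisation mentioned above reduces a set of trade-off configurations to a single figure of merit. A minimal bi-objective sketch is given below; the reference-point and minimisation conventions are assumptions.

```python
# Sketch of the 2-D hypervolume indicator used as a scalarising function over a
# configuration set (minimisation assumed; larger hypervolume is better).
def hypervolume_2d(points, reference):
    # Keep only points that strictly dominate the reference point.
    pts = sorted(p for p in points if p[0] < reference[0] and p[1] < reference[1])
    hv, prev_f2 = 0.0, reference[1]
    for f1, f2 in pts:                  # sweep in order of increasing f1
        if f2 < prev_f2:                # skip points dominated by an earlier one
            hv += (reference[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Example: hypervolume_2d([(1, 3), (2, 2), (3, 1)], reference=(4, 4)) == 6.0
```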