
    Evolutionary approaches to optimisation in rough machining

    This thesis concerns the use of Evolutionary Computation to optimise the sequence and selection of tools and machining parameters in rough milling applications. These processes are not automated in current Computer-Aided Manufacturing (CAM) software and this work, undertaken in collaboration with an industrial partner, aims to address this. Related research has mainly approached tool sequence optimisation using only a single tool type, and machining parameter optimisation of a single-tool sequence. In a real-world industrial setting, tools with different geometrical profiles are commonly used in combination on rough machining tasks in order to produce components with complex sculptured surfaces. This work introduces a new representation scheme and search operators to support the use of the three most commonly used tool types: end mill, ball nose and toroidal. Using these operators, single-objective metaheuristic algorithms are shown to find near-optimal solutions, while surveying only a small number of tool sequences. For the first time, a multi-objective approach is taken to tool sequence optimisation. The process of ‘multi-objectivisation’ is shown to offer two benefits: escaping local optima on deceptive multimodal search spaces and providing a selection of tool sequence alternatives to a machinist. The multi-objective approach is also used to produce a varied set of near-Pareto optimal solutions, offering different trade-offs between total machining time and total tooling costs, simultaneously optimising tool sequences and the cutting speeds of individual tools. A challenge for using computationally expensive CAM software, important for real-world machining, is the time cost of evaluations. An asynchronous parallel evolutionary optimisation system is presented that can provide a significant speed-up, even in the presence of heterogeneous evaluation times produced by variable-length tool sequences. This system uses a distributed network of processors that could be easily and inexpensively implemented on existing commercial hardware, making it accessible even to small workshops.
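
    As an illustration of the kind of asynchronous parallel evaluation loop described above, the following is a minimal sketch (not the thesis implementation): the evaluation function, tool library and sequence encoding are hypothetical placeholders, and the key idea shown is that each worker is refilled as soon as its evaluation finishes, so heterogeneous evaluation times from variable-length tool sequences do not leave processors idle.

```python
import random
from concurrent.futures import ProcessPoolExecutor, FIRST_COMPLETED, wait

# Hypothetical tool library; real tools would carry geometry and cutting parameters.
TOOLS = ["end_mill_10", "ball_nose_6", "toroidal_12"]

def evaluate(sequence):
    """Stand-in for an expensive CAM evaluation; returns a machining-time-like cost."""
    return sum(len(t) for t in sequence) + random.random()

def mutate(sequence):
    """Grow, shrink or swap one tool; variable-length sequences give heterogeneous run times."""
    seq = list(sequence)
    op = random.choice(["add", "drop", "swap"])
    if op == "add" or len(seq) < 2:
        seq.insert(random.randrange(len(seq) + 1), random.choice(TOOLS))
    elif op == "drop":
        seq.pop(random.randrange(len(seq)))
    else:
        seq[random.randrange(len(seq))] = random.choice(TOOLS)
    return seq

def async_optimise(workers=4, budget=40):
    population = []  # list of (cost, sequence), best first
    with ProcessPoolExecutor(max_workers=workers) as pool:
        initial = [[random.choice(TOOLS)] for _ in range(workers)]
        pending = {pool.submit(evaluate, s): s for s in initial}
        submitted = len(pending)
        while pending:
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                seq = pending.pop(fut)
                population.append((fut.result(), seq))
                population.sort(key=lambda p: p[0])
                del population[10:]                    # keep only the best few
                if submitted < budget:                 # refill the idle worker at once
                    child = mutate(random.choice(population)[1])
                    pending[pool.submit(evaluate, child)] = child
                    submitted += 1
    return population[0]

if __name__ == "__main__":
    print(async_optimise())
```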

    Automated optimization of reconfigurable designs

    Currently, the optimization of reconfigurable design parameters is typically done manually and often involves a substantial amount of effort. The main focus of this thesis is to reduce this effort, so that the designer can focus on the implementation and design correctness, leaving the tools to carry out optimization. To address this, this thesis makes three main contributions. First, we present an initial investigation of reconfigurable design optimization with the Machine Learning Optimizer (MLO) algorithm. The algorithm is based on surrogate model technology and particle swarm optimization. By using surrogate models, the long hardware generation time is mitigated and automatic optimization is possible. For the first time, to the best of our knowledge, we show how those models can both predict when hardware generation will fail and how well the design will perform. Second, we introduce a new algorithm called Automatic Reconfigurable Design Efficient Global Optimization (ARDEGO), which is based on the Efficient Global Optimization (EGO) algorithm. Compared to MLO, it supports parallelism and uses a simpler optimization loop. As the ARDEGO algorithm uses multiple optimization compute nodes, its optimization speed is greatly improved relative to MLO. Hardware generation time is random in nature: two similar configurations can take vastly different amounts of time to generate, which makes parallelization complicated. The novelty is the efficient use of the optimization compute nodes, achieved by extending the asynchronous parallel EGO algorithm to constrained problems. Third, we show how the results of design synthesis and benchmarking can be reused when a design is ported to a different platform or when its code is revised. This is achieved through the new Auto-Transfer algorithm. A methodology to make the best use of available synthesis and benchmarking results is a novel contribution to the design automation of reconfigurable systems.
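
    The following is a hedged sketch of the surrogate-model loop on which EGO-style optimizers such as MLO and ARDEGO build (not the thesis code): a Gaussian process is fitted to the designs evaluated so far, and the next design to synthesise is chosen by expected improvement. The hardware cost function and parameter bounds are hypothetical stand-ins for slow bitstream generation and benchmarking.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def hardware_cost(x):
    """Placeholder for an expensive synthesis + benchmarking run of one design point."""
    return float(np.sin(3 * x) + 0.5 * x)

bounds = (0.0, 5.0)                                   # hypothetical design-parameter range
X = np.random.uniform(*bounds, size=(5, 1))           # initial designs
y = np.array([hardware_cost(x[0]) for x in X])

for _ in range(15):                                   # sequential EGO-style iterations
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = np.random.uniform(*bounds, size=(256, 1))  # random candidate designs
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (best - mu) / sigma
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
        ei[sigma == 0.0] = 0.0
    x_next = cand[np.argmax(ei)]                      # most promising design to synthesise next
    X = np.vstack([X, x_next])
    y = np.append(y, hardware_cost(x_next[0]))

print("best design parameter:", X[np.argmin(y)], "cost:", y.min())
```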

    A flexible and efficient multi-purpose optimization library in Python

    Bakurov, I., Buzzelli, M., Castelli, M., Vanneschi, L., & Schettini, R. (2021). General purpose optimization library (GPOL): A flexible and efficient multi-purpose optimization library in Python. Applied Sciences (Switzerland), 11(11), 1-34, Article 4774. https://doi.org/10.3390/app11114774
    Several interesting libraries for optimization have been proposed. Some focus on individual optimization algorithms, or limited sets of them, and others focus on limited sets of problems. Frequently, the implementation of one of them does not precisely follow the formal definition, and they are difficult to personalize and compare. This makes it difficult to perform comparative studies and propose novel approaches. In this paper, we propose to solve these issues with the General Purpose Optimization Library (GPOL): a flexible and efficient multi-purpose optimization library that covers a wide range of stochastic iterative search algorithms, and whose flexible and modular implementation allows for solving many different problem types from the fields of continuous and combinatorial optimization and supervised machine learning. Moreover, the library supports full-batch and mini-batch learning and allows carrying out computations on a CPU or GPU. The package is distributed under an MIT license. Source code, installation instructions, demos and tutorials are publicly available on our code hosting platform (the reference is provided in the Introduction).
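
    To illustrate the problem/algorithm separation that such multi-purpose libraries rely on, the sketch below defines a minimal problem interface and one interchangeable search strategy. The class and method names are hypothetical and are not the GPOL API.

```python
import random

class Problem:
    """Minimal continuous minimisation problem: a sphere function in d dimensions."""
    def __init__(self, dim=5):
        self.dim = dim
    def random_solution(self):
        return [random.uniform(-5, 5) for _ in range(self.dim)]
    def neighbour(self, sol, step=0.1):
        # Small random perturbation of one coordinate.
        i = random.randrange(self.dim)
        out = list(sol)
        out[i] += random.gauss(0, step)
        return out
    def evaluate(self, sol):
        return sum(v * v for v in sol)

def hill_climb(problem, iterations=10_000):
    """One interchangeable search strategy; a GA, PSO or SA would share the same interface."""
    best = problem.random_solution()
    best_fit = problem.evaluate(best)
    for _ in range(iterations):
        cand = problem.neighbour(best)
        fit = problem.evaluate(cand)
        if fit < best_fit:
            best, best_fit = cand, fit
    return best, best_fit

print(hill_climb(Problem())[1])
```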

    An evolutionary non-linear great deluge approach for solving course timetabling problems

    The aim of this paper is to extend our non-linear great deluge algorithm into an evolutionary approach by incorporating a population and a mutation operator to solve university course timetabling problems. This approach might be seen as a variation of memetic algorithms. Evolutionary computation approaches have grown in popularity and become an important technique for solving complex combinatorial optimisation problems. The proposed approach is an extension of a non-linear great deluge algorithm in which evolutionary operators are incorporated. First, we generate a population of feasible solutions using a tailored process that incorporates heuristics for graph colouring and assignment problems. The initialisation process is capable of producing feasible solutions even for the largest and most constrained problem instances. Then, the population of feasible timetables is subject to a steady-state evolutionary process that combines mutation and stochastic local search. We conducted experiments to evaluate the performance of the proposed algorithm and, in particular, the contribution of the evolutionary operators. The results showed the effectiveness of the hybridisation between non-linear great deluge and evolutionary operators in solving university course timetabling problems.
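
    A minimal sketch of the acceptance rule at the core of the approach is given below: a great deluge search whose water level decays non-linearly towards the best penalty found. The penalty function, move operator and decay schedule are illustrative placeholders, not the exact formulation used in the paper.

```python
import random

def soft_constraint_penalty(timetable):
    """Placeholder: count soft-constraint violations of a (feasible) candidate timetable."""
    return sum(timetable)            # hypothetical integer encoding: lower is better

def perturb(timetable):
    """Placeholder mutation / local-search move assumed to preserve feasibility."""
    out = list(timetable)
    out[random.randrange(len(out))] = random.randint(0, 5)
    return out

def nonlinear_great_deluge(initial, iterations=20_000, decay=0.0005):
    current = initial
    cur_pen = soft_constraint_penalty(current)
    best, best_pen = current, cur_pen
    level = cur_pen                              # water level starts at the initial penalty
    for _ in range(iterations):
        cand = perturb(current)
        pen = soft_constraint_penalty(cand)
        if pen <= cur_pen or pen <= level:       # accept improving or below-level moves
            current, cur_pen = cand, pen
            if pen < best_pen:
                best, best_pen = cand, pen
        # Non-linear (geometric) decay of the water level towards the best penalty found.
        level = best_pen + (level - best_pen) * (1.0 - decay)
    return best, best_pen

print(nonlinear_great_deluge([random.randint(0, 5) for _ in range(50)]))
```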

    Treasure Hunt: a framework for cooperative, distributed parallel optimization

    Advisor: Prof. Dr. Daniel Weingaertner. Co-advisor: Prof. Dr. Myriam Regattieri Delgado. Doctoral thesis - Universidade Federal do Paraná, Setor de Ciências Exatas, Graduate Program in Informatics. Defence: Curitiba, 27/05/2019. Includes references: p. 18-20. Area of concentration: Computer Science. Abstract: This work proposes a multilevel framework called Treasure Hunt, which is capable of distributing independent search algorithms to a large number of processing nodes. Aiming to obtain joint convergences between working nodes, Treasure Hunt proposes a driving mechanism that smoothly controls the cooperation between the multiple independent Treasure Hunt instances. The tree topology proposed by Treasure Hunt ensures quick propagation of information, while providing simultaneous explorations (by parents) and exploitations (by children), on several levels of granularity, regardless of the number of nodes in the tree. Treasure Hunt has good fault tolerance and is partially prepared for full fault tolerance.
As part of the methods developed during this work, an automated Iterative Partitioning method is proposed to control the balance between exploration and exploitation as the search progresses. A Convergence Stabilization Modeling method, operating in online mode, is also proposed, aiming to find stopping points with a good cost/benefit trade-off for the optimization algorithms running within the Treasure Hunt instances. Experiments on classic, random and competition benchmarks of various sizes and complexities, using the search algorithms PSO, DE and CCPSO2, show that Treasure Hunt boosts the inherent characteristics of these search algorithms. Treasure Hunt makes poorly performing algorithms comparable to good ones, and allows well-performing algorithms to extend their limits to larger problems. Experiments distributing Treasure Hunt instances in a cooperative network of up to 160 processes show the robust scaling of the framework, presenting improved results even when a fixed wall-clock time is imposed on all instances. Results show that the sampling mechanism provided by Treasure Hunt, allied to the increased cooperation between multiple evolving populations, reduces the need for large population sizes and complex search algorithms. This is especially important for real-world problems with time-consuming fitness functions. Keywords: Artificial intelligence. Optimization methods. Distributed algorithms. Convergence modeling. High dimensionality.
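
A highly simplified sketch of the exploration/exploitation split along a tree of search instances is shown below, in the spirit of Treasure Hunt but not the framework itself: each node runs a hypothetical local search on a progressively narrower region handed down by its parent.

```python
import random

def sphere(x):
    """Toy objective standing in for an expensive fitness function."""
    return sum(v * v for v in x)

def local_search(centre, radius, evals=200):
    """Random sampling around a centre point; a stand-in for PSO/DE running in one instance."""
    best, best_fit = None, float("inf")
    for _ in range(evals):
        cand = [c + random.uniform(-radius, radius) for c in centre]
        fit = sphere(cand)
        if fit < best_fit:
            best, best_fit = cand, fit
    return best, best_fit

def treasure_hunt_node(centre, radius, depth, branching=2):
    # Parent explores the wide region, then hands a narrower region around its best
    # point to each child, so exploitation deepens with the tree level.
    best, best_fit = local_search(centre, radius)
    if depth == 0:
        return best, best_fit
    for _ in range(branching):
        child_best, child_fit = treasure_hunt_node(best, radius * 0.3, depth - 1, branching)
        if child_fit < best_fit:
            best, best_fit = child_best, child_fit
    return best, best_fit

start = [random.uniform(-100, 100) for _ in range(10)]
print(treasure_hunt_node(start, radius=100.0, depth=3)[1])
```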

    Machine Learning Approaches to Predict Learning Outcomes in Massive Open Online Courses

    With the rapid advancements in technology, Massive Open Online Courses (MOOCs) have become the most popular form of online educational delivery, largely due to the removal of geographical and financial barriers for participants. A large number of learners globally enrol in such courses. Despite this flexible accessibility, results indicate that the completion rate is quite low. Educational Data Mining and Learning Analytics are emerging fields of research that aim to enhance the delivery of education through the application of various statistical and machine learning approaches. An extensive literature survey indicates that no significant research is available within the area of MOOC data analysis, in particular research considering the behavioural patterns of users. In this paper, therefore, two sets of features, based on learner behavioural patterns, were compared in terms of their suitability for predicting the course outcome of learners participating in MOOCs. Our Exploratory Data Analysis demonstrates that there is a strong correlation between clickstream actions and successful learner outcomes. Various Machine Learning algorithms have been applied to enhance the accuracy of classifier models. Simulation results from our investigation have shown that Random Forest achieved viable performance for our prediction problem, obtaining the highest performance of the models tested. Conversely, Linear Discriminant Analysis achieved the lowest relative performance, though this represented only a marginal reduction in performance relative to the Random Forest.
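
    The sketch below illustrates the kind of prediction pipeline described above using scikit-learn on synthetic data; the behavioural features and outcome labels are hypothetical stand-ins for the learner clickstream data used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical behavioural features: video plays, forum posts, quiz attempts, active days.
X = rng.poisson(lam=[20, 3, 5, 12], size=(n, 4)).astype(float)
# Synthetic outcome loosely tied to overall engagement, standing in for pass/fail labels.
y = (X.sum(axis=1) + rng.normal(0, 5, n) > 42).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```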

    Developing novel meta-heuristic, hyper-heuristic and cooperative search for course timetabling problems

    The research presented in this PhD thesis focuses on the problem of university course timetabling, and examines the various ways in which metaheuristics, hyperheuristics and cooperative heuristic search techniques might be applied to this sort of problem. The university course timetabling problem is an NP-hard and also highly constrained combinatorial problem. Various techniques have been developed in the literature to tackle this problem. The research work presented in this thesis approaches this problem in two stages. For the first stage, the construction of initial solutions or timetables, we propose four hybrid heuristics that combine graph colouring techniques with a well-known local search method, tabu search, to generate initial feasible solutions. Then, in the second stage of the solution process, we explore different methods to improve upon the initial solutions. We investigate techniques such as single-solution metaheuristics, evolutionary algorithms, hyper-heuristics with reinforcement learning, cooperative low-level heuristics and cooperative hyper-heuristics. In the experiments throughout this thesis, we mainly use a popular set of benchmark instances of the university course timetabling problem, proposed by Socha et al. [152], to assess the performance of the methods proposed in this thesis. Then, this research work proposes algorithms for each of the two stages, construction of initial solutions and solution improvement, and analyses the proposed methods in detail. For the first stage, we examine the performance of the hybrid heuristics on constructing feasible solutions. In our analysis of these algorithms we discovered that these hybrid approaches are capable of generating good quality feasible solutions in reasonable computation time for the 11 benchmark instances of Socha et al. [152]. Just for this first stage, we conducted a second set of experiments, testing the proposed hybrid heuristics on another set of benchmark instances corresponding to the International Timetabling Competition 2002 [91]. Our hybrid construction heuristics were also capable of producing feasible solutions for the 20 instances of the competition in reasonable computation time. It should be noted, however, that most of the research presented here was focused on the 11 problem instances of Socha et al. [152]. For the second stage, we propose new metaheuristic algorithms and cooperative hyper-heuristics, namely a non-linear great deluge algorithm, an evolutionary non-linear great deluge algorithm (with a number of new specialised evolutionary operators), a hyper-heuristic with a learning mechanism approach, an asynchronous cooperative low-level heuristic and an asynchronous cooperative hyper-heuristic. These last two algorithms were inspired by the particle swarm optimisation technique. Detailed analyses of the proposed algorithms are presented and their relative benefits discussed. Finally, we give our suggestions as to how our best performing algorithms might be modified in order to deal with a wide range of problem domains, including more real-world constraints. We also discuss the drawbacks of our algorithms in the final section of this thesis.
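
    As a minimal sketch of the improvement stage, the code below shows a selection hyper-heuristic with a simple reinforcement-learning-style scoring of low-level heuristics (not the thesis implementation); the timetable encoding, move operators and reward scheme are hypothetical placeholders.

```python
import random

def cost(sol):
    """Placeholder soft-constraint penalty of a feasible timetable."""
    return sum(sol)

def move_reassign(sol):
    out = list(sol)
    out[random.randrange(len(out))] = random.randint(0, 5)
    return out

def move_swap(sol):
    out = list(sol)
    i, j = random.sample(range(len(out)), 2)
    out[i], out[j] = out[j], out[i]
    return out

def hyper_heuristic(sol, iterations=20_000):
    heuristics = [move_reassign, move_swap]
    scores = [1.0 for _ in heuristics]               # learned utility of each low-level heuristic
    cur_cost = cost(sol)
    for _ in range(iterations):
        idx = random.choices(range(len(heuristics)), weights=scores)[0]
        cand = heuristics[idx](sol)
        cand_cost = cost(cand)
        if cand_cost <= cur_cost:                    # accept non-worsening moves
            sol, cur_cost = cand, cand_cost
            scores[idx] += 1.0                       # positive reinforcement
        else:
            scores[idx] = max(0.1, scores[idx] - 0.1)  # mild punishment
    return sol, cur_cost

print(hyper_heuristic([random.randint(0, 5) for _ in range(60)])[1])
```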

    Computational Properties of Cerebellar Nucleus Neurons: Effects of Stochastic Ion Channel Gating and Input Location

    The function of the nervous system is shaped by the refined integration of synaptic inputs taking place at the single neuron level. Gain modulation is a computational principle that is widely used across the brain, in which the response of a neuronal unit to a set of inputs is affected in a multiplicative fashion by a second set of inputs, but without any effect on its selectivity. The arithmetic operations performed by pyramidal cells in cortical brain areas have been well characterised, along with the underlying mechanisms at the level of networks and cells, for instance background synaptic noise and dendritic saturation. However, in spite of the vast amount of research on the cerebellum and its function, little is known about neuronal computations carried out by its cellular components. A particular area of interest is the cerebellar nuclei, the main output gate of the cerebellum to the brain stem and cortical areas. The aim of this thesis is to contribute to an understanding of the arithmetic operations performed by neurons in the cerebellar nuclei. Focus is placed on two putative determinants: the location of the synaptic input and the presence of channel noise. To analyse the effect of channel noise, the known voltage-gated ion channels of a cerebellar nucleus neuron model are translated to stochastic Markov formalisms and their electrophysiological behaviour is compared to their deterministic Hodgkin-Huxley counterparts. The findings demonstrate that, in most cases, the behaviour of stochastic channels matches the reference deterministic models, with the notable exception of voltage-gated channels with fast kinetics. Two potential explanations are suggested for this discrepancy. Firstly, channels with fast kinetics are strongly affected by the artefactual loss of gating events in the simulation that is caused by the use of a finite-length time step. While this effect can be mitigated, in part, by using very small time steps, the second source of simulation artefacts is the rectification of the distribution of open channels, when the channel kinetics allow the generation of a window current with a temporally averaged equilibrium close to zero. Further, stochastic gating is implemented in a realistic cerebellar nucleus neuronal model. The resulting stochastic model exhibits probabilistic spiking and a similar output rate to the corresponding deterministic cerebellar nucleus neuronal model. However, the outcomes of this thesis indicate that the computational properties of the cerebellar nucleus neuronal model are independent of the presence of ion channel noise. The main result of this thesis is that the synaptic input location determines the single neuron computational properties, both in the cerebellar nucleus and layer Vb pyramidal neuronal models. The extent of multiplication increases systematically with the distance from the soma for the cerebellar nucleus, but not for the layer Vb pyramidal neuron, where it is smaller than would be expected for the distance from the soma. For both neurons, the underlying mechanism is related to the combined effect of nonlinearities introduced by dendritic saturation and the synaptic input noise. However, while excitatory inputs to perisomatic areas of the cerebellar nucleus neuron undergo additive operations and inputs to distal areas multiplicative ones, in the layer Vb pyramidal neuron the integration of the excitatory driving input is always multiplicative.
In addition, the change in gain is sensitive to the synchronicity of the excitatory synaptic input in the layer Vb pyramidal neuron, but not in the cerebellar nucleus neuron. These observations indicate that the same gain control mechanism might be utilized in distinct ways, in different computational contexts and across different areas, depending on the neuronal type and its function.
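
A hedged sketch of stochastic Markov gating for a two-state (closed/open) channel population, next to its deterministic gating-variable counterpart, is given below; the rate constants, channel count and time step are illustrative and not parameters of the thesis model. With fast kinetics and a coarse time step, transitions that open and close within a single step are never sampled, which is the kind of artefactual loss of gating events discussed above.

```python
import numpy as np

alpha, beta = 5.0, 2.0        # opening / closing rates (1/ms); hypothetical fast kinetics
N = 1000                      # number of channels in the stochastic population
dt = 0.01                     # time step (ms); too coarse a step loses gating events
T = 10.0
steps = int(T / dt)
rng = np.random.default_rng(1)

n_open = 0                    # stochastic count of open channels
m = 0.0                       # deterministic open probability (Hodgkin-Huxley-style)
stoch, det = [], []
for _ in range(steps):
    p_open = 1.0 - np.exp(-alpha * dt)            # per-step transition probabilities
    p_close = 1.0 - np.exp(-beta * dt)
    opening = rng.binomial(N - n_open, p_open)    # closed -> open transitions this step
    closing = rng.binomial(n_open, p_close)       # open -> closed transitions this step
    n_open += opening - closing
    m += dt * (alpha * (1.0 - m) - beta * m)      # deterministic rate equation
    stoch.append(n_open / N)
    det.append(m)

print("steady-state open fraction: stochastic %.3f, deterministic %.3f, analytic %.3f"
      % (np.mean(stoch[steps // 2:]), det[-1], alpha / (alpha + beta)))
```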