53 research outputs found

    GEM-PSO: Particle Swarm Optimization Guided by Enhanced Memory

    Particle Swarm Optimization (PSO) is a widely used nature-inspired optimization technique in which a swarm of virtual particles works together, with limited communication, to find a global optimum. PSO has been successfully applied to a wide variety of practical problems, from optimization in engineering fields to hybridization with other nature-inspired algorithms and general optimization problems. However, PSO suffers from a phenomenon known as premature convergence, in which the algorithm's particles all converge on a local optimum instead of the global optimum and cannot improve their solution any further. We seek to improve upon the standard PSO algorithm by fixing this premature convergence behavior. We do so by storing and exploiting increased information in the form of past bests, which we deem enhanced memory. We introduce three types of modifications to each new algorithm (which we call a GEM-PSO: Particle Swarm Optimization Guided by Enhanced Memory, because our modifications all deal with enhancing the memory of each particle): procedures for saving a found best, for removing a best from memory when a new one is to be added, and for selecting one (or more) bests to be used from those saved in memory. By using different combinations of these modifications, we can create many GEM-PSO variants with a wide variety of behaviors and qualities. We analyze the performance of GEM-PSO, discuss the impact of PSO's parameters on the algorithms' performance, isolate individual modifications in order to closely study their impact on the performance of any given GEM-PSO variant, and finally look at how multiple modifications perform together. We then draw conclusions about the efficacy and potential of GEM-PSO variants, and provide ideas for further exploration in this area of study. Many GEM-PSO variants are able to consistently outperform standard PSO on specific functions, and GEM-PSO variants show promise for both general and specific use cases.
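    The three modification types can be made concrete in a short sketch. The following is a minimal illustration, not the thesis's implementation: it assumes a fixed-size per-particle archive, first-in-first-out removal, and uniform random selection of a stored best, all chosen here for brevity.

        import random

        def gem_pso(f, dim, n_particles=30, iters=500, mem_size=5,
                    w=0.72, c1=1.49, c2=1.49, lo=-5.0, hi=5.0):
            pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
            vel = [[0.0] * dim for _ in range(n_particles)]
            mem = [[(p[:], f(p))] for p in pos]   # per-particle archives of past bests
            gbest, gval = min(((p[:], f(p)) for p in pos), key=lambda t: t[1])
            for _ in range(iters):
                for i in range(n_particles):
                    val = f(pos[i])
                    # saving procedure: archive any improvement over the worst stored best
                    if val < max(v for _, v in mem[i]):
                        mem[i].append((pos[i][:], val))
                        if len(mem[i]) > mem_size:
                            mem[i].pop(0)          # removal procedure: drop the oldest
                    if val < gval:
                        gbest, gval = pos[i][:], val
                    # selection procedure: steer toward a randomly chosen stored best
                    pb, _ = random.choice(mem[i])
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * r1 * (pb[d] - pos[i][d])
                                     + c2 * r2 * (gbest[d] - pos[i][d]))
                        pos[i][d] += vel[i][d]
            return gbest, gval

        print(gem_pso(lambda x: sum(v * v for v in x), dim=10))

    Swapping the saving, removal, or selection rule (e.g., removing the worst stored best instead of the oldest) is what produces the different GEM-PSO variants the thesis studies.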

    Hybrid railway vehicle trajectory optimisation using a non‐convex function and evolutionary hybrid forecast algorithm

    This paper introduces a novel optimisation algorithm for hybrid railway vehicles, combining a non-linear programming solver with the highly efficient "Mayfly Algorithm" to address a non-convex optimisation problem. The primary objective is to generate efficient trajectories that enable effective power distribution, optimal energy consumption, and economical use of multiple onboard power sources. By reducing unnecessary load stress on power sources during peak times, the algorithm contributes to lower maintenance costs, reduced downtime, and extended operational life of these sources. The algorithm's design considers various operational parameters, such as power demand, regenerative braking, velocity, and additional power requirements, enabling it to optimise the energy consumption profile throughout the journey. Its adaptability to the unique characteristics of hybrid railway vehicles allows for efficient energy management that leverages the vehicle's hybrid powertrain capabilities.
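    The solver/metaheuristic pairing can be illustrated with a generic two-stage pattern: a population-based global search proposes a good starting point, and a nonlinear-programming solver polishes it. The sketch below is an assumption-laden stand-in, not the paper's method: the simple attraction-plus-noise move substitutes for the full Mayfly Algorithm, and scipy.optimize.minimize with SLSQP stands in for the paper's non-linear programming solver.

        import numpy as np
        from scipy.optimize import minimize

        def hybrid_optimise(obj, bounds, pop=40, gens=60, seed=0):
            # Stage 1: crude population-based global search (a stand-in for
            # the Mayfly Algorithm's attraction dynamics).
            rng = np.random.default_rng(seed)
            lo = np.array([b[0] for b in bounds])
            hi = np.array([b[1] for b in bounds])
            X = rng.uniform(lo, hi, size=(pop, len(bounds)))
            for _ in range(gens):
                fit = np.apply_along_axis(obj, 1, X)
                best = X[fit.argmin()]
                # drift candidates toward the current best with bounded noise
                X = np.clip(X + 0.5 * (best - X)
                            + 0.05 * (hi - lo) * rng.normal(size=X.shape), lo, hi)
            x0 = X[np.apply_along_axis(obj, 1, X).argmin()]
            # Stage 2: hand the metaheuristic's best point to an NLP solver.
            res = minimize(obj, x0, method="SLSQP", bounds=bounds)
            return res.x, res.fun

        # toy non-convex objective (Rastrigin) as a placeholder for the
        # paper's energy/power trajectory cost
        rastrigin = lambda x: 10 * len(x) + sum(v * v - 10 * np.cos(2 * np.pi * v) for v in x)
        print(hybrid_optimise(rastrigin, bounds=[(-5.12, 5.12)] * 4))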

    Implementation of the Sine Cosine Algorithm and its variants for solving the tension compression spring design problem

    The Sine Cosine Algorithm (SCA) was introduced by Seyedali Mirjalili in 2016. It uses the sine and cosine functions to solve a wide range of optimisation problems. It belongs to the family of population-based metaheuristics, which seek optimal results by mimicking natural phenomena. This thesis elaborates on a wide variety of the algorithm's variants; in particular, the fuzzy, chaotic, opposition-based-learning, greedy Lévy flight, adaptive, and multi-objective Aquila variants, which improve its performance considerably. The work covers both the theoretical and the practical side of the algorithm: its efficiency was first tested on multiple benchmark functions, and the research was then extended by solving a widely known engineering problem, the tension/compression spring design. The algorithm is applicable to a variety of engineering, mathematical, and medical problems, and it can find solutions where other deterministic procedures cannot be applied. Many variants of the Sine Cosine Algorithm have been introduced to balance its weaknesses. Finally, diagrams are presented to give a better picture of the SCA's performance.
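    The core of SCA is a single position update that oscillates each solution around the best one found so far using sine and cosine terms, with the step amplitude r1 decreasing linearly over the run. Below is a minimal sketch of the canonical update rule; the population size, bounds, and a = 2 amplitude constant are the usual defaults from the literature, not choices taken from this thesis.

        import math, random

        def sca(f, dim, n_agents=30, iters=500, lo=-100.0, hi=100.0, a=2.0):
            X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
            best = min(X, key=f)[:]
            best_val = f(best)
            for t in range(iters):
                r1 = a - t * (a / iters)  # amplitude shrinks: exploration -> exploitation
                for x in X:
                    for j in range(dim):
                        r2 = random.uniform(0.0, 2.0 * math.pi)
                        r3 = random.uniform(0.0, 2.0)
                        step = r1 * abs(r3 * best[j] - x[j])
                        # switch between the sine and cosine branches with equal probability
                        x[j] += step * (math.sin(r2) if random.random() < 0.5 else math.cos(r2))
                        x[j] = min(max(x[j], lo), hi)  # clamp to the search bounds
                    val = f(x)
                    if val < best_val:
                        best, best_val = x[:], val
            return best, best_val

        print(sca(lambda x: sum(v * v for v in x), dim=10))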

    Developing Efficient Strategies for Automatic Calibration of Computationally Intensive Environmental Models

    Environmental simulation models have been playing a key role in civil and environmental engineering decision making processes for decades. The utility of an environmental model depends on how well the model is structured and calibrated. Model calibration typically takes an automated form in which the simulation model is linked to a search mechanism (e.g., an optimization algorithm) such that the search mechanism iteratively generates many parameter sets (e.g., thousands of parameter sets) and evaluates them by running the model, in an attempt to minimize differences between observed data and corresponding model outputs. The challenge arises when the environmental model is computationally intensive to run (with run-times of minutes to hours, for example), as any automatic calibration attempt then imposes a large computational burden. Such a challenge may force model users to accept sub-optimal solutions and forgo the best model performance. The objective of this thesis is to develop innovative strategies to circumvent the computational burden associated with automatic calibration of computationally intensive environmental models. The first main contribution of this thesis is a strategy called "deterministic model preemption", which opportunistically evades unnecessary model evaluations in the course of a calibration experiment and can save a significant portion of the computational budget (as much as 90% in some cases). Model preemption monitors the intermediate simulation results while the model is running and terminates (i.e., pre-empts) the simulation early if it recognizes that running the model further would not guide the search mechanism. This strategy is applicable to a range of automatic calibration algorithms (i.e., search mechanisms) and is deterministic in that it leads to exactly the same calibration results as when preemption is not applied. Another main contribution of this thesis is developing and utilizing the concept of "surrogate data", which is a reasonably small but representative proportion of a full set of calibration data. This concept is inspired by existing surrogate modelling strategies in which a surrogate model (also called a metamodel) is developed and utilized as a fast-to-run substitute for an original computationally intensive model. A framework is developed to efficiently calibrate hydrologic models to the full set of calibration data while running the original model only on surrogate data for the majority of candidate parameter sets, a strategy which leads to considerable computational saving. To this end, mapping relationships are developed to approximate the model performance on the full data based on the model performance on surrogate data. This framework is applicable to the calibration of any environmental model for which appropriate surrogate data and mapping relationships can be identified. As another main contribution, this thesis critically reviews and evaluates the large body of literature on surrogate modelling strategies from various disciplines, as they are the most commonly used methods to relieve the computational burden associated with computationally intensive simulation models. To reliably evaluate these strategies, a comparative assessment and benchmarking framework is developed which presents a clear, computational-budget-dependent definition of the success or failure of surrogate modelling strategies. Two large families of surrogate modelling strategies are critically scrutinized and evaluated: "response surface surrogate" modelling, which involves statistical or data-driven function approximation techniques (e.g., kriging, radial basis functions, and neural networks), and "lower-fidelity physically-based surrogate" modelling, which develops and utilizes simplified models of the original system (e.g., a groundwater model with a coarse mesh). This thesis raises fundamental concerns about response surface surrogate modelling and demonstrates that, although they may be less efficient, lower-fidelity physically-based surrogates are generally more reliable, as they preserve, to some extent, the physics involved in the original model. Five different surface water and groundwater models are used across this thesis to test the performance of the developed strategies and support the discussions. However, the developed strategies are typically simulation-model-independent and can be applied to the calibration of any computationally intensive simulation model that has the required characteristics. This thesis leaves the reader with a suite of strategies for efficient calibration of computationally intensive environmental models, while providing some guidance on how to select, implement, and evaluate the appropriate strategy for a given environmental model calibration problem.
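    The preemption idea hinges on the objective accumulating monotonically as the simulation advances (e.g., a growing sum of squared errors), so a partial value that already exceeds the preemption threshold can never recover. Below is a minimal sketch under that assumption; the toy simulator, the candidate list, and the choice of threshold (here, the best objective so far, which suits one-at-a-time searches) are all illustrative stand-ins, not the thesis's formulation.

        def preemptive_evaluate(error_increments, threshold):
            # The running objective only grows, so once it exceeds the
            # threshold the candidate cannot become attractive again and
            # the simulation can stop immediately without changing results.
            total = 0.0
            for inc in error_increments:
                total += inc
                if total > threshold:
                    return total, True       # pre-empted: run terminated early
            return total, False              # ran to completion

        def run_model(params):
            # hypothetical stand-in for a slow simulator: yields one squared
            # error per simulated time step against toy observations
            return ((obs - params * t) ** 2 for t, obs in enumerate([1.0, 2.1, 2.9, 4.2]))

        best = float("inf")
        for params in [0.3, 1.0, 1.7, 0.9]:  # candidate sets from the search mechanism
            value, early = preemptive_evaluate(run_model(params), best)
            if not early:
                best = min(best, value)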

    Meta-optimization of Bio-inspired Techniques for Object Recognition

    Object recognition is the task of automatically finding a given object in an image or in a video sequence. This task is very important in many fields such as medical diagnosis, advanced driving assistance, image understanding, surveillance, and virtual reality. Nevertheless, it can be very challenging because of artefacts (related to the acquisition system, the environment, or other optical effects such as perspective and illumination changes) which may affect the appearance even of easy-to-identify, well-defined objects. A possible way to achieve object recognition is to use model-based approaches: in this scenario a model (also called a template) representing the properties of the target object is created; then hypotheses on the position of the object are generated, and the model is transformed accordingly, until the best match with the actual appearance of the object is found. Generating these hypotheses intelligently requires a good optimization algorithm. Bio-inspired techniques are optimization methods whose foundations rely on properties observed in nature (such as cooperation, evolution, and emergence). Their effectiveness has been proved in many optimization tasks, especially in multi-modal, multi-dimensional hard problems like object recognition. Although these heuristics are generally effective, they depend on many parameters that strongly affect their performance; therefore, significant effort must be spent to understand how to let them realize their full potential. This thesis describes a method to (i) automatically find good parameters for bio-inspired techniques, both for a specific problem and for more than one at a time, and (ii) acquire more knowledge of a parameter's role in such algorithms. It then shows how bio-inspired techniques can be successfully applied to different object recognition tasks, and how their performance can be further improved by means of automatic parameter tuning.
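    The tuning loop itself can be sketched generically: an outer search samples parameter settings for the inner bio-inspired technique and scores each setting by the inner algorithm's average result over a few independent runs. The sketch below uses plain random search over PSO's inertia and acceleration coefficients as the outer layer; the thesis's actual meta-optimization method, parameter ranges, and evaluation protocol are not reproduced here.

        import random

        def inner_pso(f, dim, w, c1, c2, n=20, iters=150):
            # tiny PSO used as the tunee; returns the best value found
            pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
            vel = [[0.0] * dim for _ in range(n)]
            pbest = [p[:] for p in pos]
            pval = [f(p) for p in pos]
            g = min(range(n), key=lambda i: pval[i])
            gbest, gval = pbest[g][:], pval[g]
            for _ in range(iters):
                for i in range(n):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        vel[i][d] = (w * vel[i][d] + c1 * r1 * (pbest[i][d] - pos[i][d])
                                     + c2 * r2 * (gbest[d] - pos[i][d]))
                        pos[i][d] += vel[i][d]
                    v = f(pos[i])
                    if v < pval[i]:
                        pbest[i], pval[i] = pos[i][:], v
                        if v < gval:
                            gbest, gval = pos[i][:], v
            return gval

        def meta_optimize(f, dim, trials=20, runs=3):
            # outer random search: keep the parameter set with the best
            # average inner-PSO result over several independent runs
            best_params, best_score = None, float("inf")
            for _ in range(trials):
                w, c1, c2 = (random.uniform(0.1, 1.0),
                             random.uniform(0.5, 2.5), random.uniform(0.5, 2.5))
                score = sum(inner_pso(f, dim, w, c1, c2) for _ in range(runs)) / runs
                if score < best_score:
                    best_params, best_score = (w, c1, c2), score
            return best_params, best_score

        sphere = lambda x: sum(v * v for v in x)
        print(meta_optimize(sphere, dim=5))

    Scoring each parameter set across several benchmark functions at once, instead of a single sphere function, gives the "more than one problem at a time" setting mentioned above.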

    Evaluation and optimisation of traction system for hybrid railway vehicles

    Over the past decade, energy and environmental sustainability in urban rail transport have become increasingly important. Hybrid transportation systems present a multifaceted challenge, encompassing aspects such as hydrogen production, refuelling station infrastructure, propulsion system topology, power source sizing, and control. The evaluation and optimisation of these aspects are critical for the adaptation and commercialisation of hybrid railway vehicles. While there has been significant progress in the development of hybrid railway vehicles, further improvements in propulsion system design are necessary. This thesis explores strategies to achieve this ambitious goal by substituting diesel trains with hybrid trains. However, limited research has assessed the operational performance of replacing diesel trains with hybrid trains on the same tracks. This thesis develops various optimisation techniques for evaluating and refining the hybrid traction system to address this gap. In the first phase of this research, the author developed a novel Hybrid Train Simulator designed to analyse driving performance and energy flow among multiple power sources, such as internal combustion engines, electrification, fuel cells, and batteries. The simulator incorporates a novel Automatic Smart Switching Control technique, which scales power among multiple power sources based on the route gradient for hybrid trains. This smart switching approach enhances battery and fuel cell life and reduces maintenance costs by drawing on each source only as needed, thereby eliminating forced charging and discharging at excessively high currents. Simulation results demonstrate a 6% reduction in energy consumption for hybrid trains equipped with smart switching compared to those without it. In the second phase of this research, the author presents a novel technique for solving the optimisation problem of hybrid railway vehicle traction systems by utilising evolutionary and numerical optimisation techniques. The optimisation method employs a nonlinear programming solver, interpreting the problem via a non-convex function combined with an efficient "Mayfly algorithm". The developed hybrid optimisation algorithm minimises traction energy while using limited power to prevent unnecessary load on power sources, ensuring their prolonged life. The algorithm takes into account linear and non-linear variables, such as velocity, acceleration, traction forces, distance, time, power, and energy, to address the hybrid railway vehicle optimisation problem, focusing on the energy-time trade-off. The optimised trajectories exhibit an average reduction of 16.85% in total energy consumption, illustrating the algorithm's effectiveness across diverse routes and conditions, with an average increase in journey times of only 0.40% and a 15.18% reduction in traction power. The algorithm achieves a well-balanced energy-time trade-off, prioritising energy efficiency without significantly impacting journey duration, a critical aspect of sustainable transportation systems. In the third phase of this thesis, the author introduces artificial neural network (ANN) models to solve the optimisation problem for hybrid railway vehicles. Two ANN models, one with a time-based and one with a power-based architecture, are presented, capable of predicting optimal hybrid train trajectories. These models tackle the challenge of analysing large datasets of hybrid railway vehicles, and both demonstrate the potential for efficiently predicting hybrid train target parameters. The results indicate that both ANN models effectively predict a hybrid train's critical parameters and trajectory, with mean errors ranging from 0.19% to 0.21%. However, the cascade-forward neural network (CFNN) topology in the time-based architecture outperforms the feed-forward neural network (FFNN) topology in the power-based architecture in terms of mean squared error and maximum error; specifically, the cascade-forward topology within the time-based structure exhibits a slightly lower MSE and maximum error than its power-based counterpart. Moreover, the study reports the average percentage difference between the benchmark and the FFNN/CFNN trajectories, highlighting that the time-based architecture exhibits lower differences (0.18% and 0.85%) than the power-based architecture (0.46% and 0.92%).
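    As a rough illustration of a gradient-aware power split, the rule below biases the fuel cell share upward on climbs, lets the battery cover the remainder and absorb regenerative braking, and caps each source at its rating. Every threshold, rating, and the linear gradient scaling here is an invented placeholder, not the Automatic Smart Switching Control logic from the thesis.

        def smart_switch(demand_kw, gradient_pct, fc_rated_kw=400.0, batt_rated_kw=300.0):
            # Negative demand means braking: recover energy into the battery,
            # limited by its power rating.
            if demand_kw <= 0.0:
                return {"fuel_cell_kw": 0.0, "battery_kw": max(demand_kw, -batt_rated_kw)}
            # Steeper climbs shift more of the load onto the fuel cell
            # (hypothetical linear scaling, capped at 100%).
            fc_share = min(1.0, 0.5 + 0.1 * max(gradient_pct, 0.0))
            fc = min(fc_share * demand_kw, fc_rated_kw)
            batt = min(demand_kw - fc, batt_rated_kw)  # battery covers the remainder
            return {"fuel_cell_kw": fc, "battery_kw": batt}

        print(smart_switch(demand_kw=500.0, gradient_pct=2.5))    # climbing
        print(smart_switch(demand_kw=-200.0, gradient_pct=-1.0))  # braking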

    Niching in Particle Swarm Optimization

    The Particle Swarm Optimization (PSO) algorithm, like many optimization algorithms, is designed to find a single optimal solution. When dealing with multimodal functions, it needs modifications to be able to locate multiple optima. In a parallel with Evolutionary Computation algorithms, these modifications can be grouped under the framework of niching. In this thesis, we present a new approach to niching in PSO that is based on clustering particles to identify niches. The neighborhood structure, on which particles rely for communication, is exploited together with the niche information to perform parallel searches and locate multiple optima. The clustering approach was implemented in the k-means based PSO (kPSO), which employs the standard k-means clustering algorithm. We follow the development of kPSO, starting from a first, simple implementation, and then introduce several improvements, such as a mechanism to adaptively identify the number of clusters. The final kPSO algorithm proves to be competitive with existing algorithms, showing better performance on most multimodal functions in a commonly used benchmark set.
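    The clustering step can be sketched in a few lines: run k-means on the particle positions, treat each cluster as a niche, and let each sub-swarm pull toward its own cluster best rather than a single global best. The sketch below shows only that niche-identification step with a fixed k; kPSO's neighborhood handling and its adaptive choice of the number of clusters are not reproduced.

        import random

        def kmeans(points, k, iters=10):
            # plain k-means; each resulting cluster is treated as a niche
            centers = random.sample(points, k)
            for _ in range(iters):
                clusters = [[] for _ in range(k)]
                for p in points:
                    j = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
                    clusters[j].append(p)
                centers = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centers[j]
                           for j, cl in enumerate(clusters)]
            return clusters

        def niche_bests(f, positions, k=4):
            # one best per non-empty cluster: each sub-swarm gets its own attractor
            return [min(cl, key=f) for cl in kmeans(positions, k) if cl]

        f = lambda x: (x[0] ** 2 - 1) ** 2  # two minima, at x = -1 and x = +1
        swarm = [[random.uniform(-2.0, 2.0)] for _ in range(40)]
        print(niche_bests(f, swarm, k=2))

    On this two-minima toy function, the two cluster bests typically land near x = -1 and x = +1, which is exactly the behavior a niching PSO needs.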

    Parallel optimization algorithms for high performance computing: application to thermal systems

    The need for optimization is present in every field of engineering. Moreover, applications requiring a multidisciplinary approach in order to make a step forward are increasing. This leads to the need to solve complex optimization problems that exceed the capacity of the human brain or intuition. A standard way of proceeding is to use evolutionary algorithms, among which genetic algorithms hold a prominent place. These are characterized by their robustness and versatility, as well as by their high computational cost and low convergence speed. Many optimization packages are available under free software licenses and are representative of the current state of the art in optimization technology. However, the ability of optimization algorithms to adapt to massively parallel computers while reaching satisfactory efficiency levels is still an open issue. Even packages suited for multilevel parallelism encounter difficulties when dealing with objective functions involving long and variable simulation times. This variability is common in Computational Fluid Dynamics and Heat Transfer (CFD & HT), nonlinear mechanics, etc., and is nowadays a dominant concern for large-scale applications. Current research on improving the performance of evolutionary algorithms is mainly focused on developing new search algorithms. Nevertheless, there is a vast body of well-performing sequential algorithms suitable for implementation on parallel computers; the gap to be covered is efficient parallelization. Moreover, advances in the research of new search algorithms and of efficient parallelization are additive, so the enhancement of current state-of-the-art optimization software can be accelerated if both fronts are tackled simultaneously. The motivation of this Doctoral Thesis is to take a step forward towards the successful integration of Optimization and High Performance Computing capabilities, which has the potential to boost technological development by providing better designs, shortening product development times, and minimizing the required resources. After a thorough state-of-the-art study of the mathematical optimization techniques available to date, a generic mathematical optimization tool has been developed, with a special focus on applying the library to the field of Computational Fluid Dynamics and Heat Transfer (CFD & HT). The main shortcomings of the standard parallelization strategies available for genetic algorithms and similar population-based optimization methods have then been analyzed. Computational load imbalance has been identified as the key factor degrading the optimization algorithm's scalability (i.e., parallel efficiency) when the average makespan of a batch of individuals is greater than the average time required by the optimizer for inter-processor communications. This occurs because processors are often unable to finish evaluating their queues of individuals simultaneously and must be synchronized before the next batch of individuals is created; the computational load imbalance is thus translated into idle time on some processors. Several load balancing algorithms have been proposed and exhaustively tested, and they are extendable to any other population-based optimization method that needs to synchronize all processors after the evaluation of each batch of individuals. Finally, a real-world engineering application, optimizing the refrigeration system of a power electronic device, is presented as an illustrative example in which the proposed load balancing algorithms reduce the simulation time required by the optimization tool.
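    A minimal illustration of the load-balancing idea: when per-individual simulation times can be estimated, a greedy longest-processing-time assignment keeps processor queues even and shrinks the idle time before the synchronization point. This is a generic scheduling sketch, not one of the thesis's specific algorithms.

        import heapq

        def balance_batch(est_times, n_procs):
            # greedy LPT: hand each individual, longest estimated run first,
            # to the currently least-loaded processor
            heap = [(0.0, p, []) for p in range(n_procs)]  # (load, proc id, queue)
            heapq.heapify(heap)
            for idx in sorted(range(len(est_times)), key=lambda i: -est_times[i]):
                load, p, queue = heapq.heappop(heap)
                queue.append(idx)
                heapq.heappush(heap, (load + est_times[idx], p, queue))
            makespan = max(load for load, _, _ in heap)
            return {p: q for _, p, q in heap}, makespan

        times = [9.0, 1.0, 7.5, 3.0, 2.0, 8.0, 4.5, 0.5]  # variable CFD run times
        queues, makespan = balance_batch(times, n_procs=3)
        print(queues, makespan)

    With these example times, the greedy assignment reaches a makespan of 12.0, whereas a naive contiguous split of the same eight runs across three processors leaves one processor busy for 17.5 while the others sit idle, which is precisely the synchronization idle time the thesis targets.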