
    General Purpose Optimization Library (GPOL): a flexible and efficient multi-purpose optimization library in Python

    Bakurov, I., Buzzelli, M., Castelli, M., Vanneschi, L., & Schettini, R. (2021). General purpose optimization library (GPOL): A flexible and efficient multi-purpose optimization library in Python. Applied Sciences (Switzerland), 11(11), 1-34. [4774]. https://doi.org/10.3390/app11114774
    Several interesting optimization libraries have been proposed. Some focus on individual optimization algorithms, or limited sets of them, while others focus on limited sets of problems. Frequently, an implementation does not precisely follow the formal definition, and the libraries are difficult to customize and to compare, which makes it hard to perform comparative studies and to propose novel approaches. In this paper, we address these issues with the General Purpose Optimization Library (GPOL): a flexible and efficient multi-purpose optimization library covering a wide range of stochastic iterative search algorithms, whose flexible and modular implementation allows solving many different problem types from the fields of continuous and combinatorial optimization and supervised machine learning. Moreover, the library supports full-batch and mini-batch learning and allows carrying out computations on a CPU or GPU. The package is distributed under an MIT license. Source code, installation instructions, demos and tutorials are publicly available on our code hosting platform (the reference is provided in the Introduction).
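
    As a flavour of the kind of algorithm such a library unifies, the sketch below implements a simple batched stochastic iterative search (random-perturbation hill climbing) whose fitness evaluations can run on a CPU or GPU via PyTorch tensors. The names and structure are my own illustration, not GPOL's actual API.

```python
# Minimal sketch of a stochastic iterative search with batched fitness
# evaluation on CPU or GPU (illustrative only; this is not GPOL's API).
import torch

def sphere(x):
    # Fitness: sum of squares per candidate; lower is better.
    return (x ** 2).sum(dim=-1)

def hill_climb(dim=10, pop=256, iters=500, step=0.1, device="cpu"):
    x = torch.randn(pop, dim, device=device)       # batch of candidate solutions
    fx = sphere(x)
    for _ in range(iters):
        cand = x + step * torch.randn_like(x)      # perturb every candidate
        fc = sphere(cand)
        improved = (fc < fx).unsqueeze(-1)         # keep only improving moves
        x = torch.where(improved, cand, x)
        fx = torch.minimum(fc, fx)
    return x[fx.argmin()], fx.min().item()

best, best_fitness = hill_climb(device="cuda" if torch.cuda.is_available() else "cpu")
```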

    Optimization of bio-inspired algorithms on heterogeneous CPU-GPU systems

    The scientific challenges of the 21st century require the processing and analysis of huge amounts of information in what is known as the Big Data era. Future advances in different sectors of society, such as medicine, engineering or efficient energy production, to mention just a few examples, depend on the continuous growth in the computational power of modern computers. However, this computational growth, traditionally guided by the well-known "Moore's Law", has been compromised in recent decades, mainly due to the physical limitations of silicon. Computer architects have developed numerous contributions (multicore, manycore, heterogeneity, dark silicon, etc.) to try to mitigate this computational slowdown, leaving in the background other factors that are fundamental to problem solving, such as programmability, reliability and precision. Software development, however, has followed a completely opposite path, where ease of programming through abstraction models, automatic code debugging to avoid undesired effects, and deployment to production are key to the economic viability and efficiency of the digital business sector. This path often compromises the performance of the applications themselves, a consequence that is completely unacceptable in a scientific context. The starting hypothesis of this doctoral thesis is to reduce the distance between the hardware and software fields in order to contribute to solving the scientific challenges of the 21st century. Hardware development is marked by the consolidation of processors oriented towards massive data parallelism, mainly GPUs (Graphics Processing Units) and vector processors, which are combined to build heterogeneous processors or computers (HSA). Specifically, we focus on the use of GPUs to accelerate scientific applications. GPUs have positioned themselves as one of the platforms with the greatest potential for implementing algorithms that simulate complex scientific problems. Since their birth, the trajectory and history of graphics cards have been shaped by the world of video games, reaching very high levels of popularity as more realism was achieved in this area. An important milestone occurred in 2006, when NVIDIA (the leading manufacturer of graphics cards) carved out a place in high-performance computing and research with the development of CUDA (Compute Unified Device Architecture). This architecture makes it possible to use the GPU for the development of scientific applications in a versatile way. Despite the importance of the GPU, the improvement that can be obtained by using it jointly with the CPU is noteworthy, which leads us to introduce heterogeneous systems, as the title of this work indicates. It is in heterogeneous CPU-GPU environments where performance reaches its peak, since GPUs do not merely support the scientific computing of researchers: it is in a heterogeneous system combining different types of processors where the highest performance can be achieved. In this environment the processors are not intended to compete with each other; on the contrary, each architecture specialises in the part where it can best exploit its capabilities.
The highest performance is achieved in heterogeneous clusters, where multiple nodes are interconnected and may differ not only in their CPU-GPU architectures but also in the computational capabilities within those architectures. With these kinds of scenarios in mind, new challenges arise in getting the chosen software to run as efficiently as possible and to obtain the best possible results. These new platforms make it necessary to redesign the software in order to make the most of the available computational resources. Existing algorithms must therefore be redesigned and optimised for the contributions in this field to be relevant, and algorithms must be found that, by their very nature, are candidates for optimal execution on such high-performance platforms. At this point we find a family of algorithms called bio-inspired algorithms, which use collective intelligence as the core of their problem solving. It is precisely this collective intelligence that makes them perfect candidates for implementation on these platforms under the new parallel computing paradigm, since solutions can be built from individuals that, through some form of communication, are able to jointly construct a common solution. This thesis focuses in particular on one of these bio-inspired algorithms, which falls under the term metaheuristics within the Soft Computing paradigm: Ant Colony Optimization (ACO). The algorithm is contextualised, studied and analysed; its most critical parts are identified and redesigned in search of optimisation and parallelisation, while maintaining or improving the quality of its solutions. The possible alternatives are then implemented and tested on various high-performance platforms. The knowledge acquired in this theoretical and practical study is applied to real cases; more specifically, its application to protein folding is shown. All of this analysis is carried over into a concrete case: in this work, we bring together new high-performance hardware platforms with the software redesign and implementation of a bio-inspired algorithm applied to a highly complex scientific problem, protein folding. When implementing a solution to a real problem, it is necessary to carry out a preliminary study that allows an in-depth understanding of the problem, since any newcomer to the field will encounter new terminology and issues; in this case, amino acids, molecules and simulation models that are unfamiliar to anyone without a biomedical background.
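
    Because the thesis's redesign targets ACO's most expensive parts, the following plain-Python sketch shows the two components that usually dominate the algorithm's cost and are the usual targets for parallelisation: the probabilistic transition rule and the pheromone update. It is a generic illustration of ACO, not the thesis's optimised CPU-GPU implementation.

```python
# Generic ACO building blocks (plain-Python illustration, not the thesis's
# CPU-GPU code): the probabilistic transition rule and the pheromone update.
import random

def choose_next(current, unvisited, tau, eta, alpha=1.0, beta=2.0):
    # Pick j with probability proportional to tau[current][j]^alpha * eta[current][j]^beta.
    weights = [(tau[current][j] ** alpha) * (eta[current][j] ** beta) for j in unvisited]
    return random.choices(unvisited, weights=weights, k=1)[0]

def evaporate_and_deposit(tau, tours, costs, rho=0.5, q=1.0):
    # Evaporate all pheromone, then deposit along each ant's (cyclic) tour.
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for tour, cost in zip(tours, costs):
        for i, j in zip(tour, tour[1:] + tour[:1]):
            tau[i][j] += q / cost
```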

    Generic Techniques in General Purpose GPU Programming with Applications to Ant Colony and Image Processing Algorithms

    In 2006 NVIDIA introduced a new unified GPU architecture facilitating general-purpose computation on the GPU. The following year NVIDIA introduced CUDA, a parallel programming architecture for developing general-purpose applications for direct execution on the new unified GPU. CUDA exposes the massively parallel architecture of the GPU so that parallel code can be written to execute much faster than its sequential counterpart. Although CUDA abstracts the underlying architecture, fully utilising and scheduling the GPU is non-trivial and has given rise to a new active area of research. Due to the inherent complexities pertaining to GPU development, in this thesis we explore and find efficient parallel mappings of existing and new parallel algorithms on the GPU using NVIDIA CUDA. We place particular emphasis on metaheuristics, image processing and designing reusable techniques and mappings that can be applied to other problems and domains. We begin by focusing on Ant Colony Optimisation (ACO), a nature-inspired heuristic approach for solving optimisation problems. We present a versatile improved data-parallel approach for solving the Travelling Salesman Problem using ACO, resulting in significant speedups. By extending our initial work, we show how existing mappings of ACO on the GPU are unable to compete against their sequential counterparts when common CPU optimisation strategies are employed, and we detail three distinct candidate set parallelisation strategies for execution on the GPU. By further extending our data-parallel approach, we present the first implementation of an ACO-based edge detection algorithm on the GPU to reduce the execution time and improve the viability of ACO-based edge detection. We finish by presenting a new color edge detection technique using the volume of a pixel in the HSI color space, along with a parallel GPU implementation that is able to withstand greater levels of noise than existing algorithms.
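
    To give a flavour of what a data-parallel mapping looks like, the sketch below lets every ant choose its next city in one batched tensor operation instead of an ant-by-ant loop. This is my own simplified Python/PyTorch illustration of the idea, not the thesis's CUDA kernels, and it assumes positive pheromone (tau) and heuristic (eta) matrices.

```python
# Data-parallel flavour of one ACO construction step (illustration only, not
# the thesis's CUDA kernels): all ants pick their next city in a single
# batched operation.
import torch

def step_all_ants(current, visited, tau, eta, alpha=1.0, beta=2.0):
    # current: (ants,) long tensor of city indices; visited: (ants, n) bool mask.
    scores = (tau[current] ** alpha) * (eta[current] ** beta)    # (ants, n)
    scores = scores.masked_fill(visited, 0.0)                    # forbid revisits
    probs = scores / scores.sum(dim=1, keepdim=True)
    return torch.multinomial(probs, num_samples=1).squeeze(1)    # next city per ant
```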

    GPU parallelization strategies for metaheuristics: a survey

    Metaheuristics have been showing interesting results in solving hard optimization problems. However, they become limited in terms of effectiveness and runtime for high-dimensional problems. Thanks to the independence of metaheuristic components, parallel computing appears as an attractive choice to reduce the execution time and to improve solution quality. By exploiting the increasing performance and programmability of graphics processing units (GPUs) to this aim, GPU-based parallel metaheuristics have been implemented using different designs. Recent results in this area show that GPUs tend to be effective co-processors for tackling complex optimization problems. In this survey, the mechanisms involved in GPU programming for implementing parallel metaheuristics are presented and discussed through a study of relevant research papers. Metaheuristics can obtain satisfying results when solving optimization problems in a reasonable time, but they suffer from a lack of scalability and become limited when facing complex high-dimensional optimization problems. To overcome this limitation, GPU-based parallel computing appears as a strong alternative. Thanks to GPUs, parallel metaheuristics have achieved better results in terms of computation time and even solution quality.
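
    One recurring design in this literature is the master-worker scheme: the metaheuristic loop stays sequential while the independent fitness evaluations of the population run in parallel. The snippet below is a minimal CPU-based sketch of that idea using Python's multiprocessing (my own illustration, not code from the survey); GPU designs replace the process pool with per-thread or per-block evaluations.

```python
# Minimal master-worker sketch (illustration only, not from the survey):
# the population's fitness evaluations are independent, so they can be
# farmed out to a pool of worker processes.
from multiprocessing import Pool

def fitness(solution):
    # Placeholder objective: sum of squares.
    return sum(x * x for x in solution)

def evaluate_population(population, workers=4):
    with Pool(processes=workers) as pool:
        return pool.map(fitness, population)   # one independent evaluation per candidate

if __name__ == "__main__":
    pop = [[0.5, -1.2, 3.0], [1.0, 1.0, 1.0], [0.0, 0.0, 2.5]]
    print(evaluate_population(pop, workers=2))
```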

    Ant Colony Optimization

    Ant Colony Optimization (ACO) is the best example of how studies aimed at understanding and modeling the behavior of ants and other social insects can provide inspiration for the development of computational algorithms for the solution of difficult mathematical problems. Introduced by Marco Dorigo in his PhD thesis (1992) and initially applied to the travelling salesman problem, the ACO field has experienced tremendous growth, standing today as an important nature-inspired stochastic metaheuristic for hard optimization problems. This book presents state-of-the-art ACO methods and is divided into two parts: (I) Techniques, which includes parallel implementations, and (II) Applications, where recent contributions of ACO to diverse fields, such as traffic congestion and control, structural optimization, manufacturing, and genomics, are presented.

    Vehicle routing optimization using multiple improvements in local search

    Combinatorial optimization problems on graphs arise in many practical applications. One of the most studied practical combinatorial optimization problems is the Vehicle Routing Problem (VRP). When coupled with modern in-car navigation and fleet management software, real-world applications of VRP optimization result in significant cost savings. In this paper a novel multiple-improvements pivoting rule for the Capacitated VRP (CVRP) is proposed. Its application significantly reduces the computational time needed for CVRP optimization. The novel pivoting rule is implemented as part of the search step selection mechanism in the Iterated Local Search algorithm. The augmented iterated local search algorithm is tested on 4 large-scale real-world problems in Croatia with up to 7,065 customers and 236 vehicles, and on standard CVRP benchmark sets. Real-world problem data was obtained from a large Croatian logistics company. A comparison of the well-known first and best pivoting rules with the proposed multiple-improvements pivoting rule is given in terms of travel distance, number of search moves and computational time. The achieved computational speed-ups are up to 29 times compared to the first-improvement pivoting rule and 9 times compared to the best-improvement pivoting rule, without any substantial degradation in the quality of the obtained solution.
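
    The difference between the pivoting rules can be made concrete with a short sketch. The code below is my own simplified illustration, not the paper's implementation: delta(m) returns the cost change of a candidate move (negative means improving) and conflicts(a, b) says whether two moves cannot be applied together. The multiple-improvements rule collects several compatible improving moves in a single scan of the neighbourhood instead of restarting the scan after every accepted move, which is the intuition behind the reduced computational time.

```python
# Three pivoting rules for one local-search step over a list of candidate
# moves (illustrative sketch; delta and conflicts are assumed helpers).
def first_improvement(moves, delta):
    for m in moves:
        if delta(m) < 0:
            return [m]                      # stop scanning at the first improving move
    return []

def best_improvement(moves, delta):
    best = min(moves, key=delta, default=None)
    return [best] if best is not None and delta(best) < 0 else []

def multiple_improvements(moves, delta, conflicts):
    accepted = []                           # gather several compatible improving moves
    for m in moves:                         # in a single neighbourhood scan
        if delta(m) < 0 and not any(conflicts(m, a) for a in accepted):
            accepted.append(m)
    return accepted
```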

    Simulated Annealing

    The book contains 15 chapters presenting recent contributions of top researchers working with Simulated Annealing (SA). Although it represents a small sample of the research activity on SA, the book will certainly serve as a valuable tool for researchers interested in getting involved in this multidisciplinary field. In fact, one of its salient features is that the book is highly multidisciplinary in terms of application areas, since it assembles experts from the fields of Biology, Telecommunications, Geology, Electronics and Medicine.
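
    For readers new to the field, the core of Simulated Annealing fits in a few lines: accept any improving neighbour, accept worse neighbours with probability exp(-delta/T), and lower the temperature over time. The sketch below is a generic textbook version, not code from any chapter of the book.

```python
# Generic simulated annealing loop (textbook sketch, not from the book):
# neighbour(x) proposes a nearby solution and cost(x) evaluates it.
import math
import random

def simulated_annealing(x0, neighbour, cost, t0=1.0, alpha=0.95, iters=10000):
    x, fx = x0, cost(x0)
    best, best_f = x, fx
    t = t0
    for _ in range(iters):
        y = neighbour(x)
        fy = cost(y)
        # Accept improvements always; accept worse moves with probability exp(-(fy-fx)/t).
        if fy < fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < best_f:
                best, best_f = x, fx
        t *= alpha        # geometric cooling schedule
    return best, best_f
```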

    Optimisation heuristics for solving technician and task scheduling problems

    Motivated by an underlying industrial demand, solving intractable technician and task scheduling problems through the use of heuristic and metaheuristic approaches has long been an active research area within the academic community. Many solution methodologies proposed in the literature have either been developed to solve a particular variant of the technician and task scheduling problem or are only appropriate for a specific scale of the problem. The motivation of this research is to find general-purpose heuristic approaches that can solve variants of technician and task scheduling problems, at scale, balancing time efficiency and solution quality. The unique challenges include finding heuristics that are robust, easily adapted to deal with extra constraints, and scalable, to solve problems that are indicative of the real world. The research presented in this thesis describes three heuristic methodologies that have been designed and implemented: (1) the intelligent decision heuristic (which considers multiple team configuration scenarios and job allocations simultaneously), (2) the look-ahead heuristic (characterised by its ability to consider the impact of allocation decisions on subsequent stages of the scheduling process), and (3) the greedy randomized heuristic (which has a flexible allocation approach and is computationally efficient). Datasets used to test the three heuristic methodologies include real-world problem instances, instances from the literature, problem instances extended from the literature to include extra constraints, and, finally, instances created using a data generator. The datasets include a broad array of real-world constraints (skill requirements, teaming, priority, precedence, unavailable days, outsourcing, time windows, and location) on a range of problem sizes (5-2500 jobs) to thoroughly investigate the scalability and robustness of the heuristics. The key findings presented are that the constraints a problem features and the size of the problem heavily influence the design and behaviour of the solution approach used. The contributions of this research are: benchmark datasets indicative of the real world in terms of both constraints included and problem size, the data generators developed, which enable the creation of data to investigate certain problem aspects, a mathematical formulation of the multi-period technician routing and scheduling problem, and, finally, the heuristics developed, which have proved to be robust and scalable solution methodologies.
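
    As an example of the third family, a greedy randomized allocation step typically ranks the feasible technicians for each job and then picks at random from a small restricted candidate list. The sketch below is my own illustration of that pattern, not the thesis's heuristic; the job.priority attribute, the helper functions and the rcl_size parameter are hypothetical names introduced for the example.

```python
# Greedy randomized allocation sketch (illustrative; the job/technician
# structures, can_do, assignment_cost and rcl_size are hypothetical).
import random

def greedy_randomized_assign(jobs, technicians, can_do, assignment_cost, rcl_size=3):
    schedule = {}
    # Treat higher-priority jobs first (priority is an assumed attribute).
    for job in sorted(jobs, key=lambda j: j.priority, reverse=True):
        candidates = [t for t in technicians if can_do(t, job)]
        if not candidates:
            continue                                  # left unassigned (e.g. outsourced)
        candidates.sort(key=lambda t: assignment_cost(t, job))
        rcl = candidates[:rcl_size]                   # restricted candidate list
        schedule[job] = random.choice(rcl)            # randomise among the best options
    return schedule
```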

    Preventing premature convergence and proving the optimality in evolutionary algorithms

    http://ea2013.inria.fr//proceedings.pdf
    Evolutionary Algorithms (EA) usually carry out an efficient exploration of the search space, but often get trapped in local minima and do not prove the optimality of the solution. Interval-based techniques, on the other hand, yield a numerical proof of optimality of the solution. However, they may fail to converge within a reasonable time due to their inability to quickly compute a good approximation of the global minimum and their exponential complexity. The contribution of this paper is a hybrid algorithm called Charibde in which a particular EA, Differential Evolution, cooperates with a Branch and Bound algorithm endowed with interval propagation techniques. It prevents premature convergence toward local optima and outperforms both deterministic and stochastic existing approaches. We demonstrate its efficiency on a benchmark of highly multimodal problems, for which we provide previously unknown global minima and certification of optimality.
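
    The evolutionary half of such a hybrid is standard Differential Evolution; the sketch below shows one generation of the common DE/rand/1/bin scheme. It is a generic illustration, not the Charibde implementation, and the interval Branch and Bound side is not shown.

```python
# One generation of DE/rand/1/bin (generic sketch, not Charibde): each target
# vector x is challenged by a trial vector built from three other individuals.
import random

def de_step(population, fitness, f=0.7, cr=0.9):
    dim = len(population[0])
    new_pop = []
    for i, x in enumerate(population):
        a, b, c = random.sample([p for j, p in enumerate(population) if j != i], 3)
        jrand = random.randrange(dim)                 # force at least one mutated gene
        trial = [a[k] + f * (b[k] - c[k]) if (random.random() < cr or k == jrand) else x[k]
                 for k in range(dim)]
        new_pop.append(trial if fitness(trial) <= fitness(x) else x)
    return new_pop
```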

    Technology for Low Resolution Space Based RSO Detection and Characterisation

    Space Situational Awareness (SSA) refers to all activities to detect, identify and track objects in Earth orbit. SSA is critical to all current and future space activities and protects space assets by providing access control, conjunction warnings, and monitoring of the status of active satellites. Current SSA methods and infrastructure are not sufficient to account for the proliferation of space debris. In response to the need for better SSA, there have been many different areas of research looking to improve it, most of them requiring dedicated ground-based or space-based infrastructure. In this thesis, a novel approach for the characterisation of RSOs (Resident Space Objects) from passive low-resolution space-based sensors is presented, along with all the background work performed to enable this novel method. Low-resolution space-based sensors are common on current satellites; with many of these sensors already in space, using them passively to detect RSOs can greatly augment SSA without expensive infrastructure or long lead times. One of the largest hurdles to overcome in this area is the lack of publicly available labelled data with which to test and confirm results. To overcome this hurdle, a simulation software package, ORBITALS, was created. To verify and validate the ORBITALS simulator, it was compared with images from the Fast Auroral Imager, one of the only publicly available sources of low-resolution space-based images with auxiliary data. During the development of the ORBITALS simulator it was found that generating these simulated images is computationally intensive when propagating the entire space catalog. To overcome this, the propagation method currently used, the Simplified General Perturbations model 4 (SGP4), was upgraded to run in parallel, reducing the computational time required to propagate entire catalogs of RSOs. From the results it was found that the standard facet model with particle swarm optimisation performed best, estimating an RSO's attitude with 0.66 degrees RMSE accuracy across a sequence and ~1% MAPE accuracy for the optical properties. This accomplishes the thesis's goal of demonstrating the feasibility of low-resolution passive RSO characterisation from space-based platforms in a simulated environment.
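
    The attitude and optical-property estimation mentioned above relies on particle swarm optimisation; the sketch below shows the standard PSO velocity and position update that such an estimator iterates. It is a generic illustration, not the thesis's estimator or its cost function.

```python
# Standard PSO update (generic sketch, not the thesis's attitude estimator):
# pbest holds each particle's best position, gbest the swarm's best position.
import random

def pso_update(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
    return positions, velocities
```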