
    GPU-Based Parallel Particle Swarm Optimization Methods for Graph Drawing

    Particle Swarm Optimization (PSO) is a population-based stochastic search technique for solving optimization problems that has proven effective in a wide range of applications. However, its computational efficiency on large-scale problems is still unsatisfactory. A graph drawing is a pictorial representation of the vertices and edges of a graph. Two PSO heuristic procedures, one serial and the other parallel, are developed for undirected graph drawing. Each particle corresponds to a different layout of the graph, and the particle fitness is defined based on the concept of energy in the force-directed method. The serial PSO procedure is executed on a CPU and the parallel PSO procedure on a GPU; the two procedures use different data structures and strategies. The performance of the proposed methods is evaluated on several different graphs. The experimental results show that both PSO procedures are as effective as the force-directed method, and that the parallel procedure becomes more advantageous than the serial one as the graphs grow larger.
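
    As a rough illustration of the setup above, the following sketch encodes each particle as a full set of 2-D vertex positions and scores it with a simple force-directed energy (spring terms on the edges plus pairwise repulsion). The energy formula, parameter values, and function names are illustrative assumptions, not the paper's exact definitions; the GPU variant would additionally parallelize the per-particle work.

        import numpy as np

        def layout_energy(pos, edges, k_spring=1.0, k_repel=1.0):
            """Force-directed energy of a layout: spring terms on edges plus pairwise repulsion."""
            energy = 0.0
            for u, v in edges:                                   # attractive (spring) terms
                d = np.linalg.norm(pos[u] - pos[v])
                energy += k_spring * d ** 2
            n = len(pos)
            for i in range(n):                                   # repulsive terms between all vertex pairs
                for j in range(i + 1, n):
                    d = np.linalg.norm(pos[i] - pos[j]) + 1e-9
                    energy += k_repel / d
            return energy

        def pso_layout(n_vertices, edges, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            """Serial PSO over layouts: each particle is a full set of 2-D vertex positions."""
            rng = np.random.default_rng(0)
            x = rng.uniform(-1.0, 1.0, size=(n_particles, n_vertices, 2))   # particles = layouts
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_f = np.array([layout_energy(p, edges) for p in x])
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = x + v
                f = np.array([layout_energy(p, edges) for p in x])
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], f[improved]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest

        # example: lay out a 4-cycle
        positions = pso_layout(4, [(0, 1), (1, 2), (2, 3), (3, 0)])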

    Jaya optimization algorithm with GPU acceleration

    Optimization methods search for an optimal value of a given function over a constrained or unconstrained domain, and are useful in a wide range of scientific and engineering applications. Recently, a new optimization method called Jaya has generated growing interest because of its simplicity and efficiency. In this paper, we present the GPU-based parallel Jaya algorithms we developed and analyze both their parallel performance and their optimization performance on a well-known benchmark of unconstrained functions. The results indicate that the parallel Jaya implementation achieves significant speed-ups for all benchmark functions, of up to 190×, without affecting optimization performance. This research was supported by the Spanish Ministry of Economy and Competitiveness under Grant TIN2015-66972-C5-4-R, co-financed by FEDER funds (MINECO/FEDER/UE).
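
    The Jaya update rule itself is compact enough to show directly: each candidate moves toward the current best solution and away from the current worst, with no algorithm-specific control parameters. The sketch below is a serial NumPy illustration with assumed population size and bound handling; in a GPU implementation such as the one described above, these per-candidate, per-dimension updates are what get parallelized.

        import numpy as np

        def jaya(f, lo, hi, pop_size=50, iters=1000, seed=0):
            """Jaya: move each candidate toward the current best and away from the current worst."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            x = rng.uniform(lo, hi, size=(pop_size, lo.size))
            fx = np.apply_along_axis(f, 1, x)
            for _ in range(iters):
                best, worst = x[fx.argmin()], x[fx.argmax()]
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                # Jaya update rule: attract toward best, repel from worst (no tuning parameters)
                cand = np.clip(x + r1 * (best - np.abs(x)) - r2 * (worst - np.abs(x)), lo, hi)
                fc = np.apply_along_axis(f, 1, cand)
                better = fc < fx                                 # greedy replacement
                x[better], fx[better] = cand[better], fc[better]
            return x[fx.argmin()], fx.min()

        # example: minimize the sphere function in 10 dimensions
        best_x, best_f = jaya(lambda v: np.sum(v ** 2), np.full(10, -5.0), np.full(10, 5.0))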

    Optimization of multi-objective land use model with genetic algorithm

    A primary task of the city planner is to locate integrated land-use types effectively with respect to multiple objectives. The Multi-Objective Land Use Planning Model developed for this purpose aims to maximize land value and minimize transportation. The genetic algorithm developed to find the optimal layout under this model is explained, its success and performance are tested with artificial data, and its usability on real problems is examined. The results show that layout plans very close to the maximum efficiency value can be found within one day for cities with populations of up to 1,000,000, within one week for cities of up to 5,000,000, and within about 1.5 months for cities of close to 16,000,000. The deficiencies of the method are identified from these results and suggestions for its improvement are given. The problem addressed here is one that most city planners must solve, and the developed application has been made available to other experts, which makes this work distinctive in that it allows planning experts who cannot develop such methods themselves to experiment with it.
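
    As a sketch of how such a model can be encoded for a genetic algorithm, the example below assigns one land-use type to each parcel and scores a layout with a weighted sum of land value and a transportation penalty between interacting uses. The problem data, fitness weights, and genetic operators are hypothetical placeholders rather than the paper's actual model or parameters.

        import numpy as np

        # Hypothetical problem data: n_parcels cells, each assigned one of n_types land-use types
        rng = np.random.default_rng(1)
        n_parcels, n_types = 100, 4
        value = rng.random((n_parcels, n_types))        # land value of putting type t on parcel p
        dist = rng.random((n_parcels, n_parcels))       # travel distance between parcels
        interaction = rng.random((n_types, n_types))    # trip demand between land-use types

        def fitness(layout, w_value=1.0, w_transport=1.0):
            """Weighted sum: reward land value, penalize transport between interacting uses."""
            land_value = value[np.arange(n_parcels), layout].sum()
            transport = (interaction[layout][:, layout] * dist).sum()
            return w_value * land_value - w_transport * transport

        def genetic_algorithm(pop_size=60, gens=200, p_mut=0.02):
            pop = rng.integers(0, n_types, size=(pop_size, n_parcels))   # one type per parcel
            for _ in range(gens):
                fit = np.array([fitness(ind) for ind in pop])
                # binary tournament selection
                idx = rng.integers(0, pop_size, size=(pop_size, 2))
                parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
                # one-point crossover on consecutive parent pairs
                children = parents.copy()
                for i, cut in enumerate(rng.integers(1, n_parcels, size=pop_size // 2)):
                    children[2 * i, cut:] = parents[2 * i + 1, cut:]
                    children[2 * i + 1, cut:] = parents[2 * i, cut:]
                # mutation: reassign random parcels to random types
                mask = rng.random(children.shape) < p_mut
                children[mask] = rng.integers(0, n_types, size=int(mask.sum()))
                pop = children
            fit = np.array([fitness(ind) for ind in pop])
            return pop[fit.argmax()], fit.max()

        best_layout, best_score = genetic_algorithm()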

    ARES: Adaptive receding-horizon synthesis of optimal plans

    We introduce ARES, an efficient approximation algorithm for generating optimal plans (action sequences) that take an initial state of a Markov Decision Process (MDP) to a state whose cost is below a specified (convergence) threshold. ARES uses Particle Swarm Optimization, with adaptive sizing for both the receding horizon and the particle swarm. Inspired by Importance Splitting, the length of the horizon and the number of particles are chosen such that at least one particle reaches a next-level state, that is, a state where the cost decreases by a required delta from the previous-level state. The level relation on states and the plans constructed by ARES implicitly define a Lyapunov function and an optimal policy, respectively, both of which could be explicitly generated by applying ARES to all states of the MDP, up to some topological equivalence relation. We also assess the effectiveness of ARES by statistically evaluating its rate of success in generating optimal plans. The ARES algorithm resulted from our desire to clarify whether flying in V-formation is a flocking policy that optimizes energy conservation, clear view, and velocity alignment; that is, we were interested in whether one could find optimal plans that bring a flock from an arbitrary initial state to a state exhibiting a single connected V-formation. For flocks of 7 birds, ARES is able to generate a plan that leads to a V-formation in 95% of the 8,000 random initial configurations, within 63 s on average. ARES can also be easily customized into a model-predictive controller (MPC) with an adaptive receding horizon and statistical guarantees of convergence. To the best of our knowledge, our adaptive-sizing approach is the first to provide convergence guarantees in receding-horizon techniques.
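
    A compressed sketch of the outer loop conveys the adaptive-sizing idea: if no candidate plan reaches the next cost level, both the horizon length and the number of candidates are enlarged before retrying. The step and cost callables, the level fraction phi, and the budget limits below are hypothetical, and the inner optimizer is plain random sampling standing in for the particle swarm that ARES actually uses.

        import numpy as np

        def ares_plan(step, cost, x0, threshold, phi=0.1, h_max=5, m_max=640, seed=0):
            """Outer loop only: grow the horizon h and the number of candidate plans m until
            some candidate reaches the next cost level, commit that segment, and repeat."""
            rng = np.random.default_rng(seed)
            state, plan, level = np.asarray(x0, float), [], cost(x0)
            while level > threshold:
                target = level - phi * (level - threshold)       # cost defining the next level
                h, m, found = 1, 20, None
                while found is None:
                    for _ in range(m):                           # sample m action sequences of length h
                        actions = rng.uniform(-1.0, 1.0, size=(h, state.size))
                        s = state
                        for a in actions:
                            s = step(s, a)
                        if cost(s) <= target:
                            found = (actions, s)
                            break
                    if found is None:
                        if h == h_max and m == m_max:
                            return None                          # next level unreachable within budget
                        h, m = min(h + 1, h_max), min(2 * m, m_max)   # adapt horizon and swarm size
                actions, state = found
                plan.extend(actions)                             # commit the successful plan segment
                level = cost(state)
            return plan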

    Heterogeneous architecture to process swarm optimization algorithms

    In recent years, parallel processing has become part of the architecture of personal computers through co-processing units such as graphics processing units, resulting in a heterogeneous architecture. This paper presents the implementation of swarm algorithms on this architecture to solve function optimization problems, exploiting their inherently parallel structure and distributed control properties. In these algorithms, both the individuals of the population and the dimensions of the problem are parallelized thanks to the fine granularity of the processing system, which also provides low communication latency between individuals through the embedded processing. To evaluate the potential of swarm algorithms on the heterogeneous platform, two of them are implemented: the particle swarm optimization algorithm and the bacterial foraging optimization algorithm. Speed-up is used as the metric to contrast a typical sequential processing platform with the heterogeneous platform built around an NVIDIA GeForce GTX480 GPU; the particle swarm algorithm achieves a speed-up of up to 36.82x and the bacterial foraging algorithm a speed-up of up to 9.26x. Finally, the effect of increasing the population size is evaluated, showing that both the dispersion and the quality of the solutions decrease despite the high acceleration, since the initial distribution of the individuals can converge to a locally optimal solution.
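
    The parallelization pattern described above, one thread per individual and per problem dimension for the element-wise updates, can be seen in a vectorized particle swarm sketch in which every entry of the velocity and position updates is independent. The sketch below runs on the CPU with NumPy and uses illustrative parameter values; the comments mark where the fine-grained GPU mapping applies.

        import numpy as np   # a GPU array library (e.g. CuPy) could run the same elementwise updates on the device

        def pso(f, lo, hi, n_particles=256, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Vectorized PSO: every (particle, dimension) entry of the update is independent,
            which is the fine granularity that lets a GPU assign one thread per entry."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            x = rng.uniform(lo, hi, size=(n_particles, lo.size))
            v = np.zeros_like(x)
            fx = np.apply_along_axis(f, 1, x)
            pbest, pbest_f = x.copy(), fx.copy()
            g = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # elementwise: one thread per entry
                x = np.clip(x + v, lo, hi)
                fx = np.apply_along_axis(f, 1, x)                       # fitness: one thread per particle
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]
                g = pbest[pbest_f.argmin()].copy()
            return g, pbest_f.min()

        # example: 30-dimensional sphere function
        g_best, g_best_f = pso(lambda v: np.sum(v * v), np.full(30, -5.12), np.full(30, 5.12))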