
    Fast Ant Colony Optimization on Runtime Reconfigurable Processor Arrays

    Ant Colony Optimization (ACO) is a metaheuristic used to solve combinatorial optimization problems. As with other metaheuristics, such as evolutionary methods, ACO algorithms often show good optimization behavior but are slow compared to classical heuristics. Hence, there is a need for fast implementations of ACO algorithms. In order to allow a fast parallel implementation, we propose several changes to a standard form of ACO algorithm. The main new features are the non-generational approach and the use of a threshold-based decision function for the ants. We show that the new algorithm has good optimization behavior and also allows a fast implementation on reconfigurable processor arrays. This is the first implementation of the ACO approach on a reconfigurable architecture. The running time of the algorithm is quasi-linear in the problem size n and the number of ants on a reconfigurable mesh with n² processors, each provided with only a constant number of memory words.
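
    The two key ideas, the threshold decision rule and the non-generational pheromone update, can be illustrated in a short sketch. The Python below is an illustrative reconstruction, not the authors' hardware implementation: the relative threshold theta, the per-ant update schedule, and all parameter values are assumptions.

```python
import random

def threshold_decide(pheromone, current, candidates, theta):
    """Pick the next city by a simple threshold test instead of
    roulette-wheel sampling: accept any candidate whose pheromone
    reaches a fraction theta of the best pheromone on offer."""
    order = candidates[:]
    random.shuffle(order)  # avoid a fixed bias toward low indices
    row_max = max(pheromone[current][c] for c in candidates)
    for c in order:
        if pheromone[current][c] >= theta * row_max:
            return c
    return order[0]  # unreachable for theta <= 1, kept as a guard

def tour_length(dist, tour):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def aco_threshold(dist, n_ants=10, iterations=100, rho=0.1, theta=0.8):
    """Non-generational ACO sketch: pheromone is evaporated and
    deposited after every single ant, not once per generation."""
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]
    best, best_len = None, float("inf")
    for _ in range(iterations):
        for _ in range(n_ants):
            # construct one tour with the threshold decision rule
            start = random.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                nxt = threshold_decide(pheromone, tour[-1],
                                       list(unvisited), theta)
                tour.append(nxt)
                unvisited.remove(nxt)
            length = tour_length(dist, tour)
            if length < best_len:
                best, best_len = tour[:], length
            # immediate (non-generational) pheromone update
            deposit = 1.0 / length
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                pheromone[a][b] = (1 - rho) * pheromone[a][b] + rho * deposit
    return best, best_len
```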

    Contract-Based General-Purpose GPU Programming

    Using GPUs as general-purpose processors has revolutionized parallel computing by offering, for a large and growing set of algorithms, massive data-parallelization on desktop machines. An obstacle to widespread adoption, however, is the difficulty of programming them and the low-level control of the hardware required to achieve good performance. This paper suggests a programming library, SafeGPU, that aims to strike a balance between programmer productivity and performance by making GPU data-parallel operations accessible from within a classical object-oriented programming language. The solution is integrated with the design-by-contract approach, which increases confidence in functional program correctness by embedding executable program specifications into the program text. We show that our library leads to modular and maintainable code that is accessible to GPGPU non-experts, while providing performance comparable to hand-written CUDA code. Furthermore, runtime contract checking turns out to be feasible, as the contracts can be executed on the GPU.
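
    The abstract does not spell out SafeGPU's API, so the following Python sketch only illustrates the general pattern it describes: executable pre- and postconditions wrapped around data-parallel collection operations. The names contract and ParallelVector are hypothetical, not SafeGPU's interface, and the elementwise contract predicates stand in for checks that such a library could itself run on the GPU.

```python
from functools import wraps

def contract(pre=None, post=None):
    """Design-by-contract decorator (hypothetical, for illustration):
    check a precondition before the call and a postcondition on the
    result afterwards, in the spirit of executable specifications."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(self, *args, **kwargs):
            if pre is not None:
                assert pre(self, *args, **kwargs), \
                    f"precondition of {fn.__name__} violated"
            result = fn(self, *args, **kwargs)
            if post is not None:
                assert post(self, result, *args, **kwargs), \
                    f"postcondition of {fn.__name__} violated"
            return result
        return wrapper
    return decorate

class ParallelVector:
    """Toy stand-in for a GPU-backed collection. The contracts below are
    elementwise predicates, which is why a library of this kind can
    execute them as data-parallel operations on the device."""

    def __init__(self, data):
        self.data = list(data)

    @contract(pre=lambda self, other: len(self.data) == len(other.data),
              post=lambda self, result, other: len(result.data) == len(self.data))
    def add(self, other):
        # elementwise addition; on a GPU this would be one parallel kernel
        return ParallelVector(x + y for x, y in zip(self.data, other.data))

    @contract(pre=lambda self: all(x != 0 for x in self.data))
    def reciprocal(self):
        # elementwise reciprocal, guarded by the all-nonzero precondition
        return ParallelVector(1.0 / x for x in self.data)

# usage: a length mismatch fails fast at the precondition check
v = ParallelVector([1.0, 2.0, 3.0]).add(ParallelVector([4.0, 5.0, 6.0]))
```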

    PARMODS: A Parallel Framework for MODS Metaheuristics

    In this paper, we propose PARMODS, a novel framework for the parallel solution of combinatorial problems based on MODS theory. The framework makes use of metaheuristics based on Deterministic Swapping (MODS) theory, which represent the feasible solution space of any combinatorial problem as a deterministic finite automaton. Among these methods are the Metaheuristic Of Deterministic Swapping (MODS), Simulated Annealing Deterministic Swapping (SAMODS), Simulated Annealing Genetic Swapping (SAGAMODS), and Evolutionary Deterministic Swapping (EMODS). These approaches have been applied in contexts such as database optimization, operations research [1–3, 8], and multi-objective optimization. The main idea of the framework is to exploit parallel computation in order to obtain a general view of the feasible solution space of any combinatorial optimization problem: all the MODS methods take part in a single, unified optimization process, and each MODS instance explores a different region of the solution space in parallel. This allows us to explore distant regions of the feasible solution space that could not be reached by classical (sequential) MODS implementations. Experiments are performed on well-known TSP instances. Partial results show that PARMODS provides better solutions than sequential MODS-based implementations.
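
    A minimal sketch of the orchestration idea, assuming a generic 2-opt local search as a placeholder for the actual MODS, SAMODS, SAGAMODS, and EMODS variants: each worker starts from a different random tour (a different region of the solution space), and the framework reduces to the best result found overall.

```python
import random
from multiprocessing import Pool

def tour_length(dist, tour):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def mods_variant(args):
    """Placeholder for one MODS-family metaheuristic (MODS, SAMODS,
    SAGAMODS or EMODS). A distinct seed gives each worker a distinct
    starting tour, i.e. a different region of the solution space."""
    name, seed, dist, iters = args
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best, best_len = tour[:], tour_length(dist, tour)
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt move
        cand_len = tour_length(dist, cand)
        if cand_len < best_len:
            tour, best, best_len = cand, cand[:], cand_len
    return name, best, best_len

def parmods(dist, iters=5000):
    """Run all variant workers in parallel, keep the best tour overall."""
    variants = ["MODS", "SAMODS", "SAGAMODS", "EMODS"]
    jobs = [(name, seed, dist, iters) for seed, name in enumerate(variants)]
    with Pool(len(jobs)) as pool:
        results = pool.map(mods_variant, jobs)
    return min(results, key=lambda r: r[2])
```

    On platforms that spawn rather than fork worker processes, parmods should be called from inside an if __name__ == "__main__" guard.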