5 research outputs found

    Étude de stratégies parallèles de coopération avec POSL (Study of Parallel Cooperation Strategies with POSL)

    The multi-core technology and massively parallel architectures are nowadays more accessible to a broad public through hardware like the Xeon Phi or GPU cards. This architecture strategy has been commonly adopted by processor manufacturers to keep up with Moore's law. However, these new architectures imply new ways to design and implement algorithms in order to exploit their full potential. This is in particular true for constraint-based solvers dealing with combinatorial optimization problems. In this paper we use the Parallel-Oriented Solver Language (POSL), a framework for building interconnected metaheuristic-based solvers working in parallel by means of communication operators, to solve instances of the Social Golfers and Costas Array problems and measure its performance. We test several solving strategies, thanks to the operator-based, parallel-oriented language that POSL provides.
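    The abstract describes composing interconnected solvers out of interchangeable operators. Since POSL's actual syntax is not reproduced here, the minimal Python sketch below only illustrates that composition idea; all names (random_restart, swap_neighborhood, best_of, make_solver) are hypothetical and are not POSL constructs.

        # Illustrative sketch of operator-based solver composition; the operator
        # names are hypothetical and do NOT reproduce POSL's syntax.
        import random

        def random_restart(n):
            """Operator: produce a fresh random permutation of 0..n-1."""
            conf = list(range(n))
            random.shuffle(conf)
            return conf

        def swap_neighborhood(conf):
            """Operator: all configurations reachable by swapping two positions."""
            n = len(conf)
            for i in range(n):
                for j in range(i + 1, n):
                    nb = conf[:]
                    nb[i], nb[j] = nb[j], nb[i]
                    yield nb

        def best_of(neighbors, cost):
            """Operator: greedy selection of the cheapest neighbor."""
            return min(neighbors, key=cost)

        def make_solver(neighborhood, select, max_iters=1000):
            """Compose operators into a solver; swapping operators changes the strategy."""
            def solve(cost, n):
                current = random_restart(n)
                for _ in range(max_iters):
                    if cost(current) == 0:
                        break
                    candidate = select(neighborhood(current), cost)
                    if cost(candidate) <= cost(current):
                        current = candidate
                    else:
                        current = random_restart(n)  # escape local minima by restarting
                return current
            return solve

    For instance, make_solver(swap_neighborhood, best_of) yields a greedy strategy, and swapping best_of for a first-improvement or randomized selection operator yields another; this kind of recombination (plus communication operators between parallel solvers, not shown here) is what the abstract evaluates.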

    A Massively Parallel Combinatorial Optimization Algorithm for the Costas Array Problem

    For a few decades the family of Local Search methods and metaheuristics has been quite successful in solving large real-life problems. Applying Local Search to Constraint Satisfaction Problems (CSPs) has also been attracting some interest, as it can tackle CSP instances far beyond the reach of classical propagation-based solvers. In this research we address the issue of parallelizing constraint solvers for massively parallel architectures, with the aim of tackling platforms with several thousand CPUs. A design principle implied by this goal is to abandon the classical model of shared data structures developed for shared-memory architectures, or of tightly controlled master-slave communication in cluster-based architectures, and to first consider either purely independent parallelism or very limited communication between parallel processes, and then to see whether runtime performance can be improved by some form of communication.
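    The design principle stated above, independent parallel walks with no shared state, can be pictured with a short Python sketch. This is not the authors' code; min-conflicts on n-queens simply stands in for the constraint-based local search of the paper, and each worker runs its own walk from a different random seed with no communication.

        # Minimal sketch of the independent multi-walk scheme (assumed stand-in
        # problem: n-queens solved by min-conflicts; not the paper's solver).
        import random
        from multiprocessing import Pool

        def conflicts(board, col, row):
            """Number of queens attacking square (col, row)."""
            return sum(1 for c, r in enumerate(board)
                       if c != col and (r == row or abs(r - row) == abs(c - col)))

        def min_conflicts(args):
            n, seed, max_steps = args
            rng = random.Random(seed)
            board = [rng.randrange(n) for _ in range(n)]  # board[col] = row of queen
            for _ in range(max_steps):
                bad = [c for c in range(n) if conflicts(board, c, board[c]) > 0]
                if not bad:
                    return seed, board               # this walk found a solution
                col = rng.choice(bad)
                board[col] = min(range(n), key=lambda r: conflicts(board, col, r))
            return seed, None                        # this walk gave up

        if __name__ == "__main__":
            n, walks = 50, 8
            with Pool(walks) as pool:
                # Independent walks: no shared state; the first solution returned wins.
                for seed, sol in pool.imap_unordered(
                        min_conflicts, [(n, s, 10_000) for s in range(walks)]):
                    if sol is not None:
                        print(f"walk {seed} found a solution")
                        pool.terminate()
                        break

    Because the walks never synchronize, this scheme scales trivially with the number of processes, which is consistent with the near-linear speedups reported in the next abstract.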

    Large-Scale Parallelism for Constraint-Based Local Search: The Costas Array Case Study

    We present the parallel implementation of a constraint-based Local Search algorithm and investigate its performance on several hardware platforms with several hundreds or thousands of cores. We chose as the basis for these experiments the Adaptive Search method, an efficient sequential Local Search method for Constraint Satisfaction Problems (CSPs). After preliminary experiments on some CSPLib benchmarks, we detail the modeling and solving of a hard combinatorial problem related to radar and sonar applications: the Costas Array Problem. Performance evaluation on some classical CSP benchmarks shows that speedups are very good for a few tens of cores, and good up to a few hundreds of cores. For a hard combinatorial search problem such as the Costas Array Problem, the sequential version already outperforms previous Local Search implementations, while the parallel version shows nearly linear speedups up to 8,192 cores. The proposed parallel scheme is simple and based on independent multi-walks with no communication between processes during search. We also investigated a cooperative multi-walk scheme in which processes share simple information, but this scheme did not seem to improve performance.
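    Both of these abstracts use the Costas Array Problem as their hard benchmark. As a concrete illustration (an assumption-level sketch, not the Adaptive Search model itself), a local-search cost for it can count repeated values in the rows of the difference triangle of a permutation; the permutation is a Costas array exactly when that count is zero.

        # Sketch of a cost function for the Costas Array Problem; this is an
        # illustration, not the Adaptive Search implementation.
        from collections import Counter
        from itertools import permutations

        def costas_cost(p):
            """Number of repeated differences over all rows of the difference triangle."""
            n = len(p)
            cost = 0
            for d in range(1, n):                    # row d of the difference triangle
                diffs = Counter(p[i + d] - p[i] for i in range(n - d))
                cost += sum(k - 1 for k in diffs.values() if k > 1)
            return cost

        def is_costas(p):
            return costas_cost(p) == 0

        if __name__ == "__main__":
            # Brute-force check on a tiny instance just to exercise the cost function.
            sols = [p for p in permutations(range(1, 6)) if is_costas(p)]
            print(f"Costas arrays of order 5: {len(sols)}")  # 40 for order 5

    A local-search solver such as the ones above would swap two entries of the permutation and keep the move when costas_cost decreases, restarting or perturbing when it stalls.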