28 research outputs found

    Automatic Multi-GPU Code Generation applied to Simulation of Electrical Machines

    Electrical and electronic engineering has long used parallel programming to solve its large-scale, complex problems for performance reasons. However, because parallel programming requires a non-trivial distribution of tasks and data, developers find it hard to implement their applications effectively. To reduce design complexity, we propose an approach that generates code for hybrid architectures (e.g. CPU + GPU) using OpenCL, an open standard for parallel programming of heterogeneous systems. The approach is based on Model Driven Engineering (MDE) and the MARTE profile, a standard proposed by the Object Management Group (OMG). The aim is to give non-specialists in parallel programming the means to implement their applications. Moreover, thanks to model reuse, functionalities or the target architecture can be added or changed. Consequently, this approach helps industry meet time-to-market constraints, and experimental tests confirm performance improvements in multi-GPU environments. Comment: Compumag 201
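    As an illustration of the kind of hybrid host/device code such a generator targets, here is a minimal hand-written OpenCL vector-addition sketch using pyopencl; the kernel, buffer names and problem size are purely illustrative assumptions and are not output of the approach described above.

```python
# Minimal pyopencl sketch of hybrid CPU+GPU code (illustrative only).
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()          # pick an available OpenCL device
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# Device kernel: one work-item per vector element.
prg = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c) {
    int gid = get_global_id(0);
    c[gid] = a[gid] + b[gid];
}
""").build()

prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)   # launch on the device

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)                 # copy back to the host
print("max error:", np.max(np.abs(result - (a + b))))
```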

    A Modeling Approach based on UML/MARTE for GPU Architecture

    Nowadays, High Performance Computing is part of the embedded-systems landscape. Graphics Processing Units (GPUs) are increasingly used to accelerate a large share of algorithms and applications. Over the past years, however, few efforts have been made to describe abstractions of applications in relation to their target architectures. Thus, when developers need to map applications onto GPUs, for example, they struggle and fall back on the APIs of these architectures. This paper presents a metamodel extension of the MARTE profile and a model for GPU architectures. The main goal is to specify task and data allocation in the memory hierarchy of these architectures. The results show that this approach will help generate code for GPUs through model transformations using Model Driven Engineering (MDE). Comment: Symposium en Architectures nouvelles de machines (SympA'14) (2011

    Task-based Conjugate-Gradient for multi-GPUs platforms

    Whereas most of today's parallel High Performance Computing (HPC) software is written as highly tuned code that takes care of low-level details, the advent of the manycore era forces the community to consider modular programming paradigms and to delegate part of the work to third-party software. The latter approach has been shown to be very productive and efficient for regular algorithms, such as dense linear algebra solvers. In this paper we show that such a model can be applied efficiently to a much more irregular and less compute-intensive algorithm. We illustrate our discussion with the standard unpreconditioned Conjugate Gradient (CG) method, which we carefully express as a task-based algorithm. We use the StarPU runtime system to assess the efficiency of the approach on a computational platform consisting of three NVIDIA Fermi GPUs. We show that an almost optimal speed-up (up to 2.89, relative to a single-GPU execution) can be reached when processing large matrices, and that the performance is portable when the low-level memory transfer mechanism is changed.
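    For reference, a minimal NumPy sketch of the unpreconditioned CG iteration discussed above is given below; the comments indicate which operations become tasks in a task-based formulation. The StarPU task submission and the multi-GPU data management are not shown, and the test problem is purely illustrative.

```python
# Unpreconditioned Conjugate Gradient, written to expose its task structure.
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    # Each operation below (matrix-vector product, dot product, axpy) is one
    # task type in a task-based formulation; a runtime system such as StarPU
    # would schedule these tasks on CPUs/GPUs from their data dependencies.
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual (mat-vec + axpy)
    p = r.copy()
    rs_old = r @ r                     # dot product (reduction)
    for _ in range(max_iter):
        Ap = A @ p                     # matrix-vector product: dominant task
        alpha = rs_old / (p @ Ap)      # dot product + scalar update
        x += alpha * p                 # axpy
        r -= alpha * Ap                # axpy
        rs_new = r @ r                 # dot product
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # scal + axpy
        rs_old = rs_new
    return x

# Small symmetric positive definite test problem (sizes are illustrative;
# the paper targets large matrices distributed over three GPUs).
n = 500
rng = np.random.default_rng(0)
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)
b = np.ones(n)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```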

    Book of Abstracts of the Sixth SIAM Workshop on Combinatorial Scientific Computing

    Book of Abstracts of CSC14, edited by Bora Uçar. The Sixth SIAM Workshop on Combinatorial Scientific Computing, CSC14, was organized at the Ecole Normale Supérieure de Lyon, France, from 21 to 23 July 2014. This two-and-a-half-day event marked the sixth in a series that started ten years ago in San Francisco, USA. The CSC14 workshop's focus was on combinatorial mathematics and algorithms in high-performance computing, broadly interpreted. The workshop featured three invited talks, 27 contributed talks and eight poster presentations. All three invited talks focused on two particularly interesting fields of research: randomized algorithms for numerical linear algebra and network analysis. The contributed talks and posters targeted modeling, analysis, bisection, clustering, and partitioning of graphs, applied in the context of networks, sparse matrix factorizations, iterative solvers, fast multipole methods, automatic differentiation, high-performance computing, and linear programming. The workshop was held at the premises of the LIP laboratory of ENS Lyon and was generously supported by the LABEX MILYON (ANR-10-LABX-0070, Université de Lyon, within the program ''Investissements d'Avenir'' ANR-11-IDEX-0007 operated by the French National Research Agency) and by SIAM.

    On the design of hybrid linear solvers for modern parallel architectures

    In this thesis, our focus is on numerical linear algebra, more precisely on the solution of large sparse systems of linear equations. We concentrate on designing efficient parallel implementations of MaPHyS, a hybrid linear solver based on domain decomposition techniques. First we investigate the MPI+threads approach. In MaPHyS, the first level of parallelism arises from the independent treatment of the various subdomains. The second level is exploited through the multi-threaded dense and sparse linear algebra kernels used at the subdomain level. Such a hybrid implementation of a hybrid linear solver suitably matches the hierarchical structure of modern supercomputers and enables a trade-off between the numerical and parallel performance of the solver. We demonstrate the flexibility of our parallel implementation on a set of test examples. Secondly, we follow a more disruptive approach in which the algorithms are described as sets of tasks with data inter-dependencies, leading to a directed acyclic graph (DAG) representation. The tasks are handled by a runtime system. We illustrate how a first task-based parallel implementation can be obtained by composing task-based parallel libraries within MPI processes, through a preliminary prototype implementation of our hybrid solver. We then show how a task-based approach that fully abstracts the hardware architecture can successfully exploit a wide range of modern hardware architectures. We implemented a fully task-based Conjugate Gradient algorithm and showed that the proposed approach achieves very high performance on multi-GPU, multicore and heterogeneous architectures.
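    A minimal sketch of the MPI+threads pattern described above, assuming mpi4py and one MPI process per subdomain; the local problems, their sizes and the interface reduction are placeholders and do not reflect the actual MaPHyS interfaces or its Schur-complement machinery.

```python
# Two-level parallelism sketch: MPI across subdomains, threads within each.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # one MPI process per subdomain (1st level)

# Each subdomain assembles and factorizes its local problem independently.
# The dense kernels below run on a multithreaded BLAS/LAPACK (2nd level).
n_local = 2000
rng = np.random.default_rng(rank)
K_local = rng.standard_normal((n_local, n_local))
K_local = K_local @ K_local.T + n_local * np.eye(n_local)   # SPD stand-in
f_local = np.ones(n_local)
u_local = np.linalg.solve(K_local, f_local)  # threaded dense local solve

# Interface quantities are combined across subdomains with MPI reductions,
# standing in for the interface (Schur-complement) iterations of the solver.
interface_norm = comm.allreduce(float(u_local[:10] @ u_local[:10]), op=MPI.SUM)
if rank == 0:
    print("combined interface contribution:", interface_norm)
```

    Run, for example, with `mpirun -n 4 python sketch.py`, so that each of the four MPI processes plays the role of one subdomain.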

    Hybrid 3D simulation methods for capturing damage processes in multiphase composite materials

    Modern digital material approaches for the visualization and simulation of heterogeneous materials make it possible to investigate the behavior of complex multiphase materials, with their physically nonlinear material response, at various scales. However, these computational techniques require extensive hardware resources in terms of computing power and main memory to solve numerically large-scale discretized models in 3D. Because the number of degrees of freedom may quickly grow into the two-digit million range, the limited hardware resources have to be used as efficiently as possible so that the numerical algorithms can be executed in minimal computation time. Hence, in the field of computational mechanics, various methods and algorithms can lead to an optimized runtime behavior of nonlinear simulation models; several such approaches are proposed and investigated in this thesis. Today, the numerical simulation of damage effects in heterogeneous materials is performed by adapting multiscale methods. A consistent modeling in three-dimensional space with an appropriate discretization resolution on each scale (based on a hierarchical or concurrent multiscale model), however, still poses computational challenges with respect to the convergence behavior, the scale transition, and the solver performance of the weakly coupled problems. The computational efficiency and the distribution of work among the available hardware resources (often based on a parallel hardware architecture) can be improved significantly. In recent years, high-performance computing (HPC) and graphics processing unit (GPU) based computation techniques have been established for the investigation of scientific problems. Their application results in the modification of existing computational methods and the development of new ones for the numerical implementation, which makes it possible to take advantage of massively clustered computer hardware resources. In the field of numerical simulation in materials science, e.g. in the investigation of damage effects in multiphase composites, the suitability of such models is often restricted by the number of degrees of freedom (d.o.f.s) of the three-dimensional spatial discretization. This proves to be difficult for the type of implementation method used for the nonlinear simulation procedure and, at the same time, has a great influence on memory demand and computation time. In this thesis, a hybrid discretization technique has been developed for the three-dimensional discretization of a three-phase material which respects the numerical efficiency of nonlinear (damage) simulations of these materials. The increase in computational efficiency is enabled by the improved scalability of the numerical algorithms. Consequently, substructuring methods for partitioning the hybrid mesh were implemented, tested, and adapted to the HPC computing framework, using several hundred CPU (central processing unit) nodes for building the finite element assembly. A memory-efficient, iterative, parallelized equation solver, combined with a special preconditioning technique for solving the underlying equation system, was modified and adapted to enable combined CPU- and GPU-based computations. Hence, the author recommends applying the substructuring method to hybrid meshes, which respects the different material phases and their mechanical behavior and makes it possible to split the structure into elastic and inelastic parts.
Because the consideration of the nonlinear material behavior, specified for the corresponding phase, is then limited to the inelastic domains only, the computing time for the nonlinear procedure decreases. Due to the high numerical effort of such simulations, an alternative approach to nonlinear finite element analysis, based on sequentially linear analysis, was implemented with scalable HPC in mind. The incremental-iterative procedure of the nonlinear step in finite element analysis (FEA) was replaced by a sequence of linear FE analyses whenever damage occurred in critical regions, an approach known in the literature as the saw-tooth approach. As a result, qualitative (smeared) crack initiation in 3D multiphase specimens has been simulated efficiently.
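    A minimal sketch of the saw-tooth (sequentially linear) idea mentioned above, on a deliberately simple model of parallel springs; the stiffnesses, strengths and reduction factor are hypothetical, and the sketch ignores the 3D finite element setting of the thesis.

```python
# Saw-tooth / sequentially linear analysis: the nonlinear damage analysis is
# replaced by a sequence of linear analyses, each followed by weakening the
# single most critical element. All numbers below are illustrative.
import numpy as np

k = np.full(10, 100.0)                   # stiffness of 10 parallel springs
strength = np.linspace(5.0, 14.0, 10)    # element strengths (hypothetical)
F = 60.0                                 # applied load on the rigid plate
reduction = 0.5                          # saw-tooth stiffness reduction factor

for step in range(100):
    u = F / k.sum()                      # linear analysis for current stiffnesses
    forces = k * u                       # element forces from the linear solve
    ratio = forces / strength            # utilization of each element
    crit = int(np.argmax(ratio))
    if ratio[crit] <= 1.0:               # no element exceeds its strength: stop
        break
    k[crit] *= reduction                 # weaken only the critical element ("tooth")
    print(f"step {step}: element {crit} damaged, plate displacement u = {u:.4f}")
```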

    New strategies for the aerodynamic design optimization of aeronautical configurations through soft-computing techniques

    Extraordinary Doctoral Award (Premio Extraordinario de Doctorado) of the UAH, 2013. Thesis co-director: Carlos Lozano Rodríguez. This thesis deals with improving the optimization process in the aerodynamic design of aeronautical configurations. Nowadays, this topic is of great importance in order to allow the European aeronautical industry to reduce development and operational costs, decrease the time-to-market for new aircraft, improve the quality of its products, and therefore maintain its competitiveness. Within this thesis, the state of the art of aerodynamic optimization tools has been surveyed, and several contributions have been proposed at different levels:
    - One of the main drawbacks for an industrial application of aerodynamic optimization tools is the huge requirement of computational resources; in particular, for complex optimization problems, current methodological approaches would need more than a year to obtain an optimized aircraft. For this reason, one contribution of this work focuses on reducing the computational cost through techniques such as surrogate modelling and control theory, as well as more software-related techniques such as code optimization and proper domain parallelization, all with the goal of decreasing the cost of the aerodynamic design process.
    - Another contribution is related to treating the design process as a global optimization problem and, more specifically, to the use of evolutionary algorithms (EAs) to perform a preliminary broad exploration of the design space, owing to their ability to find global optima. For this purpose, EAs have been hybridized with metamodels (surrogate models) in order to substitute expensive CFD simulations. In this thesis, an innovative approach for the global aerodynamic optimization of aeronautical configurations is proposed, consisting of an Evolutionary Programming algorithm hybridized with a Support Vector regression (SVMr) metamodel; a sketch of this hybridization appears after this list. Specific issues such as precision, training dataset size, sensitivity to the geometry parameterization, and design-of-experiments techniques are discussed, and the potential of the proposed approach to achieve innovative shapes that would not be reached with traditional methods is assessed.
    - Finally, after a broad exploration of the design space, the optimization process is continued with local gradient-based optimization techniques for a finer improvement of the geometry. Here, an automated optimization framework is presented to address aerodynamic shape design problems. Key aspects of this framework include the use of the adjoint methodology to make the computational requirements independent of the number of design variables, and Computer Aided Design (CAD)-based shape parameterization, which uses the flexibility of Non-Uniform Rational B-Splines (NURBS) to handle complex configurations. The proposed approach is applied to the optimization of several test cases, and the improvements achieved by the proposed strategy and its ability to produce efficient shapes complete this study.
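    A minimal sketch of the EA/metamodel hybridization referred to above, assuming scikit-learn's SVR as the surrogate; the stand-in objective function, design-space bounds and all algorithmic parameters are illustrative assumptions and do not correspond to the CFD-based setup of the thesis.

```python
# Evolutionary search with an SVR surrogate standing in for expensive CFD runs.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
dim = 6                                    # design variables (e.g. shape params)

def expensive_cfd(x):
    # Placeholder for a CFD evaluation of drag for design vector x.
    return float(np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum())

# Initial design of experiments, evaluated with the expensive model.
X = rng.uniform(0, 1, size=(20, dim))
y = np.array([expensive_cfd(x) for x in X])

pop = rng.uniform(0, 1, size=(30, dim))
for gen in range(20):
    surrogate = SVR(C=10.0, epsilon=0.01).fit(X, y)      # retrain metamodel
    children = np.clip(pop + rng.normal(0, 0.1, pop.shape), 0, 1)  # mutation
    pred = surrogate.predict(children)                   # cheap pre-screening
    best = children[np.argsort(pred)[:5]]                # most promising designs
    true_vals = np.array([expensive_cfd(x) for x in best])  # verify with "CFD"
    X = np.vstack([X, best])
    y = np.concatenate([y, true_vals])
    pop = np.vstack([best, pop[: len(pop) - len(best)]])  # survivor selection

print("best drag found:", y.min())
```

    The surrogate filters the mutated candidates so that only the few most promising ones are passed to the expensive evaluator, which is the cost-reduction mechanism the hybridization relies on.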
