
    Performance Improvement in Kernels by Guiding Compiler Auto-Vectorization Heuristics

    Vectorization support in hardware continues to expand and grow even as we continue to rely on superscalar architectures. Unfortunately, compilers are not always able to generate optimal code for the hardware; detecting and generating vectorized code is extremely complex. Programmers can use a number of tools to aid in development and tuning, but most of these tools require expert or domain-specific knowledge to use. In this work we aim to provide techniques for determining the best way to optimize certain codes, with an end goal of guiding the compiler to generate optimized code without requiring expert knowledge from the developer. Initially, we study how to combine vectorization reports with iterative compilation and code generation, and we summarize our insights and patterns on how the compiler vectorizes code. Our utilities for iterative compilation and code generation can further be used by non-experts in the generation and analysis of programs. Finally, we leverage the obtained knowledge to design a Support Vector Machine classifier that predicts the speedup of a program given a sequence of optimizations, with 82% of its predictions accurate to within 15% in either direction (over- or under-prediction)
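    The flavor of this guidance can be illustrated with a minimal C sketch (ours, not the thesis's code): a saxpy-style loop the compiler must treat conservatively because the pointers may alias, next to a restrict-qualified variant that removes the obstruction. GCC's -fopt-info-vec family of flags produces vectorization reports of the kind the work feeds into iterative compilation.

```c
/* saxpy.c -- a hedged sketch of guiding GCC's auto-vectorizer.
 * Compare the vectorization reports with, e.g.:
 *   gcc -O3 -fopt-info-vec-all -c saxpy.c
 */
#include <stddef.h>

/* Possible aliasing between x and y: GCC must either emit a
 * runtime overlap check (loop versioning) or skip vectorization. */
void saxpy_plain(size_t n, float a, const float *x, float *y)
{
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

/* 'restrict' asserts the arrays do not overlap, so the report
 * should show the loop vectorized without a versioning check. */
void saxpy_restrict(size_t n, float a,
                    const float *restrict x, float *restrict y)
{
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```

    Comparing the reports for such variants across many flag combinations is exactly the kind of loop an iterative-compilation driver automates.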

    10191 Abstracts Collection -- Program Composition and Optimization: Autotuning, Scheduling, Metaprogramming and Beyond

    From May 9 to 12, 2010, the Dagstuhl Seminar 10191 "Program Composition and Optimization: Autotuning, Scheduling, Metaprogramming and Beyond" was held in Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    User-directed Vectorization in OmpSs

    In the recent shift to the multi-core and many-core era, where systems tend to be heterogeneous even at chip level, SIMD instruction sets and accelerators that exploit parallelism in a similar way are coming into prominence in new multiprocessors and systems. This heterogeneity poses serious challenges for compilers and parallel programming models, which must maximize the profitability of the computational resources in an easy, generic, efficient and portable fashion. Although a lot of work on automatic vectorization/simdization techniques has been done over the years, compilers show important limitations when vectorizing code with pointers and function calls, because of traditional limitations in compiler analyses such as pointer aliasing analysis. Concerning parallel programming models, some are restricted to specific architectures, while portable ones such as OpenCL require programmers to face low-level architectural details and hard source-code transformations, and exhibit important performance differences across architectures, which demand new tuning efforts. In an attempt to offer a unified and generic solution to the auto-vectorization/simdization and portability problems, we propose User-directed Vectorization in OmpSs, a high-level programming model extension that lets developers easily guide the compiler in the vectorization process simply by introducing annotations on the vectorizable regions of the code, such as loops and functions. We focused our design, implementation and evaluation on the Intel SSE instruction set for CPUs, achieving the same or higher speed-ups than GCC auto-vectorization on easily vectorizable codes, and a performance improvement of up to 2.30x on more complex codes where GCC is unable to apply auto-vectorization and where a hand-coded OpenCL version reaches a speed-up of 2.23x
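    The abstract does not reproduce the exact OmpSs annotation syntax, so the sketch below conveys the idea with the analogous OpenMP 4.0 directives (#pragma omp simd and #pragma omp declare simd), which likewise mark vectorizable loops and functions; the kernel and names are illustrative assumptions, not the paper's code.

```c
/* simd_demo.c -- sketch of user-directed vectorization via annotations.
 * Build with GCC's SIMD-directive support: gcc -O3 -fopenmp-simd -c simd_demo.c
 */
#include <stddef.h>
#include <math.h>

/* Ask the compiler to generate a vector version of this function,
 * so calls to it inside vectorized loops do not block simdization. */
#pragma omp declare simd
static float gauss(float x, float mu, float sigma)
{
    float d = (x - mu) / sigma;
    return expf(-0.5f * d * d);
}

/* The annotation asserts the loop is safe to vectorize even though
 * it contains a function call and pointer accesses the compiler
 * could not prove independent on its own. */
void eval(size_t n, const float *in, float *out, float mu, float sigma)
{
    #pragma omp simd
    for (size_t i = 0; i < n; ++i)
        out[i] = gauss(in[i], mu, sigma);
}
```

    Annotating the function as well as the loop is what lets the compiler vectorize across the call, exactly the case where traditional auto-vectorization gives up.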

    Parallel algorithms and efficient implementation techniques for finite element approximations

    In this thesis we study the efficient implementation of the finite element method for the numerical solution of partial differential equations (PDE) on modern parallel computer architectures, such as Cray and IBM supercomputers. The domain-decomposition (DD) method represents the basis of parallel finite element software and is generally implemented such that the number of subdomains is equal to the number of MPI processes. We are interested in breaking this paradigm by introducing a second level of parallelism: each subdomain is assigned to more than one processor, and either MPI processes or multiple threads are used to implement the parallelism on the second level. The thesis is devoted to the study of this second level of parallelism and includes the stages described below.

    The algebraic additive Schwarz (AAS) domain-decomposition preconditioner is an integral part of the solution process. We seek to understand its performance on the parallel computers which we target, and we introduce an improved construction approach for the parallel preconditioner. We examine a novel strategy for solving the AAS subdomain problems using multiple MPI processes. At the subdomain level, this is represented by the ShyLU preconditioner, whose algorithm we improve with a novel inexact solver based on an incomplete QR (IQR) factorization. The performance of the new preconditioner framework is studied for Laplacian and advection-diffusion-reaction (ADR) problems and for Navier-Stokes problems, as a component within a larger framework of specialized preconditioners.

    The partitioning of the computational mesh comes with considerable memory limitations when done at runtime on parallel computers, due to the low amount of memory available per processor. We describe and implement a solution to this problem, based on offloading the partitioning to a preliminary offline stage of the simulation. We also present the efficient implementation, based on parallel MPI collective operations, of the routines which load the mesh parts during the simulation.

    We discuss an alternative parallel implementation of the finite element system assembly based on multi-threading. This new approach supplements the existing one based on MPI parallelism in situations where MPI alone cannot make use of all the available parallel hardware resources.

    The work presented in this thesis has been done in the framework of two software projects: the Trilinos project and the LifeV parallel finite element modeling library. All the new developments have been contributed back to the respective projects, to be used freely in subsequent public releases of the software
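    As a minimal sketch of the two-level scheme, assuming hypothetical names and a toy per-element computation (this is not LifeV or Trilinos code), the skeleton below gives each subdomain to one MPI process and uses OpenMP threads as the second level inside the local assembly loop.

```c
/* hybrid.c -- hedged skeleton of two-level parallelism:
 * MPI processes own subdomains; OpenMP threads add a second
 * level inside each subdomain (here, local assembly).
 * Build: mpicc -O2 -fopenmp hybrid.c -o hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N_LOCAL_ELEMS 1000  /* elements in this subdomain (hypothetical) */

/* Stand-in for real element integration and scatter. */
static void assemble_element(int e, double *local_sum)
{
    *local_sum += 1.0 / (e + 1);
}

int main(int argc, char **argv)
{
    int provided, rank, size;
    /* FUNNELED: threads compute, only the main thread calls MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = 0.0;
    /* Second level: threads share the subdomain's element loop.
     * A real assembly would need per-thread scatter buffers or
     * mesh coloring to make concurrent global writes safe; the
     * reduction stands in for that here. */
    #pragma omp parallel for reduction(+:local)
    for (int e = 0; e < N_LOCAL_ELEMS; ++e)
        assemble_element(e, &local);

    double global = 0.0;
    /* First level: subdomain contributions combined across ranks
     * with an MPI collective. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d threads=%d global=%f\n",
               size, omp_get_max_threads(), global);
    MPI_Finalize();
    return 0;
}
```

    Requesting MPI_THREAD_FUNNELED keeps all MPI calls on the main thread, the usual choice when threads only share compute inside a rank.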